patent_id (string, 7-8 chars) | description (string, 125-2.47M chars) | length (int64, 125-2.47M) |
---|---|---|
11861826 | DESCRIPTION OF EMBODIMENTS Hereinafter, an embodiment of an imaging mass spectrometer including an imaging data processing device according to the present invention will be described with reference to the accompanying drawings. FIG.1is a schematic block diagram of an imaging mass spectrometer according to the present embodiment. The imaging mass spectrometer of the present embodiment includes an imaging mass spectrometry unit1for performing a measurement on a sample by mass spectrometric imaging, an optical microscopic image acquiring unit2for taking an optical microscopic image on the sample, a data processing unit3, and an input unit4and a display unit5which are user interfaces. The imaging mass spectrometry unit1includes, for example, a matrix-assisted laser desorption/ionization (MALDI) ion trap time-of-flight mass spectrometer, and performs mass spectrometry on many minute areas (measurement points) in a two-dimensional measurement area on a sample such as a piece of a biological tissue to acquire mass spectrometric data for each measurement point. The optical microscopic image acquiring unit2is formed by adding an image acquiring unit to an optical microscope and acquires a microscopic image of a two-dimensional area of the surface on a sample. Here, the optical microscopic image acquiring unit2is used to acquire an optical microscopic image, which is used for determining a measurement area when a measurement is performed by the mass spectrometric imaging, and to take a stained image of a stained sample. The data processing unit3receives the mass spectral data in each minute area collected by the image mass spectrometry unit1and the optical microscopic image data input from the optical microscopic image acquiring unit2and performs predetermined processing. The data processing unit3includes, as functional blocks, a data collector31, a data storage section32, an image creator33, an optical image creator34, an image superimposition processor35, and the like. The data storage section32includes a spectral data storage area321for storing data collected by measurement by the imaging mass spectrometry unit1, and an optical image data storage area322for storing data collected by measurement (imaging) by the optical microscopic image acquiring unit2. The image superimposition processor35includes, as lower functional blocks, functional blocks such as an image display processor351, an image deformation range specification receiving section352, a grid spacing adjustment receiving section353, and an image deformation processor354. In general, the data processor3is in fact a personal computer (or a higher-performance workstation), and is configured to execute a function of each of the blocks by operating dedicated software installed in the computer on the computer. In that case, the input unit4is a pointing device such as a keyboard or a mouse, and the display unit5is a display monitor. Next, the measurement work for the sample by the imaging mass spectrometer of the present embodiment will be described. First, when an operator sets a target sample at a predetermined measurement position of the optical microscopic image acquiring unit2and performs a predetermined operation with the input unit4, the optical microscopic image acquiring unit2takes an image of the surface of the sample and displays the image on the screen of the display unit5. The operator (user) instructs the whole sample or a measurement area, which is a part of the sample, on the image by using an input unit4. 
The operator takes out a sample once and attaches a matrix for MALDI to the surface of the sample. Then, the operator sets the sample with the matrix attached at a predetermined measurement position in the imaging mass spectrometry section1, and performs a predetermined operation using the input unit4. Then, the operator sets the sample100with the matrix attached at a predetermined measurement position in the imaging mass spectrometry section1, and performs a predetermined operation using the input unit4. This allows the imaging mass spectrometry section1to acquire mass spectrometry data over a predetermined mass-to-charge ratio range by performing mass spectrometry on each of the many micro areas in the measurement area indicated as described above on the sample. At this time, the data collector31performs so-called profile acquisition, collects profile spectral data, which is a waveform continuous in the direction of the mass-to-charge ratio within the mass-to-charge ratio range, and stores the collected data into the spectral data storage area321of the data storage section32. When a pattern on a sample surface (borders of different tissues, etc.) can be observed relatively clearly even with the matrix attached to the sample surface, the optical microscopic image acquiring unit2may capture an image after the matrix is preliminarily attached to the sample surface. After the measurement by the mass spectrometric imaging, the operator takes out the sample and removes the matrix attached to the sample surface with a solvent. Then, the sample is stained with a predetermined staining reagent, and the stained sample is set again at a predetermined measurement position of the optical microscopic image acquiring unit2. When the operator performs a predetermined operation with the input unit4, the optical microscopic image acquiring unit2takes an image of the surface of the sample, and the data collector31stores the stained image data obtained by the imaging into the optical image data storage area322of the data storage section32. Thus, the mass spectrometric imaging data and the stained image data for the same sample are stored into the data storage section32. Next, with reference toFIGS.2to4A-4B, a description will be given of image superimposition work performed in a state where the above-mentioned data is stored and image deformation processing performed at the time of the image superimposition work.FIG.2is a flowchart showing the procedure for the image superimposition work,FIG.3is a view showing an example of a display screen at the time of the image superimposition work, andFIGS.4A-4Bare explanatory views of the image deformation processing. When the operator performs a predetermined operation with the input unit4, the optical image creator34reads out the stained image data from the optical image data storage area322of the data storage section32and creates a stained image for the sample on the basis of the data (step S1). When the operator specifies a compound having a two-dimensional distribution desired to be confirmed with an input unit4, the image creator33reads out signal intensity value data at a mass-to-charge ratio M corresponding to the specified compound from the spectral data storage area321of the data storage section32and creates a mass spectrometric image at the mass-to-charge ratio M for the sample on the basis of the data (step S2). The image display processor351displays an image superimposition work screen60, as shown inFIG.3, on the screen of the display unit5(step S3). 
The image superimposition work screen60is provided with an image display area61in which superimposed images, obtained by superimposing a stained image and a mass spectrometric image for the same sample, are disposed. The superimposed images displayed at this time in the image display area61are simply superimposed images obtained by making one of the two images translucent, and no image alignment has been performed. In the device of the present embodiment, image alignment by linear image deformation such as affine deformation is also possible, but here, nonlinear image deformation is performed. In the present invention, the stained image out of the two images is deformed, but the operator may be enabled to select an image that is an image deformation target. When the operator performs a predetermined operation with the input unit4to instruct nonlinear image deformation to be performed, the image display processor351displays grid lines62, as shown inFIG.3, on the entire surface of the superimposed images displayed in the image display area61(step S4). Here, an intersection62aof the vertical and horizontal grid lines62is the grid point in the present invention. However, the grid point may be indicated using a cross shape instead of the grid lines, or the grid point may be indicated only by a simple dot. Further, the grid line62to be displayed may be different from a simple solid line, such as a dotted line, or the color of the grid line62may be made changeable by the operator as appropriate. A grid spacing adjustment slider63is disposed in the image superimposition work screen60, and when the operator performs an operation of moving a knob of the slider63with the input unit4, the grid spacing adjustment receiving section353adjusts the spacing between grid lines62displayed on the superimposed images in the image display area61in accordance with the operation (step S5). When the operator depresses the image deformation range “SET” button64with the input unit4, the image deformation range specification receiving section352is activated to make it possible to specify a desired range on the superimposed images as the image deformation range by using the pointing device. Here, the shape of the image deformation range that can be specified is rectangular, and the size of the range is arbitrary. This range can be set irrespective of the grid lines62. FIG.4Bshows an example in which an image deformation range is specified in the image display area61. The superimposed images are displayed in practice, but is omitted here. Here, the image deformation range can be specified so as to include many (or one) rectangular block(s) surrounded by four adjacent intersections (grid points)62aalong the grid lines62. It is also possible to specify a plurality of image deformation ranges at once. When the image deformation range is determined, the operator depresses an image deformation range “SELECT” button65with the input unit4, whereby the image deformation range specification receiving section352determines the image deformation range set on the image at that time. Here, the image deformation range has the rectangular shape, but it may be possible to form the image deformation range into an arbitrary shape, for example, by moving a cursor with the pointing device and taking the range surrounded by the locus of the cursor as the image deformation range. 
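To make the relationship between the displayed grid and the separately specified image deformation range concrete, the following short Python sketch represents a user-drawn rectangular range as a pixel mask chosen independently of the grid-line spacing. This is an illustrative sketch only; the function names, the (top, left, bottom, right) rectangle convention, and the use of NumPy are assumptions and are not part of the patent.

```python
import numpy as np

def grid_intersections(image_shape, spacing):
    """Coordinates of the grid points (intersections of vertical and horizontal grid lines)."""
    rows = np.arange(0, image_shape[0], spacing)
    cols = np.arange(0, image_shape[1], spacing)
    return rows, cols

def deformation_mask(image_shape, rect):
    """Boolean mask of a rectangular image deformation range.

    `rect` = (top, left, bottom, right) in pixels; it may be drawn freely,
    i.e. without regard to the grid-line spacing used for the control points.
    """
    top, left, bottom, right = rect
    mask = np.zeros(image_shape, dtype=bool)
    mask[top:bottom, left:right] = True
    return mask

# Example: a 600 x 800 display area, grid lines every 50 px,
# and a deformation range that ignores the grid spacing.
rows, cols = grid_intersections((600, 800), spacing=50)
mask = deformation_mask((600, 800), rect=(120, 230, 380, 610))
```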
Next, the operator selects one of the intersections62aof the grid lines62on the image in the image display area61, that is, one grid point, as a control point by clicking with the pointing device, and then performs an operation of dragging and dropping the control point in an arbitrary direction and to an arbitrary position (step S7). The image deformation processor354accepts this operation, and nonlinearly deforms the stained image within the image deformation range according to a predetermined algorithm when the control point is within the image deformation range. On the other hand, when the selected control point is out of the image deformation range, as shown inFIG.4A, the image deformation processor354sets the four rectangular blocks surrounding the control point as the image deformation range and nonlinearly deforms the stained image within the image deformation range according to the predetermined algorithm (step S8). That is, in the latter case, the image deformation range is automatically determined in accordance with the spacing between the grid lines as in the conventional device, whereas in the former case, the image deformation range can be arbitrarily determined by the operator. Then, the operator confirms whether the position of the stained image after the deformation and the position of the mass spectrometric image are matched on the displayed image (step S9) and returns from step S9to step S5when image deformation is required. Then, by repeating steps S5to S9, the accuracy of the alignment between the stained image and the mass spectrometric image is gradually increased, and when the operator determines that the deviation has reached an acceptable level, the processing proceeds from step S9to step S10, and the operator depresses the “SAVE” button66with the input unit4. Thus, the image display processor351stores the data constituting the superimposed images at that time into the data storage section32. As the algorithm for the image deformation described above, a well-known method disclosed in various documents such as Non Patent Literature 2 may be used, and only the image deformation range is different betweenFIGS.4A and4B. In the case ofFIG.4A, with the image deformation range being limited by the grid-line spacing, the grid-line spacing needs to be widened when the image is desired to be deformed greatly. Then, the control points cannot be set finely. On the other hand, for setting the control points finely, the grid-line spacing needs to be narrowed, and then, the range in which the image is deformed is limited considerably. In contrast, in the case ofFIG.4B, the grid-line spacing and the image deformation range are not related to each other, whereby the image deformation range can be widened to deform a wide range on the image at once, while the grid-line spacing can be narrowed so as to set the control points finely. Also, when there is a portion on the image, which is not desired to be deformed, at a position relatively close to the control point, the image deformation range can be set so as to exclude the portion. Thus, the image alignment can be efficiently performed such that the same site on the sample is at the same position on the two images that are superimposed. In the above embodiment, it has been possible to set the image deformation range irrespective of the spacing between the grid lines that are for setting the control point on the image, but as shown inFIG.3, the spacing between the grid lines displayed on the superimposed images is constant. 
In contrast, as in the embodiment described below, when the spacing between the grid lines displayed on the superimposed images is not one type but a plurality of types of grid-line spacing can be mixed, effects similar to those of the above embodiment can be obtained. FIGS.5A-5Bare explanatory views of image deformation processing in an imaging mass spectrometer according to another embodiment of the present invention. FIGS.5A-5Bare views showing grid lines62displayed on superimposed images (not shown) displayed in the image display area61as inFIGS.4A-4B. In the imaging mass spectrometer of the present embodiment, first, as shown inFIG.5A, the operator sets the grid lines62having relatively coarse grid-line spacing and thereafter specifies, as dense grid point range, one or more blocks having a rectangular shape (or any two-dimensional shape connecting a plurality of intersections62a) formed by the grid lines62. Then, the operator further sets a dense grid-line spacing within the dense grid point range. As in the above embodiment, since the intersection62aof the vertical and horizontal grid lines62is a grid point, setting the grid-line spacing is substantially the same as setting the grid-point spacing. That is, in the present embodiment, it is possible to set the grid-point spacing in two stages of being coarse and dense. Thus, as shown inFIG.5B, grid lines having two types of grid-line spacing are displayed in a mixed state in the image display area61. The operator selects an arbitrary intersection62aon the grid line62, that is, a grid point, as a control point, and then performs an operation of dragging and dropping the control point in an arbitrary direction and to an arbitrary position. Here, as shown inFIG.4A, the image deformation range is a range of four adjacent blocks. Therefore, the image deformation range is wide in an area where the grid-line spacing is wide, and the image deformation range is narrow in an area where the grid-line spacing is narrow. Therefore, by appropriately determining the coarse and dense grid point ranges and the grid-line spacing in each of the ranges in accordance with the desired amount and range of deformation on the image, the work efficiency of image alignment can be improved compared to the conventional device. Although the imaging mass spectrometer of the above embodiment has performed the characteristic image deformation processing as described above in the superimposition of an optical microscopic image such as a stained image and a mass spectrometric image, it is clear that the present invention can also be applied in the superimposition of a mass spectrometric image and an image for the same sample, the image being obtained by other measurement, for example, Raman spectroscopic imaging, infrared spectroscopic imaging, X-ray analytical imaging, surface analytical imaging using a particle beam such as an electron beam or an ion beam, or an image obtained by surface analytical imaging using a probe such as a scanning probe microscope (SPM). The present invention is not limited to an imaging mass spectrometer but is also effective in the superimposition of different images obtained for the same sample by using various measurement methods as described above. Note that the “same sample” here is not necessarily the same sample. For example, even different samples may be treated as substantially the same sample so long as the samples are adjacent piece samples in continuous piece samples formed by slicing a biological tissue into very thin pieces. 
In such a case, it is sufficiently useful to apply the present invention in the superimposition of the images respectively obtained for different samples that can be considered as the same sample. Further, the above embodiment is merely an example of the present invention, and modifications, corrections, and additions made as appropriate within the scope of the gist of the present invention, in addition to the various modifications described above, are naturally included in the scope of the claims of the present invention. REFERENCE SIGNS LIST: 1 . . . Imaging Mass Spectrometry Unit; 2 . . . Optical Microscopic Image Acquiring Unit; 3 . . . Data Processing Unit; 31 . . . Data Collector; 32 . . . Data Storage Section; 321 . . . Spectral Data Storage Area; 322 . . . Optical Image Data Storage Area; 33 . . . Image Creator; 34 . . . Optical Image Creator; 35 . . . Image Superimposition Processor; 351 . . . Image Display Processor; 352 . . . Image Deformation Range Specification Receiving Section; 353 . . . Grid Spacing Adjustment Receiving Section; 354 . . . Image Deformation Processor; 4 . . . Input Unit; 5 . . . Display Unit; 60 . . . Image Superimposition Work Screen; 61 . . . Image Display Area; 62 . . . Grid; 63 . . . Grid Spacing Adjustment Slider; 64 . . . Image Deformation Range “SET” Button; 65 . . . Image Deformation Range “SELECT” Button; 66 . . . “SAVE” Button | 18,939 |
11861827 | DETAILED DESCRIPTION FIG.1illustrates an example neural network1, in accordance with one or more embodiments of the present disclosure. Alternative terms for “artificial neural network” include “neural network”, “artificial neural net” or “neural net”. The artificial neural network1comprises nodes6-18and edges19-21, wherein each edge19-21is a directed connection from a first node6-18to a second node6-18. In general, the first node6-18and the second node6-18are different nodes6-18, it is also possible that the first node6-18and the second node6-18are identical. For example, inFIG.1the edge19is a directed connection from the node6to the node9, and the edge20is a directed connection from the node7to the node9. An edge19-21from a first node6-18to a second node6-18is also denoted as “ingoing edge” for the second node6-18and as “outgoing edge” for the first node6-18. In this embodiment, the nodes6-18of the artificial neural network1can be arranged in layers2-5, wherein the layers2-5can comprise an intrinsic order introduced by the edges19-21between the nodes6-18. For instance, edges19-21may exist only between neighboring layers of nodes6-18. In the displayed embodiment, there is an input layer2comprising only nodes6-8without an incoming edge, an output layer5comprising only nodes17,18without outgoing edges, and hidden layers3,4in-between the input layer2and the output layer5. In general, the number of hidden layers3,4can be chosen arbitrarily. The number of nodes6-8within the input layer2usually relates to the number of input values of the neural network, and the number of nodes17,18within the output layer5usually relates to the number of output values of the neural network. For example, a (real) number can be assigned as a value to every node6-18of the neural network1. Here, x(n)idenotes the value of the i-th node6-18of the n-th layer2-5. The values of the nodes6-8of the input layer2are equivalent to the input values of the neural network1, the values of the nodes17,18of the output layer5are equivalent to the output value of the neural network1. Furthermore, each edge19-21can comprise a weight being a real number, e.g. the weight is a real number within the interval [−1, 1] or within the interval [0, 1]. Here, w(m,n)i,jdenotes the weight of the edge between the i-th node6-18of the m-th layer2-5and the j-th node6-18of the n-th layer2-5. Furthermore, the abbreviation w(n)i,jis defined for the weight w(n,n+1)i,j. In an embodiment, to calculate the output values of the neural network1, the input values are propagated through the neural network1. As an example, the values of the nodes6-18of the (n+1)-th layer2-5can be calculated based on the values of the nodes6-18of the n-th layer2-5by Equation 1 below as follows: xj(n+1)=f(Σixi(n)·wi,j(n)). Eqn. 1: Herein, the function f is a transfer function (another term is “activation function”). Known transfer functions are step functions, sigmoid functions (e.g. the logistic function, the generalized logistic function, the hyperbolic tangent, the arctangent function, the error function, the smoothstep function) or rectifier functions. The transfer function is mainly used for normalization purposes. 
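As an informal illustration of Equation 1, the layer-wise propagation can be written in a few lines of NumPy. This sketch is not taken from the patent; the function names, the logistic transfer function, and the random example weights are assumptions.

```python
import numpy as np

def sigmoid(z):
    """Logistic transfer (activation) function."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, f=sigmoid):
    """Propagate input values layer by layer: x_j^(n+1) = f(sum_i x_i^(n) * w_ij^(n))."""
    values = [np.asarray(x, dtype=float)]
    for W in weights:          # weights[n][i, j] plays the role of w(n)_{i,j}
        values.append(f(values[-1] @ W))
    return values              # values[0] is the input layer, values[-1] the output layer

# Example: 3 input nodes and 2 output nodes (loosely following FIG. 1), weights in [-1, 1].
rng = np.random.default_rng(0)
weights = [rng.uniform(-1, 1, size=(3, 4)), rng.uniform(-1, 1, size=(4, 2))]
outputs = forward([0.2, 0.5, 0.9], weights)[-1]
```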
For example, the values are propagated layer-wise through the neural network1, wherein values of the input layer2are given by the input of the neural network1, wherein values of the first hidden layer3can be calculated based on the values of the input layer2of the neural network1, wherein values of the second hidden layer4can be calculated based in the values of the first hidden layer3, etc. In order to set the values w(m,n)i,jfor the edges19-21, the neural network1has to be trained using training data. For instance, training data may comprise training input data and training output data (denoted as ti). For a training step, the neural network1is applied to the training input data to generate calculated output data. As an example, the training data and the calculated output data comprise a number of values, said number being equal to the number of nodes17,18of the output layer5. In an embodiment, a comparison between the calculated output data and the training data is used to recursively adapt the weights within the neural network1(backpropagation algorithm). For instance, the weights are changed according to Equation 2 below as follows: wi,j′(n)=wi,j(n)−γ·δj(n)·xi(n)Eqn. 2: wherein γ is a learning rate, and the numbers δ(n)jcan be recursively calculated using Equation 3 and 4 below as follows: δj(n)=(Σkδk(n+1)·wj,k(n+1))·f′(Σixi(n)·wi,j(n)) Eqn. 3: based on δ(n+1)j, if the (n+1)-th layer is not the output layer5, and δj(n)=(xk(n+1)−tj(n+1))·f′(Σixi(n)·wi,j(n)) Eqn. 4: if the (n+1)-th layer is the output layer5, wherein f′ is the first derivative of the activation function, and y(n+1)jis the comparison training value for the j-th node of the output layer5. In the following, with respect toFIG.2, an example of a convolutional neural network (CNN)22will be described. Attention is drawn to the fact that the term “layer” that the term “layer” is used in a slightly different way for classical neural networks and convolutional neural networks in the literature. For a classical neural network, the term “layer” refers only to the set of nodes forming a layer (a certain “generation” of nodes). For a convolutional neural network, the term “layer” is often (and in this description) used as an object that actively transforms data, in other words as a set of nodes of the same “generation” and either the set of incoming or the set of outgoing nodes. FIG.2illustrates an example convolutional neural network22, in accordance with one or more embodiments of the present disclosure. In this embodiment, the convolutional neural network22comprises an input layer23, a convolutional layer24, a pooling layer25, a fully connected layer26(also termed dense layer), and an output layer27. Alternatively, the convolutional neural network22can comprise several convolutional layers24, several pooling layers25, and several fully connected layers26as well as other types of layers. The order of the layers can be chosen arbitrarily. Usually, fully connected layers26are used as the last layers before the output layer27. As an example, within a convolutional neural network22the nodes28-32of one layer23-27can be considered to be arranged as a d-dimensional matrix or as a d-dimensional image. For instance, in the two-dimensional case the value of the node28-32indexed with i and j in the n-th layer23-27can be denoted as x(n)[i,j]. 
However, the arrangement of the nodes28-32of one layer23-27does not have an effect on the calculations executed within the convolutional neural network22as such, since these are given solely by the structure and the weights of the edges. As an example, a convolutional layer24is characterized by the structure and the weights of the incoming edges forming a convolution operation based on a certain number of kernels. For instance, the structure and the weights of the incoming edges are chosen such that the values x(n)kof the nodes29of the convolutional layer24are calculated as a convolution x(n)k=Kk*x(n−1)based on the values x(n−1)of the nodes28of the preceding layer23, where the convolution * is defined in the two-dimensional case as given in Equation 5 below as: xk(n)[i,j]=(Kk*x(n−1))[i,j]=Σi′Σj′Kk[i′,j′]·x(n−1)[i−i′,j−j′].Eqn. 5: Here, the k-th kernel Kkis a d-dimensional matrix (in this embodiment a two-dimensional matrix), which is usually small compared to the number of nodes28-32(e.g. a 3×3 matrix, or a 5×5 matrix). As an example, this implies that the weights of the incoming edges are not independent, but chosen such that they produce said convolution equation. For instance, for a kernel being a 3×3 matrix, there are only 9 independent weights (each entry of the kernel matrix corresponding to one independent weight), irrespectively of the number of nodes28-32in the respective layer23-27. For example, for a convolutional layer24, the number of nodes29in the convolutional layer24is equivalent to the number of nodes28in the preceding layer23multiplied with the number of kernels. If the nodes28of the preceding layer23are arranged as a d-dimensional matrix, using a plurality of kernels can be interpreted as adding a further dimension (denoted as “depth” dimension), so that the nodes29of the convolutional layer24are arranged as a (d+1)-dimensional matrix. If the nodes28of the preceding layer23are already arranged as a (d+1)-dimensional matrix comprising a depth dimension, using a plurality of kernels can be interpreted as expanding along the depth dimension, so that the nodes29of the convolutional layer24are arranged also as a (d+1)-dimensional matrix, wherein the size of the (d+1)-dimensional matrix with respect to the depth dimension is by a factor of the number of kernels larger than in the preceding layer23. The advantage of using convolutional layers24is that spatially local correlation of the input data can exploited by enforcing a local connectivity pattern between nodes of adjacent layers, e.g. by each node being connected to only a small region of the nodes of the preceding layer. In the present embodiment, the input layer23comprises 36 nodes28, arranged as a two-dimensional 6×6 matrix. The convolutional layer24comprises 72 nodes29, arranged as two two-dimensional 6×6 matrices, each of the two matrices being the result of a convolution of the values of the input layer23with a kernel. Equivalently, the nodes29of the convolutional layer24can be interpreted as arranged in a three-dimensional 6×6×2 matrix, wherein the last dimension is the depth dimension. A pooling layer25can be characterized by the structure and the weights of the incoming edges and the activation function of its nodes30forming a pooling operation based on a non-linear pooling function f. For example, in the two-dimensional case the values x(n)of the nodes30of the pooling layer25can be calculated based on the values x(n−1)of the nodes29of the preceding layer24in Equation 6 as: x(n)[i,j]=f(x(n−1)[id1,jd2], . . . 
,x(n−1)[id1+d1−1,jd2+d2−1]) Eqn. 6: In other words, by using a pooling layer25, the number of nodes29,30can be reduced by replacing a number d1·d2of neighboring nodes29in the preceding layer24with a single node30being calculated as a function of the values of said number of neighboring nodes29in the preceding layer24. For instance, the pooling function f can be the max-function, the average or the L2-Norm. For example, for a pooling layer25the weights of the incoming edges are fixed and are not modified by training. The advantage of using a pooling layer25is that the number of nodes29,30and the number of parameters is reduced. This leads to the amount of computation in the network22being reduced and to a control of overfitting. In the present embodiment, the pooling layer25is a max-pooling layer, replacing four neighboring nodes29with only one node30, the value being the maximum of the values of the four neighboring nodes29. The max-pooling is applied to each d-dimensional matrix of the previous layer24; in this embodiment, the max-pooling is applied to each of the two two-dimensional matrices, reducing the number of nodes29,30from 72 to 18. A fully-connected layer26can be characterized by the fact that a majority, e.g. all, edges between nodes30of the previous layer25and the nodes31of the fully-connected layer36are present, and wherein the weight of each of the edges can be adjusted individually. In this embodiment, the nodes30of the preceding layer25of the fully connected layer26are displayed both as two-dimensional matrices, and additionally as non-related nodes30(indicated as a line of nodes30, wherein the number of nodes30was reduced for a better presentability). In this embodiment, the number of nodes31in the fully connected layer26is equal to the number of nodes30in the preceding layer25. Alternatively, the number of nodes30,31can differ. Furthermore, in this embodiment the values of the nodes32of the output layer27are determined by applying the Softmax function onto the values of the nodes31of the preceding layer26. By applying the Softmax function, the sum of the values of all nodes32of the output layer27is 1, and all values of all nodes32of the output layer27are real numbers between 0 and 1. As an example, if using the convolutional neural network22for categorizing input data, the values of the output layer27can be interpreted as the probability of the input data falling into one of the different categories. A convolutional neural network22can also comprise a ReLU (acronym for “rectified linear units”) layer. For instance, the number of nodes and the structure of the nodes contained in a ReLU layer is equivalent to the number of nodes and the structure of the nodes contained in the preceding layer. As an example, the value of each node in the ReLU layer is calculated by applying a rectifying function to the value of the corresponding node of the preceding layer. Examples for rectifying functions are f(x)=max(0,x), the tangent hyperbolics function or the sigmoid function. As an example, convolutional neural networks22can be trained based on the backpropagation algorithm. For preventing overfitting, methods of regularization can be used, e.g. dropout of nodes28-32, stochastic pooling, use of artificial data, weight decay based on the L1 or the L2 norm, or max norm constraints. FIG.3illustrates an example flowchart of a characterization method, in accordance with one or more embodiments of the present disclosure. 
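As a brief aside before the characterization method of FIG. 3 is described, the convolution of Equation 5 and the max-pooling of Equation 6 can be sketched directly in Python. The zero-padding convention outside the image, the function names, and the random example values are assumptions made for illustration only.

```python
import numpy as np

def conv2d(x, kernel):
    """Two-dimensional convolution as in Equation 5 (values outside the image treated as zero)."""
    h, w = x.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            s = 0.0
            for di in range(kernel.shape[0]):
                for dj in range(kernel.shape[1]):
                    ii, jj = i - di, j - dj
                    if 0 <= ii < h and 0 <= jj < w:
                        s += kernel[di, dj] * x[ii, jj]
            out[i, j] = s
    return out

def max_pool(x, d1=2, d2=2):
    """Max-pooling: each d1 x d2 block of neighboring nodes is replaced by its maximum."""
    h, w = x.shape
    return x[:h - h % d1, :w - w % d2].reshape(h // d1, d1, w // d2, d2).max(axis=(1, 3))

# Example mirroring FIG. 2: a 6x6 input, two 3x3 kernels (72 nodes), then 2x2 max-pooling (18 nodes).
rng = np.random.default_rng(0)
x = rng.random((6, 6))
feature_maps = [max_pool(conv2d(x, rng.random((3, 3)))) for _ in range(2)]
```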
The computer-implemented method uses magnetic resonance data of the liver of a patient to determine at least one tissue score characterizing the tissue, which may at a later stage be used for diagnostics and/or therapy by, for example, a physician. Two sorts of input data sets33are used as input data, namely at least one magnetic resonance image data set34, which is an anatomical image data set34showing the morphology of the anatomy, and at least one magnetic resonance parameter map35, which contains spatially resolved quantitative values of a certain parameter, e.g. from quantitative MRI. The at least one image data set34may be acquired using a Dixon technique for fat-water separation and/or may be proton density or relaxation time weighted. It is noted that, in embodiments, at least one of the at least one image data set34may be used to determine at least one of the at least one parameter map35. The at least one parameter map35may be or comprise a T1 map, a T2 map, a map of reciprocal values of relaxation times, e.g. R2* as a measure of iron overload, a proton density map describing fat fraction, and the like. Optional further input data sets33(not shown inFIG.3) may comprise functional magnetic resonance data sets, for example an apparent diffusion coefficient map (ADC maps) and/or an elastography map of liver stiffness. In at least one pre-processing step S1, the input data sets33may be registered to each other, to account for movements of the patient and/or different imaging regions and/or pauses in between the application of different magnetic resonance imaging techniques. The registration may be performed landmark-based and/or intensity based and/or affine and/or elastic. In an additional pre-processing step S1, a region of interest, e.g. containing the parenchymal liver tissue of interest, is segmented and/or detected. Known, for instance computer-implemented, segmentation and/or detection algorithms may be implemented. Pre-processing may further include standard image processing like intensity normalization, noise reduction and/or smoothing. In a step S2, the input data sets33are used as input data for a trained function which comprises a neural network, e.g. a deep convolutional neural network. Further input data may comprise additional scalar and/or vector information36, in this embodiment provided in an electronic health record37of the patient, which is accessible by the computation unit performing the pre-processing and executing the trained function. The additional information36may, for example, comprise demographic information, medical history information, and laboratory results. The trained function uses the input data, regarding the input data sets33constrained to the region of interest, to determine output data38, in this case comprising at least one tissue score39and optionally predictive outcome information40and uncertainty information41. The tissue scores39in this embodiment at least comprise the NAS and a fibrosis stage, but may also comprise further scores. The predictive outcome information40may comprise at least one risk score, for example the probability of a certain event or the success of a certain therapy. The uncertainty information41is determined by an uncertainty estimation subfunction using standard methods for uncertainty estimation, for example Bayesian deep learning models. The neural network of the trained function is trained by using training data sets from a training patient cohort, each training data set comprising input training data, e.g. 
the input data sets33and the additional information36, as well as output training data for the respective training patient, wherein the tissue scores39of the output training data are preferably taken from histopathological liver biopsy results of the respective training patients and the (optional) predictive outcome information40is derived from anonymized outcome data. FIG.4illustrates an example structure of a neural network42of a trained function, in accordance with one or more embodiments of the present disclosure. It is noted at this point that the topology shown is merely exemplary; a variety of other topologies may also be used. To the left inFIG.4, the input data43comprising the input data sets33, e.g. the image data sets34and the parameter maps35, and the additional information36, are indicated. Each input data set33in this embodiment is input into a dedicated convolutional partial neural network44, each comprising at least one convolutional layer45. The convolutional partial neural networks44independently extract relevant features46from the respective input data sets33. These features are concatenated, for example in a flattening layer, to a feature vector47. The convolutional partial neural networks44may, of course, also comprise further layers, for example pooling layers. As for the additional scalar and/or vector information36, these are also analyzed to extract relevant features using a dense partial neural network48having at least one fully connected layer49. The results are intermediate data50, which are also added to the feature vector47by concatenation. It is noted that the dense partial neural network48is optional; it is also conceivable to add the additional information36directly to the feature vector47. The convolutional partial neural networks44and the dense partial neural network48can be understood as feature extractors. The feature vector47is then fed through multiple fully connected layers51of a further dense partial neural network52, which then gives the output data38. It is noted that separate dense partial neural networks52may be provided for the at least one tissue score and the predictive outcome information, if this to be determined. FIG.5illustrates an example characterization system53, in accordance with one or more embodiments of the present disclosure. The characterization system53is configured to perform the characterization method according to the embodiments of the disclosure and thus comprises a first interface54for receiving the input data43, a computation unit55for analyzing the input data43, and a second interface56for providing the output data38. The computation unit55, which may comprise at least one processor (e.g. processing circuitry, a CPU, one or more processors, etc.) and/or at least one storage means, comprises the trained function57as described above for performing step S2and may additionally comprise a pre-processing sub-unit58to carry out step S1. FIG.6illustrates an example magnetic resonance imaging device59, in accordance with one or more embodiments of the present disclosure. The magnetic resonance imaging device59comprises, as known, a main magnet unit60housing the main field magnet and having a bore60into which a patient can be placed, for example using a patient table, for acquisition. The patient table as well as other typical components of a magnetic resonance imaging device59, like gradient coils and high frequency coils, are not shown for purposes of brevity. 
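Returning briefly to the topology of FIG. 4, the arrangement of convolutional partial networks, a dense partial network for the scalar/vector information, and a further dense partial network operating on the concatenated feature vector could be organized as in the following PyTorch sketch. The layer sizes, pooling choices, class and parameter names, and the use of PyTorch itself are assumptions; the sketch only mirrors the general structure described above, not the disclosed network.

```python
import torch
import torch.nn as nn

class MultiBranchNet(nn.Module):
    """Illustrative multi-branch topology in the spirit of FIG. 4 (not the disclosed network)."""

    def __init__(self, n_image_inputs=2, n_scalar_inputs=8, n_outputs=3):
        super().__init__()
        # One small convolutional partial network per image-like input data set.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),      # 8 * 4 * 4 = 128 features per branch
            )
            for _ in range(n_image_inputs)
        ])
        # Dense partial network for the additional scalar/vector information.
        self.scalar_branch = nn.Sequential(nn.Linear(n_scalar_inputs, 16), nn.ReLU())
        # Further dense partial network applied to the concatenated feature vector.
        self.head = nn.Sequential(
            nn.Linear(n_image_inputs * 128 + 16, 64), nn.ReLU(),
            nn.Linear(64, n_outputs),
        )

    def forward(self, images, scalars):
        features = [branch(img) for branch, img in zip(self.branches, images)]
        features.append(self.scalar_branch(scalars))      # intermediate data from the scalar info
        return self.head(torch.cat(features, dim=1))      # concatenation into the feature vector

# Example: two 64x64 single-channel input maps plus an 8-element record, batch size 1.
net = MultiBranchNet()
output = net([torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)], torch.rand(1, 8))
```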
The magnetic resonance imaging device 59 is controlled using a control device 61, which may be implemented as one or more processors or processing circuitry and may alternatively be referred to as a controller or control computer. In this embodiment, a characterizing system 53 according to the disclosure is integrated into the control device 61, such that magnetic resonance input data sets 33 may be analyzed regarding the characterization of liver tissue on-site. To retrieve additional information 36 from electronic health records 37, the control device 61 may be connected to a database 62, for example via a local network, an internet connection, or any suitable number and/or type of wired and/or wireless links. Although the present disclosure has been described in detail with reference to the preferred embodiment, the present disclosure is not limited by the disclosed examples, from which the skilled person is able to derive other variations without departing from the scope of the disclosure. | 21,640 |
11861828 | DETAILED DESCRIPTION The present invention generally relates to methods and systems for automated estimation of midline shift in brain CT (computed tomography) images. Embodiments of the present invention are described herein to give a visual understanding of such methods and systems. A digital image is often composed of digital representations of one or more objects (or shapes). The digital representation of an object is often described herein in terms of identifying and manipulating the objects. Such manipulations are virtual manipulations accomplished in the memory or other circuitry/hardware of a computer system. Accordingly, is to be understood that embodiments of the present invention may be performed within a computer system using data stored within the computer system. Further, reference herein to pixels of an image may refer equally to voxels of an image and vice versa. Embodiments described herein provide for the automated estimation of midline shift of a brain of a patient. Midline shift is clinically defined as a shift or displacement of the brain past its centerline. Midline shift can be detected and quantified as the perpendicular distance from a midline structure located on the actual centerline of the brain to a line representing the ideal centerline of the brain. At the axial plane of the foramen of Monro, the septum pallicidum of the brain is located on the actual centerline of the brain and therefore the midline shift may be calculated as the perpendicular distance from the septum pallicidum to a line formed between the anterior falx and posterior falx. Advantageously, embodiments described herein provide for improved accuracy and efficiency of midline shift estimation and enable support for timely clinical diagnosis of brain injury. FIG.1shows a method100for quantifying shift of an anatomical object of a patient, in accordance with one or more embodiments. The steps of method100may be performed by one or more suitable computing devices, such as, e.g., computer702ofFIG.7. At step102, a 3D (three dimensional) medical image of an anatomical object of a patient is received. In one embodiment, the anatomical object of the patient is the brain of the patient. However, the anatomical object may be any organ, bone, lesions, or any other anatomical object of the patient. In one embodiment, the 3D medical image is a CT image (e.g., a non-contrast CT image). However, the 3D medical image may be of any other suitable modality, such as, e.g., MRI (magnetic resonance imaging), x-ray, or any other medical imaging modality or combinations of medical imaging modalities. The 3D medical image is comprised of a plurality of 2D (two dimensional) cross sectional slices. The 3D medical image may be received directly from an image acquisition device, such as, e.g., a CT scanner, as the 3D medical image is acquired, or can be received by loading a previously acquired 3D medical image from a storage or memory of a computer system or receiving a 3D medical image that has been transmitted from a remote computer system. In one embodiment, for example during a preprocessing stage, the 3D medical image is identified, from a plurality of 3D candidate medical images, as depicting a shift in the anatomical object (e.g., a midline shift of the brain) using a machine learning classification network. The classification network is trained to discriminate images in the plurality of 3D candidate medical images that depict and that do not depict shift in the anatomical object. 
The classification network may be any suitable machine learning based network, such as, e.g., a deep neural network. The classification network is trained during a prior offline or training stage, as described in further detail below with regards toFIG.4. Once trained, the trained classification network may be applied during an online or testing stage to identify the 3D medical image, from the plurality of 3D candidate medical images, as depicting a shift in the anatomical object. At step104, an initial location of landmarks on the anatomical object is determined in the 3D medical image using a first machine learning network. In one embodiment, where the anatomical object is the brain of the patient, the landmarks may comprise an anterior falx, a posterior falx, and a septum pallicidum of the brain. The first machine learning network receives the 3D medical image as input and generates one or more 3D landmark heatmaps as output. The 3D landmark heatmaps provide a voxel-wise identification of the initial location of the landmarks such that, for example, a voxel intensity value of 1 corresponds to the initial location of the landmarks and a voxel intensity value of 0 does not correspond to the initial location of the landmarks. Each of the one or more 3D landmark heatmaps identifies a respective landmark. For example, the one or more 3D landmark heatmaps may comprise a first 3D landmark heatmap identifying the anterior falx, a second 3D landmark heatmap identifying the posterior falx, and a third 3D landmark heatmap identifying the septum pallicidum. The first machine learning network may be any suitable machine learning based network, such as, e.g., a deep neural network. The first machine learning network is trained during a prior offline or training stage, as described in further detail below with regards toFIG.4. Once trained, the trained first machine learning network may be applied during an online or testing stage (e.g., at step104ofFIG.1) to determine the initial location of the landmarks in the 3D medical image. FIG.2shows an illustrative comparison between a ground truth 3D landmark heatmap202and a predicted 3D landmark heatmap204shown overlaid on a 3D medical image, in accordance with one or more embodiments. Predicted 3D landmark heatmap204is an example of the 3D landmark heatmap generated by the first machine learning network at step104ofFIG.1. As shown inFIG.2, ground truth 3D landmark heatmap202identifies a ground truth initial location of landmarks comprising an anterior falx206, a septum pallicidum208, and a posterior falx210of the brain and predicted 3D landmark heatmap204identifies a predicted initial location of landmarks comprising an anterior falx212, a septum pallicidum214, and a posterior falx216of the brain. At step106ofFIG.1, a 2D slice depicting the initial location of the landmarks is extracted from the 3D medical image. The 2D slice is extracted from the 3D medical image by calculating a point coordinate of the foramen of Monro in the 3D medical image and extracting the 2D slice corresponding to the point coordinate from the 3D medical image. Accordingly, the 2D slice is the slice of the 3D medical image at the axial plane of the foramen of Monro of the brain of the patient. An exemplary 2D slice is shown as 2D slice302inFIG.3, described in further detail below. At step108ofFIG.1, the initial location of the landmarks in the 2D slice is refined using a second machine learning network. 
The initial location of the landmarks is refined to determine a more precise location of the landmarks in the 2D slice. The second machine learning network receives the 2D slice as input and generates one or more 2D landmark heatmaps as output. The 2D landmark heatmaps provide a pixel-wise identification of the refined location of the landmarks such that, for example, a pixel intensity value of 1 corresponds to the refined location of the landmarks and a pixel intensity value of 0 does not correspond to the refined location of the landmarks. Each of the one or more 2D landmark heatmaps identifies a respective landmark. For example, the one or more 2D landmark heatmaps may comprise a first 2D landmark heatmap identifying the anterior falx, a second 2D landmark heatmap identifying the posterior falx, and a third 2D landmark heatmap identifying the septum pallicidum. The second machine learning network may be any suitable machine learning based network, such as, e.g., a deep neural network. The second machine learning network is trained during a prior offline or training stage, as described in further detail below with regards toFIG.4. Once trained, the trained second machine learning network may be applied during an online or testing stage (e.g., at step108ofFIG.1) to determine the refined location of the landmarks in the 2D slice. FIG.3shows an illustrative comparison of a ground truth 2D landmark heatmap304and a predicted 2D landmark heatmap306for a 2D slice302, in accordance with one or more embodiments. Predicted 2D landmark heatmap306is shown overlaid on 2D slice302. Predicted 2D landmark heatmap306is an example of the 2D landmark heatmap generated by the second machine learning network at step108ofFIG.1. As shown inFIG.3, ground truth 2D landmark heatmap304identifies a ground truth refined location of landmarks comprising an anterior falx308, a septum pallicidum310, and a posterior falx312of the brain and predicted 2D landmark heatmap306identifies a predicted refined location of landmarks comprising an anterior falx314, a septum pallicidum316, and a posterior falx318of the brain. At step110ofFIG.1, a shift of the anatomical object is quantified based on the refined location of the landmarks in the 2D slice. In one embodiment, where the anatomical object is the brain of the patient, the shift of the anatomical object comprises a midline shift of the brain. At the axial plane of the foramen of Monro, corresponding to the 2D slice, the septum pallicidum is located along the actual centerline of the brain. The line between the anterior falx and the posterior falx represents the ideal centerline of the brain. Accordingly, the midline shift may be quantified by calculating the perpendicular distance from the septum pallicidum to the line formed between the anterior falx and the posterior falx, as located based on the refined location of the landmarks in the 2D slice. For example, as shown in predicted 2D landmark heatmap306inFIG.3, the midline shift is represented as the length of line322, representing the perpendicular distance from septum pallicidum316to line320between anterior falx314and posterior falx318. In one embodiment, where the refined location of one or more of the landmarks is not determined at step108, the initial location of the one or more of the landmarks determined at step104may be utilized at step110to quantify the shift of the anatomical object. 
For example, if the refined location of the septum pallicidum is not determined at step108, the initial location of the septum pallicidum determined at step104is utilized to quantify the shift of the anatomical object at step110. At step112ofFIG.1, the quantified shift of the anatomical object is output. For example, the quantified shift of the anatomical object can be output by displaying the quantified shift of the anatomical object on a display device of a computer system, storing the quantified shift of the anatomical object on a memory or storage of a computer system, or by transmitting the quantified shift of the anatomical object to a remote computer system. In one embodiment, the quantified shift of the anatomical object may be output for clinical decision making. For example, the quantified shift of the anatomical object may be output to a clinical decision support system for automatically recommending a treatment or course of action for the patient based on the quantified shift. Advantageously, embodiments described herein provide for the quantification of a shift of an anatomical object by determining an initial location of landmarks in the 3D medical image using the first machine learning network and refining the initial location of the landmarks in a 2D slice, extracted from the 3D medical image, using the second machine learning network. The determination of the location of landmarks in the 3D medical image and in the 2D slice enables effective extraction of useful image features from surrounding brain tissues. The first machine learning network predicts general regions of interest, which include the target location of the landmarks and the 2D slice at the axial place of the foramen of Monro, in the 3D medical image. The second machine learning network predicts refined locations of the landmarks on the 2D slice. In general, 3D machine learning networks require significant amount of training data and therefore it is often challenging to train such 3D networks to achieve desired performance. In accordance with embodiments described herein, the required size of the training data is reduced. FIG.4shows a workflow400for training machine learning networks for quantifying a shift of an anatomical object of a patient, in accordance with one or more embodiments. As shown inFIG.4, workflow400is performed for training classification network402, first machine learning network404, and second machine learning network406. In one example, classification network402is the classification network applied to identify the 3D medical image, from a plurality of 3D candidate medical images, received at step102ofFIG.1, first machine learning network404is the first machine learning network applied to determine an initial location of landmarks in the 3D medical image at step104ofFIG.1, and second machine learning network406is the second machine learning network applied to refined the initial location of the landmarks in the 2D slice at step108ofFIG.1. Workflow400is performed during a prior training or offline stage to train the machine learning networks for performing various medical imaging analysis tasks. Classification network402, first machine learning network404, and second machine learning network406are trained independently. Once trained, the trained machine learning networks are applied during an online or testing stage (e.g., at method100ofFIG.1). 
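Returning to the quantification at step 110, the perpendicular distance from the septum to the line between the anterior falx and the posterior falx reduces to a few lines of geometry. The following sketch is illustrative only; the coordinate convention, the function name, and the optional pixel-spacing conversion are assumptions.

```python
import numpy as np

def midline_shift(anterior_falx, posterior_falx, septum, pixel_spacing=1.0):
    """Perpendicular distance from the septum (actual midline) to the anterior-posterior falx line.

    Landmarks are (row, col) coordinates on the 2D slice at the axial plane of the foramen
    of Monro; `pixel_spacing` converts the result from pixels to millimetres if known.
    """
    a = np.asarray(anterior_falx, dtype=float)
    p = np.asarray(posterior_falx, dtype=float)
    s = np.asarray(septum, dtype=float)
    line = p - a
    d = s - a
    # 2D cross-product magnitude / line length = distance to the infinite line through a and p.
    distance = abs(line[0] * d[1] - line[1] * d[0]) / np.linalg.norm(line)
    return float(distance) * pixel_spacing

# Example with made-up landmark coordinates (pixels) and 0.5 mm in-plane spacing.
shift_mm = midline_shift((60, 250), (440, 255), (250, 280), pixel_spacing=0.5)
```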
Classification network402, first machine learning network404, and second machine learning network406may be implemented using any suitable machine learning network. In one example, classification network402, first machine learning network404, and/or second machine learning network406are deep learning neural networks. Classification network402is trained with training images with associated ground truth labels408to classify the training images as depicting a shift of an anatomical object (e.g., a midline shift of the brain). The ground truth labels may be manually annotated labels indicating whether the training images depicted a shift of the anatomical object. First machine learning network404is trained with 3D training medical images410and corresponding ground truth landmark annotations412. 3D training medical images410are the training images classified as depicting a shift of the anatomical object by classification network402. Ground truth landmark annotations412are 3D training landmark heatmaps created by applying a 2D Gaussian smoothing kernel to landmarks (e.g., anterior falx, posterior falx, and septum pallicidum of the brain). To increase axial information density, the 3D training landmark heatmaps are generated by stacking multiple 2D heatmaps. The center of the landmarks in the 3D training landmark heatmaps represent the ground truth location of the landmarks. In one example, 11 2D heatmap may be stacked to form the 3D training landmark heatmaps, however any other suitable number of 2D heatmaps may be utilized to generated the 3D training landmark heatmaps, e.g., depending on the CT acquisition protocol. Second machine learning network406are trained with 2D training slices414and ground truth landmark annotations416. 2D training slices414are 2D slices extracted from the 3D landmark heatmaps generated by first machine learning network404at the axial plane of the foramen of Monro of the brain. Ground truth landmark annotations416are 2D training landmark heatmaps created by applying a 3D Gaussian smoothing kernel to landmarks. Embodiments described herein are described with respect to the claimed systems as well as with respect to the claimed methods. Features, advantages or alternative embodiments herein can be assigned to the other claimed objects and vice versa. In other words, claims for the systems can be improved with features described or claimed in the context of the methods. In this case, the functional features of the method are embodied by objective units of the providing system. Furthermore, certain embodiments described herein are described with respect to methods and systems utilizing trained machine learning based networks (or models), as well as with respect to methods and systems for training machine learning based networks. Features, advantages or alternative embodiments herein can be assigned to the other claimed objects and vice versa. In other words, claims for methods and systems for training a machine learning based network can be improved with features described or claimed in context of the methods and systems for utilizing a trained machine learning based network, and vice versa. In particular, the trained machine learning based networks applied in embodiments described herein can be adapted by the methods and systems for training the machine learning based networks. Furthermore, the input data of the trained machine learning based network can comprise advantageous features and embodiments of the training input data, and vice versa. 
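To make the heatmap construction described above more concrete, the sketch below builds a 3D training landmark heatmap by placing a 2D Gaussian at the landmark position and stacking it over a few slices (11 slices, as in the example above). The standard deviation, the volume dimensions, and the function names are assumptions chosen for illustration.

```python
import numpy as np

def gaussian_heatmap_2d(shape, center, sigma=3.0):
    """2D heatmap with a Gaussian blob (peak value 1) at the ground-truth landmark location."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (rows - center[0]) ** 2 + (cols - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def stacked_heatmap_3d(volume_shape, landmark, n_slices=11, sigma=3.0):
    """3D training heatmap: the same 2D Gaussian repeated over `n_slices` slices
    centered on the landmark's slice index, and zero elsewhere.

    `volume_shape` is (slices, rows, cols); `landmark` is (slice, row, col).
    """
    z, r, c = landmark
    plane = gaussian_heatmap_2d(volume_shape[1:], (r, c), sigma)
    volume = np.zeros(volume_shape)
    half = n_slices // 2
    lo, hi = max(0, z - half), min(volume_shape[0], z + half + 1)
    volume[lo:hi] = plane          # broadcast the 2D plane into the selected slices
    return volume

# Example: one landmark at slice 20, row 120, column 130 of a 40 x 256 x 256 volume.
heatmap = stacked_heatmap_3d((40, 256, 256), (20, 120, 130))
```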
Furthermore, the output data of the trained machine learning based network can comprise advantageous features and embodiments of the output training data, and vice versa. In general, a trained machine learning based network mimics cognitive functions that humans associate with other human minds. In particular, by training based on training data, the trained machine learning based network is able to adapt to new circumstances and to detect and extrapolate patterns. In general, parameters of a machine learning based network can be adapted by means of training. In particular, supervised training, semi-supervised training, unsupervised training, reinforcement learning and/or active learning can be used. Furthermore, representation learning (an alternative term is “feature learning”) can be used. In particular, the parameters of the trained machine learning based network can be adapted iteratively by several steps of training. In particular, a trained machine learning based network can comprise a neural network, a support vector machine, a decision tree, and/or a Bayesian network, and/or the trained machine learning based network can be based on k-means clustering, Q-learning, genetic algorithms, and/or association rules. In particular, a neural network can be a deep neural network, a convolutional neural network, or a convolutional deep neural network. Furthermore, a neural network can be an adversarial network, a deep adversarial network and/or a generative adversarial network. FIG.5shows an embodiment of an artificial neural network500, in accordance with one or more embodiments. Alternative terms for “artificial neural network” are “neural network”, “artificial neural net” or “neural net”. Machine learning networks described herein, such as, e.g., the classification network, the first machine learning network, and the second machine learning network utilized in method100ofFIG.1and the classification network402, the first machine learning network404, and the second machine learning network406in workflow400ofFIG.4, may be implemented using artificial neural network500. The artificial neural network500comprises nodes502-522and edges532,534, . . . ,536, wherein each edge532,534, . . . ,536is a directed connection from a first node502-522to a second node502-522. In general, the first node502-522and the second node502-522are different nodes502-522, it is also possible that the first node502-522and the second node502-522are identical. For example, inFIG.5, the edge532is a directed connection from the node502to the node506, and the edge534is a directed connection from the node504to the node506. An edge532,534, . . . ,536from a first node502-522to a second node502-522is also denoted as “ingoing edge” for the second node502-522and as “outgoing edge” for the first node502-522. In this embodiment, the nodes502-522of the artificial neural network500can be arranged in layers524-530, wherein the layers can comprise an intrinsic order introduced by the edges532,534, . . . ,536between the nodes502-522. In particular, edges532,534, . . . ,536can exist only between neighboring layers of nodes. In the embodiment shown inFIG.5, there is an input layer524comprising only nodes502and504without an incoming edge, an output layer530comprising only node522without outgoing edges, and hidden layers526,528in-between the input layer524and the output layer530. In general, the number of hidden layers526,528can be chosen arbitrarily. 
The number of nodes502and504within the input layer524usually relates to the number of input values of the neural network500, and the number of nodes522within the output layer530usually relates to the number of output values of the neural network500. In particular, a (real) number can be assigned as a value to every node502-522of the neural network500. Here, x(n)idenotes the value of the i-th node502-522of the n-th layer524-530. The values of the nodes502-522of the input layer524are equivalent to the input values of the neural network500, and the value of the node522of the output layer530is equivalent to the output value of the neural network500. Furthermore, each edge532,534, . . . ,536can comprise a weight being a real number, in particular, the weight is a real number within the interval [−1, 1] or within the interval [0, 1]. Here, w(m,n)i,jdenotes the weight of the edge between the i-th node502-522of the m-th layer524-530and the j-th node502-522of the n-th layer524-530. Furthermore, the abbreviation w(n)i,jis defined for the weight w(n,n+1)i,j. In particular, to calculate the output values of the neural network500, the input values are propagated through the neural network. In particular, the values of the nodes502-522of the (n+1)-th layer524-530can be calculated based on the values of the nodes502-522of the n-th layer524-530by xj(n+1)=f(Σixi(n)·wi,j(n)). Herein, the function f is a transfer function (another term is “activation function”). Known transfer functions are step functions, sigmoid functions (e.g. the logistic function, the generalized logistic function, the hyperbolic tangent, the Arctangent function, the error function, the smoothstep function) or rectifier functions. The transfer function is mainly used for normalization purposes. In particular, the values are propagated layer-wise through the neural network, wherein values of the input layer524are given by the input of the neural network500, wherein values of the first hidden layer526can be calculated based on the values of the input layer524of the neural network, wherein values of the second hidden layer528can be calculated based on the values of the first hidden layer526, etc. In order to set the values w(m,n)i,jfor the edges, the neural network500has to be trained using training data. In particular, training data comprises training input data and training output data (denoted as ti). For a training step, the neural network500is applied to the training input data to generate calculated output data. In particular, the training output data and the calculated output data comprise a number of values, said number being equal to the number of nodes of the output layer. In particular, a comparison between the calculated output data and the training output data is used to recursively adapt the weights within the neural network500(backpropagation algorithm). In particular, the weights are changed according to w′i,j(n)=wi,j(n)−γ·δj(n)·xi(n), wherein γ is a learning rate, and the numbers δ(n)jcan be recursively calculated as δj(n)=(Σkδk(n+1)·wj,k(n+1))·f′(Σixi(n)·wi,j(n)) based on δ(n+1)j, if the (n+1)-th layer is not the output layer, and δj(n)=(xj(n+1)−tj(n+1))·f′(Σixi(n)·wi,j(n)) if the (n+1)-th layer is the output layer530, wherein f′ is the first derivative of the activation function, and tj(n+1)is the comparison training value for the j-th node of the output layer530. FIG.6shows a convolutional neural network600, in accordance with one or more embodiments.
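The propagation rule and the weight update just given can be transcribed directly into code. The following compact numpy sketch implements xj(n+1)=f(Σixi(n)·wi,j(n)) and w′i,j(n)=wi,j(n)−γ·δj(n)·xi(n) for a small fully connected network; the layer sizes, the logistic transfer function, the training example, and the learning rate are arbitrary illustrative choices.

```python
import numpy as np

def f(z):                         # logistic transfer (activation) function
    return 1.0 / (1.0 + np.exp(-z))

def f_prime(z):                   # first derivative of the activation function
    s = f(z)
    return s * (1.0 - s)

rng = np.random.default_rng(0)
sizes = [2, 3, 3, 1]                      # input layer, two hidden layers, output layer
W = [rng.uniform(-1, 1, (sizes[n], sizes[n + 1])) for n in range(len(sizes) - 1)]

x_in = np.array([0.2, 0.7])               # training input data
t = np.array([1.0])                       # training output data
gamma = 0.1                               # learning rate

# Forward pass: propagate the values layer-wise through the network.
xs, zs = [x_in], []
for w in W:
    zs.append(xs[-1] @ w)
    xs.append(f(zs[-1]))

# Backpropagation: delta for the output layer, then recursively for hidden layers.
deltas = [None] * len(W)
deltas[-1] = (xs[-1] - t) * f_prime(zs[-1])
for n in range(len(W) - 2, -1, -1):
    deltas[n] = (deltas[n + 1] @ W[n + 1].T) * f_prime(zs[n])

# Weight update: w'_ij = w_ij - gamma * delta_j * x_i for every layer.
for n in range(len(W)):
    W[n] -= gamma * np.outer(xs[n], deltas[n])
```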
Machine learning networks described herein, such as, e.g., the classification network, the first machine learning network, and the second machine learning network utilized in method100ofFIG.1and the classification network402, the first machine learning network404, and the second machine learning network406in workflow400ofFIG.4, may be implemented using convolutional neural network600. In the embodiment shown inFIG.6, the convolutional neural network600comprises an input layer602, a convolutional layer604, a pooling layer606, a fully connected layer608, and an output layer610. Alternatively, the convolutional neural network600can comprise several convolutional layers604, several pooling layers606, and several fully connected layers608, as well as other types of layers. The order of the layers can be chosen arbitrarily; usually, fully connected layers608are used as the last layers before the output layer610. In particular, within a convolutional neural network600, the nodes612-620of one layer602-610can be considered to be arranged as a d-dimensional matrix or as a d-dimensional image. In particular, in the two-dimensional case the value of the node612-620indexed with i and j in the n-th layer602-610can be denoted as x(n)[i,j]. However, the arrangement of the nodes612-620of one layer602-610does not have an effect on the calculations executed within the convolutional neural network600as such, since these are given solely by the structure and the weights of the edges. In particular, a convolutional layer604is characterized by the structure and the weights of the incoming edges forming a convolution operation based on a certain number of kernels. In particular, the structure and the weights of the incoming edges are chosen such that the values x(n)kof the nodes614of the convolutional layer604are calculated as a convolution x(n)k=Kk*x(n−1)based on the values x(n−1)of the nodes612of the preceding layer602, where the convolution * is defined in the two-dimensional case as xk(n)[i,j]=(Kk*x(n−1))[i,j]=Σi′Σj′Kk[i′,j′]·x(n−1)[i−i′,j−j′]. Here, the k-th kernel Kkis a d-dimensional matrix (in this embodiment a two-dimensional matrix), which is usually small compared to the number of nodes612-618(e.g. a 3×3 matrix, or a 5×5 matrix). In particular, this implies that the weights of the incoming edges are not independent, but chosen such that they produce said convolution equation. In particular, for a kernel being a 3×3 matrix, there are only 9 independent weights (each entry of the kernel matrix corresponding to one independent weight), irrespective of the number of nodes612-620in the respective layer602-610. In particular, for a convolutional layer604, the number of nodes614in the convolutional layer is equivalent to the number of nodes612in the preceding layer602multiplied by the number of kernels. If the nodes612of the preceding layer602are arranged as a d-dimensional matrix, using a plurality of kernels can be interpreted as adding a further dimension (denoted as “depth” dimension), so that the nodes614of the convolutional layer604are arranged as a (d+1)-dimensional matrix.
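As a concrete illustration, the convolution equation above can be evaluated directly. The following numpy sketch transcribes xk(n)[i,j]=Σi′Σj′Kk[i′,j′]·x(n−1)[i−i′,j−j′], with values outside the image treated as zero; the 6×6 input and the two kernels are illustrative only.

```python
import numpy as np

def conv2d_as_defined(x, K):
    """Evaluate x_k[i, j] = sum_{i', j'} K[i', j'] * x[i - i', j - j'] with zero padding."""
    H, W = x.shape
    kh, kw = K.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            s = 0.0
            for ip in range(kh):
                for jp in range(kw):
                    ii, jj = i - ip, j - jp
                    if 0 <= ii < H and 0 <= jj < W:      # zero outside the image border
                        s += K[ip, jp] * x[ii, jj]
            out[i, j] = s
    return out

x = np.arange(36, dtype=float).reshape(6, 6)                      # the 6 x 6 input layer
K1 = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)    # one 3 x 3 kernel (9 weights)
K2 = np.ones((3, 3)) / 9.0                                        # a second 3 x 3 kernel
feature_maps = np.stack([conv2d_as_defined(x, K) for K in (K1, K2)])  # two 6 x 6 maps, depth 2
```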
If the nodes612of the preceding layer602are already arranged as a (d+1)-dimensional matrix comprising a depth dimension, using a plurality of kernels can be interpreted as expanding along the depth dimension, so that the nodes614of the convolutional layer604are arranged also as a (d+1)-dimensional matrix, wherein the size of the (d+1)-dimensional matrix with respect to the depth dimension is by a factor of the number of kernels larger than in the preceding layer602. The advantage of using convolutional layers604is that spatially local correlation of the input data can be exploited by enforcing a local connectivity pattern between nodes of adjacent layers, in particular by each node being connected to only a small region of the nodes of the preceding layer. In the embodiment shown inFIG.6, the input layer602comprises 36 nodes612, arranged as a two-dimensional 6×6 matrix. The convolutional layer604comprises 72 nodes614, arranged as two two-dimensional 6×6 matrices, each of the two matrices being the result of a convolution of the values of the input layer with a kernel. Equivalently, the nodes614of the convolutional layer604can be interpreted as being arranged as a three-dimensional 6×6×2 matrix, wherein the last dimension is the depth dimension. A pooling layer606can be characterized by the structure and the weights of the incoming edges and the activation function of its nodes616forming a pooling operation based on a non-linear pooling function f. For example, in the two-dimensional case the values x(n)of the nodes616of the pooling layer606can be calculated based on the values x(n−1)of the nodes614of the preceding layer604as x(n)[i,j]=f(x(n−1)[i·d1,j·d2], . . . ,x(n−1)[i·d1+d1−1,j·d2+d2−1]). In other words, by using a pooling layer606, the number of nodes614,616can be reduced, by replacing a number d1·d2 of neighboring nodes614in the preceding layer604with a single node616being calculated as a function of the values of said number of neighboring nodes in the pooling layer. In particular, the pooling function f can be the max-function, the average or the L2-Norm. In particular, for a pooling layer606the weights of the incoming edges are fixed and are not modified by training. The advantage of using a pooling layer606is that the number of nodes614,616and the number of parameters is reduced. This leads to the amount of computation in the network being reduced and to a control of overfitting. In the embodiment shown inFIG.6, the pooling layer606is a max-pooling, replacing four neighboring nodes with only one node, the value being the maximum of the values of the four neighboring nodes. The max-pooling is applied to each d-dimensional matrix of the previous layer; in this embodiment, the max-pooling is applied to each of the two two-dimensional matrices, reducing the number of nodes from 72 to 18. A fully-connected layer608can be characterized by the fact that a majority of, in particular all, edges between nodes616of the previous layer606and the nodes618of the fully-connected layer608are present, and wherein the weight of each of the edges can be adjusted individually. In this embodiment, the nodes616of the preceding layer606of the fully-connected layer608are displayed both as two-dimensional matrices, and additionally as non-related nodes (indicated as a line of nodes, wherein the number of nodes was reduced for better presentability). In this embodiment, the number of nodes618in the fully connected layer608is equal to the number of nodes616in the preceding layer606.
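A minimal sketch of the pooling rule given above with f being the max-function and d1=d2=2, which reduces each two-dimensional 6×6 matrix to 3×3 (i.e., from 72 to 18 nodes across the two feature maps); the input values are illustrative.

```python
import numpy as np

def max_pool2d(x, d1=2, d2=2):
    """Replace every d1 x d2 block of neighboring nodes by its maximum value."""
    H, W = x.shape
    out = np.empty((H // d1, W // d2))
    for i in range(H // d1):
        for j in range(W // d2):
            out[i, j] = x[i * d1:i * d1 + d1, j * d2:j * d2 + d2].max()
    return out

fm = np.arange(36, dtype=float).reshape(6, 6)    # one 6 x 6 feature map
pooled = max_pool2d(fm)                          # shape (3, 3); pooling weights are fixed, not trained
```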
Alternatively, the number of nodes616,618can differ. Furthermore, in this embodiment, the values of the nodes620of the output layer610are determined by applying the Softmax function onto the values of the nodes618of the preceding layer608. By applying the Softmax function, the sum of the values of all nodes620of the output layer610is 1, and all values of all nodes620of the output layer are real numbers between 0 and 1. A convolutional neural network600can also comprise a ReLU (rectified linear units) layer or activation layers with non-linear transfer functions. In particular, the number of nodes and the structure of the nodes contained in a ReLU layer are equivalent to the number of nodes and the structure of the nodes contained in the preceding layer. In particular, the value of each node in the ReLU layer is calculated by applying a rectifying function to the value of the corresponding node of the preceding layer. The input and output of different convolutional neural network blocks can be wired using summation (residual/dense neural networks), element-wise multiplication (attention) or other differentiable operators. Therefore, the convolutional neural network architecture can be nested rather than being sequential if the whole pipeline is differentiable. In particular, convolutional neural networks600can be trained based on the backpropagation algorithm. For preventing overfitting, methods of regularization can be used, e.g. dropout of nodes612-620, stochastic pooling, use of artificial data, weight decay based on the L1 or the L2 norm, or max norm constraints. Different loss functions can be combined for training the same neural network to reflect the joint training objectives. A subset of the neural network parameters can be excluded from optimization to retain the weights pretrained on other datasets. Systems, apparatuses, and methods described herein may be implemented using digital circuitry, or using one or more computers using well-known computer processors, memory units, storage devices, computer software, and other components. Typically, a computer includes a processor for executing instructions and one or more memories for storing instructions and data. A computer may also include, or be coupled to, one or more mass storage devices, such as one or more magnetic disks, internal hard disks and removable disks, magneto-optical disks, optical disks, etc. Systems, apparatus, and methods described herein may be implemented using computers operating in a client-server relationship. Typically, in such a system, the client computers are located remotely from the server computer and interact via a network. The client-server relationship may be defined and controlled by computer programs running on the respective client and server computers. Systems, apparatus, and methods described herein may be implemented within a network-based cloud computing system. In such a network-based cloud computing system, a server or another processor that is connected to a network communicates with one or more client computers via a network. A client computer may communicate with the server via a network browser application residing and operating on the client computer, for example. A client computer may store data on the server and access the data via the network. A client computer may transmit requests for data, or requests for online services, to the server via the network. The server may perform requested services and provide data to the client computer(s).
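Tying together the layer types described above for convolutional neural network600(convolution, ReLU, max pooling, a fully connected layer, a Softmax output, and dropout for regularization), the following is a minimal PyTorch sketch; the channel counts and the 6×6 input size simply mirror the illustrative example and are not prescribed by this disclosure.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.conv = nn.Conv2d(1, 2, kernel_size=3, padding=1)   # 1 input channel -> 2 kernels
        self.relu = nn.ReLU()                                   # rectifying activation layer
        self.pool = nn.MaxPool2d(2)                             # 6x6 -> 3x3 per feature map
        self.drop = nn.Dropout(p=0.5)                           # dropout for regularization
        self.fc = nn.Linear(2 * 3 * 3, num_classes)             # fully connected layer

    def forward(self, x):
        x = self.pool(self.relu(self.conv(x)))
        x = self.drop(torch.flatten(x, 1))
        return torch.softmax(self.fc(x), dim=1)                 # outputs in (0, 1), summing to 1

net = SmallCNN()
probs = net(torch.randn(4, 1, 6, 6))     # a batch of four 6 x 6 single-channel inputs
```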
The server may also transmit data adapted to cause a client computer to perform a specified function, e.g., to perform a calculation, to display specified data on a screen, etc. For example, the server may transmit a request adapted to cause a client computer to perform one or more of the steps or functions of the methods and workflows described herein, including one or more of the steps or functions ofFIG.1or4. Certain steps or functions of the methods and workflows described herein, including one or more of the steps or functions ofFIG.1or4, may be performed by a server or by another processor in a network-based cloud-computing system. Certain steps or functions of the methods and workflows described herein, including one or more of the steps ofFIG.1or4, may be performed by a client computer in a network-based cloud computing system. The steps or functions of the methods and workflows described herein, including one or more of the steps ofFIG.1or4, may be performed by a server and/or by a client computer in a network-based cloud computing system, in any combination. Systems, apparatus, and methods described herein may be implemented using a computer program product tangibly embodied in an information carrier, e.g., in a non-transitory machine-readable storage device, for execution by a programmable processor; and the method and workflow steps described herein, including one or more of the steps or functions ofFIG.1or4, may be implemented using one or more computer programs that are executable by such a processor. A computer program is a set of computer program instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A high-level block diagram of an example computer702that may be used to implement systems, apparatus, and methods described herein is depicted inFIG.7. Computer702includes a processor704operatively coupled to a data storage device712and a memory710. Processor704controls the overall operation of computer702by executing computer program instructions that define such operations. The computer program instructions may be stored in data storage device712, or other computer readable medium, and loaded into memory710when execution of the computer program instructions is desired. Thus, the method and workflow steps or functions ofFIG.1or4can be defined by the computer program instructions stored in memory710and/or data storage device712and controlled by processor704executing the computer program instructions. For example, the computer program instructions can be implemented as computer executable code programmed by one skilled in the art to perform the method and workflow steps or functions ofFIG.1or4. Accordingly, by executing the computer program instructions, the processor704executes the method and workflow steps or functions ofFIG.1or4. Computer702may also include one or more network interfaces706for communicating with other devices via a network. Computer702may also include one or more input/output devices708that enable user interaction with computer702(e.g., display, keyboard, mouse, speakers, buttons, etc.). 
Processor704may include both general and special purpose microprocessors, and may be the sole processor or one of multiple processors of computer702. Processor704may include one or more central processing units (CPUs), for example. Processor704, data storage device712, and/or memory710may include, be supplemented by, or incorporated in, one or more application-specific integrated circuits (ASICs) and/or one or more field programmable gate arrays (FPGAs). Data storage device712and memory710each include a tangible non-transitory computer readable storage medium. Data storage device712, and memory710, may each include high-speed random access memory, such as dynamic random access memory (DRAM), static random access memory (SRAM), double data rate synchronous dynamic random access memory (DDR RAM), or other random access solid state memory devices, and may include non-volatile memory, such as one or more magnetic disk storage devices such as internal hard disks and removable disks, magneto-optical disk storage devices, optical disk storage devices, flash memory devices, semiconductor memory devices, such as erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM), digital versatile disc read-only memory (DVD-ROM) disks, or other non-volatile solid state storage devices. Input/output devices708may include peripherals, such as a printer, scanner, display screen, etc. For example, input/output devices708may include a display device such as a cathode ray tube (CRT) or liquid crystal display (LCD) monitor for displaying information to the user, a keyboard, and a pointing device such as a mouse or a trackball by which the user can provide input to computer702. An image acquisition device714can be connected to the computer702to input image data (e.g., medical images) to the computer702. It is possible to implement the image acquisition device714and the computer702as one device. It is also possible that the image acquisition device714and the computer702communicate wirelessly through a network. In a possible embodiment, the computer702can be located remotely with respect to the image acquisition device714. Any or all of the systems and apparatus discussed herein may be implemented using one or more computers such as computer702. One skilled in the art will recognize that an implementation of an actual computer or computer system may have other structures and may contain other components as well, and thatFIG.7is a high level representation of some of the components of such a computer for illustrative purposes. The foregoing Detailed Description is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention. | 41,303 |
11861829 | DESCRIPTION OF EMBODIMENTS At present, the exemplary implementations are described comprehensively with reference to the accompanying drawings. However, the examples of implementations may be implemented in a plurality of forms, and it is not to be understood as being limited to the examples described herein. Conversely, the implementations are provided to make this application more comprehensive and complete, and comprehensively convey the idea of the examples of the implementations to a person skilled in the art. In addition, the described features, structures or characteristics may be combined in one or more embodiments in any appropriate manner. In the following descriptions, a lot of specific details are provided to give a comprehensive understanding of the embodiments of this application. However, a person of ordinary skill in the art is to be aware that, the technical solutions in this application may be implemented without one or more of the particular details, or another method, unit, apparatus, or step may be used. In other cases, well-known methods, apparatuses, implementations, or operations are not shown or described in detail, in order not to obscure the aspects of this application. The block diagrams shown in the accompanying drawings are merely functional entities and do not necessarily correspond to physically independent entities. That is, the functional entities may be implemented in a software form, or in one or more hardware modules or integrated circuits, or in different networks and/or processor apparatuses and/or microcontroller apparatuses. The flowcharts shown in the accompanying drawings are merely exemplary descriptions, do not need to include all content and operations/steps, and do not need to be performed in the described orders either. For example, some operations/steps may be further divided, while some operations/steps may be combined or partially combined. Therefore, an actual execution order may change according to an actual case. AI involves a theory, a method, a technology, and an application system that use a digital computer or a machine controlled by the digital computer to simulate, extend, and expand human intelligence, perceive an environment, obtain knowledge, and use knowledge to obtain an optimal result. In other words, AI is a comprehensive technology in computer science and attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. AI is to study the design principles and implementation methods of various intelligent machines, to enable the machines to have the functions of perception, reasoning, and decision-making. The AI technology is a comprehensive discipline, and relates to a wide range of fields including both hardware-level technologies and software-level technologies. The basic AI technologies generally include technologies such as a sensor, a dedicated AI chip, cloud computing, distributed storage, a big data processing technology, an operating/interaction system, and electromechanical integration. AI software technologies mainly include several major directions such as a computer vision (CV) technology, a speech processing technology, a natural language processing technology, and machine learning/deep learning. 
The CV is a science that studies how to use a machine to “see”, and furthermore, that uses a camera and a computer to replace human eyes to perform machine vision such as recognition, tracking, and measurement on a target, and further perform graphic processing, so that the computer processes the target into an image more suitable for human eyes to observe, or an image transmitted to an instrument for detection. As a scientific discipline, CV studies related theories and technologies and attempts to establish an AI system that can obtain information from images or multidimensional data. The CV technology generally includes image segmentation, image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, 3D object reconstruction, 3D technology, virtual reality, augmented reality (AR), synchronous positioning, map construction, and other technologies, and further includes common biometric recognition technologies such as face recognition and fingerprint recognition. Machine Learning (ML) is a multi-field interdiscipline, and relates to a plurality of disciplines such as the probability theory, statistics, the approximation theory, convex analysis, and the algorithm complexity theory. ML specializes in studying how a computer simulates or implements a human learning behavior to obtain new knowledge or skills, and reorganize an existing knowledge structure, so as to keep improving its performance. ML is the core of AI, is a basic way to make the computer intelligent, and is applied to various fields of AI. ML and deep learning generally include technologies such as an artificial neural network, a belief network, reinforcement learning, transfer learning, inductive learning, and learning from demonstrations. The medical image detection method provided by the embodiments of this application relates to CV technology, ML technology, etc. of AI, and is specifically explained using the following embodiments. First, abbreviations and key terms involved in the embodiments of this application are defined. Convolutional neural network (CNN) refers to a neural network in a deep learning classification detection technology, including a convolution layer, a pooling layer, and a fully connected layer. Region-based Convolutional Neural Networks (RCNN) refer to generating candidate regions on an image, extracting a feature of each candidate region by using a deep network, then transmitting the feature into a classifier of each category, determining whether the feature belongs to this category, and then using a regression machine to finely correct a candidate box position. Dilated convolution refers to adding dilated spaces in a standard convolution operation, so that gaps exist between the elements of the convolution kernel, thereby enlarging the receptive field of the convolution operation without increasing the number of convolution parameters. CT image: CT refers to computed tomography, and an image obtained by scanning a certain part of a human body using X-ray, γ-ray, ultrasonic wave, and the like is referred to as a CT image. Slice refers to a slice in a CT image, and the CT image is composed of a plurality of continuous slices. Region-of-interest detection refers to detecting a region of interest in a medical image, such as a target organ region and a suspected lesion region, and providing a confidence score. Feature map refers to a feature map obtained by convolution of an image and a filter.
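A short PyTorch sketch of the dilated convolution just defined: inserting gaps between the kernel elements enlarges the receptive field while the parameter count stays identical to that of a standard 3×3 convolution. The dilation rate of 2 and the channel numbers are illustrative choices only.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 64, 64)                                      # one single-channel slice-like input
standard = nn.Conv2d(1, 8, kernel_size=3, padding=1, dilation=1)   # ordinary 3x3 convolution
dilated = nn.Conv2d(1, 8, kernel_size=3, padding=2, dilation=2)    # 3x3 kernel spread over a 5x5 area

print(standard(x).shape, dilated(x).shape)        # both (1, 8, 64, 64): spatial size preserved
# Same number of parameters: dilation adds gaps, not weights.
print(sum(p.numel() for p in standard.parameters()),
      sum(p.numel() for p in dilated.parameters()))
```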
The feature map may be convolved with the filter to generate a new feature map. Anchor refers to rectangular boxes with different sizes and aspect ratios defined on the feature map in advance. Proposal refers to an anchor obtained after performing classification, regression, and non-maximum suppression (NMS). Bounding boxes are abbreviated as BBox. Ground truth bounding boxes (gt_BBoxes) refer to a true region of interest annotated by a doctor, i.e., a true bounding box. Intersection over Union (IoU) refers to a ratio of an intersection to a union of two bounding boxes. Region of Interest Pooling (ROI pooling) refers to taking a proposal obtained by the network and adjusting it to a uniform size during detection. FPN refers to an object detection method: combining a feature of a shallow network and a feature of a deep network to obtain a new feature map and then performing further prediction on it. Region Proposal Network (RPN) refers to processing an extracted convolution feature map, the RPN being used for searching for a predefined number of regions that possibly include objects. Confidence score represents a reliability level of a predicted parameter, and the higher the confidence score is, the more reliable the predicted parameter is. FIG.1schematically shows a flowchart of a deep learning based medical image detection method according to an embodiment of this application. The deep learning based medical image detection method provided by the embodiment of this application may be executed by any electronic device having a computing processing capability, such as a terminal device, a server, a server cluster, and a cloud server, which is not limited by this application. In the exemplary explanations, the method according to the embodiment of this application is described by taking execution by a cloud server as an example. As shown inFIG.1, the deep learning based medical image detection method provided by the embodiment of this application includes the following steps: Step S110: Acquire a to-be-detected medical image, the to-be-detected medical image including a plurality of slices. In the embodiment of this application, the to-be-detected medical image may be a CT image including a region of interest (such as a target organ and a target part), and the CT image may include a plurality of continuous slices. In the following image processing process, a plurality of slices of the CT image may be selected, for example, any three adjacent slices (in the following example explanation, the three adjacent slices are respectively referred to as a first slice, a second slice, and a third slice), but this application is not limited thereto; the appropriate number of slices may be selected according to the required accuracy and the available computing amount. For example, in one implementation, the selected slices may be slices next to each other, such as slices No. 1, 2, 3, 4, 5, and so on. For another example, the selected slices may be taken every N slices, wherein N is a positive integer. When N=2, the selected slices include slices No. 1, 3, 5, 7, 9, and so on. When N=3, the selected slices include slices No. 1, 4, 7, 10, 13, and so on. When N=4, the selected slices include slices No. 1, 5, 9, 13, 17, and so on. The technical solution provided by the embodiment of this application can be applied to any 3D medical image. The following embodiments take the CT image as an example for exemplary explanation, and the embodiments are not limited to the CT image.
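A minimal sketch of the slice selection just described, covering both adjacent slices and every-N-slices selection; the 1-based slice numbering follows the examples above.

```python
def select_slices(num_slices, n=1):
    """Return 1-based slice numbers: n=1 -> 1, 2, 3, ...; n=3 -> 1, 4, 7, ..."""
    return list(range(1, num_slices + 1, n))

print(select_slices(13, 1))   # [1, 2, 3, ..., 13]: slices next to each other
print(select_slices(13, 3))   # [1, 4, 7, 10, 13]: every N slices with N=3
```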
In the technical solution provided by the embodiment of this application, the deep neural network model generally processes using a single slice as a processing unit, and therefore, in this embodiment, step S120to step S150all use a single slice as the processing unit, to introduce the processing process of the deep neural network model. In actual applications, the deep neural network model may process a plurality of slices included in the to-be-detected medical image one by one for multiple times; the deep neural network model may also concurrently process the plurality of slices included in the to-be-detected medical image in one time; and the processing capability of the deep neural network model is not limited herein. Step S120: Extract, for each slice in the to-be-detected medical image, N basic feature maps of the slice by a deep neural network. N is an integer greater than or equal to 1. In the embodiment of this application, the deep neural network may include a feature extraction network. The feature extraction network may extract low-level feature maps and high-level feature maps including different information for the slice as the basic feature maps. N may be equal to 5, but this application is not limited thereto. The value of N may be determined according to the structure of the feature extraction network. Step S130: Merge, for each slice in the to-be-detected medical image, features of the N basic feature maps of the slice by the deep neural network, to obtain M enhanced feature maps of the slice. M is an integer greater than or equal to 1. For example, M may be equal to 3, but this application is not limited thereto. The value of M may be determined according to the value of N and specific requirements. In an exemplary embodiment, N basic feature maps of the slice may include A low-level feature maps and B high-level feature maps, where both A and B are integers greater than 1; in this case, the merging features of the N basic feature maps of the slice to obtain M enhanced feature maps of the slice may include: performing convolution processing on an ithlow-level feature map of the slice; upsampling a jthhigh-level feature map of the slice; and adding a feature map obtained by the convolution processing on the ithlow-level feature map and a feature map obtained by the upsampling of the jthhigh-level feature map, to obtain a kthenhanced feature map of the slice, where 1≤i<A, 1<j≤B, 1<k≤M, and i, j, and k are all integers. In one implementation, adding a feature map obtained by the convolution processing on the ithlow-level feature map and a feature map obtained by the upsampling of the jthhigh-level feature map to obtain a kthenhanced feature map of the slice may include concatenating the feature map obtained by the convolution processing on the ithlow-level feature map and the feature map obtained by the upsampling of the jthhigh-level feature map to obtain the kthenhanced feature map of the slice. In another implementation, adding a feature map obtained by the convolution processing on the ithlow-level feature map and a feature map obtained by the upsampling of the jthhigh-level feature map to obtain a kthenhanced feature map of the slice may include adding each element in the feature map obtained by the convolution processing on the ithlow-level feature map and each corresponding element in the feature map obtained by the upsampling of the jthhigh-level feature map to obtain corresponding element in the kthenhanced feature map of the slice. 
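The merging of an ith low-level feature map with a jth high-level feature map described above (1×1 convolution on the low-level map, 2× upsampling of the high-level map, element-wise addition) can be sketched in PyTorch as follows; the channel numbers and spatial sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

lateral = nn.Conv2d(512, 256, kernel_size=1)          # 1x1 convolution on the low-level map

def merge(low_level, high_level):
    """low_level: (B, 512, H, W); high_level: (B, 256, H/2, W/2) -> enhanced (B, 256, H, W)."""
    upsampled = F.interpolate(high_level, scale_factor=2, mode="nearest")
    return lateral(low_level) + upsampled              # element-wise addition of the two maps

low = torch.randn(1, 512, 64, 64)                      # i-th low-level basic feature map
high = torch.randn(1, 256, 32, 32)                     # j-th high-level feature map
enhanced = merge(low, high)                            # k-th enhanced feature map, (1, 256, 64, 64)
```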
In an exemplary embodiment, the N basic feature maps of the slice may include A low-level feature maps and B high-level feature maps, where both A and B are integers greater than 1; in this case, the merging features of the N basic feature maps of the slice to obtain M enhanced feature maps of the slice may include: performing convolution processing on an Athlow-level feature map of the slice, to obtain a first high-level feature map of the slice as a first enhanced feature map. In another implementation, the method may include performing convolution processing on an (A−1)thlow-level feature map of the slice, upsampling the first high-level feature map of the slice; and adding a feature map obtained by the convolution processing on the (A−1)thlow-level feature map and a feature map obtained by the upsampling of the first high-level feature map, to obtain a second high-level feature map of the slice as a second enhanced feature map. In another implementation, the method may include performing convolution processing on an (A−2)thlow-level feature map of the slice, upsampling the second high-level feature map of the slice; and adding a feature map obtained by the convolution processing on the (A−2)thlow-level feature map and a feature map obtained by the upsampling of the second high-level feature map, to obtain a third high-level feature map of the slice as a third enhanced feature map. In another implementation, when needed, the above operations may be repeated accordingly. In an exemplary embodiment, A=3, B=3, and M=3. In this case, the merging features of the N basic feature maps of the slice, to obtain M enhanced feature maps of the slice may include: performing convolution processing on a third low-level feature map of the slice, to obtain a first high-level feature map of the slice as a first enhanced feature map; performing convolution processing on a second low-level feature map of the slice, upsampling the first high-level feature map of the slice, and adding a feature map obtained by the convolution processing on the second low-level feature map and a feature map obtained by the upsampling of the first high-level feature map, to obtain a second high-level feature map of the slice as a second enhanced feature map; and performing convolution processing on a first low-level feature map of the slice, upsampling the second high-level feature map of the slice, and adding a feature map obtained by the convolution processing on the first low-level feature map and a feature map obtained by the upsampling of the second high-level feature map, to obtain a third high-level feature map of the slice as a third enhanced feature map. Step S140: Respectively perform, for each slice in the to-be-detected medical image, a hierarchically dilated convolutions operation on the M enhanced feature maps of the slice by the deep neural network, to generate a superposed feature map of each enhanced feature map.
In an exemplary embodiment, the respectively performing a hierarchically dilated convolutions operation on the M enhanced feature maps of the slice, to generate a superposed feature map of each enhanced feature map may include: processing, for each enhanced feature map in the M enhanced feature maps, the enhanced feature map by K dilated convolution layers, to obtain K dilated feature maps of the enhanced feature map, K being an integer greater than 1; processing, for each enhanced feature map in the M enhanced feature maps, the enhanced feature map by common convolution layers, to obtain convolution feature maps of the enhanced feature map; and obtaining, for each enhanced feature map in the M enhanced feature maps, the superposed feature map of the enhanced feature map based on the K dilated feature maps and the convolution feature maps of the enhanced feature map. For example, K may be equal to 3, but this application is not limited thereto. The value of K can be selected according to the specific application scenario. In an exemplary embodiment, the obtaining a superposed feature map of the enhanced feature map based on the K dilated feature maps and the convolution feature maps of the enhanced feature map may include: concatenating the K dilated feature maps and the convolution feature maps of the enhanced feature map to obtain a concatenated feature map of the enhanced feature map; obtaining respective weights of the K dilated convolution layers and the common convolution layers based on the concatenated feature map of the enhanced feature map; and obtaining the superposed feature map of the enhanced feature map based on the K dilated feature maps and the convolution feature maps of the enhanced feature map, and the respective weights of the K dilated convolution layers and common convolution layers. In an exemplary embodiment, receptive fields of the K dilated convolution layers are different. In an exemplary embodiment, the K dilated convolution layers share convolution kernel parameters (i.e., keeping the parameters consistent), so as to reduce the parameter amount, avoid overfitting to a certain degree, and improve the training speed and prediction speed. In the embodiment of this application, for other slices in the plurality of adjacent slices, the processing procedure for obtaining the first to third superposed feature maps is similar to that of the first slice, and reference may be made to the process above. Step S150: Predict position information of a region of interest and a confidence score thereof in the to-be-detected medical image by the deep neural network based on the superposed feature map of each slice in the to-be-detected medical image. In an exemplary embodiment, the predicting position information of a region of interest and a confidence score thereof in the to-be-detected medical image based on the superposed feature map of each slice in the to-be-detected medical image may include: processing the superposed feature map of each slice in the to-be-detected medical image, to obtain position information of an initial region of interest and an initial confidence score thereof in the to-be-detected medical image; and processing the position information of the initial region of interest and the initial confidence score thereof, to obtain the position information of the region of interest and the confidence score thereof in the to-be-detected medical image.
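A hedged PyTorch sketch of one hierarchically dilated convolutions (HDC) block as described above: K=3 dilated 3×3 convolutions that share kernel parameters plus a common 1×1 convolution, whose outputs are concatenated to derive per-branch weights and are then combined as a weighted sum. Deriving the weights via global average pooling and a softmax is an assumption for illustration; the disclosure only states that the weights are obtained from the concatenated feature map. The dilation rates and channel count are also illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HDCBlock(nn.Module):
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.dilations = dilations
        # One shared 3x3 kernel used by all K dilated branches (shared kernel parameters).
        self.shared_weight = nn.Parameter(torch.empty(channels, channels, 3, 3))
        nn.init.kaiming_normal_(self.shared_weight)
        self.common = nn.Conv2d(channels, channels, kernel_size=1)        # common convolution layer
        self.weight_fc = nn.Conv2d(channels * (len(dilations) + 1),
                                   len(dilations) + 1, kernel_size=1)     # weights per branch
        self.reduce = nn.Conv2d(channels, channels, kernel_size=1)        # final 1x1 dimension reduction

    def forward(self, x):
        branches = [F.conv2d(x, self.shared_weight, padding=d, dilation=d)
                    for d in self.dilations]             # K dilated feature maps
        branches.append(self.common(x))                  # convolution feature map
        concat = torch.cat(branches, dim=1)              # concatenated feature map
        # Per-branch weights derived from the concatenated map (assumed: GAP + softmax).
        w = torch.softmax(self.weight_fc(concat).mean(dim=(2, 3)), dim=1)   # (B, K + 1)
        fused = sum(w[:, i, None, None, None] * b for i, b in enumerate(branches))
        return self.reduce(fused + x)                    # add the enhanced map, then reduce

block = HDCBlock(channels=256)
superposed = block(torch.randn(1, 256, 64, 64))          # same spatial size as the enhanced map
```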
In an exemplary embodiment, the processing the superposed feature map, to obtain position information of an initial region of interest and an initial confidence score thereof in the to-be-detected medical image may include: obtaining a dth depth feature map based on a dth superposed feature map of each slice in the to-be-detected medical image, d being an integer greater than or equal to 1 and less than M; and preliminarily classifying M depth feature maps, to obtain the position information of the initial region of interest and the initial confidence score thereof in the to-be-detected medical image. In the embodiment of this application, the deep neural network may include a feature merging network, a hierarchically dilated convolution network, a preliminary classification network, and a prediction network, the feature merging network may be used for merging the low-level features and high-level features in the slice in the to-be-detected medical image, so as to better detect a large object and a small object in the to-be-detected medical image. The hierarchically dilated convolution network may be used for performing a hierarchically dilated convolutions operation on a feature obtained after merging the low-level feature and high-level feature, to capture the surrounding information of the region of interest of the slice in the to-be-detected medical image so as to facilitate detecting the region of interest more accurately. In the embodiment of this application, the feature merging network and the hierarchically dilated convolution network may be used as basic networks for the deep neural network, and the high-level network of the deep neural network may adopt an improved FPN network as a detection network. The FPN network may include an RPN network and an RCNN network. The preliminary classification network may be the RPN network, and the prediction network may be the RCNN network, but this application is not limited thereto. Upon the feature extraction by the feature merging network and the hierarchically dilated convolution network, a new feature map can be obtained, and then the new feature map is inputted into the RPN network for preliminary classification, that is, the RPN network may be used for performing binary classification (distinguishing whether it is a region of interest) and position regression on the bounding box preset on the new feature map, to obtain position information of an initial region of interest and initial confidence score thereof; then the RPN network inputs the position information of the initial region of interest and the initial confidence score thereof into the RCNN network, for a more accurate category classification and position regression at a second phase, i.e., obtaining a final prediction result, to obtain position information of the final region of interest and the confidence score thereof.
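The step from predefined anchors to proposals relies on non-maximum suppression. The following brief self-contained sketch shows NMS together with the IoU ratio defined among the key terms above; the boxes, scores, and the 0.7 threshold are purely illustrative values, not parameters prescribed by this disclosure.

```python
def iou(a, b):
    """Ratio of intersection to union for two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_threshold=0.7):
    """Keep boxes in descending score order, suppressing any box that overlaps a kept one too much."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep

boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (100, 100, 140, 140)]
scores = [0.9, 0.8, 0.75]
print(nms(boxes, scores))    # [0, 2]: the second box overlaps the first too heavily and is suppressed
```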
In an exemplary embodiment, the method may further include: acquiring a training dataset, the training dataset including a medical image annotated with the position information of the region of interest and the confidence score thereof; acquiring a slice annotated with the position information of the region of interest and the confidence score thereof in the medical image and two slices adjacent thereto in an up-and-down direction; and training the deep neural network using the slice annotated with the position information of the region of interest and the confidence score thereof in the medical image and the two slices adjacent thereto in the up-and-down direction. For example, taking the CT image as an example, the DeepLesion dataset open-sourced by the National Institutes of Health Clinical Center (NIHCC) may be used as the training dataset, but this application is not limited thereto. The deep neural network provided by the embodiment of this application may be trained using only one slice annotated with a true region of interest (e.g., a lesion region) and the confidence score thereof and two slices adjacent thereto in the up-and-down direction, i.e., using only three slices (generally, the number of slices collected for a certain part of a patient at one time is far greater than 3) of each CT image in the training dataset; the trained deep neural network can still reach a relatively high detection accuracy for both large and small regions of interest, while lowering redundant information and reducing the computing amount and the data processing amount. For the deep learning based medical image detection method provided by an implementation of this application, on one hand, by acquiring a to-be-detected medical image including a plurality of slices and using a deep neural network to process each slice in the to-be-detected medical image, 3D information in the to-be-detected medical image is used for automatically predicting the position information of the region of interest and the confidence score thereof in the to-be-detected medical image, improving the reliability of the prediction result; on the other hand, the enhanced feature map may be obtained by merging basic feature maps at different layers of each slice in the to-be-detected medical image, that is, the low-level feature and the high-level feature of the slice in the to-be-detected medical image are fused; since the low-level feature is helpful for detecting small-scale features in the to-be-detected medical image, objects of different scales in the to-be-detected medical image are better detected after merging the low-level feature and the high-level feature. In addition, a hierarchically dilated convolutions operation may be performed on the merged enhanced feature map, so as to capture surrounding information of the region of interest of the slice in the to-be-detected medical image, to assist in determining whether it is a true region of interest according to the surrounding information, facilitating more accurate detection of the region of interest. In the embodiment of this application, the feature extraction network may use any one or a combination of ResNet, MobileNet, DenseNet, and the like as the basic feature extraction network of the deep neural network. The training of the deep model is relatively easy since the ResNet adopts residual connection and Batch Normalization (BN).
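A minimal sketch of the training-sample construction just described: the annotated slice and its two neighbors in the up-and-down direction are gathered into a three-channel input. The array shapes are illustrative, and the channel-stacking layout is an assumption about how the three slices are fed to a 2D backbone.

```python
import numpy as np

def make_training_sample(ct_volume, annotated_index):
    """ct_volume: (num_slices, H, W); returns a (3, H, W) input built from three adjacent slices."""
    below = max(annotated_index - 1, 0)                         # slice below the annotated one
    above = min(annotated_index + 1, ct_volume.shape[0] - 1)    # slice above the annotated one
    return np.stack([ct_volume[below],
                     ct_volume[annotated_index],
                     ct_volume[above]], axis=0)

volume = np.random.rand(80, 512, 512).astype(np.float32)        # e.g., one CT series from DeepLesion
sample = make_training_sample(volume, annotated_index=41)       # shape (3, 512, 512)
```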
Therefore, in the following embodiment, ResNet50 is taken as the feature extraction network as an example for exemplary explanation; this application is actually not limited thereto. In the embodiment of this application, the schematic diagram of the ResNet50 model is as shown in Table 1 below. A Rectified Linear Unit (ReLU) layer and a BN layer are concatenated behind each convolution layer.

TABLE 1 ResNet50 structure table
Layer name | Output size | ResNet50
Conv1 (a first convolution layer) | 256 × 256 | 7 × 7, 64, stride 2
Conv2_x (a second convolution layer) | 128 × 128 | 3 × 3 max pool, stride 2; [1 × 1, 64; 3 × 3, 64; 1 × 1, 256] × 3 blocks
Conv3_x (a third convolution layer) | 64 × 64 | [1 × 1, 128; 3 × 3, 128; 1 × 1, 512] × 4 blocks
Conv4_x (a fourth convolution layer) | 32 × 32 | [1 × 1, 256; 3 × 3, 256; 1 × 1, 1024] × 6 blocks
Conv5_x (a fifth convolution layer) | 16 × 16 | [1 × 1, 512; 3 × 3, 512; 1 × 1, 2048] × 3 blocks

FIG.2schematically shows a schematic structural diagram of a block in a Resnet50 network according to an embodiment of this application. The second convolution layer of ResNet50 is taken as an example to explain the structure of the block herein. The block structures of other convolution layers may refer toFIG.2. FIG.3schematically shows a schematic structural diagram of an improved FPN network according to an embodiment of this application.FIG.3provides an improved FPN network structure. As shown inFIG.3, the embodiment of this application differs from Faster-RCNN in that before the RPN network performs preliminary classification, the FPN network merges the low-level feature and the high-level feature: (1) a bottom-up feature extraction network, for example, the ResNet50 network, is used for extracting features; and (2) in a top-down feature enhancement network, the feature of the current layer extracted by ResNet50 upon 1×1 convolution dimension reduction and the high-level feature upsampled by two times are directly added for feature merging. Since the low-level feature is quite helpful for detecting small objects, the objects can be better detected after merging the low-level feature and the high-level feature. Although low-level feature semantic information is relatively little, the object position is accurate; although high-level feature semantic information is relatively rich, the object position is relatively rough; and the merged feature is adopted for prediction, so as to capture multi-scale object information. In addition, due to the particularity of the region of interest in the medical image (such as the target organ region and the suspected lesion region), it is required to determine whether it is a region of interest according to the surrounding information, and the embodiment of this application further adds hierarchically dilated convolutions (HDC) operations in the FPN structure (for example, HDC1, HDC2, and HDC3 inFIG.3, but this application is not limited thereto; and the number of the HDCs depends on the specific application scenario) to obtain different sizes of information surrounding the feature map, so as to help more accurate lesion detection; and each HDC structure is as shown, for example, inFIG.4below. FIG.4schematically shows a schematic structural diagram of a feature merging network and a hierarchically dilated convolution network according to an embodiment of this application. As shown inFIG.4, first to fifth convolution layers of the ResNet50 form a bottom-up route, to generate first to fifth basic feature maps ((11) to (15) inFIG.4) of each slice of the to-be-detected medical image, and the network further includes a top-down route.
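For illustration, the repeated block of the Conv2_x stage in Table 1 (1 × 1, 64 → 3 × 3, 64 → 1 × 1, 256 with a residual connection, each convolution followed by BN and ReLU as stated above) can be sketched in PyTorch as follows. The projection shortcut used when the input and output channel counts differ is an assumption for the first block of a stage, not a detail stated in this disclosure.

```python
import torch
import torch.nn as nn

class Bottleneck(nn.Module):
    def __init__(self, in_ch, mid_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, padding=1, bias=False), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),
        )
        # Project the shortcut with a 1x1 convolution when channel counts differ (assumption).
        self.proj = nn.Conv2d(in_ch, out_ch, 1, bias=False) if in_ch != out_ch else nn.Identity()
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.proj(x))      # residual connection

block = Bottleneck(in_ch=64, mid_ch=64, out_ch=256)         # one block of the Conv2_x stage
out = block(torch.randn(1, 64, 128, 128))                   # output shape (1, 256, 128, 128)
```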
A lateral connection exists between the bottom-up route and the top-down route; the main function of lateral connection of 1*1 convolution kernels herein is to reduce the number of convolution kernels, that is, to reduce the number of feature maps, without changing the size of the feature map. Bottom-up is a forward process of the network. In the forward process, the size of the feature map may change after passing through some layers, but may not change after passing through some other layers. Layers with the size of the feature map unchanged are classified as a stage; therefore, the feature extracted each time is an output of the last layer of each stage, so that a feature pyramid is constituted. The top-down process is executed using upsampling, while the lateral connection is merging the upsampling result with the feature maps with the same size generated in the bottom-up process. Upon merging, 3*3 convolution kernels may further be adopted to perform convolution on each merging result (not shown inFIG.4), with the purpose of eliminating an aliasing effect of upsampling. It is assumed here that the generated feature map result is that a first enhanced feature map (21), a second enhanced feature map (22), and a third enhanced feature map (23) respectively have one-to-one correspondence to a fifth basic feature map (15), a third basic feature map (13), and a first basic feature map (11) originally from a bottom-up convolution result. Still referring toFIG.4, taking a first slice as an example, the processing modes for other slices are similar to the processing mode of the first slice. Upon passing through a first dilated convolution layer, a second dilated convolution layer, and a third dilated convolution layer of the HDC1, the first enhanced feature map (21) of the first slice separately forms a first dilated feature map (31), a second dilated feature map (32), and a third dilated feature map (33); upon passing through a common convolution layer (for example, 1×1 convolution) of the HDC1, the first enhanced feature map (21) of the first slice further generates a first convolution feature map (34); after concatenating the first to third dilated feature maps of the first slice to the first convolution feature map, a first concatenated feature map (41) is generated to obtain the weights respectively allocated to the first to third dilated convolution layers and the common convolution layer by the HDC1; the corresponding weights are respectively multiplied by the first to third dilated feature maps of the first slice and the first convolution feature map, and then accumulated to obtain a first accumulation feature map (51); for example, assume that respective weights of the first to third dilated convolution layers and the common convolution layer of the HDC1 are respectively a1 to a4, the first accumulation feature map (51)=a1×first dilated feature map (31)+a2×second dilated feature map (32)+a3×third dilated feature map (33)+a4×first convolution feature map (34); then vector addition between the first accumulation feature map (51) and the first enhanced feature map (21) is executed; and further passing through one 1×1 convolution, dimension reduction is executed to obtain a first superposed feature map (61) to reach the purpose of reducing parameters. 
Similarly, upon passing through a first dilated convolution layer, a second dilated convolution layer, and a third dilated convolution layer of the HDC2, the second enhanced feature map (22) of the first slice separately forms a fifth dilated feature map (35), a sixth dilated feature map (36), and a seventh dilated feature map (37); upon passing through a common convolution layer (for example, 1×1 convolution) of the HDC2, the second enhanced feature map (22) of the first slice further generates a second convolution feature map (38); after concatenating the fifth dilated feature map (35), the sixth dilated feature map (36), and the seventh dilated feature map (37) of the first slice to the second convolution feature map (38), a second concatenated feature map (42) is generated to obtain the weights respectively allocated to the first to third dilated convolution layers and the common convolution layer by the HDC2; the corresponding weights are respectively multiplied by the fifth dilated feature map (35), the sixth dilated feature map (36), and the seventh dilated feature map (37) of the first slice to the second convolution feature map (38), and then accumulated to obtain a second accumulation feature map (52); for example, assume that respective weights of the first to third dilated convolution layers and the common convolution layer of the HDC2 are respectively b1 to b4, the second accumulation feature map (52)=b1×fifth dilated feature map (35)+b2×sixth dilated feature map (36)+b3×seventh dilated feature map (37)+b4×second convolution feature map (38); then addition between the second accumulation feature map (52) and the second enhanced feature map (22) is executed; further passing through one 1×1 convolution, dimension reduction is executed to obtain a second superposed feature map (62) to reach the purpose of reducing parameters. 
Upon passing through a first dilated convolution layer, a second dilated convolution layer, and a third dilated convolution layer of the HDC3, the third enhanced feature map (23) of the first slice separately forms a ninth dilated feature map (39), a tenth dilated feature map (310), and an eleventh dilated feature map (311); upon passing through a common convolution layer (for example, 1×1 convolution) of the HDC3, the third enhanced feature map (23) of the first slice further generates a third convolution feature map (312); after concatenating the ninth dilated feature map (39), the tenth dilated feature map (310), and the eleventh dilated feature map (311) of the first slice to the third convolution feature map (312), a third concatenated feature map (43) is generated to obtain the weights respectively allocated to the first to third dilated convolution layers and the common convolution layer by the HDC3; the corresponding weights are respectively multiplied by the ninth dilated feature map (39), the tenth dilated feature map (310), and the eleventh dilated feature map (311) of the first slice and the third convolution feature map (312), and then accumulated to obtain a third accumulation feature map (53); for example, assuming that the respective weights of the first to third dilated convolution layers and the common convolution layer of the HDC3 are c1 to c4, the third accumulation feature map (53)=c1×ninth dilated feature map (39)+c2×tenth dilated feature map (310)+c3×eleventh dilated feature map (311)+c4×third convolution feature map (312); then vector addition between the third accumulation feature map (53) and the third enhanced feature map (23) is executed; and, after passing through one 1×1 convolution for dimension reduction, a third superposed feature map (63) is obtained, which reduces the number of parameters. FIG. 5 schematically shows a structural diagram of a hierarchically dilated convolution network according to an embodiment of this application. The "share weight" in FIG. 5 indicates that the first to third dilated convolution layers share convolution kernel parameters. FIG. 5 provides an example of a hierarchically dilated convolution structure. In the embodiment of this application, it is assumed that the hierarchically dilated convolution structures at the first to third stages are the same, and therefore only one of them is used as an example herein. After merging the low-level feature and the high-level feature, a depth feature map is obtained; assume, for example, that it is the first enhanced feature map (21) of the first slice (the processing of the other enhanced feature maps is similar). The first enhanced feature map (21) passes through one 1×1 common convolution layer and three 3×3 dilated convolution layers (i.e., the first to third dilated convolution layers). Dilated convolution enlarges the receptive field by dilating the convolution kernel, and the receptive field grows exponentially. Dilated convolution does not increase the number of parameters; the inserted (dilation) positions are given a weight of 0 and require no training. The receptive fields of the different dilated convolution layers here differ, so as to capture information at different scales of the slice in the CT image.
Then the four results (for example, the first dilated feature map (31), the second dilated feature map (32), the third dilated feature map (33), and the first convolution feature map (34)) are concatenated to obtain a new concatenated feature map (for example, a first concatenated feature map (41)); this new concatenated feature map includes surrounding information from three different receptive fields. So-called dilated convolution injects dilations (i.e., zeros) into the convolution kernel; the number of injected dilations is determined by the dilation parameter (abbreviated as d in the drawings). For example, when d=1, the receptive field of the convolution kernel is 3×3; when d=2, the receptive field is 7×7; and when d=3, the receptive field is 11×11. Different receptive fields are of different importance to the detection of regions of interest, and the receptive field required by a small object differs from that required by a large object. Hence, a Squeeze and Excitation (SE) module is used for automatically learning a corresponding weight; the importance of different receptive fields to different objects may be learnt through the SE module. Finally, dimension reduction is performed by one 1×1 convolution to reduce the number of parameters. Upon the operations above, vector addition is performed on the first superposed feature map of the first slice, the first superposed feature map of the second slice, and the first superposed feature map of the third slice to obtain a new first depth feature map; vector addition is performed on the second superposed feature map of the first slice, the second superposed feature map of the second slice, and the second superposed feature map of the third slice to obtain a new second depth feature map; and vector addition is performed on the third superposed feature map of the first slice, the third superposed feature map of the second slice, and the third superposed feature map of the third slice to obtain a new third depth feature map. The new first to third depth feature maps are then inputted into the RPN network for preliminary classification, and then enter the RCNN network for final prediction, to obtain the final lesion position information and confidence score. In the method provided by the embodiment of this application, for a to-be-detected medical image such as a CT image, its 3D information may be used by inputting a plurality of adjacent slices into the deep neural network for detecting a region of interest; upon ROI pooling, the information of the plurality of slices is merged to obtain a new feature map, from which the position information of the region of interest is further predicted, i.e., the 3D information of the CT image is used to improve the reliability of the prediction result. Applying computed tomography to a part of the human body may produce a 3D image. In addition, during the training and prediction phases of the model, only three slices of one CT image may be inputted, which neither increases the amount of computation nor introduces excess redundant information. Moreover, the method above further considers the multi-scale problem in CT image lesion detection, i.e., the scales of different regions of interest differ greatly, ranging, for example, from 1 mm to 500 mm.
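The cross-slice fusion described above, in which the level-wise superposed feature maps of the first to third slices are vector-added to form the first to third depth feature maps, reduces to a per-level sum. A minimal sketch, assuming the superposed maps are held as nested lists of tensors (the shapes are placeholders):

```python
import torch

def fuse_slices(superposed_per_slice):
    """Vector-add the level-wise superposed feature maps of adjacent slices.

    `superposed_per_slice` is a list with one entry per slice, each entry holding that
    slice's first to third superposed feature maps; the result is the list of depth
    feature maps that are passed on to the RPN.
    """
    num_levels = len(superposed_per_slice[0])
    return [sum(s[level] for s in superposed_per_slice) for level in range(num_levels)]

# Example with dummy tensors: three slices, three levels each.
slices = [[torch.randn(1, 256, 2 ** k, 2 ** k) for k in (4, 5, 6)] for _ in range(3)]
depth_maps = fuse_slices(slices)
print([d.shape for d in depth_maps])  # one fused depth feature map per level
```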
It is apparent that, for a CT image in which large and small objects must be detected simultaneously, the deep neural network provided by the embodiment of this application has a more sensitive information extraction capability. In the embodiment of this application, the deep neural network is trained in advance using the training dataset. During parameter initialization, the first to fifth convolution layers of the ResNet50 may adopt the parameters of a ResNet50 trained in advance on the ImageNet dataset, and the newly added layers may be initialized from a Gaussian distribution with a variance of 0.01 and a mean of 0. In the embodiment of this application, during model training, in the RPN network an anchor whose IoU value with the gt_BBoxes is greater than 0.5 is used as a positive sample and an anchor whose IoU value is less than 0.3 is used as a negative sample, and the sampling number is 48. For the RCNN network, a proposal whose IoU value with the gt_BBoxes is greater than 0.5 is used as a positive sample and a proposal whose IoU value is less than 0.4 is used as a negative sample, and the sampling number is 48. In the embodiment of this application, the loss function may be divided into two parts: the first part is a classification loss for the object in each bounding box, for which a cross-entropy loss function is adopted; the other part is a regression loss for each bounding box position, for which a smooth L1 loss function is adopted. In the embodiment of this application, a stochastic gradient descent (SGD) based method may be adopted to solve for the convolution kernel parameters w and the bias parameters b of the neural network model; in each iteration, the prediction error is calculated and backpropagated through the convolutional neural network model, the gradient is calculated, and the parameters of the convolutional neural network model are updated. FIG. 6 schematically shows a diagram of a deep learning based medical image detection method according to an embodiment of this application. FIG. 6 shows the usage flow of the method provided by the embodiment of this application: when a front end A (which may be, for example, a medical image acquisition device) acquires image data, such as a CT image, a plurality of CT slices of the CT image may be uploaded to a rear end; the rear end uses the deep learning based medical image detection method provided by the embodiment above to obtain a region of a suspected lesion and a corresponding confidence score as diagnosis information to be outputted to a front end B (for example, a doctor client). FIG. 7 schematically shows the detection effect of a deep learning based medical image detection method provided by an embodiment of this application. As shown in FIG. 7, the CT image shown in (a) is inputted into the deep neural network of the embodiment of this application, and a detection result as shown in (b) is outputted.
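The training configuration described above (Gaussian initialization with mean 0 for newly added layers, a cross-entropy classification loss plus a smooth L1 box-regression loss, and SGD-based optimization) might be set up roughly as below. The learning rate, momentum, weight decay, and the reading of "variance of 0.01" as a standard deviation are assumptions for illustration, not values taken from this application.

```python
import torch
import torch.nn as nn

def init_new_layer(module):
    """Gaussian initialization for newly added layers (mean 0; see note on variance below)."""
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        # The text specifies a "variance of 0.01"; std=0.01 is a common detector convention.
        # Use std=0.1 instead if the variance is meant literally.
        nn.init.normal_(module.weight, mean=0.0, std=0.01)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

cls_loss_fn = nn.CrossEntropyLoss()   # classification loss for the object in each bounding box
reg_loss_fn = nn.SmoothL1Loss()       # regression loss for each bounding box position

def detection_loss(cls_logits, cls_targets, box_preds, box_targets, reg_weight=1.0):
    """Total loss = classification loss + weighted box-regression loss (sketch)."""
    return cls_loss_fn(cls_logits, cls_targets) + reg_weight * reg_loss_fn(box_preds, box_targets)

# SGD-based optimization of the network parameters (hyperparameters are assumptions).
model = nn.Conv2d(3, 8, 3)            # placeholder for the full detection network
model.apply(init_new_layer)
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9, weight_decay=1e-4)
```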
In the deep learning based medical image detection method provided by the implementation of this application, the deep neural network may adopt an improved Feature Pyramid Network (FPN) network; on one hand, the capability of the network for capturing multi-scale information is strengthened, so as to enhance the detection capability of the network for the regions of interest of different scales; on the other hand, as compared with the related technologies, under the condition of similar detectable accuracy rate of the region of interest, the technical solution provided by the embodiment of this application only uses, during the deep neural network training phase, a slice with annotation information and two slices adjacent thereto in the up-and-down direction, that is, a model with a relatively high detectable rate can be trained using a total of three slices in each medical image, and the 3D information in the medical image can be used, without bringing excess redundant information, so as to reduce a data processing amount of the training process and the prediction phase, improve the computing processing rate and efficiency, and facilitate faster detection of the position of the region of interest and the confidence score thereof in the medical image. Moreover, the deep learning based medical image detection method may be applied to multi-scale CT image detection for assisting a doctor in detecting a suspected lesion region in the CT image; it can be arranged to hospitals in different sizes, community rehabilitation centers, and the like, assisting a doctor in shortening diagnosis time, reducing workload of the doctor, and improving the working efficiency of the doctor. Other contents and specific implementations in the embodiment of this application may refer to the embodiment above, and will not be repeated herein. FIG.8schematically shows a block diagram of a deep learning based medical image detection apparatus according to an embodiment of this application. The deep learning based medical image detection apparatus provided by the embodiment of this application may be disposed in any electronic device having a computing processing capability, such as a terminal device, a server, a server cluster, and a cloud server, which is not limited by this application; and in the exemplary explanations, it is described by taking the apparatus according to the embodiment of this application being disposed in a cloud server as an example for execution. As shown inFIG.8, the deep learning based medical image detection apparatus800provided by the embodiment of this application may include an image acquisition module810, a feature extraction module820, a feature merging module830, a dilated convolution module840, and a region-of-interest prediction module850. The image acquisition module810is configured to acquire a to-be-detected medical image, the to-be-detected medical image including a plurality of slices. The feature extraction module820is configured to extract, for each slice in the to-be-detected medical image, N basic feature maps of the slice through a deep neural network, N being an integer greater than 1. The feature merging module830is configured to perform, for each slice in the to-be-detected medical image, feature merge on the N basic feature maps of the slice through the deep neural network, to obtain M enhanced feature maps of the slice, M being an integer greater than 1. 
The dilated convolution module840is configured to respectively perform, for each slice in the to-be-detected medical image, a hierarchically dilated convolutions operation on the M enhanced feature maps of the slice through the deep neural network, to generate a superposed feature map of each enhanced feature map of the slice. The region-of-interest prediction module850is configured to predict position information of a region of interest and a confidence score thereof in the to-be-detected medical image through the deep neural network based on the superposed feature map of each slice in the to-be-detected medical image. In the exemplary embodiment, the N basic feature maps include A low-level feature maps and B high-level feature maps, both A and B being integers greater than 1. The feature merging module830is configured to:perform convolution processing on an ithlow-level feature map of the slice; upsample a jthhigh-level feature map of the slice; and add a feature map obtained by the convolution processing on the ithlow-level feature map and a feature map obtained by the upsampling of the jthhigh-level feature map, to obtain a kthenhanced feature map of the slice, where 1≤i<A, 1<j≤B, 1<k≤M, and i, j, and k are all integers. In the exemplary embodiment, the N basic feature maps include A low-level feature maps and B high-level feature maps, both A and B being integers greater than 1; and the feature merging module830is configured to:perform convolution processing on an Athlow-level feature map of the slice, to obtain a first high-level feature map of the slice as a first enhanced feature map. In the exemplary embodiment, A=3, B=3, and M=3. The feature merging module830is configured to:perform convolution processing on a third low-level feature map of the slice, to obtain a first high-level feature map of the slice as a first enhanced feature map;perform convolution processing on a second low-level feature map of the slice; upsample the first high-level feature map of the slice; and add a feature map obtained by the convolution processing on the second low-level feature map and a feature map obtained by the upsampling of the first high-level feature map, to obtain a second high-level feature map of the slice as a second enhanced feature map; andperform convolution processing on a first low-level feature map of the slice; upsample the second high-level feature map of the slice; and add a feature map obtained by the convolution processing on the first low-level feature map and a feature map obtained by the upsampling of the second high-level feature map, to obtain a third high-level feature map of the slice as a third enhanced feature map. 
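For the A=3, B=3, M=3 case just described, the feature merging module reduces to one lateral 1×1 convolution per low-level map plus repeated two-times upsampling and addition. A minimal PyTorch sketch follows; the channel counts, output width, and nearest-neighbour upsampling are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureMerge(nn.Module):
    """Top-down feature merging for the A=3, B=3, M=3 case described above (sketch)."""
    def __init__(self, low_channels=(512, 1024, 2048), out_ch=256):
        super().__init__()
        # One 1x1 lateral convolution per low-level feature map (channel counts assumed).
        self.lateral = nn.ModuleList([nn.Conv2d(c, out_ch, kernel_size=1) for c in low_channels])

    def forward(self, low1, low2, low3):
        # Third low-level map -> first high-level feature, used as the first enhanced map.
        e1 = self.lateral[2](low3)
        # Second low-level map + 2x upsampled first high-level feature -> second enhanced map.
        e2 = self.lateral[1](low2) + F.interpolate(e1, scale_factor=2, mode="nearest")
        # First low-level map + 2x upsampled second high-level feature -> third enhanced map.
        e3 = self.lateral[0](low1) + F.interpolate(e2, scale_factor=2, mode="nearest")
        return e1, e2, e3

low1, low2, low3 = (torch.randn(1, 512, 64, 64), torch.randn(1, 1024, 32, 32),
                    torch.randn(1, 2048, 16, 16))
for e in FeatureMerge()(low1, low2, low3):
    print(e.shape)  # 16x16, 32x32, and 64x64 maps with 256 channels each
```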
In the exemplary embodiment, the dilated convolution module840may include:a dilated feature obtaining unit, configured to respectively process, for each enhanced feature map in the M enhanced feature maps, the enhanced feature map through K dilated convolution layers, to obtain K dilated feature maps of the enhanced feature map, K being an integer greater than 1;a convolution feature obtaining unit, configured to process, for each enhanced feature map in the M enhanced feature maps, the enhanced feature map through common convolution layers, to obtain convolution feature maps of the enhanced feature map; and a superposed feature obtaining unit, configured to obtain, for each enhanced feature map in the M enhanced feature maps, the superposed feature map of the enhanced feature map based on the K dilated feature maps and the convolution feature maps of the enhanced feature map. In the exemplary embodiment, the superposed feature obtaining unit is configured to:concatenate the K dilated feature maps and the convolution feature maps of the enhanced feature map to obtain a concatenated feature map of the enhanced feature map;obtain respective weights of the K dilated convolution layers and the common convolution layers based on the concatenated feature map of the enhanced feature map; andobtain the superposed feature map of the enhanced feature map based on the enhanced feature map, the K dilated feature maps and convolution feature maps, and the respective weights of the K dilated convolution layers and common convolution layers. In the exemplary embodiment, receptive fields of the K dilated convolution layers are different. In the exemplary embodiment, the K dilated convolution layers share convolution kernel parameters. In the exemplary embodiment, the region-of-interest prediction module840may include a preliminary classification unit and a region-of-interest predicting unit. The preliminary classification unit is configured to process the superposed feature map of each slice in the to-be-detected medical image, to obtain position information of an initial region of interest and an initial confidence score thereof in the to-be-detected medical image; andthe region-of-interest predicting unit is configured to process the position information of the initial region of interest and the initial confidence score thereof, to obtain the position information of the region of interest and the confidence score thereof in the to-be-detected medical image. In the exemplary embodiment, the preliminary classification unit is configured to:obtain a dthdepth feature map based on a dthsuperposed feature map of each slice in the to-be-detected medical image, d being an integer greater than or equal to 1 and less than M; andpreliminarily classify M depth feature maps, to obtain the position information of the initial region of interest and the initial confidence score thereof in the to-be-detected medical image. 
In the exemplary embodiment, the deep learning based medical image detection apparatus800may further include:a training set acquisition module, configured to acquire a training dataset, the training dataset including a medical image annotated with the position information of the region of interest and the confidence score thereof;a slice acquisition module, configured to acquire a slice annotated with the position information of the region of interest and the confidence score thereof in the medical image and two slices adjacent thereto in an up-and-down direction; anda model training module, configured to train the deep neural network using the slice annotated with the position information of the region of interest and the confidence score thereof in the medical image and the two slices adjacent thereto in the up-and-down direction. In the exemplary embodiment, the to-be-detected medical image may include a CT image. Since each functional module of the deep learning based medical image detection apparatus800of the exemplary embodiment of this application corresponds to each step of the exemplary embodiment of the deep learning based medical image detection method above, and the details are not repeated here. In the exemplary embodiment of this application, also provided is an electronic device capable of implementing the method above. FIG.9shows a schematic structural diagram of a computer system adapted to implement an electronic device according to an embodiment of this application. The computer system of the electronic device shown inFIG.9is merely an example, and does not constitute any limitation on functions and use ranges of the embodiments of this application. As shown inFIG.9, the computer system includes a central processing unit (CPU)901, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM)902or a program loaded into a random access memory (RAM)903from a storage part908. The RAM903further stores various programs and data required for system operations. The CPU901, the ROM902, and the RAM903are connected to each other through a bus904. An input/output (I/O) interface905is also connected to the bus904. The following components are connected to the I/O interface905: an input part906including a keyboard, a mouse, or the like; an output part907including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, or the like; a storage part908including a hard disk or the like; and a communication part909of a network interface card, including a LAN card, a modem, or the like. The communication part909performs communication processing by using a network such as the Internet. A driver910is also connected to the I/O interface905as required. A removable medium911, such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory, is installed on the drive910as required, so that a computer program read from the removable medium is installed into the storage part908as required. Particularly, according to an embodiment of this application, the processes described in the following by referring to the flowcharts may be implemented as computer software programs. For example, this embodiment of this application includes a computer program product, the computer program product includes a computer program carried on a computer-readable medium, and the computer program includes program code used for performing the methods shown in the flowcharts. 
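The slice acquisition used for training, namely the slice annotated with the region of interest plus its two neighbours in the up-and-down direction, can be sketched with NumPy as below; clamping at the volume boundary is an assumption, since the text does not specify edge handling.

```python
import numpy as np

def adjacent_slice_triplet(volume, idx):
    """Return the annotated slice `idx` and its two neighbours as a 3-channel array.

    `volume` is a CT volume shaped (num_slices, height, width); indices at the
    volume boundary are clamped (an assumption; other padding schemes are possible).
    """
    lo = max(idx - 1, 0)
    hi = min(idx + 1, volume.shape[0] - 1)
    return np.stack([volume[lo], volume[idx], volume[hi]], axis=0)

# Example: a dummy 512x512 CT volume with 100 slices, annotation on slice 40.
ct = np.random.rand(100, 512, 512).astype(np.float32)
triplet = adjacent_slice_triplet(ct, 40)
print(triplet.shape)  # (3, 512, 512)
```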
In such an embodiment, the computer program may be downloaded and installed from a network through the communication part 909, and/or installed from the removable medium 911. When the computer program is executed by the central processing unit (CPU) 901, the above functions defined in the system of this application are performed. The computer-readable medium shown in this application may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductive system, apparatus, or component, or any combination of the above. A more specific example of the computer-readable storage medium may include but is not limited to: an electrical connection having one or more wires, a portable computer magnetic disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or a flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination thereof. In this application, the computer-readable storage medium may be any tangible medium containing or storing a program, and the program may be used by or used in combination with an instruction execution system, apparatus, or device. In this application, a computer-readable signal medium may include a data signal in a baseband or propagated as a part of a carrier wave, the data signal carrying computer-readable program code. A data signal propagated in such a way may assume a plurality of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any appropriate combination thereof. The computer-readable signal medium may further be any computer-readable medium other than a computer-readable storage medium; the computer-readable medium may send, propagate, or transmit a program that is used by or used in conjunction with an instruction execution system, apparatus, or device. The program code included in the computer-readable medium may be transmitted by using any suitable medium, including but not limited to wireless transmission, a wire, a cable, radio frequency (RF), or the like, or any suitable combination thereof. The flowcharts and block diagrams in the accompanying drawings illustrate possible system architectures, functions, and operations that may be implemented by a system, a method, and a computer program product according to various embodiments of this application. In this regard, each box in a flowchart or a block diagram may represent a module, a program segment, or a part of code; the module, the program segment, or the part of code includes one or more executable instructions used for implementing designated logic functions. In some alternative implementations, the functions annotated in the boxes may occur in a sequence different from that annotated in the accompanying drawing. For example, two boxes shown in succession may actually be performed substantially in parallel, and sometimes the two boxes may be performed in the reverse order, depending on the functions involved.
Each box in a block diagram and/or a flowchart and a combination of boxes in the block diagram and/or the flowchart may be implemented by using a dedicated hardware-based system configured to perform a specified function or operation, or may be implemented by using a combination of dedicated hardware and a computer instruction. Related modules or units described in the embodiments of this application may be implemented in a software manner, or may be implemented in a hardware manner, and the module or the unit described can also be set in a processor. Names of these modules or units do not constitute a limitation on the modules or the units in a case. According to another aspect, this application further provides a computer-readable medium. The computer-readable medium may be included in the electronic device described in the foregoing embodiments, or may exist alone and is not disposed in the electronic device. The computer-readable medium carries one or more programs, the one or more programs, when executed by the electronic device, causing the electronic device to implement the deep learning based medical image detection method according to the embodiments above. For example, the electronic device may implement the following as shown inFIG.1: step S110: acquiring a to-be-detected medical image, the to-be-detected medical image including a plurality of slices; step S120: extracting, for each slice in the to-be-detected medical image, N basic feature maps of the slice by a deep neural network, N being an integer greater than 1; step S130: merging, for each slice in the to-be-detected medical image, features of the N basic feature maps of the slice by the deep neural network, to obtain M enhanced feature maps of the slice, M being an integer greater than 1; step S140: respectively performing, for each slice in the to-be-detected medical image, a hierarchically dilated convolutions operation on each enhanced feature map by the deep neural network, to generate a superposed feature map of each enhanced feature map of the slice; and step S150: predicting position information of a region of interest and a confidence score thereof in the to-be-detected medical image by the deep neural network based on the superposed feature map of each slice in the to-be-detected medical image. Although several modules or units of a device or an apparatus for action execution are mentioned in the foregoing detailed descriptions, the division is not mandatory. Actually, according to the implementations of this application, the features and functions of two or more modules or units described above may be specifically implemented in one module or unit. On the contrary, the features and functions of one module or unit described above may be further divided to be embodied by a plurality of modules or units. According to the foregoing descriptions of the implementations, a person skilled in the art may readily understand that the exemplary implementations described herein may be implemented by using software, or may be implemented by combining software and necessary hardware. Therefore, the technical solutions of the implementations of this application may be implemented in a form of a software product. 
The software product may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, or the like) or on a network, and includes several instructions for instructing a computing device (which may be a personal computer, a server, a touch terminal, network device, or the like) to perform the methods according to the implementations of this application. Other embodiments of this application are apparent to a person skilled in the art from consideration of the specification and practice of this application here. This application is intended to cover any variations, uses or adaptive changes of this application. Such variations, uses or adaptive changes follow the general principles of this application, and include well-known knowledge and conventional technical means in the art that are not disclosed in this application. The specification and the embodiments are considered as merely exemplary, and the scope and spirit of this application are pointed out in the following claims. It is to be understood that this application is not limited to the precise structures described above and shown in the accompanying drawings, and various modifications and changes can be made without departing from the scope of this application. The scope of this application is subject only to the appended claims. | 65,705 |
11861830 | It is to be understood that the figures are not necessarily drawn to scale, nor are the objects in the figures necessarily drawn to scale in relationship to one another. The figures are depictions that are intended to bring clarity and understanding to various embodiments of apparatuses, systems, and methods disclosed herein. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. Moreover, it should be appreciated that the drawings are not intended to limit the scope of the present teachings in any way. DETAILED DESCRIPTION Provided herein is technology relating to analysis of images and particularly, but not exclusively, to methods and systems for determining the volume of a region of interest using optical coherence tomography data. In this detailed description of the various embodiments, for purposes of explanation, numerous specific details are set forth to provide a thorough understanding of the embodiments disclosed. One skilled in the art will appreciate, however, that these various embodiments may be practiced with or without these specific details. In other instances, structures and devices are shown in block diagram form. Furthermore, one skilled in the art can readily appreciate that the specific sequences in which methods are presented and performed are illustrative and it is contemplated that the sequences can be varied and still remain within the spirit and scope of the various embodiments disclosed herein. The section headings used herein are for organizational purposes only and are not to be construed as limiting the described subject matter in any way. All literature and similar materials cited in this application, including but not limited to, patents, patent applications, articles, books, treatises, and internet web pages are expressly incorporated by reference in their entirety for any purpose. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as is commonly understood by one of ordinary skill in the art to which the various embodiments described herein belongs. When definitions of terms in incorporated references appear to differ from the definitions provided in the present teachings, the definition provided in the present teachings shall control. Definitions To facilitate an understanding of the present technology, a number of terms and phrases are defined below. Additional definitions are set forth throughout the detailed description. Throughout the specification and claims, the following terms take the meanings explicitly associated herein, unless the context clearly dictates otherwise. The phrase “in one embodiment” as used herein does not necessarily refer to the same embodiment, though it may. Furthermore, the phrase “in another embodiment” as used herein does not necessarily refer to a different embodiment, although it may. Thus, as described below, various embodiments of the invention may be readily combined, without departing from the scope or spirit of the invention. In addition, as used herein, the term “or” is an inclusive “or” operator and is equivalent to the term “and/or” unless the context clearly dictates otherwise. The term “based on” is not exclusive and allows for being based on additional factors not described, unless the context clearly dictates otherwise. In addition, throughout the specification, the meaning of “a”, “an”, and “the” include plural references. 
The meaning of “in” includes “in” and “on.” As used herein, “optical coherence tomography” or “OCT” refers to a medical imaging technique that uses light to capture micrometer-resolution, three-dimensional images from within optical scattering media (e.g., biological tissue). Optical coherence tomography is based on low-coherence interferometry, typically employing near-infrared light. The use of relatively long wavelength light allows it to penetrate into the scattering medium. As used herein, an axis extending from an OCT apparatus and the sample (e.g., tissue) under examination is the z-axis. Planes normal to the z-axis are x-y planes. See, e.g.,FIG.1. As used herein, an “A-scan” is an amplitude modulation scan that provides one-dimensional information in the direction of the z-axis, e.g., an axial depth scan. For example, in some embodiments an A-scan is used to determine the length of a tissue, tissue segment, tissue feature, etc. in the direction of (or substantially, essentially, or approximately along) the z-axis or to determine the location of a tissue segment or tissue feature along a path in the direction of the z-axis. See, e.g.,FIG.1. As used herein, a “B-scan” is a two-dimensional, cross-sectional or “profile” view of the sample under examination, e.g., a two-dimensional scan in the x-z or y-z planes. The two-dimensional cross-sectional B-scan may be produced by laterally combining a series of axial depth A-scans. See, e.g.,FIG.1. As used herein, a “C-scan” is a two-dimensional, cross-sectional or “plan” view of the sample under examination, e.g., a two-dimensional scan in the x-y plane. See, e.g.,FIG.1. As used herein, the term “image segmentation” or “segmentation” refers to a digital method of dividing image data into regions that may consist of a pixel area that is homogeneous in terms of certain characteristics, or of an area that groups pixels corresponding to an object that is visualized in the image. In this way, multiple layers or image fragments may be created, for example, to represent tissue layers or regions of a tissue that have similar characteristics. Accordingly, segmentation refers to the process of partitioning a digital image into multiple regions (e.g., sets of pixels). In some embodiments, the goal of segmentation is to simplify and change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. The result of image segmentation is a set of regions that collectively cover the entire image, or a set of contours extracted from the image. In some embodiments, the segments correspond to biological features (e.g., tissues, tissue layers, etc.). However, the technology is not limited to segments that correspond to biological features and, in various embodiments, the segments correspond to any division of the image appropriate for the methods, technology, analysis, etc. desired by the user. Methods for finding and segmenting a desired tissue layer or boundary surface are well-known in the art. See, e.g., Ishikawa et al. (2005) “Macular Segmentation with Optical Coherence Tomography”Invest Ophthalmol Vis Sci46: 2012, incorporated herein by reference in its entirety. A “system” denotes a set of components, real or abstract, comprising a whole where each component interacts with or is related to at least one other component within the whole. 
As used herein, a “region of interest” refers to a region (e.g., portion, sub-sample, sub-volume, etc.) of an image and/or of a sample (e.g., a tissue) that is assessed by the methods provided herein. In particular embodiments, the “region of interest” refers to a tissue abnormality, lesion, or other feature of a tissue that is subjected to the metric analysis (e.g., measurement of an area; measurement of a volume) provided herein. As used herein, an “increase” or a “decrease” refers to a detectable (e.g., measured) positive or negative change in the value of a variable (e.g., a volume) relative to a previously measured value of the variable, relative to a pre-established value, and/or relative to a value of a standard control. An increase is a positive change relative to the previously measured value of the variable, the pre-established value, and/or the value of a standard control. Similarly, a decrease is a negative change relative to the previously measured value of the variable, the pre-established value, and/or the value of a standard control. Other terms indicating quantitative changes or differences, such as “more” or “less,” are used herein in the same fashion as described above. Description Optical coherence tomography (OCT) is a method of using interferometry to determine the echo time delay and magnitude of backscattered light reflected off an object of interest. OCT is similar in principle to ultrasound, but in OCT light is used instead of sound and interferometry is used to determine the time delay of reflected light. The original OCT method, known as TD-OCT, encoded the location of each reflection in the time information relating the position of a moving reference mirror to the location of the reflection. An advance in OCT was the use of light wavelengths instead of time delay to determine the spatial location of reflected light. Fourier transform analysis is used to provide a technology based in the spectral domain (SD-OCT) rather than in the time domain (TD-OCT). SD-OCT acquires all information in a single axial scan through the tissue simultaneously by evaluating the frequency spectrum of the interference between the reflected light and a stationary reference mirror. See, e.g., Wojtkowski et al. (2004) “Ophthalmic imaging by spectral optical coherence tomography” Am J Ophthalmol 138: 412-9; Wojtkowski et al. (2002) “In vivo human retinal imaging by Fourier domain optical coherence tomography” J Biomed Opt 7: 457-63; and Wojtkowski et al. (2003) “Real-time in vivo imaging by high-speed spectral optical coherence tomography” Opt Lett 28: 1745-47, each incorporated herein in its entirety by reference. SD-OCT is advantageous over TD-OCT because the interference pattern is split by a grating into its frequency components and all of these components are simultaneously detected by a charge-coupled device (CCD), thus making it faster. Further, data are acquired without mechanical movement of a scanning mirror as in TD-OCT. The SD-OCT technique significantly increases signal-to-noise ratio and increases the speed of data collection by a factor of 50 relative to TD-OCT. For example, a conventional time-domain OCT functions at 400 A-scan/s, while an SD-OCT system scans at 20,000 A-scan/s. Because of the increase in speed, a single cross-sectional scan of 1000 A-scans can be captured, processed, streamed to disk, and displayed in 60 ms (or 1/42 of the time required for a time-domain scan). 
Because of this speed, there is less movement of the subject during the SD-OCT scan and thus a more stable image is produced with a significant decrease in artifact of the image. Also because of this speed, a stack of 100 cross-sectional scans can be acquired in the time normally used to gather 6 low-resolution cross-sectional scans on a time-domain system. The image stack can be processed to produce a three dimensional representation of structures (see Wojtkowski et al. (2005) “Three-dimensional retinal imaging with high-speed ultrahigh-resolution optical coherence tomography” Ophthalmology 112: 1734-46, incorporated herein by reference). SD-OCT imaging thus frequently uses a series of scans. Focusing the light beam to a point on the surface of the sample under test, and recombining the reflected light with the reference will yield an interferogram with sample information corresponding to a single A-scan (along the z-axis). Scanning of the sample can be accomplished by either scanning the light on the sample, or by moving the sample under test. A linear scan will yield a two-dimensional data set corresponding to a cross-sectional image (e.g., in the x-z plane), whereas an area scan achieves a three-dimensional data set corresponding to a volumetric image (e.g., a volume in the x-y-z space), also called full-field OCT. Accordingly, a stack of B-scans can undergo further analysis and produce a three dimensional representation of structures. Furthermore, it is possible to collapse three-dimensional OCT volumes (e.g., along a z-axis (e.g., along the depth axis)) to a two-dimensional representative image along any plane of a 3D volume using algorithms to calculate a single representative pixel intensity for each line in the projection. One technique of obtaining such an “en face” picture with optical coherence tomograms is referred to as a summed voxel projection (SVP) (see, e.g., Jiao et al (2005) “Simultaneous acquisition of sectional and fundus ophthalmic images with spectral-domain optical coherence tomography” Optics Express 13: 444-452, incorporated herein by reference). Image registration and alignment is based on tissue structural features, e.g., to correct motion artifacts (see, e.g., Jorgensen et al (2007) “Enhancing the signal-to-noise ratio in ophthalmic optical coherence tomography by image registration-method and clinical examples” J Biomed Opt 12: 041208). For example, 3D data sets are presented with all pixels in each given axial scan summed to produce an OCT fundus image, which resembles a 2D photograph summing reflections from all tissue layers. The OCT fundus image can be used for image alignment or registration based on tissue features, such as blood vessel continuities or discontinuities. The 3D OCT can also be aligned or registered to a fundus photograph acquired simultaneously or nearly so. Automated or manual segmenting defines tissue layers in the SD-OCT data. Because of the unique optically clear pathway through the eye, OCT has been used for imaging disorders affecting the retina. In some current uses, obtaining and processing each of a series of 500×500-pixel images takes on the order of seconds and the technology can now acquire 3D data sets comprising several hundred scans of 200×200×1024 pixels in 2 seconds. In exemplary embodiments, this method is used to scan through the layers of a structured tissue sample such as the retina with very high axial resolution (3 to 15 μm), providing images demonstrating 3D structure. 
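The summed voxel projection mentioned above amounts to summing each axial (A-scan) line of the volume, producing an en face, fundus-like image. A minimal NumPy sketch follows; the (x, y, z) axis ordering is an assumption.

```python
import numpy as np

def summed_voxel_projection(oct_volume, depth_axis=2):
    """Collapse a 3D OCT volume to an en face, fundus-like image.

    `oct_volume` is assumed to be shaped (x, y, z), with z the depth (A-scan) axis;
    each pixel of the output is the sum of one A-scan.
    """
    return oct_volume.sum(axis=depth_axis)

# Example: a dummy 200 x 200 x 1024 volume collapses to a 200 x 200 en face image.
volume = np.random.rand(200, 200, 1024)
print(summed_voxel_projection(volume).shape)  # (200, 200)
```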
SD-OCT images show multiple tissue (e.g., retinal) layers of different reflectivity. These tissue layers are typically segmented using a computer algorithm and/or by manual tracing. When an abnormality occurs in a tissue (e.g., in the retina (e.g., a “retinal lesion”)), the layered structure of the tissue (e.g., retina) is altered, resulting in a thickening, thinning, or loss of tissue (e.g., retinal, RPE) layers at the corresponding location, which are imaged by SD-OCT. In some embodiments, the lesion is present in the image as a protrusion in one of the segmented features of the image. Thus, volumetric analysis of tissue abnormalities, lesions, etc. is desirable to evaluate, monitor, and treat the abnormalities, lesions, etc. Examples of OCT display image technologies are provided, e.g., by U.S. Pat. No. 8,944,597, incorporated herein by reference. See also U.S. Pat. No. 8,913,793 (incorporated herein by reference in its entirety), which relates to display of OCT images in various ways, including three-dimensional surface renderings, topographical contour maps, contour maps, en-face color maps, and en-face grayscale maps. Further, some embodiments related to retinal pathology provide clinicians with a cross-section of the pathology in the context of a map of the retina. For example, some embodiments provide a cross-section of a retinal abnormality presented in the context of a retinal thickness map. In some embodiments, two sequential scans of differing types (e.g., resolutions) are performed and simultaneously displayed, preferably on the same display. In some embodiments, the two display types are acquired using a single interaction with the user interface, say a single click or a single voice command. Paunescu et al. (“Reproducibility of nerve fiber thickness, macular thickness, and optic nerve head measurements using StratusOCT”Invest Ophthalmol Vis Sci45(6): 1716-24, incorporated herein by reference in its entirety) describe methods of capturing a fundus image nearly “simultaneously” with the OCT, showing the location of the OCT beam on the retina. “Simultaneity”, as used herein, simply means that data collection happens quickly enough that the side-by-side display of the two types of data are sufficiently synchronized that they present two views of the same object and structure. U.S. Pat. App. Pub. No. 2003/0199769 (incorporated herein by reference in its entirety), for example, suggests taking a Scanning Laser Ophthalmoscope (SLO) image point-by-point simultaneously with the OCT scan. This approach uses an additional imaging system consisting of a beam splitter and the SLO detector, and depends on hardware alignment between the OCT and SLO detectors. For the purpose of providing a fast fundus image, a Line Scanning Laser Ophthalmoscope (LSLO) is generally faster than the SLO and equally useful, as is the line-scan ophthalmoscope (ISO) of U.S. Patent Publication No. 2006/0228011, incorporated herein by reference in its entirety. Various embodiments are related to visualization of images, e.g., to provide output to a user and to convey results of image analysis methods as described herein. For example, some embodiments provide information useful for live-time decisions and/or planning of clinical treatments, for analysis of previous clinical treatments (stents, drugs, genes, etc.), for similar purposes in preclinical studies, etc. Automated segmentation results may be displayed in cross-sectional view or longitudinal view or en face view. 
In addition, images may be displayed in a three-dimensional view or a “fly-through” view. Different features may be displayed using different shading relative to one another or as different colors. Quantification results may be displayed in an image view and/or reported in tables or text. In some embodiments, surface and/or volume visualization techniques are used to provide views of the three-dimensional image data from any angle and, in some embodiments, with virtual lighting from any angle, in an interactive fashion. In some embodiments, such volumes are digitally sliced along any plane or arbitrary surface to create a reformatted two dimensional view. Software for visualization and analysis of biological image data include those sold under the trade names of ParaView, ScanImage, μManager, MicroPilot, ImageJ, Vaa3D, ilastik (which includes machine learning, e.g., to aid a user in identifying image features), CellProfiler, CellExplorer, BrainExplorer, Zen (Zeiss), Amira (VSG), Imaris (Bitplane), ImagePro (MediaCybernetics), Neurolucida (MBF Bioscience), LabVIEW (National Instruments), MATLAB (Mathworks), and Virtual Finger (see, e.g., Peng et al (2014)Nature Communications5: 4342). See also, Walter et al (2010)Nature Methods7: S26-S41; Eliceiri et al (2013)Nature Methods9: 697; and Long (2012) PLoS Computational Biology 9: e1002519, each incorporated herein in its entirety. Further, in some embodiments the technology incorporates an image analysis library such as VTK, ITK, OpenCV, or the Java ImgLib. Methods Provided herein are embodiments of methods for processing and analyzing OCT image data. In some embodiments, the methods provide one or more measurements (e.g., distance, area, and/or volume measurements; e.g., measurements in one, two, or three dimensions, and, in some embodiments, measurements in one, two, or three dimensions as a function of time). Accordingly, in some embodiments the methods provide a technology to monitor changes is the size, location, and/or shape of lesions of the retina, layers of the retina, subretinal tissue, and RPE. For example, particular embodiments relate to a method for determining the area and/or volume of a region of interest within a biological tissue using an image produced by optical coherence tomography. The method comprises producing, acquiring, analyzing, displaying, manipulating, etc. three-dimensional OCT data and producing, acquiring, analyzing, displaying, manipulating, etc. two-dimensional “fundus” OCT data. For example, the three-dimensional OCT data provide a three-dimensional image of the biological tissue comprising the region of interest and the two-dimensional OCT data are fundus image data of the biological tissue comprising the region of interest. In some preferred embodiments, the two-dimensional fundus data are associated with (e.g., registered with, linked to, etc.) the three-dimensional image of the biological tissue. In some embodiments, user interaction with the two-dimensional image data (e.g., analyzing, displaying, manipulating, etc. the two-dimensional image data) produces a linked, associated, coordinated interaction (e.g., analysis, display, manipulation, etc.) of the three-dimensional image data. For example, in some embodiments, methods comprise display of the two-dimensional fundus data and user interaction with the display of the two-dimensional fundus data. 
Then, in some embodiments, a user interacts with the two-dimensional fundus data—e.g., the user interacts with the display of the two-dimensional fundus data by use of an input device, e.g., a touch screen, mouse, track ball, etc. to provide a boundary around the region of interest and the user receives sensory feedback, e.g., the boundary is displayed superimposed on the two-dimensional fundus image data as the user interacts with the displayed image. Further, indication of the boundary around the region of interest in the two-dimensional fundus image provides an associated, coordinated boundary around the region of interest in the three-dimensional image data. In this way, the user, “draws” the boundary around the region of interest using the technology provided herein, e.g., using a combination of the OCT image data (e.g., the three-dimensional image data and associated two-dimensional fundus image data), an output device (e.g., display), an input device (e.g., a touch screen), and a computer configured to calculate the area and/or volume of a region of interest according to the methods and technologies described herein. In some embodiments, user interaction with the three-dimensional OCT data (e.g., analyzing, displaying, manipulating, etc. the three-dimensional OCT data) produces a linked, associated, coordinated interaction (e.g., analysis, display, manipulation, etc.) of the two-dimensional fundus data. For example, in some embodiments, methods comprise display of the three-dimensional OCT data and user interaction with the display of the three-dimensional OCT data (e.g., examination of one or more “slices” of the three-dimensional OCT data, by “fly-through” of the OCT data, or by otherwise examining the three-dimensional OCT data on a display). Then, in some embodiments, a user interacts with the three-dimensional OCT data—e.g., the user interacts with the display of the three-dimensional OCT data by use of an input device, e.g., a touch screen, mouse, track ball, etc. to provide a boundary around the region of interest and the user receives sensory feedback, e.g., the boundary is displayed superimposed on the three-dimensional OCT data and/or on the two-dimensional fundus image data as the user interacts with the displayed image. Further, indication of the boundary around the region of interest in the three-dimensional OCT image provides an associated, coordinated boundary around the region of interest in the two-dimensional image data. In this way, the user, “draws” the boundary around the region of interest using the technology provided herein, e.g., using a combination of the OCT image data (e.g., the three-dimensional image data and associated two-dimensional fundus image data), an output device (e.g., display), an input device (e.g., a touch screen), and a computer configured to calculate the area and/or volume of a region of interest according to the methods and technologies described herein. In some embodiments, a user provides a continuous boundary around the region of interest. In some embodiments a user provides a discontinuous boundary (e.g., a series of points, dots, lines, line segments (e.g., straight line segments, curved line segments), etc.) marking some of the region of interest (e.g., marking one or more locations of the edge of the region of interest). 
In some embodiments, a user provides points or portions of a boundary around a region of interest and an automated image processing algorithm completes the boundary using image analysis and the user-defined points or partial boundary to complete the boundary (e.g., using interpolation analysis to connect the user-provided portions of the boundary). In embodiments of the technology in which the images are segmented, the technology is not limited by how the images are segmented. For example, various embodiments provide for the automated segmentation of the images (e.g., by computer algorithm that identifies image segments), semi-automated segmentation, or manual segmentation of the image (e.g., by a user who identifies image segments). See also, U.S. Pat. No. 8,811,745 (incorporated herein by reference), which describes systems and methods for segmentation and identification of structured features in images (e.g., an ocular image showing layered structures or other features of the retina). Some embodiments further provide for automated detection and identification (e.g., marking) of biological features in images such as, e.g., blood vessels. See, e.g., U.S. Pat. No. 8,750,615 (incorporated herein by reference in its entirety), which describes a system and related methods for automatic or semi-automatic segmentation and quantification of blood vessel structure and physiology, including segmentation, quantification, and visualization of vessel walls, plaques, and macrophages. The image processing technology provides in particular a method for measuring a linear distance, an area, and/or a volume of a region of interest within a biological tissue using an image produced by optical coherence tomography. In an exemplary embodiment, an OCT apparatus (e.g., a SD-OCT apparatus) and a tissue are positioned for acquisition of OCT data (e.g., OCT image data such as, e.g., SD-OCT image data comprising three-dimensional image data and a fundus image). See, e.g.,FIG.1showing an OCT apparatus (“OCT”) and a sample (e.g., a tissue) in a schematic drawing. After acquiring OCT data (e.g., three dimensional OCT image data), the data are segmented to produce an image showing the segments (e.g., representing tissue layers and/or other features of the sample). For example,FIG.2A(bottom panel) shows a projection of three-dimensional OCT data (e.g., an image as shown on a display such as, e.g., a computer screen) in two dimensions (e.g., a cross-section in a plane parallel, effectively parallel, and/or substantially parallel to the z-axis). The example OCT image inFIG.2A(bottom panel) has been segmented (see, e.g., upper and lower lines corresponding to a first segment and a second segment), e.g., to show tissue layers. Further, the exemplary OCT image shown inFIG.2A(bottom panel) comprises a region of interest as a protrusion in the upper segment. In exemplary embodiments, such a protrusion may indicate abnormal tissue growth, a lesion (a retinal lesion), central neovascularization (e.g., associated with macular degeneration) or other abnormal feature in a biological tissue. Also shown inFIG.2A(upper panel) is an exemplary OCT fundus image (e.g., as shown on a display such as, e.g., a computer screen) in a plane normal, effectively normal, and/or substantially normal to the z-axis (e.g., in the x-y plane). The exemplary fundus image shown inFIG.2A(upper panel) shows the region of interest (FIG.2A(upper panel), black outlined shape). 
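The boundary completion mentioned at the start of this passage, in which user-provided points or partial boundary segments are connected by interpolation, could be done, for example, by linear interpolation around the closed contour. The following NumPy sketch is one possible scheme under that assumption; the text only requires that the user-defined portions be connected by some form of interpolation.

```python
import numpy as np

def close_boundary(points, samples_per_edge=20):
    """Linearly interpolate a closed boundary through user-clicked (x, y) points.

    `points` is an (n, 2) sequence of clicks in fundus-image coordinates, in order;
    the last point is connected back to the first to close the contour.
    """
    points = np.asarray(points, dtype=float)
    closed = np.vstack([points, points[:1]])          # repeat the first point to close the loop
    t = np.linspace(0.0, 1.0, samples_per_edge, endpoint=False)
    segments = [(1 - t)[:, None] * closed[i] + t[:, None] * closed[i + 1]
                for i in range(len(points))]
    return np.vstack(segments)

clicks = [(10, 10), (40, 12), (45, 38), (15, 35)]     # four clicks around a lesion
boundary = close_boundary(clicks)
print(boundary.shape)  # (80, 2)
```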
According to embodiments of the technology provided herein, the images are analyzed to determine the area and/or volume of the region of interest (e.g., the protrusion shown inFIG.2A(bottom panel)). In some embodiments, the greatest linear dimension of the region of interest is determined (e.g., by examination of the fundus image and/or the three dimensional OCT data (e.g., image)). SeeFIG.2B(upper panel), g. The greatest linear dimension is the greatest distance across the region of interest. For example, the greatest linear dimension can be determined by identifying the longest line segment having each of its two ends touching the perimeter of the region of interest. In some embodiments, the greatest linear dimension of the region of interest is provided by a user. In particular, in some embodiments the fundus image is provided to a user on a display and the user draws a line segment having each of its two ends touching the perimeter of the region of interest using a computer and computer input device (e.g., mouse, touch screen, light pen, etc.). As the user draws the line segment, the line segment is provided on the fundus image of the region of interest on the display. In some embodiments, a computer determines and provides the greatest linear dimension of the region of interest (e.g., by identifying the longest line segment having each of its two ends touching the perimeter of the region of interest). In some embodiments, the computer displays a line on a display showing the greatest linear dimension of the region of interest. In some embodiments, a boundary is provided around the region of interest, e.g., an area enclosing the region of interest is identified in the fundus image. For example, in some embodiments a circle having a diameter (see, e.g.,FIG.2B(top panel), d) greater than or equal to the greatest linear dimension g is provided to circumscribe the region of interest. The boundary has an area A (see, e.g.,FIG.2B(top panel), grey region) and the region of interest is within the area A. In embodiments in which the boundary is a circle, the area A=π×(d/2)2. The technology is not limited in the shape of the boundary. The boundary may be any shape (e.g., circle, ellipse, square, etc., or an irregular shape) enclosing the region of interest and having an area. See, e.g.,FIG.2D-1(showing a circle boundary) andFIG.2D-2(showing an irregular boundary). In some embodiments, a computer determines the boundary. In some embodiments, a user determines the boundary. For example, in some embodiments the fundus image is provided to a user on a display and the user draws a shape enclosing the region of interest using a computer and computer input device (e.g., mouse, touch screen, light pen, etc.). As the user draws the boundary, the boundary is provided on the fundus image of the region of interest on the display. In preferred embodiments, the area A of the boundary is determined by computer analysis, e.g., according to algorithms for determining the area of shapes (e.g., irregular shapes). Extension of the boundary substantially in the direction of the z-axis (e.g., through the sample (e.g., tissue)) defines a volume v in the three dimensional OCT data (e.g., image). The volume v is defined by the first and second segments and by extension of the boundary through the segments.FIG.2B(bottom panel), grey region, shows a cross-section of the volume defined by the first segment, second segment, and the extended boundary. 
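The greatest linear dimension g and the area A = π × (d/2)² of a circular boundary described above can be computed directly from the boundary coordinates. The sketch below assumes the perimeter of the region of interest is available as a list of (x, y) points in consistent units (e.g., mm) and uses a brute-force pairwise search for g; it is an illustration under those assumptions, not the implementation of any particular instrument.

```python
import math
import numpy as np

def greatest_linear_dimension(perimeter_points):
    """Greatest distance between any two points on the perimeter of the region
    of interest (brute-force pairwise search; fine for a few hundred points)."""
    pts = np.asarray(perimeter_points, dtype=float)
    diffs = pts[:, None, :] - pts[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(axis=-1)).max())

def circle_boundary_area(d):
    """Area A = pi * (d / 2)**2 of a circular boundary of diameter d; choosing
    d >= the greatest linear dimension g makes the circle circumscribe the region."""
    return math.pi * (d / 2.0) ** 2

perimeter = [(0.0, 0.0), (3.0, 0.5), (4.0, 2.0), (1.0, 3.0)]
g = greatest_linear_dimension(perimeter)
print(round(g, 2), round(circle_boundary_area(g), 2))   # g ~4.47, A ~15.71 (same squared units)
```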
Computer analysis of the three dimensional data (e.g., image data) provides a volume v of the volume defined by the first segment, second segment, and the extended boundary. The data are analyzed to determine a distance t (e.g., thickness) between the first segment and the second segment in the direction of the z-axis. In particular embodiments, t is the average distance between the first segment and the second segment measured along the perimeter of the boundary. In alternative embodiments, the distance t may also be the maximum distance between the first segment and the second segment measured along the perimeter of the boundary, the minimum distance between the first segment and the second segment measured along the perimeter of the boundary, and/or any other distance calculated between the first segment and the second segment measured along the perimeter of the boundary. Average distance may be calculated using an average calculated in a sliding window moved along the perimeter of the boundary. The distance t provides a measurement for the normal distance between the first segment and the second segment in a normal sample (e.g., a normal tissue), e.g., a sample that does not comprise abnormal growth, does not comprise a lesion, etc. As such, preferred embodiments are those in which the boundary is provided in a region of the data (e.g., images) corresponding to healthy, normal sample (e.g., healthy, normal tissue), e.g., healthy, normal, etc. relative to the region of interest, which corresponds to abnormal sample (e.g., abnormal tissue comprising a feature such as a lesion). The technology provides methods for calculating the area A and/or the volume V of a region of interest (e.g., an abnormality, lesion, etc.). Thus, in some embodiments, the area A defined by the boundary and the distance t are used to calculate a volume n. The volume n is subtracted from the volume v determined above to provide the volume V of the region of interest (e.g., abnormality, lesion, etc.). Accordingly, the volume n is calculated as the product of the area A of the boundary and the thickness t, as determined above.FIG.2C(lower panel) shows a volume n in cross-sectional view (white rectangle). The volume n has a height that is the distance t. The top and bottom of the volume n each have an area A. Accordingly, the volume V of the region of interest (FIG.2C(lower panel), grey region) is calculated by subtracting the volume n (FIG.2C(lower panel), white region) from the volume v (FIG.2B(lower panel), grey region). While certain embodiments are described above using a boundary that is a circle, the technology comprises use of a boundary of any shape (e.g., circle, ellipse, square, etc., or an irregular shape) enclosing the region of interest and having an area A. For example,FIG.2D-1andFIG.2D-2show a volume n having a top and bottom that are circles having area A and height t.FIG.2D-3andFIG.2D-4show a volume n having a top and bottom that are an irregular shape having area A and height t. In some embodiments, the area A is determined from examining the three-dimensional OCT image to localize the margins of the area on the two-dimensional image. For example, in some embodiments, area A is calculated from the interpolation of points on the edges of the area using a three-dimensional image that is registered with the two-dimensional image. 
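The volume bookkeeping described above (the volume v enclosed by the two segments inside the boundary, the thickness t sampled along the boundary perimeter, and V = v − (A × t)) can be written compactly if the two segmented surfaces are available as depth maps on the same x-y grid as the fundus image. The following sketch makes that assumption; the function name, the use of 4-neighbor perimeter pixels, and the toy numbers are illustrative only.

```python
import numpy as np

def lesion_volume(z_upper, z_lower, boundary_mask, pixel_area):
    """Volume V of the region of interest following V = v - (A * t):
      v  volume enclosed by the two segments inside the boundary,
      A  area of the boundary,
      t  mean segment-to-segment distance sampled along the boundary perimeter.
    z_upper, z_lower : (nx, ny) depth maps (e.g., in mm) of the first and second segments
    boundary_mask    : (nx, ny) bool mask of the drawn boundary
    pixel_area       : area of one x-y pixel (e.g., mm^2)
    """
    thickness = np.abs(z_lower - z_upper)

    v = thickness[boundary_mask].sum() * pixel_area      # integrate thickness inside the boundary
    A = boundary_mask.sum() * pixel_area

    # perimeter pixels: inside the mask but touching at least one outside pixel
    padded = np.pad(boundary_mask, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = boundary_mask & ~interior
    t = thickness[perimeter].mean()

    return v - A * t

# toy example: flat normal thickness of 0.3 mm with a 0.2 mm bump in the middle
nx = ny = 21
z1 = np.zeros((nx, ny))
z2 = np.full((nx, ny), 0.3)
z2[8:13, 8:13] += 0.2
mask = np.zeros((nx, ny), dtype=bool)
mask[4:17, 4:17] = True
print(round(lesion_volume(z1, z2, mask, pixel_area=0.01), 3))  # ~0.05 mm^3 (25 px * 0.2 mm * 0.01 mm^2)
```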
Accordingly, the technology provides a general method for determining the area A and/or the volume V of a region of interest in OCT data, e.g., comprising the steps of acquiring OCT data, determining the volume v (e.g., defined by the first and second segments and by the extension of the boundary through the segments), calculating the volume n (e.g., as the product of the area A of the boundary and the distance t), and subtracting n from v. Systems Some embodiments of the technology provide systems determining the area and/or the volume of a region of interest in OCT data (e.g., in OCT data acquired from a biological tissue, e.g., OCT image of a biological tissue such as a retina). Systems according to the technology comprise, e.g., an OCT apparatus (e.g., a SD-OCT apparatus), a computer, and software to instruct a computer to perform a method as described herein. Some embodiments further comprise a display (e.g., to provide three dimensional OCT data (e.g., three dimensional OCT images) and/or two dimensional OCT data (e.g., a fundus image) to a user) and an input device (e.g., for a user to provide information to the computer (e.g., to provide a boundary enclosing a region of interest). For example, in some embodiments, computer-based analysis is used to calculate the area A of the boundary, determine the distance t between the first segment and the second segment, calculate the volume v (e.g., defined by the first segment, second segment, and the boundary extended through the segments), and the volume n (e.g., product of area A and distance t), and volume V (volume of the region of interest). In some embodiments, one or more of these calculations use data provided by a user and/or data acquired by the computer. For instance, some embodiments comprise a computer system upon which embodiments of the present technology may be implemented. In various embodiments, a computer system includes a bus or other communication mechanism for communicating information and a processor coupled with the bus for processing information. In various embodiments, the computer system includes a memory, which can be a random access memory (RAM) or other dynamic storage device, coupled to the bus, and instructions to be executed by the processor. Memory also can be used for storing temporary variables or other intermediate information during execution of instructions to be executed by the processor. In various embodiments, the computer system can further include a read only memory (ROM) or other static storage device coupled to the bus for storing static information and instructions for the processor. A storage device, such as a magnetic disk or optical disk, can be provided and coupled to the bus for storing information and instructions. In various embodiments, the computer system is coupled via the bus to a display, such as a cathode ray tube (CRT) or a liquid crystal display (LCD), for displaying information to a computer user (e.g., three dimensional OCT images and/or two dimensional OCT images such as a fundus image). An input device, including alphanumeric and other keys, can be coupled to the bus for communicating information and command selections to the processor. Another type of user input device is a cursor control, such as a mouse, a trackball, a light pen, a touch screen, or cursor direction keys, for communicating direction information and command selections to the processor and for controlling cursor movement on the display (e.g., to draw shapes, lines, etc. to show on the computer display). 
This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. These x and y axes are not necessarily coincident with the x and y axes shown inFIG.1(e.g., with respect to the sample and images). A computer system can perform embodiments of the present technology. Consistent with certain implementations of the present technology, results can be provided by the computer system in response to the processor executing one or more sequences of one or more instructions contained in the memory. Such instructions can be read into the memory from another computer-readable medium, such as a storage device. Execution of the sequences of instructions contained in the memory can cause the processor to perform the methods described herein. Alternatively, hard-wired circuitry can be used in place of or in combination with software instructions to implement the present teachings. Thus, implementations of the present technology are not limited to any specific combination of hardware circuitry and software. The term “computer-readable medium” as used herein refers to any medium or media that participates in providing instructions to the processor for execution. Such a medium can take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Examples of non-volatile media can include, but are not limited to, optical or magnetic disks, such as a storage device. Examples of volatile media can include, but are not limited to, dynamic memory. Examples of transmission media can include, but are not limited to, coaxial cables, copper wire, and fiber optics, including the wires that comprise the bus. Common forms of computer-readable media include, for example, a floppy disk, a flexible disk, hard disk, magnetic tape, or any other magnetic medium, a CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, a RAM, PROM, and EPROM, a FLASH-EPROM, any other memory chip or cartridge, or any other tangible medium from which a computer can read. Various forms of computer readable media can be involved in carrying one or more sequences of one or more instructions to the processor for execution. For example, the instructions can initially be carried on the magnetic disk of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a network connection (e.g., a LAN, a WAN, the internet, a telephone line). A local computer system can receive the data and transmit it to the bus. The bus can carry the data to the memory, from which the processor retrieves and executes the instructions. The instructions received by the memory may optionally be stored on a storage device either before or after execution by the processor. In accordance with various embodiments, instructions configured to be executed by a processor to perform a method are stored on a computer-readable medium. The computer-readable medium can be a device that stores digital information. For example, a computer-readable medium includes a compact disc read-only memory (CD-ROM) as is known in the art for storing software. The computer-readable medium is accessed by a processor suitable for executing instructions configured to be executed. 
In accordance with such a computer system, some embodiments of the technology provided herein further comprise functionalities for collecting, storing, and/or analyzing data (e.g., OCT images, e.g., three dimensional OCT images, two dimensional OCT images). For example, some embodiments contemplate a system that comprises a processor, a memory, and/or a database for, e.g., storing and executing instructions, analyzing image data, performing calculations using the data, transforming the data, and storing the data. In some embodiments, an algorithm applies a model for calculating (e.g., approximating) an area and/or a volume in the image data. Some embodiments provide for the resizing, cropping, flattening, or other manipulation of image data. Particular embodiments provide a database to organize, search, process, analyze, share and visualize image data and image metadata. In some embodiments, area and/or volume data (e.g., comprising information relating to the area A and/or volume V of a region of interest for a subject) are stored (e.g., associated with a time at which the area A and/or volume V is determined and/or associated with the particular subject). For example, volume data (e.g., comprising information relating to the area A and/or volume V of a region of interest for a subject) are acquired at more than one time (e.g., over a period of days, weeks, months, years, or decades) and an area A (e.g., A1) and/or a volume V (e.g., V1) acquired at one time is compared to an area A (e.g., A2) and/or a volume V (e.g., V2) acquired at another time. In some embodiments, the difference in the two values of A (e.g., A2−A1) and/or V (e.g., V2−V1) is used to inform a treatment of the subject. For example, in some embodiments the magnitude of the area A1 and/or volume V1 acquired at one time is used to determine a treatment, dosage, pharmaceutical administration, medical intervention (e.g., surgery), etc. Then, determining the area A2 and/or volume V2 at a later time provides an indication of the effectiveness of the treatment, e.g., in some embodiments an A2 and/or a V2 that is less than A1 and/or V1 for the region of interest indicates that the treatment was effective. Many diagnostics involve determining the presence of, size of, location of, etc. a region of interest in a sample. Thus, in some embodiments, an equation comprising variables representing the presence of, size of, location of, etc. a region of interest in a sample produces a value that finds use in making a diagnosis or assessing the presence or qualities of a region of interest. As such, in some embodiments this value is presented by a device, e.g., by an indicator related to the result (e.g., an LED, an icon on a display, a sound, or the like). In some embodiments, a device stores the value, transmits the value, or uses the value for additional calculations. Thus, in some embodiments, the present technology provides the further benefit that a clinician, who is not likely to be trained in image analysis, pathology, and/or the biology of particular tissues, need not understand the raw data. The data are presented directly to the clinician in their most useful form. The clinician is then able to utilize the information to optimize the care of a subject. The present technology contemplates any method capable of receiving, processing, and transmitting the information to and from laboratories conducting the assays, information providers, medical personnel, and/or subjects.
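Because the longitudinal comparison above reduces to simple differences and ratios of the stored values, it is easy to automate once A and V are recorded per visit. A minimal sketch is given below; the function name is hypothetical and the sample values match the later prophetic example (V1 = 1.5 mm³, V2 = 0.75 mm³).

```python
def treatment_response(v1, v2):
    """Compare the region-of-interest volume at two visits; in the text a
    decrease (v2 < v1) is read as an indication that the intervening treatment
    was effective."""
    return {"change": v2 - v1, "ratio": v2 / v1 if v1 else float("nan")}

print(treatment_response(1.5, 0.75))   # {'change': -0.75, 'ratio': 0.5}
```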
For example, in some embodiments of the present technology, data are acquired from analyzing a subject's tissue and the data are submitted to an analysis service (e.g., a clinical lab at a medical facility, a tissue profiling business, etc.), located in any part of the world (e.g., in a country different than the country where the subject resides or where the information is ultimately used). For example, the subject may visit a medical center to be tested and to have data sent to the profiling center. Where the data comprises previously determined biological information, the information may be directly sent to the profiling service by the subject (e.g., data transmitted to a computer of the profiling center using electronic communication systems). Once received by the profiling service, the data are processed and a profile is produced that is specific for the diagnostic or prognostic information desired for the subject. The profile data are then prepared in a format suitable for interpretation by a treating clinician. For example, rather than providing raw image data, the prepared format may represent a diagnosis or risk assessment for the subject, along with recommendations for particular treatment options. The data may be displayed to the clinician by any suitable method. For example, in some embodiments, the profiling service generates a report that can be printed for the clinician (e.g., at the point of care) or displayed to the clinician on a computer display. In some embodiments, the information is first analyzed at the point of care or at a regional facility. The raw data are then sent to a central processing facility for further analysis and/or to convert the raw data to information useful for a clinician or patient. The central processing facility provides the advantage of privacy (all data are stored in a central facility with uniform security protocols), speed, and uniformity of data analysis. The central processing facility can then control the fate of the data following treatment of the subject. For example, using an electronic communication system, the central facility can provide data to the clinician, the subject, or researchers. In some embodiments, the subject is able to access the data using the electronic communication system. The subject may chose further intervention or counseling based on the results. In some embodiments, the data are used for research use. For example, the data may be used to further optimize the inclusion or elimination of markers as useful indicators of a particular condition associated with a disease. Applications OCT is widely used, for example, to obtain high-resolution images of the anterior segment of the eye and the retina. As such, the technique finds use, for example, in assessing axonal integrity in diseases such as, e.g., multiple sclerosis, other neurodegenerative diseases, and glaucoma. OCT finds use for monitoring the progression of glaucoma and to image coronary arteries to detect lipid-rich plaques. In an exemplary use, the technology finds use in measuring retinal thickness. Retinal thickness may be abnormally large in cases of retinal edema or traction by membranes in the vitreous humor. On the other hand, the retina and/or RFE may appear thin or absent in cases of atrophic degeneration, chorioretinitis, or trauma to the retina. Meanwhile, changes in retinal thickness may be localized or extend over large areas. In certain cases, the overall contour of the retina may become abnormal. 
For example, pronounced myopia, particularly due to posterior staphylomas, may create a highly concave retina. Retina layers overlying regions of RPE atrophy may become markedly thinned or lost. Detachment of the retinal pigment epithelium (RPE), subretinal cysts, or subretinal tumors may produce a relative convexity of the retina. Therefore, mapping the retina contour or retinal thickness makes it possible to determine the extent and severity of such conditions and to monitor progress of treatment. In addition, the technique finds use in imaging brain tissue in vivo, e.g., using OCT to produce detailed images of mice brains through a transparent zirconia window implanted in the skull. OCT finds use to identify root canals in teeth (e.g., canal in the maxillary molar). Also, OCT finds use in interventional cardiology to diagnose coronary artery disease. Furthermore, OCT finds use in industrial applications, such as in non-destructive testing (NDT), material thickness measurements, and for examining thin silicon wafers and compound semiconductor wafers (e.g., to make thickness measurements, surface roughness characterization, surface imaging, cross-section imaging, and volume loss measurements). OCT systems with feedback can be used to control manufacturing processes. OCT finds use in the pharmaceutical industry to control the coating of tablets. In some embodiments, the technology finds use in metric analysis of a CNV lesion complex and/or a region of RPE loss, e.g., as associated with macular degeneration, in OCT (e.g., SD-OCT) images (see, e.g., Examples 2 and 3). Although the disclosure herein refers to certain illustrated embodiments, it is to be understood that these embodiments are presented by way of example and not by way of limitation. EXAMPLES Prophetic Example 1 In some embodiments, the technology finds use in diagnosing and treating a patient. For example, the technology aids a physician who determines that a patient has a subretinal choroidal neovascular membrane with subretinal fluid developing in the macula of an eye. A volumetric raster OCT scan of a 6 mm×6 mm region of the central macula is obtained (e.g., using default settings) to capture the 3D image. The OCT scan is registered to a retinal angiographic image obtained during the same visit. The physician determines the boundaries of the lesion in the fundus angiogram image. Using a computer mouse, the user defines a region that includes the lesion but extends beyond it into retinal tissue that appears normal, thereby defining the area A for analysis. The defined region happens to be irregular in shape (e.g., not perfectly circular). The segmentation algorithm is run, which segments the internal limiting membrane layer and the retinal pigment epithelium layer of the retina. The volume of the defined region of interest is calculated. From this the volume of the abnormality, V1, is calculated by the software. This volume V1is 1.5 mm3. At this first visit, the patient is given a drug treatment to treat the lesion. At a second visit, the scan and angiogram studies are repeated on the patient's eye and the data are registered with software. Again, the physician identifies the region of interest and draws on the angiogram image the region of interest that circumscribes the lesion and some normal retina, which is not circular. After the segmentation algorithm is run, a second volume from the second visit is obtained. From this, V2is calculated. V2is determined to be 0.75 mm3. The ratio V2/V1is 0.5. 
The physician determines that the treatment has lessened the volume of the abnormality by 50%, indicating a treatment effect. The physician plans to continue treatment with administration of the same drug at the second visit due to a good initial response to treatment. Example 2-Metric Analysis of a CNV Lesion Quantitative analysis of OCT data has been used in clinical trials targeting wet AMD in patients. In one class of treatments comprising administration of anti-VEGF agents (e.g., Lucentis, Eylea), metric evaluation of retinal thickness is used to monitor subretinal fluid accumulation. In addition, combination therapies targeting VEGF and PDGF find use in treatment of patients. In these treatments, metric assessment (e.g., measurement of the volume and/or area) of CNV is used to monitor the effectiveness of the PDGF treatment. Accordingly, the technology described herein finds use in the quantitative analysis (e.g., metric analysis (e.g., determination of volume and/or area)) of CNV size based on SD-OCT. In an exemplary application of embodiments of the technology, an SD-OCT scan and an associated fundus image registered pixel-to-pixel to the SD-OCT data are provided. In some embodiments, the technology is based on the use of SD-OCT data only, but an improved technology is provided by use of SD-OCT data and an associated fundus image. For example, providing both OCT data and a registered fundus image improve user analysis and grading of the tissues and lesions in the patient. In some embodiments, the SD-OCT and fundus image are displayed on a display side by side, e.g., in a split view mode, e.g., as provided by a software implementation of the technology provided herein (see, e.g.,FIG.3AandFIG.3B). In this view mode, the long horizontal white line in the fundus image ofFIG.3Amarks the plane view of the OCT data displayed inFIG.3B; and the location of the vertical white tick mark on the fundus image (FIG.3A) is correlated to the location of the two vertical white tick marks on the OCT image (FIG.3B). FIG.3Ashows a fundus image andFIG.3Bshows an associated view of OCT data. The fundus image and OCT scan show a CNV complex, which occupies approximately left three-quarters of the OCT image field inFIG.3B(e.g., to the left of the white tick marks). The view of the OCT data shows thick, multi-layers of reflective materials that are packed together. The retina appears nearly normal in the right quarter of the OCT image (e.g., to the right of the white tick marks), where retinal pigment epithelium is visible and flat. The retinal pigment epithelium appears to be nearing a normal state at the left edge of the OCT data view shown. The left edge of the OCT image and the vertical tick marks in the OCT image mark the edges of the CNV lesion, e.g., in some embodiments a user marks the edge (e.g., boundary) of the CNV lesion and in some embodiments a method implemented in computer software marks the edge (e.g., boundary) of the CNV lesion. Accordingly, using embodiments of the technology provided herein, a user explores the OCT (e.g., SD-OCT) scan and/or fundus image to locate the edge of the CNV lesion. As the user evaluates the image(s) and identifies the edge of the CNV lesion, the user marks the edge of the lesion on the fundus photo. Then, in some embodiments, after exploring the OCT (e.g., SD-OCT) data set, the user identifies (e.g., traces, marks) the area encompassing the extent of the CNV lesion. 
Alternatively, in some embodiments, software automatically determines and indicates the area encompassing the extent of the CNV lesion, e.g., in some embodiments software uses image analysis and one or more points, dots, line segments, etc. provided by the user identifying the edge of the lesion. See, e.g.,FIG.4AandFIG.5Ashowing the areas marked on fundus images and the views of the registered OCT data inFIG.4BandFIG.5B.FIG.4AandFIG.4Bare image data of a patient's retina acquired at a time point;FIG.5AandFIG.5Bare image data of the same region of the same patient's retina at another (e.g., later) time point. The long horizontal white lines in the fundus images ofFIG.4AandFIG.5Amark the plane views of the OCT data displayed inFIG.4BandFIG.5B, respectively; and the location of the vertical white tick mark on the fundus image (FIG.4AandFIG.5A) is correlated to the location of the two vertical white tick marks on the OCT images (FIG.4BandFIG.5B). The area of the region of interest is then derived automatically according to the technology provided herein (e.g., by an algorithm to determine the area of the region of interest defined by the line encompassing the region of interest). After segmenting the image data, in some embodiments, the volume of the CNV lesion complex is calculated (e.g., by calculating the volume v within the boundary of area A and between the first segment and the second segment; calculating the average thickness t between the first segment and the second segment along the boundary (e.g., along the perimeter of area A); and calculating the volume V of the region of interest using V = v − (t × A)). In an exemplary use of the technology, a subject is enrolled in a treatment course and monitored by OCT imaging. At a visit, the area of a CNV lesion is 11.8 mm² or 4.64 disc area (see, e.g.,FIG.4Ashowing the size of a lesion prior to a treatment). At a later visit of the same subject, the area of the CNV lesion is reduced to 8.17 mm² or 3.21 disc area (see, e.g.,FIG.5Ashowing the size of the same region of the same patient after treatment). The reduction in the area of the region of interest (e.g., the smaller area of the region of interest inFIG.5Arelative to the area of the region of interest inFIG.4A) indicates that the treatment is effective. Example 3-Metric Analysis of a Region of RPE Loss The technology finds use in managing the care and treatment of a patient having AMD, e.g., to monitor vision defects and associated lesions of the retina and/or RPE. For example, during examination of the patient, OCT data are obtained from the patient's eye. The data show a complex region of RPE loss (see, e.g.,FIG.6Ashowing a B scan and an en face infrared image). A user scrolls through the stacked B scans in the 3D image to mark the border of the atrophy, e.g., because the structure of interest is not visible on, or not definitely located in, the en face image (FIG.6B, showing the region of atrophy partially defined, using the boundary as the location where the layer of the external limiting membrane of the retina is lost). Finally, the boundary within the area of RPE loss is found (FIG.6C, showing the completed circumscribed region of RPE loss). The region is calculated to have an area of 4.75 mm² and the distance of the nearest border of RPE loss to the foveal center is 150 microns. Other useful metrics are provided by and/or calculated from parameters associated with the boundaries of regions of interest as defined with this methodology.
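As a small numerical illustration of the metrics reported in Examples 2 and 3, the sketch below converts a lesion area in mm² into disc areas and applies the V = v − (t × A) relation. The conversion factor of roughly 2.54 mm² per disc area is an assumption inferred from the figures quoted above (11.8 mm² ≈ 4.64 disc area and 8.17 mm² ≈ 3.21 disc area); the volume inputs are placeholders carried over from the earlier toy sketch.

```python
DISC_AREA_MM2 = 2.54   # assumed conversion, consistent with 11.8/4.64 and 8.17/3.21 from the text

def area_in_disc_areas(area_mm2):
    """Express a lesion area measured in mm^2 in optic disc areas."""
    return area_mm2 / DISC_AREA_MM2

def lesion_volume(v, t, A):
    """V = v - (t * A): subtract the 'normal tissue' slab of thickness t and area A."""
    return v - t * A

print(round(area_in_disc_areas(11.8), 2))        # ~4.65 (the text reports 4.64)
print(round(area_in_disc_areas(8.17), 2))        # ~3.22 (the text reports 3.21)
print(round(lesion_volume(v=0.557, t=0.30, A=1.69), 3))   # 0.05, using the earlier toy numbers
```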
For instance, metrics defining the shape of a lesion obtained from measurements described herein (e.g., from a measurement of the perimeter of the boundary of area A) have prognostic value in some embodiments of the technology (see, e.g., Domalpally (2013) "Circularity Index as a Risk Factor for the Progression of Geographic Atrophy" Ophthalmology 120(12): 2666-71). All publications and patents mentioned in the above specification are herein incorporated by reference in their entirety for all purposes. Various modifications and variations of the described compositions, methods, and uses of the technology will be apparent to those skilled in the art without departing from the scope and spirit of the technology as described. Although the technology has been described in connection with specific exemplary embodiments, it should be understood that the invention as claimed should not be unduly limited to such specific embodiments. Indeed, various modifications of the described modes for carrying out the invention that are obvious to those skilled in the art are intended to be within the scope of the following claims.
11861831 | DESCRIPTION OF THE EMBODIMENTS Reference will now be made in detail to the exemplary embodiments of the disclosure, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts. The present disclosure describes an approach for providing prognosis of coronary artery disease (“CAD”) and for predicting plaque growth/shrinkage based on patient-specific geometry and blood flow characteristics. Specifically, the present disclosure describes a system that receives patient information (e.g., 3D cardiac imaging, patient demographics, and history) and provides a patient-specific and location-specific risk score for the pathogenesis of CAD. Although the present disclosure is described with particular reference to coronary artery disease, the same systems and methods are applicable to creating a patient-specific prediction of lesion formation in other vascular systems beyond the coronary arteries. More specifically, the present disclosure describes certain principles and embodiments for using patients' cardiac imaging to: (1) derive a patient-specific geometric model of the coronary vessels; and (2) perform coronary flow simulation to extract hemodynamic characteristics, patient physiological information, and boundary conditions in order to predict the onset and location of coronary lesions. The present disclosure is not limited to a physics-based simulation of blood flow to predict the locations predisposed to plaque formation, but rather uses machine learning to predict the lesion location by incorporating various risk factors, including patient demographics and coronary geometry, as well as the results of patient-specific biophysical simulations (e.g., hemodynamic characteristics). If additional diagnostic test results are available, those results may also be used in the training and prediction. According to certain embodiments, the presently disclosed methods involve two phases: (1) a training phase in which the machine learning system is trained to predict one or more locations of coronary lesions, and (2) a production phase in which the machine learning system is used to produce one or more locations of coronary lesions. Referring now to the figures,FIG.1depicts a block diagram of an exemplary system and network for predicting the location, onset, and/or change of coronary lesions from vessel geometry, physiology, and hemodynamics. Specifically,FIG.1depicts a plurality of physician devices or systems102and third party provider devices or systems104, any of which may be connected to an electronic network101, such as the Internet, through one or more computers, servers, and/or handheld mobile devices. Physicians and/or third party providers associated with physician devices or systems102and/or third party provider devices or systems104, respectively, may create or otherwise obtain images of one or more patients' cardiac and/or vascular systems. The physicians and/or third party providers may also obtain any combination of patient-specific information, such as age, medical history, blood pressure, blood viscosity, etc. Physicians and/or third party providers may transmit the cardiac/vascular images and/or patient-specific information to server systems106over the electronic network101. Server systems106may include storage devices for storing images and data received from physician devices or systems102and/or third party provider devices or systems104. 
Server systems106may also include processing devices for processing images and data stored in the storage devices. FIG.2is a diagram of an exemplary three-dimensional mesh of a geometric model200used in predicting the location, onset, and/or change of coronary lesions from vessel geometry, according to an exemplary embodiment of the present disclosure. For example, as described above, a third party provider or physician may obtain patient-specific anatomical data of one or more patients. Patient-specific anatomical data may include data regarding the geometry of the patient's heart, e.g., at least a portion of the patient's aorta, a proximal portion of the main coronary arteries (and the branches extending therefrom) connected to the aorta, and the myocardium. However, as-described above, patient-specific anatomical data may also or alternatively be obtained in relation to any portion of the patient's vasculature, including beyond the patient's heart. Initially, a patient may be selected, e.g., when the physician determines that information about the patient's coronary blood flow is desired, e.g., if the patient is experiencing symptoms associated with coronary artery disease, such as chest pain, heart attack, etc. The patient-specific anatomical data may be obtained noninvasively, e.g., using a noninvasive imaging method. For example, CCTA is an imaging method in which a user may operate a computer tomography (CT) scanner to view and create images of structures, e.g., the myocardium, the aorta, the main coronary arteries, and other blood vessels connected thereto. The CCTA data may be time-varying, e.g., to show changes in vessel shape over a cardiac cycle. CCTA may be used to produce an image of the patient's heart. For example, 64-slice CCTA data may be obtained, e.g., data relating to 64 slices of the patient's heart, and assembled into a three-dimensional image. Alternatively, other noninvasive imaging methods, such as magnetic resonance imaging (MRI) or ultrasound (US), or invasive imaging methods, such as digital subtraction angiography (DSA), may be used to produce images of the structures of the patient's anatomy. The imaging methods may involve injecting the patient intravenously with a contrast agent to enable identification of the structures of the anatomy. The resulting imaging data (e.g., provided by CCTA, MRI, etc.) may be provided by a third-party vendor, such as a radiology lab or a cardiologist, by the patient's physician, etc. Other patient-specific anatomical data may also be determined from the patient noninvasively. For example, physiological data such as the patient's blood pressure, baseline heart rate, height, weight, hematocrit, stroke volume, etc., may be measured. The blood pressure may be the blood pressure in the patient's brachial artery (e.g., using a pressure cuff), such as the maximum (systolic) and minimum (diastolic) pressures. The patient-specific anatomical data obtained as described above may be transferred over a secure communication line (e.g., via electronic network101ofFIG.1). For example, the data may be transferred to server systems106or other computer system for performing computational analysis, e.g., the computational analysis described below with respect toFIGS.3-5B. In one exemplary embodiment, the patient-specific anatomical data may be transferred to server systems106or other computer system operated by a service provider providing a web-based service. 
Alternatively, the data may be transferred to a computer system operated by the patient's physician or other user. In one embodiment, server systems106may generate a three-dimensional solid model and/or three-dimensional mesh200based on the received patient-specific anatomical data. For example, server systems106may generate the three-dimensional model and/or mesh based on any of the techniques described in U.S. Pat. No. 8,315,812 by Taylor et al., which issued on Nov. 20, 2012, the entirety of which is hereby incorporated herein by reference. FIG.3Ais a block diagram of an exemplary method300for training a machine learning system, based on a plurality of patients' blood flow characteristics and geometry, for predicting the location, onset, and/or change of coronary lesions from vessel geometry, physiology, and hemodynamics, according to an exemplary embodiment of the present disclosure. Specifically, as shown inFIG.3A, method300may include obtaining patient imaging data (e.g., a geometric model) and physiologic and/or hemodynamic information302for a plurality of patients. Method300may include generating feature vectors304based on the plurality of patients' imaging and physiologic and/or hemodynamic information. Method300further includes obtaining information about plaque306for the plurality of patients, and formatting the information about the plurality of patients' plaque into the format that is desired of the output308of the learning system. Method300completes the training mode by inputting into a learning system310both the feature vectors304formed from the plurality of patients' imaging data and physiologic and/or hemodynamic information, and the output308of the information about plaque for the plurality of patients. For example, as will be described in more detail below, any suitable type of machine learning system may process both the feature vectors304and outputs308to identify patterns and conclusions from that data, for later use in producing outputs of information about a particular user's plaque. FIG.3Bis a block diagram of an exemplary method350for using the trained machine learning system310for predicting, for a particular patient, the location, onset, and/or change of coronary lesions from vessel geometry, physiology, and hemodynamics, according to an exemplary embodiment of the present disclosure. As shown inFIG.3B, method350may include obtaining patient imaging data (e.g., a geometric model) and physiologic and/or hemodynamic information312for a particular patient, for whom it is desired to predict plaque location, onset, and/or change based on the trained learning system310. Of course, method350may include obtaining the patient imaging data and physiologic and/or hemodynamic information for any number of patients for whom it is desired to predict plaque location, onset, and/or change based on the trained learning system. Method350may include generating a feature vector314for each of a plurality of points of the patient's geometric model, based on one or more elements of the received physiologic and/or hemodynamic information. Method350may then include operating the machine learning system310on the feature vectors generated for the patient to obtain an output316of the estimates of the presence or onset of plaque at each of a plurality of points in the patient's geometric model, and translating the output into useable information318about the location, onset, and/or change of plaque in the patient318. 
Described below are exemplary embodiments for implementing a training mode method300and a production mode method350of machine learning for predicting the location, onset, and/or change of coronary lesions from vessel geometry, physiology, and hemodynamics, e.g. using server systems106, based on images and data received from physicians and/or third party providers over electronic network101. Specifically, the methods ofFIGS.4A-5Bmay be performed by server systems106, based on information received from physician devices or systems102and/or third party provider devices or systems104over electronic network101. FIG.4Ais a block diagram of an exemplary method400for training a machine learning system (e.g., a machine learning system310executed on server systems106) for predicting the location of coronary lesions from vessel geometry, physiology, and hemodynamics, according to an exemplary embodiment of the present disclosure. Specifically, method400may include, for one or more patients (step402), obtaining a patient-specific geometric model of a portion of the patient's vasculature (step404), obtaining one or more estimates of physiological or phenotypic parameters of the patient (step406), and obtaining one or more estimates of biophysical hemodynamic characteristics of the patient (step408). For example, the step of obtaining a patient-specific geometric model of a portion of the patient's vasculature (step404) may include obtaining a patient-specific model of the geometry for one or more of the patient's blood vessels, myocardium, aorta, valves, plaques, and/or chambers. In one embodiment, this geometry may be represented as a list of points in space (possibly with a list of neighbors for each point) in which the space can be mapped to spatial units between points (e.g., millimeters). In one embodiment, this model may be derived by performing a cardiac CT imaging of the patient in the end diastole phase of the cardiac cycle. This image then may be segmented manually or automatically to identify voxels belonging to the aorta and the lumen of the coronary arteries. Given a 3D image of coronary vasculature, any of the many available methods may be used for extracting a patient-specific model of cardiovascular geometry. Inaccuracies in the geometry extracted automatically may be corrected by a human observer who compares the extracted geometry with the images and makes corrections as needed. Once the voxels are identified, the geometric model can be derived (e.g., using marching cubes). The step of obtaining one or more estimates of physiological or phenotypic parameters of the patient (step406) may include obtaining a list of one or more estimates of physiological or phenotypic parameters of the patient, such as blood pressure, blood viscosity, in vitro blood test results (e.g., LDL/Triglyceride cholesterol level), patient age, patient gender, the mass of the supplied tissue, etc. These parameters may be global (e.g., blood pressure) or local (e.g., estimated density of the vessel wall at a location). 
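As a concrete illustration of the voxels-to-surface step mentioned above, the sketch below runs marching cubes on a binary segmentation mask to obtain a triangulated lumen surface. It assumes a recent version of scikit-image is available and uses a synthetic spherical mask; the voxel spacing and all names are placeholders rather than values from the text, and the patent does not prescribe this particular library.

```python
import numpy as np
from skimage import measure   # assumes scikit-image is installed

def surface_from_segmentation(label_volume, voxel_spacing_mm=(0.5, 0.5, 0.5)):
    """Extract a triangulated surface from a binary voxel mask of the segmented
    aorta/coronary lumen using marching cubes. Returns vertices (in mm) and
    triangle faces (vertex indices)."""
    verts, faces, _normals, _values = measure.marching_cubes(
        label_volume.astype(np.float32), level=0.5, spacing=voxel_spacing_mm)
    return verts, faces

# toy mask: a voxelized sphere standing in for a segmented vessel region
zz, yy, xx = np.mgrid[0:32, 0:32, 0:32]
mask = ((xx - 16) ** 2 + (yy - 16) ** 2 + (zz - 16) ** 2) < 10 ** 2
verts, faces = surface_from_segmentation(mask)
print(verts.shape, faces.shape)
```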
In one embodiment, the physiological or phenotypic parameters may include blood pressure, hematocrit level, patient age, patient gender, myocardial mass (e.g., derived by segmenting the myocardium in the image, and calculating the volume in the image and using an estimated density of 1.05 g/mL to estimate the myocardial mass), general risk factors of coronary artery disease (e.g., smoking, diabetes, hypertension, abdominal obesity, dietary habits, family history, etc.), and/or in vitro blood test results (e.g., LDL, Triglyceride cholesterol level). The step of obtaining one or more estimates of biophysical hemodynamic characteristics of the patient (step408) may include obtaining a list of one or more estimates of biophysical hemodynamic characteristics from computational fluid dynamics analysis, such as wall-shear stress, oscillatory shear index, particle residence time, Reynolds number, Womersley number, local flow rate, and turbulent kinetic energy, etc. Specifically, the mean wall-shear stress may be defined as $\left|\frac{1}{T_1-T_0}\int_{T_0}^{T_1}\vec{t_s}\,dt\right|$, where $\vec{t_s}$ is the wall shear stress vector defined as the in-plane component of the surface traction vector. The oscillatory shear index (OSI) may be defined as $\frac{1}{2}\left(1-\frac{\left|\frac{1}{T_1-T_0}\int_{T_0}^{T_1}\vec{t_s}\,dt\right|}{\frac{1}{T_1-T_0}\int_{T_0}^{T_1}\left|\vec{t_s}\right|dt}\right)$, which may be a measure of the uni-directionality of shear stress. The particle residence time may be a measure of the time it takes blood to be flushed from a specified fluid domain. The turbulent kinetic energy ("TKE") may be a measure of the intensity of turbulence associated with eddies in turbulent flow, and may be characterized by measured root-mean-square velocity fluctuation, and may be normalized by kinetic energy. The Reynolds number may be defined as $\frac{\rho UD}{\mu}$, where ρ: density of blood, U: average flow velocity, D: vessel diameter, and μ: dynamic viscosity. The Womersley number may be defined as $\frac{D}{2}\sqrt{\frac{\omega\rho}{\mu}}$, where ω: angular frequency, equal to $\frac{1}{\text{cardiac cycle length}}$. Method400may further include obtaining an indication of the presence or absence of plaque at one or more locations of the patient-specific geometric model (step410). For example, in one embodiment, the location of calcified or non-calcified plaque may be determined using CT and/or other imaging modalities, including intravascular ultrasound, or optical coherence tomography. For example, the plaque may be detected in the three-dimensional image (200ofFIG.2) generated from patient-specific anatomical data. The plaque may be identified in a three-dimensional image or model as areas that are lighter than the lumens of the aorta, the main coronary arteries, and/or the branches. Thus, the plaque may be detected by the computer system as having an intensity value below a set value or may be detected visually by the user. The location of detected plaques may be parameterized by a distance from the ostium point (left main or right coronary ostium) to the projection of centroid of plaque coordinates onto the associated vessel centerline and an angular position of plaque with respect to myocardium (e.g., myocardial/pericardial side). The location of detected plaques may be also parameterized by start and end points of the projection of plaque coordinates onto the associated vessel centerline. If plaque exists at a location, method400may include obtaining a list of one or more measurements of coronary plaque composition, e.g., type, Hounsfield units ("HU"), etc., burden, shape (eccentric or concentric), and location.
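Since the Reynolds and Womersley numbers above are simple algebraic functions of the listed quantities, they can be computed directly once the flow inputs are available. The sketch below is illustrative only; the numerical values are placeholders (not taken from the text) and the function names are hypothetical.

```python
import math

def reynolds_number(rho, U, D, mu):
    """Re = rho * U * D / mu (blood density, average flow velocity, vessel
    diameter, dynamic viscosity)."""
    return rho * U * D / mu

def womersley_number(D, omega, rho, mu):
    """Wo = (D / 2) * sqrt(omega * rho / mu); the text takes omega as
    1 / (cardiac cycle length)."""
    return (D / 2.0) * math.sqrt(omega * rho / mu)

# illustrative values in CGS-like units (assumed, not from the text)
rho = 1.06    # g/cm^3
mu = 0.04     # poise
U = 15.0      # cm/s mean coronary flow velocity
D = 0.3       # cm vessel diameter
T = 0.8       # s cardiac cycle length
print(round(reynolds_number(rho, U, D, mu), 1))          # ~119.2
print(round(womersley_number(D, 1.0 / T, rho, mu), 2))   # ~0.86
```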
Method400may further include, for each of a plurality of points in the patient-specific geometric model for which there is information about the presence or absence of plaque (step412), creating a feature vector for the point (step414) and associating the feature vector with the presence or absence of plaque at that point (step416). In one embodiment, the step of creating a feature vector for the point may include creating a feature vector for that point that consists of a numerical description of the geometry and biophysical hemodynamic characteristics at that point, and estimates of physiological or phenotypic parameters of the patient. For example, a feature vector for attributes: distance to ostium, wall shear stress, local flow rate, Reynolds number, and centerline curvature, may be in the form of (50 mm, 70 dyne/cm2, 1500 mm3/sec, 400, 1 mm−1). Global physiological or phenotypic parameters may be used in the feature vector of all points, and local physiological or phenotypic parameters may change in the feature vector of different points. In one embodiment, an exemplary feature vector generated in step414may include one or more of: (i) systolic and diastolic blood pressure, (ii) heart rate, (iii) blood properties including: plasma, red blood cells (erythrocytes), hematocrit, white blood cells (leukocytes) and platelets (thrombocytes), viscosity, yield stress, etc. (iv) patient age, gender, height, weight, etc., (v) lifestyle characteristics, e.g., presence or absence of current medications/drugs, (vi) general risk factors of CAD, such as smoking, diabetes, hypertension, abdominal obesity, dietary habits, family history of CAD, etc., (vii) in vitro blood test results, such as LDL, Triglyceride cholesterol level, etc., (viii) coronary calcium score, (ix) amount of calcium in aorta and valve, (x) presence of aortic aneurysm, (xi) presence of valvular heart disease, (xii) presence of peripheral disease, (xiii) presence of dental disease, (xiv) epicardial fat volume, (xv) cardiac function (ejection fraction), (xvi) stress echocardiogram test results, (xvii) characteristics of the aortic geometry (e.g., cross-sectional area profile along the ascending and descending aorta, and surface area and volume of the aorta, (xviii) a SYNTAX score, as described in U.S. patent application Ser. No. 13/656,183, filed by Timothy A. Fonte et al. on Oct. 19, 2012, the entire disclosure of which is incorporated herein by reference, (xix) plaque burden of existing plaque, (xx) adverse plaque characteristics of existing plaque (e.g., presence of positive remodeling, presence of low attenuation plaque, presence of spotty calcification), (xxi) characteristics of the coronary branch geometry, (xxii) characteristics of coronary cross-sectional area, (xxiii) characteristics of coronary lumen intensity, e.g., intensity change along the centerline (slope of linearly-fitted intensity variation), (xxiv) characteristics of surface of coronary geometry, e.g., 3D surface curvature of geometry (Gaussian, maximum, minimum, mean), (xxv) characteristics of volume of coronary geometry, e.g., ratio of total coronary volume compared to myocardial volume, (xxvi) characteristics of coronary centerline, (xxvii) characteristics of coronary deformation, (xxviii) characteristics of existing plaque, and (xxix) characteristics of coronary hemodynamics derived from computational flow dynamics or invasive measurement. 
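The example feature vector above (50 mm, 70 dyne/cm², 1500 mm³/sec, 400, 1 mm⁻¹ for distance to ostium, wall shear stress, local flow rate, Reynolds number, and centerline curvature) is simply a concatenation of local, per-point attributes with global, per-patient attributes. A minimal sketch of assembling such a vector is shown below; the global values (blood pressure, age, LDL) are drawn from the categories listed in the text, but the specific numbers and names are illustrative assumptions.

```python
import numpy as np

def make_feature_vector(point_features, global_features):
    """Concatenate local (per-point) and global (per-patient) attributes into a
    single numeric feature vector, mirroring the example in the text."""
    return np.array(list(point_features.values()) + list(global_features.values()), dtype=float)

local_feats = {                       # local attributes at one centerline point
    "distance_to_ostium_mm": 50.0,
    "wall_shear_stress_dyn_cm2": 70.0,
    "local_flow_rate_mm3_s": 1500.0,
    "reynolds_number": 400.0,
    "centerline_curvature_mm_inv": 1.0,
}
global_feats = {                      # global attributes shared by all points of this patient
    "systolic_bp_mmHg": 120.0,        # illustrative value
    "age_years": 58.0,                # illustrative value
    "ldl_mg_dl": 130.0,               # illustrative value
}
print(make_feature_vector(local_feats, global_feats))
```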
In one embodiment, the characteristics of the coronary branch geometry may include one or more of: (1) total number of vessel bifurcations, and the number of upstream/downstream vessel bifurcations; (2) average, minimum, and maximum upstream/downstream cross-sectional areas; (3) distances (along the vessel centerline) to the centerline point of minimum and maximum upstream/downstream cross-sectional areas, (4) cross-sectional area of and distance (along the vessel centerline) to the nearest upstream/downstream vessel bifurcation, (5) cross-sectional area of and distance (along the vessel centerline) to the nearest coronary outlet and aortic inlet/outlet, (6) cross-sectional areas and distances (along the vessel centerline) to the downstream coronary outlets with the smallest/largest cross-sectional areas, and/or (7) upstream/downstream volumes of the coronary vessels. In one embodiment, the characteristics of coronary cross-sectional area may include one or more of: (1) cross-sectional lumen area along the coronary centerline, (2) cross-sectional lumen area to the power of N (where N can be determined from various sources of scaling laws such as Murray's law (N=1.5) and Uylings' study (N=1.165˜1.5)), (3) a ratio of lumen cross-sectional area with respect to the main ostia (LM, RCA) (e.g., measure of cross-sectional area at the LM ostium, normalized cross-sectional area of the left coronary by LM ostium area, measure of cross-sectional area at the RCA ostium, normalized cross-sectional area of the right coronary by RCA ostium area), (4) ratio of lumen cross-sectional area with respect to the main ostia to the power of N (where N can be determined from various sources of scaling laws such as Murray's law (N=1.5) and Uylings' study (N=1.165˜1.5)), (5) degree of tapering in cross-sectional lumen area along the centerline (based on sampled centerline points within a certain interval (e.g., twice the diameter of the vessel) and computation of a slope of linearly-fitted cross-sectional area), (6) location of stenotic lesions (based on detecting minima of the cross-sectional area curve (e.g., detecting locations where the first derivative of the area curve is zero and the second derivative is positive, and smoothing the cross-sectional area profile to avoid detecting artifactual peaks), and computing distance (parametric arc length of centerline) from the main ostium), (7) length of stenotic lesions (computed based on the proximal and distal locations from the stenotic lesion, where cross-sectional area is recovered), (8) degree of stenotic lesions, by evaluating degree of stenosis based on reference values of smoothed cross-sectional area profile using Fourier smoothing or kernel regression, (9) location and number of lesions corresponding to 50%, 75%, 90% area reduction, (10) distance from stenotic lesion to the main ostia, and/or (11) irregularity (or circularity) of cross-sectional lumen boundary. In one embodiment, the characteristics of coronary centerline may include: (1) curvature (bending) of coronary centerline, such as by computing the Frenet curvature $\kappa = \frac{\left|p^{\prime}\times p^{\prime\prime}\right|}{\left|p^{\prime}\right|^{3}}$, where p is a coordinate of the centerline, and computing an inverse of the radius of a circumscribed circle along the centerline points, and (2) tortuosity (non-planarity) of coronary centerline, such as by computing the Frenet torsion $\tau = \frac{(p^{\prime}\times p^{\prime\prime})\cdot p^{\prime\prime\prime}}{\left|p^{\prime}\times p^{\prime\prime}\right|^{2}}$, where p is a coordinate of the centerline.
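The curvature and torsion expressions above can be evaluated numerically on a sampled centerline. A minimal finite-difference sketch follows; it assumes the centerline is given as an ordered (n, 3) array of points and is only an illustration (the patent does not prescribe a particular discretization). The helix check at the end uses the known analytic values κ = 0.8 and τ = 0.4 for that curve.

```python
import numpy as np

def frenet_curvature_torsion(centerline):
    """Discrete estimates of kappa = |p' x p''| / |p'|^3 and
    tau = ((p' x p'') . p''') / |p' x p''|^2 along a 3-D centerline sampled
    as an (n, 3) array, using repeated central differences."""
    p = np.asarray(centerline, dtype=float)
    d1 = np.gradient(p, axis=0)                  # p'
    d2 = np.gradient(d1, axis=0)                 # p''
    d3 = np.gradient(d2, axis=0)                 # p'''
    cross = np.cross(d1, d2)
    cross_norm = np.linalg.norm(cross, axis=1)
    speed = np.linalg.norm(d1, axis=1)
    kappa = cross_norm / np.maximum(speed ** 3, 1e-12)
    tau = np.einsum("ij,ij->i", cross, d3) / np.maximum(cross_norm ** 2, 1e-12)
    return kappa, tau

# a helix of radius 1 and pitch factor 0.5: analytic kappa = 0.8, tau = 0.4
s = np.linspace(0, 4 * np.pi, 400)
helix = np.column_stack([np.cos(s), np.sin(s), 0.5 * s])
kappa, tau = frenet_curvature_torsion(helix)
print(round(kappa[200], 3), round(tau[200], 3))   # ~0.8, ~0.4
```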
In one embodiment, calculation of the characteristics of coronary deformation may involve multi-phase CCTA (e.g., diastole and systole), including (1) distensibility of coronary artery over cardiac cycle, (2) bifurcation angle change over cardiac cycle, and/or (3) curvature change over cardiac cycle. In one embodiment, the characteristics of existing plaque may be calculated based on: (1) volume of plaque, (2) intensity of plaque, (3) type of plaque (calcified, non-calcified), (4) distance from the plaque location to ostium (LM or RCA), and (5) distance from the plaque location to the nearest downstream/upstream bifurcation. In one embodiment, the characteristics of coronary hemodynamics may be derived from computational flow dynamics or invasive measurement. For example, pulsatile flow simulation may be performed to obtain transient characteristics of blood, by using a lumped parameter coronary vascular model for downstream vasculatures, inflow boundary condition with coupling a lumped parameter heart model and a closed loop model to describe the intramyocardial pressure variation resulting from the interactions between the heart and arterial system during cardiac cycle. For example, the calculation may include: measured FFR, coronary flow reserve, pressure distribution, FFRct, mean wall-shear stress, oscillatory shear index, particle residence time, turbulent kinetic energy, Reynolds number, Womersley number, and/or local flow rate. Method400may then include associating the feature vector with the presence or absence of plaque at each point of the patient-specific geometric model (step416). Method400may involve continuing to perform the above steps412,414,416, for each of a plurality of points in the patient-specific geometric model (step418), and for each of any number of patients on which a machine learning algorithm may be based (step420). Method400may then include training the machine learning algorithm to predict the probability of the presence of plaque at the points from the feature vectors at the points (step422). Examples of machine learning algorithms suitable for performing this task may include support vector machines (SVMs), multi-layer perceptrons (MLPs), and/or multivariate regression (MVR) (e.g., weighted linear or logistic regression). Method400may then include storing or otherwise saving the results of the machine learning algorithm (e.g., feature weights) to a digital representation, such as the memory or digital storage (e.g., hard drive, network drive) of a computational device, such as a computer, laptop, DSP, server, etc. of server systems106(step424). FIG.4Bis a block diagram of an exemplary method450for using a machine learning system trained according to method400(e.g., a machine learning system310executed on server systems106) for predicting, for a particular patient, the location of coronary lesions from vessel geometry, physiology, and hemodynamics, according to an exemplary embodiment of the present disclosure. In one embodiment, method450may include, for one or more patients (step452), obtaining a patient-specific geometric model of a portion of the patient's vasculature (step454), obtaining one or more estimates of physiological or phenotypic parameters of the patient (step456), and obtaining one or more estimates of biophysical hemodynamic characteristics of the patient (step458). 
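Steps 422 and 424 above train a learning system on the per-point feature vectors and persist the learned results (e.g., feature weights). The sketch below uses logistic regression from scikit-learn as a stand-in for the SVM/MLP/MVR options named in the text and pickles the fitted model; the library choice, function names, file path, and synthetic data are assumptions for illustration only.

```python
import pickle
import numpy as np
from sklearn.linear_model import LogisticRegression   # assumed stand-in for SVM/MLP/MVR

def train_plaque_model(feature_vectors, plaque_labels, model_path="plaque_model.pkl"):
    """Train a classifier mapping per-point feature vectors to presence (1) or
    absence (0) of plaque, then persist the fitted model (cf. steps 422/424)."""
    X = np.asarray(feature_vectors, dtype=float)
    y = np.asarray(plaque_labels, dtype=int)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    with open(model_path, "wb") as fh:
        pickle.dump(model, fh)
    return model

# toy training set: 200 points x 8 features with a synthetic "plaque present" rule
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = train_plaque_model(X, y)
print(model.coef_.shape)    # (1, 8): one learned weight per feature
```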
Specifically, the step of obtaining a patient-specific geometric model of a portion of the patient's vasculature (step454) may include obtaining a patient-specific model of the geometry for one or more of the patient's blood vessels, myocardium, aorta, valves, plaques, and/or chambers. In one embodiment, this geometry may be represented as a list of points in space (possibly with a list of neighbors for each point) in which the space can be mapped to spatial units between points (e.g., millimeters). In one embodiment, this model may be derived by performing a cardiac CT imaging of the patient in the end diastole phase of the cardiac cycle. This image then may be segmented manually or automatically to identify voxels belonging to the aorta and the lumen of the coronary arteries. Inaccuracies in the geometry extracted automatically may be corrected by a human observer who compares the extracted geometry with the images and makes corrections as needed. Once the voxels are identified, the geometric model can be derived (e.g., using marching cubes). In one embodiment, the step of obtaining one or more estimates of physiological or phenotypic parameters of the patient (step456) may include obtaining a list of one or more estimates of physiological or phenotypic parameters of the patient, such as blood pressure, blood viscosity, in vitro blood test results (e.g., LDL/Triglyceride cholesterol level), patient age, patient gender, the mass of the supplied tissue, etc. These parameters may be global (e.g., blood pressure) or local (e.g., estimated density of the vessel wall at a location). In one embodiment, the physiological or phenotypic parameters may include blood pressure, hematocrit level, patient age, patient gender, myocardial mass (e.g., derived by segmenting the myocardium in the image, calculating the volume in the image, and using an estimated density of 1.05 g/mL to estimate the myocardial mass), general risk factors of coronary artery disease (e.g., smoking, diabetes, hypertension, abdominal obesity, dietary habits, family history, etc.), and/or in vitro blood test results (e.g., LDL, Triglyceride cholesterol level). In one embodiment, the step of obtaining one or more estimates of biophysical hemodynamic characteristics of the patient (step458) may include obtaining a list of one or more estimates of biophysical hemodynamic characteristics from computational fluid dynamics analysis, such as wall-shear stress, oscillatory shear index, particle residence time, Reynolds number, Womersley number, local flow rate, and turbulent kinetic energy, etc. Specifically, the mean wall-shear stress may be defined as |(1/(T1−T0)) ∫_{T0}^{T1} t_s dt|, where t_s may be the wall shear stress vector, defined as the in-plane component of the surface traction vector. The oscillatory shear index (OSI) may be defined as (1/2)·(1 − |(1/(T1−T0)) ∫_{T0}^{T1} t_s dt| / ((1/(T1−T0)) ∫_{T0}^{T1} |t_s| dt)), which may be a measure of the uni-directionality of shear stress. The particle residence time may be a measure of the time it takes blood to be flushed from a specified fluid domain. The turbulent kinetic energy (TKE) may be a measure of the intensity of turbulence associated with eddies in turbulent flow, and may be characterized by the measured root-mean-square velocity fluctuation, and may be normalized by kinetic energy. The Reynolds number may be defined as ρUD/μ, where ρ: density of blood, U: average flow velocity, D: vessel diameter, and μ: dynamic viscosity.
The Womersley number may be defined as (D/2)·√(ωρ/μ), where ω is the angular frequency (equal to 1/(cardiac cycle length)). Method450may include, for every point in the patient-specific geometric model of the patient (step460), creating for that point a feature vector comprising a numerical description of the geometry and biophysical hemodynamic characteristic at that point, and estimates of physiological or phenotypic parameters of the patient (step462). Global physiological or phenotypic parameters may be used in the feature vector of one or more points, and local physiological or phenotypic parameters may change in the feature vector of different points. Method450may involve continuing to perform the above steps460,462, for each of a plurality of points in the patient-specific geometric model (step464). Method450may then include producing estimates of the probability of the presence or absence of plaque at each point in the patient-specific geometric model based on the stored machine learning results (stored at B,FIG.4A) (step468). Specifically, method450may use the saved results of the machine learning algorithm310produced in the training mode of method400(e.g., feature weights) to produce estimates of the probability of the presence of plaque at each point in the patient-specific geometric model (e.g., by generating plaque estimates as a function of the feature vector at each point). These estimates may be produced using the same machine learning algorithm technique used in the training mode (e.g., the SVM, MLP, MVR technique). In one embodiment, the estimates may be a probability of the existence of plaque at each point of a geometric model. If there is no existing plaque at a point, the method may include generating an estimated probability of the onset of plaque (e.g., lipid-rich, non-calcified plaque). If plaque does exist at a point, the method may include generating an estimated probability of progression of the identified plaque to a different stage (e.g., fibrotic or calcified), and the amount or shape of such progression. In one embodiment, the estimates may be a probability of a shape, type, composition, size, growth, and/or shrinkage of plaque at any given location or combination of locations. For example, in one embodiment (in the absence of longitudinal training data), the progression of plaque may be predicted by determining that, based on the patient's population, the patient appears as though they should have disease characteristic X, despite actually having characteristic Y. Therefore, the estimate may include a prediction that the patient will progress from state X to state Y, which may include assumptions and/or predictions about plaque growth, shrinkage, change of type, change of composition, change of shape, etc. Method450may then include saving the estimates of the probability of the presence or absence of plaque (step470), such as to the memory or digital storage (e.g., hard drive, network drive) of a computational device, such as a computer, laptop, DSP, server, etc., of server systems106, and communicating these patient-specific and location-specific predicted probabilities of lesion formation to a health care provider, such as over electronic network101.
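By way of non-limiting illustration, the shear-stress-derived indices defined above (mean wall-shear stress, OSI, Reynolds number, Womersley number) might be evaluated from sampled data roughly as follows; the traction-vector samples, the trapezoidal time integration, and the example blood properties are assumptions for illustration.

```python
import numpy as np

def mean_wss_and_osi(ts_samples: np.ndarray, t: np.ndarray):
    """ts_samples: (N, 3) in-plane wall shear stress vectors sampled at times t over [T0, T1]."""
    T = t[-1] - t[0]
    mean_vec = np.trapz(ts_samples, t, axis=0) / T                 # (1/(T1-T0)) * integral of t_s dt
    mean_mag = np.trapz(np.linalg.norm(ts_samples, axis=1), t) / T # (1/(T1-T0)) * integral of |t_s| dt
    mean_wss = np.linalg.norm(mean_vec)                            # magnitude of the time-averaged vector
    osi = 0.5 * (1.0 - mean_wss / mean_mag)                        # 0 for unidirectional flow, up to 0.5
    return mean_wss, osi

def reynolds(rho, U, D, mu):
    return rho * U * D / mu                    # rho*U*D/mu

def womersley(D, omega, rho, mu):
    return (D / 2.0) * np.sqrt(omega * rho / mu)   # (D/2)*sqrt(omega*rho/mu)

# Placeholder pulsatile shear trace over one cardiac cycle of 0.8 s.
t = np.linspace(0.0, 0.8, 200)
ts = np.stack([np.sin(2 * np.pi * t / 0.8) + 1.0,
               0.3 * np.cos(2 * np.pi * t / 0.8),
               np.zeros_like(t)], axis=1)
print(mean_wss_and_osi(ts, t))
print(reynolds(rho=1060.0, U=0.15, D=0.003, mu=0.0035))
# omega taken as 1/(cardiac cycle length) per the definition above; other conventions use 2*pi/T.
print(womersley(D=0.003, omega=1.0 / 0.8, rho=1060.0, mu=0.0035))
```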
FIG.5Ais a block diagram of an exemplary method500for training a machine learning system (e.g., a machine learning system310executed on server systems106) for predicting the onset or change (e.g., growth and/or shrinkage) of coronary lesions over time, such as by using longitudinal data (i.e., corresponding data taken from the same patients at different points in time) of vessel geometry, physiology, and hemodynamics, according to an exemplary embodiment of the present disclosure. Specifically, method500may include, for one or more patients (step502), obtaining a patient-specific geometric model of a portion of the patient's vasculature (step504), obtaining one or more estimates of physiological or phenotypic parameters of the patient (step506), and obtaining one or more estimates of biophysical hemodynamic characteristics of the patient (step508). For example, the step of obtaining a patient-specific geometric model of a portion of the patient's vasculature (step504) may include obtaining a patient-specific model of the geometry for one or more of the patient's blood vessels, myocardium, aorta, valves, plaques, and/or chambers. In one embodiment, this geometry may be represented as a list of points in space (possibly with a list of neighbors for each point) in which the space can be mapped to spatial units between points (e.g., millimeters). In one embodiment, this model may be derived by performing a cardiac CT imaging of the patient in the end diastole phase of the cardiac cycle. This image then may be segmented manually or automatically to identify voxels belonging to the aorta and the lumen of the coronary arteries. Inaccuracies in the geometry extracted automatically may be corrected by a human observer who compares the extracted geometry with the images and makes corrections as needed. Once the voxels are identified, the geometric model can be derived (e.g., using marching cubes). The step of obtaining one or more estimates of physiological or phenotypic parameters of the patient (step506) may include obtaining a list of one or more estimates of physiological or phenotypic parameters of the patient, such as blood pressure, blood viscosity, in vitro blood test results (e.g., LDL/Triglyceride cholesterol level), patient age, patient gender, the mass of the supplied tissue, etc. These parameters may be global (e.g., blood pressure) or local (e.g., estimated density of the vessel wall at a location). In one embodiment, the physiological or phenotypic parameters may include blood pressure, hematocrit level, patient age, patient gender, myocardial mass (e.g., derived by segmenting the myocardium in the image, calculating the volume in the image, and using an estimated density of 1.05 g/mL to estimate the myocardial mass), general risk factors of coronary artery disease (e.g., smoking, diabetes, hypertension, abdominal obesity, dietary habits, family history, etc.), and/or in vitro blood test results (e.g., LDL, Triglyceride cholesterol level). The step of obtaining one or more estimates of biophysical hemodynamic characteristics of the patient (step508) may include obtaining a list of one or more estimates of biophysical hemodynamic characteristics from computational fluid dynamics analysis, such as wall-shear stress, oscillatory shear index, particle residence time, Reynolds number, Womersley number, local flow rate, and turbulent kinetic energy, etc. Specifically, the mean wall-shear stress may be defined as |(1/(T1−T0)) ∫_{T0}^{T1} t_s dt|, where t_s may be the wall shear stress vector, defined as the in-plane component of the surface traction vector. 
The oscillatory shear index (OSI) may be defined as (1/2)·(1 − |(1/(T1−T0)) ∫_{T0}^{T1} t_s dt| / ((1/(T1−T0)) ∫_{T0}^{T1} |t_s| dt)), which may be a measure of the uni-directionality of shear stress. The particle residence time may be a measure of the time it takes blood to be flushed from a specified fluid domain. The turbulent kinetic energy (TKE) may be a measure of the intensity of turbulence associated with eddies in turbulent flow, and may be characterized by the measured root-mean-square velocity fluctuation, and may be normalized by kinetic energy. The Reynolds number may be defined as ρUD/μ, where ρ: density of blood, U: average flow velocity, D: vessel diameter, and μ: dynamic viscosity. The Womersley number may be defined as (D/2)·√(ωρ/μ), where ω is the angular frequency (equal to 1/(cardiac cycle length)). Method500may further include obtaining an indication of the growth, shrinkage, or onset of plaque at one or more locations of the patient-specific geometric model (step510). For example, in one embodiment, the location of plaque may be determined using CT and/or other imaging modalities, including intravascular ultrasound or optical coherence tomography. If plaque exists at a location, method500may include obtaining a list of one or more measurements of coronary plaque composition, burden, and location. In order to synchronize geometry obtained from patients over time, it may be desirable to determine point correspondence between multiple time variant scans of each individual. In other words, it may be desirable to learn the vessel characteristics in a location at the earlier time point that are correlated with the progression of disease in the same location at the later time point, such as by using a database of pairs of images of the same patient at two different time points. Given the image of a new patient, training data of local disease progression may then be used to predict the change in disease at each location. Accordingly, in one embodiment, step510may further include: (i) determining a mapping of a coronary centerline from an initial scan to a follow-up scan; and (ii) determining a mapping of extracted plaques using curvilinear coordinates defined along the centerline. In one embodiment, the coronary centerline mapping may be determined by (i) extracting centerlines of major epicardial coronary arteries (e.g., left descending coronary artery, circumflex artery, right coronary artery) and branch vessels (e.g., diagonal, marginal, etc.) for each scan; (ii) using bifurcating points as fiducial landmarks to determine common material points between the scans; and (iii) for points between bifurcations, using linear interpolation or the cross-sectional area profile (e.g., value, slope) of the coronary vessels to identify correspondence. In one embodiment, the mapping of extracted plaques may be determined by: (i) extracting plaque from each scan; (ii) parameterizing the location of plaque voxels by a curvilinear coordinate system for each associated centerline (r, θ, s); and (iii) determining correspondence of plaque voxels in each curvilinear coordinate system. In one embodiment, the curvilinear coordinate system may be defined where: r=distance from the plaque voxel to the associated centerline (projection of plaque); s=distance from the ostium point (left main or right coronary) to the projection of the plaque voxel onto the associated centerline; and θ=angular position with respect to a reference path parallel to the centerline.
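By way of non-limiting illustration, the curvilinear parameterization (r, θ, s) described above might be computed for a single plaque voxel roughly as follows; the nearest-point projection and the reference direction used to fix θ=0 are assumptions for illustration.

```python
import numpy as np

def curvilinear_coords(voxel: np.ndarray, centerline: np.ndarray, reference: np.ndarray):
    """Map a plaque voxel to (r, theta, s) relative to a sampled centerline.

    voxel:      (3,) voxel position.
    centerline: (N, 3) centerline points ordered from the ostium.
    reference:  (3,) direction used to fix theta = 0 (a path parallel to the centerline).
    """
    seg = np.linalg.norm(np.diff(centerline, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])     # arc length measured from the ostium

    d = np.linalg.norm(centerline - voxel, axis=1)
    i = int(np.argmin(d))                             # index of the projection point
    r = d[i]                                          # distance from voxel to the centerline
    s = arc[i]                                        # ostium -> projection arc length

    # Tangent at the projection, and the voxel offset projected into the normal plane.
    tangent = np.gradient(centerline, axis=0)[i]
    tangent = tangent / np.linalg.norm(tangent)
    offset = voxel - centerline[i]
    offset = offset - np.dot(offset, tangent) * tangent

    # Reference direction, also projected into the normal plane, defines theta = 0.
    ref = reference - np.dot(reference, tangent) * tangent
    cos_t = np.dot(offset, ref)
    sin_t = np.dot(np.cross(ref, offset), tangent)
    theta = float(np.arctan2(sin_t, cos_t))
    return r, theta, s

# Straight centerline along z; a voxel 2 units off-axis at z = 10 maps to (2.0, 0.0, 10.0).
centerline = np.stack([np.zeros(50), np.zeros(50), np.linspace(0.0, 49.0, 50)], axis=1)
print(curvilinear_coords(np.array([2.0, 0.0, 10.0]), centerline, reference=np.array([1.0, 0.0, 0.0])))
```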
Method500may further include, for each of a plurality of points in the patient-specific geometric model for which there is information about the growth, shrinkage, or onset of plaque (step512), creating a feature vector for the point (step514) and associating the feature vector with the growth, shrinkage, or onset of plaque at that point (step516). In one embodiment, the step of creating a feature vector for the point may include creating a feature vector for that point that consists of a numerical description of the geometry and biophysical hemodynamic characteristics at that point, and estimates of physiological or phenotypic parameters of the patient. For example, a feature vector for attributes: hematocrit, plaque burden, plaque Hounsfield unit, distance to ostium, wall shear stress, flow, Reynolds number, and centerline curvature may be in the form of: (45%, 20 mm3, 130 HU, 60.5 mm, 70 dyne/cm2, 1500 mm3/sec, 400, 1 mm−1). Global physiological or phenotypic parameters may be used in the feature vector of all points, and local physiological or phenotypic parameters may change in the feature vector of different points. In one embodiment, an exemplary feature vector generated in step514may include one or more of: (i) systolic and diastolic blood pressure, (ii) heart rate, (iii) blood properties including: plasma, red blood cells (erythrocytes), hematocrit, white blood cells (leukocytes) and platelets (thrombocytes), viscosity, yield stress, etc. (iv) patient age, gender, height, weight, etc., (v) lifestyle characteristics, e.g., presence or absence of current medications/drugs, (vi) general risk factors of CAD, such as smoking, diabetes, hypertension, abdominal obesity, dietary habits, family history of CAD, etc., (vii) in vitro blood test results, such as LDL, Triglyceride cholesterol level, etc., (viii) coronary calcium score, (ix) amount of calcium in aorta and valve, (x) presence of aortic aneurysm, (xi) presence of valvular heart disease, (xii) presence of peripheral disease, (xiii) presence of dental disease, (xiv) epicardial fat volume, (xv) cardiac function (ejection fraction), (xvi) stress echocardiogram test results, (xvii) characteristics of the aortic geometry (e.g., cross-sectional area profile along the ascending and descending aorta, and Surface area and volume of the aorta, (xviii) a SYNTAX score, as described above, (xix) plaque burden of existing plaque, (xx) adverse plaque characteristics of existing plaque (e.g., presence of positive remodeling, presence of low attenuation plaque, presence of spotty calcification), (xxi) characteristics of the coronary branch geometry, (xxii) characteristics of coronary cross-sectional area, (xxiii) characteristics of coronary lumen intensity, e.g., intensity change along the centerline (slope of linearly-fitted intensity variation), (xxiv) characteristics of surface of coronary geometry, e.g., 3D surface curvature of geometry (Gaussian, maximum, minimum, mean), (xxv) characteristics of volume of coronary geometry, e.g., ratio of total coronary volume compared to myocardial volume, (xxvi) characteristics of coronary centerline, (xxvii) characteristics of coronary deformation, (xxviii) characteristics of existing plaque, and/or (xxix) characteristics of coronary hemodynamics derived from computational flow dynamics or invasive measurement. 
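By way of non-limiting illustration, the exemplary feature vector above (45%, 20 mm3, 130 HU, 60.5 mm, 70 dyne/cm2, 1500 mm3/sec, 400, 1 mm−1) could be assembled as a fixed-order numeric vector roughly as follows; the field names and units are assumptions for illustration.

```python
import numpy as np

# Hypothetical per-point feature ordering matching the example above:
# hematocrit, plaque burden, plaque HU, distance to ostium, wall shear stress,
# flow, Reynolds number, centerline curvature.
FEATURE_NAMES = [
    "hematocrit_fraction",        # 0.45 (45%)
    "plaque_burden_mm3",          # 20 mm^3
    "plaque_hu",                  # 130 HU
    "distance_to_ostium_mm",      # 60.5 mm
    "wall_shear_stress_dyn_cm2",  # 70 dyne/cm^2
    "flow_mm3_per_s",             # 1500 mm^3/s
    "reynolds_number",            # 400
    "centerline_curvature_per_mm" # 1 mm^-1
]

def make_feature_vector(point_features: dict) -> np.ndarray:
    """Pack named per-point measurements into a fixed-order numeric vector."""
    return np.array([float(point_features[name]) for name in FEATURE_NAMES])

example = make_feature_vector({
    "hematocrit_fraction": 0.45,
    "plaque_burden_mm3": 20.0,
    "plaque_hu": 130.0,
    "distance_to_ostium_mm": 60.5,
    "wall_shear_stress_dyn_cm2": 70.0,
    "flow_mm3_per_s": 1500.0,
    "reynolds_number": 400.0,
    "centerline_curvature_per_mm": 1.0,
})
print(example)
```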
In one embodiment, the characteristics of the coronary branch geometry may include one or more of: (1) total number of vessel bifurcations, and the number of upstream/downstream vessel bifurcations; (2) average, minimum, and maximum upstream/downstream cross-sectional areas; (3) distances (along the vessel centerline) to the centerline point of minimum and maximum upstream/downstream cross-sectional areas, (4) cross-sectional area of and distance (along the vessel centerline) to the nearest upstream/downstream vessel bifurcation, (5) cross-sectional area of and distance (along the vessel centerline) to the nearest coronary outlet and aortic inlet/outlet, (6) cross-sectional areas and distances (along the vessel centerline) to the downstream coronary outlets with the smallest/largest cross-sectional areas, and/or (7) upstream/downstream volumes of the coronary vessels. In one embodiment, the characteristics of coronary cross-sectional area may include one or more of: (1) cross-sectional lumen area along the coronary centerline, (2) cross-sectional lumen area to the power of N (where N can be determined from various sources of scaling laws such as Murray's law (N=1.5) and Uylings' study (N=1.165˜1.5)), (3) a ratio of lumen cross-sectional area with respect to the main ostia (LM, RCA) (e.g., measure of cross-sectional area at the LM ostium, normalized cross-sectional area of the left coronary by LM ostium area, measure of cross-sectional area at the RCA ostium, normalized cross-sectional area of the right coronary by RCA ostium area), (4) ratio of lumen cross-sectional area with respect to the main ostia to the power of N (where N can be determined from various sources of scaling laws such as Murray's law (N=1.5) and Uylings' study (N=1.165˜1.5)), (5) degree of tapering in cross-sectional lumen area along the centerline (based on sampled centerline points within a certain interval (e.g., twice the diameter of the vessel) and computation of the slope of the linearly-fitted cross-sectional area), (6) location of stenotic lesions (based on detecting minima of the cross-sectional area curve (e.g., detecting locations where the first derivative of the area curve is zero and the second derivative is positive, and smoothing the cross-sectional area profile to avoid detecting artifactual peaks) and computing the distance (parametric arc length of the centerline) from the main ostium), (7) length of stenotic lesions (computed based on the proximal and distal locations from the stenotic lesion, where cross-sectional area is recovered), (8) degree of stenotic lesions, by evaluating the degree of stenosis based on reference values of the smoothed cross-sectional area profile using Fourier smoothing or kernel regression, (9) location and number of lesions corresponding to 50%, 75%, and 90% area reduction, (10) distance from the stenotic lesion to the main ostia, and/or (11) irregularity (or circularity) of the cross-sectional lumen boundary. In one embodiment, the characteristics of the coronary centerline may include: (1) curvature (bending) of the coronary centerline, such as by computing the Frenet curvature, based on κ = |p′×p″| / |p′|³, where p is a coordinate of the centerline, and computing an inverse of the radius of a circumscribed circle along the centerline points, and/or (2) tortuosity (non-planarity) of the coronary centerline, such as by computing the Frenet torsion, based on τ = ((p′×p″)·p′′′) / |p′×p″|², where p is a coordinate of the centerline.
In one embodiment, calculation of the characteristics of coronary deformation may involve multi-phase CCTA (e.g., diastole and systole), including (1) distensibility of coronary artery over cardiac cycle, (2) bifurcation angle change over cardiac cycle, and/or (3) curvature change over cardiac cycle. In one embodiment, the characteristics of existing plaque may be calculated based on: (1) volume of plaque, (2) intensity of plaque, (3) type of plaque (calcified, non-calcified), (4) distance from the plaque location to ostium (LM or RCA), and/or (5) distance from the plaque location to the nearest downstream/upstream bifurcation. In one embodiment, the characteristics of coronary hemodynamics may be derived from computational flow dynamics or invasive measurement. For example, pulsatile flow simulation may be performed to obtain transient characteristics of blood, by using a lumped parameter coronary vascular model for downstream vasculatures, inflow boundary condition with coupling a lumped parameter heart model and a closed loop model to describe the intramyocardial pressure variation resulting from the interactions between the heart and arterial system during cardiac cycle. For example, the calculation may include one or more of: measured FFR, coronary flow reserve, pressure distribution, FFRct, mean wall-shear stress, oscillatory shear index, particle residence time, turbulent kinetic energy, Reynolds number, Womersley number, and/or local flow rate. Method500may then include associating the feature vector with the growth, shrinkage, or onset of plaque at each point of the patient-specific geometric model (step516). Method500may involve continuing to perform the above steps512,514,516, for each of a plurality of points in the patient-specific geometric model (step518), and for each of any number of patients for which a machine learning algorithm may be based (step520). Method500may also involve continuing to perform the above steps512,514,516, for each of a plurality of points in the patient-specific geometric model, and for each of any number of patients for which a machine learning algorithm may be based, across any additional time period or periods useful for generating information about the growth, shrinkage, or onset of plaque (i.e., the change and/or rate of change of plaque at each point of the model) (step522). Method500may then include training a machine learning algorithm to predict the probability of amounts of growth, shrinkage, or onset of plaque at the points from the feature vectors at the points (step524). Examples of machine learning algorithms suitable for performing this task may include support vector machines (SVMs), multi-layer perceptrons (MLPs), and/or multivariate regression (MVR) (e.g., weighted linear or logistic regression). In one embodiment, if training data causes the machine learning algorithm to predict a lower amount (e.g., size or extent) of plaque than what is detected, then the machine learning algorithm may be interpreted as predicting plaque shrinkage; if training data causes the machine learning algorithm to predict a higher amount (e.g., size or extent) of plaque than what is detected, then the machine learning algorithm may be interpreted as predicting plaque growth. 
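By way of non-limiting illustration, the interpretation rule described above (a lower predicted plaque amount than detected read as shrinkage, a higher one as growth) might be expressed as follows; the units and tolerance are assumptions for illustration.

```python
def interpret_plaque_prediction(predicted_mm3: float, detected_mm3: float, tol: float = 1e-6) -> str:
    """Label the model output relative to the detected plaque amount, per the rule above.

    A lower predicted amount than detected is read as shrinkage, a higher one as growth.
    The tolerance and the mm^3 units are illustrative assumptions.
    """
    if predicted_mm3 < detected_mm3 - tol:
        return "shrinkage"
    if predicted_mm3 > detected_mm3 + tol:
        return "growth"
    return "stable"

print(interpret_plaque_prediction(12.0, 20.0))  # shrinkage
print(interpret_plaque_prediction(25.0, 20.0))  # growth
```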
Method500may then include storing or otherwise saving the results of the machine learning algorithm (e.g., feature weights) to a digital representation, such as the memory or digital storage (e.g., hard drive, network drive) of a computational device, such as a computer, laptop, DSP, server, etc. of server systems106(step526). FIG.5Bis a block diagram of an exemplary method of using the machine learning system (e.g., machine learning system310executed on server systems106) for predicting, for a particular patient, the rate of onset, growth/shrinkage, of coronary lesions from vessel geometry, physiology, and hemodynamics, according to an exemplary embodiment of the present disclosure. In one embodiment, method550may include, for one or more patients (step552), obtaining a patient-specific geometric model of a portion of the patient's vasculature (step554), obtaining one or more estimates of physiological or phenotypic parameters of the patient (step556), and obtaining one or more estimates of biophysical hemodynamic characteristics of the patient (step558). Specifically, the step of obtaining a patient-specific geometric model of a portion of the patient's vasculature (step554) may include obtaining a patient-specific model of the geometry for one or more of the patient's blood vessels, myocardium, aorta, valves, plaques, and/or chambers. In one embodiment, this geometry may be represented as a list of points in space (possibly with a list of neighbors for each point) in which the space can be mapped to spatial units between points (e.g., millimeters). In one embodiment, this model may be derived by performing a cardiac CT imaging of the patient in the end diastole phase of the cardiac cycle. This image then may be segmented manually or automatically to identify voxels belonging to the aorta and the lumen of the coronary arteries. Inaccuracies in the geometry extracted automatically may be corrected by a human observer who compares the extracted geometry with the images and makes corrections as needed. Once the voxels are identified, the geometric model can be derived (e.g., using marching cubes). In one embodiment, the step of obtaining one or more estimates of physiological or phenotypic parameters of the patient (step556) may include obtaining a list of one or more estimates of physiological or phenotypic parameters of the patient, such as blood pressure, blood viscosity, in vitro blood test results (e.g., LDL/Triglyceride cholesterol level), patient age, patient gender, the mass of the supplied tissue, etc. These parameters may be global (e.g., blood pressure) or local (e.g., estimated density of the vessel wall at a location). In one embodiment, the physiological or phenotypic parameters may include, blood pressure, hematocrit level, patient age, patient gender, myocardial mass (e.g., derived by segmenting the myocardium in the image, and calculating the volume in the image and using an estimated density of 1.05 g/mL to estimate the myocardial mass), general risk factors of coronary artery disease (e.g., smoking, diabetes, hypertension, abdominal obesity, dietary habits, family history, etc.), and/or in vitro blood test results (e.g., LDL, Triglyceride cholesterol level). 
In one embodiment, the step of obtaining one or more estimates of biophysical hemodynamic characteristics of the patient (step558) may include obtaining a list of one or more estimates of biophysical hemodynamic characteristics from computational fluid dynamics analysis, such as wall-shear stress, oscillatory shear index, particle residence time, Reynolds number, Womersley number, local flow rate, and turbulent kinetic energy, etc. Specifically, the mean wall-shear stress may be defined as |(1/(T1−T0)) ∫_{T0}^{T1} t_s dt|, where t_s may be the wall shear stress vector, defined as the in-plane component of the surface traction vector. The oscillatory shear index (OSI) may be defined as (1/2)·(1 − |(1/(T1−T0)) ∫_{T0}^{T1} t_s dt| / ((1/(T1−T0)) ∫_{T0}^{T1} |t_s| dt)), which may be a measure of the uni-directionality of shear stress. The particle residence time may be a measure of the time it takes blood to be flushed from a specified fluid domain. The turbulent kinetic energy (TKE) may be a measure of the intensity of turbulence associated with eddies in turbulent flow, and may be characterized by the measured root-mean-square velocity fluctuation, and may be normalized by kinetic energy. The Reynolds number may be defined as ρUD/μ, where ρ: density of blood, U: average flow velocity, D: vessel diameter, and μ: dynamic viscosity. The Womersley number may be defined as (D/2)·√(ωρ/μ), where ω is the angular frequency (equal to 1/(cardiac cycle length)). Method550may include, for every point in the patient-specific geometric model (step560), creating for that point a feature vector comprising a numerical description of the geometry and biophysical hemodynamic characteristic at that point, and estimates of physiological or phenotypic parameters of the patient (step562). Global physiological or phenotypic parameters can be used in the feature vector of all points and local physiological or phenotypic parameters can change in the feature vector of different points. Method550may involve continuing to perform the above steps560,562, for each of a plurality of points in the patient-specific geometric model (step564). Method550may then include producing estimates of the probability and/or rate of the growth, shrinkage, or onset of plaque at each point in the patient-specific geometric model based on the stored machine learning results (stored at B,FIG.5A) (step566). Specifically, method550may use the saved results of the machine learning algorithm produced in the training mode of method500(e.g., feature weights) to produce estimates of the probability of growth, shrinkage, or onset (e.g., rates of growth/shrinkage) of plaque at each point in the patient-specific geometric model (e.g., by generating plaque estimates as a function of the feature vector at each point). These estimates may be produced using the same machine learning algorithm technique used in the training mode (e.g., the SVM, MLP, MVR technique). Method550may then include saving the estimates of the probability of the growth, shrinkage, or onset of plaque (step568), such as to the memory or digital storage (e.g., hard drive, network drive) of a computational device, such as a computer, laptop, DSP, server, etc., of server systems106, and communicating these patient-specific and location-specific predicted probabilities of lesion formation to a health care provider. FIG.6is a simplified block diagram of an exemplary computer system600in which embodiments of the present disclosure may be implemented, for example as any of the physician devices or servers102, third party devices or servers104, and server systems106.
A platform for a server600, for example, may include a data communication interface for packet data communication660. The platform may also include a central processing unit (CPU)620, in the form of one or more processors, for executing program instructions. The platform typically includes an internal communication bus610, program storage and data storage for various data files to be processed and/or communicated by the platform such as ROM630and RAM640, although the server600often receives programming and data via a communications network (not shown). The hardware elements, operating systems and programming languages of such equipment are conventional in nature, and it is presumed that those skilled in the art are adequately familiar therewith. The server600also may include input and output ports650to connect with input and output devices such as keyboards, mice, touchscreens, monitors, displays, etc. Of course, the various server functions may be implemented in a distributed fashion on a number of similar platforms, to distribute the processing load. Alternatively, the servers may be implemented by appropriate programming of one computer hardware platform. As described above, the computer system600may include any type or combination of computing systems, such as handheld devices, personal computers, servers, clustered computing machines, and/or cloud computing systems. In one embodiment, the computer system600may be an assembly of hardware, including a memory, a central processing unit (“CPU”), and/or optionally a user interface. The memory may include any type of RAM or ROM embodied in a physical storage medium, such as magnetic storage including floppy disk, hard disk, or magnetic tape; semiconductor storage such as solid state disk (SSD) or flash memory; optical disc storage; or magneto-optical disc storage. The CPU may include one or more processors for processing data according to instructions stored in the memory. The functions of the processor may be provided by a single dedicated processor or by a plurality of processors. Moreover, the processor may include, without limitation, digital signal processor (DSP) hardware, or any other hardware capable of executing software. The user interface may include any type or combination of input/output devices, such as a display monitor, touchpad, touchscreen, microphone, camera, keyboard, and/or mouse. Program aspects of the technology may be thought of as “products” or “articles of manufacture” typically in the form of executable code and/or associated data that is carried on or embodied in a type of machine readable medium. “Storage” type media include any or all of the tangible memory of the computers, processors or the like, or associated modules thereof, such as various semiconductor memories, tape drives, disk drives and the like, which may provide non-transitory storage at any time for the software programming. All or portions of the software may at times be communicated through the Internet or various other telecommunication networks. Such communications, for example, may enable loading of the software from one computer or processor into another, for example, from a management server or host computer of the mobile communication network into the computer platform of a server and/or from a server to the mobile device. 
Thus, another type of media that may bear the software elements includes optical, electrical and electromagnetic waves, such as used across physical interfaces between local devices, through wired and optical landline networks and over various air-links. The physical elements that carry such waves, such as wired or wireless links, optical links or the like, also may be considered as media bearing the software. As used herein, unless restricted to non-transitory, tangible “storage” media, terms, such as computer or machine “readable medium” refer to any medium that participates in providing instructions to a processor for execution. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims. | 62,254 |
11861832 | The figures depict an embodiment of the present disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the disclosure described herein. DETAILED DESCRIPTION Some embodiments of this disclosure, illustrating all its features, will now be discussed in detail. The words “receiving,” “resampling,” “detecting,” “identifying,” “performing,” “determining,” and other forms thereof, are intended to be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item(s) or meant to be limited to only the listed item(s). It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. Although any system and methods similar or equivalent to those described herein can be used in the practice or testing of embodiments of the present disclosure, the exemplary, system and methods are now described. The disclosed embodiments are merely examples of the disclosure, which may be embodied in various forms. Various modifications to the embodiment will be readily apparent to those skilled in the art and the generic principles herein may be applied to other embodiments. However, one of ordinary skill in the art will readily recognize that the present disclosure is not intended to be limited to the embodiments described, but is to be accorded the widest scope consistent with the principles and features described herein. The present subject matter discloses a system and a method for determining a brock score. Typically, a doctor has to manually identify nodules in the CT scan image. Once the nodule is identified, the doctor manually performs analysis of the nodule to determine if the nodule is cancerous or not. This is a cumbersome and a time-consuming task. More importantly, the present invention discloses an efficient, and an automatic process for determining a brock score. The present invention determines the brock score in a real-time based on an analysis of a CT scan image related to a chest region of a patient. Further, the present invention generates a report comprising the brock score and other relevant information of the patient. Furthermore, the present invention provides remote assessment of the CT scan image. This helps to provide consultation to the patient remotely. Initially, the CT scan image of the patient may be received. Further, a nodule in the CT scan image may be detected. Subsequently, a region of interest associated with the nodule may be determined. Further, the brock score may be determined automatically based on identifying a plurality of characteristics associated with the region of interest. In one aspect, the one or more characteristics may be identified using deep learning model. In one embodiment, the present invention is configured to compute the brock score automatically using the deep learning model. While aspects of described system and method for determining a brock score may be implemented in any number of different computing systems, environments, and/or configurations, the embodiments are described in the context of the following exemplary system. Referring now toFIG.1, a network implementation100of a system102for determining a brock score is disclosed. 
It may be noted that one or more users may access the system102through one or more user devices104-1,104-2. . .104-N, collectively referred to as user devices104, hereinafter, or applications residing on the user devices104. In one aspect, the one or more users may comprise a health practitioner, a doctor, a lab assistant, a radiologist and the like. Although the present disclosure is explained considering that the system102is implemented on a server, it may be understood that the system102may be implemented in a variety of computing systems, such as a laptop computer, a desktop computer, a notebook, a workstation, a virtual environment, a mainframe computer, a server, a network server, a cloud-based computing environment. It will be understood that the system102may be accessed by multiple users through one or more user devices104-1,104-2. . .104-N. In one implementation, the system102may comprise the cloud-based computing environment in which the user may operate individual computing systems configured to execute remotely located applications. Examples of the user devices104may include, but are not limited to, a portable computer, a personal digital assistant, a handheld device, and a workstation. The user devices104are communicatively coupled to the system102through a network106. In one implementation, the network106may be a wireless network, a wired network, or a combination thereof. The network106can be implemented as one of the different types of networks, such as intranet, local area network (LAN), wide area network (WAN), the internet, and the like. The network106may either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another. Further, the network106may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like. In one embodiment, the system102may include at least one processor108, an input/output (I/O) interface110, and a memory112. The at least one processor108may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, Central Processing Units (CPUs), state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the at least one processor108is configured to fetch and execute computer-readable instructions stored in the memory112. The I/O interface110may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface110may allow the system102to interact with the user directly or through the client devices104. Further, the I/O interface110may enable the system102to communicate with other computing devices, such as web servers and external data servers (not shown). The I/O interface110can facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O interface110may include one or more ports for connecting a number of devices to one another or to another server. 
The memory112may include any computer-readable medium or computer program product known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or nonvolatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, Solid State Disks (SSD), optical disks, and magnetic tapes. The memory112may include routines, programs, objects, components, data structures, etc., which perform particular tasks or implement particular abstract data types. The memory112may include programs or coded instructions that supplement applications and functions of the system102. In one embodiment, the memory112, amongst other things, serves as a repository for storing data processed, received, and generated by one or more of the programs or the coded instructions. As there are various challenges observed in the existing art, the challenges necessitate the need to build the system102for determining a brock score. At first, a user may use the user device104to access the system102via the I/O interface110. The user may register the user devices104using the I/O interface110in order to use the system102. In one aspect, the user may access the I/O interface110of the system102. The detail functioning of the system102is described below with the help of figures. The present subject matter describes the system102for determining a brock score. The system102may monitor a Computed Tomography (CT) scan image in real-time. The CT scan image may be monitored using deep learning and image processing technique to determine the brock score of a patient. In order to determine the brock score, initially, the system102may receive the CT scan image related to a chest region of the patient. The CT scan image may be captured using a scanning device i.e. a medical scanner. The CT scan image may be a three-dimensional image. The CT scan image may be received from the user via the user device104. The user may be one of a health practitioner, a doctor, a lab assistant, a radiologist and the like. In one aspect, the CT scan image may be a Non-contrast CT series with axial cuts and soft reconstruction kernel which covers an entire Lung. The CT scan image may be one non-contrast CT series with consistently spaced axial slices. The CT scan image may comprise minimum of 40 axial slices in the series. The CT scan image may be available in a Digital Imaging and Communications in Medicine (DICOM) format. In one example, a maximum thickness of the CT scan image may be 6 mm. In one embodiment, the system102may validate if the CT scan image is related to the chest region or not. The system102may use a machine learning model to validate the CT scan image. The machine learning model may be trained continuously in real-time using training data comprising historical chest region CT scan images associated with a set of patients. In an aspect, if the CT scan image is validated as the chest region CT scan, then the system102may proceed to analyze the CT scan image to determine the brock score. In another aspect, if the CT scan image is not related to the chest region or if there is no plain axial series, then the CT scan image may not be processed further. The system102may transmit a response to the user indicating that the uploaded series or the CT scan image is not valid and recommend rescanning. In one embodiment, the system102may comprise a trained data model. 
The trained data model may be trained using historical data related to previous CT scans of the patient, one or more CT scans associated with a set of patients and the like. In one example, the trained data may comprise dataset containing 120,000 Chest CTs used for training and internally validating algorithms. The dataset may be referred to as ‘development dataset’. The development dataset may be divided into training dataset and internal validation dataset using a 4:1 split. The resultant validation dataset (20% of the entire data) may be used to estimate the performance of the trained data model and for hyper-parameter tuning. In an aspect, the hyperparameter tuning may correspond to finding the optimal hyperparameters for any given machine learning algorithm. In one aspect, the dataset in the trained data model may be large that results into multiple advantages. The dataset may comprise number of scans for all target abnormalities, allowing the development of accurate algorithm. An adequate number of control scans with various non-target abnormalities and normal variations may be likely to be present in the dataset. It may reduce the chances that these occurrences will negatively impact performance when the algorithm is deployed in the real world on previously unseen data. The selection of a large number of sources for the training data, rather than a large amount of data from a single site, may be advantageous because it allows the algorithms to be trained on the CT scan images from a wide variety of device manufacturers and CT protocols, without manually specifying the model name and specifications of the device. In one embodiment, the system102may automate the checking of the DICOM for age, body part, contrast/non-contrast, slice thickness, view, and kernel. The system102may use a separate module called a series classifier which is described in detail in the preprocessing section. Further, the system102may check presence of a corresponding radiology report i.e., a ground truth, by matching the patient IDs. If no report is found, the CT scan image may be excluded. Subsequently, the system102may automate the checking of radiology report for the age. The system102may identify a number of cases which are labelled as CHEST in the DICOM attribute but are not actually Chest CT scan images, such CT scan images may be identified using the trained series classifier and not used in training or testing (these can be considered outliers). In one embodiment, only requirement for training the system102may be the presence of the DICOM data and a text report. Once these requirements are met, the concept of missing values or missing data may not apply as it does for other machine learning algorithms. There may be no other exclusions from the training dataset. In one aspect, an automated natural language processing based labeling approach may be chosen as the primary method for generating the ground truth. Additionally, a number of pixel-level hand annotations may be used to either further improve accuracy or to provide input to a segmentation algorithm. It may be noted that an intended use of the system102is to aid in the interpretation of the Chest CT images, therefore the labeling method that largely depends on the radiology reports, which are the ground truth for these images, is appropriate. Each Chest CT scan image in the training dataset may have a single corresponding the ground truth, generated during the normal course of clinical care. 
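By way of non-limiting illustration, the 4:1 division of the development dataset into training and internal validation sets described above might be performed roughly as follows; the use of scikit-learn and the patient-level grouping are assumptions for illustration.

```python
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical scan table: one entry per Chest CT, grouped by an anonymized patient ID so
# that all scans of a patient fall on the same side of the split.
scan_ids = [f"scan_{i}" for i in range(1000)]
patient_ids = [f"patient_{i // 2}" for i in range(1000)]

# 4:1 split -> 80% training, 20% internal validation (used for hyper-parameter tuning).
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, valid_idx = next(splitter.split(scan_ids, groups=patient_ids))
print(len(train_idx), len(valid_idx))  # roughly 800 / 200
```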
The ground truth includes at least the radiology report and a biopsy report where available. The ground truth is used for training the algorithm. The Natural Language Processing (NLP) algorithms may be developed based on rules/dictionaries, trained with machine learning techniques, or a combination of the two approaches. A rule-based NLP algorithm may use a list of manually created rules to parse the unorganized content and structure it. A Machine Learning (ML) based NLP algorithm, on the other hand, may automatically generate the rules when trained on a large annotated dataset. The rule-based NLP algorithm may be chosen over a machine-learning based NLP algorithm for the purpose of labeling radiology reports. The rule-based NLP algorithm may have a few advantages, including that clinical knowledge can be manually incorporated into the rule-based NLP algorithm. In order to capture this knowledge in the ML based algorithm, a huge amount of annotation may be required. Further, rules may be readily added or modified to accommodate a new set of target findings in the rule-based NLP algorithm. Once the CT scan image is received, the system102may apply a Gaussian smoothing method on the CT scan image. The Gaussian smoothing method may be configured to counteract noise. In other words, the Gaussian smoothing method may reduce image noise and enhance a structure of the CT scan image. In one aspect, the Gaussian smoothing may be applied in the z dimension (i.e., the longitudinal axis). In one example, a Gaussian kernel used for the Gaussian smoothing may have a sigma of 1 mm in the z dimension, and 0 in the other dimensions. The Gaussian smoothing may have a negligible effect on CT scan images with a slice thickness greater than 2 mm, as the Gaussian kernel decays by 95% at 2*sigma (=2 mm). In one aspect, the Gaussian smoothing may help to remove noise from the CT scan image. Subsequently, the system102may resample the CT scan image into a plurality of slices. The CT scan image may be resampled using a bilinear interpolation. The bilinear interpolation may use the distance-weighted average of the four nearest pixel values to estimate a new pixel value. In one aspect, the system102may resample the CT scan image so that its slice thickness is around 2.5 mm. The system102may obtain a resampling factor by dividing 2.5 by the series' slice thickness and rounding the result to an integer. The rounding may be used to ensure that there are no resampling artifacts. Further, the system102may detect a nodule on one or more of the plurality of slices. The nodule may be detected based on an analysis of the plurality of slices using a first deep learning model. The first deep learning model may be trained using historical CT scans and historical nodules associated with the set of patients. The first deep learning model may detect patterns in the historical CT scans and process the one or more slices. The patterns may be related to the detection of the historical nodules on the historical CT scans. The processing of the one or more slices may comprise analysis of the slices using the patterns to detect the nodule on the one or more slices. In one aspect, the first deep learning model may be configured to determine whether the nodule on the one or more slices matches with at least one of the historical nodules. The system102may receive an output from the first deep learning model indicating whether the nodule matches with the historical nodules.
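By way of non-limiting illustration, the Gaussian smoothing along z (sigma of 1 mm) and the resampling toward approximately 2.5 mm slices described above might be sketched roughly as follows; the use of SciPy and linear interpolation along z are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def preprocess_volume(volume: np.ndarray, slice_thickness_mm: float, target_mm: float = 2.5):
    """Smooth along z and resample toward ~2.5 mm slices, following the text above.

    volume: 3D array ordered (z, y, x); the voxel spacing along z is slice_thickness_mm.
    """
    # Gaussian kernel with sigma = 1 mm along z only (0 in the other dimensions).
    sigma_z_voxels = 1.0 / slice_thickness_mm
    smoothed = gaussian_filter(volume.astype(np.float32), sigma=(sigma_z_voxels, 0.0, 0.0))

    # Integer resampling factor = round(2.5 / slice thickness); rounding avoids artifacts.
    factor = max(1, int(round(target_mm / slice_thickness_mm)))

    # Downsample along z by that factor using order-1 (linear) interpolation.
    resampled = zoom(smoothed, zoom=(1.0 / factor, 1.0, 1.0), order=1)
    return resampled, factor

volume = np.random.rand(120, 64, 64)                 # placeholder scan with 1.25 mm slices
out, factor = preprocess_volume(volume, slice_thickness_mm=1.25)
print(factor, out.shape)                             # 2, (60, 64, 64)
```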
In one embodiment, the first deep learning model may analyze the plurality of slices using training data, i.e., the historical CT scans and the historical nodules, in order to detect the nodule. In one aspect, the nodule may be present in multiple slices. Once the nodule matches with one of the historical nodules, the system102may detect it as a confirmed nodule. In one aspect, the system102may comprise a Se-ResNeXt50 model to detect the nodule. The Se-ResNeXt50 may be a modified version of ResNeXt50, a popular neural network architecture which has 50 layers with increased inter-layer connectivity. The model may have 50 convolutional layers, modified to take in regional information and softmax-based confidences. Further, the system102may comprise a U-Net and FPN. The U-Net and FPN may be a popular segmentation architecture for biomedical segmentation. The U-Net and FPN may have five downsampling and five upsampling blocks of convolutional layers with skip connections. Referring now toFIG.2, the structure of the nodule is shown, in accordance with an embodiment of the present subject matter. In one embodiment, the CT scan image200of the patient may be received. Further, the system102may detect the nodule202. In one example, the nodule may be a rounded or irregular opacity, well or poorly defined, measuring up to 3 cm in diameter. In one embodiment, the system102may use the neural network architecture for slice-wise inference. It is an FPN with an SE-ResNext-50 backbone with classification and segmentation heads. Weights of the Convolutional Neural Network (CNN) used to process each slice may be ‘tied’ and thus share the same weights. Slice-level classification output may be pooled into scan-level output using the following operation, as shown in equation 1: P_scan = Σ_{i=0}^{#slices} w_i * P_slice,i (Equation 1), wherein w_i may be softmax weights computed as shown in equation 2: w_i = exp(P_slice,i) / Σ_{j=0}^{#slices} exp(P_slice,j) (Equation 2). In one aspect, this is essentially a softer version of the max pooling used in CNNs. The operation may be referred to as ‘softmaxpooling’. The model architecture may comprise three outputs: a scan-level probability, a list of slice-level probabilities of the presence of nodules, and a 3D segmentation mask of nodules. Referring again toFIG.1, once the nodule is detected, the system102may identify a region of interest associated with the nodule. The region of interest may be identified based on an analysis of the nodule. The system102may use an image processing technique to identify the region of interest. The system102may analyze the plurality of slices using the image processing technique and mark the region of interest. In one aspect, the region of interest may indicate an abnormality on each slice. In one embodiment, a small part of the CT scan image may be annotated at a pixel level, which serves as secondary labels to the training algorithms. It may include region of interest annotation (lung, diaphragm, mediastinum and ribs) as well as abnormality pixel-wise annotation, which are then used to derive the Region of Interest (ROI) level annotations. In one example, 5% of the Chest CT scan images may be duplicated, as a test for competency of the annotators. If there is less than 75% concordance, the CT scan image will be re-annotated. These discrepancies may be tracked as a way to continually test the accuracy of the annotations and the competency of the annotators.
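By way of non-limiting illustration, the ‘softmaxpooling’ of equations 1 and 2 above can be implemented in a few lines; subtracting the maximum before exponentiation is an assumption for numerical stability and does not change the weights.

```python
import numpy as np

def softmax_pooling(slice_probs) -> float:
    """Pool slice-level nodule probabilities into a scan-level probability.

    Implements equations 1 and 2 above: w_i = exp(P_slice,i) / sum_j exp(P_slice,j)
    and P_scan = sum_i w_i * P_slice,i -- a softer version of max pooling.
    """
    p = np.asarray(slice_probs, dtype=np.float64)
    w = np.exp(p - p.max())          # numerically stable exponentials
    w /= w.sum()                     # softmax weights (equation 2)
    return float(np.sum(w * p))      # weighted scan-level probability (equation 1)

print(softmax_pooling([0.05, 0.10, 0.92, 0.15]))  # pulled toward the most suspicious slice
```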
Once the region of interest is identified, the system102may perform a nodule segmentation to remove an area surrounding the region of interest. The nodule segmentation may be performed using the first deep learning model. The nodule segmentation may correspond to masking of the region of interest. In an aspect, once the nodule is detected and the region of interest is identified, the system102may mask the region of interest to remove the area. The area may be black or air areas and fatty tissues around the region of interest. Removing these areas may help to focus on the region of interest. The masking may help to improve performance of a detection algorithm. Further, the system102may automatically determine a plurality of characteristics associated with the region of interest. The plurality of characteristics may be identified using a second deep learning model. In an aspect, the second deep learning model may be trained using historical characteristics identified for the historical nodules associated with the set of patients. The historical characteristics may include a historical size of nodules, a historical location of nodules, a historical spiculation of nodules, and a historical texture of nodules associated with the set of patients. In this aspect, historical data associated with the set of patients may be used to train the second deep learning model. The second deep learning model may analyze the region of interest using the historical data in order to determine the plurality of characteristics of the region of interest. The system102may further receive an output related to the plurality of characteristics from the second deep learning model. In one embodiment, the system102may receive the user's feedback in order to train the second deep learning model. The continuous training of the second deep learning model may help in an accurate determination of the plurality of characteristics. In an aspect, the plurality of characteristics may be determined in real-time. Further, the plurality of characteristics may comprise a location of the nodule, a texture of the nodule, a size of the nodule, a spiculation of the nodule, and the number of nodules present in the chest CT scan image. In one embodiment, the location of the nodule may be one of an upper right area of the chest region, an upper left area of the chest region, a lower right area of the chest region and a lower left area of the chest region. The texture of the nodule may be one of a solid nodule, a partially solid nodule and a non-solid nodule. The size of the nodule may correspond to a diameter of the nodule. The spiculation of the nodule may indicate a border of the nodule. Further, the number of nodules present in the CT scan image may be computed by the system102. In one embodiment, the system102may use a Convolutional Neural Network (CNN) module to determine the plurality of characteristics. In one example, the diameter of the nodule may be further used to determine a total volume of the nodule and an area covered by the nodule. Upon determining the plurality of characteristics, the system102may determine a brock score for the patient. The brock score may be determined automatically in real-time. The brock score may be determined based on the plurality of characteristics and demographic data of the patient. The demographic data may include a patient name, a patient age, a patient gender, and the patient's family medical history. In an aspect, the system102may receive the demographic data from the patient, the patient's guardian and the like.
In another aspect, the system102may capture the patient's name, the patient's age and the patient's gender by analyzing the CT scan image using a natural language processing technique. The patient's family medical history may correspond to the history of the patient's family regarding lung cancer. In one example, the brock score may be valid for: Age >18 yrs, Nodule Size 1 mm to 30 mm, Nodule Count <100. In one embodiment, the system102may assign a weightage to each of the plurality of characteristics. Further, the weightage of each characteristic and the demographic data of the patient may be used to determine the brock score in real-time. In one aspect, the brock score may indicate a probability of a lung cancer related to the nodule detected in the chest CT scan image of the patient. Subsequently, the system102may compare the brock score with a brock threshold. Based on the comparison, the system102may recommend a next course of action for the patient. The next course of action may comprise a follow-up with the health practitioner. In one embodiment, if the brock score is greater than or equal to the brock threshold, it may indicate that the nodule detected in the CT scan image is cancerous. In another embodiment, if the brock score is less than the brock threshold, it may indicate that the nodule is non-cancerous. In one example, the health practitioner may use the brock score to determine an appropriate follow-up time and management of the nodules detected on the CT scan image. In one embodiment, the system102may detect emphysema in the CT scan image. The emphysema may correspond to a destruction of lung tissues. The emphysema may be detected using the first deep learning model. The first deep learning model may be continuously trained using previous emphysema data associated with the set of patients. The detection of the emphysema may indicate that there is a low chance that the nodule is cancerous. The emphysema may be used to determine the brock score. In one aspect, a weightage may be assigned to the emphysema. Further, the system102may use the emphysema, the plurality of characteristics and the demographic data to determine the brock score. Further, the system102may generate a report of the patient. The report may be generated in real-time. The report may comprise the detected nodule, the detected emphysema, the brock score and the next course of action. In one aspect, the health practitioner may analyze the report and provide consultation to the patient remotely. In one aspect, the system102may notify the patient regarding the follow-up with the health practitioner. The follow-up may be notified based on the brock score. In one aspect, the system102may allow the health practitioner to provide feedback on the report. The health practitioner may be allowed to annotate the nodule detected in the CT scan image. In one embodiment, the system102may comprise a false positive detection module. The false positive detection module may determine false positive nodules on each slice. The false positive nodules may be determined using a 3-dimensional CNN model. The false positive nodules may be determined by comparing the nodules with the historical nodules. In one embodiment, the system102may use the location and the diameter of the nodule to determine the false positive nodules. Referring now toFIG.3, a block diagram for determining a brock score is shown, in accordance with an embodiment of the present subject matter.
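Before turning to FIG. 3, the weighted combination described above can be sketched as follows. The weights, intercept, and decision threshold below are hypothetical placeholders, not values from the source or from the published Brock model; only the overall pattern (weighted characteristics plus demographic data, compared with a threshold) reflects the description.

```python
import math

# Hypothetical weightages; the source does not give numeric values.
WEIGHTS = {
    "age": 0.03, "female": 0.6, "family_history": 0.7, "emphysema": 0.5,
    "size_mm": 0.11, "upper_lobe": 0.6, "nodule_count": -0.05,
    "spiculation": 0.8, "part_solid": 0.4, "non_solid": -0.1,
}
INTERCEPT = -6.0        # hypothetical
BROCK_THRESHOLD = 0.10  # hypothetical decision threshold

def brock_score(features):
    """Combine weighted nodule characteristics and demographic data into a
    malignancy probability, then compare it with a threshold to suggest the
    next course of action."""
    z = INTERCEPT + sum(WEIGHTS[k] * float(v) for k, v in features.items())
    probability = 1.0 / (1.0 + math.exp(-z))
    action = "refer for follow-up" if probability >= BROCK_THRESHOLD else "routine surveillance"
    return probability, action

print(brock_score({"age": 63, "female": 1, "family_history": 0, "emphysema": 1,
                   "size_mm": 9, "upper_lobe": 1, "nodule_count": 2,
                   "spiculation": 0, "part_solid": 0, "non_solid": 0}))
```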
In one embodiment, at block302, a CT scan image related to a chest region of a patient may be received. Further, at block304, the CT scan image may be analyzed using deep learning to detect a nodule. The nodule may be detected on one or more slices from a plurality of slices of the CT scan image. Further, at block306, a nodule segmentation may be performed to remove an area surrounding the nodule. The nodule segmentation may help to focus on the nodule. Upon nodule segmentation, a region of interest related to the nodule may be identified. Subsequently, at block308, a plurality of characteristics related to the nodule may be identified automatically using the deep learning. The plurality of characteristics may include a size, a location, a spiculation, a texture, and the number of nodules present in the CT scan image. The plurality of characteristics and demographic data at block310may be used to determine the brock score at block312. The brock score may be used to predict a lung cancer risk for the patient. Referring now toFIG.4, a method400for determining a brock score for a patient is shown, in accordance with an embodiment of the present subject matter. The method400may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types. The order in which the method400is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method400or alternate methods for determining the brock score. Additionally, individual blocks may be deleted from the method400without departing from the spirit and scope of the subject matter described herein. Furthermore, the method400for determining the brock score can be implemented in any suitable hardware, software, firmware, or combination thereof. However, for ease of explanation, in the embodiments described below, the method400may be considered to be implemented in the above-described system102. At block402, a Computed Tomography (CT) scan image related to a chest region of the patient may be received. At block404, the CT scan image may be resampled into a plurality of slices using a bilinear interpolation. At block406, a nodule may be detected on one or more of the plurality of slices based on an analysis of the plurality of slices using a first deep learning model. In an aspect, the first deep learning model may be trained using historical CT scans and historical nodules associated with a set of patients. The first deep learning model may detect patterns related to the historical nodules in the historical CT scans and process the one or more slices. At block408, a region of interest associated with the nodule may be identified based on an analysis of the nodule using an image processing technique. At block410, a nodule segmentation may be performed to remove an area surrounding the region of interest. In an aspect, the nodule segmentation may be performed using the first deep learning model. At block412, a plurality of characteristics associated with the region of interest may be automatically determined using a second deep learning model.
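The resampling of block 404 can be sketched as follows, assuming the scan is available as a 3-D array with known voxel spacing; scipy.ndimage.zoom with order=1 performs the linear (bilinear per slice) interpolation, and the target spacing below is an assumption rather than a value from the source.

```python
import numpy as np
from scipy.ndimage import zoom

def resample_volume(volume, spacing, target_spacing=(1.0, 1.0, 1.0)):
    """Resample a CT volume to a uniform voxel spacing with order-1 (linear)
    interpolation, yielding the plurality of slices the detection model
    consumes."""
    spacing = np.asarray(spacing, dtype=np.float64)
    factors = spacing / np.asarray(target_spacing, dtype=np.float64)
    return zoom(volume, factors, order=1)  # order=1 -> linear interpolation

# Example: a 40-slice scan with 2.5 mm slices and 0.7 mm in-plane pixels.
vol = np.random.randint(-1000, 400, size=(40, 128, 128)).astype(np.float32)
resampled = resample_volume(vol, spacing=(2.5, 0.7, 0.7))
print(resampled.shape)  # roughly (100, 90, 90) after resampling to 1 mm voxels
```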
In an aspect, the second deep learning model may be trained using historical characteristics including a historical size of nodules, historical location of nodules, a historical spiculation of nodules, and a historical texture of nodules associated with the set of patients. The plurality of characteristics may comprise a location of the nodule, a texture of the nodule, a size of the nodule, a spiculation of the nodule, and the number of nodules present in the CT scan image. At block414, a brock score for the patient may be determined automatically in real-time based on the plurality of characteristics and demographic data of the patient. Exemplary embodiments discussed above may provide certain advantages. Though not required to practice aspects of the disclosure, these advantages may include those provided by the following features. Some embodiments of the system and the method enable identifying characteristics of a nodule automatically using deep learning. Some embodiments of the system and the method enable detecting a nodule and emphysema using a Convolutional Neural Network (CNN) model. Some embodiments of the system and the method enable identifying false positive nodules using a 3-dimensional CNN model. Some embodiments of the system and the method enable computing a brock score using a plurality of characteristics and a deep learning technique, which further helps to predict a lung cancer risk for a patient. Some embodiments of the system and the method enable an improvement in a traditional method of identifying the brock score. Some embodiments of the system and the method enable an efficient and accurate process using a large dataset. Although implementations for methods and systems for determining a brock score have been described in language specific to structural features and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as examples of implementations for determining the brock score. | 33,316 |
11861833 | DETAILED DESCRIPTION Although several embodiments, examples, and illustrations are disclosed below, it will be understood by those of ordinary skill in the art that the inventions described herein extend beyond the specifically disclosed embodiments, examples, and illustrations and includes other uses of the inventions and obvious modifications and equivalents thereof. Embodiments of the inventions are described with reference to the accompanying figures, wherein like numerals refer to like elements throughout. The terminology used in the description presented herein is not intended to be interpreted in any limited or restrictive manner simply because it is being used in conjunction with a detailed description of certain specific embodiments of the inventions. In addition, embodiments of the inventions can comprise several novel features and no single feature is solely responsible for its desirable attributes or is essential to practicing the inventions herein described. Introduction Disclosed herein are systems, methods, and devices for medical image analysis, diagnosis, risk stratification, decision making and/or disease tracking. Coronary heart disease affects over 17.6 million Americans. The current trend in treating cardiovascular health issues is generally two-fold. First, physicians generally review a patient's cardiovascular health from a macro level, for example, by analyzing the biochemistry or blood content or biomarkers of a patient to determine whether there are high levels of cholesterol elements in the bloodstream of a patient. In response to high levels of cholesterol, some physicians will prescribe one or more drugs, such as statins, as part of a treatment plan in order to decrease what is perceived as high levels of cholesterol elements in the bloodstream of the patient. The second general trend for currently treating cardiovascular health issues involves physicians evaluating a patient's cardiovascular health through the use of angiography to identify large blockages in various arteries of a patient. In response to finding large blockages in various arteries, physicians in some cases will perform an angioplasty procedure wherein a balloon catheter is guided to the point of narrowing in the vessel. After properly positioned, the balloon is inflated to compress or flatten the plaque or fatty matter into the artery wall and/or to stretch the artery open to increase the flow of blood through the vessel and/or to the heart. In some cases, the balloon is used to position and expand a stent within the vessel to compress the plaque and/or maintain the opening of the vessel to allow more blood to flow. About 500,000 heart stent procedures are performed each year in the United States. However, a recent federally funded $100 million study calls into question whether the current trends in treating cardiovascular disease are the most effective treatment for all types of patients. The recent study involved over 5,000 patients with moderate to severe stable heart disease from 320 sites in 37 countries and provided new evidence showing that stents and bypass surgical procedures are likely no more effective than drugs combined with lifestyle changes for people with stable heart disease. Accordingly, it may be more advantageous for patients with stable heart disease to forgo invasive surgical procedures, such as angioplasty and/or heart bypass, and instead be prescribed heart medicines, such as statins, and certain lifestyle changes, such as regular exercise. 
This new treatment regimen could affect thousands of patients worldwide. Of the estimated 500,000 heart stent procedures performed annually in the United States, it is estimated that a fifth of those are for people with stable heart disease. It is further estimated that 25% of the estimated 100,000 people with stable heart disease, or roughly 23,000 people, are individuals that do not experience any chest pain. Accordingly, over 20,000 patients annually could potentially forgo invasive surgical procedures or the complications resulting from such procedures. To determine whether a patient should forego invasive surgical procedures and opt instead for a drug regimen and/or to generate a more effective treatment plan, it can be important to more fully understand the cardiovascular disease of a patient. Specifically, it can be advantageous to better understand the arterial vessel health of a patient. For example, it is helpful to understand whether plaque build-up in a patient is mostly fatty matter build-up or mostly calcified matter build-up, because the former situation may warrant treatment with heart medicines, such as statins, whereas in the latter situation a patient should be subject to further periodic monitoring without prescribing heart medicine or implanting any stents. However, if the plaque build-up is significant enough to cause severe stenosis or narrowing of the arterial vessel such that blood flow to heart muscle might be blocked, then an invasive angioplasty procedure to implant a stent may likely be required because heart attack or sudden cardiac death (SCD) could occur in such patients without the implantation of a stent to enlarge the vessel opening. Sudden cardiac death is one of the largest causes of natural death in the United States, accounting for approximately 325,000 adult deaths per year and responsible for nearly half of all deaths from cardiovascular disease. For males, SCD is twice as common as compared to females. In general, SCD strikes people in the mid-30 to mid-40 age range. In over 50% of cases, sudden cardiac arrest occurs with no warning signs. With respect to the millions suffering from heart disease, there is a need to better understand the overall health of the artery vessels within a patient beyond just knowing the blood chemistry or content of the blood flowing through such artery vessels. For example, in some embodiments of systems, devices, and methods disclosed herein, arteries with “good” or stable plaque or plaque comprising hardened calcified content are considered non-life threatening to patients whereas arteries containing “bad” or unstable plaque or plaque comprising fatty material are considered more life threatening because such bad plaque may rupture within arteries thereby releasing such fatty material into the arteries. Such a fatty material release in the blood stream can cause inflammation that may result in a blood clot. A blood clot within an artery can prevent blood from traveling to heart muscle thereby causing a heart attack or other cardiac event. Further, in some instances, it is generally more difficult for blood to flow through fatty plaque buildup than it is for blood to flow through calcified plaque build-up. Therefore, there is a need for better understanding and analysis of the arterial vessel walls of a patient. 
Further, while blood tests and drug treatment regimens are helpful in reducing cardiovascular health issues and mitigating against cardiovascular events (for example, heart attacks), such treatment methodologies are not complete or perfect in that such treatments can misidentify and/or fail to pinpoint or diagnose significant cardiovascular risk areas. For example, the mere analysis of the blood chemistry of a patient will not likely identify that a patient has artery vessels having significant amounts of fatty deposit material bad plaque buildup along a vessel wall. Similarly, an angiogram, while helpful in identifying areas of stenosis or vessel narrowing, may not be able to clearly identify areas of the artery vessel wall where there is significant buildup of bad plaque. Such areas of buildup of bad plaque within an artery vessel wall can be indicators of a patient at high risk of suffering a cardiovascular event, such as a heart attack. In certain circumstances, areas where there exist areas of bad plaque can lead to a rupture wherein there is a release of the fatty materials into the bloodstream of the artery, which in turn can cause a clot to develop in the artery. A blood clot in the artery can cause a stoppage of blood flow to the heart tissue, which can result in a heart attack. Accordingly, there is a need for new technology for analyzing artery vessel walls and/or identifying areas within artery vessel walls that comprise a buildup of plaque whether it be bad or otherwise. Various systems, methods, and devices disclosed herein are directed to embodiments for addressing the foregoing issues. In particular, various embodiments described herein relate to systems, methods, and devices for medical image analysis, diagnosis, risk stratification, decision making and/or disease tracking. In some embodiments, the systems, devices, and methods described herein are configured to utilize non-invasive medical imaging technologies, such as a CT image for example, which can be inputted into a computer system configured to automatically and/or dynamically analyze the medical image to identify one or more coronary arteries and/or plaque within the same. For example, in some embodiments, the system can be configured to utilize one or more machine learning and/or artificial intelligence algorithms to automatically and/or dynamically analyze a medical image to identify, quantify, and/or classify one or more coronary arteries and/or plaque. In some embodiments, the system can be further configured to utilize the identified, quantified, and/or classified one or more coronary arteries and/or plaque to generate a treatment plan, track disease progression, and/or a patient-specific medical report, for example using one or more artificial intelligence and/or machine learning algorithms. In some embodiments, the system can be further configured to dynamically and/or automatically generate a visualization of the identified, quantified, and/or classified one or more coronary arteries and/or plaque, for example in the form of a graphical user interface. Further, in some embodiments, to calibrate medical images obtained from different medical imaging scanners and/or different scan parameters or environments, the system can be configured to utilize a normalization device comprising one or more compartments of one or more materials. 
As will be discussed in further detail, the systems, devices, and methods described herein allow for automatic and/or dynamic quantified analysis of various parameters relating to plaque, cardiovascular arteries, and/or other structures. More specifically, in some embodiments described herein, a medical image of a patient, such as a coronary CT image, can be taken at a medical facility. Rather than having a physician eyeball or make a general assessment of the patient, the medical image is transmitted to a backend main server in some embodiments that is configured to conduct one or more analyses thereof in a reproducible manner. As such, in some embodiments, the systems, methods, and devices described herein can provide a quantified measurement of one or more features of a coronary CT image using automated and/or dynamic processes. For example, in some embodiments, the main server system can be configured to identify one or more vessels, plaque, and/or fat from a medical image. Based on the identified features, in some embodiments, the system can be configured to generate one or more quantified measurements from a raw medical image, such as for example radiodensity of one or more regions of plaque, identification of stable plaque and/or unstable plaque, volumes thereof, surface areas thereof, geometric shapes, heterogeneity thereof, and/or the like. In some embodiments, the system can also generate one or more quantified measurements of vessels from the raw medical image, such as for example diameter, volume, morphology, and/or the like. Based on the identified features and/or quantified measurements, in some embodiments, the system can be configured to generate a risk assessment and/or track the progression of a plaque-based disease or condition, such as for example atherosclerosis, stenosis, and/or ischemia, using raw medical images. Further, in some embodiments, the system can be configured to generate a visualization of GUI of one or more identified features and/or quantified measurements, such as a quantized color mapping of different features. In some embodiments, the systems, devices, and methods described herein are configured to utilize medical image-based processing to assess for a subject his or her risk of a cardiovascular event, major adverse cardiovascular event (MACE), rapid plaque progression, and/or non-response to medication. In particular, in some embodiments, the system can be configured to automatically and/or dynamically assess such health risk of a subject by analyzing only non-invasively obtained medical images. In some embodiments, one or more of the processes can be automated using an AI and/or ML algorithm. In some embodiments, one or more of the processes described herein can be performed within minutes in a reproducible manner. This is stark contrast to existing measures today which do not produce reproducible prognosis or assessment, take extensive amounts of time, and/or require invasive procedures. As such, in some embodiments, the systems, devices, and methods described herein are able to provide physicians and/or patients specific quantified and/or measured data relating to a patient's plaque that do not exist today. For example, in some embodiments, the system can provide a specific numerical value for the volume of stable and/or unstable plaque, the ratio thereof against the total vessel volume, percentage of stenosis, and/or the like, using for example radiodensity values of pixels and/or regions within a medical image. 
In some embodiments, such detailed level of quantified plaque parameters from image processing and downstream analytical results can provide more accurate and useful tools for assessing the health and/or risk of patients in completely novel ways. General Overview In some embodiments, the systems, devices, and methods described herein are configured to automatically and/or dynamically perform medical image analysis, diagnosis, risk stratification, decision making and/or disease tracking.FIG.1is a flowchart illustrating an overview of an example embodiment(s) of a method for medical image analysis, visualization, risk assessment, disease tracking, treatment generation, and/or patient report generation. As illustrated inFIG.1, in some embodiments, the system is configured to access and/or analyze one or more medical images of a subject, such as for example a medical image of a coronary region of a subject or patient. In some embodiments, before obtaining the medical image, a normalization device is attached to the subject and/or is placed within a field of view of a medical imaging scanner at block102. For example, in some embodiments, the normalization device can comprise one or more compartments comprising one or more materials, such as water, calcium, and/or the like. Additional detail regarding the normalization device is provided below. Medical imaging scanners may produce images with different scalable radiodensities for the same object. This, for example, can depend not only on the type of medical imaging scanner or equipment used but also on the scan parameters and/or environment of the particular day and/or time when the scan was taken. As a result, even if two different scans were taken of the same subject, the brightness and/or darkness of the resulting medical image may be different, which can result in less than accurate analysis results processed from that image. To account for such differences, in some embodiments, a normalization device comprising one or more known elements is scanned together with the subject, and the resulting image of the one or more known elements can be used as a basis for translating, converting, and/or normalizing the resulting image. As such, in some embodiments, a normalization device is attached to the subject and/or placed within the field of view of a medical imaging scan at a medical facility. In some embodiments, at block104, the medical facility then obtains one or more medical images of the subject. For example, the medical image can be of the coronary region of the subject or patient. In some embodiments, the systems disclosed herein can be configured to take in CT data from the image domain or the projection domain as raw scanned data or any other medical data, such as but not limited to: x-ray; Dual-Energy Computed Tomography (DECT), Spectral CT, photon-counting detector CT, ultrasound, such as echocardiography or intravascular ultrasound (IVUS); magnetic resonance (MR) imaging; optical coherence tomography (OCT); nuclear medicine imaging, including positron-emission tomography (PET) and single photon emission computed tomography (SPECT); near-field infrared spectroscopy (NIRS); and/or the like. As used herein, the term CT image data or CT scanned data can be substituted with any of the foregoing medical scanning modalities and process such data through an artificial intelligence (AI) algorithm system in order to generate processed CT image data. 
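The normalization step described above, in which known compartments of the normalization device are scanned with the subject, can be illustrated with a minimal sketch. The two-point linear mapping and the water/calcium target values below are assumptions for illustration; the source only states that the known elements are used as a basis for translating, converting, and/or normalizing the image.

```python
import numpy as np

def normalize_hu(image_hu, measured_water, measured_calcium,
                 expected_water=0.0, expected_calcium=400.0):
    """Two-point linear normalization using known compartments of the
    normalization device scanned together with the patient.

    measured_* : mean radiodensity measured inside each compartment on this scan.
    expected_* : assumed reference values for those materials (illustrative).
    """
    scale = (expected_calcium - expected_water) / (measured_calcium - measured_water)
    offset = expected_water - scale * measured_water
    return image_hu * scale + offset

# Example: a scanner/protocol that reads water at 8 HU and the calcium insert at 385 HU.
img = np.array([[8.0, 120.0], [385.0, -95.0]])
print(normalize_hu(img, measured_water=8.0, measured_calcium=385.0))
```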
In some embodiments, the data from these imaging modalities enables determination of cardiovascular phenotype, and can include the image domain data, the projection domain data, and/or a combination of both. In some embodiments, at block106, the medical facility can also obtain non-imaging data from the subject. For example, this can include blood tests, biomarkers, panomics and/or the like. In some embodiments, at block108, the medical facility can transmit the one or more medical images and/or other non-imaging data at block108to a main server system. In some embodiments, the main server system can be configured to receive and/or otherwise access the medical image and/or other non-imaging data at block110. In some embodiments, at block112, the system can be configured to automatically and/or dynamically analyze the one or more medical images which can be stored and/or accessed from a medical image database100. For example, in some embodiments, the system can be configured to take in raw CT image data and apply an artificial intelligence (AI) algorithm, machine learning (ML) algorithm, and/or other physics-based algorithm to the raw CT data in order to identify, measure, and/or analyze various aspects of the identified arteries within the CT data. In some embodiments, the inputting of the raw medical image data involves uploading the raw medical image data into cloud-based data repository system. In some embodiments, the processing of the medical image data involves processing the data in a cloud-based computing system using an AI and/or ML algorithm. In some embodiments, the system can be configured to analyze the raw CT data in about 1 minute, about 2 minutes, about 3 minutes, about 4 minutes, about 5 minutes, about 6 minutes, about 7 minutes, about 8 minutes, about 9 minutes, about 10 minutes, about 15 minutes, about 20 minutes, about 30 minutes, about 35 minutes, about 40 minutes, about 45 minutes, about 50 minutes, about 55 minutes, about 60 minutes, and/or within a range defined by two of the aforementioned values. In some embodiments, the system can be configured to utilize a vessel identification algorithm to identify and/or analyze one or more vessels within the medical image. In some embodiments, the system can be configured to utilize a coronary artery identification algorithm to identify and/or analyze one or more coronary arteries within the medical image. In some embodiments, the system can be configured to utilize a plaque identification algorithm to identify and/or analyze one or more regions of plaque within the medical image. In some embodiments, the vessel identification algorithm, coronary artery identification algorithm, and/or plaque identification algorithm comprises an AI and/or ML algorithm. For example, in some embodiments, the vessel identification algorithm, coronary artery identification algorithm, and/or plaque identification algorithm can be trained on a plurality of medical images wherein one or more vessels, coronary arteries, and/or regions of plaque are pre-identified. Based on such training, for example by use of a Convolutional Neural Network in some embodiments, the system can be configured to automatically and/or dynamically identify from raw medical images the presence and/or parameters of vessels, coronary arteries, and/or plaque. 
As such, in some embodiments, the processing of the medical image or raw CT scan data can comprise analysis of the medical image or CT data in order to determine and/or identify the existence and/or nonexistence of certain artery vessels in a patient. As a naturally occurring phenomenon, certain arteries may be present in certain patients whereas such certain arteries may not exist in other patients. In some embodiments, at block112, the system can be further configured to analyze the identified vessels, coronary arteries, and/or plaque, for example using an AI and/or ML algorithm. In particular, in some embodiments, the system can be configured to determine one or more vascular morphology parameters, such as for example arterial remodeling, curvature, volume, width, diameter, length, and/or the like. In some embodiments, the system can be configured to determine one or more plaque parameters, such as for example volume, surface area, geometry, radiodensity, ratio or function of volume to surface area, heterogeneity index, and/or the like of one or more regions of plaque shown within the medical image. “Radiodensity” as used herein is a broad term that refers to the relative inability of electromagnetic radiation (e.g., X-rays) to pass through a material. In reference to an image, radiodensity values refer to values indicating a density in image data (e.g., film, print, or in an electronic format) where the radiodensity values in the image correspond to the density of material depicted in the image. In some embodiments, at block114, the system can be configured to utilize the identified and/or analyzed vessels, coronary arteries, and/or plaque from the medical image to perform a point-in-time analysis of the subject. In some embodiments, the system can be configured to use automatic and/or dynamic image processing of one or more medical images taken from one point in time to identify and/or analyze one or more vessels, coronary arteries, and/or plaque and derive one or more parameters and/or classifications thereof. For example, as will be described in more detail herein, in some embodiments, the system can be configured to generate one or more quantification metrics of plaque and/or classify the identified regions of plaque as good or bad plaque. Further, in some embodiments, at block114, the system can be configured to generate one or more treatment plans for the subject based on the analysis results. In some embodiments, the system can be configured to utilize one or more AI and/or ML algorithms to identify and/or analyze vessels or plaque, derive one or more quantification metrics and/or classifications, and/or generate a treatment plan. In some embodiments, if a previous scan or medical image of the subject exists, the system can be configured to perform at block126one or more time-based analyses, such as disease tracking. For example, in some embodiments, if the system has access to one or more quantified parameters or classifications derived from previous scans or medical images of the subject, the system can be configured to compare the same with one or more quantified parameters or classifications derived from a current scan or medical image to determine the progression of disease and/or state of the subject.
In some embodiments, at block116, the system is configured to automatically and/or dynamically generate a Graphical User Interface (GUI) or other visualization of the analysis results at block116, which can include for example identified vessels, regions of plaque, coronary arteries, quantified metrics or parameters, risk assessment, proposed treatment plan, and/or any other analysis result discussed herein. In some embodiments, the system is configured to analyze arteries present in the CT scan data and display various views of the arteries present in the patient, for example within 10-15 minutes or less. In contrast, as an example, conducting a visual assessment of a CT to identify stenosis alone, without consideration of good or bad plaque or any other factor, can take anywhere between 15 minutes to more than an hour depending on the skill level, and can also have substantial variability across radiologists and/or cardiac imagers. In some embodiments, at block118, the system can be configured to transmit the generated GUI or other visualization, analysis results, and/or treatment to the medical facility. In some embodiments, at block120, a physician at the medical facility can then review and/or confirm and/or revise the generated GUI or other visualization, analysis results, and/or treatment. In some embodiments, at block122, the system can be configured to further generate and transmit a patient-specific medical report to a patient, who can receive the same at block124. In some embodiments, the patient-specific medical report can be dynamically generated based on the analysis results derived from and/or other generated from the medical image processing and analytics. For example, the patient-specific report can include identified vessels, regions of plaque, coronary arteries, quantified metrics or parameters, risk assessment, proposed treatment plan, and/or any other analysis result discussed herein. In some embodiments, one or more of the process illustrated inFIG.1can be repeated, for example for the same patient at a different time to track progression of a disease and/or the state of the patient. Image Processing-Based Classification of Good V. Bad Plaque As discussed, in some embodiments, the systems, methods, and devices described herein are configured to automatically and/or dynamically identify and/or classify good v. bad plaque or stable v. unstable plaque based on medical image analysis and/or processing. For example, in some embodiments, the system can be configured to utilize an AI and/or ML algorithm to identify areas in an artery that exhibit plaque buildup within, along, inside and/or outside the arteries. In some embodiments, the system can be configured to identify the outline or boundary of plaque buildup associated with an artery vessel wall. In some embodiments, the system can be configured to draw or generate a line that outlines the shape and configuration of the plaque buildup associated with the artery. In some embodiments, the system can be configured to identify whether the plaque buildup is a certain kind of plaque and/or the composition or characterization of a particular plaque buildup. In some embodiments, the system can be configured to characterize plaque binarily, ordinally and/or continuously. 
In some embodiments, the system can be configured to determine that the kind of plaque buildup identified is a “bad” kind of plaque due to the dark color or dark gray scale nature of the image corresponding to the plaque area, and/or by determination of its attenuation density (e.g., using a Hounsfield unit scale or other). For example, in some embodiments, the system can be configured to identify certain plaque as “bad” plaque if the brightness of the plaque is darker than a pre-determined level. In some embodiments, the system can be configured to identify good plaque areas based on the white coloration and/or the light gray scale nature of the area corresponding to the plaque buildup. For example, in some embodiments, the system can be configured to identify certain plaque as “good” plaque if the brightness of the plaque is lighter than a pre-determined level. In some embodiments, the system can be configured to determine that dark areas in the CT scan are related to “bad” plaque, whereas the system can be configured to identify good plaque areas corresponding to white areas. In some embodiments, the system can be configured to identify and determine the total area and/or volume of total plaque, good plaque, and/or bad plaque identified within an artery vessel or plurality of vessels. In some embodiments, the system can be configured to determine the length of the total plaque area, good plaque area, and/or bad plaque area identified. In some embodiments, the system can be configured to determine the width of the total plaque area, good plaque area, and/or bad plaque area identified. The “good” plaque may be considered as such because it is less likely to cause heart attack, less likely to exhibit significant plaque progression, and/or less likely to cause ischemia, amongst others. Conversely, the “bad” plaque may be considered as such because it is more likely to cause heart attack, more likely to exhibit significant plaque progression, and/or more likely to cause ischemia, amongst others. In some embodiments, the “good” plaque may be considered as such because it is less likely to result in the no-reflow phenomenon at the time of coronary revascularization. Conversely, the “bad” plaque may be considered as such because it is more likely to cause the no-reflow phenomenon at the time of coronary revascularization. FIG.2Ais a flowchart illustrating an overview of an example embodiment(s) of a method for analysis and classification of plaque from a medical image, which can be obtained non-invasively. As illustrated inFIG.2A, at block202, in some embodiments, the system can be configured to access a medical image, which can include a coronary region of a subject and/or be stored in a medical image database100. The medical image database100can be locally accessible by the system and/or can be located remotely and accessible through a network connection. The medical image can comprise an image obtained using one or more modalities such as, for example, CT, Dual-Energy Computed Tomography (DECT), Spectral CT, photon-counting CT, x-ray, ultrasound, echocardiography, intravascular ultrasound (IVUS), Magnetic Resonance (MR) imaging, optical coherence tomography (OCT), nuclear medicine imaging, positron-emission tomography (PET), single photon emission computed tomography (SPECT), or near-field infrared spectroscopy (NIRS).
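The brightness-based good v. bad classification described above can be sketched as a simple attenuation comparison. The 130 HU cut-off and the function name below are illustrative assumptions; the source only requires a pre-determined level.

```python
import numpy as np

def classify_plaque_region(plaque_hu, threshold_hu=130):
    """Label a segmented plaque region as 'good' (bright / calcified) or
    'bad' (dark / non-calcified) by comparing its mean attenuation with a
    pre-determined level.

    plaque_hu : array of Hounsfield units for the voxels of one plaque region.
    """
    mean_hu = float(np.mean(plaque_hu))
    label = "good (stable, calcified)" if mean_hu >= threshold_hu else "bad (unstable, fatty)"
    return mean_hu, label

print(classify_plaque_region(np.array([420, 510, 380, 450])))  # bright region -> good
print(classify_plaque_region(np.array([-20, 15, 35, 5])))      # dark region  -> bad
```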
In some embodiments, the medical image comprises one or more of a contrast-enhanced CT image, non-contrast CT image, MR image, and/or an image obtained using any of the modalities described above. In some embodiments, the system can be configured to automatically and/or dynamically perform one or more analyses of the medical image as discussed herein. For example, in some embodiments, at block204, the system can be configured to identify one or more arteries. The one or more arteries can include coronary arteries, carotid arteries, aorta, renal artery, lower extremity artery, upper extremity artery, and/or cerebral artery, amongst others. In some embodiments, the system can be configured to utilize one or more AI and/or ML algorithms to automatically and/or dynamically identify one or more arteries or coronary arteries using image processing. For example, in some embodiments, the one or more AI and/or ML algorithms can be trained using a Convolutional Neural Network (CNN) on a set of medical images on which arteries or coronary arteries have been identified, thereby allowing the AI and/or ML algorithm automatically identify arteries or coronary arteries directly from a medical image. In some embodiments, the arteries or coronary arteries are identified by size and/or location. In some embodiments, at block206, the system can be configured to identify one or more regions of plaque in the medical image. In some embodiments, the system can be configured to utilize one or more AI and/or ML algorithms to automatically and/or dynamically identify one or more regions of plaque using image processing. For example, in some embodiments, the one or more AI and/or ML algorithms can be trained using a Convolutional Neural Network (CNN) on a set of medical images on which regions of plaque have been identified, thereby allowing the AI and/or ML algorithm automatically identify regions of plaque directly from a medical image. In some embodiments, the system can be configured to identify a vessel wall and a lumen wall for each of the identified coronary arteries in the medical image. In some embodiments, the system is then configured to determine the volume in between the vessel wall and the lumen wall as plaque. In some embodiments, the system can be configured to identify regions of plaque based on the radiodensity values typically associated with plaque, for example by setting a predetermined threshold or range of radiodensity values that are typically associated with plaque with or without normalizing using a normalization device. In some embodiments, the system is configured to automatically and/or dynamically determine one or more vascular morphology parameters and/or plaque parameters at block208from the medical image. In some embodiments, the one or more vascular morphology parameters and/or plaque parameters can comprise quantified parameters derived from the medical image. For example, in some embodiments, the system can be configured to utilize an AI and/or ML algorithm or other algorithm to determine one or more vascular morphology parameters and/or plaque parameters. As another example, in some embodiments, the system can be configured to determine one or more vascular morphology parameters, such as classification of arterial remodeling due to plaque, which can further include positive arterial remodeling, negative arterial remodeling, and/or intermediate arterial remodeling. 
In some embodiments, the classification of arterial remodeling is determined based on a ratio of the largest vessel diameter at a region of plaque to a normal reference vessel diameter of the same region which can be retrieved from a normal database. In some embodiments, the system can be configured to classify arterial remodeling as positive when the ratio of the largest vessel diameter at a region of plaque to a normal reference vessel diameter of the same region is more than 1.1. In some embodiments, the system can be configured to classify arterial remodeling as negative when the ratio of the largest vessel diameter at a region of plaque to a normal reference vessel diameter is less than 0.95. In some embodiments, the system can be configured to classify arterial remodeling as intermediate when the ratio of the largest vessel diameter at a region of plaque to a normal reference vessel diameter is between 0.95 and 1.1. Further, as part of block208, in some embodiments, the system can be configured to determine a geometry and/or volume of one or more regions of plaque and/or one or more vessels or arteries at block201. For example, the system can be configured to determine if the geometry of a particular region of plaque is round or oblong or other shape. In some embodiments, the geometry of a region of plaque can be a factor in assessing the stability of the plaque. As another example, in some embodiments, the system can be configured to determine the curvature, diameter, length, volume, and/or any other parameters of a vessel or artery from the medical image. In some embodiments, as part of block208, the system can be configured to determine a volume and/or surface area of a region of plaque and/or a ratio or other function of volume to surface area of a region of plaque at block203, such as for example a diameter, radius, and/or thickness of a region of plaque. In some embodiments, a plaque having a low ratio of volume to surface area can indicate that the plaque is stable. As such, in some embodiments, the system can be configured to determine that a ratio of volume to surface area of a region of plaque below a predetermined threshold is indicative of stable plaque. In some embodiments, as part of block208, the system can be configured to determine a heterogeneity index of a region of plaque at block205. For instance, in some embodiments, a plaque having a low heterogeneity or high homogeneity can indicate that the plaque is stable. As such, in some embodiments, the system can be configured to determine that a heterogeneity of a region of plaque below a predetermined threshold is indicative of stable plaque. In some embodiments, heterogeneity or homogeneity of a region of plaque can be determined based on the heterogeneity or homogeneity of radiodensity values within the region of plaque. As such, in some embodiments, the system can be configured to determine a heterogeneity index of plaque by generating spatial mapping, such as a three-dimensional histogram, of radiodensity values within or across a geometric shape or region of plaque. In some embodiments, if a gradient or change in radiodensity values across the spatial mapping is above a certain threshold, the system can be configured to assign a high heterogeneity index. Conversely, in some embodiments, if a gradient or change in radiodensity values across the spatial mapping is below a certain threshold, the system can be configured to assign a low heterogeneity index. 
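Two of the quantified parameters just described can be sketched directly. The remodeling cut-offs (0.95 and 1.1) come from the description above; the gradient-based heterogeneity measure is one way to realize the spatial mapping of radiodensity values, and its threshold is an illustrative assumption.

```python
import numpy as np

def classify_arterial_remodeling(largest_vessel_diameter, reference_diameter):
    """Classify remodeling from the ratio of the largest vessel diameter at a
    plaque to the normal reference diameter, using the 0.95 / 1.1 cut-offs."""
    ratio = largest_vessel_diameter / reference_diameter
    if ratio > 1.1:
        return ratio, "positive remodeling"
    if ratio < 0.95:
        return ratio, "negative remodeling"
    return ratio, "intermediate remodeling"

def heterogeneity_index(plaque_hu, gradient_threshold=20.0):
    """Assign a heterogeneity index from how strongly radiodensity changes
    across a 2-D or 3-D region of plaque (illustrative threshold)."""
    gradients = np.gradient(np.asarray(plaque_hu, dtype=np.float64))
    magnitude = np.sqrt(sum(g ** 2 for g in gradients))
    mean_gradient = float(np.mean(magnitude))
    return mean_gradient, ("high" if mean_gradient > gradient_threshold else "low")

print(classify_arterial_remodeling(4.6, 4.0))   # ratio 1.15 -> positive
print(classify_arterial_remodeling(3.7, 4.0))   # ratio 0.925 -> negative
print(heterogeneity_index(np.random.normal(40, 25, size=(9, 9, 9))))  # mixed attenuation -> high
print(heterogeneity_index(np.full((9, 9, 9), 45.0)))                  # uniform region  -> low
```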
In some embodiments, as part of block208, the system can be configured to determine a radiodensity of plaque and/or a composition thereof at block207. For example, a high radiodensity value can indicate that a plaque is highly calcified or stable, whereas a low radiodensity value can indicate that a plaque is less calcified or unstable. As such, in some embodiments, the system can be configured to determine that a radiodensity of a region of plaque above a predetermined threshold is indicative of stable stabilized plaque. In addition, different areas within a region of plaque can be calcified at different levels and thereby show different radiodensity values. As such, in some embodiments, the system can be configured to determine the radiodensity values of a region of plaque and/or a composition or percentage or change of radiodensity values within a region of plaque. For instance, in some embodiments, the system can be configured to determine how much or what percentage of plaque within a region of plaque shows a radiodensity value within a low range, medium range, high range, and/or any other classification. Similarly, in some embodiments, as part of block208, the system can be configured to determine a ratio of radiodensity value of plaque to a volume of plaque at block209. For instance, it can be important to assess whether a large or small region of plaque is showing a high or low radiodensity value. As such, in some embodiments, the system can be configured to determine a percentage composition of plaque comprising different radiodensity values as a function or ratio of volume of plaque. In some embodiments, as part of block208, the system can be configured to determine the diffusivity and/or assign a diffusivity index to a region of plaque at block211. For example, in some embodiments, the diffusivity of a plaque can depend on the radiodensity value of plaque, in which a high radiodensity value can indicate low diffusivity or stability of the plaque. In some embodiments, at block210, the system can be configured to classify one or more regions of plaque identified from the medical image as stable v. unstable or good v. bad based on the one or more vascular morphology parameters and/or quantified plaque parameters determined and/or derived from raw medical images. In particular, in some embodiments, the system can be configured to generate a weighted measure of one or more vascular morphology parameters and/or quantified plaque parameters determined and/or derived from raw medical images. For example, in some embodiments, the system can be configured to weight one or more vascular morphology parameters and/or quantified plaque parameters equally. In some embodiments, the system can be configured to weight one or more vascular morphology parameters and/or quantified plaque parameters differently. In some embodiments, the system can be configured to weight one or more vascular morphology parameters and/or quantified plaque parameters logarithmically, algebraically, and/or utilizing another mathematical transform. In some embodiments, the system is configured to classify one or more regions of plaque at block210using the generated weighted measure and/or using only some of the vascular morphology parameters and/or quantified plaque parameters. In some embodiments, at block212, the system is configured to generate a quantized color mapping based on the analyzed and/or determined parameters. 
For example, in some embodiments, the system is configured to generate a visualization of the analyzed medical image by generating a quantized color mapping of calcified plaque, non-calcified plaque, good plaque, bad plaque, stable plaque, and/or unstable plaque as determined using any of the analytical techniques described herein. Further, in some embodiments, the quantified color mapping can also include arteries and/or epicardial fat, which can also be determined by the system, for example by utilizing one or more AI and/or ML algorithms. In some embodiments, at block214, the system is configured to generate a proposed treatment plan for the subject based on the analysis, such as for example the classification of plaque derived automatically from a raw medical image. In particular, in some embodiments, the system can be configured to assess or predict the risk of atherosclerosis, stenosis, and/or ischemia of the subject based on a raw medical image and automated image processing thereof. In some embodiments, one or more processes described herein in connection withFIG.2Acan be repeated. For example, if a medical image of the same subject is taken again at a later point in time, one or more processes described herein can be repeated and the analytical results thereof can be used for disease tracking and/or other purposes. Determination of Non-Calcified Plaque from a Non-Contrast CT Image(s) As discussed herein, in some embodiments, the system can be configured to utilize a CT or other medical image of a subject as input for performing one or more image analysis techniques to assess a subject, including for example risk of a cardiovascular event. In some embodiments, such CT image can comprise a contrast-enhanced CT image, in which case some of the analysis techniques described herein can be directly applied, for example to identify or classify plaque. However, in some embodiments, such CT image can comprise a non-contrast CT image, in which case it can be more difficult to identify and/or determine non-calcified plaque due to its low radiodensity value and overlap with other low radiodensity values components, such as blood for example. As such, in some embodiments, the systems, devices, and methods described herein provide a novel approach to determining non-calcified plaque from a non-contrast CT image, which can be more widely available. Also, in some embodiments, in addition to or instead of analyzing a contrast-enhanced CT scan, the system can also be configured to examine the attenuation densities within the arteries that are lower than the attenuation density of the blood flowing within them in a non-contrast CT scan. In some embodiments, these “low attenuation” plaques may be differentiated between the blood attenuation density and the fat that sometimes surrounds the coronary artery and/or may represent non-calcified plaques of different materials. In some embodiments, the presence of these non-calcified plaques may offer incremental prediction for whether a previously calcified plaque is stabilizing or worsening or progressing or regressing. 
These findings that are measurable through these embodiments may be linked to the prognosis of a patient, wherein calcium stabilization (that is, higher attenuation densities) and lack of non-calcified plaque by may associated with a favorable prognosis, while lack of calcium stabilization (that is, no increase in attenuation densities), or significant progression or new calcium formation may be associated with a poorer prognosis, including risk of rapid progression of disease, heart attack or other major adverse cardiovascular event. FIG.2Bis a flowchart illustrating an overview of an example embodiment(s) of a method for determination of non-calcified and/or low-attenuated plaque from a medical image, such as a non-contrast CT image. As discussed herein and as illustrated inFIG.2B, in some embodiments, the system can be configured to determine non-calcified and/or low-attenuated plaque from a medical image. In some embodiments, the medical image can be of the coronary region of the subject or patient. In some embodiments, the medical image can be obtained using one or more modalities such as CT, Dual-Energy Computed Tomography (DECT), Spectral CT, x-ray, ultrasound, echocardiography, IVUS, MR, OCT, nuclear medicine imaging, PET, SPECT, NIRS, and/or the like. In some embodiments, the system can be configured to access one or more medical images at block202, for example from a medical image database100. In some embodiments, in order to determine non-calcified and/or low-attenuated plaque from the medical image or non-contrast CT image, the system can be configured to utilize a stepwise approach to first identify areas within the medical image that are clearly non-calcified plaque. In some embodiments, the system can then conduct a more detailed analysis of the remaining areas in the image to identify other regions of non-calcified and/or low-attenuated plaque. By utilizing such compartmentalized or a stepwise approach, in some embodiments, the system can identify or determine non-calcified and/or low-attenuated plaque from the medical image or non-contrast CT image with a faster turnaround rather than having to apply a more complicated analysis to every region or pixel of the image. In particular, in some embodiments, at block224, the system can be configured to identify epicardial fat from the medical image. In some embodiments, the system can be configured to identify epicardial fat by determining every pixel or region within the image that has a radiodensity value below a predetermined threshold and/or within a predetermined range. The exact predetermined threshold value or range of radiodensity for identifying epicardial fat can depend on the medical image, scanner type, scan parameters, and/or the like, which is why a normalization device can be used in some instances to normalize the medical image. For example, in some embodiments, the system can be configured to identify as epicardial fat pixels and/or regions within the medical image or non-contrast CT image with a radiodensity value that is around −100 Hounsfield units and/or within a range that includes −100 Hounsfield units. 
In particular, in some embodiments, the system can be configured to identify as epicardial fat pixels and/or regions within the medical image or non-contrast CT image with a radiodensity value that is within a range with a lower limit of about −100 Hounsfield units, about −110 Hounsfield units, about −120 Hounsfield units, about −130 Hounsfield units, about −140 Hounsfield units, about −150 Hounsfield units, about −160 Hounsfield units, about −170 Hounsfield units, about −180 Hounsfield units, about −190 Hounsfield units, or about −200 Hounsfield units, and an upper limit of about 30 Hounsfield units, about 20 Hounsfield units, about 10 Hounsfield units, about 0 Hounsfield units, about −10 Hounsfield units, about −20 Hounsfield units, about −30 Hounsfield units, about −40 Hounsfield units, about −50 Hounsfield units, about −60 Hounsfield units, about −70 Hounsfield units, about −80 Hounsfield units, or about −90 Hounsfield units. In some embodiments, the system can be configured to identify and/or segment arteries on the medical image or non-contrast CT image using the identified epicardial fat as outer boundaries of the arteries. For example, the system can be configured to first identify regions of epicardial fat on the medical image and assign a volume in between epicardial fat as an artery, such as a coronary artery. In some embodiments, at block226, the system can be configured to identify a first set of pixels or regions within the medical image, such as within the identified arteries, as non-calcified or low-attenuated plaque. More specifically, in some embodiments, the system can be configured to identify as an initial set low-attenuated or non-calcified plaque by identifying pixels or regions with a radiodensity value that is below a predetermined threshold or within a predetermined range. For example, the predetermined threshold or predetermined range can be set such that the resulting pixels can be confidently marked as low-attenuated or non-calcified plaque without likelihood of confusion with another matter such as blood. In particular, in some embodiments, the system can be configured to identify the initial set of low-attenuated or non-calcified plaque by identifying pixels or regions with a radiodensity value below around 30 Hounsfield units. In some embodiments, the system can be configured to identify the initial set of low-attenuated or non-calcified plaque by identifying pixels or regions with a radiodensity value at or below around 60 Hounsfield units, around 55 Hounsfield units, around 50 Hounsfield units, around 45 Hounsfield units, around 40 Hounsfield units, around 35 Hounsfield units, around 30 Hounsfield units, around 25 Hounsfield units, around 20 Hounsfield units, around 15 Hounsfield units, around 10 Hounsfield units, around 5 Hounsfield units, and/or with a radiodensity value at or above around 0 Hounsfield units, around 5 Hounsfield units, around 10 Hounsfield units, around 15 Hounsfield units, around 20 Hounsfield units, around 25 Hounsfield units, and/or around 30 Hounsfield units. In some embodiments, the system can be configured classify pixels or regions that fall within or below this predetermined range of radiodensity values as a first set of identified non-calcified or low-attenuated plaque at block238. In some embodiments, the system at block228can be configured to identify a second set of pixels or regions within the medical image, such as within the identified arteries, that may or may not represent low-attenuated or non-calcified plaque. 
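Before turning to that second set of candidates, the first pass of the stepwise approach described above can be sketched as two radiodensity masks; the exact limits chosen here are one option from the Hounsfield bands discussed above, and the input is assumed to be already normalized if a normalization device was used.

```python
import numpy as np

def first_pass_masks(ct_hu, fat_range=(-190, -30), plaque_lower=0, plaque_upper=30):
    """First step on a non-contrast CT: mark epicardial fat and the first set
    of clearly non-calcified / low-attenuated plaque by radiodensity alone.

    ct_hu : 2-D or 3-D array of Hounsfield units.
    Returns (epicardial_fat_mask, first_set_plaque_mask) as boolean arrays.
    """
    fat_mask = (ct_hu >= fat_range[0]) & (ct_hu <= fat_range[1])
    first_set_plaque = (ct_hu >= plaque_lower) & (ct_hu < plaque_upper)
    return fat_mask, first_set_plaque
```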
As discussed, in some embodiments, this second set of candidate pixels or regions may require additional analysis to confirm that they represent plaque. In particular, in some embodiments, the system can be configured to identify this second set of pixels or regions that may potentially be low-attenuated or non-calcified plaque by identifying pixels or regions of the image with a radiodensity value within a predetermined range. In some embodiments, the predetermined range for identifying this second set of pixels or regions can be between around 30 Hounsfield units and 100 Hounsfield units. In some embodiments, the predetermined range for identifying this second set of pixels or regions can have a lower limit of around 0 Hounsfield units, 5 Hounsfield units, 10 Hounsfield units, 15 Hounsfield units, 20 Hounsfield units, 25 Hounsfield units, 30 Hounsfield units, 35 Hounsfield units, 40 Hounsfield units, 45 Hounsfield units, or 50 Hounsfield units, and/or an upper limit of around 55 Hounsfield units, 60 Hounsfield units, 65 Hounsfield units, 70 Hounsfield units, 75 Hounsfield units, 80 Hounsfield units, 85 Hounsfield units, 90 Hounsfield units, 95 Hounsfield units, 100 Hounsfield units, 110 Hounsfield units, 120 Hounsfield units, 130 Hounsfield units, 140 Hounsfield units, or 150 Hounsfield units. In some embodiments, at block230, the system can be configured to conduct an analysis of the heterogeneity of the identified second set of pixels or regions. For example, depending on the range of radiodensity values used to identify the second set of pixels, in some embodiments, the second set of pixels or regions may include blood and/or plaque. Blood can typically show a more homogeneous gradient of radiodensity values compared to plaque. As such, in some embodiments, by analyzing the homogeneity or heterogeneity of the pixels or regions identified as part of the second set, the system may be able to distinguish between blood and non-calcified or low-attenuated plaque. As such, in some embodiments, the system can be configured to determine a heterogeneity index of the second set of regions of pixels identified from the medical image by generating a spatial mapping, such as a three-dimensional histogram, of radiodensity values within or across a geometric shape or region of plaque. In some embodiments, if a gradient or change in radiodensity values across the spatial mapping is above a certain threshold, the system can be configured to assign a high heterogeneity index and/or classify the region as plaque. Conversely, in some embodiments, if a gradient or change in radiodensity values across the spatial mapping is below a certain threshold, the system can be configured to assign a low heterogeneity index and/or classify the region as blood. In some embodiments, at block240, the system can be configured to identify a subset of the second set of regions of pixels identified from the medical image as plaque or non-calcified or low-attenuated plaque. In some embodiments, at block242, the system can be configured to combine the first set of identified non-calcified or low-attenuated plaque from block238and the second set of identified non-calcified or low-attenuated plaque from block240. As such, even using non-contrast CT images, in some embodiments, the system can be configured to identify low-attenuated or non-calcified plaque, which can be more difficult to identify compared to calcified or high-attenuated plaque due to possible overlap with other matter such as blood.
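To illustrate the heterogeneity check at block230in rough code, a gradient-based index is used here as a stand-in; the index definition, the threshold of 5.0, and the synthetic patches are assumptions for this sketch, not the disclosed algorithm:

    import numpy as np

    def heterogeneity_index(patch):
        """Average spatial-gradient magnitude of a 2-D patch of Hounsfield values;
        a rough stand-in for the heterogeneity index described in the text."""
        grads = np.gradient(patch.astype(float))
        grad_mag = np.sqrt(sum(g ** 2 for g in grads))
        return float(grad_mag.mean())

    def classify_candidate(patch, threshold=5.0):
        """Blood tends to be homogeneous; plaque tends to be heterogeneous.
        The threshold value is an arbitrary placeholder."""
        return "plaque" if heterogeneity_index(patch) > threshold else "blood"

    blood_like = np.full((5, 5), 45.0)                                     # nearly uniform
    plaque_like = np.add.outer(np.arange(5) * 10.0, np.arange(5) * 10.0) + 30.0  # steep gradient
    print(classify_candidate(blood_like), classify_candidate(plaque_like))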
In some embodiments, the system can also be configured to determine calcified or high-attenuated plaque from the medical image at block232. This process can be more straightforward compared to identifying low-attenuated or non-calcified plaque from the medical image or non-contrast CT image. In particular, in some embodiments, the system can be configured to identify calcified or high-attenuated plaque from the medical image or non-contrast CT image by identifying pixels or regions within the image that have a radiodensity value above a predetermined threshold and/or within a predetermined range. For example, in some embodiments, the system can be configured to identify as calcified or high-attenuated plaque regions or pixels from the medical image or non-contrast CT image having a radiodensity value above around 100 Hounsfield units, around 150 Hounsfield units, around 200 Hounsfield units, around 250 Hounsfield units, around 300 Hounsfield units, around 350 Hounsfield units, around 400 Hounsfield units, around 450 Hounsfield units, around 500 Hounsfield units, around 600 Hounsfield units, around 700 Hounsfield units, around 800 Hounsfield units, around 900 Hounsfield units, around 1000 Hounsfield units, around 1100 Hounsfield units, around 1200 Hounsfield units, around 1300 Hounsfield units, around 1400 Hounsfield units, around 1500 Hounsfield units, around 1600 Hounsfield units, around 1700 Hounsfield units, around 1800 Hounsfield units, around 1900 Hounsfield units, around 2000 Hounsfield units, around 2500 Hounsfield units, around 3000 Hounsfield units, and/or any other minimum threshold. In some embodiments, at block234, the system can be configured to generate a quantized color mapping of one or more identified matters from the medical image. For example, in some embodiments, the system can be configured to assign different colors to each of the different regions associated with different matters, such as non-calcified or low-attenuated plaque, calcified or high-attenuated plaque, all plaque, arteries, epicardial fat, and/or the like. In some embodiments, the system can be configured to generate a visualization of the quantized color map and/or present the same to medical personnel or a patient via a GUI. In some embodiments, at block236, the system can be configured to generate a proposed treatment plan for a disease based on one or more of the identified non-calcified or low-attenuated plaque, calcified or high-attenuated plaque, all plaque, arteries, epicardial fat, and/or the like. For example, in some embodiments, the system can be configured to generate a treatment plan for an arterial disease, renal artery disease, abdominal atherosclerosis, carotid atherosclerosis, and/or the like, and the medical image being analyzed can be taken from any one or more regions of the subject for such disease analysis. In some embodiments, one or more processes described herein in connection withFIG.2Bcan be repeated. For example, if a medical image of the same subject is taken again at a later point in time, one or more processes described herein can be repeated and the analytical results thereof can be used for disease tracking and/or other purposes. Further, in some embodiments, the system can be configured to identify and/or determine non-calcified plaque from a DECT or spectral CT image.
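A small hedged sketch pairing the high-attenuation threshold of block232with the quantized mapping of block234; the 350 Hounsfield unit cut-off, the label encoding, and the helper masks are illustrative choices and not values fixed by this description:

    import numpy as np

    # Illustrative label encoding; a viewer would map these integers to colors.
    LABELS = {"background": 0, "epicardial_fat": 1,
              "noncalcified_plaque": 2, "calcified_plaque": 3}

    def quantized_label_map(hu_image, fat_mask, noncalc_mask, calcified_hu=350.0):
        """Assign one integer label per pixel (the 350 HU cut-off is an assumption)."""
        labels = np.zeros(hu_image.shape, dtype=np.uint8)
        labels[fat_mask] = LABELS["epicardial_fat"]
        labels[noncalc_mask] = LABELS["noncalcified_plaque"]
        labels[hu_image >= calcified_hu] = LABELS["calcified_plaque"]
        return labels

    slice_hu = np.array([[-120.0, 20.0, 500.0]])
    fat = slice_hu < -90.0
    noncalc = (slice_hu >= 0.0) & (slice_hu <= 30.0)
    print(quantized_label_map(slice_hu, fat, noncalc))   # [[1 2 3]]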
Similar to the processes described above, in some embodiments, the system can be configured to access a DECT or spectral CT image, identify epicardial fat on the DECT or spectral CT image and/or segment one or more arteries on the DECT or spectral CT image, identify and/or classify a first set of pixels or regions within the arteries as a first set of low-attenuated or non-calcified plaque, and/or identify a second set of pixels or regions within the arteries as a second set of low-attenuated or non-calcified plaque. However, unlike the techniques described above, in some embodiments, such as for example where a DECT or spectral CT image is being analyzed, the system can be configured to identify a subset of that second set of pixels without having to perform a heterogeneity and/or homogeneity analysis of the second set of pixels. Rather, in some embodiments, the system can be configured to distinguish between blood and low-attenuated or non-calcified plaque directly from the image, for example by utilizing the dual or multispectral aspect of a DECT or spectral CT image. In some embodiments, the system can be configured to combine the first set of identified pixels or regions and the subset of the second set of pixels or regions identified as low-attenuated or non-calcified plaque to identify a whole set of the same on the medical image. In some embodiments, even if analyzing a DECT or spectral CT image, the system can be configured to further analyze the second set of pixels or regions by performing a heterogeneity or homogeneity analysis, similar to that described above in relation to block230. For example, even if analyzing a DECT or spectral CT image, in some embodiments, the distinction between certain areas of blood and/or low-attenuated or non-calcified plaque may not be complete and/or accurate.
Imaging Analysis-Based Risk Assessment
In some embodiments, the systems, devices, and methods described herein are configured to utilize medical image-based processing to assess a subject's risk of a cardiovascular event, major adverse cardiovascular event (MACE), rapid plaque progression, and/or non-response to medication. In particular, in some embodiments, the system can be configured to automatically and/or dynamically assess such health risk of a subject by analyzing only non-invasively obtained medical images, for example using AI and/or ML algorithms, to provide a full image-based analysis report within minutes. In particular, in some embodiments, the system can be configured to calculate the total amount of plaque (and/or amounts of specific types of plaque) within a specific artery and/or within all the arteries of a patient. In some embodiments, the system can be configured to determine the total amount of "bad" plaque in a particular artery and/or within a total artery area across some or all of the arteries of the patient. In some embodiments, the system can be configured to determine a risk factor and/or a diagnosis for a particular patient to suffer a heart attack or other cardiac event based on the total amount of plaque in a particular artery and/or a total artery area across some or all of the arteries of a patient. Other risk factors that can be determined from the amount of "bad" plaque, or the relative amount of "bad" versus "good" plaque, can include the rate of disease progression and/or the likelihood of ischemia.
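As a loose illustration of the per-artery totals mentioned above, the following sketch sums "bad" (non-calcified) and "good" (calcified) plaque; the voxel volume, the bad/good split, and every name below are assumptions introduced only for this example:

    def plaque_burden(noncalcified_voxels, calcified_voxels, voxel_volume_mm3=0.5):
        """Summarize total, 'bad' (non-calcified) and 'good' (calcified) plaque
        volumes for one artery, plus their ratio (illustrative voxel size)."""
        bad = noncalcified_voxels * voxel_volume_mm3
        good = calcified_voxels * voxel_volume_mm3
        total = bad + good
        ratio = bad / good if good > 0 else float("inf")
        return {"total_mm3": total, "bad_mm3": bad, "good_mm3": good,
                "bad_to_good_ratio": ratio}

    print(plaque_burden(noncalcified_voxels=120, calcified_voxels=300))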
In some embodiments, plaques can be measured by total volume (or area on cross-sectional imaging) as well as by relative amount when normalized to the total vessel volumes, total vessel lengths or subtended myocardium. In some embodiments, the imaging data of the coronary arteries can include measures of atherosclerosis, stenosis and vascular morphology. In some embodiments, this information can be combined with other cardiovascular disease phenotyping by quantitative characterization of left and right ventricles, left and right atria; aortic, mitral, tricuspid and pulmonic valves; aorta, pulmonary artery, pulmonary vein, coronary sinus and inferior and superior vena cava; epicardial or pericoronary fat; lung densities; bone densities; pericardium and others. As an example, in some embodiments, the imaging data for the coronary arteries may be integrated with the left ventricular mass, which can be segmented according to the amount and location of the artery it is subtended by. This combination of left ventricular fractional myocardial mass to coronary artery information may enhance the prediction of whether a future heart attack will be a large one or a small one. As another example, in some embodiments, the vessel volume of the coronary arteries can be related to the left ventricular mass as a measure of left ventricular hypertrophy, which can be a common finding in patients with hypertension. Increased left ventricular mass (relative or absolute) may indicate disease worsening or uncontrolled hypertension. As another example, in some embodiments, the onset, progression, and/or worsening of atrial fibrillation may be predicted by the atrial size, volume, atrial free wall mass and thickness, atrial function and fat surrounding the atrium. In some embodiments, these predictions may be done with a ML or AI algorithm or other algorithm type. Sequentially, in some embodiments, the algorithms that allow for segmentation of atherosclerosis, stenosis and vascular morphology—along with those that allow for segmentation of other cardiovascular structures, and thoracic structures—may serve as the inputs for the prognostic algorithms. In some embodiments, the outputs of the prognostic algorithms, or those that allow for image segmentation, may be leveraged as inputs to other algorithms that may then guide clinical decision making by predicting future events. As an example, in some embodiments, the integrated scoring of atherosclerosis, stenosis, and/or vascular morphology may identify patients who may benefit from coronary revascularization, that is, those who will achieve symptom benefit, reduced risk of heart attack and death. As another example, in some embodiments, the integrated scoring of atherosclerosis, stenosis and vascular morphology may identify individuals who may benefit from specific types of medications, such as lipid lowering medications (such as statin medications, PCSK-9 inhibitors, icosapent ethyl, and others); Lp(a) lowering medications; anti-thrombotic medications (such as clopidogrel, rivaroxaban and others). In some embodiments, the benefit that is predicted by these algorithms may be for reduced progression, determination of type of plaque progression (progression, regression or mixed response), stabilization due to the medical therapy, and/or need for heightened intensified therapy. 
In some embodiments, the imaging data may be combined with other data to identify areas within a coronary vessel that are normal and without plaque now but may be at higher likelihood of future plaque formation. In some embodiments, an automated or manual co-registration method can be combined with the imaging segmentation data to compare two or more images over time. In some embodiments, the comparison of these images can allow for determination of differences in coronary artery atherosclerosis, stenosis and vascular morphology over time, and can be used as an input variable for risk prediction. In some embodiments, the imaging data of the coronary arteries for atherosclerosis, stenosis, and vascular morphology—coupled or not coupled to thoracic and cardiovascular disease measurements—can be integrated into an algorithm that determines whether a coronary vessel is ischemic, or exhibits reduced blood flow or pressure (either at rest or hyperemic states). In some embodiments, the algorithms for coronary atherosclerosis, stenosis and ischemia can be modified by a computer system and/or other means to remove plaque or "seal" plaque. In some embodiments, a comparison can be made between before and after the system has removed or sealed the plaque to determine whether any changes have occurred. For example, in some embodiments, the system can be configured to determine whether coronary ischemia is removed with the plaque sealing. In some embodiments, the characterization of coronary atherosclerosis, stenosis and/or vascular morphology can enable relating a patient's biological age to their vascular age, when compared to a population-based cohort of patients who have undergone similar scanning. As an example, a 60-year-old patient may have X units of plaque in their coronary arteries that is equivalent to the average 70-year-old patient in the population-based cohort. In this case, the patient's vascular age may be 10 years older than the patient's biological age. In some embodiments, the risk assessment enabled by the image segmentation prediction algorithms can allow for refined measures of disease or death likelihood in people being considered for disability or life insurance. In this scenario, the risk assessment may replace or augment traditional actuarial algorithms. In some embodiments, imaging data may be combined with other data to augment risk assessment for future adverse events, such as heart attacks, strokes, death, rapid progression, non-response to medical therapy, no-reflow phenomenon and others. In some embodiments, other data may include a multi-omic approach wherein an algorithm integrates the imaging phenotype data with genotype data, proteomic data, transcriptomic data, metabolomic data, microbiomic data and/or activity and lifestyle data as measured by a smart phone or similar device. FIG.3Ais a flowchart illustrating an overview of an example embodiment(s) of a method for risk assessment based on medical image analysis. As illustrated inFIG.3A, in some embodiments, the system can be configured to access a medical image at block202. Further, in some embodiments, the system can be configured to identify one or more arteries at block204and/or one or more regions of plaque at block206.
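The vascular-age comparison described in this paragraph can be pictured with a toy cohort lookup; the cohort table, plaque units, and all names are invented for illustration and are not data from the disclosure:

    # Hypothetical population-based cohort: average plaque burden by age decade.
    COHORT_AVG_PLAQUE = {40: 20.0, 50: 45.0, 60: 80.0, 70: 130.0, 80: 200.0}

    def vascular_age(patient_plaque):
        """Return the cohort age whose average plaque burden is closest to the
        patient's measured burden (nearest-neighbor lookup for illustration)."""
        return min(COHORT_AVG_PLAQUE,
                   key=lambda age: abs(COHORT_AVG_PLAQUE[age] - patient_plaque))

    biological_age = 60
    estimated = vascular_age(patient_plaque=135.0)   # closest to the 70-year average
    print(f"Vascular age ~{estimated}, {estimated - biological_age} years above biological age")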
In addition, in some embodiments, the system can be configured to determine one or more vascular morphology and/or quantified plaque parameters at block208and/or classify stable or unstable plaque based on the determined one or more vascular morphology and/or quantified plaque parameters and/or a weighted measure thereof at block210. Additional detail regarding the processes and techniques represented in blocks202,204,206,208, and210can be found in the description above in relation toFIG.2A. In some embodiments, the system can automatically and/or dynamically determine and/or generate a risk of cardiovascular event for the subject at block302, for example using the classified stable and/or unstable regions of plaque. More specifically, in some embodiments, the system can utilize an AI, ML, or other algorithm to generate a risk of cardiovascular event, MACE, rapid plaque progression, and/or non-response to medication at block302based on the image analysis. In some embodiments, at block304, the system can be configured to compare the determined one or more vascular morphology parameters, quantified plaque parameters, and/or classified stable v. unstable plaque and/or values thereof, such as volume, ratio, and/or the like, to one or more known datasets of coronary values derived from one or more other subjects. The one or more known datasets can comprise one or more vascular morphology parameters, quantified plaque parameters, and/or classified stable v. unstable plaque and/or values thereof, such as volume, ratio, and/or the like, derived from medical images taken from other subjects, including healthy subjects and/or subjects with varying levels of risk. For example, the one or more known datasets of coronary values can be stored in a coronary values database306that can be locally accessible by the system and/or remotely accessible via a network connection by the system. In some embodiments, at block308, the system can be configured to update the risk of cardiovascular event for the subject based on the comparison to the one or more known datasets. For example, based on the comparison, the system may increase or decrease the previously generated risk assessment. In some embodiments, the system may maintain the previously generated risk assessment even after comparison. In some embodiments, the system can be configured to generate a proposed treatment for the subject based on the generated and/or updated risk assessment after comparison with the known datasets of coronary values. In some embodiments, at block310, the system can be configured to further identify one or more other cardiovascular structures from the medical image and/or determine one or more parameters associated with the same. For example, the one or more additional cardiovascular structures can include the left ventricle, right ventricle, left atrium, right atrium, aortic valve, mitral valve, tricuspid valve, pulmonic valve, aorta, pulmonary artery, inferior and superior vena cava, epicardial fat, and/or pericardium. In some embodiments, parameters associated with the left ventricle can include size, mass, volume, shape, eccentricity, surface area, thickness, and/or the like. Similarly, in some embodiments, parameters associated with the right ventricle can include size, mass, volume, shape, eccentricity, surface area, thickness, and/or the like. 
In some embodiments, parameters associated with the left atrium can include size, mass, volume, shape, eccentricity, surface area, thickness, pulmonary vein angulation, atrial appendage morphology, and/or the like. In some embodiments, parameters associated with the right atrium can include size, mass, volume, shape, eccentricity, surface area, thickness, and/or the like. Further, in some embodiments, parameters associated with the aortic valve can include thickness, volume, mass, calcifications, three-dimensional map of calcifications and density, eccentricity of calcification, classification by individual leaflet, and/or the like. In some embodiments, parameters associated with the mitral valve can include thickness, volume, mass, calcifications, three-dimensional map of calcifications and density, eccentricity of calcification, classification by individual leaflet, and/or the like. In some embodiments, parameters associated with the tricuspid valve can include thickness, volume, mass, calcifications, three-dimensional map of calcifications and density, eccentricity of calcification, classification by individual leaflet, and/or the like. In some embodiments, parameters associated with the pulmonic valve can include thickness, volume, mass, calcifications, three-dimensional map of calcifications and density, eccentricity of calcification, classification by individual leaflet, and/or the like. In some embodiments, parameters associated with the aorta can include dimensions, volume, diameter, area, enlargement, outpouching, and/or the like. In some embodiments, parameters associated with the pulmonary artery can include dimensions, volume, diameter, area, enlargement, outpouching, and/or the like. In some embodiments, parameters associated with the inferior and superior vena cava can include dimensions, volume, diameter, area, enlargement, outpouching, and/or the like. In some embodiments, parameters associated with epicardial fat can include volume, density, density in three dimensions, and/or the like. In some embodiments, parameters associated with the pericardium can include thickness, mass, and/or the like. In some embodiments, at block312, the system can be configured to classify one or more of the other identified cardiovascular structures, for example using the one or more determined parameters thereof. In some embodiments, for one or more of the other identified cardiovascular structures, the system can be configured to classify each as normal v. abnormal, increased or decreased, and/or static or dynamic over time. In some embodiments, at block314, the system can be configured to compare the determined one or more parameters of other cardiovascular structures to one or more known datasets of cardiovascular structure parameters derived from one or more other subjects. The one or more known datasets of cardiovascular structure parameters can include any one or more of the parameters mentioned above associated with the other cardiovascular structures. In some embodiments, the cardiovascular structure parameters of the one or more known datasets can be derived from medical images taken from other subjects, including healthy subjects and/or subjects with varying levels of risk. In some embodiments, the one or more known datasets of cardiovascular structure parameters can be stored in a cardiovascular structure values or cardiovascular disease (CVD) database316that can be locally accessible by the system and/or remotely accessible via a network connection by the system. 
In some embodiments, at block318, the system can be configured to update the risk of cardiovascular event for the subject based on the comparison to the one or more known datasets of cardiovascular structure parameters. For example, based on the comparison, the system may increase or decrease the previously generated risk assessment. In some embodiments, the system may maintain the previously generated risk assessment even after comparison. In some embodiments, at block320, the system can be configured to generate a quantified color map, which can include color coding for one or more other cardiovascular structures identified from the medical image, stable plaque, unstable plaque, arteries, and/or the like. In some embodiments, at block322, the system can be configured to generate a proposed treatment for the subject based on the generated and/or updated risk assessment after comparison with the known datasets of cardiovascular structure parameters. In some embodiments, at block324, the system can be configured to further identify one or more non-cardiovascular structures from the medical image and/or determine one or more parameters associated with the same. For example, the medical image can include one or more non-cardiovascular structures that are in the field of view. In particular, the one or more non-cardiovascular structures can include the lungs, bones, liver, and/or the like. In some embodiments, parameters associated with the non-cardiovascular structures can include volume, surface area, ratio or function of volume to surface area, heterogeneity of radiodensity values, radiodensity values, geometry (such as oblong, spherical, and/or the like), spatial radiodensity, spatial scarring, and/or the like. In addition, in some embodiments, parameters associated with the lungs can include density, scarring, and/or the like. For example, in some embodiments, the system can be configured to associate a low Hounsfield unit of a region of the lungs with emphysema. In some embodiments, parameters associated with bones, such as the spine and/or ribs, can include radiodensity, presence and/or extent of fractures, and/or the like. For example, in some embodiments, the system can be configured to associate a low Hounsfield unit of a region of bones with osteoporosis. In some embodiments, parameters associated with the liver can include density for non-alcoholic fatty liver disease which can be assessed by the system by analyzing and/or comparing to the Hounsfield unit density of the liver. In some embodiments, at block326, the system can be configured to classify one or more of the identified non-cardiovascular structures, for example using the one or more determined parameters thereof. In some embodiments, for one or more of the identified non-cardiovascular structures, the system can be configured to classify each as normal v. abnormal, increased or decreased, and/or static or dynamic over time. In some embodiments, at block328, the system can be configured to compare the determined one or more parameters of non-cardiovascular structures to one or more known datasets of non-cardiovascular structure parameters or non-CVD values derived from one or more other subjects. The one or more known datasets of non-cardiovascular structure parameters or non-CVD values can include any one or more of the parameters mentioned above associated with non-cardiovascular structures. 
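To make the non-cardiovascular checks above concrete, here is a hedged sketch of simple flagging logic; the specific Hounsfield-unit cut-offs and function name are placeholders chosen only to show the shape of the logic, not thresholds stated in the disclosure:

    def flag_noncardiac_findings(mean_lung_hu, mean_bone_hu, mean_liver_hu):
        """Illustrative screening flags driven by mean radiodensity values
        (all cut-offs below are assumed for the example)."""
        return {
            # Very low lung attenuation may suggest emphysema.
            "possible_emphysema": mean_lung_hu < -950.0,
            # Low trabecular bone attenuation may suggest osteoporosis.
            "possible_osteoporosis": mean_bone_hu < 110.0,
            # Low liver attenuation may suggest non-alcoholic fatty liver disease.
            "possible_fatty_liver": mean_liver_hu < 40.0,
        }

    print(flag_noncardiac_findings(mean_lung_hu=-960.0, mean_bone_hu=95.0,
                                   mean_liver_hu=35.0))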
In some embodiments, the non-cardiovascular structure parameters or non-CVD values of the one or more known datasets can be derived from medical images taken from other subjects, including healthy subjects and/or subjects with varying levels of risk. In some embodiments, the one or more known datasets of non-cardiovascular structure parameters or non-CVD values can be stored in a non-cardiovascular structure values or non-CVD database330that can be locally accessible by the system and/or remotely accessible via a network connection by the system. In some embodiments, at block332, the system can be configured to update the risk of cardiovascular event for the subject based on the comparison to the one or more known datasets of non-cardiovascular structure parameters or non-CVD values. For example, based on the comparison, the system may increase or decrease the previously generated risk assessment. In some embodiments, the system may maintain the previously generated risk assessment even after comparison. In some embodiments, at block334, the system can be configured to generate a quantified color map, which can include color coding for one or more non-cardiovascular structures identified from the medical image, as well as for the other cardiovascular structures identified from the medical image, stable plaque, unstable plaque, arteries, and/or the like. In some embodiments, at block336, the system can be configured to generate a proposed treatment for the subject based on the generated and/or updated risk assessment after comparison with the known datasets of non-cardiovascular structure parameters or non-CVD values. In some embodiments, one or more processes described herein in connection withFIG.3Acan be repeated. For example, if a medical image of the same subject is taken again at a later point in time, one or more processes described herein can be repeated and the analytical results thereof can be used for tracking of risk assessment of the subject based on image processing and/or other purposes.
Quantification of Atherosclerosis
In some embodiments, the system is configured to analyze one or more arteries present in a medical image, such as CT scan data, to automatically and/or dynamically quantify atherosclerosis. In some embodiments, the system is configured to quantify atherosclerosis as the primary disease process, while stenosis and/or ischemia can be considered surrogates thereof. Prior to the embodiments described herein, it was not feasible to quantify the primary disease due to the lengthy manual process and manpower needed to do so, which could take anywhere from 4 to 8 or more hours. In contrast, in some embodiments, the system is configured to quantify atherosclerosis based on analysis of a medical image and/or CT scan using one or more AI, ML, and/or other algorithms that can segment, identify, and/or quantify atherosclerosis in less than about 1 minute, about 2 minutes, about 3 minutes, about 4 minutes, about 5 minutes, about 6 minutes, about 7 minutes, about 8 minutes, about 9 minutes, about 10 minutes, about 11 minutes, about 12 minutes, about 13 minutes, about 14 minutes, about 15 minutes, about 20 minutes, about 25 minutes, about 30 minutes, about 40 minutes, about 50 minutes, and/or about 60 minutes. In some embodiments, the system is configured to quantify atherosclerosis within a time frame defined by two of the aforementioned values.
In some embodiments, the system is configured to calculate stenosis rather than relying on simple visual estimation (eyeballing), thereby allowing users to better understand whole heart atherosclerosis and/or guaranteeing the same calculated stenosis result if the same medical image is used for analysis. Importantly, the type of atherosclerosis can also be quantified and/or classified by this method. Types of atherosclerosis can be determined binarily (calcified vs. non-calcified plaque), ordinally (dense calcified plaque, calcified plaque, fibrous plaque, fibrofatty plaque, necrotic core, or admixtures of plaque types), or continuously (by attenuation density on a Hounsfield unit scale or similar). FIG.3Bis a flowchart illustrating an overview of an example embodiment(s) of a method for quantification and/or classification of atherosclerosis based on medical image analysis. As illustrated inFIG.3B, in some embodiments, the system can be configured to access a medical image at block202, such as a CT scan of a coronary region of a subject. Further, in some embodiments, the system can be configured to identify one or more arteries at block204and/or one or more regions of plaque at block206. In addition, in some embodiments, the system can be configured to determine one or more vascular morphology and/or quantified plaque parameters at block208. For example, in some embodiments, the system can be configured to determine a geometry and/or volume of a region of plaque and/or a vessel at block201, a ratio or function of volume to surface area of a region of plaque at block203, a heterogeneity or homogeneity index of a region of plaque at block205, radiodensity of a region of plaque and/or a composition thereof by ranges of radiodensity values at block207, a ratio of radiodensity to volume of a region of plaque at block209, and/or a diffusivity of a region of plaque at block211. Additional detail regarding the processes and techniques represented in blocks202,204,206,208,201,203,205,207,209, and211can be found in the description above in relation toFIG.2A. In some embodiments, the system can be configured to quantify and/or classify atherosclerosis at block340based on the determined one or more vascular morphology and/or quantified plaque parameters. In some embodiments, the system can be configured to generate a weighted measure of one or more vascular morphology parameters and/or quantified plaque parameters determined and/or derived from raw medical images. For example, in some embodiments, the system can be configured to weight one or more vascular morphology parameters and/or quantified plaque parameters equally. In some embodiments, the system can be configured to weight one or more vascular morphology parameters and/or quantified plaque parameters differently. In some embodiments, the system can be configured to weight one or more vascular morphology parameters and/or quantified plaque parameters logarithmically, algebraically, and/or utilizing another mathematical transform. In some embodiments, the system is configured to quantify and/or classify atherosclerosis at block340using the weighted measure and/or using only some of the vascular morphology parameters and/or quantified plaque parameters. In some embodiments, the system is configured to generate a weighted measure of the one or more vascular morphology parameters and/or quantified plaque parameters by comparing the same to one or more known vascular morphology parameters and/or quantified plaque parameters that are derived from medical images of other subjects.
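A minimal sketch of the weighting idea just described (equal, unequal, or log-transformed combination); all parameter names and weight values are placeholders, and the choice of log1p is simply one way to apply a logarithmic transform:

    import math

    def weighted_measure(params, weights=None, log_transform=False):
        """Combine vascular morphology / plaque parameters into a single score.
        All weights and parameter names here are placeholders."""
        weights = weights or {name: 1.0 for name in params}   # equal weighting
        total = 0.0
        for name, value in params.items():
            v = math.log1p(abs(value)) if log_transform else value
            total += weights.get(name, 0.0) * v
        return total

    params = {"volume_mm3": 210.0, "vol_to_surface": 0.8, "heterogeneity": 4.2}
    print(weighted_measure(params))                                     # equal weights
    print(weighted_measure(params,
                           weights={"volume_mm3": 0.5, "vol_to_surface": 2.0,
                                    "heterogeneity": 1.0}))             # unequal weights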
For example, the one or more known vascular morphology parameters and/or quantified plaque parameters can be derived from one or more healthy subjects and/or subjects at risk of coronary vascular disease. In some embodiments, the system is configured to classify atherosclerosis of a subject based on the quantified atherosclerosis as one or more of high risk, medium risk, or low risk. In some embodiments, the system is configured to classify atherosclerosis of a subject based on the quantified atherosclerosis using an AI, ML, and/or other algorithm. In some embodiments, the system is configured to classify atherosclerosis of a subject by combining and/or weighting one or more of a ratio of volume to surface area, volume, heterogeneity index, and radiodensity of the one or more regions of plaque. In some embodiments, a plaque having a low ratio of volume to surface area or a low absolute volume itself can indicate that the plaque is stable. As such, in some embodiments, the system can be configured to determine that a ratio of volume to surface area of a region of plaque below a predetermined threshold is indicative of a low risk atherosclerosis. Thus, in some embodiments, the system can be configured to take into account the number and/or sides of a plaque. For example, if there is a higher number of plaques with smaller sides, then that can be associated with a higher surface area or more irregularity, which in turn can be associated with a higher surface area to volume ratio. In contrast, if there are fewer plaques with larger sides or more regularity, then that can be associated with a lower surface area to volume ratio or a higher volume to surface area ratio. In some embodiments, a high radiodensity value can indicate that a plaque is highly calcified or stable, whereas a low radiodensity value can indicate that a plaque is less calcified or unstable. As such, in some embodiments, the system can be configured to determine that a radiodensity of a region of plaque above a predetermined threshold is indicative of a low risk atherosclerosis. In some embodiments, a plaque having a low heterogeneity or high homogeneity can indicate that the plaque is stable. As such, in some embodiments, the system can be configured to determine that a heterogeneity of a region of plaque below a predetermined threshold is indicative of a low risk atherosclerosis. In some embodiments, at block342, the system is configured to calculate or determine a numerical calculation or representation of coronary stenosis based on the quantified and/or classified atherosclerosis derived from the medical image. In some embodiments, the system is configured to calculate stenosis using the one or more vascular morphology parameters and/or quantified plaque parameters derived from the medical image of a coronary region of the subject. In some embodiments, at block344, the system is configured to predict a risk of ischemia for the subject based on the quantified and/or classified atherosclerosis derived from the medical image. In some embodiments, the system is configured to calculate a risk of ischemia using the one or more vascular morphology parameters and/or quantified plaque parameters derived from the medical image of a coronary region of the subject.
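One hedged way to picture how the stability cues in this paragraph might be combined into a coarse risk label; every threshold below is a stand-in invented for the sketch, not a value specified by the disclosure:

    def classify_atherosclerosis(vol_to_surface, mean_hu, heterogeneity):
        """Count how many 'stable plaque' cues are present and map the count to a
        coarse risk label (all cut-offs are illustrative placeholders)."""
        stable_cues = 0
        if vol_to_surface < 1.5:      # low volume-to-surface-area ratio
            stable_cues += 1
        if mean_hu > 350.0:           # high radiodensity (heavily calcified)
            stable_cues += 1
        if heterogeneity < 5.0:       # homogeneous plaque
            stable_cues += 1
        return {3: "low risk", 2: "medium risk"}.get(stable_cues, "high risk")

    print(classify_atherosclerosis(vol_to_surface=1.0, mean_hu=420.0, heterogeneity=3.0))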
In some embodiments, the system is configured to generate a proposed treatment for the subject based on the quantified and/or classified atherosclerosis, stenosis, and/or risk of ischemia, wherein all of the foregoing are derived automatically and/or dynamically from a raw medical image using image processing algorithms and techniques. In some embodiments, one or more processes described herein in connection withFIG.3Bcan be repeated. For example, if a medical image of the same subject is taken again at a later point in time, one or more processes described herein can be repeated and the analytical results thereof can be used for tracking of quantified atherosclerosis for a subject and/or other purposes.
Quantification of Plaque, Stenosis, and/or CAD-RADS Score
As discussed herein, in some embodiments, the system is configured to take the guesswork out of interpretation of medical images and provide substantially exact and/or substantially accurate calculations or estimates of stenosis percentage, atherosclerosis, and/or Coronary Artery Disease-Reporting and Data System (CAD-RADS) score as derived from a medical image. As such, in some embodiments, the system can enhance the reads of the imagers by providing comprehensive quantitative analyses that can improve efficiency, accuracy, and/or reproducibility. FIG.3Cis a flowchart illustrating an overview of an example embodiment(s) of a method for quantification of stenosis and generation of a CAD-RADS score based on medical image analysis. As illustrated inFIG.3C, in some embodiments, the system can be configured to access a medical image at block202. Additional detail regarding the types of medical images and other processes and techniques represented in block202can be found in the description above in relation toFIG.2A. In some embodiments, at block354, the system is configured to identify one or more arteries, plaque, and/or fat in the medical image, for example using AI, ML, and/or other algorithms. The processes and techniques for identifying one or more arteries, plaque, and/or fat can include one or more of the same features as described above in relation to blocks204and206. In particular, in some embodiments, the system can be configured to utilize one or more AI and/or ML algorithms to automatically and/or dynamically identify one or more arteries, including for example coronary arteries, carotid arteries, aorta, renal artery, lower extremity artery, and/or cerebral artery. In some embodiments, one or more AI and/or ML algorithms can be trained using a Convolutional Neural Network (CNN) on a set of medical images on which arteries have been identified, thereby allowing the AI and/or ML algorithm to automatically identify arteries directly from a medical image. In some embodiments, the arteries are identified by size and/or location. Further, in some embodiments, the system can be configured to identify one or more regions of plaque in the medical image, for example using one or more AI and/or ML algorithms to automatically and/or dynamically identify one or more regions of plaque. In some embodiments, the one or more AI and/or ML algorithms can be trained using a Convolutional Neural Network (CNN) on a set of medical images on which regions of plaque have been identified, thereby allowing the AI and/or ML algorithm to automatically identify regions of plaque directly from a medical image.
In some embodiments, the system can be configured to identify a vessel wall and a lumen wall for each of the identified coronary arteries in the medical image. In some embodiments, the system is then configured to determine the volume in between the vessel wall and the lumen wall as plaque. In some embodiments, the system can be configured to identify regions of plaque based on the radiodensity values typically associated with plaque, for example by setting a predetermined threshold or range of radiodensity values that are typically associated with plaque with or without normalizing using a normalization device. Similarly, in some embodiments, the system can be configured to identify one or more regions of fat, such as epicardial fat, in the medical image, for example using one or more AI and/or ML algorithms to automatically and/or dynamically identify one or more regions of fat. In some embodiments, the one or more AI and/or ML algorithms can be trained using a Convolutional Neural Network (CNN) on a set of medical images on which regions of fat have been identified, thereby allowing the AI and/or ML algorithm to automatically identify regions of fat directly from a medical image. In some embodiments, the system can be configured to identify regions of fat based on the radiodensity values typically associated with fat, for example by setting a predetermined threshold or range of radiodensity values that are typically associated with fat with or without normalizing using a normalization device. In some embodiments, the system can be configured to determine one or more vascular morphology and/or quantified plaque parameters at block208. For example, in some embodiments, the system can be configured to determine a geometry and/or volume of a region of plaque and/or a vessel at block201, a ratio or function of volume to surface area of a region of plaque at block203, a heterogeneity or homogeneity index of a region of plaque at block205, radiodensity of a region of plaque and/or a composition thereof by ranges of radiodensity values at block207, a ratio of radiodensity to volume of a region of plaque at block209, and/or a diffusivity of a region of plaque at block211. Additional detail regarding the processes and techniques represented in blocks208,201,203,205,207,209, and211can be found in the description above in relation toFIG.2A. In some embodiments, at block358, the system is configured to calculate or determine a numerical calculation or representation of coronary stenosis based on the one or more vascular morphology parameters and/or quantified plaque parameters derived from the medical image of a coronary region of the subject. In some embodiments, the system can be configured to generate a weighted measure of one or more vascular morphology parameters and/or quantified plaque parameters determined and/or derived from raw medical images. For example, in some embodiments, the system can be configured to weight one or more vascular morphology parameters and/or quantified plaque parameters equally. In some embodiments, the system can be configured to weight one or more vascular morphology parameters and/or quantified plaque parameters differently. In some embodiments, the system can be configured to weight one or more vascular morphology parameters and/or quantified plaque parameters logarithmically, algebraically, and/or utilizing another mathematical transform.
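The vessel-wall/lumen idea at the start of this paragraph can be sketched with boolean masks; the masks, voxel size, and function name below are synthetic and only illustrate the subtraction step:

    import numpy as np

    def plaque_volume_between_walls(vessel_mask, lumen_mask, voxel_volume_mm3=0.25):
        """Treat voxels inside the outer vessel wall but outside the lumen as the
        wall/plaque compartment and return its volume (illustrative voxel size)."""
        wall_region = vessel_mask & ~lumen_mask
        return float(wall_region.sum()) * voxel_volume_mm3

    vessel = np.ones((4, 4), dtype=bool)          # synthetic outer vessel mask
    lumen = np.zeros((4, 4), dtype=bool)
    lumen[1:3, 1:3] = True                        # synthetic lumen mask
    print(plaque_volume_between_walls(vessel, lumen))   # 12 voxels * 0.25 = 3.0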
In some embodiments, the system is configured to calculate stenosis at block358using the weighted measure and/or using only some of the vascular morphology parameters and/or quantified plaque parameters. In some embodiments, the system can be configured to calculate stenosis on a vessel-by-vessel basis or a region-by-region basis. In some embodiments, based on the calculated stenosis, the system is configured to determine a CAD-RADS score at block360. This is in contrast to preexisting methods of determining a CAD-RADS score based on eyeballing or general assessment of a medical image by a physician, which can result in unreproducible results. In some embodiments described herein, however, the system can be configured to generate a reproducible and/or objective calculated CAD-RADS score based on automatic and/or dynamic image processing of a raw medical image. In some embodiments, at block362, the system can be configured to determine a presence or risk of ischemia based on the calculated stenosis, one or more quantified plaque parameters and/or vascular morphology parameters derived from the medical image. For example, in some embodiments, the system can be configured to determine a presence or risk of ischemia by combining one or more of the foregoing parameters, either weighted or not, or by using some or all of these parameters on an individual basis. In some embodiments, the system can be configured to determine a presence or risk of ischemia by comparing one or more of the calculated stenosis, one or more quantified plaque parameters and/or vascular morphology parameters to a database of known such parameters derived from medical images of other subjects, including for example healthy subjects and/or subjects at risk of a cardiovascular event. In some embodiments, the system can be configured to calculate presence or risk of ischemia on a vessel-by-vessel basis or a region-by-region basis. In some embodiments, at block364, the system can be configured to determine one or more quantified parameters of fat for one or more regions of fat identified from the medical image. For example, in some embodiments, the system can utilize any of the processes and/or techniques discussed herein in relation to deriving quantified parameters of plaque, such as those described in connection with blocks208,201,203,205,207,209, and211. In particular, in some embodiments, the system can be configured to determine one or more parameters of fat, including volume, geometry, radiodensity, and/or the like of one or more regions of fat within the medical image. In some embodiments, at block366, the system can be configured to generate a risk assessment of cardiovascular disease or event for the subject. In some embodiments, the generated risk assessment can comprise a risk score indicating a risk of coronary disease for the subject. In some embodiments, the system can generate a risk assessment based on an analysis of one or more vascular morphology parameters, one or more quantified plaque parameters, one or more quantified fat parameters, calculated stenosis, risk of ischemia, CAD-RADS score, and/or the like. In some embodiments, the system can be configured to generate a weighted measure of one or more vascular morphology parameters, one or more quantified plaque parameters, one or more quantified fat parameters, calculated stenosis, risk of ischemia, and/or CAD-RADS score of the subject. For example, in some embodiments, the system can be configured to weight one or more of the foregoing parameters equally.
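As an aside on block360, the following is a deliberately simplified mapping from a calculated maximum stenosis percentage to a CAD-RADS-style category; the actual reporting rules (including 4A/4B distinctions and modifiers) are richer than this sketch, and the function is not the disclosed scoring method:

    def cad_rads_category(max_stenosis_percent):
        """Simplified CAD-RADS-style bucket from maximum per-vessel stenosis."""
        if max_stenosis_percent <= 0:
            return 0          # no visible stenosis
        if max_stenosis_percent < 25:
            return 1          # minimal
        if max_stenosis_percent < 50:
            return 2          # mild
        if max_stenosis_percent < 70:
            return 3          # moderate
        if max_stenosis_percent < 100:
            return 4          # severe
        return 5              # total occlusion

    print(cad_rads_category(62.0))   # -> 3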
In some embodiments, the system can be configured to weight one or more of these parameters differently. In some embodiments, the system can be configured to weight one or more of these parameters logarithmically, algebraically, and/or utilizing another mathematical transform. In some embodiments, the system is configured to generate a risk assessment of coronary disease or cardiovascular event for the subject at block366using the weighted measure and/or using only some of these parameters. In some embodiments, the system can be configured to generate a risk assessment of coronary disease or cardiovascular event for the subject by combining one or more of the foregoing parameters, either weighted or not, or by using some or all of these parameters on an individual basis. In some embodiments, the system can be configured to generate a risk assessment of coronary disease or cardiovascular event by comparing one or more vascular morphology parameters, one or more quantified plaque parameters, one or more quantified fat parameters, calculated stenosis, risk of ischemia, and/or CAD-RADS score of the subject to a database of known such parameters derived from medical images of other subjects, including for example healthy subjects and/or subjects at risk of a cardiovascular event. Further, in some embodiments, the system can be configured to automatically and/or dynamically generate a CAD-RADS modifier based on one or more of the determined one or more vascular morphology parameters, the set of quantified plaque parameters of the one or more regions of plaque, the quantified coronary stenosis, the determined presence or risk of ischemia, and/or the determined set of quantified fat parameters. In particular, in some embodiments, the system can be configured to automatically and/or dynamically generate one or more applicable CAD-RADS modifiers for the subject, including for example one or more of nondiagnostic (N), stent (S), graft (G), or vulnerability (V), as defined by and used by CAD-RADS. For example, N can indicate that a study is nondiagnostic, S can indicate the presence of a stent, G can indicate the presence of a coronary artery bypass graft, and V can indicate the presence of vulnerable plaque, for example showing a low radiodensity value. In some embodiments, the system can be configured to generate a proposed treatment for the subject based on the generated risk assessment of coronary disease, one or more vascular morphology parameters, one or more quantified plaque parameters, one or more quantified fat parameters, calculated stenosis, risk of ischemia, CAD-RADS score, and/or CAD-RADS modifier derived from the raw medical image using image processing. In some embodiments, one or more processes described herein in connection withFIG.3Ccan be repeated. For example, if a medical image of the same subject is taken again at a later point in time, one or more processes described herein can be repeated and the analytical results thereof can be used for tracking of quantified plaque, calculated stenosis, CAD-RADS score and/or modifier derived from a medical image(s), determined risk of ischemia, quantified fat parameters, generated risk assessment of coronary disease for a subject, and/or other purposes.
Disease Tracking
In some embodiments, the systems, methods, and devices described herein can be configured to track the progression and/or regression of an arterial and/or plaque-based disease, such as a coronary disease.
For example, in some embodiments, the system can be configured to track the progression and/or regression of a disease by automatically and/or dynamically analyzing a plurality of medical images obtained from different times using one or more techniques discussed herein and comparing different parameters derived therefrom. As such, in some embodiments, the system can provide an automated disease tracking tool using non-invasive raw medical images as an input, which does not rely on subjective assessment. In particular, in some embodiments, the system can be configured to utilize a four-category system to determine whether plaque stabilization or worsening is occurring in a subject. For example, in some embodiments, these categories can include: (1) “plaque progression” or “rapid plaque progression”; (2) “mixed response—calcium dominant” or “non-rapid calcium dominant mixed response”; (3) “mixed response—non-calcium dominant” or “non-rapid non-calcium dominant mixed response”; or (4) “plaque regression.” In some embodiments, in “plaque progression” or “rapid plaque progression,” the overall volume or relative volume of plaque increases. In some embodiments, in “mixed response—calcium dominant” or “non-rapid calcium dominant mixed response,” the plaque volume remains relatively constant or does not increase to the threshold level of “rapid plaque progression” but there is a general progression of calcified plaque and a general regression of non-calcified plaque. In some embodiments, in “mixed response—non-calcium dominant” or “non-rapid non-calcium dominant mixed response,” the plaque volume remains relatively constant but there is a general progression of non-calcified plaque and a general regression of calcified plaque. In some embodiments, in “plaque regression,” the overall volume or relative volume of plaque decreases. In some embodiments, these 4 categories can be expanded to be more granular, for example including for higher vs. lower density calcium plaques (e.g., for those > vs. <1000 Hounsfield units) and/or to categorize more specifically in calcium-dominant and non-calcified plaque-dominant mixed response. For example, for the non-calcified plaque-dominant mixed response, the non-calcified plaque can further include necrotic core, fibrofatty plaque and/or fibrous plaque as separate categories within the overall umbrella of non-calcified plaque. Similarly, calcified plaques can be categorized as lower density calcified plaques, medium density calcified plaques and high density calcified plaques. FIG.3Dis a flowchart illustrating an overview of an example embodiment(s) of a method for disease tracking based on medical image analysis. For example, in some embodiments, the system can be configured to track the progression and/or regression of a plaque-based disease or condition, such as a coronary disease relating to or involving atherosclerosis, stenosis, ischemia, and/or the like, by analyzing one or more medical images obtained non-invasively. As illustrated inFIG.3D, in some embodiments, the system at block372is configured to access a first set of plaque parameters derived from a medical image of a subject at a first point in time. In some embodiments, the medical image can be stored in a medical image database100and can include any of the types of medical images described above, including for example CT, non-contrast CT, contrast-enhanced CT, MR, DECT, Spectral CT, and/or the like. 
In some embodiments, the medical image of the subject can comprise the coronary region, coronary arteries, carotid arteries, renal arteries, abdominal aorta, cerebral arteries, lower extremities, and/or upper extremities of the subject. In some embodiments, the set of plaque parameters can be stored in a plaque parameter database370, which can include any of the quantified plaque parameters discussed above in relation to blocks208,201,203,205,207,209, and/or211. In some embodiments, the system can be configured to directly access the first set of plaque parameters that were previously derived from a medical image(s) and/or stored in a plaque parameter database370. In some embodiments, the plaque parameter database370can be locally accessible and/or remotely accessible by the system via a network connection. In some embodiments, the system can be configured to dynamically and/or automatically derive the first set of plaque parameters from a medical image taken from a first point in time. In some embodiments, at block374, the system can be configured to access a second medical image(s) of the subject, which can be obtained from the subject at a later point in time than the medical image from which the first set of plaque parameters was derived. In some embodiments, the medical image can be stored in a medical image database100and can include any of the types of medical images described above, including for example CT, non-contrast CT, contrast-enhanced CT, MR, DECT, Spectral CT, and/or the like. In some embodiments, at block376, the system can be configured to dynamically and/or automatically derive a second set of plaque parameters from the second medical image taken from the second point in time. In some embodiments, the second set of plaque parameters can include any of the quantified plaque parameters discussed above in relation to blocks208,201,203,205,207,209, and/or211. In some embodiments, the system can be configured to store the derived or determined second set of plaque parameters in the plaque parameter database370. In some embodiments, at block378, the system can be configured to analyze changes in one or more plaque parameters from the first set derived from a medical image taken at a first point in time to the second set derived from a medical image taken at a later point in time. For example, in some embodiments, the system can be configured to compare a quantified plaque parameter between the two scans, such as for example radiodensity, volume, geometry, location, ratio or function of volume to surface area, heterogeneity index, radiodensity composition, radiodensity composition as a function of volume, ratio of radiodensity to volume, diffusivity, any combinations or relations thereof, and/or the like of one or more regions of plaque. In some embodiments, the system can be configured to determine the heterogeneity index of one or more regions of plaque by generating a spatial mapping or a three-dimensional histogram of radiodensity values across a geometric shape of one or more regions of plaque. In some embodiments, the system is configured to analyze changes in one or more non-image based metrics, such as for example serum biomarkers, genetics, omics, transcriptomics, microbiomics, and/or metabolomics. In some embodiments, the system is configured to determine a change in plaque composition in terms of radiodensity or stable v. unstable plaque between the two scans.
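A minimal sketch of the per-parameter comparison at block378; the parameter names and the two example dictionaries are invented for illustration:

    def parameter_changes(first_params, second_params):
        """Absolute change in each plaque parameter shared by the two scans."""
        shared = set(first_params) & set(second_params)
        return {name: second_params[name] - first_params[name] for name in shared}

    scan_1 = {"total_plaque_mm3": 180.0, "calcified_mm3": 60.0, "noncalcified_mm3": 120.0}
    scan_2 = {"total_plaque_mm3": 205.0, "calcified_mm3": 95.0, "noncalcified_mm3": 110.0}
    print(parameter_changes(scan_1, scan_2))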
For example, in some embodiments, the system is configured to determine a change in percentage of higher radiodensity or stable plaques v. lower radiodensity or unstable plaques between the two scans. In some embodiments, the system can be configured to track a change in higher radiodensity plaques v. lower radiodensity plaques between the two scans. In some embodiments, the system can be configured to define higher radiodensity plaques as those with a Hounsfield unit of above 1000 and lower radiodensity plaques as those with a Hounsfield unit of below 1000. In some embodiments, at block380, the system can be configured to determine the progression or regression of plaque and/or any other related measurement, condition, assessment, or related disease based on the comparison of the one or more parameters derived from two or more scans and/or change in one or more non-image based metrics, such as serum biomarkers, genetics, omics, transcriptomics, microbiomics, and/or metabolomics. For example, in some embodiments, the system can be configured to determine the progression and/or regression of plaque in general, atherosclerosis, stenosis, risk or presence of ischemia, and/or the like. Further, in some embodiments, the system can be configured to automatically and/or dynamically generate a CAD-RADS score of the subject based on the quantified or calculated stenosis, as derived from the two medical images. Additional detail regarding generating a CAD-RADS score is described herein in relation toFIG.3C. In some embodiments, the system can be configured to determine a progression or regression in the CAD-RADS score of the subject. In some embodiments, the system can be configured to compare the plaque parameters individually and/or combining one or more of them as a weighted measure. For example, in some embodiments, the system can be configured to weight the plaque parameters equally, differently, logarithmically, algebraically, and/or utilizing another mathematical transform. In some embodiments, the system can be configured to utilize only some or all of the quantified plaque parameters. In some embodiments, the state of plaque progression as determined by the system can include one of four categories, including rapid plaque progression, non-rapid calcium dominant mixed response, non-rapid non-calcium dominant mixed response, or plaque regression. In some embodiments, the system is configured to classify the state of plaque progression as rapid plaque progression when a percent atheroma volume increase of the subject is more than 1% per year. In some embodiments, the system is configured to classify the state of plaque progression as non-rapid calcium dominant mixed response when a percent atheroma volume increase of the subject is less than 1% per year and calcified plaque represents more than 50% of total new plaque formation. In some embodiments, the system is configured to classify the state of plaque progression as non-rapid non-calcium dominant mixed response when a percent atheroma volume increase of the subject is less than 1% per year and non-calcified plaque represents more than 50% of total new plaque formation. In some embodiments, the system is configured to classify the state of plaque progression as plaque regression when a decrease in total percent atheroma volume is present. In some embodiments, at block382, the system can be configured to generate a proposed treatment plan for the subject. 
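The four-category classification described above can be expressed as a short decision rule. The sketch below follows the thresholds stated in the text (a percent atheroma volume increase of more than 1% per year, and a 50% share of total new plaque formation); the function and argument names are illustrative assumptions, not part of the disclosed system.

```python
# Minimal sketch of the four-category plaque progression classification.
def classify_plaque_progression(pav_change_per_year: float,
                                new_calcified_volume: float,
                                new_noncalcified_volume: float) -> str:
    if pav_change_per_year < 0:
        return "plaque regression"
    if pav_change_per_year > 1.0:
        return "rapid plaque progression"
    total_new = new_calcified_volume + new_noncalcified_volume
    if total_new > 0 and new_calcified_volume / total_new > 0.5:
        return "mixed response - calcium dominant"
    return "mixed response - non-calcium dominant"

# Example: 0.4% PAV increase per year with mostly calcified new plaque.
print(classify_plaque_progression(0.4, 30.0, 10.0))  # mixed response - calcium dominant
```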
For example, in some embodiments, the system can be configured to generate a proposed treatment plan for the subject based on the determined progression or regression of plaque and/or any other related measurement, condition, assessment, or related disease based on the comparison of the one or more parameters derived from two or more scans. In some embodiments, one or more processes described herein in connection withFIG.3Dcan be repeated. For example, one or more processes described herein can be repeated and the analytical results thereof can be used for continued tracking of a plaque-based disease and/or other purposes.
Determination of Cause of Change in Calcium
In some embodiments, the systems, methods and devices disclosed herein can be configured to generate analysis and/or reports that can determine the likely cause of an increased calcium score. A high or increased calcium score alone is not representative of any specific cause, either positive or negative. Rather, in general, there can be various possible causes for a high or increased calcium score. For example, in some cases, a high or increased calcium score can be an indicator of significant heart disease and/or that the patient is at increased risk of a heart attack. Also, in some cases, a high or increased calcium score can be an indicator that the patient is increasing the amount of exercise performed, because exercise can convert fatty material plaque within the artery vessel into calcified plaque. In some cases, a high or increased calcium score can be an indicator of the patient beginning a statin regimen wherein the statin is converting the fatty material plaque into calcium. Unfortunately, a blood test alone cannot be used to determine which of the foregoing reasons is the likely cause of an increased calcium score. In some embodiments, by utilizing one or more techniques described herein, the system can be configured to determine the cause of an increased or high calcium score. More specifically, in some embodiments, the system can be configured to track a particular segment of an artery vessel wall of a patient in such a way as to monitor the conversion of a fatty deposit material plaque lesion to a mostly calcified plaque deposit, which can be helpful in determining the cause of an increased calcium score, such as one or more of the causes identified above. In addition, in some embodiments, the system can be configured to determine and/or use the location, size, shape, diffusivity and/or the attenuation radiodensity of one or more regions of calcified plaque to determine the cause of an increase in calcium score. As a non-limiting example, if a calcium plaque increases in density, this may represent a stabilization of plaque by treatment or lifestyle, whereas if a new calcium plaque forms where one was not there before (particularly with a lower attenuation density), this may represent an adverse finding of disease progression rather than stabilization. In some embodiments, one or more processes and techniques described herein may be applied for non-contrast CT scans (such as an ECG gated coronary artery calcium score or non-ECG gated chest CT) as well as contrast-enhanced CT scans (such as a coronary CT angiogram). As another non-limiting example, the CT scan image acquisition parameters can be altered to improve understanding of calcium changes over time. As an example, traditional coronary artery calcium imaging is done using a 2.5-3.0 mm slice thickness and detecting voxels/pixels that are 130 Hounsfield units or greater.
An alternative may be to do “thin” slice imaging with a 0.5 mm slice thickness or similar, and to detect all Hounsfield unit densities below 130 and above a certain threshold (e.g., 100), which may identify less dense calcium that may be missed by an arbitrary 130 Hounsfield unit threshold. FIG.3Eis a flowchart illustrating an overview of an example embodiment(s) of a method for determination of cause of change in calcium score, whether an increase or decrease, based on medical image analysis. As illustrated inFIG.3E, in some embodiments, the system can be configured to access a first calcium score and/or a first set of plaque parameters of a subject at block384. The first calcium score and/or a first set of plaque parameters can be derived from a medical image of a subject and/or from a blood test at a first point in time. In some embodiments, the medical image can be stored in a medical image database100and can include any of the types of medical images described above, including for example CT, non-contrast CT, contrast-enhanced CT, MR, DECT, Spectral CT, and/or the like. In some embodiments, the medical image of the subject can comprise the coronary region, coronary arteries, carotid arteries, renal arteries, abdominal aorta, cerebral arteries, lower extremities, and/or upper extremities of the subject. In some embodiments, the set of plaque parameters can be stored in a plaque parameter database370, which can include any of the quantified plaque parameters discussed above in relation to blocks208,201,203,205,207,209, and/or211. In some embodiments, the system can be configured to directly access and/or retrieve the first calcium score and/or first set of plaque parameters that are stored in a calcium score database398and/or plaque parameter database370respectively. In some embodiments, the plaque parameter database370and/or calcium score database398can be locally accessible and/or remotely accessible by the system via a network connection. In some embodiments, the system can be configured to dynamically and/or automatically derive the first set of plaque parameters and/or calcium score from a medical image and/or blood test of the subject taken from a first point in time. In some embodiments, at block386, the system can be configured to access a second calcium score and/or second medical image(s) of the subject, which can be obtained from the subject at a later point in time than the first calcium score and/or medical image from which the first set of plaque parameters were derived. For example, in some embodiments, the second calcium score can be derived from the second medical image and/or a second blood test taken of the subject at a second point in time. In some embodiments, the second calcium score can be stored in the calcium score database398. In some embodiments, the medical image can be stored in a medical image database100and can include any of the types of medical images described above, including for example CT, non-contrast CT, contrast-enhanced CT, MR, DECT, Spectral CT, and/or the like. In some embodiments, at block388, the system can be configured to compare the first calcium score to the second calcium score and determine a change in the calcium score. However, as discussed above, this alone typically does not provide insight as to the cause of the change in calcium score, if any.
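The two calcium-detection regimes mentioned above (a conventional 130 HU cut-off versus a lower “thin slice” threshold such as 100 HU) can be sketched as simple voxel masks. The array and parameter names below are assumptions for illustration; a real scoring pipeline would also apply slice-thickness weighting and connected-component analysis.

```python
# Minimal sketch of the calcium-detection thresholds described above.
import numpy as np

def calcium_masks(hu_volume: np.ndarray,
                  standard_threshold: float = 130.0,
                  lower_threshold: float = 100.0):
    standard_mask = hu_volume >= standard_threshold
    # Less dense calcium that a fixed 130 HU cut-off would miss.
    low_density_mask = (hu_volume >= lower_threshold) & (hu_volume < standard_threshold)
    return standard_mask, low_density_mask

hu = np.array([[95.0, 105.0, 128.0], [131.0, 400.0, 60.0]])
std, low = calcium_masks(hu)
print(int(std.sum()), int(low.sum()))  # 2 voxels >= 130 HU, 2 voxels in the 100-129 HU band
```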
In some embodiments, if there is no statistically significant change in calcium score between the two readings, for example if any difference is below a predetermined threshold value, then the system can be configured to end the analysis of the change in calcium score. In some embodiments, if there is a statistically significant change in calcium score between the two readings, for example if the difference is above a predetermined threshold value, then the system can be configured to continue its analysis. In particular, in some embodiments, at block390, the system can be configured to dynamically and/or automatically derive a second set of plaque parameters from the second medical image taken from the second point in time. In some embodiments, the second set of plaque parameters can include any of the quantified plaque parameters discussed above in relation to blocks208,201,203,205,207,209, and/or211. In some embodiments, the system can be configured to store the derived or determined second set of plaque parameters in the plaque parameter database370. In some embodiments, at block392, the system can be configured to analyze changes in one or more plaque parameters between the first set derived from a medical image taken at a first point in time to the second set derived from a medical image taken at a later point in time. For example, in some embodiments, the system can be configured to compare a quantified plaque parameter between the two scans, such as for example radiodensity, volume, geometry, location, ratio or function of volume to surface area, heterogeneity index, radiodensity composition, radiodensity composition as a function of volume, ratio of radiodensity to volume, diffusivity, any combinations or relations thereof, and/or the like of one or more regions of plaque and/or one or more regions surrounding plaque. In some embodiments, the system can be configured to determine the heterogeneity index of one or more regions of plaque by generating a spatial mapping or a three-dimensional histogram of radiodensity values across a geometric shape of one or more regions of plaque. In some embodiments, the system is configured to analyze changes in one or more non-image based metrics, such as for example serum biomarkers, genetics, omics, transcriptomics, microbiomics, and/or metabolomics. In some embodiments, the system is configured to determine a change in plaque composition in terms of radiodensity or stable v. unstable plaque between the two scans. For example, in some embodiments, the system is configured to determine a change in percentage of higher radiodensity or stable plaques v. lower radiodensity or unstable plaques between the two scans. In some embodiments, the system can be configured to track a change in higher radiodensity plaques v. lower radiodensity plaques between the two scans. In some embodiments, the system can be configured to define higher radiodensity plaques as those with a Hounsfield unit of above 1000 and lower radiodensity plaques as those with a Hounsfield unit of below 1000. In some embodiments, the system can be configured to compare the plaque parameters individually and/or combining one or more of them as a weighted measure. For example, in some embodiments, the system can be configured to weight the plaque parameters equally, differently, logarithmically, algebraically, and/or utilizing another mathematical transform. In some embodiments, the system can be configured to utilize only some or all of the quantified plaque parameters. 
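The gating step described above, in which further analysis proceeds only when the change in calcium score exceeds a predetermined threshold, can be sketched as follows. The threshold value and function name are assumptions for illustration only.

```python
# Minimal sketch of the significance gate on the change in calcium score.
def calcium_score_change_is_significant(first_score: float,
                                        second_score: float,
                                        threshold: float = 10.0) -> bool:
    return abs(second_score - first_score) >= threshold

if calcium_score_change_is_significant(112.0, 158.0):
    # ...continue with blocks 390/392: derive and compare plaque parameters...
    pass
```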
In some embodiments, at block394, the system can be configured to characterize the change in calcium score of the subject based on the comparison of the one or more plaque parameters, whether individually and/or combined or weighted. In some embodiments, the system can be configured to characterize the change in calcium score as positive, neutral, or negative. For example, in some embodiments, if the comparison of one or more plaque parameters reveals that plaque is stabilizing or showing high radiodensity values as a whole for the subject without generation of any new plaque, then the system can report that the change in calcium score is positive. In contrast, if the comparison of one or more plaque parameters reveals that plaque is destabilizing as a whole for the subject, for example due to generation of new unstable regions of plaque with low radiodensity values, then the system can report that the change in calcium score is negative. In some embodiments, the system can be configured to utilize any or all techniques of plaque quantification and/or tracking of plaque-based disease analysis discussed herein, including those discussed in connection withFIGS.3A,3B,3C, and3D. As a non-limiting example, in some embodiments, the system can be configured to characterize the cause of a change in calcium score based on determining and comparing a change in ratio between volume and radiodensity of one or more regions of plaque between the two scans. Similarly, in some embodiments, the system can be configured to characterize the cause of a change in calcium score based on determining and comparing a change in diffusivity and/or radiodensity of one or more regions of plaque between the two scans. For example, if the radiodensity of a region of plaque has increased, the system can be configured to characterize the change or increase in calcium score as positive. In some embodiments, if the system identifies one or more new regions of plaque in the second image that were not present in the first image, the system can be configured to characterize the change in calcium score as negative. In some embodiments, if the system determines that the volume to surface area ratio of one or more regions of plaque has decreased between the two scans, the system can be configured to characterize the change in calcium score as positive. In some embodiments, if the system determines that a heterogeneity or heterogeneity index of a region of plaque has decreased between the two scans, for example by generating and/or analyzing spatial mapping of radiodensity values, then the system can be configured to characterize the change in calcium score as positive. In some embodiments, the system is configured to utilize an AI, ML, and/or other algorithm to characterize the change in calcium score based on one or more plaque parameters derived from a medical image. For example, in some embodiments, the system can be configured to utilize an AI and/or ML algorithm that is trained using a CNN and/or using a dataset of known medical images with identified plaque parameters combined with calcium scores. In some embodiments, the system can be configured to characterize a change in calcium score by accessing known datasets of the same stored in a database. For example, the known dataset may include datasets of changes in calcium scores and/or medical images and/or plaque parameters derived therefrom of other subjects in the past.
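The rule-of-thumb characterization described above (new plaque regions suggesting a negative change; rising radiodensity, a falling volume-to-surface-area ratio, or falling heterogeneity suggesting a positive, stabilizing change) can be sketched as a simple voting rule. This is not the AI/ML approach mentioned in the text, and all names and thresholds are illustrative assumptions.

```python
# Minimal sketch of a rule-based positive/neutral/negative characterization
# of a change in calcium score.
def characterize_calcium_change(new_plaque_regions: int,
                                radiodensity_change: float,
                                volume_to_surface_ratio_change: float,
                                heterogeneity_change: float) -> str:
    if new_plaque_regions > 0:
        return "negative"
    positive_signals = sum([
        radiodensity_change > 0,            # plaque getting denser
        volume_to_surface_ratio_change < 0,  # plaque geometry becoming more compact
        heterogeneity_change < 0,            # plaque becoming more homogeneous
    ])
    if positive_signals >= 2:
        return "positive"
    return "neutral"

print(characterize_calcium_change(0, +35.0, -0.1, -0.2))  # positive
```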
In some embodiments, the system can be configured to characterize a change in calcium score and/or determine a cause thereof on a vessel-by-vessel basis, segment-by-segment basis, plaque-by-plaque basis, and/or a subject basis. In some embodiments, at block396, the system can be configured to generate a proposed treatment plan for the subject. For example, in some embodiments, the system can be configured to generate a proposed treatment plan for the subject based on the change in calcium score and/or characterization thereof for the subject. In some embodiments, one or more processes described herein in connection withFIG.3Ecan be repeated. For example, one or more processes described herein can be repeated and the analytical results thereof can be used for continued tracking and/or characterization of changes in calcium score for a subject and/or other purposes.
Prognosis of Cardiovascular Event
In some embodiments, the systems, devices, and methods described herein are configured to generate a prognosis of a cardiovascular event for a subject based on one or more of the medical image-based analysis techniques described herein. For example, in some embodiments, the system is configured to determine whether a patient is at risk for a cardiovascular event based on the amount of bad plaque buildup in the patient's artery vessels. For this purpose, a cardiovascular event can include major clinical cardiovascular events, such as heart attack, stroke or death, as well as disease progression and/or ischemia. In some embodiments, the system can identify the risk of a cardiovascular event based on a ratio of the amount and/or volume of bad plaque buildup versus the total surface area and/or volume of some or all of the artery vessels in a patient. In some embodiments, if the foregoing ratio exceeds a certain threshold, the system can be configured to output a certain risk factor and/or number and/or level associated with the patient. In some embodiments, the system is configured to determine whether a patient is at risk for a cardiovascular event based on an absolute amount or volume or a ratio of the amount or volume of bad plaque buildup in the patient's artery vessels compared to the total volume of some or all of the artery vessels. In some embodiments, the system is configured to determine whether a patient is at risk for a cardiovascular event based on results from blood chemistry or biomarker tests of the patient, for example whether certain blood chemistry or biomarker tests of the patient exceed certain threshold levels. In some embodiments, the system is configured to receive blood chemistry or biomarker test data of the patient as input from the user or other systems and/or to access such data from a database system. In some embodiments, the system can be configured to utilize not only artery information related to plaque, vessel morphology, and/or stenosis but also input from other imaging data about the non-coronary cardiovascular system, such as subtended left ventricular mass, chamber volumes and size, valvular morphology, vessel (e.g., aorta, pulmonary artery) morphology, fat, and/or lung and/or bone health. In some embodiments, the system can utilize the outputted risk factor to generate a treatment plan proposal. For example, the system can be configured to output a treatment plan that involves the administration of cholesterol-reducing drugs, such as statins, in order to transform the soft bad plaque into hard plaque that is safer and more stable for a patient.
In general, hard plaque that is largely calcified can have a significantly lower risk of rupturing into the artery vessel, thereby decreasing the chances of a clot forming in the artery vessel, which can decrease a patient's risk of a heart attack or other cardiac event. FIG.4Ais a flowchart illustrating an overview of an example embodiment(s) of a method for prognosis of a cardiovascular event based on and/or derived from medical image analysis. As illustrated inFIG.4A, in some embodiments, the system can be configured to access a medical image at block202, such as a CT scan of a coronary region of a subject, which can be stored in a medical image database100. Further, in some embodiments, the system can be configured to identify one or more arteries at block204and/or one or more regions of plaque at block206. In addition, in some embodiments, the system can be configured to determine one or more vascular morphology and/or quantified plaque parameters at block208. For example, in some embodiments, the system can be configured to determine a geometry and/or volume of a region of plaque and/or a vessel, a ratio or function of volume to surface area of a region of plaque, a heterogeneity or homogeneity index of a region of plaque, radiodensity of a region of plaque and/or a composition thereof by ranges of radiodensity values, a ratio of radiodensity to volume of a region of plaque, and/or a diffusivity of a region of plaque. In addition, in some embodiments, at block210, the system can be configured to classify one or more regions of plaque as stable v. unstable or good v. bad based on the one or more vascular morphology parameters and/or quantified plaque parameters determined and/or derived from raw medical images. Additional detail regarding the processes and techniques represented in blocks202,204,206,208, and210can be found in the description above in relation toFIG.2A. In some embodiments, the system at block412is configured to generate a ratio of bad plaque to the vessel on which the bad plaque appears. More specifically, in some embodiments, the system can be configured to determine a total surface area of a vessel identified on a medical image and a surface area of all regions of bad or unstable plaque within that vessel. Based on the foregoing, in some embodiments, the system can be configured to generate a ratio of surface area of all bad plaque within a particular vessel to the surface area of the entire vessel or a portion thereof shown in a medical image. Similarly, in some embodiments, the system can be configured to determine a total volume of a vessel identified on a medical image and a volume of all regions of bad or unstable plaque within that vessel. Based on the foregoing, in some embodiments, the system can be configured to generate a ratio of volume of all bad plaque within a particular vessel to the volume of the entire vessel or a portion thereof shown in a medical image. In some embodiments, at block414, the system is further configured to determine a total absolute volume and/or surface area of all bad or unstable plaque identified in a medical image. Also, in some embodiments, at block416, the system is configured to determine a total absolute volume of all plaque, including good plaque and bad plaque, identified in a medical image. Further, in some embodiments, at block418, the system can be configured to access or retrieve results from a blood chemistry and/or biomarker test of the patient and/or other non-imaging test results.
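The ratios generated at block412 can be sketched directly from per-vessel and per-plaque measurements. The inputs below are assumed to be measurements already produced by segmentation of the image; names and example values are illustrative only.

```python
# Minimal sketch of the bad-plaque-to-vessel ratios (surface area and volume).
def bad_plaque_ratios(vessel_surface_area: float,
                      vessel_volume: float,
                      bad_plaque_surface_areas: list[float],
                      bad_plaque_volumes: list[float]) -> dict:
    return {
        "surface_area_ratio": sum(bad_plaque_surface_areas) / vessel_surface_area,
        "volume_ratio": sum(bad_plaque_volumes) / vessel_volume,
    }

# Example with made-up measurements (mm^2 and mm^3):
print(bad_plaque_ratios(1500.0, 900.0, [40.0, 25.0], [18.0, 9.0]))
```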
Furthermore, in some embodiments, at block422, the system can be configured to access and/or analyze one or more non-coronary cardiovascular system medical images. In some embodiments, at block420, the system can be configured to analyze one or more of the generated ratio of bad plaque to a vessel, whether by surface area or volume, total absolute volume of bad plaque, total absolute volume of plaque, blood chemistry and/or biomarker test results, and/or analysis results of one or more non-coronary cardiovascular system medical images to determine whether one or more of these parameters, either individually and/or combined, is above a predetermined threshold. For example, in some embodiments, the system can be configured to analyze one or more of the foregoing parameters individually by comparing them to one or more reference values of healthy subjects and/or subjects at risk of a cardiovascular event. In some embodiments, the system can be configured to analyze a combination, such as a weighted measure, of one or more of the foregoing parameters by comparing the combined or weighted measure thereof to one or more reference values of healthy subjects and/or subjects at risk of a cardiovascular event. In some embodiments, the system can be configured to weight one or more of these parameters equally. In some embodiments, the system can be configured to weight one or more of these parameters differently. In some embodiments, the system can be configured to weight one or more of these parameters logarithmically, algebraically, and/or utilizing another mathematical transform. In some embodiments, the system can be configured to utilize only some of the aforementioned parameters, either individually, combined, and/or as part of a weighted measure. In some embodiments, at block424, the system is configured to generate a prognosis for a cardiovascular event for the subject. In particular, in some embodiments, the system is configured to generate a prognosis for cardiovascular event based on one or more of the analysis results of the generated ratio of bad plaque to a vessel, whether by surface area or volume, total absolute volume of bad plaque, total absolute volume of plaque, blood chemistry and/or biomarker test results, and/or analysis results of one or more non-coronary cardiovascular system medical images. In some embodiments, the system is configured to generate the prognosis utilizing an AI, ML, and/or other algorithm. In some embodiments, the generated prognosis comprises a risk score or risk assessment of a cardiovascular event for the subject. In some embodiments, the cardiovascular event can include one or more of atherosclerosis, stenosis, ischemia, heart attack, and/or the like. In some embodiments, at block426, the system can be configured to generate a proposed treatment plan for the subject. For example, in some embodiments, the system can be configured to generate a proposed treatment plan for the subject based on the change in calcium score and/or characterization thereof for the subject. In some embodiments, the generated treatment plan can include use of statins, lifestyle changes, and/or surgery. In some embodiments, one or more processes described herein in connection withFIG.4Acan be repeated. For example, one or more processes described herein can be repeated and the analytical results thereof can be used for continued prognosis of a cardiovascular event for a subject and/or other purposes. 
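The combination step at blocks420/424 can be illustrated as a weighted sum of the parameters listed above compared against a reference threshold to yield a coarse risk level. The weights, threshold, and parameter names below are assumptions for illustration; the disclosure also contemplates logarithmic or other transforms and AI/ML-based prognosis, which this sketch does not attempt to reproduce.

```python
# Minimal sketch of combining parameters into a weighted risk measure.
def cardiovascular_risk_score(parameters: dict, weights: dict) -> float:
    return sum(weights[name] * parameters[name] for name in weights)

params = {
    "bad_plaque_volume_ratio": 0.03,   # from block 412
    "total_bad_plaque_volume": 42.0,   # mm^3, from block 414
    "biomarker_flag": 1.0,             # 1.0 if a blood test exceeded its threshold
}
weights = {
    "bad_plaque_volume_ratio": 100.0,
    "total_bad_plaque_volume": 0.05,
    "biomarker_flag": 2.0,
}

score = cardiovascular_risk_score(params, weights)
risk_level = "elevated" if score > 5.0 else "baseline"
print(round(score, 2), risk_level)  # 7.1 elevated
```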
Patient-Specific Stent Determination In some embodiments, the systems, methods, and devices described herein can be used to determine and/or generate one or more parameters for a patient-specific stent and/or selection or guidance for implantation thereof. In particular, in some embodiments, the systems disclosed herein can be used to dynamically and automatically determine the necessary stent type, length, diameter, gauge, strength, and/or any other stent parameter for a particular patient based on processing of the medical image data, for example using AI, ML, and/or other algorithms. In some embodiments, by determining one or more patient-specific stent parameters that are best suited for a particular artery area, the system can reduce the risk of patient complications and/or insurance risks because if too large of a stent is implanted, then the artery wall can be stretched too thin resulting in a possible rupture, or undesirable high flow, or other issues. On the other hand, if too small of a stent is implanted, then the artery wall might not be stretched open enough resulting in too little blood flow or other issues. In some embodiments, the system is configured to dynamically identify an area of stenosis within an artery, dynamically determine a proper diameter of the identified area of the artery, and/or automatically select a stent from a plurality of available stent options. In some embodiments, the selected stent can be configured to prop open the artery area after implantation to the determined proper artery diameter. In some embodiments, the proper artery diameter is determined to be equivalent or substantially equivalent to what the diameter would naturally be without stenosis. In some embodiments, the system can be configured to dynamically generate a patient-specific surgical plan for implanting the selected stent in the identified artery area. For example, the system can be configured to determine whether a bifurcation of the artery is near the identified artery area and generate a patient-specific surgical plan for inserting two guidewires for handling the bifurcation and/or determining the position for jailing and inserting a second stent into the bifurcation. FIG.4Bis a flowchart illustrating an overview of an example embodiment(s) of a method for determination of patient-specific stent parameters based on medical image analysis. As illustrated inFIG.4B, in some embodiments, the system can be configured to access a medical image at block202, such as a CT scan of a coronary region of a subject. Further, in some embodiments, the system can be configured to identify one or more arteries at block204and/or one or more regions of plaque at block206. In addition, in some embodiments, the system can be configured to determine one or more vascular morphology and/or quantified plaque parameters at block208. For example, in some embodiments, the system can be configured to determine a geometry and/or volume of a region of plaque and/or a vessel at block201, a ratio or function of volume to surface area of a region of plaque at block203, a heterogeneity or homogeneity index of a region of plaque at block205, radiodensity of a region of plaque and/or a composition thereof by ranges of radiodensity values at block207, a ratio of radiodensity to volume of a region of plaque at block209, and/or a diffusivity of a region of plaque at block211. 
Additional detail regarding the processes and techniques represented in blocks202,204,206,208,201,203,205,207,209, and211can be found in the description above in relation toFIG.2A. In some embodiments, at block440, the system can be configured to analyze the medical image to determine one or more vessel parameters, such as the diameter, curvature, vascular morphology, vessel wall, lumen wall, and/or the like. In some embodiments, the system can be configured to determine or derive from the medical image one or more vessel parameters as shown in the medical image, for example with stenosis at certain regions along the vessel. In some embodiments, the system can be configured to determine one or more vessel parameters without stenosis. For example, in some embodiments, the system can be configured to graphically and/or hypothetically remove stenosis or plaque from a vessel to determine the diameter, curvature, and/or the like of the vessel if stenosis did not exist. In some embodiments, at block442, the system can be configured to determine whether a stent is recommended for the subject and, if so, one or more recommended parameters of a stent specific for that patient based on the medical analysis. For example, in some embodiments, the system can be configured to analyze one or more of the identified vascular morphology parameters, quantified plaque parameters, and/or vessel parameters. In some embodiments, the system can be configured to utilize an AI, ML, and/or other algorithm. In some embodiments, the system is configured to analyze one or more of the aforementioned parameters individually, combined, and/or as a weighted measure. In some embodiments, one or more of these parameters derived from a medical image, either individually or combined, can be compared to one or more reference values derived or collected from other subjects, including those who had a stent implanted and those who did not. In some embodiments, based on the determined parameters of a patient-specific stent, the system can be configured to determine a selection of a preexisting stent that matches those parameters and/or generate manufacturing instructions to manufacture a patient-specific stent with stent parameters derived from a medical image. In some embodiments, the system can be configured to recommend a diameter of a stent that is less than or substantially equal to the diameter of an artery if stenosis did not exist. In some embodiments, at block444, the system can be configured to generate a recommended surgical plan for stent implantation based on the analyzed medical image. For example, in some embodiments, the system can be configured to determine whether a bifurcation exists based on the medical image and/or generate guidelines for the positioning of guidewires and/or stent for the patient prior to surgery. As such, in some embodiments, the system can be configured to generate a detailed surgical plan that is specific to a particular patient based on medical image analysis of plaque and/or other parameters. In some embodiments, at block446, the system is configured to access or retrieve one or more medical images after stent implantation. In some embodiments, at block448, the system can be configured to analyze the accessed medical image to perform post-implantation analysis. For example, in some embodiments, the system can be configured to derive one or more vascular morphology and/or plaque parameters, including any of those discussed herein in relation to block208, after stent implantation. 
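The stent-selection logic described above can be sketched as choosing, from an available catalog, the largest stent whose diameter does not exceed the estimated stenosis-free vessel diameter and whose length covers the lesion. The catalog entries, field names, and fallback behavior are assumptions for illustration; they are not the disclosed selection or manufacturing process.

```python
# Minimal sketch of patient-specific stent selection from a catalog.
def select_stent(target_diameter_mm: float,
                 lesion_length_mm: float,
                 catalog: list[dict]) -> dict | None:
    candidates = [
        s for s in catalog
        if s["diameter_mm"] <= target_diameter_mm and s["length_mm"] >= lesion_length_mm
    ]
    if not candidates:
        return None  # may indicate a custom, manufactured-to-order stent
    return max(candidates, key=lambda s: s["diameter_mm"])

catalog = [
    {"name": "stent-A", "diameter_mm": 2.5, "length_mm": 18},
    {"name": "stent-B", "diameter_mm": 3.0, "length_mm": 18},
    {"name": "stent-C", "diameter_mm": 3.5, "length_mm": 24},
]
print(select_stent(target_diameter_mm=3.2, lesion_length_mm=15, catalog=catalog))  # stent-B
```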
Based on analysis of the foregoing, in some embodiments, the system can generate a further proposed treatment, such as for example recommended use of statins or other medications, lifestyle change, further surgery or stent implantation, and/or the like. In some embodiments, one or more processes described herein in connection withFIG.4Bcan be repeated. For example, one or more processes described herein can be repeated and the analytical results thereof can be used to determine the need for and/or parameters of an additional patient-specific stent for a patient and/or other purposes.
Patient-Specific Report
In some embodiments, the system is configured to dynamically generate a patient-specific report based on the analysis of the processed data generated from the raw CT scan data. In some embodiments, the patient-specific report is dynamically generated based on the processed data. In some embodiments, the written report is dynamically generated based on selecting and/or combining certain phrases from a database, wherein certain words, terms, and/or phrases are altered to be specific to the patient and the identified medical issues of the patient. In some embodiments, the system is configured to dynamically select one or more images from the image scanning data and/or the system-generated image views described herein, wherein the selected one or more images are dynamically inserted into the written report in order to generate a patient-specific report based on the analysis of the processed data. In some embodiments, the system is configured to dynamically annotate the selected one or more images for insertion into the patient-specific report, wherein the annotations are specific to the patient and/or are annotations based on the data processing performed by the devices, methods, and systems disclosed herein, for example, annotating the one or more images to include markings or other indicators to show where along the artery there exists bad plaque buildup that is significant. In some embodiments, the system is configured to dynamically generate a report based on past and/or present medical data. For example, in some embodiments, the system can be configured to show how a patient's cardiovascular health has changed over a period of time. In some embodiments, the system is configured to dynamically generate phrases and/or select phrases from a database to specifically describe the cardiovascular health of the patient and/or how the cardiovascular disease has changed within a patient. In some embodiments, the system is configured to dynamically select one or more medical images from prior medical scanning and/or current medical scanning for insertion into the medical report in order to show how the cardiovascular disease has changed over time in a patient, for example, showing past and present images juxtaposed with each other, or for example, showing past images that are superimposed on present images thereby allowing a user to move or fade or toggle between past and present images. In some embodiments, the patient-specific report is an interactive report that allows a user to interact with certain images, videos, animations, augmented reality (AR), virtual reality (VR), and/or features of the report.
In some embodiments, the system is configured to insert into the patient-specific report dynamically generated illustrations or images of patient artery vessels in order to highlight specific vessels and/or portions of vessels that contain or are likely to contain vascular disease that require review or further analysis. In some embodiments, the dynamically generated patient-specific report is configured to show a user the vessel walls using AR and/or VR. In some embodiments, the system is configured to insert into the dynamically generated report any ratios and/or dynamically generated data using the methods, systems, and devices disclosed herein. In some embodiments, the dynamically generated report comprises a radiology report. In some embodiments, the dynamically generated report is in an editable document, such as Microsoft Word®, in order to allow the physician to make edits to the report. In some embodiments, the dynamically generated report is saved into a PACS (Picture Archiving and Communication System) or other EMR (electronic medical records) system. In some embodiments, the system is configured to transform and/or translate data from the imaging into drawings or infographics in a video format, with or without audio, in order to transmit accurately the information in a way that is better understandable to any patient to improve literacy. In some embodiments, this method of improving literacy is coupled to a risk stratification tool that defines a lower risk with higher literacy, and a higher risk with lower literacy. In some embodiments, these report outputs may be patient-derived and/or patient-specific. In some embodiments, real patient imaging data (for example, from their CT) can be coupled to graphics from their CT and/or drawings from the CT to explain the findings further. In some embodiments, real patient imaging data, graphics data and/or drawings data can be coupled to an explaining graphic that is not from the patient but that can help the patient better understand (for example, a video about lipid-rich plaque). In some embodiments, these patient reports can be imported into an application that allows for following disease over time in relation to control of heart disease risk factors, such as diabetes or hypertension. In some embodiments, an app and/or user interface can allow for following of blood glucose and blood pressure over time and/or relate the changes of the image over time in a way that augments risk prediction. In some embodiments, the system can be configured to generate a video report that is specific to the patient based on the processed data generated from the raw CT data. In some embodiments, the system is configured to generate and/or provide a personalized cinematic viewing experience for a user, which can be programmed to automatically and dynamically change content based upon imaging findings, associated auto-calculated diagnoses, and/or prognosis algorithms. In some embodiments, the method of viewing, unlike traditional reporting, is through a movie experience which can be in the form of a regular 2D movie and/or through a mixed reality movie experience through AR or VR. In some embodiments, in the case of both 2D and mixed reality, the personalized cinematic experience can be interactive with the patient to predict their prognosis, such as risk of heart attack, rate of disease progression, and/or ischemia. 
In some embodiments, the system can be configured to dynamically generate a video report that comprises both cartoon images and/or animation along with audio content in combination with actual CT image data from the patient. In some embodiments, the dynamically generated video medical report is dynamically narrated based on selecting phrases, terms and/or other content from a database such that a voice synthesizer or pre-made voice content can be used for playback during the video report. In some embodiments, the dynamically generated video medical report is configured to comprise any of the images disclosed herein. In some embodiments, the dynamically generated video medical report can be configured to dynamically select one or more medical images from prior medical scanning and/or current medical scanning for insertion into the video medical report in order to show how the cardiovascular disease has changed over time in a patient. For example, in some embodiments, the report can show past and present images juxtaposed next to each other. In some embodiments, the report can show past images that are superimposed on present images thereby allowing a user to toggle or move or fade between past and present images. In some embodiments, the dynamically generated video medical report can be configured to show actual medical images, such as a CT medical image, in the video report and then transition to an illustrative view or cartoon view (partial or entirely an illustrative or cartoon view) of the actual medical images, thereby highlighting certain features of the patient's arteries. In some embodiments, the dynamically generated video medical report is configured to show a user the vessel walls using AR and/or VR. FIG.5Ais a flowchart illustrating an overview of an example embodiment(s) of a method for generation of a patient-specific medical report based on medical image analysis. As illustrated inFIG.5A, in some embodiments, the system can be configured to access a medical image at block202. In some embodiments, the medical image can be stored in a medical image database100. Additional detail regarding the types of medical images and other processes and techniques represented in block202can be found in the description above in relation toFIG.2A. In some embodiments, at block354, the system is configured to identify one or more arteries, plaque, and/or fat in the medical image, for example using AI, ML, and/or other algorithms. Additional detail regarding the types of medical images and other processes and techniques represented in block354can be found in the description above in relation toFIG.3C. In some embodiments, at block208, the system can be configured to determine one or more vascular morphology and/or quantified plaque parameters. For example, in some embodiments, the system can be configured to determine a geometry and/or volume of a region of plaque and/or a vessel at block201, a ratio or function of volume to surface area of a region of plaque at block203, a heterogeneity or homogeneity index of a region of plaque at block205, radiodensity of a region of plaque and/or a composition thereof by ranges of radiodensity values at block207, a ratio of radiodensity to volume of a region of plaque at block209, and/or a diffusivity of a region of plaque at block211. Additional detail regarding the processes and techniques represented in blocks208,201,203,205,207,209, and211can be found in the description above in relation toFIG.2A. 
In some embodiments, at block508, the system can be configured to determine and/or quantify stenosis, atherosclerosis, risk of ischemia, risk of cardiovascular event or disease, and/or the like. The system can be configured to utilize any techniques and/or algorithms described herein, including but not limited to those described above in connection with block358and block366ofFIG.3C. In some embodiments, at block510, the system can be configured to generate an annotated medical image and/or quantized color map using the analysis results derived from the medical image. For example, in some embodiments, the system can be configured to generate a quantized map showing one or more arteries, plaque, fat, good plaque, bad plaque, vascular morphologies, and/or the like. In some embodiments, at block512, the system can be configured to determine a progression of plaque and/or disease of the patient, for example based on analysis of previously obtained medical images of the subject. In some embodiments, the system can be configured to utilize any algorithms or techniques described herein in relation to disease tracking, including but not limited to those described in connection with block380and/orFIG.3Dgenerally. In some embodiments, at block514, the system can be configured to generate a proposed treatment plan for the patient based on the determined progression of plaque and/or disease. In some embodiments, the system can be configured to utilize any algorithms or techniques described herein in relation to disease tracking and treatment generation, including but not limited to those described in connection with block382and/orFIG.3Dgenerally. In some embodiments, at block516, the system can be configured to generate a patient-specific report. The patient-specific report can include one or more medical images of the patient and/or derived graphics thereof. For example, in some embodiments, the patient report can include one or more annotated medical images and/or quantized color maps. In some embodiments, the patient-specific report can include one or more vascular morphology and/or quantified plaque parameters derived from the medical image. In some embodiments, the patient-specific report can include quantified stenosis, atherosclerosis, ischemia, risk of cardiovascular event or disease, CAD-RADS score, and/or progression or tracking of any of the foregoing. In some embodiments, the patient-specific report can include a proposed treatment, such as statins, lifestyle changes, and/or surgery. In some embodiments, the system can be configured to access and/or retrieve from a patient report database500one or more phrases, characterizations, graphics, videos, audio files, and/or the like that are applicable and/or can be used to generate the patient-specific report. In generating the patient-specific report, in some embodiments, the system can be configured to compare one or more parameters, such as those mentioned above and/or derived from a medical image of the patient, with one or more parameters previously derived from other patients. For example, in some embodiments, the system can be configured to compare one or more quantified plaque parameters derived from the medical image of the patient with one or more quantified plaque parameters derived from medical images of other patients in the similar or same age group. 
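The phrase-selection step described above (comparing the patient's quantified parameters against a reference cohort and pulling matching text from a report database) can be sketched as a simple lookup. The phrase table, cohort statistics, and function names are illustrative assumptions, not the disclosed report database.

```python
# Minimal sketch of assembling patient-specific report text (block 516).
PHRASES = {
    "above": "Total plaque volume is above the typical range for the patient's age group.",
    "within": "Total plaque volume is within the typical range for the patient's age group.",
    "below": "Total plaque volume is below the typical range for the patient's age group.",
}

def select_phrase(patient_value: float, cohort_low: float, cohort_high: float) -> str:
    if patient_value > cohort_high:
        return PHRASES["above"]
    if patient_value < cohort_low:
        return PHRASES["below"]
    return PHRASES["within"]

report_lines = [
    select_phrase(patient_value=212.0, cohort_low=90.0, cohort_high=180.0),
]
print("\n".join(report_lines))
```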
Based on the comparison, in some embodiments, the system can be configured to determine which phrases, characterizations, graphics, videos, audio files, and/or the like to include in the patient-specific report, for example by identifying similar previous cases. In some embodiments, the system can be configured to utilize an AI and/or ML algorithm to generate the patient-specific report. In some embodiments, the patient-specific report can include a document, AR experience, VR experience, video, and/or audio component. FIGS.5B-5Iillustrate example embodiment(s) of a patient-specific medical report generated based on medical image analysis. In particular,FIG.5Billustrates an example cover page of a patient-specific report. FIGS.5C-5Iillustrate portions of an example patient-specific report(s). In some embodiments, a patient-specific report generated by the system may include only some or all of these illustrated portions. As illustrated inFIGS.5C-5I, in some embodiments, the patient-specific report includes a visualization of one or more arteries and/or portions thereof, such as for example, the Right Coronary Artery (RCA), R-Posterior Descending Artery (R-PDA), R-Posterolateral Branch (R-PLB), Left Main (LM) and Left Anterior Descending (LAD) Artery, 1st Diagonal (D1) Artery, 2nd Diagonal (D2) Artery, Circumflex (Cx) Artery, 1st Obtuse Marginal Branch (OM1), 2nd Obtuse Marginal Branch (OM2), Ramus Intermedius (RI), and/or the like. In some embodiments, for each of the arteries included in the report, the system is configured to generate a straightened view for easy tracking along the length of the vessel, such as for example at the proximal, mid, and/or distal portions of an artery. In some embodiments, a patient-specific report generated by the system includes a quantified measure of various plaque and/or vascular morphology-related parameters shown within the vessel. In some embodiments, for each or some of the arteries included in the report, the system is configured to generate and/or derive from a medical image of the patient and include in a patient-specific report a quantified measure of the total plaque volume, total low-density or non-calcified plaque volume, total non-calcified plaque volume, and/or total calcified plaque volume. Further, in some embodiments, for each or some of the arteries included in the report, the system is configured to generate and/or derive from a medical image of the patient and include in a patient-specific report a quantified measure of stenosis severity, such as for example a percentage of the greatest diameter stenosis within the artery. In some embodiments, for each or some of the arteries included in the patient-specific report, the system is configured to generate and/or derive from a medical image of the patient and include in a patient-specific report a quantified measure of vascular remodeling, such as for example the highest remodeling index.
Visualization/GUI
Atherosclerosis is the buildup of fats, cholesterol, and other substances in and on the artery walls (e.g., plaque), which can restrict blood flow. The plaque can burst, triggering a blood clot. Although atherosclerosis is often considered a heart problem, it can affect arteries anywhere in the body. However, determining information about plaque in coronary arteries can be difficult due in part to imperfect imaging data, aberrations that can be present in coronary artery images (e.g., due to movement of the patient), and differences in the manifestation of plaque in different patients.
Accordingly, neither calculated information derived from CT images nor visual inspection of the CT images alone provides sufficient information to determine conditions that exist in the patient's coronary arteries. Portions of this disclosure describe information that can be determined from CT images using automatic or semiautomatic processes. For example, a machine learning process that has been trained on thousands of CT scans can determine information depicted in the CT images, an analyst can review and enhance the results of the machine learning process, and the example user interfaces described herein can provide the determined information to another analyst or a medical practitioner. While the information determined from the CT images is invaluable in assessing the condition of a patient's coronary arteries, visual analysis of the coronary arteries by a skilled medical practitioner, with the information determined from the CT images in hand, allows a more comprehensive assessment of the patient's coronary arteries. As indicated herein, embodiments of the system facilitate the analysis and visualization of vessel lumens, vessel walls, plaque, and stenosis in and around coronary vessels. The system can display vessels in multi-planar formats, cross-sectional views, a 3D coronary artery tree view, and axial, sagittal, and coronal views based on a set of computerized tomography (CT) images, e.g., generated by a CT scan of a patient's vessels. The CT images can be Digital Imaging and Communications in Medicine (DICOM) images, a standard for the communication and management of medical imaging information and related data. “CT images,” or “CT scans,” as used herein, is a broad term that refers to pictures of structures within the body created by a computer-controlled scanner, for example, by a scanner that uses an X-ray beam. However, it is appreciated that other radiation sources and/or imaging systems may produce a set of CT-like images. Accordingly, the use of the term “CT images” herein may refer to any type of imaging system having any type of imaging source that produces a set of images depicting “slices” of structures within a body, unless otherwise indicated. One key aspect of the user interface described herein is the precise correlation of the views and information that is displayed of the CT images. Locations in the CT images displayed on portions (or “panels”) of the user interface are correlated precisely by the system such that the same locations are displayed concurrently in different views. Simultaneously displaying a portion of the coronary vessel in, for example, two, three, four, five, or six views, and allowing a practitioner to explore particular locations of a coronary vessel in one view while the other two to six views correspondingly show the exact same location, provides an enormous amount of insight into the condition of the vessel and allows the practitioner/analyst to quickly and easily visually integrate the presented information to gain a comprehensive and accurate understanding of the condition of the coronary vessel being examined. Advantageously, the present disclosure allows CT images and data to be analyzed in a more useful and accurate way, allows users to interact with and analyze images and data in a more analytically useful way, and/or allows computational analysis to be performed in a more useful way, for example to detect conditions requiring attention.
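The view correlation described above can be illustrated with a small sketch: a single position selected along an extracted vessel centerline is mapped back to voxel coordinates so that the axial, coronal, sagittal, and cross-sectional panels can all jump to the same location. The centerline representation (an ordered list of voxel indices) and the axis conventions are assumptions for illustration only.

```python
# Minimal sketch of synchronizing multiple panels to one centerline position.
def synchronized_slices(centerline: list[tuple[int, int, int]], position: int) -> dict:
    x, y, z = centerline[position]
    return {
        "axial_slice": z,               # slice index along the scanner's z axis
        "coronal_slice": y,
        "sagittal_slice": x,
        "cross_section_at": (x, y, z),  # center of the short-axis (cross-sectional) view
    }

centerline = [(120, 200, 45), (121, 199, 46), (123, 197, 47)]
print(synchronized_slices(centerline, position=1))
```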
The graphical user interfaces and the processing described herein allow a user to visualize otherwise difficult-to-define relationships between different information and views of coronary arteries. In an example, displaying a portion of a coronary artery simultaneously in a CMPR view, an SMPR view, and a cross-sectional view can provide an analyst with insight into plaque or stenosis associated with the coronary artery that may not otherwise be perceivable using a fewer number of views. Similarly, displaying the portion of the coronary artery in an axial view, a sagittal view, and a coronal view, in addition to the CMPR view, the SMPR view, and the cross-sectional view, can provide further information to the analyst that would not otherwise be perceivable with a fewer number of views of the coronary artery. In various embodiments, any of the information described or illustrated herein, determined by the system or an analyst interacting with the system, and other information (for example, from another outside source, e.g., an analyst) that relates to coronary arteries/vessels associated with the set of CT images (“artery information”), including information indicative of stenosis and plaque of segments of the coronary vessels in the set of CT images, and information indicative of identification and location of the coronary vessels in the set of CT images, can be stored on the system and presented in various panels of the user interface and in reports. The present disclosure allows for easier and quicker analysis of a patient's coronary arteries and features associated with coronary arteries. The present disclosure also allows faster analysis of coronary artery data by allowing quick and accurate access to selected portions of coronary artery data. Without using the present system and methods of the disclosure, quickly selecting, displaying, and analyzing CT images and coronary artery information can be cumbersome and inefficient, and may lead to an analyst missing critical information in their analysis of a patient's coronary arteries, which may lead to inaccurate evaluation of a patient's condition. In various embodiments, the system can identify a patient's coronary arteries either automatically (e.g., using a machine learning algorithm during the preprocessing step of a set of CT images associated with a patient), or interactively (e.g., by receiving at least some input from a user) by an analyst or practitioner using the system. As described herein, in some embodiments, the processing of the raw CT scan data can comprise analysis of the CT data in order to determine and/or identify the existence and/or nonexistence of certain artery vessels in a patient. As a naturally occurring phenomenon, certain arteries may be present in certain patients whereas such certain arteries may not exist in other patients. In some embodiments, the system can be configured to identify and label the artery vessels detected in the scan data. In certain embodiments, the system can be configured to allow a user to click upon a label of an identified artery within the patient, thereby allowing that artery to be highlighted in an electronic representation of a plurality of artery vessels existing in the patient. In some embodiments, the system is configured to analyze arteries present in the CT scan data and display various views of the arteries present in the patient, for example within 10-15 minutes or less.
In contrast, as an example, conducting a visual assessment of a CT scan to identify stenosis alone, without consideration of good or bad plaque or any other factor, can take anywhere from 15 minutes to more than an hour depending on skill level, and can also have substantial variability across radiologists and/or cardiac imagers. Although some systems may allow an analyst to view the CT images associated with a patient, they lack the ability to display all of the necessary views, in real or near real-time, with correspondence between 3D artery tree views of coronary arteries specific to a patient, multiple SMPR views, and a cross-sectional view, as well as an axial view, a sagittal view, and/or a coronal view. Embodiments of the system can be configured to display one or more, or all, of these views, which provides unparalleled visibility of a patient's coronary arteries and allows an analyst or practitioner to perceive features and information that simply may not be perceivable without these views. That is, a user interface configured to show all of these views, as well as information related to the displayed coronary vessel, allows an analyst or practitioner to use their own experience in conjunction with the information that the system is providing to better identify conditions of the arteries, which can help them make a determination on treatments for the patient. In addition, the information that is determined by the system and displayed by the user interface that cannot be perceived by an analyst or practitioner is presented in a manner that is easy to understand and quick to assimilate. As an example, knowledge of the actual radiodensity values of plaque is not something that an analyst can determine simply by looking at the CT image, but the system can determine and present a full analysis of all plaque that is found. In general, artery vessels are curvilinear in nature. Accordingly, the system can be configured to straighten out such curvilinear artery vessels into a substantially straight-line view of the artery, and in some embodiments, the foregoing is referred to as a straight multiplanar reformation (MPR) view. In some embodiments, the system is configured to show a dashboard view with a plurality of artery vessels shown in a straight multiplanar reformation view. In some embodiments, the linear view of the artery vessels shows a cross-sectional view along a longitudinal axis (or the length of the vessel or a long axis) of the artery vessel. In some embodiments, the system can be configured to allow the user to rotate in a 360° fashion about the longitudinal axis of the substantially linear artery vessels in order for the user to review the vessel walls from various views and angles. In some embodiments, the system is configured to show not only the narrowing of the inner vessel diameter but also characteristics of the inner and/or outer vessel wall itself. In some embodiments, the system can be configured to display the plurality of artery vessels in multiple linear views, e.g., in an SMPR view. In some embodiments, the system can be configured to display the plurality of artery vessels in a perspective view in order to better show the user the curvatures of the artery vessels. In some embodiments, the perspective view is referred to as a curved multiplanar reformation view. In some embodiments, the perspective view comprises the CT image of the heart and the vessels, for example, in an artery tree view.
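The straightening described above is conventionally done by resampling the volume along directions perpendicular to the vessel centerline so that the curvilinear vessel is laid out along a straight axis. The sketch below illustrates the general idea under simplifying assumptions (a centerline given as ordered voxel coordinates, a simple local frame, linear interpolation, a fixed sampling width); it is an illustration of the technique, not the system's actual reformation algorithm.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def straightened_mpr(volume, centerline, half_width=20, angle_deg=0.0):
    """Resample `volume` on lines perpendicular to `centerline` to build a straight MPR image.

    volume:     3D array of HU values indexed (z, y, x).
    centerline: (N, 3) array of ordered centerline points in voxel coordinates.
    half_width: half of the sampled width, in voxels, on each side of the centerline.
    angle_deg:  rotation of the sampling direction about the local tangent (e.g., 0, 22.5, 45).
    """
    centerline = np.asarray(centerline, dtype=float)
    tangents = np.gradient(centerline, axis=0)
    tangents = tangents / np.maximum(np.linalg.norm(tangents, axis=1, keepdims=True), 1e-9)

    rows = []
    ref = np.array([0.0, 0.0, 1.0])  # arbitrary reference axis used to build a perpendicular frame
    for point, tangent in zip(centerline, tangents):
        u = np.cross(tangent, ref)
        if np.linalg.norm(u) < 1e-6:                      # tangent parallel to the reference axis
            u = np.cross(tangent, np.array([0.0, 1.0, 0.0]))
        u = u / np.linalg.norm(u)
        v = np.cross(tangent, u)
        theta = np.deg2rad(angle_deg)
        direction = np.cos(theta) * u + np.sin(theta) * v  # rotated in-plane sampling direction
        offsets = np.arange(-half_width, half_width + 1)
        sample_points = point[None, :] + offsets[:, None] * direction[None, :]
        rows.append(map_coordinates(volume, sample_points.T, order=1))
    return np.stack(rows)  # shape (N, 2*half_width + 1): one straightened elevation view
```

Calling this with different values of angle_deg (for example 0, 22.5, 45, and 67.5) yields the kind of rotated straightened views discussed later in connection with the SMPR panel.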
In some embodiments, the perspective view comprises a modified CT image showing the artery vessels without the heart tissue displayed in order to better highlight the vessels of the heart. In some embodiments, the system can be configured to allow the user to rotate the perspective view in order to display the various arteries of the patient from different perspectives. In some embodiments, the system can be configured to show a cross-sectional view of an artery vessel along a latitudinal axis (or the width of the vessel or short axis). In contrast to the cross-sectional view along a longitudinal axis, in some embodiments, the system can allow a user to more clearly see the stenosis or vessel wall narrowing by viewing the artery vessel from a cross-sectional view across a latitudinal axis. In some embodiments, the system is configured to display the plurality of artery vessels in an illustrative view or cartoon view. In the illustrative view of the artery vessels, in some embodiments, the system can utilize solid coloring or grey scaling of the specific artery vessels or sections of specific artery vessels to indicate varying degrees of risk for a cardiovascular event to occur in a particular artery vessel or section of artery vessel. For example, the system can be configured to display a first artery vessel in yellow to indicate a medium risk of a cardiovascular event occurring in the first artery vessel while displaying a second artery vessel in red to indicate a high risk of a cardiovascular event occurring in the second artery vessel. In some embodiments, the system can be configured to allow the user to interact with the various artery vessels and/or sections of artery vessels in order to better understand the designated risk associated with the artery vessel or section of artery vessel. In some embodiments, the system can allow the user to switch from the illustrative view to a CT view of the arteries of the patient. In some embodiments, the system can be configured to display in a single dashboard view all or some of the various views described herein. For example, the system can be configured to display the linear view with the perspective view. In another example, the system can be configured to display the linear view with the illustrative view. In some embodiments, the processed CT image data can result in allowing the system to utilize such processed data to display to a user various arteries of a patient. As described above, the system can be configured to utilize the processed CT data in order to generate a linear view of the plurality of artery vessels of a patient. In some embodiments, the linear view displays the arteries of a patient as in a linear fashion to resemble a substantially straight line. In some embodiments, the generating of the linear view requires the stretching of the image of one or more naturally occurring curvilinear artery vessels. In some embodiments, the system can be configured to utilize such processed data to allow a user to rotate a displayed linear view of an artery in a 360° rotatable fashion. In some embodiments, the processed CT image data can visualize and compare the artery morphologies over time, i.e., throughout the cardiac cycle. The dilation of the arteries, or lack thereof, may represent a healthy versus sick artery that is not capable of vasodilation. In some embodiments, a prediction algorithm can be made to determine the ability of the artery to dilate or not, by simply examining a single point in time. 
As mentioned above, aspects of the system can help to visualize a patient's coronary arteries. In some embodiments, the system can be configured to utilize the processed data from the raw CT scans in order to dynamically generate a visualization interface for a user to interact with and/or analyze the data for a particular patient. The visualization system can display multiple arteries associated with a patient's heart. The system can be configured to display multiple arteries in a substantially linear fashion even though the arteries are not linear within the body of the patient. In some embodiments, the system can be configured to allow the user to scroll up and down or left to right along the length of the artery in order to visualize different areas of the artery. In some embodiments, the system can be configured to allow a user to rotate in a 360° fashion an artery in order to allow the user to see different portions of the artery at different angles. Advantageously, the system can be configured to comprise or generate markings in areas where there is an amount of plaque buildup that exceeds a threshold level. In some embodiments, the system can be configured to allow the user to target a particular area of the artery for further examination. The system can be configured to allow the user to click on one or more marked areas of the artery in order to display the underlying data associated with the artery at a particular point along the length of the artery. In some embodiments, the system can be configured to generate a cartoon rendition of the patient's arteries. In some embodiments, the cartoon or computer-generated representation of the arteries can comprise a color-coded scheme for highlighting certain areas of the patient's arteries for the user to examine further. In some embodiments, the system can be configured to generate a cartoon or computer-generated image of the arteries using a red color, or any other graphical representation, to signify arteries that require further analysis by the user. In some embodiments, the system can label the cartoon representation of the arteries, and the 3D representation of the arteries described above, with stored coronary vessel labels according to the labeling scheme. If a user desires, the labeling scheme can be changed or refined and preferred labels may be stored and used label coronary arteries. In some embodiments, the system can be configured to identify areas in the artery where ischemia is likely to be found. In some embodiments, the system can be configured to identify the areas of plaque in which bad plaque exists. In some embodiments, the system can be configured to identify bad plaque areas by determining whether the coloration and/or the gray scale level of the area within the artery exceeds a threshold level. In an example, the system can be configured to identify areas of plaque where the image of a plaque area is black or substantially black or dark gray. In an example, the system can be configured to identify areas of “good” plaque by the designation of whiteness or light grey in a plaque area within the artery. In some embodiments, the system is configured to identify portions of an artery vessel where there is high risk for a cardiac event and/or draw an outline following the vessel wall or profiles of plaque build-up along the vessel wall. 
In some embodiments, the system is further configured to display this information to a user and/or provide editing tools for the user to change the identified portions or the outline designations if the user thinks that the AI algorithm incorrectly drew the outline designations. In some embodiments, the system comprises an editing tool referred to as “snap-to-lumen,” wherein the user selects a region of interest by drawing a box around a particular area of the vessel and selecting the snap-to-lumen option and the system automatically redraws the outline designation to more closely track the boundaries of the vessel wall and/or the plaque build-up, wherein the system is using image processing techniques, such as but not limited to edge detection. In some embodiments, the AI algorithm does not process the medical image data with complete accuracy and therefore editing tools are necessary to complete the analysis of the medical image data. In some embodiments, the final user editing of the medical image data allows for faster processing of the medical image data than using solely AI algorithms to process the medical image data. In some embodiments, the system is configured to replicate images from higher resolution imaging. As an example, in CT, partial volume artifacts from calcium are a known artifact of CT that results in overestimation of the volume of calcium and the narrowing of an artery. By training and validating a CT artery appearance to that of intravascular ultrasound or optical coherence tomography or histopathology, in some embodiments, the CT artery appearance may be replicated to be similar to that of IVUS or OCT and, in this way, de-bloom the coronary calcium artifacts to improve the accuracy of the CT image. In some embodiments, the system is configured to provide a graphical user interface for displaying a vessel from a beginning portion to an ending portion and/or the tapering of the vessel over the course of the vessel length. Many examples of panels that can be displayed in a graphical user interface are illustrated and described in reference toFIGS.6A-9N. In some embodiments, portions of the user interface, panels, buttons, or information displayed on the user interface be arranged differently than what is described herein and illustrated in the Figures. For example, a user may have a preference for arranging different views of the arteries in different portions of the user interface. In some embodiments, the graphical user interface is configured to annotate the displayed vessel view with plaque build-up data obtained from the AI algorithm analysis in order to show the stenosis of the vessel or a stenosis view. In some embodiments, the graphical user interface system is configured to annotate the displayed vessel view with colored markings or other markings to show areas of high risk or further analysis, areas of medium risk, and/or areas of low risk. For example, the graphical user interface system can be configured to annotate certain areas along the vessel length in red markings, or other graphical marking, to indicate that there is significant bad fatty plaque build-up and/or stenosis. In some embodiments, the annotated markings along the vessel length are based on one or more variable such as but not limited to stenosis, biochemistry tests, biomarker tests, AI algorithm analysis of the medical image data, and/or the like. In some embodiments, the graphical user interface system is configured to annotate the vessel view with an atherosclerosis view. 
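As a rough illustration of the kind of edge-detection step mentioned for the "snap-to-lumen" editing above, the sketch below searches radially outward from a seed point inside the lumen and moves each contour point to the location of the strongest intensity gradient within a maximum radius. The radial-ray strategy, the seed point, and the parameters are assumptions made for the example; the actual tool may use different edge detection, the user-drawn box, and additional constraints.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def snap_contour_to_edges(cross_section, seed_rc, n_rays=64, max_radius=25.0):
    """Return contour points (row, col) snapped to the strongest edge along radial rays.

    cross_section: 2D HU image of the vessel cross-section (the region of interest).
    seed_rc:       (row, col) seed near the lumen center inside the region of interest.
    """
    gy, gx = np.gradient(cross_section.astype(float))
    edge_strength = np.hypot(gy, gx)          # simple gradient-magnitude edge measure

    contour = []
    radii = np.linspace(1.0, max_radius, 60)
    for angle in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        # Sample the edge strength along one ray leaving the seed point.
        rows = seed_rc[0] + radii * np.sin(angle)
        cols = seed_rc[1] + radii * np.cos(angle)
        profile = map_coordinates(edge_strength, np.vstack([rows, cols]), order=1)
        k = int(np.argmax(profile))           # strongest gradient taken as the likely boundary
        contour.append((rows[k], cols[k]))
    return np.array(contour)
```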
In some embodiments, the graphical user interface system is configured to annotate the vessel view with an ischemia view. In some embodiments, the graphical user interface is configured to allow the user to rotate the vessel 180 degrees or 360 degrees in order to display the vessel and the annotated plaque build-up views from different angles. From this view, the user can manually determine the stent length and diameter for addressing the stenosis; in some embodiments, the system is configured to analyze the medical image information to determine the recommended stent length and diameter, and to display the proposed stent for implantation in the graphical user interface to illustrate to the user how the stent would address the stenosis within the identified area of the vessel. In some embodiments, the systems, methods, and devices disclosed herein can be applied to other areas of the body and/or other vessels and/or organs of a subject, whether the subject is human or another mammal.
Illustrative Example
One of the main uses of such systems can be to determine the presence of plaque in vessels, for example but not limited to coronary vessels. Plaque type can be visualized based on Hounsfield Unit density for enhanced readability for the user. Embodiments of the system also provide quantification of variables related to stenosis and plaque composition at both the vessel and lesion levels for the segmented coronary artery. In some embodiments, the system is configured as a web-based software application that is intended to be used by trained medical professionals as an interactive tool for viewing and analyzing cardiac CT data for determining the presence and extent of coronary plaques (i.e., atherosclerosis) and stenosis in patients who underwent Coronary Computed Tomography Angiography (CCTA) for evaluation of coronary artery disease (CAD), or suspected CAD. The system post-processes CT images obtained using a CT scanner. The system is configured to generate a user interface that provides tools and functionality for the characterization, measurement, and visualization of features of the coronary arteries. Features of embodiments of the system can include, for example: centerline and lumen/vessel extraction; plaque composition overlay; user identification of stenosis; vessel statistics calculated in real time, including vessel length, lesion length, vessel volume, lumen volume, plaque volume (non-calcified, calcified, low-density-non-calcified, and total), maximum remodeling index, and area/diameter stenosis (e.g., a percentage); two-dimensional (2D) visualization of multi-planar reformatted vessel and cross-sectional views; an interactive three-dimensional (3D) rendered coronary artery tree; visualization of a cartoon artery tree that corresponds to actual vessels that appear in the CT images; semi-automatic vessel segmentation that is user modifiable; and user identification of stents and Chronic Total Occlusion (CTO). In an embodiment, the system uses 18 coronary segments within the coronary vascular tree (e.g., in accordance with the guidelines of the Society of Cardiovascular Computed Tomography).
The coronary segment labels include:
pRCA—proximal right coronary artery
mRCA—mid right coronary artery
dRCA—distal right coronary artery
R-PDA—right posterior descending artery
LM—left main artery
pLAD—proximal left anterior descending artery
mLAD—mid left anterior descending artery
dLAD—distal left anterior descending artery
D1—first diagonal
D2—second diagonal
pCx—proximal left circumflex artery
OM1—first obtuse marginal
LCx—distal left circumflex
OM2—second obtuse marginal
L-PDA—left posterior descending artery
R-PLB—right posterior lateral branch
RI—ramus intermedius artery
L-PLB—left posterior lateral branch
Other embodiments can include more, or fewer, coronary segment labels. The coronary segments present in an individual patient depend on whether the patient is right or left coronary dominant. Some segments are only present when there is right coronary dominance, and some only when there is left coronary dominance. Therefore, in many, if not all, instances, no single patient may have all 18 segments. The system will account for most known variants. In one example of the performance of the system, CT scans were processed by the system, and the resulting data was compared to ground truth results produced by expert readers. Pearson Correlation Coefficients and Bland-Altman Agreements between the system's results and the expert reader results are shown in the table below:
Output | Pearson Correlation | Bland-Altman Agreement
Lumen Volume | 0.91 | 96%
Vessel Volume | 0.93 | 97%
Total Plaque Volume | 0.85 | 95%
Calcified Plaque Volume | 0.94 | 95%
Non-Calcified Plaque Volume | 0.74 | 95%
Low-Density-Non-Calcified Plaque Volume | 0.53 | 97%
FIGS. 6A-9N illustrate an embodiment of the user interface of the system, and show examples of panels, graphics, tools, representations of CT images, and characteristics, structure, and statistics related to coronary vessels found in a set of CT images. In various embodiments, the user interface is flexible in that it can be configured to show various arrangements of the panels, images, graphic representations of CT images, and characteristics, structure, and statistics, for example, based on an analyst's preference. The system has multiple menus and navigational tools to assist in visualizing the coronary arteries. Keyboard and mouse shortcuts can also be used to navigate through the images and information associated with a set of CT images for a patient. FIG. 6A illustrates an example of a user interface 600 that can be generated and displayed on a CT image analysis system described herein, the user interface 600 having multiple panels (views) that can show various corresponding views of a patient's arteries and information about the arteries. In an embodiment, the user interface 600 shown in FIG. 6A can be a starting point for analysis of the patient's coronary arteries, and is sometimes referred to herein as the "Study Page" (or the Study Page 600). In some embodiments, the Study Page can include a number of panels that can be arranged in different positions on the user interface 600, for example, based on the preference of the analyst. In various instances of the user interface 600, certain panels of the possible panels that may be displayed can be selected to be displayed (e.g., based on a user input). The example of the Study Page 600 shown in FIG. 6A includes a first panel 601 (also shown in the circled "2") including an artery tree 602 comprising a three-dimensional (3D) representation of coronary vessels based on the CT images and depicting coronary vessels identified in the CT images, and further depicting respective segment labels.
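Returning briefly to the performance comparison above, the Pearson correlation and Bland-Altman agreement figures can in principle be computed from paired measurements (system versus expert reader) as in the following NumPy sketch. The 95% limits-of-agreement convention and the interpretation of "agreement" as the percentage of pairs falling inside those limits are assumptions of the example, not a statement of how the reported figures were actually derived.

```python
import numpy as np

def pearson_and_bland_altman(system_values, reader_values):
    """Compare paired measurements, e.g., per-patient lumen volumes (system vs. expert reader)."""
    a = np.asarray(system_values, dtype=float)
    b = np.asarray(reader_values, dtype=float)

    pearson_r = np.corrcoef(a, b)[0, 1]

    # Bland-Altman: bias and 95% limits of agreement of the pairwise differences.
    diffs = a - b
    bias = diffs.mean()
    spread = 1.96 * diffs.std(ddof=1)
    loa_low, loa_high = bias - spread, bias + spread
    # Percentage of pairs whose difference falls inside the limits of agreement.
    agreement_pct = 100.0 * np.mean((diffs >= loa_low) & (diffs <= loa_high))
    return pearson_r, (bias, loa_low, loa_high), agreement_pct
```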
While processing the CT images, the system can determine the extent of the coronary vessels and generate the artery tree. Structure that is not part of the coronary vessels (e.g., heart tissue and other tissue around the coronary vessels) is not included in the artery tree 602. Accordingly, the artery tree 602 in FIG. 6A does not include any heart tissue between the branches (vessels) 603 of the artery tree 602, allowing visualization of all portions of the artery tree 602 without them being obscured by heart tissue. This Study Page 600 example also includes a second panel 604 (also shown in the circled "1a") illustrating at least a portion of the selected coronary vessel in at least one straightened multiplanar reformat (SMPR) vessel view. An SMPR view is an elevation view of a vessel at a certain rotational aspect. When multiple SMPR views are displayed in the second panel 604, each view can be at a different rotational aspect, for example, at any whole degree, or at a half degree, from 0° to 359.5°, where 360° is the same view as 0°. In this example, the second panel 604 includes four straightened multiplanar vessels 604a-d displayed in elevation views at relative rotations of 0°, 22.5°, 45°, and 67.5°, with the rotation indicated at the upper portion of each straightened multiplanar vessel. In some embodiments, the rotation of each view can be selected by the user, for example, at a different relative rotation interval. The user interface includes the rotation tool 605 that is configured to receive an input from a user and can be used to adjust the rotation of an SMPR view (e.g., by one or more degrees). One or more graphics related to the vessel shown in the SMPR view can also be displayed, for example, a graphic representing the lumen of the vessel, a graphic representing the vessel wall, and/or a graphic representing plaque. This Study Page 600 example also includes the third panel 606 (also indicated by the circled "1c"), which is configured to show a cross-sectional view of a vessel 606a generated based on a CT image in the set of CT images of the patient. The cross-sectional view corresponds to the vessel shown in the SMPR view. The cross-sectional view also corresponds to a location indicated by a user (e.g., with a pointing device) on a vessel in the SMPR view. The user interface is configured such that a selection of a particular location along the coronary vessel in the second panel 604 displays the associated CT image in a cross-sectional view in the third panel 606. In this example, a graphic 607 is displayed on the second panel 604 and the third panel 606 indicating the extent of plaque in the vessel. This Study Page 600 example also includes a fourth panel 608 that includes anatomical plane views of the selected coronary vessel. In this embodiment, the Study Page 600 includes an axial plane view 608a (also indicated by the circled "3a"), a coronal plane view 608b (also indicated by the circled "3b"), and a sagittal plane view 608c (also indicated by the circled "3c"). The axial plane view is a transverse or "top" view. The coronal plane view is a front view. The sagittal plane view is a side view. The user interface is configured to display corresponding views of the selected coronary vessel, for example, views of the selected coronary vessel at a location on the coronary vessel selected by the user (e.g., on one of the SMPR views in the second panel 604).
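The correlated views can be driven by a single shared location: a row selected on the straightened vessel maps back to a centerline point in volume coordinates, which in turn determines the cross-section to display and the axial, coronal, and sagittal slice indices. The sketch below shows this bookkeeping under the simplifying assumption that SMPR rows correspond one-to-one with centerline points; it is illustrative only and does not describe the system's internal coordinate handling.

```python
import numpy as np

def correlate_views(centerline, smpr_row):
    """Map a selected SMPR row to the slice indices used by the other panels.

    centerline: (N, 3) array of centerline points in voxel coordinates (z, y, x),
                assumed to correspond one-to-one with SMPR rows.
    smpr_row:   index of the location selected on the straightened vessel view.
    """
    row = int(np.clip(smpr_row, 0, len(centerline) - 1))
    z, y, x = centerline[row]
    return {
        "cross_section_index": row,            # which perpendicular cross-section to display
        "axial_slice": int(round(z)),          # axial view: fixed z
        "coronal_slice": int(round(y)),        # coronal view: fixed y
        "sagittal_slice": int(round(x)),       # sagittal view: fixed x
        "marker_position": (float(z), float(y), float(x)),  # marker location for the 3D tree
    }
```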
FIG. 6B illustrates another example of the Study Page (user interface) 600 that can be generated and displayed on the system, the user interface 600 having multiple panels that can show various corresponding views of a patient's arteries. In this example, the user interface 600 displays a 3D artery tree in the first panel 601, the cross-sectional view in the third panel 606, and axial, coronal, and sagittal plane views in the fourth panel 608. Instead of the second panel 604 shown in FIG. 6A, the user interface 600 includes a fifth panel 609 showing curved multiplanar reformat (CMPR) vessel views of a selected coronary vessel. The fifth panel 609 can be configured to show one or more CMPR views. In this example, two CMPR views were generated and are displayed: a first CMPR view 609a at 0° and a second CMPR view 609b at 90°. The CMPR views can be generated and displayed at various relative rotations, for example, from 0° to 359.5°. The coronary vessel shown in the CMPR view corresponds to the selected vessel, and corresponds to the vessel displayed in the other panels. When a location on the vessel in one panel is selected (e.g., the CMPR view), the views in the other panels (e.g., the cross-section, axial, sagittal, and coronal views) can be automatically updated to also show the vessel at the selected location in the respective views, thus greatly enhancing the information presented to a user and increasing the efficiency of the analysis. FIGS. 6C, 6D, and 6E illustrate certain details of a multiplanar reformat (MPR) vessel view in the second panel, and certain functionality associated with this view. After a user verifies the accuracy of the segmentation of the coronary artery tree in panel 602, they can proceed to interact with the MPR views, where edits can be made to the individual vessel segments (e.g., the vessel walls, the lumen, etc.). In the SMPR and CMPR views, the vessel can be rotated in increments (e.g., 22.5°) by using the arrow icon 605, illustrated in FIGS. 6C and 6D. Alternatively, the vessel can be rotated continuously in 1-degree increments through 360 degrees by using the rotation command 610, as illustrated in FIG. 6E. The vessels can also be rotated by pressing the COMMAND or CTRL button and left-clicking and dragging the mouse on the user interface 600. FIG. 6F illustrates additional details of the three-dimensional (3D) rendering of the coronary artery tree 602 on the first panel 601 that allows a user to view the vessels and modify the labels of a vessel. FIG. 6G illustrates shortcut commands for the coronary artery tree 602, axial view 608a, sagittal view 608b, and coronal view 608c. In panel 601 shown in FIG. 6F, a user can rotate the artery tree as well as zoom in and out of the 3D rendering using commands selected in the user interface illustrated in FIG. 6G. Clicking on a vessel will turn it yellow, which indicates that it is the vessel currently being reviewed. In this view, users can rename or delete a vessel by right-clicking on the vessel name, which opens panel 611, which is configured to receive an input from a user to rename the vessel. Panel 601 also includes a control that can be activated to turn the displayed labels "on" or "off." FIG. 6H further illustrates panel 608 of the user interface for viewing DICOM images in three anatomical planes: axial, coronal, and sagittal. FIG. 6I illustrates panel 606 showing a cross-sectional view of a vessel. The scroll, zoom in/out, and pan commands can also be used on these views.
FIGS. 6J and 6K illustrate certain aspects of the toolbar 612 and the menu navigation functionality of the user interface 600. FIG. 6J illustrates a toolbar of the user interface for navigating the vessels. The toolbar 612 includes a button 612a, 612b, etc., for each of the vessels displayed on the screen. The user interface 600 is configured to display the buttons 612a-n to indicate various information to the user. In an example, when a vessel is selected, the corresponding button is highlighted (e.g., displayed in yellow), for example, button 612c. In another example, a button being dark gray with white lettering indicates that a vessel is available for analysis. In an example, a button 612d that is shaded black means a vessel could not be analyzed by the software because it is either not anatomically present or there are too many artifacts. A button 612e that is displayed as gray with a check mark indicates that the vessel has been reviewed. FIG. 6K illustrates a view of the user interface 600 with an expanded menu to view all the series (of images) that are available for review and analysis. If the system has provided more than one of the same vessel segment from different series of images for analysis, the user interface is configured to receive a user input to select the desired series for analysis. In an example, an input can be received indicating a series for review by a selection of one of the radio buttons 613 for the series of interest. A radio button will change from gray to purple when it is selected for review. In an embodiment, the software, by default, selects the two series of highest diagnostic quality for analysis; however, all series are available for review. The user can use clinical judgment to determine if the series selected by the system is of the diagnostic quality required for the analysis, and should select a different series for analysis if desired. The series selected by the system is intended to improve workflow by prioritizing diagnostic quality images. The system is not intended to replace the user's review of all series and selection of a diagnostic quality image within a study. Users can send any series illustrated in FIG. 6K for the system to suggest vessel segmentations by hovering the mouse over the series and selecting an "Analyze" button 614 as illustrated in FIG. 6L. FIG. 6M illustrates a panel that can be displayed on the user interface 600 to add a new vessel on the image, according to one embodiment. To add a new vessel on the image, the user interface 600 can receive a user input via a "+Add Vessel" button on the toolbar 612. The user interface will then display a "Create Mode" button 615 in the fourth panel 608 on the axial, coronal, and sagittal views. Then the vessel can be added on the image by scrolling and clicking the left mouse button to create multiple dots (e.g., green dots). As the new vessel is being added, it will preview as a new vessel in the MPR, cross-section, and 3D artery tree views. The user interface is configured to receive a "Done" command to indicate that adding the vessel has been completed. Then, to segment the vessels utilizing the system's semi-automatic segmentation tool, click "Analyze" on the toolbar, and the user interface displays a suggested segmentation for review and modification. The name of the vessel can be chosen by selecting "New" in the 3D artery tree view in the first panel 601, which activates the name panel 611; the name of the vessel can then be selected from panel 611, which stores the new vessel and its name.
In an embodiment, if the software is unable to identify the vessel which has been added by the user, it will return straight vessel lines connecting the user-added green dots, and the user can adjust the centerline. The pop-up menu 611 of the user interface allows new vessels to be identified and named according to a standard format quickly and consistently. FIG. 7A illustrates an example of an editing toolbar 714 that includes editing tools which allow users to modify and improve the accuracy of the findings that result from processing CT scans with a machine learning algorithm and from the subsequent processing, by an analyst, of the CT scans and the information generated by the machine learning algorithm. In some embodiments, the user interface includes editing tools that can be used to modify and improve the accuracy of the findings. In some embodiments, the editing tools are located on the left-hand side of the user interface, as shown in FIG. 7A. The following is a listing and description of the available editing tools. Hovering over each button (icon) will display the name of each tool. These tools can be activated and deactivated by clicking on them. If the color of a tool is gray, it is deactivated. If the software has identified any of these characteristics in the vessel, the annotations will already be on the image when the tool is activated. The editing tools in the toolbar can include one or more of the following tools: Lumen Wall 701, Snap to Vessel Wall 702, Vessel Wall 703, Snap to Lumen Wall 704, Segments 705, Stenosis 706, Plaque Overlay 707, Centerline 708, Chronic Total Occlusion (CTO) 709, Stent 710, Exclude 711, Tracker 712, and Distance 713. The user interface 600 is configured to activate each of these tools by receiving a user selection on the respective tool icon (shown in the table below and in FIG. 7A) and is configured to provide the functionality described in the Editing Tools Description Table below:
Editing Tools Description Table
L | Lumen Wall | Users can adjust or draw new lumen wall contours to improve the accuracy of the location and measurements of the lumen.
Snap to Vessel Wall | Users can drag a shaded area and release it in order to snap the lumen wall to the vessel wall for healthy vessel areas.
V | Vessel Wall | Users can adjust or draw new vessel wall contours to refine the exterior of the vessel wall.
Snap to Lumen Wall | Users can drag a shaded area and release it in order to snap the vessel wall to the lumen wall for healthy vessel areas.
S | Segments | Users can add segment markers to define the boundaries of each of the 18 coronary segments. New or already existing markers can be dragged up and down to adjust to the exact segment boundaries.
E | Stenosis | This tool consists of 5 markers that allow users to mark regions of stenosis on the vessel. Users can add new stenosis markers, and new or already existing markers can be dragged up/down.
P | Plaque Overlay | This tool overlays the SMPR and the cross-section views with colorized areas of plaque based upon the plaque's Hounsfield attenuation.
C | Centerline | Users can adjust the centerline of the vessel in the CMPR or cross-section view. Adjustments will be propagated to the SMPR view.
O | CTO | The Chronic Total Occlusion tool consists of two markers that identify the start and end of a section of an artery that is totally occluded. Multiple CTOs can be added and dragged to the area of interest.
N | Stent | The Stent tool allows users to identify the presence of stent(s) in the coronary arteries. Users can add stent markers and drag existing markers up or down to the exact stent boundaries.
X | Exclude | By using this tool, sections of a vessel can be removed from the final calculations/analysis. Removal of these sections is often due to the presence of artifacts, usually due to motion or misalignment issues, among others.
T | Tracker | The Tracker orients and allows users to correlate the MPR, cross-section, axial, coronal, sagittal, and 3D artery tree views.
D | Distance | The tool is used on the MPR, cross-section, axial, coronal, or sagittal views to measure distances between points. The tool provides accurate readings in millimeters, allowing for quick review and estimation of areas of interest.
FIGS. 7B and 7C illustrate certain functionality of the Tracker tool. The Tracker tool 712 orients and allows users to correlate the views shown in the various panels of the user interface 600, for example, the SMPR, CMPR, cross-section, axial, coronal, sagittal, and 3D artery tree views. To activate it, the tracker icon is selected on the editing toolbar. When the Tracker tool 712 is activated, the user interface generates and displays a line 616 (e.g., a red line) on the SMPR or CMPR view. The system generates on the user interface a corresponding (red) disc 617, which is displayed on the 3D artery tree in the first panel 601 at a location corresponding to the line 616. The system also generates on the user interface corresponding (red) dots 618, which are displayed on the axial, sagittal, and coronal views in the fourth panel 608 at locations corresponding to the line 616. The line 616, disc 617, and dots 618 are location indicators all referencing the same location in the different views, such that scrolling any of the trackers up and down will also result in the same movement of the location indicator in the other views. Also, the user interface 600 displays the cross-sectional image in panel 606 corresponding to the location indicated by the location indicators. FIGS. 7D and 7E illustrate certain functionality of the vessel and lumen wall tools, which are used to modify the lumen and vessel wall contours. The Lumen Wall tool 701 and the Vessel Wall tool 703 are configured to modify the lumen and vessel walls (also referred to herein as contours, boundaries, or features) that were previously determined for a vessel (e.g., determined by processing the CT images using a machine learning process). The contours are used by the system in determining measurements that are output or displayed. By interacting with the contours generated by the system using these tools, a user can refine the accuracy of the location of the contours and of any measurements that are derived from those contours. These tools can be used in the SMPR and cross-section views. The tools are activated by selecting the vessel and lumen icons 701, 703 on the editing toolbar. The vessel wall 619 will be displayed in the MPR view and the cross-section view in a graphical "trace" overlay in one color (e.g., yellow). The lumen wall 629 will be displayed in a graphical "trace" overlay in a different color (e.g., purple). In an embodiment, the user interface is configured to refine the contours through interactions with a user. For example, to refine the contours, the user can hover above a contour with a pointing device (e.g., mouse, stylus, finger) so that the contour is highlighted, click on the contour for the desired vessel or lumen wall, and drag the displayed trace to a different location, setting a new boundary. The user interface 600 is configured to automatically save any changes to these tracings.
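One measurement that can be derived from the lumen contour is its cross-sectional area, which can be recomputed from the contour polygon and the in-plane pixel spacing, for example with the shoelace formula as sketched below. The closed-polygon assumption and the (row, col) coordinate convention are made for the example only and do not describe the system's actual measurement code.

```python
import numpy as np

def lumen_area_mm2(contour_rc, pixel_spacing_mm):
    """Area (mm^2) enclosed by a closed contour given as (row, col) pixel coordinates."""
    contour = np.asarray(contour_rc, dtype=float)
    # Convert pixel indices to millimeters using the DICOM PixelSpacing (row, col).
    y = contour[:, 0] * pixel_spacing_mm[0]
    x = contour[:, 1] * pixel_spacing_mm[1]
    # Shoelace formula; np.roll pairs the last vertex with the first to close the polygon.
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# An effective lumen diameter can then be derived as 2 * sqrt(area / pi), if needed.
```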
The system re-calculates any measurements derived from the changed contours in real time, or near real time. Also, the changes made in one panel on one view are displayed correspondingly in the other views/panels. FIG. 7F illustrates the lumen wall button 701 and the snap to vessel wall button 702 (left) and the vessel wall button 703 and the snap to lumen wall button 704 (right) of the user interface 600, which can be used to activate the Lumen Wall/Snap to Vessel Wall tools 701, 702 and the Vessel Wall/Snap to Lumen Wall tools 703, 704, respectively. The user interface provides these tools to modify lumen and vessel wall contours that were previously determined. The Snap to Vessel/Lumen Wall tools are used to easily and quickly close the gap between the lumen and vessel wall contours, that is, to move a trace of the lumen contour and a trace of the vessel contour to be the same, or substantially the same, saving interactive editing time. The user interface 600 is configured to activate these tools when a user hovers over the tools with a pointing device, which reveals the snap-to buttons. For example, hovering over the Lumen Wall button 701 reveals the Snap to Vessel button 702 to the right side of the Lumen Wall button, and hovering over the Vessel Wall button 703 reveals the Snap to Lumen Wall button 704 beside the Vessel Wall button 703. A button is selected to activate the desired tool. In reference to FIG. 7G, a pointing device can be used to click at a first point 620 and drag along the intended part of the vessel to edit to a second point 621, and an area 622 will appear indicating where the tool will run. Once the end of the desired area 622 is drawn, releasing the selection will snap the lumen and vessel walls together. FIG. 7H illustrates an example of the second panel 602 that can be displayed while using the Segment tool 705, which allows for marking the boundaries between individual coronary segments on the MPR. The user interface 600 is configured such that when the Segment tool 705 is selected, lines (e.g., lines 623, 624) appear on the vessel image in the second panel 602 on the vessels in the SMPR view. The lines indicate segment boundaries that were determined by the system. The names are displayed in icons 625, 626 adjacent to the respective lines 623, 624. To edit the name of a segment, click on an icon 625, 626 and label it appropriately using the name panel 611, illustrated in FIG. 7I. A segment can also be deleted, for example, by selecting a trashcan icon. The lines 623, 624 can be moved up and down to define the segment of interest. If a segment is missing, the user can add a new segment using a segment addition button, and label it using the labeling feature in the segment labeling pop-up menu 611.
These markers are defined as:
R1: Nearest proximal normal slice to the stenosis/lesion
P: Most proximal abnormal slice of the stenosis/lesion
O: Slice with the maximum occlusion
D: Most distal abnormal slice of the stenosis/lesion
R2: Nearest distal normal slice to the stenosis/lesion
In an embodiment, there are two ways to add stenosis markers to the multiplanar view (straightened and curved). After selecting the stenosis tool 706, a stenosis can be added by activating the stenosis button shown in FIG. 7K or FIG. 7L. To drop 5 evenly spaced stenosis markers: (i) click on the Stenosis "+" button (FIG. 7K); (ii) a series of 5 evenly spaced yellow lines will appear on the vessel, and the user must edit these markers to the applicable positions; (iii) move all 5 markers at the same time by clicking inside the highlighted area encompassed by the markers and dragging them up/down; (iv) move the individual markers by clicking on the individual yellow lines or tags and moving them up and down; (v) to delete a stenosis, click on the red trashcan icon. To drop stenosis markers based on the user-edited lumen and vessel wall contours, click on the stenosis button (see FIG. 7L). A series of 5 yellow lines will appear on the vessel; the positions are based on the user-edited contours. The user interface 600 provides functionality for a user to edit the stenosis markers, e.g., to move the stenosis markers. FIG. 7J illustrates the stenosis markers R1, P, O, D, and R2 placed on vessels in an SMPR view. FIG. 7M illustrates the markers R1, P, O, D, and R2 placed on vessels in a CMPR view. FIG. 7N illustrates an example of a panel that can be displayed while using the Plaque Overlay tool 707 of the user interface. In an embodiment, and in reference to FIG. 7N, "Plaque" is categorized as low-density-non-calcified plaque (LD-NCP) 701, non-calcified plaque (NCP) 632, or calcified plaque (CP) 633. Selecting the Plaque Overlay tool 707 on the editing toolbar activates the tool. When activated, the Plaque Overlay tool 707 overlays different colors on the vessels in the SMPR view in the second panel 604 and in the cross-section view in the third panel 606 (see, for example, FIG. 7R), indicating areas of plaque based on Hounsfield Unit (HU) density. In addition, a legend opens in the cross-section view mapping plaque type to plaque overlay color, as illustrated in FIGS. 7O and 7Q. Users can select different HU ranges for the three different types of plaque by clicking on the "Edit Thresholds" button located in the top right corner of the cross-section view, as illustrated in FIG. 7P. In one embodiment, plaque thresholds default to the values shown in the table below:
Plaque Type | Hounsfield Unit (HU)
LD-NCP | −189 to 30
NCP | −189 to 350
CP | 350 to 2500
The default values can be revised, if desired, for example, using the Plaque Threshold interface shown in FIG. 7Q. Although default values are provided, users can select different plaque thresholds based on their clinical judgment. Users can use the cross-section view of the third panel 606, illustrated in FIG. 7R, to further examine areas of interest. Users can also view the selected plaque thresholds in a vessel statistics panel of the user interface 600, illustrated in FIG. 7S. The Centerline tool 708 allows users to adjust the center of the lumen. Changing a center point (of the centerline) may change the lumen and vessel wall and the plaque quantification, if present. The Centerline tool 708 is activated by selecting it on the user interface 600.
A line 635 (e.g., a yellow line) will appear on the CMPR view 609 and a point 634 (e.g., a yellow point) will appear in the cross-section view on the third panel 606. The centerline can be adjusted as necessary by clicking and dragging the line/point. Any changes made in the CMPR view will be reflected in the cross-section view, and vice versa. The user interface 600 provides several ways to extend the centerline of an existing vessel. For example, a user can extend the centerline by: (1) right-clicking on the dot 634 of the delineated vessel on the axial, coronal, or sagittal view (see FIG. 7U); (2) selecting "Extend from Start" or "Extend from End" (see FIG. 7U), after which the view will jump to the start or end of the vessel; (3) adding (green) dots to extend the vessel (see FIG. 7V); and (4) when finished, selecting the (blue) check mark button (to cancel the extension, select the (red) "x" button; see, for example, FIG. 7V). The user interface then extends the vessel according to the changes made by the user. A user can then manually edit the lumen and vessel walls on the SMPR or cross-section views (see, for example, FIG. 7W). If the user interface is unable to identify the vessel section which has been added by the user, it will return straight vessel lines connecting the user-added dots. The user can then adjust the centerline. The user interface 600 also provides a Chronic Total Occlusion (CTO) tool 709 to identify portions of an artery with a chronic total occlusion (CTO), that is, a portion of an artery with 100% stenosis and no detectable blood flow. Since it is likely to contain a large amount of thrombus, the plaque within the CTO is not included in the overall plaque quantification. To activate, click on the CTO tool 709 on the editing toolbar 612. To add a CTO, click on the CTO "+" button on the user interface. Two lines (markers) 636, 637 will appear on the MPR view in the second panel 604, as illustrated in FIG. 7X, indicating the portion of the vessel with the CTO. The markers 636, 637 can be moved to adjust the extent of the CTO. If more than one CTO is present, additional CTOs can be added by again activating the CTO "+" button on the user interface. A CTO can also be deleted, if necessary. The location of the CTO is stored. In addition, portions of the vessel that are within the designated CTO are not included in the overall plaque calculation, and the plaque quantification determination is re-calculated as necessary after CTOs are identified. The user interface 600 also provides a Stent tool 710 to indicate where in a vessel a stent exists. The Stent tool is activated by a user selection of the Stent tool 710 on the toolbar 612. To add a stent, click on the Stent "+" button provided on the user interface. Two lines 638, 639 (e.g., purple lines) will appear on the MPR view, as illustrated in FIG. 7Y, and the lines 638, 639 can be moved to indicate the extent of the stent by clicking on the individual lines 638, 639 and moving them up and down along the vessel to the ends of the stent. Overlap of the stent markers with the CTO, Exclusion, or Stenosis markers is not permitted by the user interface 600. A stent can also be deleted. The user interface 600 also provides an Exclude tool 711 that is configured to indicate a portion of a vessel to exclude from the analysis due to blurring caused by motion, contrast, misalignment, or other reasons. Excluding poor quality images will improve the overall quality of the results of the analysis for the non-excluded portions of the vessels.
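The effect of CTO and exclusion markers on the quantification can be pictured as masking intervals of the vessel out of the plaque totals, as in the following simplified sketch. The per-slice representation and the inclusive interval convention are assumptions of the example, not the system's data model.

```python
import numpy as np

def plaque_total_excluding_intervals(per_slice_plaque_mm3, excluded_intervals):
    """Total plaque volume with CTO/excluded slices removed from the calculation.

    per_slice_plaque_mm3: 1D array of plaque volume per centerline slice.
    excluded_intervals:   list of (start_index, end_index) pairs marked as CTO or excluded.
    """
    per_slice = np.asarray(per_slice_plaque_mm3, dtype=float)
    keep = np.ones(len(per_slice), dtype=bool)
    for start, end in excluded_intervals:
        keep[start:end + 1] = False   # slices inside a CTO or exclusion are not counted
    return float(per_slice[keep].sum())
```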
To exclude the top or bottom portion of a vessel, activate the segment tool705and the exclude tool711in the editing toolbar612.FIG.7Zillustrates the use of the exclusion tool to exclude a portion from the top of the vessel.FIG.7AAillustrates the use of the exclusion tool to exclude a bottom portion of the vessel. A first segment marker acts as the exclusion marker for the top portion of the vessel. The area enclosed by exclusion markers is excluded from all vessel statistic calculations. An area can be excluded by dragging the top segment marker to the bottom of the desired area of exclusion. The excluded area will be highlighted. Or the “End” marker can be dragged to the top of the desired area of exclusion. The excluded area will be highlighted, and a user can enter the reason for an exclusion in the user interface (seeFIG.7AC). To add a new exclusion to the center of the vessel, activate the exclude tool711on the editing toolbar612. Click on the Exclusion “+” button. A pop-up window on the user interface will appear for the reason of the exclusion (FIG.7AC), and the reason can be entered and it is stored in reference to the indicated excluded area. Two markers640,641will appear on the MPR as shown inFIG.7AB. Move both markers at the same time by clicking inside the highlighted area. The user can move the individual markers by clicking and dragging the lines640,641. The user interface600tracks the locations of the exclusion marker lines640,641(and previously defined features) and prohibits overlap of the area defined by the exclusion lines640,641with any previously indicated portions of the vessel having a CTO, stent or stenosis. The user interface600also is configured to delete a designated exclusion. Now referring toFIGS.7AD-7AG, the user interface600also provides a Distance tool713, which is used to measure the distance between two points on an image. It is a drag and drop ruler that captures precise measurements. The Distance tool works in the MPR, cross-section, axial, coronal, and sagittal views. To activate, click on the distance tool713on the editing toolbar612. Then, click and drag between the desired two points. A line642and measurement643will appear on the image displayed on the user interface600. Delete the measurement by right-clicking on the distance line642or measurement643and selecting “Remove the Distance” button644on the user interface600(seeFIG.7AF).FIG.7ADillustrates an example of measuring a distance of a straightened multiplanar vessel (SMPR).FIG.7AEillustrates an example of measuring the distance642of a curved multiplanar vessel (CMPR).FIG.7AFillustrates an example of measuring a distance642of a cross-section of the vessel.FIG.7AGillustrates an example of measuring the distance642on an Axial View of a patient's anatomy. An example of a vessel statistics panel of the user interface600is described in reference toFIGS.7AH-7AK.FIG.7AHillustrates a “vessel statistics” portion645of the user interface600(e.g., a button) of a panel which can be selected to display the vessel statistics panel646(or “tab”), illustrated inFIG.7AI.FIG.7AJillustrates certain functionality on the vessel statistics tab that allows a user to click through the details of multiple lesions.FIG.7AKfurther illustrates the vessel panel which the user can use to toggle between vessels. For example, users can hide the panel by clicking on the “X” on the top right hand side of the panel, illustrated inFIG.7AI. Statistics are shown at the per-vessel and per-lesion (if present) level, as indicated inFIG.7AJ. 
If more than one lesion is marked by the user, the user can click through each lesion's details. To view the statistics for each vessel, users can toggle between vessels on the vessel panel illustrated in FIG. 7AK. General information pertaining to length and volume is presented for the vessel and lesion (if present) in the vessel statistics panel 646, along with the plaque and stenosis information on a per-vessel and per-lesion level. Users may use the exclusion tool to exclude artifacts in the image that they do not want to be considered in the calculations. The following tables indicate certain statistics that are available for vessels, lesions, plaque, and stenosis.
VESSEL
Vessel Length (mm): Length of a linear coronary vessel.
Total Vessel Volume (mm3): The volume of consecutive slices of vessel contours.
Total Lumen Volume (mm3): The volume of consecutive slices of lumen contours.
LESION
Lesion Length (mm): Linear distance from the start of a coronary lesion to the end of a coronary lesion.
Vessel Volume (mm3): The volume of consecutive slices of vessel contours.
Lumen Volume (mm3): The volume of consecutive slices of lumen contours.
PLAQUE
Total Calcified Plaque Volume (mm3): Calcified plaque is defined as plaque in between the lumen and vessel wall with an attenuation of greater than 350 HU, or as defined by the user, and is reported in absolute measures by plaque volume. Calcified plaques are identified in each coronary artery ≥1.5 mm in mean vessel diameter.
Total Non-Calcified Plaque Volume (mm3): Non-calcified plaque is defined as plaque in between the lumen and vessel wall with an attenuation of less than or equal to 350 HU, or as defined by the user, and is reported in absolute measures by plaque volume. The total non-calcified plaque volume is the sum total of all non-calcified plaques identified in each coronary artery ≥1.5 mm in mean vessel diameter. The non-calcified plaque data reported is further broken down into low-density plaque, based on HU density thresholds.
Low-Density Non-Calcified Plaque Volume (mm3): Low-density-non-calcified plaque is defined as plaque in between the lumen and vessel wall with an attenuation of less than or equal to 30 HU, or as defined by the user, and is reported in absolute measures by plaque volume.
Total Plaque Volume (mm3): Plaque volume is defined as plaque in between the lumen and vessel wall, reported in absolute measures. The total plaque volume is the sum total of all plaque identified in each coronary artery ≥1.5 mm in mean vessel diameter, or wherever the user places the "End" marker.
STENOSIS
Remodeling Index: The mean vessel diameter at a denoted slice divided by the mean vessel diameter at a reference slice.
Greatest Diameter Stenosis (%): The deviation of the mean lumen diameter at the denoted slice from that at a reference slice, expressed as a percentage.
Greatest Area Stenosis (%): The deviation of the lumen area at the denoted slice from a reference area, expressed as a percentage.
A quantitative variable that is used in the system and displayed on various portions of the user interface 600, for example, in reference to low-density non-calcified plaque, non-calcified plaque, and calcified plaque, is the Hounsfield unit (HU). As is known, the Hounsfield Unit scale is a quantitative scale for describing radiodensity, and is frequently used in reference to CT scans as a way to characterize radiation attenuation, making it easier to define what a given finding may represent.
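The plaque volumes defined above reduce to counting voxels between the lumen and vessel wall whose HU value falls inside a given range and scaling the count by the voxel volume. The sketch below illustrates that quantification using the default thresholds from the earlier table (note that, with these ranges, low-density-non-calcified plaque is a subset of non-calcified plaque); the boolean plaque-mask input and the threshold dictionary are assumptions of the example, not the system's implementation.

```python
import numpy as np

DEFAULT_THRESHOLDS = {          # HU ranges from the default plaque threshold table
    "LD-NCP": (-189, 30),
    "NCP": (-189, 350),
    "CP": (350, 2500),
}

def plaque_volumes_mm3(hu_volume, plaque_mask, voxel_mm, thresholds=DEFAULT_THRESHOLDS):
    """Plaque volume per category, in mm^3.

    hu_volume:   3D array of HU values.
    plaque_mask: boolean 3D array marking voxels between the lumen and vessel wall.
    voxel_mm:    (z, row, col) voxel spacing in millimeters.
    """
    voxel_volume = float(np.prod(voxel_mm))
    plaque_hu = hu_volume[plaque_mask]
    volumes = {}
    for name, (low, high) in thresholds.items():
        in_range = (plaque_hu > low) & (plaque_hu <= high)
        volumes[name] = float(in_range.sum()) * voxel_volume
    volumes["Total"] = float(plaque_hu.size) * voxel_volume
    return volumes
```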
A Hounsfield Unit measurement is presented in reference to a quantitative scale. Examples of Hounsfield Unit measurements of certain materials are shown in the following table:
Material | HU
Air | −1000
Fat | −50
Distilled Water | 0
Soft Tissue | +40
Blood | +40 to 80
Calcified Plaques | 350 to 1000+
Bone | +1000
In an embodiment, information that the system determines relating to stenosis, atherosclerosis, and CAD-RADS details is included on panel 800 of the user interface 600, as illustrated in FIG. 8A. By default, the CAD-RADS score may be unselected, requiring the user to manually select the score on the CAD-RADS page. Hovering over the "#" icons causes the user interface 600 to provide more information about the selected output. To view more details about the stenosis, atherosclerosis, and CAD-RADS outputs, click the "View Details" button in the upper right of panel 800; this will navigate to the applicable details page. In an embodiment, in the center of a centerpiece page view of the user interface 600 there is a non-patient-specific rendition of a coronary artery tree 805 (a "cartoon artery tree" 805) broken into segments 805a-805r based on the SCCT coronary segmentation, as illustrated in panel 802 in FIG. 8C. All analyzed vessels are displayed in color according to the legend 806 based on the highest diameter stenosis within that vessel. Greyed-out segments/vessels in the cartoon artery tree 805, for example, segments 805q and 805r, were not anatomically available or not analyzed in the system (all segments may not exist in all patients). Per-territory and per-segment information can be viewed by clicking the territory above the tree (RCA, LM+LAD, etc.) using, for example, the user interface 600 selection buttons in panel 801, as illustrated in FIGS. 8B and 8C, or by selecting a segment 805a-805r within the cartoon coronary tree 805. Stenosis and atherosclerosis data displayed on the user interface in panel 807 will update accordingly as various segments are selected, as illustrated in FIG. 8D. FIG. 8E illustrates an example of a portion of the per-territory summary panel 807 of the user interface. FIG. 8F also illustrates an example of a portion of panel 807 showing the SMPR of a selected vessel and its associated statistics along the vessel at indicated locations (e.g., at locations indicated by a pointing device as it is moved along the SMPR visualization). That is, the user interface 600 is configured to provide plaque details and stenosis details in an SMPR visualization in panel 809 and in a pop-up panel 810 that displays information as the user interface receives location information along the displayed vessel from the user, e.g., via a pointing device. The presence of a chronic total occlusion (CTO) and/or a stent is indicated at the vessel segment level. For example, FIG. 8G illustrates the presence of a stent in the D1 segment. FIG. 8H indicates the presence of a CTO in the mRCA segment. Coronary dominance and any anomalies can be displayed below the coronary artery tree, as illustrated in FIG. 8I. The anomalies that were selected in the analysis can be displayed, for example, by "hovering" with a pointing device over the "details" button. If plaque thresholds were changed in the analysis, an alert can be displayed on the user interface, or on a generated report, indicating that the plaque thresholds were changed. When anomalies are present, the coronary vessel segment of the cartoon artery tree 805 associated with each anomaly will appear detached from the aorta, as illustrated in FIG. 8J.
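The legend-based coloring described above amounts to mapping each vessel's or segment's greatest diameter stenosis to a severity band. The sketch below uses CAD-RADS-style stenosis bands (0%, 1-24%, 25-49%, 50-69%, 70-99%, 100%) as an assumed example of such a mapping; the actual legend 806, its bands, and its colors are not specified here.

```python
def diameter_stenosis_pct(lumen_diameter_mm, reference_diameter_mm):
    """Percent deviation of the lumen diameter at a slice from the reference slice."""
    return 100.0 * (1.0 - lumen_diameter_mm / reference_diameter_mm)

def stenosis_category(greatest_diameter_stenosis_pct, has_cto=False):
    """Map a segment's greatest % diameter stenosis to an assumed severity band for display."""
    if has_cto or greatest_diameter_stenosis_pct >= 100:
        return "total occlusion (100%)"
    if greatest_diameter_stenosis_pct >= 70:
        return "severe (70-99%)"
    if greatest_diameter_stenosis_pct >= 50:
        return "moderate (50-69%)"
    if greatest_diameter_stenosis_pct >= 25:
        return "mild (25-49%)"
    if greatest_diameter_stenosis_pct >= 1:
        return "minimal (1-24%)"
    return "none (0%)"

# Example: a 1.2 mm minimum lumen diameter against a 3.0 mm reference gives 60%,
# which falls in the "moderate (50-69%)" band.
```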
In an embodiment, a textual summary of the analysis can also be displayed below the coronary tree, for example, as illustrated in the panel 811 in FIG. 8K. FIG. 9A illustrates an atherosclerosis panel 900 that can be displayed on the user interface, which displays a summary of atherosclerosis information based on the analysis. FIG. 9B illustrates the vessel selection panel, which can be used to select a vessel such that the summary of atherosclerosis information is displayed on a per-segment basis. The top section of the atherosclerosis panel 900 contains per-patient data, as illustrated in FIG. 9A. When a user "hovers" over the "Segments with Calcified Plaque" on panel 901, or hovers over the "Segments with Non-Calcified Plaque" in panel 902, the segment names with the applicable plaque are displayed. Below the patient-specific data, users may access per-vessel and per-segment atherosclerosis data by clicking on one of the vessel buttons, illustrated in FIG. 9B. FIG. 9C illustrates a panel 903 that can be generated and displayed on the user interface, which shows atherosclerosis information determined by the system on a per-segment basis. The presence of positive remodeling, the highest remodeling index, and the presence of Low-Density Non-Calcified Plaque are reported for each segment in the panel 903 illustrated in FIG. 9C. For example, plaque data can be displayed below on a per-segment basis, and plaque composition volumes can be displayed on a per-segment basis in the panel 903 illustrated in FIG. 9C. FIG. 9D illustrates a panel 904 that can be displayed on the user interface that contains stenosis per-patient data. The top section of the stenosis panel 904 contains per-patient data. Further details about each count can be displayed by hovering with a pointing device over the numbers, as illustrated in FIG. 9E. Vessels included in each territory are shown in the table below:

Vessel Territory: Segment Names
LM (Left Main Artery): LM
LAD (Left Anterior Descending): pLAD, mLAD, dLAD, D1, D2, RI
LCx (Left Circumflex Artery): pCx, LCx, OM1, OM2, L-PLB, L-PDA
RCA (Right Coronary Artery): pRCA, mRCA, dRCA, R-PLB, R-PDA

In an embodiment, a percentage Diameter Stenosis bar graph 906 can be generated and displayed in a panel 905 of the user interface, as illustrated in FIG. 9F. The percentage Diameter Stenosis bar graph 906 displays the greatest diameter stenosis in each segment. If a CTO has been marked on the segment, it will display as a 100% diameter stenosis. If more than one stenosis has been marked on a segment, the highest-value outputs are displayed by default and the user can click into each stenosis bar to view stenosis details and interrogate smaller stenoses (if present) within that segment. The user can also scroll through each cross-section by dragging the grey button in the center of an SMPR view of the vessel, and view the lumen diameter and % diameter stenosis at each cross-section at any selected location, as illustrated in FIG. 9G. FIG. 9H illustrates a panel showing categories of the one or more stenoses marked on the SMPR based on the analysis. Color can be used to enhance the displayed information. In an example, stenoses in the LM with ≥50% diameter stenosis are marked in red. As illustrated in a panel 907 of the user interface in FIG. 9I, for each segment's greatest percentage diameter stenosis, the minimum luminal diameter and the lumen diameter at the reference can be displayed when a pointing device is "hovered" above the graphical vessel cross-section representation, as illustrated in FIG. 9J.
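As a concrete illustration of the territory table and the percentage Diameter Stenosis bar graph described above, the following Python sketch maps territories to their segments and derives, per territory, the greatest diameter stenosis, with a marked CTO forced to 100% as stated. The data structure and function are hypothetical conveniences, not the patent's implementation.

```python
from typing import Dict, Iterable, List, Optional

# Territory-to-segment mapping from the table above.
TERRITORIES: Dict[str, List[str]] = {
    "LM":  ["LM"],
    "LAD": ["pLAD", "mLAD", "dLAD", "D1", "D2", "RI"],
    "LCx": ["pCx", "LCx", "OM1", "OM2", "L-PLB", "L-PDA"],
    "RCA": ["pRCA", "mRCA", "dRCA", "R-PLB", "R-PDA"],
}

def greatest_diameter_stenosis(per_segment: Dict[str, Optional[float]],
                               cto_segments: Iterable[str] = ()) -> Dict[str, Optional[float]]:
    """Per-territory greatest %DS for the bar graph; a marked CTO counts as 100%."""
    per_segment = dict(per_segment)
    for seg in cto_segments:
        per_segment[seg] = 100.0
    result: Dict[str, Optional[float]] = {}
    for territory, segments in TERRITORIES.items():
        values = [per_segment[s] for s in segments if per_segment.get(s) is not None]
        result[territory] = max(values) if values else None  # None -> "Not Analyzed"/"N/A"
    return result
```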
If a segment was not analyzed or is not anatomically present, the segment will be greyed out and will display "Not Analyzed". If a segment was analyzed but did not have any stenosis marked, the value will display "N/A". FIG. 9K illustrates a panel 908 of the user interface that indicates CAD-RADS score selection. The CAD-RADS panel displays the definitions of CAD-RADS as defined by "Coronary Artery Disease-Reporting and Data System (CAD-RADS) An Expert Consensus Document of SCCT, ACR and NASCI: Endorsed by the ACC". The user is in full control of selecting the CAD-RADS score. In an embodiment, no score will be suggested by the system. In another embodiment, a CAD-RADS score can be suggested. Once a CAD-RADS score is selected on this page, the score will display in both certain user interface panels and full text report pages. Once a CAD-RADS score is selected, the user has the option of selecting modifiers and the presentation of symptoms. Once a presentation is selected, the interpretation, further cardiac investigation, and management guidelines can be displayed to the user on the user interface, for example, as illustrated in the panel 909 illustrated in FIG. 9L. These guidelines reproduce the guidelines found in "Coronary Artery Disease-Reporting and Data System (CAD-RADS) An Expert Consensus Document of SCCT, ACR and NASCI: Endorsed by the ACC." FIGS. 9M and 9N illustrate tables that can be generated and displayed on a panel of the user interface, and/or included in a report. FIG. 9M illustrates quantitative stenosis and vessel outputs. FIG. 9N illustrates quantitative plaque outputs. In these quantitative tables, a user can view quantitative per-segment stenosis and atherosclerosis outputs from the system analysis. The quantitative stenosis and vessel outputs table (FIG. 9M) includes information for the evaluated arteries and segments. Totals are given for each vessel territory. Information can include, for example, length, vessel volume, lumen volume, total plaque volume, maximum diameter stenosis, maximum area stenosis, and highest remodeling index. The quantitative plaque outputs table (FIG. 9N) includes information for the evaluated arteries and segments. Information can include, for example, total plaque volume, total calcified plaque volume, non-calcified plaque volume, low-density non-calcified plaque volume, and total non-calcified plaque volume. The user is also able to download a PDF or CSV file of the quantitative outputs. Also available is a full text Report. The full text Report presents a textual summary of the atherosclerosis, stenosis, and CAD-RADS measures. The user can edit the report as desired. Once the user chooses to edit the report, the report will not update the CAD-RADS selection automatically. FIG. 10 is a flowchart illustrating a process 1000 for analyzing and displaying CT images and corresponding information. At block 1005, the process 1000 stores computer-executable instructions, a set of CT images of a patient's coronary vessels, vessel labels, and artery information associated with the set of CT images, including information of stenosis, plaque, and locations of segments of the coronary vessels. All of the steps of the process can be performed by embodiments of the system described herein, for example, on embodiments of the systems described in FIG. 13, for example, by one or more computer hardware processors, in communication with one or more non-transitory computer storage mediums, executing the computer-executable instructions stored on the one or more non-transitory computer storage mediums.
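Returning to the quantitative per-segment output tables (FIGS. 9M and 9N) and the downloadable CSV described above, the following Python sketch shows one possible shape for a per-segment output row and a CSV export. The field names follow the quantities listed in the description; the dataclass and function themselves are assumptions, not the system's actual schema.

```python
import csv
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class SegmentOutputs:
    """One row of the quantitative outputs tables (fields follow FIGS. 9M and 9N)."""
    territory: str
    segment: str
    length_mm: float
    vessel_volume_mm3: float
    lumen_volume_mm3: float
    total_plaque_volume_mm3: float
    calcified_plaque_volume_mm3: float
    non_calcified_plaque_volume_mm3: float
    low_density_non_calcified_plaque_volume_mm3: float
    max_diameter_stenosis_pct: float
    max_area_stenosis_pct: float
    highest_remodeling_index: float

def export_csv(rows: List[SegmentOutputs], path: str) -> None:
    """Write the per-segment quantitative outputs to a downloadable CSV file."""
    if not rows:
        return
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(rows[0]).keys()))
        writer.writeheader()
        for row in rows:
            writer.writerow(asdict(row))
```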
In various embodiments, the user interface can include one or more portions, or panels, that are configured to display one or more images, in various views (e.g., SMPR, CMPR, cross-sectional, axial, sagittal, coronal, etc.), related to the CT images of a patient's coronary arteries, a graphical representation of coronary arteries, features (e.g., a vessel wall, the lumen, the centerline, the stenosis, plaque, etc.) that have been extracted or revised by a machine learning algorithm or by an analyst, and information relating to the CT images that has been determined by the system, by an analyst, or by an analyst interacting with the system (e.g., measurements of features in the CT images). In various embodiments, panels of the user interface can be arranged differently than what is described herein and what is illustrated in the corresponding figures. A user can make an input to the user interface using a pointing device or a user's finger on a touchscreen. In an embodiment, the user interface can receive input by determining the selection of a button/icon/portion of the user interface. In an embodiment, the user interface can receive an input in a defined field of the user interface. At block 1010, the process 1000 can generate and display in a user interface a first panel including an artery tree comprising a three-dimensional (3D) representation of coronary vessels based on the CT images and depicting coronary vessels identified in the CT images, and depicting segment labels, the artery tree not including heart tissue between branches of the artery tree. An example of such an artery tree 602 is shown in panel 601 in FIG. 6A. In various embodiments, panel 601 can be positioned in locations of the user interface 600 other than what is shown in FIG. 6A. At block 1015, the process 1000 can receive a first input indicating a selection of a coronary vessel in the artery tree in the first panel. For example, the first input can be received by the user interface 600 of a vessel in the artery tree 602 in panel 601. At block 1020, in response to the first input, the process 1000 can generate and display on the user interface a second panel illustrating at least a portion of the selected coronary vessel in at least one straightened multiplanar vessel (SMPR) view. In an example, the SMPR view is displayed in panel 604 of FIG. 6A. At block 1025, the process 1000 can generate and display on the user interface a third panel showing a cross-sectional view of the selected coronary vessel, the cross-sectional view generated using one of the set of CT images of the selected coronary vessel. Locations along the at least one SMPR view are each associated with one of the CT images in the set of CT images such that a selection of a particular location along the coronary vessel in the at least one SMPR view displays the associated CT image in the cross-sectional view in the third panel. In an example, the cross-sectional view can be displayed in panel 606 as illustrated in FIG. 6A. At block 1035, the process 1000 can receive a second input on the user interface indicating a first location along the selected coronary artery in the at least one SMPR view. In an example, a user may use a pointing device to select a different portion of the vessel shown in the SMPR view in panel 604. At block 1030, the process 1000, in response to the second input, displays the CT image associated with the indicated location in the cross-sectional view in the third panel, panel 606.
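The association between positions along the SMPR view and the CT images that drive the cross-sectional panel could be held in a simple lookup structure. The Python sketch below is an assumption about one way to represent that association; the class and method names are not from the patent.

```python
from typing import Sequence

class SmprSliceIndex:
    """Associates positions along the straightened (SMPR) centerline with CT images,
    so that selecting a location in the SMPR panel updates the cross-sectional panel."""

    def __init__(self, centerline_positions_mm: Sequence[float], ct_image_ids: Sequence[str]):
        assert len(centerline_positions_mm) == len(ct_image_ids)
        self._positions = list(centerline_positions_mm)
        self._image_ids = list(ct_image_ids)

    def image_for_location(self, location_mm: float) -> str:
        """Return the CT image associated with the nearest sampled centerline position."""
        nearest = min(range(len(self._positions)),
                      key=lambda i: abs(self._positions[i] - location_mm))
        return self._image_ids[nearest]

# Usage: when the user interface receives a new location along the SMPR view,
# the cross-sectional panel is refreshed with index.image_for_location(selected_mm).
```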
That is, the cross-sectional view that corresponds to the first input is replaced by the cross-sectional view that corresponds to the second input on the SMPR view.

Normalization Device

In some instances, medical images processed and/or analyzed as described throughout this application can be normalized using a normalization device. As will be described in more detail in this section, the normalization device may comprise a device including a plurality of samples of known substances that can be placed in the medical image field of view so as to provide images of the known substances, which can serve as the basis for normalizing the medical images. In some instances, the normalization device allows for direct within-image comparisons between patient tissue and/or other substances (e.g., plaque) within the image and known substances within the normalization device. As mentioned briefly above, in some instances, medical imaging scanners may produce images with different scalable radiodensities for the same object. This, for example, can depend not only on the type of medical imaging scanner or equipment used but also on the scan parameters and/or environment of the particular day and/or time when the scan was taken. As a result, even if two different scans were taken of the same subject, the brightness and/or darkness of the resulting medical image may be different, which can result in less than accurate analysis results processed from that image. To account for such differences, in some embodiments, the normalization device comprising one or more known samples of known materials can be scanned together with the subject, and the resulting image of the one or more known elements can be used as a basis for translating, converting, and/or normalizing the resulting image. Normalizing the medical images that will be analyzed can be beneficial for several reasons. For example, medical images can be captured under a wide variety of conditions, all of which can affect the resulting medical images. In instances where the medical imager comprises a CT scanner, a number of different variables can affect the resulting image. Variable image acquisition parameters, for example, can affect the resulting image. Variable image acquisition parameters can comprise one or more of a kilovoltage (kV), kilovoltage peak (kVp), a milliamperage (mA), or a method of gating, among others. In some embodiments, methods of gating can include prospective axial triggering, retrospective ECG helical gating, and fast pitch helical, among others. Varying any of these parameters may produce slight differences in the resulting medical images, even if the same subject is scanned. Additionally, the type of reconstruction used to prepare the image after the scan may produce differences in medical images. Example types of reconstruction can include iterative reconstruction, non-iterative reconstruction, machine learning-based reconstruction, and other types of physics-based reconstruction, among others. FIGS. 11A-11D illustrate different images reconstructed using different reconstruction techniques. In particular, FIG. 11A illustrates a CT image reconstructed using filtered back projection, while FIG. 11B illustrates the same CT image reconstructed using iterative reconstruction. As shown, the two images appear slightly different.
The normalization device described below can be used to help account for these differences by providing a method for normalizing between the two. FIG. 11C illustrates a CT image reconstructed by using iterative reconstruction, while FIG. 11D illustrates the same image reconstructed using machine learning. Again, one can see that the images include slight differences, and the normalization device described herein can advantageously be useful in normalizing the images to account for these differences. As another example, various types of image capture technologies can be used to capture the medical images. In instances where the medical imager comprises a CT scanner, such image capture technologies may include a dual source scanner, a single source scanner, dual energy, monochromatic energy, spectral CT, photon counting, and different detector materials, among others. As before, images captured using different parameters may appear slightly different, even if the same subject is scanned. In addition to CT scanners, other types of medical imagers can also be used to capture medical images. These can include, for example, x-ray, ultrasound, echocardiography, intravascular ultrasound (IVUS), MR imaging, optical coherence tomography (OCT), nuclear medicine imaging, positron-emission tomography (PET), single photon emission computed tomography (SPECT), or near-field infrared spectroscopy (NIRS). Use of the normalization device can facilitate normalization of images such that images captured on these different imaging devices can be used in the methods and systems described herein. Additionally, new types of medical imaging technologies are currently being developed. Use of the normalization device can allow the methods and systems described herein to be used even with medical imaging technologies that are currently being developed or that will be developed in the future. Use of different or emerging medical imaging technologies can also cause slight differences between images. Another factor that can cause differences in medical images that can be accounted for using the normalization device can be use of different contrast agents during medical imaging. Various contrast agents currently exist, and still others are under development. Use of the normalization device can facilitate normalization of medical images regardless of the type of contrast agent used and even in instances where no contrast agent is used. These slight differences can, in some instances, negatively impact analysis of the image, especially where analysis of the image is performed by artificial intelligence or machine learning algorithms that were trained or developed using medical images captured under different conditions. In some embodiments, the methods and systems described throughout this application for analyzing medical images can include the use of artificial intelligence and/or machine learning algorithms. Such algorithms can be trained using medical images. In some embodiments, the medical images that are used to train these algorithms can include the normalization device such that the algorithms are trained based on normalized images. Then, by normalizing subsequent images by also including the normalization device in those images, the machine learning algorithms can be used to analyze medical images captured under a wide variety of parameters, such as those described above. In some embodiments, the normalization device described herein is distinguishable from a conventional phantom.
In some instances, conventional phantoms can be used to verify if a CT machine is operating in a correct manner. These conventional phantoms can be used periodically to verify the calibration of the CT machine. For example, in some instances, conventional phantoms can be used prior to each scan, weekly, monthly, yearly, or after maintenance on the CT machine to ensure proper functioning and calibration. Notably, however, the conventional phantoms do not provide a normalization function that allows for normalization of the resulting medical images across different machines, different parameters, different patients, etc. In some embodiments, the normalization device described herein can provide this functionality. The normalization device can allow for the normalization of CT data or other medical imaging data generated by various machine types and/or for normalization across different patients. For example, different CT devices manufactured by various manufacturers can produce different coloration and/or different gray scale images. In another example, some CT scanning devices can produce different coloration and/or different gray scale images as the CT scanning device ages or as the CT scanning device is used, or based on the environmental conditions surrounding the device during the scanning. In another example, patient tissue types or the like can cause different coloration and/or gray scale levels to appear differently in medical image scan data. Normalization of CT scan data can be important in order to ensure that processing of the CT scan data or other medical imaging data is consistent across various data sets generated by various machines or the same machines used at different times and/or across different patients. In some embodiments, the normalization device needs to be used each time a medical image scan is performed because scanning equipment can change over time and/or patients are different with each scan. In some embodiments, the normalization device is used in performing each and every scan of a patient in order to normalize the medical image data of each patient for the AI algorithm(s) used to analyze the medical image data of the patient. In other words, in some embodiments, the normalization device is used to normalize to each patient as opposed to each scanner. In some embodiments, the normalization device may have different known materials with different densities adjacent to each other (e.g., as described with reference to FIG. 12F). This configuration may address an issue present in some CT images where the density of a pixel influences the density of the adjacent pixels and that influence changes with the density of each individual pixel. One example of such an embodiment can include different contrast densities in the coronary lumen influencing the density of the plaque pixels. The normalization device can address this issue by having known volumes of known substances to help correctly evaluate volumes of materials/lesions within the image, correcting to some extent for the influence of the blooming artifact on quantitative CT image analysis/measures. In some instances, the normalization device might have moving known materials with known volume and known and controllable motion. This may allow the effect of motion on quantitative CT image analysis/measures to be excluded or reduced.
Accordingly, the normalization device, in some embodiments, is not a phantom in the traditional sense because the normalization device is not just calibrating to a particular scanner but is also normalizing for a specific patient at a particular time in a particular environment for a particular scan, for particular scan image acquisition parameters, and/or for specific contrast protocols. Accordingly, in some embodiments, the normalization device can be considered a reverse phantom. This can be because, rather than providing a mechanism for validating a particular medical imager as a conventional phantom would, the normalization device can provide a mechanism for normalizing or validating a resulting medical image such that it can be compared with other medical images taken under different conditions. In some embodiments, the normalization device is configured to normalize the medical image data being examined with the medical image data used to train, test, and/or validate the AI algorithms used for analyzing the to-be-examined medical image data. In some embodiments, the normalization of medical scanning data can be necessary for the AI processing methods disclosed herein because in some instances AI processing methods can only properly process medical scanning data when the medical scanning data is consistent across all medical scanning data being processed. For example, in situations where a first medical scanner produces medical images showing fatty material as dark gray or black, whereas a second medical scanner produces medical images showing the same fatty material as medium or light gray, the AI processing methodologies of the systems, methods, and devices disclosed herein may misidentify and/or not fully identify the fatty materials in one set or both sets of the medical images produced by the first and second medical scanners. This can be even more problematic as the relationship of specific material densities may not be constant, and may even change in a non-linear way depending on the material and on the scanning parameters. In some embodiments, the normalization device enables the use of AI algorithms trained on certain medical scanner devices to be used on medical images generated by next-generation medical scanner devices that may have not yet even been developed.
The substrate 1202 can also include a plurality of compartments (not shown in FIG. 12A, but see, for example, compartments 1216 of FIGS. 12C-12F). The compartments 1216 can be configured to hold samples of known materials, such as contrast samples 1204, studied variable samples 1206, and phantom samples 1208. In some embodiments, the contrast samples 1204 comprise samples of contrast materials used during capture of the medical image. In some embodiments, the samples of the contrast materials 1204 comprise one or more of iodine, Gad, Tantalum, Tungsten, Gold, Bismuth, or Ytterbium. These samples can be provided within the compartments 1216 of the normalization device 1200 at various concentrations. The studied variable samples 1206 can include samples of materials representative of the materials to be analyzed by the systems and methods described herein. In some examples, the studied variable samples 1206 comprise one or more of calcium 1000 HU, calcium 220 HU, calcium 150 HU, calcium 130 HU, and a low attenuation (e.g., 30 HU) material. Other studied variable samples 1206 provided at different concentrations can also be included. In general, the studied variable samples 1206 can correspond to the materials for which the medical image is being analyzed. The phantom samples 1208 can comprise samples of one or more phantom materials. In some examples, the phantom samples 1208 comprise one or more of water, fat, calcium, uric acid, air, iron, or blood. Other phantom samples 1208 can also be used. In some embodiments, the more materials contained in the normalization device 1200, or the more compartments 1216 with different materials in the normalization device 1200, the better the normalization of the data produced by the medical scanner. In some embodiments, the normalization device 1200 or the substrate 1202 thereof is manufactured from flexible and/or bendable plastic. In some embodiments, the normalization device 1200 is adapted to be positioned within or under the coils of an MR scanning device. In some embodiments, the normalization device 1200 or the substrate 1202 thereof is manufactured from rigid plastic. In the illustrated embodiment of FIG. 12A, the normalization device 1200 also includes an attachment mechanism 1210. The attachment mechanism 1210 can be used to attach the normalization device 1200 to the patient. For example, in some embodiments, the normalization device 1200 is attached to the patient near the coronary region to be imaged prior to image acquisition. In some embodiments, the normalization device 1200 can be adhered to the skin of a patient using an adhesive or Velcro or some other fastener or glue. In some embodiments, the normalization device 1200 can be applied to a patient like a bandage. For example, in some embodiments, a removable Band-Aid or sticker is applied to the skin of the patient, wherein the Band-Aid can comprise a Velcro outward-facing portion that allows the normalization device, having a corresponding Velcro mating portion, to adhere to the Band-Aid or sticker that is affixed to the skin of the patient (see, for example, the normalization device of FIG. 12G, described below). In some embodiments, the attachment mechanism 1210 can be omitted, such that the normalization device 1200 need not be affixed to the patient. Rather, in some embodiments, the normalization device can be placed in a medical scanner with or without a patient. In some embodiments, the normalization device can be configured to be placed alongside a patient within a medical scanner.
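As a sketch only, the inventory of known samples described above (contrast samples 1204, studied variable samples 1206, and phantom samples 1208) could be represented in software as follows. The Python structure and the specific entries chosen here are illustrative assumptions drawn from the materials listed in this passage, not a specification of the device's contents.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Sample:
    """A known material held in one compartment of the normalization device."""
    name: str
    category: str                        # "contrast", "studied_variable", or "phantom"
    nominal_hu: Optional[float] = None   # expected attenuation, where applicable
    concentration: Optional[str] = None  # e.g. contrast concentration, if varied

# Hypothetical inventory mirroring the samples listed above (1204, 1206, 1208).
DEVICE_SAMPLES: List[Sample] = [
    Sample("iodine", "contrast", concentration="varied"),
    Sample("gadolinium", "contrast"),
    Sample("calcium", "studied_variable", nominal_hu=1000),
    Sample("calcium", "studied_variable", nominal_hu=220),
    Sample("calcium", "studied_variable", nominal_hu=150),
    Sample("calcium", "studied_variable", nominal_hu=130),
    Sample("low attenuation material", "studied_variable", nominal_hu=30),
    Sample("distilled water", "phantom", nominal_hu=0),
    Sample("fat", "phantom"),
    Sample("air", "phantom", nominal_hu=-1000),
    Sample("blood", "phantom"),
]
```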
In some embodiments, the normalization device 1200 can be a reusable device or be a disposable one-time use device. In some embodiments, the normalization device 1200 comprises an expiration date; for example, the device can comprise a material that changes color to indicate expiration of the device, wherein the color changes over time and/or after a certain number of scans or an amount of radiation exposure (see, for example, FIGS. 12H and 12I, described below). In some embodiments, the normalization device 1200 requires refrigeration between uses, for example, to preserve one or more of the samples contained therein. In some embodiments, the normalization device 1200 can comprise an indicator, such as a color change indicator, that notifies the user that the device has expired due to heat exposure or failure to refrigerate. In certain embodiments, the normalization device 1200 comprises a material that allows for heat transfer from the skin of the patient in order for the materials within the normalization device 1200 to reach the same or substantially the same temperature as the skin of the patient, because in some cases the temperature of the materials can affect the resulting coloration or gray-scale of the materials produced by the image scanning device. For example, the substrate 1202 can comprise a material with a relatively high heat transfer coefficient to facilitate heat transfer from the patient to the samples within the substrate 1202. In some embodiments, the normalization device 1200 can be removably coupled to a patient's skin by using an adhesive that can allow the device to adhere to the skin of a patient. In some embodiments, the normalization device 1200 can be used in the imaging field of view or not in the imaging field of view. In some embodiments, the normalization device 1200 can be imaged simultaneously with the patient image acquisition or sequentially. Sequential use can comprise first imaging the normalization device 1200 and then imaging the patient shortly thereafter using the same imaging parameters (or vice versa). In some embodiments, the normalization device 1200 can be static or programmed to be in motion or movement in sync with the image acquisition or the patient's heart or respiratory motion. In some embodiments, the normalization device 1200 can utilize comparison to image domain-based data or projection domain-based data. In some embodiments, the normalization device 1200 can be a 2D (area), 3D (volume), or 4D (changes with time) device. In some embodiments, two or more normalization devices 1200 can be affixed to and/or positioned alongside a patient during medical image scanning in order to account for changes in coloration and/or gray scale levels at different depths within the scanner and/or different locations within the scanner. In some embodiments, the normalization device 1200 can comprise one or more layers, wherein each layer comprises compartments for holding the same or different materials as other layers of the device. FIG. 12B, for example, illustrates a perspective view of an embodiment of a normalization device 1200 including a multilayer substrate 1202. In the illustrated embodiment, the substrate 1202 comprises a first layer 1212 and a second layer 1214. The second layer 1214 can be positioned above the first layer 1212. In other embodiments, one or more additional layers may be positioned above the second layer 1214. Each of the layers 1212, 1214 can be configured with compartments for holding the various known samples, as shown in FIG. 12C.
In some embodiments, the various layers 1212, 1214 of the normalization device 1200 allow for normalization at various depth levels for various scanning machines that perform three-dimensional scanning, such as MR and ultrasound. In some embodiments, the system can be configured to normalize by averaging coloration and/or gray scale level changes in imaging characteristics due to changes in depth. FIG. 12C is a cross-sectional view of the normalization device 1200 of FIG. 12B illustrating various compartments 1216 positioned therein for holding samples of known materials for use during normalization. The compartments 1216 can be configured to hold, for example, the contrast samples 1204, the studied variable samples 1206, and the phantom samples 1208 illustrated in FIG. 12A. The compartments 1216 may comprise spaces, pouches, cubes, spheres, areas, or the like, and within each compartment 1216 there is contained one or more compounds, fluids, substances, elements, materials, and the like. In some embodiments, each of the compartments 1216 can comprise a different substance or material. In some embodiments, each compartment 1216 is air-tight and sealed to prevent the sample, which may be a liquid, from leaking out. Within each layer 1212, 1214, or within the substrate 1202, the normalization device 1200 may include different arrangements for the compartments 1216. FIG. 12D illustrates a top-down view of an example arrangement of a plurality of compartments 1216 within the normalization device 1200. In the illustrated embodiment, the plurality of compartments 1216 are arranged in a rectangular or grid-like pattern. FIG. 12E illustrates a top-down view of another example arrangement of a plurality of compartments 1216 within a normalization device 1200. In the illustrated embodiment, the plurality of compartments 1216 are arranged in a circular pattern. Other arrangements are also possible. FIG. 12F is a cross-sectional view of another embodiment of a normalization device 1200 illustrating various features thereof, including adjacently arranged compartments 1216A, self-sealing fillable compartments 1216B, and compartments of various sizes and shapes 1216C. As shown in FIG. 12F, one or more of the compartments 1216A can be arranged so as to be adjacent to each other so that materials within the compartments 1216A can be in contact with and/or in close proximity to the materials within the adjacent compartments 1216A. In some embodiments, the normalization device 1200 comprises high density materials juxtaposed to low density materials in order to determine how a particular scanning device displays certain materials, thereby allowing normalization across multiple scanning devices. In some embodiments, certain materials are positioned adjacent or near other materials because during scanning certain materials can influence each other. Examples of materials that can be placed in adjacently positioned compartments 1216A can include iodine, air, fat material, tissue, radioactive contrast agent, gold, iron, other metals, distilled water, and/or water, among others. In some embodiments, the normalization device 1200 is configured to receive material and/or fluid such that the normalization device is self-sealing. Accordingly, FIG. 12F illustrates compartments 1216B that are self-sealing. These can allow a material to be injected into the compartment 1216B and then sealed therein.
For example, a radioactive contrast agent can be injected in a self-sealing manner into a compartment 1216B of the normalization device 1200, such that the medical image data generated from the scanning device can be normalized over time as the radioactive contrast agent decays during the scanning procedure. In some embodiments, the normalization device can be configured to contain materials specific for a patient and/or a type of tissue being analyzed and/or a disease type and/or a scanner machine type. In some embodiments, the normalization device 1200 can be configured to measure scanner resolution and type of resolution by configuring the normalization device 1200 with a plurality of shapes, such as a circle. Accordingly, the compartments 1216C can be provided with different shapes and sizes. FIG. 12F illustrates an example wherein compartments 1216C are provided with different shapes (cubic and spherical) and different sizes. In some embodiments, all compartments 1216 can be the same shape and size. In some embodiments, the size of one or more compartments 1216 of the normalization device 1200 can be configured or selected to correspond to the resolution of the medical image scanner. For example, in some embodiments, if the spatial resolution of a medical image scanner is 0.5 mm × 0.5 mm × 0.5 mm, then the dimensions of the compartments of the normalization device can also be 0.5 mm × 0.5 mm × 0.5 mm. In some embodiments, the sizes of the compartments range from 0.5 mm to 0.75 mm. In some embodiments, the width of the compartments of the normalization device can be about 0.1 mm, about 0.15 mm, about 0.2 mm, about 0.25 mm, about 0.3 mm, about 0.35 mm, about 0.4 mm, about 0.45 mm, about 0.5 mm, about 0.55 mm, about 0.6 mm, about 0.65 mm, about 0.7 mm, about 0.75 mm, about 0.8 mm, about 0.85 mm, about 0.9 mm, about 0.95 mm, about 1.0 mm, and/or within a range defined by two of the aforementioned values. In some embodiments, the length of the compartments of the normalization device can be about 0.1 mm, about 0.15 mm, about 0.2 mm, about 0.25 mm, about 0.3 mm, about 0.35 mm, about 0.4 mm, about 0.45 mm, about 0.5 mm, about 0.55 mm, about 0.6 mm, about 0.65 mm, about 0.7 mm, about 0.75 mm, about 0.8 mm, about 0.85 mm, about 0.9 mm, about 0.95 mm, about 1.0 mm, and/or within a range defined by two of the aforementioned values. In some embodiments, the height of the compartments of the normalization device can be about 0.1 mm, about 0.15 mm, about 0.2 mm, about 0.25 mm, about 0.3 mm, about 0.35 mm, about 0.4 mm, about 0.45 mm, about 0.5 mm, about 0.55 mm, about 0.6 mm, about 0.65 mm, about 0.7 mm, about 0.75 mm, about 0.8 mm, about 0.85 mm, about 0.9 mm, about 0.95 mm, about 1.0 mm, and/or within a range defined by two of the aforementioned values. In some embodiments, the dimensions of each of the compartments 1216 in the normalization device 1200 are the same or substantially the same for all of the compartments 1216. In some embodiments, the dimensions of some or all of the compartments 1216 in the normalization device 1200 can be different from each other in order for a single normalization device 1200 to have a plurality of compartments having different dimensions such that the normalization device 1200 can be used in various medical image scanning devices having different resolution capabilities (for example, as illustrated in FIG. 12F).
In some embodiments, a normalization device 1200 having a plurality of compartments 1216 with differing dimensions enables the normalization device to be used to determine the actual resolution capability of the scanning device. In some embodiments, the size of each compartment 1216 may extend up to 10 mm, and the sizes of the compartments may be variable depending upon the material contained within. In the illustrated embodiment of FIGS. 12C and 12F, the normalization device 1200 includes an attachment mechanism 1210 which includes an adhesive surface 1218. The adhesive surface 1218 can be configured to affix (e.g., removably affix) the normalization device 1200 to the skin of the patient. FIG. 12G is a perspective view illustrating an embodiment of an attachment mechanism 1210 for a normalization device 1200 that uses hook and loop fasteners 1220 to secure a substrate of the normalization device to a fastener of the normalization device 1200. In the illustrated embodiment, an adhesive surface 1218 can be configured to be affixed to the patient. The adhesive surface 1218 can include a first hook and loop fastener 1220. A corresponding hook and loop fastener 1220 can be provided on a lower surface of the substrate 1202 and used to removably attach the substrate 1202 to the adhesive surface 1218 via the hook and loop fasteners 1220. FIGS. 12H and 12I illustrate an embodiment of a normalization device 1200 that includes an indicator 1222 configured to indicate an expiration status of the normalization device 1200. The indicator 1222 can comprise a material that changes color or reveals a word to indicate expiration of the device, wherein the color or text changes or appears over time and/or after a certain number of scans or an amount of radiation exposure. FIG. 12H illustrates the indicator 1222 in a first state representative of a non-expired state, and FIG. 12I illustrates the indicator 1222 in a second state representative of an expired state. In some embodiments, the normalization device 1200 requires refrigeration between uses. In some embodiments, the indicator 1222, such as a color change indicator, can notify the user that the device has expired due to heat exposure or failure to refrigerate. In some embodiments, the normalization device 1200 can be used with a system configured to set distilled water to a gray scale value of zero, such that if a particular medical image scanning device registers the compartment of the normalization device 1200 comprising distilled water as having a gray scale value other than zero, then the system can utilize an algorithm to transpose or transform the registered value to zero. In some embodiments, the system is configured to generate a normalization algorithm based on known values established for particular substances in the compartments of the normalization device 1200, and on the values detected/generated by a medical image scanning device for the same substances in the compartments 1216 of the normalization device 1200. In some embodiments, the normalization device 1200 can be configured to generate a normalization algorithm based on a linear regression model to normalize medical image data to be analyzed. In some embodiments, the normalization device 1200 can be configured to generate a normalization algorithm based on a non-linear regression model to normalize medical image data to be analyzed.
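As a minimal sketch of the linear-regression variant mentioned above, the Python code below fits reference = a·measured + b over the device's known samples and applies the transform to image values, so that, for example, distilled water is mapped back toward 0 HU. The function names and dictionary-based interface are assumptions; the patent specifies only that a regression-based normalization algorithm can be generated from the known and detected values.

```python
from typing import Dict, Tuple

def fit_linear_normalization(measured_hu: Dict[str, float],
                             reference_hu: Dict[str, float]) -> Tuple[float, float]:
    """Least-squares fit of reference = a * measured + b over the device's known samples.

    `measured_hu` holds the values read from the device compartments in the current scan;
    `reference_hu` holds the values established for the same samples (e.g. from the data
    used to train the analysis algorithms).
    """
    keys = [k for k in measured_hu if k in reference_hu]
    xs = [measured_hu[k] for k in keys]
    ys = [reference_hu[k] for k in keys]
    n = len(keys)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    var_x = sum((x - mean_x) ** 2 for x in xs)
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    a = cov_xy / var_x
    b = mean_y - a * mean_x
    return a, b

def normalize_value(hu: float, a: float, b: float) -> float:
    """Apply the fitted transform to a value from the patient image."""
    return a * hu + b
```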
In some embodiments, the normalization device 1200 can be configured to generate a normalization algorithm based on any type of model or models, such as an exponential, logarithmic, polynomial, power, moving average, and/or the like, to normalize medical image data to be analyzed. In some embodiments, the normalization algorithm can comprise a two-dimensional transformation. In some embodiments, the normalization algorithm can comprise a three-dimensional transformation to account for other factors such as depth, time, and/or the like. By using the normalization device 1200 to scan known substances using different machines or the same machine at different times, the system can normalize CT scan data across various scanning machines and/or the same scanning machine at different times. In some embodiments, the normalization device 1200 disclosed herein can be used with any scanning modality, including but not limited to x-ray, ultrasound, echocardiogram, magnetic resonance (MR), optical coherence tomography (OCT), intravascular ultrasound (IVUS), and/or nuclear medicine imaging, including positron-emission tomography (PET) and single photon emission computed tomography (SPECT). In some embodiments, the normalization device 1200 contains one or more materials that form plaque (e.g., studied variable samples 1206) and one or more materials that are used in the contrast that is given to the patient through a vein during examination (e.g., contrast samples 1204). In some embodiments, the materials within the compartments 1216 include iodine of varying concentrations, calcium of varying densities, non-calcified plaque materials or equivalents of varying densities, water, fat, blood or equivalent density material, iron, uric acid, air, gadolinium, tantalum, tungsten, gold, bismuth, ytterbium, and/or other material. In some embodiments, the training of the AI algorithm can be based at least in part on data relating to the density in the images of the normalization device 1200. As such, in some embodiments, the system can have access to and/or have stored pre-existing data on how the normalization device 1200 behaved or was shown in one or more images during the training of the AI algorithm. In some embodiments, the system can use such prior data as a baseline to determine the difference with how the normalization device 1200 behaves in the new or current CT scan to which the AI algorithm is applied. In some embodiments, the determined difference can be used to calibrate, normalize, and/or map one or more densities in recently acquired image(s) to one or more images that were obtained and/or used during training of the AI algorithm. As a non-limiting example, in some embodiments, the normalization device 1200 comprises calcium. If, for example, the calcium in the CT or normalization device 1200 that was used to train the AI algorithm(s) showed a density of 300 Hounsfield Units (HU), and if the same calcium showed a density of 600 HU in one or more images of a new scan, then the system, in some embodiments, may be configured to automatically divide all calcium densities in half to normalize or transform the new CT image(s) to be equivalent to the old CT image(s) used to train the AI algorithm. In some embodiments, as discussed above, the normalization device 1200 comprises a plurality of materials that may be relevant, which can be advantageous as different materials can change densities in different amounts across scans.
For example, if the density of calcium changes 2× across scans, the density of fat may change around 10% across the same scans. As such, it can be advantageous for the normalization device 1200 to comprise a plurality of materials, such as for example one or more materials that make up plaque, blood, contrast, and/or the like. As described above, in some embodiments, the system can be configured to normalize, map, and/or calibrate density readings and/or CT images obtained from a particular scanner and/or subject proportionally according to changes or differences in density readings and/or CT images obtained from one or more materials of a normalization device 1200 using a baseline scanner compared to density readings and/or CT images obtained from one or more of the same materials of a normalization device 1200 using the particular scanner and/or subject. As a non-limiting example, for embodiments in which the normalization device 1200 comprises calcium, the system can be configured to apply the same change in density of known calcium between the baseline scan and the new scan, for example 2×, to all other calcium readings of the new scan to calibrate and/or normalize the readings. In some embodiments, the system can be configured to normalize, map, and/or calibrate density readings and/or CT images obtained from a particular scanner and/or subject by averaging changes or differences between density readings and/or CT images obtained from one or more materials of a normalization device 1200 using a baseline scanner compared to density readings and/or CT images obtained from one or more materials or areas of a subject using the same baseline scanner. As a non-limiting example, for embodiments in which the normalization device 1200 comprises calcium, the system can be configured to determine a difference, or a ratio thereof, in density readings between calcium in the normalization device 1200 and other areas of calcium in the subject during the baseline scan. In some embodiments, the system can be configured to similarly determine a difference, or a ratio thereof, in density readings between calcium in the normalization device 1200 and other areas of calcium in the subject during the new scan; dividing the value of calcium from the device by the value of calcium anywhere else in the image can cancel out any change, as the difference in conditions can affect the same material in the same manner. In some embodiments, the device will account for scan parameters (such as mA or kVp), type and number of x-ray sources within a scanner (such as single source or dual source), temporal resolution of a scanner, spatial resolution of scanner or image, image reconstruction method (such as adaptive statistical iterative reconstruction, model-based iterative reconstruction, machine learning-based iterative reconstruction or similar); image reconstruction method (such as from different types of kernels, overlapping slices from retrospective ECG-helical studies, non-overlapping slices from prospective axial triggered studies, fast pitch helical studies, or half vs.
full scan integral reconstruction); contrast density accounting for internal factors (such as oxygen, blood, temperature, and others); contrast density accounting for external factors (such as contrast density, concentration, osmolality and temporal change during the scan); detection technology (such as material, collimation and filtering); spectral imaging (such as polychromatic, monochromatic and spectral imaging along with material basis decomposition and single energy imaging); photon counting; and/or scanner brand and model. In some embodiments, the normalization device 1200 can be applied to MRI studies, and account for one or more of: type of coil; place of positioning; number of antennas; depth from coil elements; image acquisition type; pulse sequence type and characteristics; field strength, gradient strength, slew rate and other hardware characteristics; magnet vendor, brand and type; imaging characteristics (thickness, matrix size, field of view, acceleration factor, reconstruction methods and characteristics, 2D, 3D, 4D [cine imaging, any change over time], temporal resolution, number of acquisitions, diffusion coefficients, method of populating k-space); contrast (intrinsic [oxygen, blood, temperature, etc.] and extrinsic types, volume, temporal change after administration); static or moving materials; quantitative imaging (including T1 and T2 mapping, ADC, diffusion, phase contrast, and others); and/or administration of pharmaceuticals during image acquisition. In some embodiments, the normalization device 1200 can be applied to ultrasound studies, and account for one or more of: type and machine brands; transducer type and frequency; greyscale, color, and pulsed wave doppler; B- or M-mode doppler type; contrast agent; field of view; depth from transducer; pulsed wave deformity (including elastography), angle; imaging characteristics (thickness, matrix size, field of view, acceleration factor, reconstruction methods and characteristics, 2D, 3D, 4D [cine imaging, any change over time]); temporal resolution; number of acquisitions; gain; and/or focus number and places, amongst others. In some embodiments, the normalization device 1200 can be applied to nuclear medicine studies, such as PET or SPECT, and account for one or more of: type and machine brands; for PET/CT, all CT considerations apply; for PET/MR, all MR considerations apply; contrast (radiopharmaceutical agent types, volume, temporal change after administration); imaging characteristics (thickness, matrix size, field of view, acceleration factor, reconstruction methods and characteristics, 2D, 3D, 4D [cine imaging, any change over time]); temporal resolution; number of acquisitions; gain; and/or focus number and places, amongst others. In some embodiments, the normalization device may have different known materials with different densities adjacent to each other. This may address an issue present in some CT images where the density of a pixel influences the density of the adjacent pixels and that influence changes with the density of each individual pixel. One example of this is different contrast densities in the coronary lumen influencing the density of the plaque pixels. In some embodiments, the normalization device may include known volumes of known substances to help correctly evaluate volumes of materials/lesions within the image in order to correct the influence of the blooming artifact on quantitative CT image analysis/measures.
In some embodiments, the normalization device might have moving known materials with known volume and known and controllable motion. This would allow the effect of motion on quantitative CT image analysis/measures to be excluded or reduced. In some embodiments, having a known material of the normalization device in the image might also be helpful for material-specific reconstructions from the same image. For example, it may be possible to use only one set of images to display only known materials, without needing multiple-kV/spectral imaging hardware. FIG. 12J is a flowchart illustrating an example method 1250 for normalizing medical images for an algorithm-based medical imaging analysis such as the analyses described herein. Use of the normalization device can improve accuracy of the algorithm-based medical imaging analysis. The method 1250 can be a computer-implemented method, implemented on a system that comprises a processor and an electronic storage medium. The method 1250 illustrates that the normalization device can be used to normalize medical images captured under different conditions. For example, at block 1252, a first medical image of a coronary region of a subject and the normalization device is accessed. The first medical image can be obtained non-invasively. The normalization device can comprise a substrate comprising a plurality of compartments, each of the plurality of compartments holding a sample of a known material, for example as described above. At block 1254, a second medical image of a coronary region of a subject and the normalization device is captured. The second medical image can be obtained non-invasively. Although the method 1250 is described with reference to a coronary region of a patient, the method is also applicable to all body parts and not only the vessels, as the same principles apply to all body parts, all time points, and all imaging devices. This can even include "live" types of images such as fluoroscopy or real-time MR images. As illustrated by the portion within the dotted lines, the capture of the first medical image and the second medical image can differ in at least one of the following ways: (1) one or more first variable acquisition parameters associated with capture of the first medical image differ from a corresponding one or more second variable acquisition parameters associated with capture of the second medical image, (2) a first image capture technology used to capture the first medical image differs from a second image capture technology used to capture the second medical image, and (3) a first contrast agent used during the capture of the first medical image differs from a second contrast agent used during the capture of the second medical image. In some embodiments, the first medical image and the second medical image each comprise a CT image and the one or more first variable acquisition parameters and the one or more second variable acquisition parameters comprise one or more of a kilovoltage (kV), kilovoltage peak (kVp), a milliamperage (mA), or a method of gating. In some embodiments, the method of gating comprises one of prospective axial triggering, retrospective ECG helical gating, and fast pitch helical. In some embodiments, the first image capture technology and the second image capture technology each comprise one of a dual source scanner, a single source scanner, dual energy, monochromatic energy, spectral CT, photon counting, and different detector materials.
In some embodiments, the first contrast agent and the second contrast agent each comprise one of an iodine contrast of varying concentration or a non-iodine contrast agent. In some embodiments, the first image capture technology and the second image capture technology each comprise one of CT, x-ray, ultrasound, echocardiography, intravascular ultrasound (IVUS), MR imaging, optical coherence tomography (OCT), nuclear medicine imaging, positron-emission tomography (PET), single photon emission computed tomography (SPECT), or near-field infrared spectroscopy (NIRS). In some embodiments, a first medical imager that captures the first medical imager is different than a second medical image that capture the second medical image. In some embodiments, the subject of the first medical image is different than the subject of the first medical image. In some embodiments, wherein the subject of the first medical image is the same as the subject of the second medical image. In some embodiments, wherein the subject of the first medical image is different than the subject of the second medical image. In some embodiments, wherein the capture of the first medical image is separated from the capture of the second medical image by at least one day. In some embodiments, wherein the capture of the first medical image is separated from the capture of the second medical image by at least one day. In some embodiments, wherein a location of the capture of the first medical image is geographically separated from a location of the capture of the second medical image. Accordingly, it is apparent that the first and second medical images can be acquired under different conditions that can cause differences between the two images, even if the subject of each image is the same. The normalization device can help to normalize and account for these differences. The method1250then moves to blocks1262and1264, at which image parameters of the normalization device within the first medical image and which image parameters of the normalization device within the second medical image are identified, respectively. Due to different circumstances under which the first and second medical images were captured, the normalization device may appear differently in each image, even though the normalization device includes the same known samples. Next, at blocks1266and1268, the method generates a normalized first medical image for the algorithm-based medical imaging analysis based in part on the first identified image parameters of the normalization device within the first medical image and generates a normalized second medical image for the algorithm-based medical imaging analysis based in part on the second identified image parameters of the normalization device within the second medical image, respectively. In these blocks, each image is normalized based on the appearance or determined parameters of the normalization device in each image. In some embodiments, the algorithm-based medical imaging analysis comprises an artificial intelligence or machine learning imaging analysis algorithm, and the artificial intelligence or machine learning imaging analysis algorithm was trained using images that included the normalization device. 
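The following Python sketch illustrates the proportional, per-material variant of blocks 1262-1268 described in this section: the device parameters identified in each image yield a scale factor per known material, which is then used to bring both images onto the common reference. The functions are hypothetical; as a worked example, calcium reading 300 HU at baseline and 600 HU in a new scan gives a factor of 0.5, i.e., calcium densities in the new image are halved, while a material such as fat may only shift by about 10%.

```python
from typing import Dict, Tuple

def per_material_scale_factors(reference_hu: Dict[str, float],
                               measured_hu: Dict[str, float]) -> Dict[str, float]:
    """Ratio of reference to measured density for each known sample imaged with the device."""
    return {m: reference_hu[m] / measured_hu[m] for m in reference_hu if m in measured_hu}

def normalize_image_pair(first_device_hu: Dict[str, float],
                         second_device_hu: Dict[str, float],
                         reference_hu: Dict[str, float]
                         ) -> Tuple[Dict[str, float], Dict[str, float]]:
    """Sketch of blocks 1262-1268: identify the device parameters in each image and derive a
    per-material normalization for each, so both images are comparable to the reference."""
    first_factors = per_material_scale_factors(reference_hu, first_device_hu)
    second_factors = per_material_scale_factors(reference_hu, second_device_hu)
    return first_factors, second_factors

# Worked example: calcium at 300 HU in the training/reference images, 600 HU in a new scan.
factors, _ = normalize_image_pair({"calcium": 600.0}, {"calcium": 600.0}, {"calcium": 300.0})
assert factors["calcium"] == 0.5  # calcium densities in the new image are halved
```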
System Overview In some embodiments, the systems, devices, and methods described herein are implemented using a network of one or more computer systems, such as the one illustrated inFIG.13.FIG.13is a block diagram depicting one or more embodiments of a system for medical image analysis, visualization, risk assessment, disease tracking, treatment generation, and/or patient report generation. As illustrated inFIG.13, in some embodiments, a main server system1302is configured to perform one or more processes, analytics, and/or techniques described herein, some of which relate to medical image analysis, visualization, risk assessment, disease tracking, treatment generation, and/or patient report generation. In some embodiments, the main server system1302is connected via an electronic communications network1308to one or more medical facility client systems1304and/or one or more user access point systems1306. For example, in some embodiments, one or more medical facility client systems1304can be configured to access a medical image taken at the medical facility of a subject, which can then be transmitted to the main server system1302via the network1308for further analysis. After analysis, in some embodiments, the analysis results, such as for example quantified plaque parameters, assessed risk of a cardiovascular event, generated report, annotated and/or derived medical images, and/or the like, can be transmitted back to the medical facility client system1304via the network1308. In some embodiments, the analysis results, such as for example quantified plaque parameters, assessed risk of a cardiovascular event, generated report, annotated and/or derived medical images, and/or the like, can also be transmitted to a user access point system1306, such as a smartphone or other computing device of the patient or subject. As such, in some embodiments, a patient can be allowed to view and/or access a patient-specific report and/or other analyses generated and/or derived by the system from the medical image on the patient's computing device. In some embodiments, the main server system1302can comprise and/or be configured to access one or more modules and/or databases for performing the one or more processes, analytics, and/or techniques described herein. For example, in some embodiments, the main server system1302can comprise an image analysis module1310, a plaque quantification module1312, a fat quantification module1314, an atherosclerosis, stenosis, and/or ischemia analysis module1316, a visualization/GUI module1318, a risk assessment module1320, a disease tracking module1322, a normalization module1324, a medical image database1326, a parameter database1328, a treatment database1330, a patient report database1332, a normalization device database1334, and/or the like. In some embodiments, the image analysis module1310can be configured to perform one or more processes described herein relating to image analysis, such as for example vessel and/or plaque identification from a raw medical image. In some embodiments, the plaque quantification module1312can be configured to perform one or more processes described herein relating to deriving or generating quantified plaque parameters, such as for example radiodensity, volume, heterogeneity, and/or the like of plaque from a raw medical image.
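The sketch below is a hypothetical illustration of how a main server system could compose such modules into an analysis pipeline whose results are returned to a medical facility client system or user access point system. The class name, module names, and return values are assumptions made for illustration, not the actual implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class MainServerSystem:
    """Hypothetical composition of analysis modules on the main server."""
    modules: Dict[str, Callable[[dict], dict]] = field(default_factory=dict)

    def register(self, name: str, module: Callable[[dict], dict]) -> None:
        self.modules[name] = module

    def analyze(self, medical_image: dict, pipeline: List[str]) -> dict:
        """Run the requested modules in order, accumulating results that can be
        transmitted back to a client system or user access point."""
        results = {"image": medical_image}
        for name in pipeline:
            results[name] = self.modules[name](results)
        return results

server = MainServerSystem()
server.register("image_analysis", lambda r: {"vessels": ["RCA", "LAD", "LCX"]})
server.register("plaque_quantification", lambda r: {"total_plaque_volume_mm3": 42.0})
server.register("risk_assessment", lambda r: {"risk": "intermediate"})

report = server.analyze({"modality": "CCTA"},
                        ["image_analysis", "plaque_quantification", "risk_assessment"])
print(report["risk_assessment"])
```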
In some embodiments, the fat quantification module1314can be configured to perform one or more processes described herein relating to deriving or generating quantified fat parameters, such as for example radiodensity, volume, heterogeneity, and/or the like of fat from a raw medical image. In some embodiments, the atherosclerosis, stenosis, and/or ischemia analysis module1316can be configured to perform one or more processes described herein relating to analyzing and/or generating an assessment or quantification of atherosclerosis, stenosis, and/or ischemia from a raw medical image. In some embodiments, the visualization/GUI module1318can be configured to perform one or more processes described herein relating to deriving or generating one or more visualizations and/or GUIs, such as for example a straightened view of a vessel identifying areas of good and/or bad plaque from a raw medical image. In some embodiments, the risk assessment module1320can be configured to perform one or more processes described herein relating to deriving or generating risk assessment, such as for example of a cardiovascular event or disease from a raw medical image. In some embodiments, the disease tracking module1322can be configured to perform one or more processes described herein relating to tracking a plaque-based disease, such as for example atherosclerosis, stenosis, ischemia, and/or the like from a raw medical image. In some embodiments, the normalization module1324can be configured to perform one or more processes described herein relating to normalizing and/or translating a medical image, for example based on a medical image of a normalization device comprising known materials, for further processing and/or analysis. In some embodiments, the medical image database1326can comprise one or more medical images that are used for one or more of the various analysis techniques and processes described herein. In some embodiments, the parameter database1328can comprise one or more parameters derived from raw medical images by the system, such as for example one or more vessel morphology parameters, quantified plaque parameters, quantified fat parameters, and/or the like. In some embodiments, the treatment database1330can comprise one or more recommended treatments derived from raw medical images by the system. In some embodiments, the patient report database1332can comprise one or more patient-specific reports derived from raw medical images by the system and/or one or more components thereof that can be used to generate a patient-specific report based on medical image analysis results. In some embodiments, the normalization device database1334can comprise one or more historical data points and/or datasets from normalizing various medical images and/or the specific types of medical imaging scanners and/or specific scan parameters used to obtain those images, as well as previously used normalization variables and/or translations for different medical images. Computer System In some embodiments, the systems, processes, and methods described herein are implemented using a computing system, such as the one illustrated inFIG.14. The example computer system1402is in communication with one or more computing systems1420and/or one or more data sources1422via one or more networks1418.
WhileFIG.14illustrates an embodiment of a computing system1402, it is recognized that the functionality provided for in the components and modules of computer system1402may be combined into fewer components and modules, or further separated into additional components and modules. The computer system1402can comprise a Medical Analysis, Risk Assessment, and Tracking Module1414that carries out the functions, methods, acts, and/or processes described herein. The Medical Analysis, Risk Assessment, and Tracking Module1414is executed on the computer system1402by a central processing unit1406discussed further below. In general, the word "module," as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions, having entry and exit points. Modules are written in a programming language, such as JAVA, C or C++, PYTHON, or the like. Software modules may be compiled or linked into an executable program, installed in a dynamic link library, or may be written in an interpreted language such as BASIC, PERL, LUA, or Python. Software modules may be called from other modules or from themselves, and/or may be invoked in response to detected events or interruptions. Modules implemented in hardware include connected logic units such as gates and flip-flops, and/or may include programmable units, such as programmable gate arrays or processors. Generally, the modules described herein refer to logical modules that may be combined with other modules or divided into sub-modules despite their physical organization or storage. The modules are executed by one or more computing systems, and may be stored on or within any suitable computer readable medium, or implemented in whole or in part within specially designed hardware or firmware. Not all calculations, analyses, and/or optimizations require the use of computer systems, though any of the above-described methods, calculations, processes, or analyses may be facilitated through the use of computers. Further, in some embodiments, process blocks described herein may be altered, rearranged, combined, and/or omitted. The computer system1402includes one or more processing units (CPU)1406, which may comprise a microprocessor. The computer system1402further includes a physical memory1410, such as random access memory (RAM) for temporary storage of information, a read only memory (ROM) for permanent storage of information, and a mass storage device1404, such as a backing store, hard drive, rotating magnetic disks, solid state disks (SSD), flash memory, phase-change memory (PCM), 3D XPoint memory, diskette, or optical media storage device. Alternatively, the mass storage device may be implemented in an array of servers. Typically, the components of the computer system1402are connected to the computer using a standards-based bus system. The bus system can be implemented using various protocols, such as Peripheral Component Interconnect (PCI), Micro Channel, SCSI, Industry Standard Architecture (ISA), and Extended ISA (EISA) architectures. The computer system1402includes one or more input/output (I/O) devices and interfaces1412, such as a keyboard, mouse, touch pad, and printer. The I/O devices and interfaces1412can include one or more display devices, such as a monitor, that allow the visual presentation of data to a user. More particularly, a display device provides for the presentation of GUIs, application software data, and multi-media presentations, for example.
The I/O devices and interfaces1412can also provide a communications interface to various external devices. The computer system1402may comprise one or more multi-media devices1408, such as speakers, video cards, graphics accelerators, and microphones, for example. The computer system1402may run on a variety of computing devices, such as a server, a Windows server, a Structured Query Language server, a Unix server, a personal computer, a laptop computer, and so forth. In other embodiments, the computer system1402may run on a cluster computer system, a mainframe computer system, and/or other computing system suitable for controlling and/or communicating with large databases, performing high volume transaction processing, and generating reports from large databases. The computing system1402is generally controlled and coordinated by operating system software, such as z/OS, Windows, Linux, UNIX, BSD, SunOS, Solaris, MacOS, or other compatible operating systems, including proprietary operating systems. Operating systems control and schedule computer processes for execution, perform memory management, provide file system, networking, and I/O services, and provide a user interface, such as a graphical user interface (GUI), among other things. The computer system1402illustrated inFIG.14is coupled to a network1418, such as a LAN, WAN, or the Internet, via a communication link1416(wired, wireless, or a combination thereof). Network1418communicates with various computing devices and/or other electronic devices. Network1418is in communication with one or more computing systems1420and one or more data sources1422. The Medical Analysis, Risk Assessment, and Tracking Module1414may access or may be accessed by computing systems1420and/or data sources1422through a web-enabled user access point. Connections may be a direct physical connection, a virtual connection, or another connection type. The web-enabled user access point may comprise a browser module that uses text, graphics, audio, video, and other media to present data and to allow interaction with data via the network1418. Access to the Medical Analysis, Risk Assessment, and Tracking Module1414of the computer system1402by computing systems1420and/or by data sources1422may be through a web-enabled user access point such as the computing systems'1420or data source's1422personal computer, cellular phone, smartphone, laptop, tablet computer, e-reader device, audio player, or other device capable of connecting to the network1418. Such a device may have a browser module that is implemented as a module that uses text, graphics, audio, video, and other media to present data and to allow interaction with data via the network1418. The output module may be implemented as a combination of an all-points addressable display such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, or other types and/or combinations of displays. The output module may be implemented to communicate with input devices1412and also includes software with the appropriate interfaces that allow a user to access data through the use of stylized screen elements, such as menus, windows, dialogue boxes, tool bars, and controls (for example, radio buttons, check boxes, sliding scales, and so forth). Furthermore, the output module may communicate with a set of input and output devices to receive signals from the user.
The input device(s) may comprise a keyboard, roller ball, pen and stylus, mouse, trackball, voice recognition system, or pre-designated switches or buttons. The output device(s) may comprise a speaker, a display screen, a printer, or a voice synthesizer. In addition, a touch screen may act as a hybrid input/output device. In another embodiment, a user may interact with the system more directly, such as through a system terminal connected to the score generator, without communications over the Internet, a WAN, a LAN, or a similar network. In some embodiments, the system1402may comprise a physical or logical connection established between a remote microprocessor and a mainframe host computer for the express purpose of uploading, downloading, or viewing interactive data and databases online in real time. The remote microprocessor may be operated by an entity operating the computer system1402, including the client server systems or the main server system, and/or may be operated by one or more of the data sources1422and/or one or more of the computing systems1420. In some embodiments, terminal emulation software may be used on the microprocessor for participating in the micro-mainframe link. In some embodiments, computing systems1420that are internal to an entity operating the computer system1402may access the Medical Analysis, Risk Assessment, and Tracking Module1414internally as an application or process run by the CPU1406. The computing system1402may include one or more internal and/or external data sources (for example, data sources1422). In some embodiments, one or more of the data repositories and the data sources described above may be implemented using a relational database, such as DB2, Sybase, Oracle, CodeBase, and Microsoft® SQL Server, as well as other types of databases such as a flat-file database, an entity relationship database, an object-oriented database, and/or a record-based database. The computer system1402may also access one or more databases1422. The databases1422may be stored in a database or data repository. The computer system1402may access the one or more databases1422through a network1418or may directly access the database or data repository through I/O devices and interfaces1412. The data repository storing the one or more databases1422may reside within the computer system1402. In some embodiments, one or more features of the systems, methods, and devices described herein can utilize a URL and/or cookies, for example for storing and/or transmitting data or user information. A Uniform Resource Locator (URL) can include a web address and/or a reference to a web resource that is stored on a database and/or a server. The URL can specify the location of the resource on a computer and/or a computer network. The URL can include a mechanism to retrieve the network resource. The source of the network resource can receive a URL, identify the location of the web resource, and transmit the web resource back to the requestor. A URL can be converted to an IP address, and a Domain Name System (DNS) can look up the URL and its corresponding IP address. URLs can be references to web pages, file transfers, emails, database accesses, and other applications. The URLs can include a sequence of characters that identify a path, a domain name, a file extension, a host name, a query, a fragment, a scheme, a protocol identifier, a port number, a username, a password, a flag, an object, a resource name, and/or the like.
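As a brief illustration of the URL components listed above, the Python standard library can decompose a URL into scheme, host, port, path, query, and fragment. The URL in the example is hypothetical.

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical URL illustrating the components mentioned above.
url = "https://user:secret@reports.example.com:8443/patients/123/report?format=pdf#plaque-summary"

parts = urlparse(url)
print(parts.scheme)                 # https
print(parts.hostname, parts.port)   # reports.example.com 8443
print(parts.path)                   # /patients/123/report
print(parse_qs(parts.query))        # {'format': ['pdf']}
print(parts.fragment)               # plaque-summary
```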
The systems disclosed herein can generate, receive, transmit, apply, parse, serialize, render, and/or perform an action on a URL. A cookie, also referred to as an HTTP cookie, a web cookie, an internet cookie, and a browser cookie, can include data sent from a website and/or stored on a user's computer. This data can be stored by a user's web browser while the user is browsing. The cookies can include useful information for websites to remember prior browsing information, such as a shopping cart on an online store, clicking of buttons, login information, and/or records of web pages or network resources visited in the past. Cookies can also include information that the user enters, such as names, addresses, passwords, credit card information, etc. Cookies can also perform computer functions. For example, authentication cookies can be used by applications (for example, a web browser) to identify whether the user is already logged in (for example, to a web site). The cookie data can be encrypted to provide security for the consumer. Tracking cookies can be used to compile historical browsing histories of individuals. Systems disclosed herein can generate and use cookies to access data of an individual. Systems can also generate and use JSON web tokens to store authenticity information, HTTP authentication as authentication protocols, IP addresses to track session or identity information, URLs, and the like. Example Embodiments The following are non-limiting examples of certain embodiments of systems and methods of characterizing coronary plaque. Other embodiments may include one or more other features, or different features, that are discussed herein. Embodiment 1: A computer-implemented method of quantifying and classifying coronary plaque within a coronary region of a subject based on non-invasive medical image analysis, the method comprising: accessing, by a computer system, a medical image of a coronary region of a subject, wherein the medical image of the coronary region of the subject is obtained non-invasively; identifying, by the computer system utilizing a coronary artery identification algorithm, one or more coronary arteries within the medical image of the coronary region of the subject, wherein the coronary artery identification algorithm is configured to utilize raw medical images as input; identifying, by the computer system utilizing a plaque identification algorithm, one or more regions of plaque within the one or more coronary arteries identified from the medical image of the coronary region of the subject, wherein the plaque identification algorithm is configured to utilize raw medical images as input; determining, by the computer system, one or more vascular morphology parameters and a set of quantified plaque parameters of the one or more identified regions of plaque from the medical image of the coronary region of the subject, wherein the set of quantified plaque parameters comprises a ratio or function of volume to surface area, heterogeneity index, geometry, and radiodensity of the one or more regions of plaque within the medical image; generating, by the computer system, a weighted measure of the determined one or more vascular morphology parameters and the set of quantified plaque parameters of the one or more regions of plaque; and classifying, by the computer system, the one or more regions of plaque within the medical image as stable plaque or unstable plaque based at least in part on the generated weighted measure of the determined one or more vascular morphology 
parameters and the determined set of quantified plaque parameters, wherein the computer system comprises a computer processor and an electronic storage medium. Embodiment 2: The computer-implemented method of Embodiment 1, wherein one or more of the coronary artery identification algorithm or the plaque identification algorithm comprises an artificial intelligence or machine learning algorithm. Embodiment 3: The computer-implemented method of any one of Embodiment 1 or 2, wherein the plaque identification algorithm is configured to determine the one or more regions of plaque by determining a vessel wall and lumen wall of the one or more coronary arteries and determining a volume between the vessel wall and lumen wall as the one or more regions of plaque. Embodiment 4: The computer-implemented method of any one of Embodiments 1-3, wherein the one or more coronary arteries are identified by size. Embodiment 5: The computer-implemented method of any one of Embodiments 1-4, wherein a ratio of volume to surface area of the one or more regions of plaque below a predetermined threshold is indicative of stable plaque. Embodiment 6: The computer-implemented method of any one of Embodiments 1-5, wherein a radiodensity of the one or more regions of plaque above a predetermined threshold is indicative of stable plaque. Embodiment 7: The computer-implemented method of any one of Embodiments 1-6, wherein a heterogeneity of the one or more regions of plaque below a predetermined threshold is indicative of stable plaque. Embodiment 8: The computer-implemented method of any one of Embodiments 1-7, wherein the set of quantified plaque parameters further comprises diffusivity of the one or more regions of plaque. Embodiment 9: The computer-implemented method of any one of Embodiments 1-8, wherein the set of quantified plaque parameters further comprises a ratio of radiodensity to volume of the one or more regions of plaque. Embodiment 10: The computer-implemented method of any one of Embodiments 1-9, further comprising generating, by the computer system, a proposed treatment for the subject based at least in part on the classified one or more regions of plaque. Embodiment 11: The computer-implemented method of any one of Embodiments 1-10, further comprising generating, by the computer system, an assessment of the subject for one or more of atherosclerosis, stenosis, or ischemia based at least in part on the classified one or more regions of plaque. Embodiment 12: The computer-implemented method of any one of Embodiments 1-11, wherein the medical image comprises a Computed Tomography (CT) image. Embodiment 13: The computer-implemented method of Embodiment 12, wherein the medical image comprises a non-contrast CT image. Embodiment 14: The computer-implemented method of Embodiment 12, wherein the medical image comprises a contrast-enhanced CT image. Embodiment 15: The computer-implemented method of any one of Embodiments 1-11, wherein the medical image comprises a Magnetic Resonance (MR) image. Embodiment 16: The computer-implemented method of any one of Embodiments 1-11, wherein the medical image is obtained using an imaging technique comprising one or more of CT, x-ray, ultrasound, echocardiography, intravascular ultrasound (IVUS), MR imaging, optical coherence tomography (OCT), nuclear medicine imaging, positron-emission tomography (PET), single photon emission computed tomography (SPECT), or near-field infrared spectroscopy (NIRS). 
Embodiment 17: The computer-implemented method of any one of Embodiments 1-16, wherein the heterogeneity index of one or more regions of plaque is determined by generating a three-dimensional histogram of radiodensity values across a geometric shape of the one or more regions of plaque. Embodiment 18: The computer-implemented method of any one of Embodiments 1-17, wherein the heterogeneity index of one or more regions of plaque is determined by generating spatial mapping of radiodensity values across the one or more regions of plaque. Embodiment 19: The computer-implemented method of any one of Embodiments 1-18, wherein the set of quantified plaque parameters comprises a percentage composition of plaque comprising different radiodensity values. Embodiment 20: The computer-implemented method of any one of Embodiments 1-19, wherein the set of quantified plaque parameters comprises a percentage composition of plaque comprising different radiodensity values as a function of volume of plaque. Embodiment 21: The computer-implemented method of any one of Embodiments 1-20, wherein the geometry of the one or more regions of plaque comprises a round or oblong shape. Embodiment 22: The computer-implemented method of any one of Embodiments 1-21, wherein the one or more vascular morphology parameters comprises a classification of arterial remodeling. Embodiment 23: The computer-implemented method of Embodiment 22, wherein the classification of arterial remodeling comprises positive arterial remodeling, negative arterial remodeling, and intermediate arterial remodeling. Embodiment 24: The computer-implemented method of Embodiment 22, wherein the classification of arterial remodeling is determined based at least in part on a ratio of a largest vessel diameter at the one or more regions of plaque to a normal reference vessel diameter. Embodiment 25: The computer-implemented method of Embodiment 23, wherein the classification of arterial remodeling comprises positive arterial remodeling, negative arterial remodeling, and intermediate arterial remodeling, and wherein positive arterial remodeling is determined when the ratio of the largest vessel diameter at the one or more regions of plaque to the normal reference vessel diameter is more than 1.1, wherein negative arterial remodeling is determined when the ratio of the largest vessel diameter at the one or more regions of plaque to the normal reference vessel diameter is less than 0.95, and wherein intermediate arterial remodeling is determined when the ratio of the largest vessel diameter at the one or more regions of plaque to the normal reference vessel diameter is between 0.95 and 1.1. Embodiment 26: The computer-implemented method of any one of Embodiments 1-25, wherein the function of volume to surface area of the one or more regions of plaque comprises one or more of a thickness or diameter of the one or more regions of plaque. Embodiment 27: The computer-implemented method of any one of Embodiments 1-26, wherein the weighted measure is generated by weighting the one or more vascular morphology parameters and the set of quantified plaque parameters of the one or more regions of plaque equally. Embodiment 28: The computer-implemented method of any one of Embodiments 1-26, wherein the weighted measure is generated by weighting the one or more vascular morphology parameters and the set of quantified plaque parameters of the one or more regions of plaque differently. 
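The arterial remodeling classification of Embodiments 22-25 above can be expressed directly as a function of the diameter ratio, using the 1.1 and 0.95 thresholds recited in Embodiment 25. This is a sketch; the example diameters are illustrative.

```python
def classify_arterial_remodeling(largest_lesion_diameter_mm: float,
                                 normal_reference_diameter_mm: float) -> str:
    """Classify arterial remodeling from the ratio of the largest vessel diameter
    at the region of plaque to a normal reference vessel diameter, using the
    thresholds recited in Embodiment 25."""
    ratio = largest_lesion_diameter_mm / normal_reference_diameter_mm
    if ratio > 1.1:
        return "positive arterial remodeling"
    if ratio < 0.95:
        return "negative arterial remodeling"
    return "intermediate arterial remodeling"

print(classify_arterial_remodeling(3.8, 3.2))  # ratio ~1.19 -> positive
print(classify_arterial_remodeling(2.9, 3.2))  # ratio ~0.91 -> negative
print(classify_arterial_remodeling(3.3, 3.2))  # ratio ~1.03 -> intermediate
```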
Embodiment 29: The computer-implemented method of any one of Embodiments 1-26, wherein the weighted measure is generated by weighting the one or more vascular morphology parameters and the set of quantified plaque parameters of the one or more regions of plaque logarithmically, algebraically, or utilizing another mathematical transform. Embodiment 30: A computer-implemented method of quantifying and classifying vascular plaque based on non-invasive medical image analysis, the method comprising: accessing, by a computer system, a medical image of a subject, wherein the medical image of the subject is obtained non-invasively; identifying, by the computer system utilizing an artery identification algorithm, one or more arteries within the medical image of the subject, wherein the artery identification algorithm is configured to utilize raw medical images as input; identifying, by the computer system utilizing a plaque identification algorithm, one or more regions of plaque within the one or more arteries identified from the medical image of the subject, wherein the plaque identification algorithm is configured to utilize raw medical images as input; determining, by the computer system, one or more vascular morphology parameters and a set of quantified plaque parameters of the one or more identified regions of plaque from the medical image of the subject, wherein the set of quantified plaque parameters comprises a ratio or function of volume to surface area, heterogeneity index, geometry, and radiodensity of the one or more regions of plaque from the medical image; generating, by the computer system, a weighted measure of the determined one or more vascular morphology parameters and the set of quantified plaque parameters of the one or more regions of plaque; and classifying, by the computer system, the one or more regions of plaque within the medical image as stable plaque or unstable plaque based at least in part on the generated weighted measure of the determined one or more vascular morphology parameters and the determined set of quantified plaque parameters, wherein the computer system comprises a computer processor and an electronic storage medium. Embodiment 31: The computer-implemented method of Embodiment 30, wherein the identified one or more arteries comprise one or more of carotid arteries, aorta, renal artery, lower extremity artery, or cerebral artery.
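Embodiments 27-29 above recite that the same parameters may be combined with equal weights, unequal weights, or after a logarithmic, algebraic, or other mathematical transform. The sketch below contrasts these three options; the parameter values and the choice of log1p as the transform are illustrative assumptions.

```python
import math

# Three ways of generating the weighted measure (Embodiments 27-29):
# equal weighting, custom weighting, and a logarithmic transform.
params = {"volume_to_surface_area": 0.8, "heterogeneity_index": 0.6,
          "geometry_oblongness": 0.4, "radiodensity_hu": 95.0}

def equal_weighting(p):
    return sum(p.values()) / len(p)

def custom_weighting(p, weights):
    return sum(weights[k] * p[k] for k in p)

def log_weighting(p):
    # One possible "logarithmic, algebraic, or other mathematical transform".
    return sum(math.log1p(abs(v)) for v in p.values())

print(equal_weighting(params))
print(custom_weighting(params, {"volume_to_surface_area": 1.0,
                                "heterogeneity_index": 2.0,
                                "geometry_oblongness": 0.5,
                                "radiodensity_hu": -0.01}))
print(log_weighting(params))
```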
Embodiment 32: A computer-implemented method of determining non-calcified plaque from a non-contrast Computed Tomography (CT) image, the method comprising: accessing, by a computer system, a non-contrast CT image of a coronary region of a subject; identifying, by the computer system, epicardial fat on the non-contrast CT image; segmenting, by the computer system, arteries on the non-contrast CT image using the identified epicardial fat as outer boundaries of the arteries; identifying, by the computer system, a first set of pixels within the arteries on the non-contrast CT image comprising a Hounsfield unit radiodensity value below a predetermined radiodensity threshold; classifying, by the computer system, the first set of pixels as a first subset of non-calcified plaque; identifying, by the computer system, a second set of pixels within the arteries on the non-contrast CT image comprising a Hounsfield unit radiodensity value within a predetermined radiodensity range; determining, by the computer system, a heterogeneity index of the second set of pixels and identifying a subset of the second set of pixels comprising a heterogeneity index above a heterogeneity index threshold; classifying, by the computer system, the subset of the second set of pixels as a second subset of non-calcified plaque; and determining, by the computer system, non-calcified plaque from the non-contrast CT image by combining the first subset of non-calcified plaque and the second subset of non-calcified plaque, wherein the computer system comprises a computer processor and an electronic storage medium. Embodiment 33: The computer-implemented method of Embodiment 32, wherein the predetermined radiodensity threshold comprises a Hounsfield unit radiodensity value of 30. Embodiment 34: The computer-implemented method of any one of Embodiments 32-33, wherein the predetermined radiodensity range comprises Hounsfield unit radiodensity values between 30 and 100. Embodiment 35: The computer-implemented method of any one of Embodiments 32-34, wherein identifying epicardial fat on the non-contrast CT image further comprises: determining a Hounsfield unit radiodensity value of each pixel within the non-contrast CT image; and classifying as epicardial fat pixels within the non-contrast CT image with a Hounsfield unit radiodensity value within a predetermined epicardial fat radiodensity range, wherein the predetermined epicardial fat radiodensity range comprises a Hounsfield unit radiodensity value of −100. Embodiment 36: The computer-implemented method of any one of Embodiments 32-35, wherein the heterogeneity index of the second set of pixels is determined by generating spatial mapping of radiodensity values of the second set of pixels. Embodiment 37: The computer-implemented method of any one of Embodiments 32-36, wherein the heterogeneity index of the second set of pixels is determined by generating a three-dimensional histogram of radiodensity values across a geometric region within the second set of pixels. Embodiment 38: The computer-implemented method of any one of Embodiments 32-37, further comprising classifying, by the computer system, a subset of the second set of pixels comprising a heterogeneity index below the heterogeneity index threshold as blood. 
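The following sketch illustrates the non-calcified plaque determination of Embodiments 32-38, using the recited Hounsfield thresholds (below 30 for the first subset; between 30 and 100 for the candidate second subset). The heterogeneity index is approximated here as the standard deviation of radiodensity in a small neighborhood, which is only one possible realization; the artery mask and image are synthetic.

```python
import numpy as np

def local_std(image, radius=1):
    """Standard deviation of radiodensity in a (2*radius+1)^2 neighborhood,
    used as a simple stand-in for the heterogeneity index."""
    padded = np.pad(image.astype(float), radius, mode="edge")
    windows = [padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
               for dy in range(2 * radius + 1) for dx in range(2 * radius + 1)]
    return np.stack(windows).std(axis=0)

def non_calcified_plaque_mask(ct_hu, artery_mask, heterogeneity_threshold=15.0):
    low_hu = (ct_hu < 30) & artery_mask                    # first subset (Embodiment 33)
    mid_hu = (ct_hu >= 30) & (ct_hu <= 100) & artery_mask  # candidates (Embodiment 34)
    heterogeneous = local_std(ct_hu) > heterogeneity_threshold
    second_subset = mid_hu & heterogeneous                 # second subset
    return low_hu | second_subset                          # combined (Embodiment 32)

rng = np.random.default_rng(0)
ct_hu = rng.normal(60, 30, size=(64, 64))   # toy non-contrast CT values
artery_mask = np.zeros((64, 64), dtype=bool)
artery_mask[20:40, 20:40] = True            # toy segmented artery region
mask = non_calcified_plaque_mask(ct_hu, artery_mask)
print(int(mask.sum()), "pixels classified as non-calcified plaque")
```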
Embodiment 39: The computer-implemented method of any one of Embodiments 32-38, further comprising generating a quantized color map of the coronary region of the subject by assigning a first color to the identified epicardial fat, assigning a second color to the segmented arteries, and assigning a third color to the determined non-calcified plaque. Embodiment 40: The computer-implemented method of any one of Embodiments 32-39, further comprising: identifying, by the computer system, a third set of pixels within the arteries on the non-contrast CT image comprising a Hounsfield unit radiodensity value above a predetermined calcified radiodensity threshold; and classifying, by the computer system, the third set of pixels as calcified plaque. Embodiment 41: The computer-implemented method of any one of Embodiments 32-40, further comprising determining, by the computer system, a proposed treatment based at least in part on the determined non-calcified plaque. Embodiment 42: A computer-implemented method of determining low-attenuated plaque from a medical image of a subject, the method comprising: accessing, by a computer system, a medical image of a subject; identifying, by the computer system, epicardial fat on the medical image of the subject by: determining a radiodensity value of each pixel within the medical image of the subject; and classifying as epicardial fat pixels within the medical image of the subject with a radiodensity value within a predetermined epicardial fat radiodensity range; segmenting, by the computer system, arteries on the medical image of the subject using the identified epicardial fat as outer boundaries of the arteries; identifying, by the computer system, a first set of pixels within the arteries on the medical image of the subject comprising a radiodensity value below a predetermined radiodensity threshold; classifying, by the computer system, the first set of pixels as a first subset of low-attenuated plaque; identifying, by the computer system, a second set of pixels within the arteries on the medical image of the subject comprising a radiodensity value within a predetermined radiodensity range; determining, by the computer system, a heterogeneity index of the second set of pixels and identifying a subset of the second set of pixels comprising a heterogeneity index above a heterogeneity index threshold; classifying, by the computer system, the subset of the second set of pixels as a second subset of low-attenuated plaque; and determining, by the computer system, low-attenuated plaque from the medical image of the subject by combining the first subset of low-attenuated plaque and the second subset of low-attenuated plaque, wherein the computer system comprises a computer processor and an electronic storage medium. Embodiment 43: The computer-implemented method of Embodiment 42, wherein the medical image comprises a Computed Tomography (CT) image. Embodiment 44: The computer-implemented method of Embodiment 42, wherein the medical image comprises a Magnetic Resonance (MR) image. Embodiment 45: The computer-implemented method of Embodiment 42, wherein the medical image comprises an ultrasound image. Embodiment 46: The computer-implemented method of any one of Embodiments 42-45, wherein the medical image comprises an image of a coronary region of the subject.
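The quantized color map of Embodiment 39 can be realized by painting each classified mask with a distinct color. The sketch below makes arbitrary RGB choices and uses synthetic masks purely for illustration.

```python
import numpy as np

# Sketch of Embodiment 39: assign a first color to epicardial fat, a second
# color to the segmented arteries, and a third color to non-calcified plaque.
COLORS = {
    "epicardial_fat": (255, 255, 0),       # first color (arbitrary choice)
    "artery": (0, 0, 255),                 # second color
    "non_calcified_plaque": (255, 0, 0),   # third color
}

def quantized_color_map(shape, fat_mask, artery_mask, plaque_mask):
    rgb = np.zeros(shape + (3,), dtype=np.uint8)
    rgb[fat_mask] = COLORS["epicardial_fat"]
    rgb[artery_mask] = COLORS["artery"]
    rgb[plaque_mask] = COLORS["non_calcified_plaque"]   # plaque drawn last, on top
    return rgb

shape = (64, 64)
fat = np.zeros(shape, dtype=bool); fat[:10] = True
arteries = np.zeros(shape, dtype=bool); arteries[20:40, 20:40] = True
plaque = np.zeros(shape, dtype=bool); plaque[25:30, 25:30] = True
color_map = quantized_color_map(shape, fat, arteries, plaque)
print(color_map.shape, np.unique(color_map.reshape(-1, 3), axis=0))
```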
Embodiment 47: The computer-implemented method of any one of Embodiments 42-46, further comprising determining, by the computer system, a proposed treatment for a disease based at least in part on the determined low-attenuated plaque. Embodiment 48: The computer-implemented method of Embodiment 47, wherein the disease comprises one or more of arterial disease, renal artery disease, abdominal atherosclerosis, or carotid atherosclerosis. Embodiment 49: The computer-implemented method of any one of Embodiments 42-48, wherein the heterogeneity index of the second set of pixels is determined by generating spatial mapping of radiodensity values of the second set of pixels. Embodiment 50: A computer-implemented method of determining non-calcified plaque from a Dual-Energy Computed Tomography (DECT) image or spectral Computed Tomography (CT) image, the method comprising: accessing, by a computer system, a DECT or spectral CT image of a coronary region of a subject; identifying, by the computer system, epicardial fat on the DECT image or spectral CT; segmenting, by the computer system, arteries on the DECT image or spectral CT; identifying, by the computer system, a first set of pixels within the arteries on the DECT or spectral CT image comprising a Hounsfield unit radiodensity value below a predetermined radiodensity threshold; classifying, by the computer system, the first set of pixels as a first subset of non-calcified plaque; identifying, by the computer system, a second set of pixels within the arteries on the DECT or spectral CT image comprising a Hounsfield unit radiodensity value within a predetermined radiodensity range; classifying, by the computer system, a subset of the second set of pixels as a second subset of non-calcified plaque; and determining, by the computer system, non-calcified plaque from the DECT image or spectral CT by combining the first subset of non-calcified plaque and the second subset of non-calcified plaque, wherein the computer system comprises a computer processor and an electronic storage medium. Embodiment 51: The computer-implemented method of Embodiment 50, wherein the subset of the second set of pixels is identified by determining, by the computer system, a heterogeneity index of the second set of pixels and identifying the subset of the second set of pixels comprising a heterogeneity index above a heterogeneity index threshold. 
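Embodiments 36-37, 49, and 51 describe determining a heterogeneity index by spatial mapping or a three-dimensional histogram of radiodensity values. One possible realization, sketched below with synthetic data, is to bin the region of interest into a coarse spatial grid, compute the mean radiodensity per bin, and report the dispersion of those bin means; the binning scheme and dispersion statistic are assumptions.

```python
import numpy as np

def heterogeneity_index(ct_hu, region_mask, bins=4):
    """Dispersion of per-bin mean radiodensity over a bins x bins spatial grid,
    as one possible heterogeneity index for the region of interest."""
    ys, xs = np.nonzero(region_mask)
    values = ct_hu[ys, xs]
    # Assign every pixel of the region to a coarse spatial bin.
    y_bin = np.minimum((ys - ys.min()) * bins // (np.ptp(ys) + 1), bins - 1)
    x_bin = np.minimum((xs - xs.min()) * bins // (np.ptp(xs) + 1), bins - 1)
    means = np.full((bins, bins), np.nan)
    for by in range(bins):
        for bx in range(bins):
            cell = values[(y_bin == by) & (x_bin == bx)]
            if cell.size:
                means[by, bx] = cell.mean()
    return float(np.nanstd(means))

rng = np.random.default_rng(1)
ct_hu = rng.normal(50, 5, size=(64, 64))
ct_hu[30:40, 30:40] += 60                 # a patch of much denser material
region = np.zeros((64, 64), dtype=bool)
region[20:44, 20:44] = True
print(round(heterogeneity_index(ct_hu, region), 1))
```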
Embodiment 52: A computer-implemented method of assessing risk of a cardiovascular event for a subject based on non-invasive medical image analysis, the method comprising: accessing, by a computer system, a medical image of a coronary region of a subject, wherein the medical image of the coronary region of the subject is obtained non-invasively; identifying, by the computer system utilizing a coronary artery identification algorithm, one or more coronary arteries within the medical image of the coronary region of the subject, wherein the coronary artery identification algorithm is configured to utilize raw medical images as input; identifying, by the computer system utilizing a plaque identification algorithm, one or more regions of plaque within the one or more coronary arteries identified from the medical image of the coronary region of the subject, wherein the plaque identification algorithm is configured to utilize raw medical images as input; determining, by the computer system, one or more vascular morphology parameters and a set of quantified plaque parameters of the one or more identified regions of plaque from the medical image of the coronary region of the subject, wherein the set of quantified plaque parameters comprises a ratio or function of volume to surface area, heterogeneity index, geometry, and radiodensity of the one or more regions of plaque within the medical image; generating, by the computer system, a weighted measure of the determined one or more vascular morphology parameters and the set of quantified plaque parameters of the one or more regions of plaque; classifying, by the computer system, the one or more regions of plaque within the medical image as stable plaque or unstable plaque based at least in part on the generated weighted measure of the determined one or more vascular morphology parameters and the determined set of quantified plaque parameters; generating, by the computer system, a risk of cardiovascular event for the subject based at least in part on the one or more regions of plaque classified as stable plaque or unstable plaque; accessing, by the computer system, a coronary values database comprising one or more known datasets of coronary values derived from one or more other subjects and comparing the one or more regions of plaque classified as stable plaque or unstable plaque to the one or more known datasets of coronary values; updating, by the computer system, the generated risk of cardiovascular event for the subject based at least in part on the comparison of the one or more regions of plaque classified as stable plaque or unstable plaque to the one or more known datasets of coronary values; and generating, by the computer system, a proposed treatment for the subject based at least in part on the comparison of the one or more regions of plaque classified as stable plaque or unstable plaque to the one or more known datasets of coronary values, wherein the computer system comprises a computer processor and an electronic storage medium. Embodiment 53: The computer-implemented method of Embodiment 52, wherein the cardiovascular event comprises one or more of a Major Adverse Cardiovascular Event (MACE), rapid plaque progression, or non-response to medication. Embodiment 54: The computer-implemented method of any one of Embodiments 52-53, wherein the one or more known datasets of coronary values comprises one or more parameters of stable plaque and unstable plaque derived from medical images of healthy subjects. 
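Embodiment 52 recites comparing the classified plaque to one or more known datasets of coronary values and updating the generated risk based on that comparison. The sketch below uses a percentile comparison against a small illustrative dataset; the dataset, percentile cut-offs, and risk labels are assumptions.

```python
import numpy as np

# Illustrative known dataset of unstable plaque volumes from other subjects.
known_unstable_plaque_volumes_mm3 = np.array(
    [0.0, 3.1, 5.4, 8.0, 12.5, 15.2, 20.9, 27.3, 33.8, 41.0])

def update_risk(initial_risk: str, subject_unstable_volume_mm3: float) -> str:
    """Update the generated risk based on where the subject falls within the
    known dataset (Embodiment 52's comparison and updating steps)."""
    percentile = (known_unstable_plaque_volumes_mm3
                  < subject_unstable_volume_mm3).mean() * 100
    if percentile >= 75:
        return "high"
    if percentile >= 25:
        return initial_risk          # comparison does not change the assessment
    return "low"

print(update_risk("intermediate", 35.0))   # above the 75th percentile -> high
print(update_risk("intermediate", 1.0))    # below the 25th percentile -> low
```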
Embodiment 55: The computer-implemented method of any one of Embodiments 52-54, wherein the one or more other subjects are healthy. Embodiment 56: The computer-implemented method of any one of Embodiments 52-55, wherein the one or more other subjects have a heightened risk of a cardiovascular event. Embodiment 57: The computer-implemented method of any one of Embodiments 52-56, further comprising: identifying, by the computer system, one or more additional cardiovascular structures within the medical image, wherein the one or more additional cardiovascular structures comprise one or more of the left ventricle, right ventricle, left atrium, right atrium, aortic valve, mitral valve, tricuspid valve, pulmonic valve, aorta, pulmonary artery, inferior and superior vena cava, epicardial fat, or pericardium; determining, by the computer system, one or more parameters associated with the identified one or more additional cardiovascular structures; classifying, by the computer system, the one or more additional cardiovascular structures based at least in part on the determined one or more parameters; accessing, by the computer system, a cardiovascular structures values database comprising one or more known datasets of cardiovascular structures parameters derived from medical images of one or more other subjects and comparing the classified one or more additional cardiovascular structures to the one or more known datasets of cardiovascular structures parameters; and updating, by the computer system, the generated risk of cardiovascular event for the subject based at least in part on the comparison of the classified one or more additional cardiovascular structures to the one or more known datasets of cardiovascular structures parameters. Embodiment 58: The computer-implemented method of Embodiment 57, wherein the one or more additional cardiovascular structures are classified as normal or abnormal. Embodiment 59: The computer-implemented method of Embodiment 57, wherein the one or more additional cardiovascular structures are classified as increased or decreased. Embodiment 60: The computer-implemented method of Embodiment 57, wherein the one or more additional cardiovascular structures are classified as static or dynamic over time. Embodiment 61: The computer-implemented method of any one of Embodiments 57-60, further comprising generating, by the computer system, a quantized color map for the additional cardiovascular structures. Embodiment 62: The computer-implemented method of any one of Embodiments 57-61, further comprising updating, by the computer system, the proposed treatment for the subject based at least in part on the comparison of the classified one or more additional cardiovascular structures to the one or more known datasets of cardiovascular structures parameters.
Embodiment 63: The computer-implemented method of any one of Embodiments 57-62, further comprising: identifying, by the computer system, one or more non-cardiovascular structures within the medical image, wherein the one or more non-cardiovascular structures comprise one or more of the lungs, bones, or liver; determining, by the computer system, one or more parameters associated with the identified one or more non-cardiovascular structures; classifying, by the computer system, the one or more non-cardiovascular structures based at least in part on the determined one or more parameters; accessing, by the computer system, a non-cardiovascular structures values database comprising one or more known datasets of non-cardiovascular structures parameters derived from medical images of one or more other subjects and comparing the classified one or more non-cardiovascular structures to the one or more known datasets of non-cardiovascular structures parameters; and updating, by the computer system, the generated risk of cardiovascular event for the subject based at least in part on the comparison of the classified one or more non-cardiovascular structures to the one or more known datasets of non-cardiovascular structures parameters. Embodiment 64: The computer-implemented method of Embodiment 63, wherein the one or more non-cardiovascular structures are classified as normal or abnormal. Embodiment 65: The computer-implemented method of Embodiment 63, wherein the one or more non-cardiovascular structures are classified as increased or decreased. Embodiment 66: The computer-implemented method of Embodiment 63, wherein the one or more non-cardiovascular structures are classified as static or dynamic over time. Embodiment 67: The computer-implemented method of any one of Embodiments 63-66, further comprising generating, by the computer system, a quantized color map for the non-cardiovascular structures. Embodiment 68: The computer-implemented method of any one of Embodiments 63-67, further comprising updating, by the computer system, the proposed treatment for the subject based at least in part on the comparison of the classified one or more non-cardiovascular structures to the one or more known datasets of non-cardiovascular structures parameters. Embodiment 69: The computer-implemented method of any one of Embodiments 63-68, wherein the one or more parameters associated with the identified one or more non-cardiovascular structures comprises one or more of ratio of volume to surface area, heterogeneity, radiodensity, or geometry of the identified one or more non-cardiovascular structures. Embodiment 70: The computer-implemented method of any one of Embodiments 52-69, wherein the medical image comprises a Computed Tomography (CT) image. Embodiment 71: The computer-implemented method of any one of Embodiments 52-69, wherein the medical image comprises a Magnetic Resonance (MR) image. 
Embodiment 72: A computer-implemented method of quantifying and classifying coronary atherosclerosis within a coronary region of a subject based on non-invasive medical image analysis, the method comprising: accessing, by a computer system, a medical image of a coronary region of a subject, wherein the medical image of the coronary region of the subject is obtained non-invasively; identifying, by the computer system utilizing a coronary artery identification algorithm, one or more coronary arteries within the medical image of the coronary region of the subject, wherein the coronary artery identification algorithm is configured to utilize raw medical images as input; identifying, by the computer system utilizing a plaque identification algorithm, one or more regions of plaque within the one or more coronary arteries identified from the medical image of the coronary region of the subject, wherein the plaque identification algorithm is configured to utilize raw medical images as input; determining, by the computer system, one or more vascular morphology parameters and a set of quantified plaque parameters of the one or more identified regions of plaque from the medical image of the coronary region of the subject, wherein the set of quantified plaque parameters comprises a ratio or function of volume to surface area, heterogeneity index, geometry, and radiodensity of the one or more regions of plaque within the medical image; generating, by the computer system, a weighted measure of the determined one or more vascular morphology parameters and the set of quantified plaque parameters of the one or more regions of plaque; quantifying, by the computer system, coronary atherosclerosis of the subject based at least in part on the generated weighted measure of the determined one or more vascular morphology parameters and the determined set of quantified plaque parameters; and classifying, by the computer system, coronary atherosclerosis of the subject as one or more of high risk, medium risk, or low risk based at least in part on the quantified coronary atherosclerosis of the subject, wherein the computer system comprises a computer processor and an electronic storage medium. Embodiment 73: The computer-implemented method of Embodiment 72, wherein one or more of the coronary artery identification algorithm or the plaque identification algorithm comprises an artificial intelligence or machine learning algorithm. Embodiment 74: The computer-implemented method of any one of Embodiments 72 or 73, further comprising determining a numerical calculation of coronary stenosis of the subject based at least in part on the one or more vascular morphology parameters and/or set of quantified plaque parameters determined from the medical image of the coronary region of the subject. Embodiment 75: The computer-implemented method of any one of Embodiment 72-74, further comprising assessing a risk of ischemia for the subject based at least in part on the one or more vascular morphology parameters and/or set of quantified plaque parameters determined from the medical image of the coronary region of the subject. Embodiment 76: The computer-implemented method of any one of Embodiments 72-75, wherein the plaque identification algorithm is configured to determine the one or more regions of plaque by determining a vessel wall and lumen wall of the one or more coronary arteries and determining a volume between the vessel wall and lumen wall as the one or more regions of plaque.
Embodiment 77: The computer-implemented method of any one of Embodiments 72-76, wherein the one or more coronary arteries are identified by size. Embodiment 78: The computer-implemented method of any one of Embodiments 72-77, wherein a ratio of volume to surface area of the one or more regions of plaque below a predetermined threshold is indicative of low risk. Embodiment 79: The computer-implemented method of any one of Embodiments 72-78, wherein a radiodensity of the one or more regions of plaque above a predetermined threshold is indicative of low risk. Embodiment 80: The computer-implemented method of any one of Embodiments 72-79, wherein a heterogeneity of the one or more regions of plaque below a predetermined threshold is indicative of low risk. Embodiment 81: The computer-implemented method of any one of Embodiments 72-80, wherein the set of quantified plaque parameters further comprises diffusivity of the one or more regions of plaque. Embodiment 82: The computer-implemented method of any one of Embodiments 72-81, wherein the set of quantified plaque parameters further comprises a ratio of radiodensity to volume of the one or more regions of plaque. Embodiment 83: The computer-implemented method of any one of Embodiments 72-82, further comprising generating, by the computer system, a proposed treatment for the subject based at least in part on the classified atherosclerosis. Embodiment 84: The computer-implemented method of any one of Embodiments 72-83, wherein the coronary atherosclerosis of the subject is classified by the computer system using a coronary atherosclerosis classification algorithm, wherein the coronary atherosclerosis classification algorithm is configured to utilize a combination of the ratio of volume to surface area, volume, heterogeneity index, and radiodensity of the one or more regions of plaque as input. Embodiment 85: The computer-implemented method of any one of Embodiments 72-84, wherein the medical image comprises a Computed Tomography (CT) image. Embodiment 86: The computer-implemented method of Embodiment 85, wherein the medical image comprises a non-contrast CT image. Embodiment 87: The computer-implemented method of Embodiment 85, wherein the medical image comprises a contrast CT image. Embodiment 88: The computer-implemented method of any one of Embodiments 72-84, wherein the medical image is obtained using an imaging technique comprising one or more of CT, x-ray, ultrasound, echocardiography, intravascular ultrasound (IVUS), MR imaging, optical coherence tomography (OCT), nuclear medicine imaging, positron-emission tomography (PET), single photon emission computed tomography (SPECT), or near-field infrared spectroscopy (NIRS). Embodiment 89: The computer-implemented method of any one of Embodiments 72-88, wherein the heterogeneity index of one or more regions of plaque is determined by generating a three-dimensional histogram of radiodensity values across a geometric shape of the one or more regions of plaque. Embodiment 90: The computer-implemented method of any one of Embodiments 72-89, wherein the heterogeneity index of one or more regions of plaque is determined by generating spatial mapping of radiodensity values across the one or more regions of plaque. Embodiment 91: The computer-implemented method of any one of Embodiments 72-90, wherein the set of quantified plaque parameters comprises a percentage composition of plaque comprising different radiodensity values.
Embodiment 92: The computer-implemented method of any one of Embodiments 72-91, wherein the set of quantified plaque parameters comprises a percentage composition of plaque comprising different radiodensity values as a function of volume of plaque. Embodiment 93: The computer-implemented method of any one of Embodiments 72-92, wherein the weighted measure of the determined one or more vascular morphology parameters and the set of quantified plaque parameters of the one or more regions of plaque is generated based at least in part by comparing the determined set of quantified plaque parameters to one or more predetermined sets of quantified plaque parameters. Embodiment 94: The computer-implemented method of Embodiment 93, wherein the one or more predetermined sets of quantified plaque parameters are derived from one or more medical images of other subjects. Embodiment 95: The computer-implemented method of Embodiment 93, wherein the one or more predetermined sets of quantified plaque parameters are derived from one or more medical images of the subject. Embodiment 96: The computer-implemented method of any one of Embodiments 72-95, wherein the geometry of the one or more regions of plaque comprises a round or oblong shape. Embodiment 97: The computer-implemented method of any one of Embodiments 72-96, wherein the one or more vascular morphology parameters comprises a classification of arterial remodeling. Embodiment 98: The computer-implemented method of Embodiment 97, wherein the classification of arterial remodeling comprises positive arterial remodeling, negative arterial remodeling, and intermediate arterial remodeling. Embodiment 99: The computer-implemented method of Embodiment 97, wherein the classification of arterial remodeling is determined based at least in part on a ratio of a largest vessel diameter at the one or more regions of plaque to a normal reference vessel diameter. Embodiment 100: The computer-implemented method of Embodiment 99, wherein the classification of arterial remodeling comprises positive arterial remodeling, negative arterial remodeling, and intermediate arterial remodeling, and wherein positive arterial remodeling is determined when the ratio of the largest vessel diameter at the one or more regions of plaque to the normal reference vessel diameter is more than 1.1, wherein negative arterial remodeling is determined when the ratio of the largest vessel diameter at the one or more regions of plaque to the normal reference vessel diameter is less than 0.95, and wherein intermediate arterial remodeling is determined when the ratio of the largest vessel diameter at the one or more regions of plaque to the normal reference vessel diameter is between 0.95 and 1.1. Embodiment 101: The computer-implemented method of any one of Embodiments 72-100, wherein the function of volume to surface area of the one or more regions of plaque comprises one or more of a thickness or diameter of the one or more regions of plaque. Embodiment 102: The computer-implemented method of any one of Embodiments 72-101, wherein the weighted measure is generated by weighting the one or more vascular morphology parameters and the set of quantified plaque parameters of the one or more regions of plaque equally. Embodiment 103: The computer-implemented method of any one of Embodiments 72-101, wherein the weighted measure is generated by weighting the one or more vascular morphology parameters and the set of quantified plaque parameters of the one or more regions of plaque differently. 
Embodiment 104: The computer-implemented method of any one of Embodiments 72-101, wherein the weighted measure is generated by weighting the one or more vascular morphology parameters and the set of quantified plaque parameters of the one or more regions of plaque logarithmically, algebraically, or utilizing another mathematical transform. Embodiment 105: A computer-implemented method of quantifying a state of coronary artery disease based on quantification of plaque, ischemia, and fat inflammation based on non-invasive medical image analysis, the method comprising: accessing, by a computer system, a medical image of a coronary region of a subject, wherein the medical image of the coronary region of the subject is obtained non-invasively; identifying, by the computer system utilizing a coronary artery identification algorithm, one or more coronary arteries within the medical image of the coronary region of the subject, wherein the coronary artery identification algorithm is configured to utilize raw medical images as input; identifying, by the computer system utilizing a plaque identification algorithm, one or more regions of plaque within the one or more coronary arteries identified from the medical image of the coronary region of the subject, wherein the plaque identification algorithm is configured to utilize raw medical images as input; identifying, by the computer system utilizing a fat identification algorithm, one or more regions of fat within the medical image of the coronary region of the subject, wherein the fat identification algorithm is configured to utilize raw medical images as input; determining, by the computer system, one or more vascular morphology parameters and a set of quantified plaque parameters of the one or more identified regions of plaque from the medical image of the coronary region of the subject, wherein the set of quantified plaque parameters comprises a ratio or function of volume to surface area, heterogeneity index, geometry, and radiodensity of the one or more regions of plaque within the medical image; quantifying, by the computer system, coronary stenosis based at least in part on the set of quantified plaque parameters determined from the medical image of the coronary region of the subject; and determining, by the computer system, a presence or risk of ischemia based at least in part on the set of quantified plaque parameters determined from the medical image of the coronary region of the subject; determining, by the computer system, a set of quantified fat parameters of the one or more identified regions of fat within the medical image of the coronary region of the subject, wherein the set of quantified fat parameters comprises volume, geometry, and radiodensity of the one or more regions of fat within the medical image; generating, by the computer system, a weighted measure of the determined one or more vascular morphology parameters, the set of quantified plaque parameters of the one or more regions of plaque, the quantified coronary stenosis, the determined presence or risk of ischemia, and the determined set of quantified fat parameters; and generating, by the computer system, a risk assessment of coronary disease of the subject based at least in part on the generated weighted measure of the determined one or more vascular morphology parameters, the set of quantified plaque parameters of the one or more regions of plaque, the quantified coronary stenosis, the determined presence or risk of ischemia, and the determined set of quantified fat 
parameters, wherein the computer system comprises a computer processor and an electronic storage medium. Embodiment 106: The computer-implemented method of Embodiment 105, wherein one or more of the coronary artery identification algorithm, plaque identification algorithm, or fat identification algorithm comprises an artificial intelligence or machine learning algorithm. Embodiment 107: The computer-implemented method of any one of Embodiment 105 or 106, further comprising automatically generating, by the computer system, a Coronary Artery Disease Reporting & Data System (CAD-RADS) classification score of the subject based at least in part on the quantified coronary stenosis. Embodiment 108: The computer-implemented method of any one of Embodiments 105-107, further comprising automatically generating, by the computer system, a CAD-RADS modifier of the subject based at least in part on one or more of the determined one or more vascular morphology parameters, the set of quantified plaque parameters of the one or more regions of plaque, the quantified coronary stenosis, the determined presence or risk of ischemia, and the determined set of quantified fat parameters, wherein the CAD-RADS modifier comprises one or more of nondiagnostic (N), stent (S), graft (G), or vulnerability (V). Embodiment 109: The computer-implemented method of any one of Embodiments 105-108, wherein the coronary stenosis is quantified on a vessel-by-vessel basis. Embodiment 110: The computer-implemented method of any one of Embodiments 105-109, wherein the presence or risk of ischemia is determined on a vessel-by-vessel basis. Embodiment 111: The computer-implemented method of any one of Embodiments 105-110, wherein the one or more regions of fat comprises epicardial fat. Embodiment 112: The computer-implemented method of any one of Embodiments 105-111, further comprising generating, by the computer system, a proposed treatment for the subject based at least in part on the generated risk assessment of coronary disease. Embodiment 113: The computer-implemented method of any one of Embodiments 105-112, wherein the medical image comprises a Computed Tomography (CT) image. Embodiment 114: The computer-implemented method of Embodiment 113, wherein the medical image comprises a non-contrast CT image. Embodiment 115: The computer-implemented method of Embodiment 113, wherein the medical image comprises a contrast CT image. Embodiment 116: The computer-implemented method of any one of Embodiments 113-115, wherein the determined set of plaque parameters comprises one or more of a percentage of higher radiodensity calcium plaque or lower radiodensity calcium plaque within the one or more regions of plaque, wherein higher radiodensity calcium plaque comprises a Hounsfield radiodensity unit of above 1000, and wherein lower radiodensity calcium plaque comprises a Hounsfield radiodensity unit of below 1000. Embodiment 117: The computer-implemented method of any one of Embodiments 105-112, wherein the medical image comprises a Magnetic Resonance (MR) image. Embodiment 118: The computer-implemented method of any one of Embodiments 105-112, wherein the medical image comprises an ultrasound image. 
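Embodiment 116 above distinguishes higher from lower radiodensity calcified plaque at a Hounsfield-unit cut-off of 1000. The sketch below illustrates one way a voxel-wise split and the resulting percentages could be computed; the function name and the use of NumPy masks are assumptions.

```python
import numpy as np

def calcium_density_split(hu_values: np.ndarray, cutoff_hu: float = 1000.0):
    """Percentages of calcified-plaque voxels above/below the 1000 HU
    cut-off described in Embodiment 116."""
    hu = np.asarray(hu_values, dtype=float).ravel()
    higher = float(np.mean(hu > cutoff_hu) * 100)
    lower = float(np.mean(hu < cutoff_hu) * 100)
    return {"higher_radiodensity_pct": higher, "lower_radiodensity_pct": lower}

# Example: a small calcified region with a mix of densities.
print(calcium_density_split(np.array([650, 980, 1100, 1300, 870])))
```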
Embodiment 119: The computer-implemented method of any one of Embodiments 105-112, wherein the medical image is obtained using an imaging technique comprising one or more of CT, x-ray, ultrasound, echocardiography, intravascular ultrasound (IVUS), MR imaging, optical coherence tomography (OCT), nuclear medicine imaging, positron-emission tomography (PET), single photon emission computed tomography (SPECT), or near-field infrared spectroscopy (NIRS). Embodiment 120: The computer-implemented method of any one of Embodiments 105-119, wherein the heterogeneity index of one or more regions of plaque is determined by generating a three-dimensional histogram of radiodensity values across a geometric shape of the one or more regions of plaque. Embodiment 121: The computer-implemented method of any one of Embodiments 105-119, wherein the heterogeneity index of one or more regions of plaque is determined by generating spatial mapping of radiodensity values across the one or more regions of plaque. Embodiment 122: The computer-implemented method of any one of Embodiments 105-121, wherein the set of quantified plaque parameters comprises a percentage composition of plaque comprising different radiodensity values. Embodiment 123: The computer-implemented method of any one of Embodiments 105-122, wherein the set of quantified plaque parameters further comprises diffusivity of the one or more regions of plaque. Embodiment 124: The computer-implemented method of any one of Embodiments 105-123, wherein the set of quantified plaque parameters further comprises a ratio of radiodensity to volume of the one or more regions of plaque. Embodiment 125: The computer-implemented method of any one of Embodiments 105-124, wherein the plaque identification algorithm is configured to determine the one or more regions of plaque by determining a vessel wall and lumen wall of the one or more coronary arteries and determining a volume between the vessel wall and lumen wall as the one or more regions of plaque. Embodiment 126: The computer-implemented method of any one of Embodiments 105-125, wherein the one or more coronary arteries are identified by size. Embodiment 127: The computer-implemented method of any one of Embodiments 105-126, wherein the generated risk assessment of coronary disease of the subject comprises a risk score. Embodiment 128: The computer-implemented method of any one of Embodiments 105-127, wherein the geometry of the one or more regions of plaque comprises a round or oblong shape. Embodiment 129: The computer-implemented method of any one of Embodiments 105-128, wherein the one or more vascular morphology parameters comprises a classification of arterial remodeling. Embodiment 130: The computer-implemented method of Embodiment 129, wherein the classification of arterial remodeling comprises positive arterial remodeling, negative arterial remodeling, and intermediate arterial remodeling. Embodiment 131: The computer-implemented method of Embodiment 129, wherein the classification of arterial remodeling is determined based at least in part on a ratio of a largest vessel diameter at the one or more regions of plaque to a normal reference vessel diameter. 
Embodiment 132: The computer-implemented method of Embodiment 131, wherein the classification of arterial remodeling comprises positive arterial remodeling, negative arterial remodeling, and intermediate arterial remodeling, and wherein positive arterial remodeling is determined when the ratio of the largest vessel diameter at the one or more regions of plaque to the normal reference vessel diameter is more than 1.1, wherein negative arterial remodeling is determined when the ratio of the largest vessel diameter at the one or more regions of plaque to the normal reference vessel diameter is less than 0.95, and wherein intermediate arterial remodeling is determined when the ratio of the largest vessel diameter at the one or more regions of plaque to the normal reference vessel diameter is between 0.95 and 1.1. Embodiment 133: The computer-implemented method of any of Embodiments 105-132, wherein the function of volume to surface area of the one or more regions of plaque comprises one or more of a thickness or diameter of the one or more regions of plaque. Embodiment 134: The computer-implemented method of any one of Embodiments 105-133, wherein the weighted measure is generated by weighting the one or more vascular morphology parameters, the set of quantified plaque parameters of the one or more regions of plaque, the quantified coronary stenosis, the determined presence or risk of ischemia, and the determined set of quantified fat parameters equally. Embodiment 135: The computer-implemented method of any one of Embodiments 105-133, wherein the weighted measure is generated by weighting the one or more vascular morphology parameters, the set of quantified plaque parameters of the one or more regions of plaque, the quantified coronary stenosis, the determined presence or risk of ischemia, and the determined set of quantified fat parameters differently. Embodiment 136: The computer-implemented method of any one of Embodiments 105-133, wherein the weighted measure is generated by weighting the one or more vascular morphology parameters, the set of quantified plaque parameters of the one or more regions of plaque, the quantified coronary stenosis, the determined presence or risk of ischemia, and the determined set of quantified fat parameters logarithmically, algebraically, or utilizing another mathematical transform. 
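Embodiments 102-104 and 134-136 above describe generating a weighted measure by combining the parameters equally, with different weights, or after a logarithmic or other mathematical transform. A schematic combination is sketched below; the normalization, the weight values, and the particular log transform are illustrative assumptions rather than the claimed weighting.

```python
import math

def weighted_measure(params: dict, weights: dict = None, log_transform: bool = False) -> float:
    """Combine named parameters into a single weighted measure.
    Equal weighting is used when no weights are supplied (Embodiment 134);
    per-parameter weights (135) or a log transform (136) are optional."""
    values = {k: math.log1p(abs(v)) if log_transform else float(v) for k, v in params.items()}
    if weights is None:
        weights = {k: 1.0 for k in values}  # equal weighting
    total_w = sum(weights[k] for k in values)
    return sum(weights[k] * values[k] for k in values) / total_w

params = {"vol_to_surface": 0.8, "heterogeneity": 0.4, "radiodensity": 0.6, "stenosis": 0.5}
print(weighted_measure(params))  # equal weights
print(weighted_measure(params, weights={"vol_to_surface": 2.0, "heterogeneity": 1.0,
                                        "radiodensity": 1.0, "stenosis": 3.0}))  # different weights
```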
Embodiment 137: A computer-implemented method of tracking a plaque-based disease based at least in part on determining a state of plaque progression of a subject using non-invasive medical image analysis, the method comprising: accessing, by a computer system, a first set of plaque parameters associated with a region of a subject, wherein the first set of plaque parameters are derived from a first medical image of the subject, wherein the first medical image of the subject is obtained non-invasively at a first point in time; accessing, by a computer system, a second medical image of the subject, wherein the second medical image of the subject is obtained non-invasively at a second point in time, the second point in time being later than the first point in time; identifying, by the computer system, one or more regions of plaque from the second medical image; determining, by the computer system, a second set of plaque parameters associated with the region of the subject by analyzing the second medical image and the identified one or more regions of plaque from the second medical image; analyzing, by the computer system, a change in one or more plaque parameters by comparing one or more of the first set of plaque parameters against one or more of the second set of plaque parameters; determining, by the computer system, a state of plaque progression associated with a plaque-based disease for the subject based at least in part on the analyzed change in the one or more plaque parameters, wherein the determined state of plaque progression comprises one or more of rapid plaque progression, non-rapid calcium dominant mixed response, non-rapid non-calcium dominant mixed response, or plaque regression; and tracking, by the computer system, progression of the plaque-based disease based at least in part on the determined state of plaque progression, wherein the computer system comprises a computer processor and an electronic storage medium. Embodiment 138: The computer-implemented method of Embodiment 137, wherein rapid plaque progression is determined when a percent atheroma volume increase of the subject is more than 1% per year, wherein non-rapid calcium dominant mixed response is determined when a percent atheroma volume increase of the subject is less than 1% per year and calcified plaque represents more than 50% of total new plaque formation, wherein non-rapid non-calcium dominant mixed response is determined when a percent atheroma volume increase of the subject is less than 1% per year and non-calcified plaque represents more than 50% of total new plaque formation, and wherein plaque regression is determined when a decrease in total percent atheroma volume is present. Embodiment 139: The computer-implemented method of any one of Embodiments 137-138, further comprising generating, by the computer system, a proposed treatment for the subject based at least in part on the determined state of plaque progression of the plaque-based disease. Embodiment 140: The computer-implemented method of any one of Embodiments 137-139, wherein the medical image comprises a Computed Tomography (CT) image. Embodiment 141: The computer-implemented method of Embodiment 140, wherein the medical image comprises a non-contrast CT image. Embodiment 142: The computer-implemented method of Embodiment 140, wherein the medical image comprises a contrast CT image. 
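Embodiment 138 above spells out the rules for the four plaque-progression states: rapid progression when percent atheroma volume (PAV) increases by more than 1% per year, the two non-rapid mixed responses depending on whether calcified plaque accounts for more than 50% of new plaque formation, and regression when total PAV decreases. The sketch below encodes those rules; the parameter names and units are assumptions.

```python
def plaque_progression_state(pav_change_per_year: float,
                             calcified_fraction_of_new_plaque: float) -> str:
    """Map the Embodiment 138 rules onto a progression label.
    pav_change_per_year: change in percent atheroma volume per year (percentage points).
    calcified_fraction_of_new_plaque: share of new plaque volume that is calcified (0-1)."""
    if pav_change_per_year < 0:
        return "plaque regression"
    if pav_change_per_year > 1.0:
        return "rapid plaque progression"
    if calcified_fraction_of_new_plaque > 0.5:
        return "non-rapid calcium dominant mixed response"
    return "non-rapid non-calcium dominant mixed response"

# Example: +0.6 %/year PAV with 70% of the new plaque calcified.
print(plaque_progression_state(0.6, 0.7))
```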
Embodiment 143: The computer-implemented method of any one of Embodiments 140-142, wherein the determined state of plaque progression further comprises one or more of a percentage of higher radiodensity plaques or lower radiodensity plaques, wherein higher radiodensity plaques comprise a Hounsfield unit of above 1000, and wherein lower radiodensity plaques comprise a Hounsfield unit of below 1000. Embodiment 144: The computer-implemented method of any one of Embodiments 137-139, wherein the medical image comprises a Magnetic Resonance (MR) image. Embodiment 145: The computer-implemented method of any one of Embodiments 137-139, wherein the medical image comprises an ultrasound image. Embodiment 146: The computer-implemented method of any one of Embodiments 137-145, wherein the region of the subject comprises a coronary region of the subject. Embodiment 147: The computer-implemented method of any one of Embodiments 137-145, wherein the region of the subject comprises one or more of carotid arteries, renal arteries, abdominal aorta, cerebral arteries, lower extremities, or upper extremities. Embodiment 148: The computer-implemented method of any one of Embodiments 137-147, wherein the plaque-based disease comprises one or more of atherosclerosis, stenosis, or ischemia. Embodiment 149: The computer-implemented method of any one of Embodiments 137-148, further comprising: determining, by the computer system, a first Coronary Artery Disease Reporting & Data System (CAD-RADS) classification score of the subject based at least in part on the first set of plaque parameters; determining, by the computer system, a second CAD-RADS classification score of the subject based at least in part on the second set of plaque parameters; and tracking, by the computer system, progression of a CAD-RADS classification score of the subject based on comparing the first CAD-RADS classification score and the second CAD-RADS classification score. Embodiment 150: The computer-implemented method of any one of Embodiments 137-149, wherein the plaque-based disease is further tracked by the computer system by analyzing one or more of serum biomarkers, genetics, omics, transcriptomics, microbiomics, or metabolomics. Embodiment 151: The computer-implemented method of any one of Embodiments 137-150, wherein the first set of plaque parameters comprises one or more of a volume, surface area, geometric shape, location, heterogeneity index, and radiodensity of one or more regions of plaque within the first medical image. Embodiment 152: The computer-implemented method of any one of Embodiments 137-151, wherein the second set of plaque parameters comprises one or more of a volume, surface area, geometric shape, location, heterogeneity index, and radiodensity of one or more regions of plaque within the second medical image. Embodiment 153: The computer-implemented method of any one of Embodiments 137-152, wherein the first set of plaque parameters and the second set of plaque parameters comprise a ratio of radiodensity to volume of one or more regions of plaque. Embodiment 154: The computer-implemented method of any one of Embodiments 137-153, wherein the first set of plaque parameters and the second set of plaque parameters comprise a diffusivity of one or more regions of plaque. Embodiment 155: The computer-implemented method of any one of Embodiments 137-154, wherein the first set of plaque parameters and the second set of plaque parameters comprise a volume to surface area ratio of one or more regions of plaque. 
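Embodiment 149 above tracks progression by comparing a first and a second CAD-RADS classification score derived from the two sets of plaque parameters. A trivial comparison step is sketched below, assuming the scores are represented as integers from 0 to 5; the returned labels are illustrative.

```python
def track_cad_rads(first_score: int, second_score: int) -> str:
    """Compare two CAD-RADS classification scores (0-5) taken at two time points."""
    if second_score > first_score:
        return f"progression: CAD-RADS {first_score} -> {second_score}"
    if second_score < first_score:
        return f"improvement: CAD-RADS {first_score} -> {second_score}"
    return f"stable: CAD-RADS {first_score}"

print(track_cad_rads(2, 3))
```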
Embodiment 156: The computer-implemented method of any one of Embodiments 137-155, wherein the first set of plaque parameters and the second set of plaque parameters comprise a heterogeneity index of one or more regions of plaque. Embodiment 157: The computer-implemented method of Embodiment 156, wherein the heterogeneity index of one or more regions of plaque is determined by generating a three-dimensional histogram of radiodensity values across a geometric shape of the one or more regions of plaque. Embodiment 158: The computer-implemented method of Embodiment 156, wherein the heterogeneity index of one or more regions of plaque is determined by generating spatial mapping of radiodensity values across the one or more regions of plaque. Embodiment 159: The computer-implemented method of any one of Embodiments 137-158, wherein the first set of plaque parameters and the second set of plaque parameters comprise a percentage composition of plaque comprising different radiodensity values. Embodiment 160: The computer-implemented method of any one of Embodiments 137-159, wherein the first set of plaque parameters and the second set of plaque parameters comprise a percentage composition of plaque comprising different radiodensity values as a function of volume of plaque. Embodiment 161: A computer-implemented method of characterizing a change in coronary calcium score of a subject, the method comprising: accessing, by the computer system, a first coronary calcium score of a subject and a first set of plaque parameters associated with a coronary region of a subject, the first coronary calcium score and the first set of parameters obtained at a first point in time, wherein the first set of plaque parameters comprises volume, surface area, geometric shape, location, heterogeneity index, and radiodensity for one or more regions of plaque within the coronary region of the subject; generating, by the computer system, a first weighted measure of the accessed first set of plaque parameters; accessing, by a computer system, a second coronary calcium score of the subject and one or more medical images of the coronary region of the subject, the second coronary calcium score and the one or more medical images obtained at a second point in time, the second point in time being later than the first point in time, wherein the one or more medical images of the coronary region of the subject comprises the one or more regions of plaque; determining, by the computer system, a change in coronary calcium score of the subject by comparing the first coronary calcium score and the second coronary calcium score; identifying, by the computer system, the one or more regions of plaque from the one or more medical images; determining, by the computer system, a second set of plaque parameters associated with the coronary region of the subject by analyzing the one or more medical images, wherein the second set of plaque parameters comprises volume, surface area, geometric shape, location, heterogeneity index, and radiodensity for the one or more regions of plaque; generating, by the computer system, a second weighted measure of the determined second set of plaque parameters; analyzing, by the computer system, a change in the first weighted measure of the accessed first set of plaque parameters and the second weighted measure of the determined second set of plaque parameters; and characterizing, by the computer system, the change in coronary calcium score of the subject based at least in part on the identified one or more 
regions of plaque and the analyzed change in the first weighted measure of the accessed first set of plaque parameters and the second weighted measure of the determined second set of plaque parameters, wherein the change in coronary calcium score is characterized as positive, neutral, or negative, wherein the computer system comprises a computer processor and an electronic storage medium. Embodiment 162: The computer-implemented method of Embodiment 161, wherein radiodensity of the one or more regions of plaque is determined from the one or more medical images by analyzing a Hounsfield unit of the identified one or more regions of plaque. Embodiment 163: The computer-implemented method of any one of Embodiments 161-162, further comprising determining a change in ratio between volume and radiodensity of the one or more regions of plaque within the coronary region of the subject, and wherein the change in coronary calcium score of the subject is further characterized based at least in part on the determined change in ratio between volume and radiodensity of one or more regions of plaque within the coronary region of the subject. Embodiment 164: The computer-implemented method of any one of Embodiments 161-163, wherein the change in coronary calcium score of the subject is characterized for each vessel. Embodiment 165: The computer-implemented method of any one of Embodiments 161-164, wherein the change in coronary calcium score of the subject is characterized for each segment. Embodiment 166: The computer-implemented method of any one of Embodiments 161-165, wherein the change in coronary calcium score of the subject is characterized for each plaque. Embodiment 167: The computer-implemented method of any one of Embodiments 161-166, wherein the first set of plaque parameters and the second set of plaque parameters further comprise a diffusivity of the one or more regions of plaque. Embodiment 168: The computer-implemented method of any one of Embodiments 161-167, wherein the change in coronary calcium score of the subject is characterized as positive when the radiodensity of the one or more regions of plaque is increased. Embodiment 169: The computer-implemented method of any one of Embodiments 161-168, wherein the change in coronary calcium score of the subject is characterized as negative when one or more new regions of plaque are identified from the one or more medical images. Embodiment 170: The computer-implemented method of any one of Embodiments 161-169, wherein the change in coronary calcium score of the subject is characterized as positive when a volume to surface area ratio of the one or more regions of plaque is decreased. Embodiment 171: The computer-implemented method of any one of Embodiments 161-170, wherein the heterogeneity index of the one or more regions of plaque is determined by generating a three-dimensional histogram of radiodensity values across a geometric shape of the one or more regions of plaque. Embodiment 172: The computer-implemented method of any one of Embodiments 161-171, wherein the change in coronary calcium score of the subject is characterized as positive when the heterogeneity index of the one or more regions of plaque is decreased. Embodiment 173: The computer-implemented method of any one of Embodiments 161-172, wherein the second coronary calcium score of the subject is determined by analyzing the one or more medical images of the coronary region of the subject.
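Embodiments 168-172 above give example cues for characterizing a change in coronary calcium score: positive when plaque radiodensity increases, when the volume-to-surface-area ratio decreases, or when the heterogeneity index decreases, and negative when new regions of plaque appear. One way those cues could be expressed is sketched below; the precedence of the rules and the "neutral" fallback are assumptions.

```python
def characterize_calcium_score_change(radiodensity_increased: bool,
                                      new_plaque_found: bool,
                                      vol_to_surface_decreased: bool,
                                      heterogeneity_decreased: bool) -> str:
    """Characterize a calcium-score change as positive, neutral, or negative
    using the cues listed in Embodiments 168-172."""
    if new_plaque_found:
        return "negative"
    if radiodensity_increased or vol_to_surface_decreased or heterogeneity_decreased:
        return "positive"
    return "neutral"

# Example: denser, smoother plaque and no new lesions -> positive change.
print(characterize_calcium_score_change(True, False, True, True))
```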
Embodiment 174: The computer-implemented method of any one of Embodiments 161-172, wherein the second coronary calcium score of the subject is accessed from a database. Embodiment 175: The computer-implemented method of any one of Embodiments 161-174, wherein the one or more medical images of the coronary region of the subject comprises an image obtained from a non-contrast Computed Tomography (CT) scan. Embodiment 176: The computer-implemented method of any one of Embodiments 161-174, wherein the one or more medical images of the coronary region of the subject comprises an image obtained from a contrast-enhanced CT scan. Embodiment 177: The computer-implemented method of Embodiment 176, wherein the one or more medical images of the coronary region of the subject comprises an image obtained from a contrast-enhanced CT angiogram. Embodiment 178: The computer-implemented method of any one of Embodiments 161-177, wherein a positive characterization of the change in coronary calcium score is indicative of plaque stabilization. Embodiment 179: The computer-implemented method of any one of Embodiments 161-178, wherein the first set of plaque parameters and the second set of plaque parameters further comprise radiodensity of a volume around plaque. Embodiment 180: The computer-implemented method of any one of Embodiments 161-179, wherein the change in coronary calcium score of the subject is characterized by a machine learning algorithm utilized by the computer system. Embodiment 181: The computer-implemented method of any one of Embodiments 161-180, wherein the first weighted measure is generated by weighting the accessed first set of plaque parameters equally. Embodiment 182: The computer-implemented method of any one of Embodiments 161-180, wherein the first weighted measure is generated by weighting the accessed first set of plaque parameters differently. Embodiment 183: The computer-implemented method of any one of Embodiments 161-180, wherein the first weighted measure is generated by weighting the accessed first set of plaque parameters logarithmically, algebraically, or utilizing another mathematical transform. 
Embodiment 184: A computer-implemented method of generating a prognosis of a cardiovascular event for a subject based on non-invasive medical image analysis, the method comprising: accessing, by a computer system, a medical image of a coronary region of a subject, wherein the medical image of the coronary region of the subject is obtained non-invasively; identifying, by the computer system utilizing a coronary artery identification algorithm, one or more coronary arteries within the medical image of the coronary region of the subject, wherein the coronary artery identification algorithm is configured to utilize raw medical images as input; identifying, by the computer system utilizing a plaque identification algorithm, one or more regions of plaque within the one or more coronary arteries identified from the medical image of the coronary region of the subject, wherein the plaque identification algorithm is configured to utilize raw medical images as input; determining, by the computer system, a set of quantified plaque parameters of the one or more identified regions of plaque within the medical image of the coronary region of the subject, wherein the set of quantified plaque parameters comprises volume, surface area, ratio of volume to surface area, heterogeneity index, geometry, and radiodensity of the one or more regions of plaque within the medical image; classifying, by the computer system, the one or more regions of plaque within the medical image as stable plaque or unstable plaque based at least in part on the determined set of quantified plaque parameters; determining, by the computer system, a volume of unstable plaque classified within the medical image and a total volume of the one or more coronary arteries within the medical image; determining, by the computer system, a ratio of volume of unstable plaque to the total volume of the one or more coronary arteries; generating, by the computer system, a prognosis of a cardiovascular event for the subject based at least in part on analyzing the ratio of volume of unstable plaque to the total volume of the one or more coronary arteries, the volume of the one or more regions of plaque, and the volume of unstable plaque classified within the medical image, wherein the analyzing comprises conducting a comparison to a known dataset of one or more ratios of volume of unstable plaque to total volume of one or more coronary arteries, volume of one or more regions of plaque, and volume of unstable plaque, wherein the known dataset is collected from other subjects; and generating, by the computer system, a treatment plan for the subject based at least in part on the generated prognosis of cardiovascular event for the subject, wherein the computer system comprises a computer processor and an electronic storage medium. Embodiment 185: The computer-implemented method of Embodiment 184, further comprising generating, by the computer system, a weighted measure of the ratio of volume of unstable plaque to the total volume of the one or more coronary arteries, the volume of the one or more regions of plaque, and the volume of unstable plaque classified within the medical image, wherein the prognosis of cardiovascular event is further generated by comparing the weighted measure to one or more weighted measures derived from the known dataset.
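Embodiments 184-185 above generate a prognosis from the ratio of unstable-plaque volume to total coronary artery volume, together with total plaque volume and unstable plaque volume, compared against a dataset collected from other subjects. The sketch below computes the ratio and performs a simple percentile-rank comparison; the reference values and the percentile interpretation are purely illustrative assumptions.

```python
import numpy as np

def unstable_plaque_ratio(unstable_plaque_vol_mm3: float, total_artery_vol_mm3: float) -> float:
    """Ratio of unstable-plaque volume to total coronary artery volume (Embodiment 184)."""
    return unstable_plaque_vol_mm3 / total_artery_vol_mm3

def prognosis_percentile(subject_ratio: float, known_ratios: np.ndarray) -> float:
    """Rank the subject's ratio against a known dataset from other subjects;
    a higher percentile is read here (by assumption) as a worse prognosis."""
    known = np.asarray(known_ratios, dtype=float)
    return float(np.mean(known <= subject_ratio) * 100)

# Example with made-up reference values.
ratio = unstable_plaque_ratio(120.0, 4800.0)
print(ratio, prognosis_percentile(ratio, np.array([0.005, 0.01, 0.02, 0.03, 0.05])))
```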
Embodiment 186: The computer-implemented method of Embodiment 185, wherein the weighted measure is generated by weighting the ratio of volume of unstable plaque to the total volume of the one or more coronary arteries, the volume of the one or more regions of plaque, and the volume of unstable plaque classified within the medical image equally. Embodiment 187: The computer-implemented method of Embodiment 185, wherein the weighted measure is generated by weighting the ratio of volume of unstable plaque to the total volume of the one or more coronary arteries, the volume of the one or more regions of plaque, and the volume of unstable plaque classified within the medical image differently. Embodiment 188: The computer-implemented method of Embodiment 185, wherein the weighted measure is generated by weighting the ratio of volume of unstable plaque to the total volume of the one or more coronary arteries, the volume of the one or more regions of plaque, and the volume of unstable plaque classified within the medical image logarithmically, algebraically, or utilizing another mathematical transform. Embodiment 189: The computer-implemented method of any one of Embodiments 184-188, further comprising analyzing, by the computer system, a medical image of a non-coronary cardiovascular system of the subject, and wherein the prognosis of a cardiovascular event for the subject is further generated based at least in part on the analyzed medical image of the non-coronary cardiovascular system of the subject. Embodiment 190: The computer-implemented method of any one of Embodiments 184-189, further comprising accessing, by the computer system, results of a blood chemistry or biomarker test of the subject, and wherein the prognosis of a cardiovascular event for the subject is further generated based at least in part on the results of the blood chemistry or biomarker test of the subject. Embodiment 191: The computer-implemented method of any one of Embodiments 184-190, wherein the generated prognosis of a cardiovascular event for the subject comprises a risk score of a cardiovascular event for the subject. Embodiment 192: The computer-implemented method of any one of Embodiments 184-191, wherein the prognosis of a cardiovascular event is generated by the computer system utilizing an artificial intelligence or machine learning algorithm. Embodiment 193: The computer-implemented method of any one of Embodiments 184-192, wherein the cardiovascular event comprises one or more of atherosclerosis, stenosis, or ischemia. Embodiment 194: The computer-implemented method of any one of Embodiments 184-193, wherein the generated treatment plan comprises one or more of use of statins, lifestyle changes, or surgery. Embodiment 195: The computer-implemented method of any one of Embodiments 184-194, wherein one or more of the coronary artery identification algorithm or the plaque identification algorithm comprises an artificial intelligence or machine learning algorithm. Embodiment 196: The computer-implemented method of any one of Embodiments 184-195, wherein the plaque identification algorithm is configured to determine the one or more regions of plaque by determining a vessel wall and lumen wall of the one or more coronary arteries and determining a volume between the vessel wall and lumen wall as the one or more regions of plaque. Embodiment 197: The computer-implemented method of any one of Embodiments 184-196, wherein the medical image comprises a Computed Tomography (CT) image. 
Embodiment 198: The computer-implemented method of Embodiment 197, wherein the medical image comprises a non-contrast CT image. Embodiment 199: The computer-implemented method of Embodiment 197, wherein the medical image comprises a contrast CT image. Embodiment 200: The computer-implemented method of any one of Embodiments 184-196, wherein the medical image comprises a Magnetic Resonance (MR) image. Embodiment 201: The computer-implemented method of any one of Embodiments 184-196, wherein the medical image is obtained using an imaging technique comprising one or more of CT, x-ray, ultrasound, echocardiography, intravascular ultrasound (IVUS), MR imaging, optical coherence tomography (OCT), nuclear medicine imaging, positron-emission tomography (PET), single photon emission computed tomography (SPECT), or near-field infrared spectroscopy (NIRS). Embodiment 202: A computer-implemented method of determining patient-specific stent parameters and guidance for implantation based on non-invasive medical image analysis, the method comprising: accessing, by a computer system, a medical image of a coronary region of a patient, wherein the medical image of the coronary region of the patient is obtained non-invasively; identifying, by the computer system utilizing a coronary artery identification algorithm, one or more coronary arteries within the medical image of the coronary region of the patient, wherein the coronary artery identification algorithm is configured to utilize raw medical images as input; identifying, by the computer system utilizing a plaque identification algorithm, one or more regions of plaque within the one or more coronary arteries identified from the medical image of the coronary region of the patient, wherein the plaque identification algorithm is configured to utilize raw medical images as input; determining, by the computer system, a set of quantified plaque parameters of the one or more identified regions of plaque from the medical image of the coronary region of the patient, wherein the set of quantified plaque parameters comprises a ratio or function of volume to surface area, heterogeneity index, location, geometry, and radiodensity of the one or more regions of plaque within the medical image; determining, by the computer system, a set of stenosis vessel parameters of the one or more coronary arteries within the medical image of the coronary region of the patient, wherein the set of vessel parameters comprises volume, curvature, vessel wall, lumen wall, and diameter of the one or more coronary arteries within the medical image in the presence of stenosis; determining, by the computer system, a set of normal vessel parameters of the one or more coronary arteries within the medical image of the coronary region of the patient, wherein the set of vessel parameters comprises volume, curvature, vessel wall, lumen wall, and diameter of the one or more coronary arteries within the medical image without stenosis, wherein the set of normal vessel parameters are determined by graphically removing from the medical image of the coronary region of the patient the identified one or more regions of plaque; determining, by the computer system, a predicted effectiveness of stent implantation for the patient based at least in part on the set of quantified plaque parameters and the set of vessel parameters; generating, by the computer system, patient-specific stent parameters for the patient when the predicted effectiveness of stent implantation for the patient is above a predetermined 
threshold, wherein the patient-specific stent parameters are generated based at least in part on the set of quantified plaque parameters, the set of vessel parameters, and the set of normal vessel parameters; and generating, by the computer system, guidance for implantation of a patient-specific stent comprising the patient-specific stent parameters, wherein the guidance for implantation of the patient-specific stent is generated based at least in part on the set of quantified plaque parameters and the set of vessel parameters, wherein the generated guidance for implantation of the patient-specific stent comprises insertion of guidance wires and positioning of the patient-specific stent, wherein the computer system comprises a computer processor and an electronic storage medium. Embodiment 203: The computer-implemented method of Embodiment 202, further comprising accessing, by the computer system, a post-implantation medical image of the coronary region of the patient and performing post-implantation analysis. Embodiment 204: The computer-implemented method of Embodiment 203, further comprising generating, by the computer system, a treatment plan for the patient based at least in part on the post-implantation analysis. Embodiment 205: The computer-implemented method of Embodiment 204, wherein the generated treatment plan comprises one or more of use of statins, lifestyle changes, or surgery. Embodiment 206: The computer-implemented method of any one of Embodiments 202-205, wherein the set of stenosis vessel parameters comprises a location, curvature, and diameter of bifurcation of the one or more coronary arteries. Embodiment 207: The computer-implemented method of any one of Embodiments 202-206, wherein the patient-specific stent parameters comprise a diameter of the patient-specific stent. Embodiment 208: The computer-implemented method of Embodiment 207, wherein the diameter of the patient-specific stent is substantially equal to the diameter of the one or more coronary arteries without stenosis. Embodiment 209: The computer-implemented method of Embodiment 207, wherein the diameter of the patient-specific stent is less than the diameter of the one or more coronary arteries without stenosis. Embodiment 210: The computer-implemented method of any one of Embodiments 202-209, wherein the predicted effectiveness of stent implantation for the patient is determined by the computer system utilizing an artificial intelligence or machine learning algorithm. Embodiment 211: The computer-implemented method of any one of Embodiments 202-210, wherein the patient-specific stent parameters for the patient are generated by the computer system utilizing an artificial intelligence or machine learning algorithm. Embodiment 212: The computer-implemented method of any one of Embodiments 202-211, wherein one or more of the coronary artery identification algorithm or the plaque identification algorithm comprises an artificial intelligence or machine learning algorithm. Embodiment 213: The computer-implemented method of any one of Embodiments 202-212, wherein the plaque identification algorithm is configured to determine the one or more regions of plaque by determining a vessel wall and lumen wall of the one or more coronary arteries and determining a volume between the vessel wall and lumen wall as the one or more regions of plaque. Embodiment 214: The computer-implemented method of any one of Embodiments 202-213, wherein the medical image comprises a Computed Tomography (CT) image. 
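Embodiments 202 and 207-209 above derive patient-specific stent parameters from the normal (plaque-removed) vessel geometry, with the stent diameter substantially equal to, or less than, the diameter of the artery without stenosis. A minimal sizing sketch follows; the undersizing factor and length margin are illustrative assumptions, not the claimed parameters.

```python
def stent_parameters(normal_lumen_diameter_mm: float,
                     lesion_length_mm: float,
                     undersize_factor: float = 1.0,
                     length_margin_mm: float = 2.0) -> dict:
    """Derive illustrative patient-specific stent dimensions from the
    plaque-free ('normal') vessel parameters of Embodiment 202.
    undersize_factor = 1.0 sizes the stent to the normal lumen diameter
    (Embodiment 208); values < 1.0 undersize it (Embodiment 209)."""
    return {
        "stent_diameter_mm": round(normal_lumen_diameter_mm * undersize_factor, 2),
        "stent_length_mm": round(lesion_length_mm + 2 * length_margin_mm, 2),
    }

# Example: a 3.2 mm reference lumen and a 14 mm lesion.
print(stent_parameters(3.2, 14.0))
print(stent_parameters(3.2, 14.0, undersize_factor=0.95))
```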
Embodiment 215: The computer-implemented method of Embodiment 214, wherein the medical image comprises a non-contrast CT image. Embodiment 216: The computer-implemented method of Embodiment 214, wherein the medical image comprises a contrast CT image. Embodiment 217: The computer-implemented method of any one of Embodiments 202-213, wherein the medical image comprises a Magnetic Resonance (MR) image. Embodiment 218: The computer-implemented method of any one of Embodiments 202-213, wherein the medical image is obtained using an imaging technique comprising one or more of CT, x-ray, ultrasound, echocardiography, intravascular ultrasound (IVUS), MR imaging, optical coherence tomography (OCT), nuclear medicine imaging, positron-emission tomography (PET), single photon emission computed tomography (SPECT), or near-field infrared spectroscopy (NIRS). Embodiment 219: A computer-implemented method of generating a patient-specific report on coronary artery disease for a patient based on non-invasive medical image analysis, the method comprising: accessing, by a computer system, a medical image of a coronary region of a patient, wherein the medical image of the coronary region of the patient is obtained non-invasively; identifying, by the computer system utilizing a coronary artery identification algorithm, one or more coronary arteries within the medical image of the coronary region of the patient, wherein the coronary artery identification algorithm is configured to utilize raw medical images as input; identifying, by the computer system utilizing a plaque identification algorithm, one or more regions of plaque within the one or more coronary arteries identified from the medical image of the coronary region of the patient, wherein the plaque identification algorithm is configured to utilize raw medical images as input; determining, by the computer system, one or more vascular morphology parameters and a set of quantified plaque parameters of the one or more identified regions of plaque from the medical image of the coronary region of the patient, wherein the set of quantified plaque parameters comprises a ratio or function of volume to surface area, volume, heterogeneity index, location, geometry, and radiodensity of the one or more regions of plaque within the medical image; quantifying, by the computer system, stenosis and atherosclerosis of the patient based at least in part on the set of quantified plaque parameters determined from the medical image; generating, by the computer system, one or more annotated medical images based at least in part on the medical image, the quantified stenosis and atherosclerosis of the patient, and the set of quantified plaque parameters determined from the medical image; determining, by the computer system, a risk of coronary artery disease for the patient based at least in part by comparing the quantified stenosis and atherosclerosis of the patient and the set of quantified plaque parameters determined from the medical image to a known dataset of one or more quantified stenosis and atherosclerosis and one or more quantified plaque parameters derived from one or more medical images of healthy subjects within an age group of the patient; dynamically generating, by the computer system, a patient-specific report on coronary artery disease for the patient, wherein the generated patient-specific report comprises the one or more annotated medical images, one or more of the set of quantified plaque parameters, and determined risk of coronary artery disease, wherein the
computer system comprises a computer processor and an electronic storage medium. Embodiment 220: The computer-implemented method of Embodiment 219, wherein the patient-specific report comprises a cinematic report. Embodiment 221: The computer-implemented method of Embodiment 220, wherein the patient-specific report comprises content configured to provide an Augmented Reality (AR) or Virtual Reality (VR) experience. Embodiment 222: The computer-implemented method of any one of Embodiments 219-221, wherein the patient-specific report comprises audio dynamically generated for the patient based at least in part on the quantified stenosis and atherosclerosis of the patient, the set of quantified plaque parameters determined from the medical image, and determined risk of coronary artery disease. Embodiment 223: The computer-implemented method of any one of Embodiments 219-222, wherein the patient-specific report comprises phrases dynamically generated for the patient based at least in part on the quantified stenosis and atherosclerosis of the patient, the set of quantified plaque parameters determined from the medical image, and determined risk of coronary artery disease. Embodiment 224: The computer-implemented method of any one of Embodiments 219-223, further comprising generating, by the computer system, a treatment plan for the patient based at least in part on the quantified stenosis and atherosclerosis of the patient, the set of quantified plaque parameters determined from the medical image, and determined risk of coronary artery disease, wherein the patient-specific report comprises the generated treatment plan. Embodiment 225: The computer-implemented method of Embodiment 224, wherein the generated treatment plan comprises one or more of use of statins, lifestyle changes, or surgery. Embodiment 226: The computer-implemented method of any one of Embodiments 219-225, further comprising tracking, by the computer system, progression of coronary artery disease for the patient based at least in part on comparing one or more of the set of quantified plaque parameters determined from the medical image against one or more previous quantified plaque parameters derived from a previous medical image of the patient, wherein the patient-specific report comprises the tracked progression of coronary artery disease. Embodiment 227: The computer-implemented method of any one of Embodiments 219-226, wherein one or more of the coronary artery identification algorithm or the plaque identification algorithm comprises an artificial intelligence or machine learning algorithm. Embodiment 228: The computer-implemented method of any one of Embodiments 219-227, wherein the plaque identification algorithm is configured to determine the one or more regions of plaque by determining a vessel wall and lumen wall of the one or more coronary arteries and determining a volume between the vessel wall and lumen wall as the one or more regions of plaque. Embodiment 229: The computer-implemented method of any one of Embodiments 219-228, wherein the medical image comprises a Computed Tomography (CT) image. Embodiment 230: The computer-implemented method of Embodiment 229, wherein the medical image comprises a non-contrast CT image. Embodiment 231: The computer-implemented method of Embodiment 229, wherein the medical image comprises a contrast CT image. Embodiment 232: The computer-implemented method of any one of Embodiments 219-228, wherein the medical image comprises a Magnetic Resonance (MR) image. 
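Embodiments 222-223 above describe dynamically generating audio and phrases for the patient-specific report from the quantified stenosis, plaque parameters, and determined risk. The sketch below shows one way such phrasing could be templated; the wording, the 50% threshold, and the field names are assumptions for illustration.

```python
def report_phrases(stenosis_pct: float, plaque_volume_mm3: float, risk_label: str) -> list:
    """Assemble simple, dynamically generated sentences for a patient-specific report."""
    phrases = [
        f"The maximum coronary stenosis measured in this scan is approximately {stenosis_pct:.0f}%.",
        f"The total quantified plaque volume is about {plaque_volume_mm3:.0f} cubic millimeters.",
        f"Compared with healthy subjects in your age group, your coronary artery disease risk is rated {risk_label}.",
    ]
    if stenosis_pct >= 50:
        phrases.append("A stenosis of 50% or more was found; discuss follow-up options with your physician.")
    return phrases

for sentence in report_phrases(42.0, 310.0, "moderate"):
    print(sentence)
```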
Embodiment 233: The computer-implemented method of any one of Embodiments 219-228, wherein the medical image is obtained using an imaging technique comprising one or more of CT, x-ray, ultrasound, echocardiography, intravascular ultrasound (IVUS), MR imaging, optical coherence tomography (OCT), nuclear medicine imaging, positron-emission tomography (PET), single photon emission computed tomography (SPECT), or near-field infrared spectroscopy (NIRS). Embodiment 234: A system comprising: at least one non-transitory computer storage medium configured to at least store computer-executable instructions, a set of computed tomography (CT) images of a patient's coronary vessels, vessel labels, and artery information associated with the set of CT images including information of stenosis, plaque, and locations of segments of the coronary vessels; one or more computer hardware processors in communication with the at least one non-transitory computer storage medium, the one or more computer hardware processors configured to execute the computer-executable instructions to at least: generate and display in a user interface a first panel including an artery tree comprising a three-dimensional (3D) representation of coronary vessels depicting coronary vessels identified in the CT images, and including segment labels related to the artery tree, the artery tree not including heart tissue between branches of the artery tree; in response to an input on the user interface indicating the selection of a coronary vessel in the artery tree in the first panel, generate and display on the user interface a second panel illustrating at least a portion of the selected coronary vessel in at least one straightened multiplanar vessel (SMPR) view; generate and display on the user interface a third panel showing a cross-sectional view of the selected coronary vessel, the cross-sectional view generated using one of the set of CT images of the selected coronary vessel, wherein locations along the at least one SMPR view are each associated with one of the CT images in the set of CT images such that a selection of a particular location along the coronary vessel in the at least one SMPR view displays the associated CT image in the cross-sectional view in the third panel; and in response to an input on the third panel indicating a first location along the selected coronary artery in the at least one SMPR view, display a cross-sectional view associated with the selected coronary artery at the first location in the third panel. Embodiment 235: The system of embodiment 234, wherein the one or more computer hardware processors are further configured to execute the computer-executable instructions to, in response to an input on the second panel of the user interface indicating a second location along the selected coronary artery in the at least one SMPR view, display the CT scan associated with the second location in a cross-sectional view in the third panel.
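Embodiments 234-235 above associate each location along the straightened multiplanar (SMPR) view with one CT image, so that selecting a location displays the corresponding cross-section in the third panel. A minimal, framework-agnostic sketch of that lookup is shown below; the class name, data structures, and nearest-location rule are assumptions (a real implementation would live inside the user-interface layer).

```python
from typing import Dict, List

class SmprCrossSectionIndex:
    """Maps positions along a straightened vessel (SMPR) to CT image indices."""

    def __init__(self, location_to_image: Dict[int, int], ct_images: List[object]):
        self._location_to_image = location_to_image  # SMPR position (e.g. mm along centerline) -> CT index
        self._ct_images = ct_images

    def image_for_location(self, location: int):
        """Return the CT image associated with the selected SMPR location."""
        nearest = min(self._location_to_image, key=lambda loc: abs(loc - location))
        return self._ct_images[self._location_to_image[nearest]]

# Example with placeholder strings standing in for CT slices.
index = SmprCrossSectionIndex({0: 0, 5: 1, 10: 2}, ["slice_000", "slice_017", "slice_034"])
print(index.image_for_location(6))  # -> "slice_017", shown in the cross-sectional panel
```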
Embodiment 236: The system of embodiment 234, wherein the one or more computer hardware processors are further configured to execute the computer-executable instructions to: in response to a second input on the user interface indicating the selection of a second coronary vessel in the artery tree displayed in the first panel, generate and display in the second panel at least a portion of the selected second coronary vessel in at least one straightened multiplanar vessel (SMPR) view, and generate and display on the third panel a cross-sectional view of the selected second coronary vessel, the cross-sectional view generated using one of the set of CT images of the selected second coronary vessel, wherein locations along the selected second coronary artery in the at least one SMPR view are each associated with one of the CT images in the set of CT images such that a selection of a particular location along the second coronary vessel in the at least one SMPR view displays the associated CT image in the cross-sectional view in the third panel. Embodiment 237: The system of embodiment 234, wherein the one or more computer hardware processors are further configured to identify the vessel segments using a machine learning algorithm that processes the CT images prior to storing the artery information on the at least one non-transitory computer storage medium. Embodiment 238: The system of embodiment 234, wherein the one or more computer hardware processors are further configured to execute the computer-executable instructions to generate and display on the user interface in a fourth panel a cartoon artery tree, the cartoon artery tree comprising a non-patient specific graphical representation of a coronary artery tree, and wherein in response to a selection of a vessel segment in the cartoon artery tree, a view of the selected vessel segment is displayed in a panel of the user interface in a SMPR view, and upon selection of a location of the vessel segment displayed in the SMPR view, generate and display in the user interface a panel that displays information about the selected vessel at the selected location. Embodiment 239: The system of embodiment 238, wherein the displayed information includes information relating to stenosis and plaque of the selected vessel. Embodiment 240: The system of embodiment 234, wherein the one or more computer hardware processors are further configured to execute the computer-executable instructions to generate and display segment name labels, proximal to a respective segment on the artery tree, indicative of the name of the segment. Embodiment 241: The system of embodiment 240, wherein the one or more computer hardware processors are further configured to execute the computer-executable instructions to, in response to an input selection of a first segment name label displayed on the user interface, generate and display on the user interface a panel having a list of vessel segment names and indicating the current name of the selected vessel segment; and in response to an input selection of a second segment name label on the list, replace the first segment name label with the second segment name label of the displayed artery tree in the user interface. Embodiment 242: The system of embodiment 234, wherein the at least one SMPR view of the selected coronary vessel comprises at least two SMPR views of the selected coronary vessel displayed adjacently at a rotational interval.
Embodiment 243: The system of embodiment 234, wherein the at least one SMPR view includes four SMPR views displayed at a relative rotation of 0°, 22.5°, 45°, and 67.5°. Embodiment 244: The system of embodiment 234, wherein the one or more computer hardware processors are further configured to execute the computer-executable instructions to, in response to a user input, rotate the at least one SMPR view in increments of 1°. Embodiment 245: The system of embodiment 234, wherein the artery tree, the at least one SMPR view, and the cross-sectional view are displayed concurrently on the user interface. Embodiment 246: The system of embodiment 245, wherein the artery tree is displayed in a center portion of the user interface, the cross-sectional view is displayed in a center portion of the user interface above or below the artery tree, and the at least one SMPR view is displayed on one side of the center portion of the user interface. Embodiment 247: The system of embodiment 246, wherein the one or more computer hardware processors are further configured to generate and display, on one side of the center portion of the user interface, one or more anatomical plane views corresponding to the selected coronary artery, the anatomical plane views of the selected coronary vessel based on the CT images. Embodiment 248: The system of embodiment 247, wherein the anatomical plane views comprise three anatomical plane views. Embodiment 249: The system of embodiment 247, wherein the anatomical plane views comprise at least one of an axial plane view, a coronal plane view, or a sagittal plane view. Embodiment 250: The system of embodiment 234, wherein the one or more computer hardware processors are further configured to receive a rotation input on the user interface, and rotate the at least one SMPR view incrementally based on the rotation input. Embodiment 251: The system of embodiment 234, wherein the at least one non-transitory computer storage medium is further configured to at least store vessel wall information including information indicative of the lumen and the vessel walls of the coronary artery vessels, and wherein the one or more computer hardware processors are further configured to graphically display lumen and vessel wall information corresponding to the coronary vessel displayed in the cross-sectional view in the third panel. Embodiment 252: The system of embodiment 251, wherein the one or more computer hardware processors are further configured to display information of the lumen and the vessel wall on the user interface based on the selected portion of the coronary vessel in the at least one SMPR view. Embodiment 253: The system of embodiment 251, wherein the one or more computer hardware processors are further configured to display information of plaque based on the selected portion of the coronary vessel in the at least one SMPR view. Embodiment 254: The system of embodiment 251, wherein the one or more computer hardware processors are further configured to display information of stenosis based on the selected portion of the coronary vessel in the at least one SMPR view.
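Embodiments 242-244 and 250 above display multiple SMPR views at fixed rotational offsets (for example 0°, 22.5°, 45°, and 67.5°) and allow the views to be rotated in 1° increments. The small helper below sketches the angle bookkeeping only; the class and method names are assumptions.

```python
class SmprRotation:
    """Tracks the rotation angles of the displayed SMPR views."""

    def __init__(self, base_angles=(0.0, 22.5, 45.0, 67.5)):
        self.offset = 0.0                 # user-controlled rotation, in degrees
        self.base_angles = list(base_angles)

    def rotate(self, degrees: float = 1.0):
        """Rotate all SMPR views by the given increment (1 degree by default)."""
        self.offset = (self.offset + degrees) % 360.0

    def current_angles(self):
        return [(a + self.offset) % 360.0 for a in self.base_angles]

views = SmprRotation()
views.rotate(1.0)
print(views.current_angles())  # [1.0, 23.5, 46.0, 68.5]
```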
Embodiment 255: The system of embodiment 234, wherein the one or more computer hardware processors are further configured to execute the computer-executable instructions to generate and display on the user interface a cartoon artery tree, the cartoon artery tree being a non-patient specific graphical representation of an artery tree, wherein portions of the artery tree are displayed in a color that corresponds to a risk level. Embodiment 256: The system of embodiment 255, wherein the risk level is based on stenosis. Embodiment 257: The system of embodiment 255, wherein the risk level is based on plaque. Embodiment 258: The system of embodiment 255, wherein the risk level is based on ischemia. Embodiment 259: The system of embodiment 255, wherein the one or more computer hardware processors are further configured to execute the computer-executable instructions to, in response to selecting a portion of the cartoon artery tree, display on the second panel a SMPR view of the vessel corresponding to the selected portion of the cartoon artery tree, and display on the third panel a cross-sectional view corresponding to the selected portion of the cartoon artery tree. Embodiment 260: A system comprising: means for storing computer-executable instructions, a set of computed tomography (CT) images of a patient's coronary vessels, vessel labels, and artery information associated with the set of CT images including information of stenosis, plaque, and locations of segments of the coronary vessels; and means for executing the computer-executable instructions to at least: generate and display on a user interface a first panel including an artery tree comprising a three-dimensional (3D) representation of coronary vessels based on the CT images and depicting coronary vessels identified in the CT images, and depicting segment labels, the artery tree not including heart tissue between branches of the artery tree; in response to an input on the user interface indicating the selection of a coronary vessel in the artery tree in the first panel, generate and display on the user interface a second panel illustrating at least a portion of the selected coronary vessel in at least one straightened multiplanar vessel (SMPR) view; generate and display on the user interface a third panel showing a cross-sectional view of the selected coronary vessel, the cross-sectional view generated using one of the set of CT images of the selected coronary vessel, wherein locations along the at least one SMPR view are each associated with one of the CT images in the set of CT images such that a selection of a particular location along the coronary vessel in the at least one SMPR view displays the associated CT image in the cross-sectional view in the third panel; and in response to an input on the user interface indicating a first location along the selected coronary artery in the at least one SMPR view, display the CT image associated with the first location in the cross-sectional view in the third panel. 
Embodiment 261: A method for analyzing CT images and corresponding information, the method comprising: storing computer-executable instructions, a set of computed tomography (CT) images of a patient's coronary vessels, vessel labels, and artery information associated with the set of CT images including information of stenosis, plaque, and locations of segments of the coronary vessels; generating and displaying in a user interface a first panel including an artery tree comprising a three-dimensional (3D) representation of coronary vessels based on the CT images and depicting coronary vessels identified in the CT images, and depicting segment labels, the artery tree not including heart tissue between branches of the artery tree; receiving a first input indicating a selection of a coronary vessel in the artery tree in the first panel; in response to the first input, generating and displaying on the user interface a second panel illustrating at least a portion of the selected coronary vessel in at least one straightened multiplanar vessel (SMPR) view; generating and displaying on the user interface a third panel showing a cross-sectional view of the selected coronary vessel, the cross-sectional view generated using one of the set of CT images of the selected coronary vessel, wherein locations along the at least one SMPR view are each associated with one of the CT images in the set of CT images such that a selection of a particular location along the coronary vessel in the at least one SMPR view displays the associated CT image in the cross-sectional view in the third panel; receiving a second input on the user interface indicating a first location along the selected coronary artery in the at least one SMPR view; and in response to the second input, displaying the CT image associated with the first location in the cross-sectional view in the third panel, wherein the method is performed by one or more computer hardware processors executing computer-executable instructions stored on one or more non-transitory computer storage mediums. Embodiment 262: The method of embodiment 261, further comprising, in response to an input on the second panel of the user interface indicating a second location along the selected coronary artery in the at least one SMPR view, displaying the CT image associated with the second location in the cross-sectional view in the third panel. Embodiment 263: The method of any one of embodiments 261 and 262, further comprising: in response to a second input on the user interface indicating the selection of a second coronary vessel in the artery tree displayed in the first panel, generating and displaying in the second panel at least a portion of the selected second coronary vessel in at least one straightened multiplanar vessel (SMPR) view, and generating and displaying on the third panel a cross-sectional view of the selected second coronary vessel, the cross-sectional view generated using one of the set of CT images of the selected second coronary vessel, wherein locations along the selected second coronary artery in the at least one SMPR view are each associated with one of the CT images in the set of CT images such that a selection of a particular location along the second coronary vessel in the at least one SMPR view displays the associated CT image in the cross-sectional view in the third panel. 
Embodiment 264: The method of any one of embodiments 261-263, further comprising generating and displaying on the user interface in a fourth panel a cartoon artery tree, the cartoon artery tree comprising a non-patient specific graphical representation of a coronary artery tree, and wherein in response to a selection of a vessel segment in the cartoon artery tree, a view of the selected vessel segment is displayed in a panel of the user interface in a SMPR view, and upon selection of a location of the vessel segment displayed in the SMPR view, generating and displaying in the user interface a panel that displays information about the selected vessel at the selected location. Embodiment 265: The method of embodiment 264, wherein the displayed information includes information relating to stenosis and plaque of the selected vessel. Embodiment 266: The method of any one of embodiments 261-265, further comprising generating and displaying segment name labels, proximal to a respective segment on the artery tree, indicative of the name of the segment, using the stored artery information. Embodiment 267: The method of any one of embodiments 261-266, further comprising, in response to an input selection of a first segment name label displayed on the user interface, generating and displaying on the user interface a panel having a list of vessel segment names and indicating the current name of the selected vessel segment, and in response to an input selection of a second segment name label on the list, replacing the first segment name label with the second segment name label of the displayed artery tree in the user interface. Embodiment 268: The method of any one of embodiments 261-267, further comprising generating and displaying a tool bar on a fourth panel of the user interface, the tool bar comprising tools to add, delete, or revise artery information displayed on the user interface. Embodiment 269: The method of embodiment 268, wherein the tools on the toolbar include a lumen wall tool, a snap to vessel wall tool, a snap to lumen wall tool, vessel wall tool, a segment tool, a stenosis tool, a plaque overlay tool a snap to centerline tool, chronic total occlusion tool, stent tool, an exclude tool, a tracker tool, or a distance measurement tool. Embodiment 270: The method of embodiment 268, wherein the tools on the toolbar include a lumen wall tool, a snap to vessel wall tool, a snap to lumen wall tool, vessel wall tool, a segment tool, a stenosis tool, a plaque overlay tool a snap to centerline tool, chronic total occlusion tool, stent tool, an exclude tool, a tracker tool, and a distance measurement tool. 
Embodiment 271: A normalization device configured to facilitate normalization of medical images of a coronary region of a subject for an algorithm-based medical imaging analysis, the normalization device comprising: a substrate having a width, a length, and a depth dimension, the substrate having a proximal surface and a distal surface, the proximal surface adapted to be placed adjacent to a surface of a body portion of a patient; a plurality of compartments positioned within the substrate, each of the plurality of compartments configured to hold a sample of a known material, wherein: a first subset of the plurality of compartments hold samples of a contrast material with different concentrations, a second subset of the plurality of compartments hold samples of materials representative of materials to be analyzed by the algorithm-based medical imaging analysis, and a third subset of the plurality of compartments hold samples of phantom materials. Embodiment 272: The normalization device of Embodiment 271, wherein the contrast material comprises one of iodine, Gad, Tantalum, Tungsten, Gold, Bismuth, or Ytterbium. Embodiment 273: The normalization device of any of Embodiments 271-272, wherein the samples of materials representative of materials to be analyzed by the algorithm-based medical imaging analysis comprise at least two of calcium 1000 HU, calcium 220 HU, calcium 150 HU, calcium 130 HU, and a low attenuation (e.g., 30 HU) material. Embodiment 274: The normalization device of any of Embodiments 271-273, wherein the samples of phantom materials comprise one or more of water, fat, calcium, uric acid, air, iron, or blood. Embodiment 275: The normalization device of any of Embodiments 271-274, further comprising one or more fiducials positioned on or in the substrate for determining the alignment of the normalization device in an image of the normalization device such that the position in the image of each of the one or more compartments in the first arrangement can be determined using the one or more fiducials. Embodiment 276: The normalization device of any of Embodiments 271-275, wherein the substrate comprises a first layer, and at least some of the plurality of compartments are positioned in the first layer in a first arrangement. Embodiment 277: The normalization device of Embodiment 276, wherein the substrate further comprises a second layer positioned above the first layer, and at least some of the plurality of compartments are positioned in the second layer including in a second arrangement. Embodiment 278: The normalization device of Embodiment 277, further comprising one or more additional layers positioned above the second layer, and at least some of the plurality of compartments are positioned within the one or more additional layers. Embodiment 279: The normalization device of any one of Embodiments 271-278, wherein at least one of the compartments is configured to be self-sealing such that the material can be injected into the self-sealing compartment and the compartment seals to contain the injected material. Embodiment 280: The normalization device of any of Embodiments 271-279, further comprising an adhesive on the proximal surface of the substrate and configured to adhere the normalization device to the body portion patient. Embodiment 281: The normalization device of any of Embodiments 271-280, further comprising a heat transfer material designed to transfer heat from the body portion of the patient to the material in the one or more compartments. 
Embodiment 282: The normalization device of any of Embodiments 271-280, further comprising an adhesive strip having a proximal side and a distal side, the proximal side configured to adhere to the body portion, the adhesive strip including a fastener configured to removably attach to the proximal surface of the substrate. Embodiment 283: The normalization device of Embodiment 282, wherein the fastener comprises a first part of a hook-and-loop fastener, and the first layer comprises a corresponding second part of the hook-and-loop fastener. Embodiment 284: The normalization device of any of Embodiments 271-283, wherein the substrate comprises a flexible material to allow the substrate to conform to the shape of the body portion. Embodiment 285: The normalization device of any of Embodiments 271-284, wherein the first arrangement includes a circular-shaped arrangement of the compartments. Embodiment 286: The normalization device of any of Embodiments 271-284, wherein the first arrangement includes a rectangular-shaped arrangement of the compartments. Embodiment 287: The normalization device of any of Embodiments 271-286, wherein the material in at least two compartments is the same. Embodiment 288: The normalization device of any of Embodiments 271-287, wherein at least one of a length, a width or a depth dimension of a compartment is less than 0.5 mm. Embodiment 289: The normalization device of any of Embodiments 271-287, wherein a width dimension of the compartments is between 0.1 mm and 1 mm. Embodiment 290: The normalization device of Embodiment 289, wherein a length dimension of the compartments is between 0.1 mm and 1 mm. Embodiment 291: The normalization device of Embodiment 290, wherein a depth dimension of the compartments is between 0.1 mm and 1 mm. Embodiment 292: The normalization device of any of Embodiments 271-287, wherein at least one of the length, width or depth dimension of a compartment is greater than 1.0 mm. Embodiment 293: The normalization device of any of Embodiments 271-287, wherein dimensions of some or all of the compartments in the normalization device are different from each other, allowing a single normalization device to have a plurality of compartments having different dimensions such that the normalization device can be used in various medical image scanning devices having different resolution capabilities. Embodiment 294: The normalization device of any of Embodiments 271-287, wherein the normalization device includes a plurality of compartments with differing dimensions such that the normalization device can be used to determine the actual resolution capability of the scanning device. Embodiment 295: A normalization device, comprising: a first layer having a width, length, and depth dimension, the first layer having a proximal surface and a distal surface, the proximal surface adapted to be placed adjacent to a surface of a body portion of a patient, the first layer including one or more compartments positioned in the first layer in a first arrangement, each of the one or more compartments containing a known material; and one or more fiducials for determining the alignment of the normalization device in an image of the normalization device such that the position in the image of each of the one or more compartments in the first arrangement can be determined using the one or more fiducials. 
Embodiment 296: The normalization device of Embodiment 295, further comprising a second layer having a width, length, and depth dimension, the second layer having a proximal surface and a distal surface, the proximal surface adjacent to the distal surface of the first layer, the second layer including one or more compartments positioned in the second layer in a second arrangement, each of the one or more compartments of the second layer containing a known material. Embodiment 297: The normalization device of Embodiment 296, further comprising one or more additional layers each having a width, length, and depth dimension, the one or more additional layers having a proximal surface and a distal surface, the proximal surface facing the second layer and each of the one or more layers positioned such that the second layer is between the first layer and the one or more additional layers, each of the one or more additional layers respectively including one or more compartments positioned in each respective one or more additional layers layer in a second arrangement, each of the one or more compartments of the one or more additional layers containing a known material. Embodiment 298: The normalization device of any one of Embodiments 295-297, wherein at least one of the compartments is configured to be self-sealing such that the material can be injected into the self-sealing compartment and the compartment seals to contain the injected material. Embodiment 299: The normalization device of Embodiment 295, further comprising an adhesive on the proximal surface of the first layer. Embodiment 300: The normalization device of Embodiment 295, further comprising a heat transfer material designed to transfer heat from the body portion of the patient to the material in the one or more compartments. Embodiment 301: The normalization device of Embodiment 295, further comprising an adhesive strip having a proximal side and a distal side, the proximal side configured to adhere to the body portion, the adhesive strip including a fastener configured to removably attach to the proximal surface of the first layer. Embodiment 302: The normalization device of Embodiment 301, wherein the fastener comprises a first part of a hook-and-loop fastener, and the first layer comprises a corresponding second part of the hook-and-loop fastener. Embodiment 303: The normalization device of Embodiment 295, wherein the normalization device comprises a flexible material to allow the normalization device to conform to the shape of the body portion. Embodiment 304: The normalization device of Embodiment 295, wherein the first arrangement includes a circular-shaped arrangements of the compartments. Embodiment 305: The normalization device of Embodiment 295, wherein the first arrangement includes a rectangular-shaped arrangements of the compartments. Embodiment 306: The normalization device of Embodiment 295, wherein the material in at least two compartments of the first layer is the same. Embodiment 307: The normalization device of any of Embodiment 296 or 297, wherein the material in at least two compartments of any of the layers is the same. Embodiment 308: The normalization device of Embodiment 295, wherein at least one of the one or more compartments include a contrast material. Embodiment 309: The normalization device of Embodiment 308, wherein the contrast material comprises one of iodine, Gad, Tantalum, Tungsten, Gold, Bismuth, or Ytterbium. 
Embodiment 310: The normalization device of Embodiment 295, wherein at least one of the one or more compartments include a material representative of a studied variable. Embodiment 311: The normalization device of Embodiment 309, wherein the studied variable is representative of calcium 1000 HU, calcium 220 HU, calcium 150 HU, calcium 130 HU, or a low attenuation (e.g., 30 HU) material. Embodiment 312: The normalization device of Embodiment 295, wherein at least one of the one or more compartments include a phantom. Embodiment 313: The normalization device of Embodiment 312, wherein the phantom comprises one of water, fat, calcium, uric acid, air, iron, or blood. Embodiment 314: The normalization device of Embodiment 295, wherein the first arrangement includes at least one compartment that contains a contrast agent, at least one compartment that includes a studied variable and at least one compartment that includes a phantom. Embodiment 315: The normalization device of Embodiment 295, wherein the first arrangement includes at least one compartment that contains a contrast agent and at least one compartment that includes a studied variable. Embodiment 316: The normalization device of Embodiment 295, wherein the first arrangement includes at least one compartment that contains a contrast agent and at least one compartment that includes a phantom. Embodiment 317: The normalization device of Embodiment 295, wherein the first arrangement includes at least one compartment that contains a studied variable and at least one compartment that includes a phantom. Embodiment 318: The normalization device of Embodiment 271, wherein the first arrangement of the first layer includes at least one compartment that contains a contrast agent, at least one compartment that includes a studied variable and at least one compartment that includes a phantom, and the second arrangement of the second layer includes at least one compartment that contains a contrast agent, at least one compartment that includes a studied variable and at least one compartment that includes a phantom. Embodiment 319: The normalization device of Embodiment 295, wherein at least one of the length, width or depth dimension of a compartment is less than 0.5 mm. Embodiment 320: The normalization device of Embodiment 295, wherein the width dimension of the compartments is between 0.1 mm and 1 mm. Embodiment 321: The normalization device of Embodiment 295, wherein the length dimension of the compartments is between 0.1 mm and 1 mm. Embodiment 322: The normalization device of Embodiment 295, wherein the depth (or height) dimension of the compartments is between 0.1 mm and 1 mm. Embodiment 323: The normalization device of Embodiment 295, wherein at least one of the length, width or depth dimension of a compartment is greater than 1.0 mm. Embodiment 324: The normalization device of any one of Embodiments 295-297, wherein the dimensions of some or all of the compartments in the normalization device are different from each other allowing a single normalization device to have a plurality of compartments having different dimension such that the normalization device can be used in various medical image scanning devices having different resolution capabilities. Embodiment 325: The normalization device of any one of Embodiments 295-297, wherein the normalization device includes a plurality of compartments with differing dimensions such that the normalization device can be used to determine the actual resolution capability of the scanning device. 
Embodiment 326: A computer-implemented method for normalizing medical images for an algorithm-based medical imaging analysis, wherein normalization of the medical images improves accuracy of the algorithm-based medical imaging analysis, the method comprising: accessing, by a computer system, a first medical image of a region of a subject and a normalization device, wherein the first medical image is obtained non-invasively, and wherein the normalization device comprises a substrate comprising a plurality of compartments, each of the plurality of compartments holding a sample of a known material; accessing, by the computer system, a second medical image of a region of a subject and the normalization device, wherein the second medical image is obtained non-invasively, and wherein at least one of the following applies: one or more first variable acquisition parameters associated with capture of the first medical image differ from a corresponding one or more second variable acquisition parameters associated with capture of the second medical image, a first image capture technology used to capture the first medical image differs from a second image capture technology used to capture the second medical image, and a first contrast agent used during the capture of the first medical image differs from a second contrast agent used during the capture of the second medical image; identifying, by the computer system, image parameters of the normalization device within the first medical image; generating a normalized first medical image for the algorithm-based medical imaging analysis based in part on the first identified image parameters of the normalization device within the first medical image; identifying, by the computer system, image parameters of the normalization device within the second medical image; and generating a normalized second medical image for the algorithm-based medical imaging analysis based in part on the second identified image parameters of the normalization device within the second medical image, wherein the computer system comprises a computer processor and an electronic storage medium. Embodiment 327: The computer-implemented method of Embodiment 326, wherein the algorithm-based medical imaging analysis comprises an artificial intelligence or machine learning imaging analysis algorithm, and wherein the artificial intelligence or machine learning imaging analysis algorithm was trained using images that included the normalization device. Embodiment 328: The computer-implemented method of any of Embodiments 326-327, wherein the first medical image and the second medical image each comprise a CT image and the one or more first variable acquisition parameters and the one or more second variable acquisition parameters comprise one or more of a kilovoltage (kV), kilovoltage peak (kVp), a milliamperage (mA), or a method of gating. Embodiment 329: The computer-implemented method of Embodiment 328, wherein the method of gating comprises one of prospective axial triggering, retrospective ECG helical gating, and fast pitch helical. Embodiment 330: The computer-implemented method of any of Embodiments 326-329, wherein the first image capture technology and the second image capture technology each comprise one of a dual source scanner, a single source scanner, dual energy, monochromatic energy, spectral CT, photon counting, and different detector materials. 
Embodiment 331: The computer-implemented method of any of Embodiments 326-330, wherein the first contrast agent and the second contrast agent each comprise one of an iodine contrast of varying concentration or a non-iodine contrast agent. Embodiment 332: The computer-implemented method of any of Embodiments 326-327, wherein the first image capture technology and the second image capture technology each comprise one of CT, x-ray, ultrasound, echocardiography, intravascular ultrasound (IVUS), MR imaging, optical coherence tomography (OCT), nuclear medicine imaging, positron-emission tomography (PET), single photon emission computed tomography (SPECT), or near-field infrared spectroscopy (NIRS). Embodiment 333: The computer-implemented method of any of Embodiments 326-332, wherein a first medical imager that captures the first medical image is different than a second medical imager that captures the second medical image. Embodiment 334: The computer-implemented method of any of Embodiments 326-333, wherein the subject of the first medical image is different than the subject of the second medical image. Embodiment 335: The computer-implemented method of any of Embodiments 326-333, wherein the subject of the first medical image is the same as the subject of the second medical image. Embodiment 336: The computer-implemented method of any of Embodiments 326-333, wherein the subject of the first medical image is different than the subject of the second medical image. Embodiment 337: The computer-implemented method of any of Embodiments 326-336, wherein the capture of the first medical image is separated from the capture of the second medical image by at least one day. Embodiment 338: The computer-implemented method of any of Embodiments 326-337, wherein the capture of the first medical image is separated from the capture of the second medical image by at least one day. Embodiment 339: The computer-implemented method of any of Embodiments 326-338, wherein a location of the capture of the first medical image is geographically separated from a location of the capture of the second medical image. Embodiment 340: The computer-implemented method of any of Embodiments 326-339, wherein the normalization device comprises the normalization device of any of Embodiments 271-325. Embodiment 341: The computer-implemented method of any of Embodiments 326-340, wherein the region of the subject comprises a coronary region of the subject. Embodiment 342: The computer-implemented method of any of Embodiments 326-341, wherein the region of the subject comprises one or more coronary arteries of the subject. Embodiment 343: The computer-implemented method of any of Embodiments 326-340, wherein the region of the subject comprises one or more of carotid arteries, renal arteries, abdominal aorta, cerebral arteries, lower extremities, or upper extremities of the subject. 
Additional Detail—Normalization Device 
As described above and throughout this application, in some embodiments, a normalization device may be used to normalize and/or calibrate a medical image of a patient before that image is analyzed by an algorithm-based medical imaging analysis. This section provides additional detail regarding embodiments of the normalization device and embodiments of the use thereof. 
In general, the normalization device can be configured to provide at least two functions: (1) the normalization device can be used to normalize and calibrate a medical image to a known relative spectrum; and (2) the normalization device can be used to calibrate a medical image such that pixels within the medical image representative of various materials can be normalized and calibrated to materials of known absolute density—this can facilitate and allow identification of materials within the medical image. In some embodiments, each of these two functions play a role in providing accurate algorithm-based medical imaging analysis as will be described below. For example, it can be important to normalize and calibrate a medical image to a known relative spectrum. As a specific example, a CT scan generally produces a medical image comprising pixels represented in gray scale. However, when two CT scans are taken under different conditions, the gray scale spectrum in the first image may not (and likely will not) match the gray scale spectrum of the second image. That is, even if the first and second CT images represent the same subject, the specific grayscale values in the two images, even for the same structure may not (and likely will not) match. A pixel or group of pixels within the first image that represents a calcified plaque buildup within a blood vessel, may (and likely will) appear different (a different shade of gray, for example, darker or lighter) than a pixel or group of pixels within the second image, even if the pixel or groups of pixels within the first and second images is representative of the same calcified plaque buildup. Moreover, the differences between the first and second images may not be linear. That is, the second image may not be uniformly lighter or darker than the first image, such that it is not possible to use a simple linear transform to cause the two images to correspond. Rather, it is possible that, for example, some regions in the first image may appear lighter than corresponding regions in the second image, while at the same time, other regions in the first image may appear darker than corresponding regions in the second image. In order to normalize the two medical images such that each appears on the same grayscale spectrum, a non-linear transform may be necessary. Use of the normalization device can facilitate and enable such a non-linear transform such that different medical images, that otherwise would not appear to have the same grayscale spectrum, are adjusted so that the same grayscale spectrum is used in each image. A wide variety of factors can contribute to different medical images, even of the same subject, falling on different grayscale spectrums. This can include, for example, different medical imaging machine parameters, different parameters associated with the patient, differences in contrast agents used, and/or different medical image acquisition parameters. It can be important to normalize and calibrate a medical image to a known relative spectrum to facilitate the algorithm-based analysis of the medical image. As described herein, some algorithm-based medical image analysis can be performed using artificial intelligence and/or machine learning systems. Such artificial intelligence and/or machine learning systems can be trained using a large number of medical images. 
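As a non-limiting illustration of the relative-spectrum normalization described above, the following sketch (in Python) assumes that the observed intensities of the device's known samples have already been measured in a given image. It fits a monotonic piecewise-linear (hence generally non-linear) mapping from observed values to reference values and applies the same mapping to every pixel. The reference values and function names are hypothetical and are not taken from any particular embodiment.

```python
# Minimal sketch: map an image onto a common intensity spectrum using the
# known samples of a normalization device. All numeric values are placeholders.
import numpy as np

# Reference intensities assigned to the device's known samples, in a fixed
# order (assumed values on a Hounsfield-like scale).
REFERENCE_SAMPLE_VALUES = np.array([0.0, 30.0, 130.0, 220.0, 1000.0])

def normalize_to_reference(image: np.ndarray,
                           observed_sample_values: np.ndarray) -> np.ndarray:
    """Apply a monotonic piecewise-linear mapping that sends the observed
    intensities of the device's samples onto the reference spectrum, and
    apply the same mapping to every pixel of the image."""
    order = np.argsort(observed_sample_values)          # np.interp needs increasing xp
    observed = observed_sample_values[order]
    reference = REFERENCE_SAMPLE_VALUES[order]
    # Piecewise-linear interpolation between control points; pixels outside
    # the sampled range are clamped to the nearest endpoint.
    return np.interp(image, observed, reference)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scan = rng.uniform(-50, 1100, size=(4, 4))
    # Observed intensities of the same physical samples in this particular scan.
    observed = np.array([-10.0, 40.0, 160.0, 260.0, 950.0])
    print(normalize_to_reference(scan, observed))
```

Applying the same fitted mapping to two differently acquired images is what places both images on a common relative spectrum in this simplified picture.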
The training and performance of such artificial intelligence and/or machine learning systems can be improved when the medical images are all normalized and calibrated to the same or similar relative scale. Additionally, the normalization device can be used to normalize or calibrate a medical image such that pixels within the medical image representative of various materials can be normalized and calibrated to materials of known absolute density. For example, when analyzing an image of a coronary region of a patient to characterize, for example, calcified plaque buildup, it can be important to accurately determine which pixels or groups of pixels within the medical image correspond to regions of calcified plaque buildup. Similarly, it can be important to be able to accurately identify contrast agents, blood, vessel walls, fat, and other materials within the image. The use of the normalization device can facilitate and enable identification of specific materials within the medical image. The normalization devices described throughout this application can be configured to achieve these two functions. In particular, a normalization device can include a substrate or body configured with compartments that hold different samples. The arrangement (e.g., the spatial arrangement) of the samples is known, as well as other characteristics associated with each of the samples, such as the material of the sample, the volume of the sample, the absolute density of the sample, and the relative density of the sample relative to that of the other samples in the normalization device. During use, in some embodiments, the normalization device can be included in the medical imager with the patient, such that an image of the normalization device—including the known samples positioned therein—appears in the image. An image-processing algorithm can be configured to recognize the normalization device within the image and use the known samples of the normalization device to perform the two functions described above. For example, the image-processing algorithm can detect the known samples within the medical image and use the known samples to adjust the medical image such that it uses a common or desired relative spectrum. For example, if the normalization device includes a sample of calcium of a given density, then that sample of calcium will appear with a certain grayscale value within the image. Due to the various different conditions under which the medical image was taken, however, the particular grayscale value within the image will likely not correspond to the desired relative spectrum. The image-processing algorithm can then adjust the grayscale value in the image such that it falls at the appropriate location on the desired relative spectrum. At the same time, the image-processing algorithm can adjust other pixels within the image that do not correspond to the normalization device but that share the same grayscale value within the medical image, such that those pixels fall at the appropriate location on the desired relative spectrum. This can be done for all pixels in the image. As noted previously, this transformation may not be linear. Once complete, however, the pixels of the medical image will be adjusted such that they all fall on the desired relative grayscale spectrum. In this way, two images of the same subject captured under different conditions, and thus initially appearing differently, can be adjusted so that they appear the same (e.g., appearing on the same relative grayscale spectrum). 
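Continuing the illustration, the sketch below shows one simple way the known samples might be read out of an image once the device has been located (e.g., via its fiducials): the mean intensity inside a small disk around each compartment center is taken as that sample's observed value. The compartment names, coordinates, and radius are hypothetical placeholders rather than a prescribed layout.

```python
# Minimal sketch: measure the observed intensity of each known compartment of
# a normalization device, given its (assumed) pixel coordinates in the image.
import numpy as np

# Hypothetical compartment centers (row, col) and a sampling radius in pixels.
COMPARTMENT_CENTERS = {"calcium_220": (12, 40), "contrast_a1": (30, 44),
                       "water": (48, 40), "fat": (30, 20)}
RADIUS = 3

def measure_compartments(image: np.ndarray,
                         centers=COMPARTMENT_CENTERS,
                         radius: int = RADIUS) -> dict:
    """Return the mean intensity inside a small disk around each compartment
    center. Assumes the device has already been located in the image."""
    rows, cols = np.indices(image.shape)
    observed = {}
    for name, (r, c) in centers.items():
        mask = (rows - r) ** 2 + (cols - c) ** 2 <= radius ** 2
        observed[name] = float(image[mask].mean())
    return observed

if __name__ == "__main__":
    demo = np.zeros((64, 64))
    demo[10:15, 38:43] = 220.0  # crude stand-in for the calcium compartment
    print(measure_compartments(demo))
```

The observed values returned here are the kind of input the spectrum-mapping step sketched earlier would consume.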
Additionally, the normalization device can be used to identify particular materials within the medical image. For example, because the samples of the normalization device are known (e.g., known material, volume, position, absolute density, and/or relative density), pixels representative of the patient's anatomy can be compared against the materials of the normalization device (or a scale established by the materials of the normalization device) such that the materials of the patient's anatomy corresponding to the pixels can be identified. As a simple example, the normalization device can include a sample of calcium of a given density. Pixels that appear the same as the pixels that correspond to the sample of calcium can be identified as representing calcium having the same density as the sample. In some embodiments, the normalization device is designed such that the samples contained therein correspond to the disease or condition for which the resulting image will be analyzed, the materials within the region of interest of the patient's anatomy, and/or the type of medical imager that will be used. By using a normalization device within the image, the image-processing algorithms described throughout this application can be easily expanded for use with other imaging modalities, including new imaging modalities now under development or yet to be developed. This is because, when these new imaging modalities come online, suitable normalization devices can be designed for use therewith. Further, although this application primarily describes use of the normalization device for diagnosis and treatment of coronary conditions, other normalization devices can be configured for use in other types of medical procedures or diagnosis. This can be done by selecting samples that are most relevant to the procedure to be performed or disease to be analyzed. The normalization devices described in this application are distinguishable from conventional phantom devices that are commonly used in medical imaging applications. Conventional phantom devices are typically used to calibrate a medical imager to ensure that it is working properly. For example, conventional phantom devices are often imaged by themselves to ensure that the medical image produces an accurate representation of the phantom device. Conventional phantom devices are imaged periodically to verify and calibrate the machine itself. These phantom devices, are not, however, imaged with the patient and/or used to calibrate or normalize an image of the patient. In contrast, the normalization device is often imaged directly with the patient, especially where the size of the normalization device and the imaging modality permit the normalization device and the patient to be imaged concurrently. If concurrent image is not possible, or in other embodiments, the normalization device can be imaged separately from the patient. However, in these cases, it is important that the image of the patient and the image of the normalization device be imaged under the same conditions. Rather than verifying that the imaging device is functioning properly, the normalization device is used during an image-processing algorithm to calibrate and normalize the image, providing the two functions discussed above. To further illustrate the difference between conventional phantom devices and the normalization device, it will be noted that use of the normalization device does not replace the use of a conventional phantom. Rather, both may be used during an imaging procedure. 
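The material-identification function described above can be illustrated, in a deliberately simplified form, by labeling each pixel of an already-normalized image with the nearest known sample density. The material list and densities below are illustrative assumptions, not values prescribed by any embodiment.

```python
# Minimal sketch: label pixels by the nearest known sample of the device,
# after normalization, as a crude illustration of material identification.
import numpy as np

KNOWN_MATERIALS = {"fat": -90.0, "water": 0.0, "low_attenuation_plaque": 30.0,
                   "calcium_220": 220.0, "calcium_1000": 1000.0}

def identify_materials(normalized_image: np.ndarray) -> np.ndarray:
    """Return an array of material names, one per pixel, chosen as the known
    sample whose reference density is closest to the pixel value."""
    names = list(KNOWN_MATERIALS)
    targets = np.array([KNOWN_MATERIALS[n] for n in names])
    # Distance of every pixel to every reference density; pick the closest.
    idx = np.abs(normalized_image[..., None] - targets).argmin(axis=-1)
    return np.array(names, dtype=object)[idx]

if __name__ == "__main__":
    sample = np.array([[5.0, 215.0], [-80.0, 990.0]])
    print(identify_materials(sample))
```

A production analysis would of course use far richer criteria (neighborhood context, segmentation, learned models); the point here is only that known samples anchor the pixel-to-material comparison.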
For example, first, a conventional phantom can be imaged alone. The resulting image of the phantom can be reviewed and analyzed to determine whether the imaging device is correctly calibrated. If it is, the normalization device and the patient can be imaged together. The resulting image can be analyzed to detect the normalization device within the image, adjust the pixels of the image based on the representation of the normalization device within the image, and then, identify specific materials within the image using the normalization device as described above. Several embodiments of normalization devices have been described above with reference to FIGS. 12A-12I. FIG. 15 presents another embodiment of a normalization device 1500. In the illustrated embodiment, the normalization device 1500 is configured for use with medical images of a coronary region of a patient for analysis and diagnosis of coronary conditions; however, the normalization device 1500 may also be used or may be modified for use with other types of medical images and for other types of medical conditions. As will be described below, in the illustrated embodiment, the normalization device 1500 is configured so as to mimic a blood vessel of a patient, and thus may be particularly suitable for use with analysis and diagnosis of conditions involving a patient's blood vessels. As shown in FIG. 15, the normalization device 1500 comprises a substrate having a plurality of compartments holding samples formed therein. In the illustrated embodiment, the samples are labeled A1-A4, B1-B4, and C1-C4. As shown in FIG. 15, the samples A1-A4 are positioned towards the center of the normalization device 1500, while the samples B1-B4 and C1-C4 are generally arranged around the samples A1-A4. For each of the samples, the material, volume, absolute density, relative density, and spatial configuration are known. The samples themselves can be selected such that the normalization device 1500 generally corresponds to a cross-section of a blood vessel. For example, in one embodiment, the samples A1-A4 comprise samples of contrast agents having different densities or concentrations. Examples of different contrast agents have been provided previously and those contrast agents (or others) can be used here. In general, during a procedure, contrast agents flow through a blood vessel. Accordingly, this can be mimicked by placing the contrast agents as samples A1-A4, which are at the center of the normalization device. In some embodiments, one or more of the samples A1-A4 can be replaced with other samples that may flow through a blood vessel, such as blood. The samples B1-B4 can be selected to comprise samples that would generally be found on or around an inner blood vessel wall. For example, in some embodiments, one or more of the samples B1-B4 comprise samples of calcium of different densities, and/or one or more of the samples B1-B4 comprise samples of fat of different densities. Similarly, the samples C1-C4 can be selected to comprise samples that would generally be found on or around an outer blood vessel wall. For example, in some embodiments, one or more of the samples C1-C4 comprise samples of calcium of different densities, and/or one or more of the samples C1-C4 comprise samples of fat of different densities. In one example, the samples B1, B3, and C4 comprise fat samples of different densities, and the samples B2, B4, C1, C2, and C3 comprise calcium samples of different densities. 
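For illustration only, a vessel-mimicking layout of the kind described for the normalization device 1500 could be represented in software roughly as follows. The specific materials, densities, and ring assignments are placeholders chosen for the sketch and are not taken from FIG. 15.

```python
# Illustrative description of a vessel-mimicking compartment layout:
# ring "A" (center) holds contrast dilutions, ring "B" inner-wall materials,
# ring "C" outer-wall materials. Densities are placeholder values.
from dataclasses import dataclass

@dataclass(frozen=True)
class Compartment:
    label: str        # e.g. "A1"
    material: str     # e.g. "iodine_contrast"
    density_hu: float # assumed reference density
    ring: str         # "A" = lumen, "B" = inner wall, "C" = outer wall

DEVICE_LAYOUT = [
    Compartment("A1", "iodine_contrast", 300.0, "A"),
    Compartment("A2", "iodine_contrast", 400.0, "A"),
    Compartment("B1", "fat", -90.0, "B"),
    Compartment("B2", "calcium", 220.0, "B"),
    Compartment("C3", "calcium", 400.0, "C"),
    Compartment("C4", "fat", -70.0, "C"),
]

# Example: list the known wall materials, which an analysis step might use
# as reference points when examining pixels near a vessel wall.
wall_refs = [c for c in DEVICE_LAYOUT if c.ring in ("B", "C")]
print([(c.label, c.material, c.density_hu) for c in wall_refs])
```

Keeping the layout as structured data in this way is one plausible route to the adjacency-aware comparisons discussed in the following paragraphs.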
Other arrangements are also possible, and, in some embodiments, one or more of the compartments may hold other samples, such as, for example, air, tissue, radioactive contrast agents, gold, iron, other metals, distilled water, water, or others. The embodiment of the normalization device 1500 of FIG. 15 further illustrates several additional features that may be present in some normalization devices. One such feature is represented by the different-sized compartments or volumes for the samples. For example, in the illustrated embodiment, the sample B1 has a smaller volume than the sample B2. Similarly, the sample C4 has a volume that is larger than the sample C3. This illustrates that, in some embodiments, the volumes of the samples need not all be of the same size. In other embodiments, the volumes of the samples may be the same size. The embodiment of FIG. 15 also illustrates that various samples can be placed adjacent to (e.g., immediately adjacent to or juxtaposed with) other samples. This can be important because, in some cases of medical imaging, the radiodensity of one pixel may affect the radiodensity of an adjacent pixel. Accordingly, in some embodiments, it can be advantageous to configure the normalization device such that material samples that are likely to be found in proximity to each other are similarly located in proximity to or adjacent to each other on the normalization device. The blood vessel-like arrangement of the normalization device 1500 may advantageously provide such a configuration. In the illustrated embodiment, each sample A1-A4 is positioned so as to be adjacent to two other samples A1-A4 and to two samples B1-B4. Samples C1-C4 are each positioned so as to be adjacent to two other samples C1-C4 and to a sample B1-B4. Although a particular configuration is illustrated, various other configurations for placing samples adjacent to one another can be provided. Although the normalization device 1500 is illustrated within a plane, the normalization device 1500 will also include a depth dimension such that each of the samples A1-A4, B1-B4, and C1-C4 comprises a three-dimensional volume. As noted previously, the normalization device can be calibrated specifically for different types of medical imagers, as well as for different types of diseases. The described embodiment of the normalization device 1500 may be suitable for use with CT scans and for the analysis of coronary conditions. When configuring the normalization device for use with other types of medical imagers, the specific characteristics of the medical imager must be accounted for. For example, in an MRI machine, it can be important to calibrate for the different depths or distances to the coils. Accordingly, a normalization device configured for use with MRI may have a sufficient depth or thickness that generally corresponds to the thickness of the body (e.g., from front to back) that will be imaged. In these cases, the normalization device can be placed adjacent to the patient such that a top of the normalization device is positioned at the same height as the patient's chest, while the bottom of the normalization device is positioned at the same height as the patient's back. In this way, the distances between the patient's anatomy and the coils can be mirrored by the distances between the normalization device and the coils. In some embodiments, the sample material can be inserted within tubes positioned within the normalization device. 
As noted previously, in some embodiments, the normalization device may be configured to account for various time-based changes. That is, in addition to providing a three-dimensional (positional) calibration tool, the normalization device may provide four-dimensional (positional plus time) calibration tool. This can help to account for changes that occur in time, for example, as caused by patient movement due to respiration, heartbeat, blood flow, etc. To account for heartbeat, for example, the normalization device may include a mechanical structure that causes it to beat at the same frequency as the patient's heart. As another example of a time-based change, the normalization device can be configured to simulate spreading of a contrast agent through the patient's body. For example, as the contrast agent is injected into the body, a similar sample of contrast agent can be injected into or ruptured within the normalization device, allowing for a time-based mirroring of the spread. Accounting for time-based changes can be particularly important where patient images are captured over sufficiently large time steps that, for example, cause the image to appear blurry. In some embodiments, artificial intelligence or other image-processing algorithms can be used to reconstruct clear images from such blurry images. In these cases, the algorithms can use the normalization device as a check to verify that the transformation of the image is successful. For example, if the normalization device (which has a known configuration) appears correctly within the transformed image, then an assumption can be made that the rest of the image has been transformed correctly as well. Medical Reports Overview Traditional reporting of medical information is designated for physician or other provider consumption and use. Diagnostic imaging studies, laboratory blood tests, pathology reports, EKG readings, etc. are all interpreted and presented in a manner which is often difficult to understand or even unintelligible by most patients. The text, data and images from a typically report usually assumes that the reader has significant medical experience and education, or at least familiarity with medical jargon that, while understandable by medical professionals, are often opaque to the non-medical layperson patient. To be concise, the medical reports do not include any sort of background educational content and it assumes that the reader has formal medical education and understands the meaning of all of the findings in the report as well as the clinical implications of those findings for the patient. Further, often findings are seen in concert with each other for specific disease states (e.g., reduced ejection fraction is often associated with elevated left ventricular volumes), and these relationships are not typically reported as being as part of a constellation of symptoms associated with a disease state or syndrome, so the non-medical layperson patient cannot understand the relationship of findings to his/her disease state. It is then the responsibility and role of the medical provider to “translate” the reports into simple language which is typically verbally communicated with the patient at the time of their encounter with the provider be it in person or more recently during telehealth visits. The provider explains what the test does, how it works, what its limitations may be, what the patient's results were and finally what those results might mean for the patient's future. 
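A minimal sketch of the verification step mentioned above is given below: a transformed or reconstructed image is accepted only if the device's known samples land near their expected values. The tolerance, sample names, and values are assumptions made for the sketch.

```python
# Minimal sketch: verify a transformed/reconstructed image by checking that
# the normalization device's known samples land near their expected values.
def device_check(measured: dict, expected: dict, tol_hu: float = 25.0) -> bool:
    """Return True if every measured compartment intensity is within tol_hu
    of its expected (known) value; intended as a sanity check after an image
    transformation or de-blurring step."""
    return all(abs(measured[name] - expected[name]) <= tol_hu
               for name in expected)

expected = {"water": 0.0, "calcium_220": 220.0, "fat": -90.0}
measured = {"water": 4.2, "calcium_220": 212.7, "fat": -81.5}
print(device_check(measured, expected))  # True under the default tolerance
```

In practice the acceptance criterion would be tuned to the imaging modality and the magnitude of the time-based effects being corrected; the check above only conveys the idea of using the device as ground truth.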
Unfortunately, patients frequently are unable to fully interpret and retain all the information that the provider might discuss with them in a typical short 10-15 minute patient encounter. The patients are then left confused and only partly educated on the results of their medical reports. Often, the provider will give the patient a copy of the report both for their records as well as to be able to review on their own after the patient encounter. Even with the patient report in hand and after hearing the physician's explanation, the patient often remains incompletely informed regarding the results and their meaning. This can be a major source of frustration for both the provider as well as the patient. The patient does not fully understand the results of the study and their implications. Frequently, patients will either reach out to friends and family to help understand the results of their examination or they will perform searches on the Internet for additional background education and meaning. Frequently, however, this is not successful, as the patient may not understand even what they are supposed to be searching for or asking about the disease process, and many online health information sites may be inaccurate or misleading. All of this can impact the patient's current medical status and his or her relationship with the healthcare provider, as well as future health outcomes, including adherence to therapy and to future diagnostic testing. In response to this, providers sometimes refer patients to websites or provide them with written materials that may help explain their test findings and how this may relate to disease. But these are "generic" materials that are not patient-specific, do not incorporate patient-specific findings, and do not relate to a patient's specific conditions or symptoms. To date, however, no methods have been devised or described that combine patient-facing educational content with the patient's specific individual report findings in a way that can be easily accessed and reviewed, and that is available at the patient's leisure for repeated consumption as the patient may require. Thus, it is advantageous to provide systems and methods that enable communication of these findings beyond a simple paper report by leveraging patient-specific information to generate reports in more advanced and contemporary forms, such as movies, mixed reality, or holographic environments. Various aspects of systems and methods of generating a medical report dataset and a corresponding medical report for a specific patient are disclosed herein. 
In one example, a process includes receiving selection of a report generation request, for a patient, for display on a display of a computing system having one or more computer processors and one or more displays, receiving patient information from a patient information source storing said patient information, the patient information associated with the report generation request, determining patient characteristics associated with the report generation request based on the patient information, accessing a data structure storing associations between patient characteristics and respective patient medical information, medical images, and test results of one or more tests performed on the patient, and storing associations between patient characteristics and multimedia report data that is not related to a specific patient, selecting from the data structure a report package associated with the patient medical information and the report generation request, wherein the selected report package comprises a patient greeting in the language of the patient and presented by an avatar selected based on the patient data, a multimedia presentation conveying an explanation of the test performed, the results of the test, an explanation of the results of the test, and a conclusion segment presented by the avatar, wherein at least a portion of the multimedia presentation includes report multimedia data from the report data source, test results from the results information source, medical information from the medical information source, and medical images related to the test from the medical image source, automatically generating the selected report package, and displaying the selected report package on the one or more displays, wherein the selected report package is configured to receive input from a user of the computing system that is usable in interacting with the selected patient report. Systems for generating medical reports can utilize existing patient medical information, new images and test data, and/or contemporaneous information of the patient received from, for example, a medical wearable device monitoring one or more physiological conditions or characteristics of the patient. Such systems can be configured to automatically generate a desired report. In some embodiments, the systems may use medical practitioner and/or patient interactive inputs to determine certain aspects to include in the medical report. In one example, a system for automatically generating a medical report can include a patient information source providing stored patient information in a patient information format, a medical information source providing medical information in a medical information format, and a medical image source providing medical images in a medical image format. The medical images can be any images depicting a portion of a patient's anatomy, for example, one or more arterial beds. In an example, an arterial bed includes arteries of one of the aorta, carotid arteries, lower extremity arteries, renal arteries, or cerebral arteries. The medical images can be any images depicting one or more arterial beds. In an example, a first arterial bed includes arteries of one of the aorta, carotid arteries, lower extremity arteries, renal arteries, or cerebral arteries, and a second arterial bed includes arteries of one of the aorta, carotid arteries, lower extremity arteries, renal arteries, or cerebral arteries that are different than the arteries of the first arterial bed. 
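By way of a rough, non-limiting sketch, the report-package selection step described above might be organized as a lookup keyed by test type and patient characteristics such as language and age band. The keys, package contents, and fallback behavior shown here are illustrative assumptions rather than a prescribed schema.

```python
# Minimal sketch of the report-selection step: derive patient characteristics,
# then look up a report package keyed by (test type, language, age band).
from dataclasses import dataclass, field

@dataclass
class ReportPackage:
    greeting_avatar: str
    segments: list = field(default_factory=list)  # multimedia segment ids

REPORT_INDEX = {  # hypothetical association store (not patient-specific)
    ("ccta", "en", "adult"): ReportPackage(
        "avatar_en_adult", ["explain_ccta", "results", "conclusion"]),
    ("ccta", "es", "adult"): ReportPackage(
        "avatar_es_adult", ["explica_ccta", "resultados", "conclusion"]),
}

def select_report_package(test_type: str, patient: dict) -> ReportPackage:
    """Pick the stored package matching the test and the patient's
    characteristics, falling back to a default English adult package."""
    age_band = "adult" if patient.get("age", 0) >= 18 else "pediatric"
    key = (test_type, patient.get("language", "en"), age_band)
    return REPORT_INDEX.get(key, REPORT_INDEX[("ccta", "en", "adult")])

print(select_report_package("ccta", {"age": 54, "language": "es"}).greeting_avatar)
```

Patient-specific content (test results, images) would then be merged into the selected package's segments, which the sketch deliberately leaves out.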
In some embodiments, a normalization device (e.g., as described herein) is used when generating the medical images, and the information from the normalization device is used when processing the medical images. The medical images can be processes using any of the methods, processes, and/or systems described herein, or other methods, processes, and/or systems. Any of the methods described herein can be based on imaging using the normalization device to improve quality of the automatic image assessment of the generated images. The system for automatically generating a medical report can also include a test results information source providing test results of one or more test performed on the patient in a results information format, a report data source, the report data source providing multimedia data for including in a medical report, the multimedia data indexed by at least some of the stored patient information relating to non-medical characteristics of the patient, a report generation interface unit to receive said patient information, the patient information including non-medical characteristics of a patient including characteristics indicative of the patients age, gender, language, race, education level, and/or culture, and the like, wherein said report generation interface unit can be adapted to automatically create medical report data links associated with said patient characteristics and associated with report multimedia data on the report data source that is indexed by said respective patient characteristics based on a received report generation request associated with the patient and a test, and wherein the report generation interface unit is further adapted to automatically create links to patient information, medical information, medical images, and test results associated with the patient and the test based on the report generation request. The system further includes a medical report dataset generator adapted to automatically access and retrieve the report multimedia data, patient information, medical information, medical images, the test results using the medical report data links, and automatically generate a medical report associated with the test and the patient based on the report multimedia data, patient information, medical information, medical images, the test results, the medical report conveying a patient greeting in the language of the patient and presented by an avatar selected based on the patient data, a multimedia presentation conveying an explanation of the test performed, of the results of the test, an explanation of the results of the test, and a conclusion segment presented by the avatar, wherein at least a portion of the multimedia presentation includes report multimedia data from the report data source, test results from the results information source, medical information from the medical information source, and medical images related to the test from the medical image source. As described herein, one innovation relates to generating interactive medical data reports. More particularly, the present application describes methods and systems for generating interactive coronary artery medical reports that are optimized for interactive presentation and clearer understanding by the patient. One innovation includes a method of generating a medical report of a medical test associated with one or more patient tests. 
The method can include receiving an input of a request of a medical report to generate for a particular patient, the request indicating a selection of a format of the medical report, and receiving patient information from a patient information source storing said patient information, where the patient information is associated with the report generation request. The method can include determining patient characteristics associated with the patient based on the patient information, and accessing one or more data structures storing associations of types of medical reports, patient characteristics, and respective patient medical information, medical images, and test results of one or more tests performed on the patient. The data structures are structured to store associations between patient characteristics and multimedia report data that is not related to a specific patient. Such methods can include accessing report content associated with the patient's medical information and the medical report request using the one or more data structures. The content of the medical report can include multimedia content including a greeting in the language of the patient, an explanation segment of a type of test conducted, a results segment for conveying test results, an explanation segment explaining results of the test, and a conclusion segment, wherein at least a portion of the multimedia content includes report data from the report data source, test results from the results information source, medical information from the medical information source, and medical images related to the test from the medical image source. Such methods can also include automatically generating the requested medical report using the accessed report content based at least in part on the selected format of the medical report. Such methods can also include displaying the medical report to the patient. In some embodiments, the multimedia information further comprises data for generating and displaying an avatar on a display, the avatar being included in the medical report. In some embodiments, the method further comprises generating the avatar based on one or more patient characteristics. In some embodiments, the patient characteristics include one or more of age, race, and gender. In some embodiments of such methods, a method can include displaying the medical report on one or more displays of a computer system, receiving user input while the medical report is displayed, and changing at least one portion of the medical report based on said received user input. In some embodiments, displaying the medical report comprises displaying the medical report on the patient's smart device. In some embodiments, the method includes storing the medical report. In some embodiments, the one or more data structures are configured to store information representative of the severity of the patient's medical condition, wherein selection of the content of the segments of the medical report is based at least in part on the stored information representative of the severity of the patient's medical condition. Such methods can also include selecting a greeting segment for the medical report based on one or more of the patient's race, age, gender, ethnicity, culture, language, education, geographic location, and severity of prognosis. The method can also include selecting multimedia content for the explanation segment based on one or more of the patient's race, age, gender, ethnicity, culture, language, education, geographic location, and severity of prognosis. The method can also include selecting multimedia content for the explanation of the results segment based on one or more of the patient's race, age, gender, ethnicity, culture, language, education, geographic location, and severity of prognosis. The method can also include selecting multimedia content for the conclusion segment based on one or more of the patient's race, age, gender, ethnicity, culture, language, education, geographic location, and severity of prognosis.
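As an illustration of this characteristic- and severity-driven segment selection, the following is a hedged Python sketch under the assumption that content is keyed by segment and severity; the table contents, keys, and function name are invented and are not the claimed data structure.

```python
# Hypothetical sketch of severity- and characteristic-driven segment selection.
SEGMENT_LIBRARY = {
    # (segment, severity) -> media asset id (illustrative values only)
    ("explanation", "normal"): "explain_basic.mp4",
    ("explanation", "severe"): "explain_detailed.mp4",
    ("conclusion", "normal"): "conclusion_reassuring.mp4",
    ("conclusion", "severe"): "conclusion_followup.mp4",
}

def select_segment(segment: str, severity: str, language: str) -> str:
    asset = SEGMENT_LIBRARY[(segment, severity)]
    # Language-specific variants could be stored as separate assets; here the
    # asset id is simply tagged with the patient's language.
    return f"{language}/{asset}"

assert select_segment("conclusion", "severe", "en") == "en/conclusion_followup.mp4"
```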
In some embodiments, the one or more data structures are configured to store associations related to normality, risk, treatment type, and treatment benefit of medical conditions, and the method further includes automatically determining normality, risk, treatment type, and treatment benefit to include in the report based on the patient's test results and the stored associations related to normality, risk, treatment type, and treatment benefits. In some embodiments, the method can further include generating an updated medical report based on a previously generated medical report, new test results, and an input by a medical practitioner.
Example System and Method for Automatically Generating Coronary Artery Medical Data
Described herein are systems and methods for generating medical reports that provide an in-depth explanation of what the medical test or examination was intended to look for, the results of the patient's specific medical findings, and what those findings may mean to the patient. The medical report can be an automatically generated, understandable, educational, and empowering movie of individualized, aggregated personal medical information. One example is a computer-implemented method of generating a multi-media medical report for a patient, the medical report associated with one or more tests of the patient. One or more images used to determine information for the medical report, and/or one or more of the images used in the medical report, can be based on images generated using a normalization device described herein, the normalization device improving accuracy of the non-invasive medical image analysis. In an example, a method comprises receiving an input of a request to generate the medical report for a patient, the request indicating a format for the medical report, receiving patient information relating to the patient, the patient information associated with the report generation request, determining one or more patient characteristics associated with the patient using the patient information, accessing associations between types of medical reports and patient medical information, wherein the patient medical information includes medical images relating to the patient and test results of one or more tests that were performed on the patient, the medical images generated using the normalization device, and accessing report content associated with the patient's medical information and the medical report requested. The report content can include multimedia content that is not related to a specific patient.
For example, the multimedia content can include a greeting segment in the language of the patient, an explanation segment explaining a type of test conducted, a results segment for conveying test results, an explanation segment explaining results of the test, and a conclusion segment, wherein at least a portion of the multimedia content includes a test result and one or more medical images that are related to a test performed on the patient. The method can further include generating, based at least in part on the format of the medical report, the requested medical report using the patient information and report content. Certain components of certain embodiments of such systems and methods are described herein. An example based on cardiac CT study imaging in a single examination is provided.
1) Transform individual patient-specific medical information into an understandable movie. This invention combines patient-facing medical education with patient-specific medical results in a manner that has not been previously performed. While many online sites explain medical disease processes, they do not have the results of the patients' medical tests, and the patients often do not know if they are even looking in the right area. By combining patient-facing educational background as well as specific analysis of their test results and meaning, the patients will be educated in a manner that empowers them to make better health decisions. This approach can then combine additional materials beyond just the present test findings, including additional information derived from patient history, physical examination, the clinical electronic medical record, wearable fitness and wellness trackers, patient-specific web browser search history, and so on.
2) Provide an in-depth explanation of the test performed. To understand what the results of a test may be, patients must understand what the test was intended to do, an explanation regarding how it works, as well as the potential range of results, both normal and abnormal. An explanation of the test performed would include simple, understandable descriptions of what the test is intended to find and what the range of possibilities of the results may be. In the example provided, a coronary artery CT angiogram is intended to evaluate if there are blockages or plaque within the patient's coronary arteries. In order to understand the results, a patient needs to understand that the test is intended to evaluate the blood vessels that feed the heart muscle, and that by injecting contrast and acquiring CT images, their coronary arteries can be evaluated for the presence of plaque and associated blockages. This understanding can be conveyed to the patient using the patient's actual images so that there is increased engagement and understanding.
3) Provide the results of the patient's individual patient-specific examination. Having educated the patient regarding what test they had as well as the range of all possible results, they are now better empowered to understand what their specific results are in the context of the range of potential results from the examination. Combining the results of the patient's findings with an explanation of what the test was looking for enables the patient to better understand the meaning of those results. The patient's individual results may take many forms, whether quantitative values from a blood test, images and the resulting interpretation from a diagnostic imaging study such as CT, MRI, or ultrasound, results from an ECG exam, and so on.
Quantitative results, images, PDFs, or other results can be uploaded and presented within the movie.
4) Give explanations of the results. In addition to presenting the results directly to the patient, an explanation of the meaning of the results can then be presented simultaneously. This is performed using defined aggregation algorithms with previously recorded definitions and discussions of the range of results expected for an individual test. For example, in the case of the cardiac CT angiogram report, short explanations of the significance of a result of narrowing of a blood vessel are developed. If there is no narrowing present, then a short, animated video discussion will explain that no narrowing was present and what that means; if there is a mild narrowing, which is clinically defined as a narrowing between one and 24%, then a different video will be played. If the narrowing is between 25 and 49%, another video is played, etc. (see the sketch following item 12 below). Previously created video explanations of the range of expected results will have been created and are available to then be placed within the video depending on the individual results of the examination. In some cases, there may only be a binary result, and therefore only two explanations are necessary. In other cases, there may be many videos depending on the initial test and the range of possible clinically significant results. The patient-specific results can sometimes even be compared to what would be expected for an average patient of the same age and sex, or to what age that result would be considered "average-normal". Specifically, in this step, the patient's test findings can be linked to clinical treatment or additional diagnostic recommendations that can be based upon professional societal practice guidelines or contemporary research science, such as that derived from large-scale registries and trials. In this way, this approach can also be educational to the medical professional and may allow for improved and contemporary clinical decision support. This will allow for a shared decision-making moment for the patient and the medical professional, without the need for them to read through scientific literature.
5) Use animation that is patient friendly and non-threatening. The animation selected for the video will be intended to be professional but friendly and non-threatening to the patient in order to put them more at ease and make them more open to hearing and understanding the explanations. The animated physician or other explainer in the video can also be matched to the patient's sex, age, and race and even be presented in the patient's primary language. Alternatively, the patient's own likeness can appear within the video, derived from a photograph or rendered as a cartoon or avatar.
6) Can be delivered via web-based and non-web-based methods. The method of delivery to the patient can be via encrypted HIPAA-compliant web-based methods or non-web-based methods such as computer disks, other storage media, etc.
7) Can be viewed on computers, cell phones, and other devices. In this manner, all patients will have access to the reports regardless of their socioeconomic status. Not all patients have access to the Internet, cell phones, or other devices. Making it available on multiple media platforms increases the degree of access.
8) Uses mixed reality for explanations. The use of advanced computer graphics and augmented or virtual reality may make some of the explanations easier for the patients to understand.
For example, a virtual reality trip into the body and through a blood vessel, then demonstrating the blood flow slowing down and/or stopping at the site of a blockage, will help the patient to understand the significance of having that blockage in their body. Demonstrating the deployment of a stent in that blood vessel at the site of that blockage will then help the patient understand how their pathology may be treated and why. This could also be done in a 3D/4D virtual reality manner, as a hologram, or by other visual display. Similarly, this information can be conveyed by audio methods, such as a podcast or others.
9) Can be saved by the patient for future reference. The patient-specific movie containing an explanation of the test, their results, and additional information becomes property of the patient that they can store for future use.
10) Can be compared to a normal reference population value. In some cases, there may be findings that, to maximize patient understanding, can be compared to normative reference values that are derived from population-based cohorts or other disease cohorts. This may be provided in percentiles, by age comparison (e.g., heart age versus biological age), or by visual display (e.g., on a bell-shaped curve or histogram).
11) Can be compared to prior studies. In some cases, the patient may have 2 studies (either the same test, e.g., CT-CT, or different tests, e.g., CT-ultrasound) that can be automatically compared for differences and reported as described above in #1-10. This will allow a patient to understand his/her progress over time in response to lifestyle, medical therapy, or interventional therapies. In other cases, the test findings can be conveyed as in #1-10 as a function of heritability (e.g., from genomics or other 'omics or family history) or susceptibility (e.g., from lab markers over time, or from environmental lifestyle insults, such as smoking).
12) Can be configured to communicate the likelihood of success. In some cases, the video generated will estimate the likelihood of success or failure of any given intervention by calculating the likelihood through risk calculators or using clinical trial data or practice guidelines; and this can be reported in the movie.
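As a concrete illustration of the threshold-based selection of result-explanation videos described in item 4) above, the following is a minimal Python sketch; the video file names are hypothetical, and the bands above 49% are assumptions added for illustration rather than thresholds stated in this description.

```python
# Illustrative sketch: map a measured stenosis percentage to a prerecorded
# explanation video, following the ranges discussed in item 4 above.
def select_stenosis_video(narrowing_pct: float) -> str:
    if narrowing_pct == 0:
        return "no_narrowing.mp4"          # explains that no narrowing was found
    if narrowing_pct <= 24:
        return "mild_narrowing.mp4"        # 1-24% narrowing
    if narrowing_pct <= 49:
        return "moderate_narrowing.mp4"    # 25-49% narrowing
    if narrowing_pct <= 69:
        return "significant_narrowing.mp4" # hypothetical additional band
    return "severe_narrowing.mp4"          # hypothetical additional band

print(select_stenosis_video(30))  # -> moderate_narrowing.mp4
```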
Examples of Medical Report Generation Systems and Methods
FIG. 16 is a system diagram that shows various components of an example of a system 1600 for automatically generating patient medical reports, for example, patient medical reports based on CT scans and analysis, utilizing certain systems and methods described herein. Various embodiments of such systems may include fewer components than are shown in FIG. 16, additional components, or different components. In this example, the system 1600 includes an MRI scanner 1610, an ultrasound scanner 1611, a CT scanner 1612, and other types of imaging devices 1613. Information from the scanners and imaging devices is provided to other components of the system through one or more communication links 1601 or other communication mechanism for communicating information. The communication link is also connected to other components of the system illustrated in FIG. 16. The system 1600 further includes archived patient medical information and records 1602 which may have been collected from a variety of sources and over a period of time. The information and records may include patient data 1604, patient results 1606, patient images 1608 (e.g., stored images of CT scans, ultrasound scans, MRI scans), or other imaging data. The system 1600 further includes stored images 1614 (which may or may not be patient related). The system 1600 further includes patient wearable information 1616, which may be collected from one or more devices worn by the patient, the devices sensing or measuring one or more types of physiological data or a characteristic of the patient, typically over a period of time. The system 1600 can further include laboratory data 1618 (e.g., recent blood analysis results), and medical practitioner analysis 1620 of any patient-related data (e.g., images, laboratory data, wearable information, etc.). The system 1600 may communicate with other systems and devices over a network 1650 which is in communication with communication links 1601. System 1600 may further include a computing system 1622 which may be used to perform any of the functionality related to communicating, analyzing, gathering, or viewing information on the system 1600. The computing system 1622 can include a bus (not shown) that is coupled to the illustrated components of the computing system 1622 (e.g., processor 1624, memory 1628, display 1630, interfaces 1632, input/output devices 1634, communication link 1601), and may also be coupled to other components of the computing system 1622. The computing system 1622 may include a processor 1624 or multiple processors for processing information and executing computer instructions. Hardware processor 1624 may be, for example, one or more general purpose microprocessors. Computer system 1622 also includes memory (e.g., a main memory) 1628, such as a random-access memory (RAM), cache, and/or other dynamic storage devices, for storing information and instructions to be executed by processor 1624. Memory 1628 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 1624. Such instructions, when stored in storage media accessible to processor 1624, render computer system 1622 into a special-purpose machine that is customized to perform the operations specified in the instructions. The memory 1628 may, for example, include instructions that allow a user to manipulate time-series data and to store the patient information and medical data, for example as described in reference to FIGS. 16 and 17. The memory 1628 can include read only memory (ROM) or other static storage device(s) coupled in communication with the processor 1624 and storing static information and instructions for processor 1624. Memory 1628 can also include a storage device, such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., coupled to the processor 1624 and configured for storing information and instructions. The computer system 1622 may be coupled via a bus to a display 1630, for example, a cathode ray tube (CRT), light emitting diode (LED), or a liquid crystal display (LCD). The display may include a touchscreen interface. The computing system 1622 may include an input device 1634, including alphanumeric and other keys, coupled to the bus for communicating information and command selections to the processor 1624. Another type of user input device is a cursor control, such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to the processor 1624 and for controlling cursor movement on the display 1630. The input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane.
In some embodiments, the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen without a cursor. Computing system 1622 may include a user interface module 1632 to implement a GUI that may be stored in a mass storage device as computer executable program instructions that are executed by the computing device(s). Computer system 1622 may further implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware, and/or program logic which in combination with the computer system causes or programs computer system 1622 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 1622 in response to processor(s) 1624 executing one or more sequences of one or more computer readable program instructions contained in memory 1628. Such instructions may be read into memory 1628 from another storage medium. Execution of the sequences of instructions contained in the memory 1628 causes processor(s) 1624 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions. Various forms of computer readable storage media may be involved in carrying one or more sequences of one or more computer readable program instructions to processor 1624 for execution. The instructions received by memory 1628 may optionally be stored before or after execution by processor 1624. Computer system 1622 also includes a communication interface 1637 coupled to other components of the computer system and to communication link 1601. Communication interface 1637 provides a two-way data communication coupling to a network link that is connected to a communication link 1601. For example, communication interface 1637 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 1637 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or a WAN component to communicate with a WAN). Wireless links may also be implemented. In any such implementation, communication interface 1637 sends and receives electrical, electromagnetic, or optical signals that carry digital data streams representing various types of information. A network link typically provides data communication through one or more networks to other data devices. For example, a network link may provide a connection through a local network to a host computer or to data equipment operated by an Internet Service Provider (ISP). An ISP in turn provides data communication services through the worldwide packet data communication network now commonly referred to as the "Internet." Computer system 1622 can send messages and receive data, including program code, through the network(s), communication link 1601, and communication interface 1637. In the Internet example, a server might transmit a requested code for an application program through the Internet, an ISP, a local network, communication link 1601, and a communication interface. The received code may be executed by processor 1624 as it is received, and/or stored in memory 1628 or other non-volatile storage for later execution.
The computing system 1622 can include the processor 1624, an operating system 1626, memory components 1628, one or more displays 1630, one or more interfaces 1632, input devices 1634, and modules 1636, which may be hardware or software, or a combination of hardware and software, and which when utilized perform functionality for the system. For example, the modules 1636 may include computer executable instructions that are executed by processor 1624 to perform the functionality of system 1600. The system 1600 may further include a medical report generation system 1638 (or "medical report generator") which can include various components that are used to generate a medical report data set for a particular patient for a requested type of report. Medical report generation system 1638 may include a computing system, e.g., a server or a computing system 1640. In some embodiments, the computing system 1640 includes a server. The medical report generation system also includes collected or determined patient-specific information 1648, and a report template data structure 1642 which includes associations between a patient, the patient information 1648 (images, medical analysis, and test results associated with the patient), and the report segments, report elements, and sub-elements for the desired report. Medical report generation system 1638 further includes user parameters 1646 that may be specific to a medical practitioner and/or to a patient, or entered by a medical practitioner and/or the patient. The system 1600 may also include one or more computing devices 1652 in communication with the components of the system via a communication link(s) 1601. Communication link(s) 1601 may include wired and wireless links. Computing device 1652 may be a tablet computer, laptop computer, a desktop computer, a smart phone, or another mobile device. FIG. 17 is a block diagram that shows an example of data flow and functionality 1700 for generating the patient medical report based on one or more scans of the patient, patient information, a medical practitioner's analysis of the scans, and/or previous test results. At the beginning of this data flow, new medical images 1702 are received by the system or are generated by a scanner. The images can be generated using a normalization device described herein. Information derived from images generated and processed using the normalization device can be more consistent and/or accurate, as described herein. The images can be from a CT, MRI, ultrasound, or other type of scanner. The images depict a target feature of a patient's body, for example, coronary arteries. The images may be archived in a patient medical information storage component 1708, which stores other types of patient data (for example, previously generated images, patient test results, and patient-specific information that can include age, gender, race, BMI, medication, blood pressure, heart rate, weight, height, body habitus, smoking, diabetes, hypertension, prior CAD, family history, lab test results, and the like). The new images 1702 are provided for image analysis 1704, which may include analysis of the images using artificial intelligence/machine learning algorithms that have been trained to detect features and certain characteristics in the images. Other tests 1706 may also have been conducted on the patient (e.g., blood work or another test). The new images 1702, machine generated results 1712, results determined by medical practitioners 1714, and previous test results 1716 are collected in a results phase 1710, and this information is communicated to medical report data set generation block 1720.
Other patient medical information 1718 can also be provided to medical report data set generation 1720. As indicated above, this information may include, for example, a patient's age, gender, race, BMI, medication, blood pressure, heart rate, weight, height, body habitus, smoking, diabetes, hypertension, prior CAD, family history, lab test results, and the like. In addition to the results 1710 and the other patient medical information 1718, medical report data set generation 1720 can also receive report data 1728. Report data 1728 can include multimedia information used for the report, for example, audio, images, sequences of images (i.e., video), text, backgrounds, avatars, or anything else for the report that is not related to the specific patient's medical information. Medical report data set generation 1720 can use the new images 1702, the results 1710, other patient medical information 1718, and report data 1728 to generate a medical report dataset for a requested type of report. The medical report data set generation 1720 can be interactive, and a medical practitioner can provide input identifying what type of report is being generated. At block 1722, during the medical report data set generation, all of the information that is needed for the requested report is aggregated and the medical report is generated. For example, images, patient data, and other information needed for the report are identified and collected from the various inputs. At block 1724, the process uses certain patient information to tailor the report for the particular patient. For example, one or more characteristics of an avatar that presents information in the report to the patient can be identified from the patient data such that the avatar is created to best convey report information to the patient. In some examples, such information includes characteristics of the patient such as gender, age, language, education, culture, and the like. At block 1726, the process determines the test explanation that is best used for the report. For example, there may be ten different explanations for a particular test, and one of the ten explanations is selected for the report. The determination of the test explanation may be based on the patient and/or the diagnosis or prognosis indicated by the results of the test. In other words, the same test may be explained in various ways based on what the results of the test turned out to be. At block 1728, the process determines the results explanation. There can be multiple explanations for the same results, and one of the explanations is selected for the report. The selection of the results explanation can be based on, for example, patient information, the substance of the results, or other information. At block 1730, the process determines a greeting to be used in the report. The greeting selected for the report may be one of numerous possible greetings. In various embodiments, the greeting may be selected based on patient information, user input, or the results of the test. For example, if the test results indicate great news for the patient, a first type of greeting may be selected. If the test results are unfavorable to the patient, a second type of greeting that is more appropriate for the subsequently delivered results may be selected. At block 1732, the process determines the conclusion to be used in the report. The conclusion selected for the report may be one of numerous possible conclusions. In various embodiments, the conclusion may be selected based on patient information, user input, or the results of the test. For example, if the test results indicate great news for the patient, a first type of conclusion may be selected. If the test results are unfavorable to the patient, a second type of conclusion that is more appropriate for the previously reported unfavorable results may be selected.
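The following is a simplified, hypothetical Python sketch of the assembly flow of blocks 1722-1732 just described (aggregate inputs, tailor an avatar, then pick a test explanation, results explanation, greeting, and conclusion); the selection rules, dictionary keys, and asset names are placeholder assumptions, not the actual implementation.

```python
# Hedged sketch of the block 1722-1732 flow for building a report dataset.
def generate_report_dataset(images, results, patient_info, report_data):
    favorable = results.get("favorable", True)
    avatar = {"age_group": "child" if patient_info["age"] < 18 else "adult",
              "language": patient_info["language"]}                       # block 1724
    return {
        "inputs": {"images": images, "results": results},                 # block 1722
        "avatar": avatar,
        "test_explanation": report_data["test_explanations"][results["test"]],   # 1726
        "results_explanation": report_data["results_explanations"][
            "favorable" if favorable else "unfavorable"],                 # block 1728
        "greeting": report_data["greetings"][
            "upbeat" if favorable else "measured"],                       # block 1730
        "conclusion": report_data["conclusions"][
            "upbeat" if favorable else "measured"],                       # block 1732
    }

report_data = {
    "test_explanations": {"coronary_ct": "explain_ccta.mp4"},
    "results_explanations": {"favorable": "results_ok.mp4",
                             "unfavorable": "results_concern.mp4"},
    "greetings": {"upbeat": "hello_upbeat.mp4", "measured": "hello_measured.mp4"},
    "conclusions": {"upbeat": "wrap_upbeat.mp4", "measured": "wrap_measured.mp4"},
}
print(generate_report_dataset(["ct_001.dcm"],
                              {"test": "coronary_ct", "favorable": False},
                              {"age": 54, "language": "en"}, report_data))
```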
The medical report data set generation 1720 provides a medical report 1736. In some embodiments, the medical report is a video that includes a patient identification greeting 1738 and, for each test, an explanation of the test 1740, results of the test 1742, and an explanation of the results 1744. For medical reports that include multiple tests, the report may iteratively present a test explanation, present the results, and present an explanation of the results for each test conducted. The medical report also includes a conclusion segment 1746. In some embodiments, the medical report is displayed on a display to the patient and/or the patient's family. In some embodiments, the medical report is provided as a video for the patient to view at their home or anywhere else on a computer. In some embodiments, the medical report can be provided as a paper copy. FIG. 18A is a block diagram of an example of a first portion of a process for generating a medical report using the functionality and data described in reference to FIG. 17, according to some embodiments. At block 1802, one or more medical tests are performed on a patient. At block 1804, results are generated by a machine (e.g., a blood test), by a trained medical interpreter, and/or are automatically or semi-automatically determined based on artificial intelligence/machine learning algorithms. At block 1806, results, patient information, and other data are collected and sent to a computer device or network for creation of the medical report. At block 1808, results are aggregated with images, patient information, other data, multimedia information, and the like to generate a medical-related portion of the report. At block 1810, the process generates the video presenter (e.g., an avatar) of the report using certain selected patient information, for example, biographical data of the patient. For example, when the patient is a child, patient information may be used to create a child avatar which presents the report to the child. In some embodiments, the child avatar may have an avatar pet which also helps present the report to the child, making the report more interesting and more fun for the child. When the patient is a highly educated adult, patient information may be used to create an avatar that is appropriate to present the report to that patient. In some embodiments, the avatar may mirror certain characteristics of the patient (e.g., race, age, or gender) or be an avatar determined to be complementary to certain characteristics of the patient. FIG. 18B is a block diagram of an example of a second portion of a process for generating a medical report using the functionality and data described in reference to FIG. 17, according to some embodiments. At block 1812, the process selects a test explanation to be used for the report. The selection of the test explanation can be based on the patient information, the severity of the injury or disease, and/or the seriousness of the report (e.g., the final diagnosis). In one example, a certain test explanation may be selected from one of four test explanation videos. At block 1814, the process selects the results explanation to be used for the report. The selection of the results explanation can also be based on the patient information, the severity of the injury or disease, and/or the seriousness of the report (e.g., the final diagnosis).
In one example, a certain results explanation may be selected from one of four results explanation videos. FIG. 18C is a block diagram of an example of a third portion of a process for generating a medical report using the functionality and data described in reference to FIG. 17, according to some embodiments. At block 1816, the process selects a patient identification greeting. The report can start with an identification greeting of the patient; this may include a cartoon character or avatar greeting the patient by name and stating what test is being explained, when it was performed, who ordered the test, and where it was performed. At block 1818, the process explains the test conducted on the patient. A previously recorded segment explains to the patient, for example, what test was performed, how it works, why it is usually ordered by a provider, and what the range of expected results may be. At block 1820, the report then presents the results to the patient. The results can include quantitative values, images, charts, videos, and other types of data that may help to convey the results to the patient. At block 1822, the report may present a discussion of the results to help clarify to the patient exactly what the results mean. In some examples, appropriate prerecorded animations or videos explain the meaning of a result. If multiple tests were performed on the patient, the process may iteratively explain each test, present the test results, and then explain the results. At block 1824, the process presents a conclusion segment that may summarize information for the patient, provide additional information, and/or provide guidance on the next steps to be taken by the patient or that will be taken by the medical practitioner. For all the parts of the report, the medical report generation functionality uses a combination of patient information, actual images and/or test results, and other multimedia information to present a comprehensive, clear explanation of each test that was performed and the results of the test. FIG. 18D is a diagram illustrating various portions that can make up the medical report, and the input that can be provided by the medical practitioner and by patient information or patient input. As shown in FIG. 18D, the medical practitioner can interactively select a type of medical report to be generated (e.g., report 1, report 2, etc.). Each medical report is a collection of data and information that can be collected and presented in various segments of the report. For example, the segments can include a greeting, an explanation of the test(s) performed, results, an explanation of the results, and a conclusion. Medical reports that include multiple tests can include multiple segments that present an explanation of each test performed, the results of each test, and an explanation of the results of each test. In some embodiments, all or portions of the segments are automatically generated based on patient information, types of tests performed, and the results of each test. In some embodiments, the medical practitioner can select or approve information to use for each segment. In some embodiments, the report can be interactive in that a patient's input can help determine what information to use to generate a segment or present a portion of the report. Each segment may include a number of elements. Each of the elements can include one or more sub-elements. For example, a segment of test results may include an element for each of the test results to be included in the report.
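A hypothetical data-model sketch of this structure (a report holds segments, segments hold elements, and elements may hold sub-elements) follows; the class names and example values are illustrative assumptions, not the claimed data structure.

```python
# Illustrative sketch of the report -> segment -> element -> sub-element model.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SubElement:
    content: str

@dataclass
class Element:
    name: str
    sub_elements: List[SubElement] = field(default_factory=list)

@dataclass
class Segment:
    kind: str                      # e.g., "greeting", "results", "conclusion"
    elements: List[Element] = field(default_factory=list)

@dataclass
class MedicalReport:
    segments: List[Segment] = field(default_factory=list)

# A results segment with one element per test result, as described above.
report = MedicalReport(segments=[
    Segment("results", elements=[Element("stenosis_pct",
                                         [SubElement("30% narrowing, LAD")])]),
])
```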
In some embodiments, the medical practitioner can select or approve of what information to use for an element and/or a sub-element. In some embodiments, the elements and/or the sub-elements can be at least partially determined based on the patient information and/or the patient input. Typically, the medical practitioner can interactively select and/or approve of all material that is used in the report. In some embodiments, the contents of the report are based on predetermined algorithms that use the combination of patient information, medical tests, medical results, and medical practitioners' preferences to determine the elements in each segment of the medical report. FIG. 18E is a schematic illustrating an example of a medical report generation data flow and the communication of data used to generate a report. The illustrated components and data relate to the components and data illustrated in FIGS. 16-18D. A medical report generator 1850 receives a plurality of inputs which it uses to generate a particular medical report for a particular patient. This medical report is generated to educate and inform a patient, and a patient's caregivers, about a specific patient's medical tests and results. This medical reporting is a process that transforms individual medical information into an understandable movie. The movie can be made with the patient's avatar or an avatar-like presenter (e.g., matched by sex, age, ethnicity, etc.). Viewing of the report can be done on a computer at a medical facility or anywhere on a patient's own device (e.g., a smart phone, tablet, laptop, etc.). The report may contain multimedia data such as audio, text, images, and/or video. The video may contain cartoons or real-life footage. Animation can include virtual reality; for example, the video enters the body, shows the heart pumping with blood flowing, centers on the vessels, shows blood flowing through the vessels and plaque with changes in velocity and flow, and then goes to the plaque and shows its distinct types. In some embodiments, augmented reality may be used to simulate age, pharmacological changes, pharmacological agents available where the exam is done, different degrees of disease, the effect of interventions such as stents and bypass, behavior changes, and exercise. The report may be shareable, allowing a user to share it with anyone, with a defined time of availability or indefinitely. For example, it can be transformed and condensed into a PDF, DICOM, or Word document, or another format, for printing. The language used in the report can be the patient's native language. In some embodiments, subtitles can be used for the hearing impaired in their native language, or braille for the blind. In embodiments using an avatar, the avatar narration can be individualized for the patient to reflect age, gender, and ethnicity (a change in the presenter's look) and level of understanding (a change in the language and depth of information). The medical report generator 1850 can receive input 1875 from a medical practitioner indicating to generate a particular type of report for a particular patient. In some embodiments, a medical practitioner can provide inputs to determine certain aspects of the report. For example, the medical practitioner may indicate which image data to use and which test results to include in the report. In another example, based on the test results and/or the severity of the diagnosis, the medical practitioner can influence the "tone" or seriousness of the report such that it is appropriate for reporting the test results and the diagnosis.
In some embodiments, the medical practitioner can provide inputs to approve tentative, automatically selected material to include in the report. The medical report generator 1850 is in communication with data structures 1880 which store associations related to report generation. In some embodiments, the data structures 1880 include associations between the particular medical practitioner and characteristics of the medical reports that the practitioner prefers to generate. The associations may be dynamic and may interactively or automatically change over time. The data structures 1880 can also include associations that relate to all of the material that can be used to generate a report. For example, after a medical practitioner indicates that a certain medical report is to be generated for a certain patient, the medical report generator 1850 receives the patient information and, based on the associations in the data structures 1880, begins to gather the material it needs to generate the medical report. As illustrated in FIG. 18E, medical report generator 1850 can receive pre-existing portions of a report 1855 (segments, elements, sub-elements) that can include multi-media greetings, explanations of a test, presentations of results, explanations of results, and conclusions. This material can be combined with other inputs by the medical report generator 1850 to generate the report. For example, the medical report generator 1850 can receive patient information 1860 that includes the patient's age, gender, race, education, ethnicity, geographic location, and any other characteristic or pertinent information of the patient which may be used to tailor the medical report such that the information in the medical report is best conveyed to the particular patient. Medical report generator 1850 can also receive image data 1862 related to recent tests performed on the patient (e.g., CT, MRI, ultrasound scans, or other image data), and/or previously collected image data 1865 (e.g., previously collected CT, MRI, ultrasound scans, or other image data). For example, the previously collected image data 1865 can include image data that was taken over a period of time (for example, days, weeks, months, or years). The medical report generator 1850 can also receive other medical data 1867 including but not limited to tests, results, and diagnoses of the patient. The medical report generator 1850 can also receive multimedia report data 1870 which is used to form portions of the medical report. The multimedia report data 1870 can include information relating to avatars, audio information, video information, images, and text that may be included in the report. The medical report can apply to and/or discuss test results—imaging and non-imaging tests—and other medical information, isolated or aggregated, with or without a therapeutic approach. For example, for a gallstone surgery, the medical report can aggregate information from lab tests, objective observation, medical history, and imaging tests; include the surgery proposal, a surgery explanation, a virtual surgery, and pathological findings (more important in cancer); and explain post-surgery recuperation until normal life or treatment follow-up (e.g., chemotherapy in cancer). A medical report can also be educational and generic, adapted to a patient, a disease, a treatment, and/or a test, and address disease, risk factors, treatment, behavior, and behavior changes. In some examples, the medical report can be generated to form part of a patient's complete electronic medical record (EMR) information.
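The following is a hedged Python sketch of how a generator such as 1850 might merge its inputs (the practitioner's request, patient information, image data, and multimedia report data) using stored associations; the lookup keys, function name, and report plan are illustrative assumptions rather than the described implementation.

```python
# Hypothetical sketch: gather the material needed for a requested report type
# by looking up the segment plan in an associations structure.
def build_report(request, patient_info, image_data, report_media, associations):
    plan = associations[request["report_type"]]   # which segments this report uses
    report = []
    for segment in plan:
        media = report_media[segment].get(patient_info["language"],
                                          report_media[segment]["en"])
        report.append({"segment": segment,
                       "media": media,
                       "patient_data": image_data if segment == "results" else None})
    return report

associations = {"coronary_ct": ["greeting", "test_explanation", "results", "conclusion"]}
report_media = {s: {"en": f"{s}_en.mp4"} for s in associations["coronary_ct"]}
print(build_report({"report_type": "coronary_ct"}, {"language": "en"},
                   {"scan": "ct_001"}, report_media, associations))
```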
In some examples, the medical report generator 1850 can generate a comprehensive medical report per patient showing the patient "your medical life movie report." The medical report generator 1850 can be configured to generate the medical report in many different formats, for example, a movie, augmented reality, virtual reality, a hologram, a podcast (audio only), a webcast (video), or a format for access using an interactive web-based portal. In some embodiments, the information generated for the medical report can be stored in the data structures 1880 (e.g., the data structures 1880 can be revised or updated to include information from any of the inputs to the medical report generator 1850). In some embodiments, the medical report, or the information from the medical report stored in the data structures 1880, can be used to determine eligibility of the patient for additional trials or tests through an auto-calculation feature. In such cases, the data structures 1880 are configured to store information that is needed for determining (or auto-calculating) such eligibility, including for example information relating to the patient's age, gender, ethnicity, and/or race, wellness, allergies, pre-existing conditions, medical diagnosis, etc. In some examples, information stored in the data structures 1880 can be used to determine whether a patient fits inclusion criteria for large-scale randomized trials, determine whether a patient fits appropriate use criteria or professional societal guidelines (e.g., AHA/ACC practice guidelines), determine whether a patient's insurance will cover certain medications (e.g., statins vs. PCSK9 inhibitors), and determine whether a patient qualifies for certain employee benefits (e.g., an exercise program). In some embodiments, the information used in the data structures 1880 can be used to determine/indicate a patient's normality, risk, treatment type, and treatment benefits, and such information can be included in the medical report, for example, based on medical practitioners' preferences. Accordingly, in various embodiments, in addition to the predetermined video/information 1855 relating to greetings, test explanations, results presented, results explanations, and conclusions, the medical report generator 1850 can be configured to generate a medical report that includes information to help the medical practitioner explain the results and the best way forward, the information being based at least in part on the patient's specific data (e.g., test data), including:
a. patient-specific findings.
b. comparison to normal values (age, gender, ethnicity, race-specific values of population-based norms).
c. comparison to abnormal values (e.g., comparing someone's CAD results to a database of those who experienced a heart attack, or another similar database).
d. comparison to outcomes (e.g., identifying inclusion criteria for trials and medication treatments therein, and auto-calculating Kaplan Meier curves or other visual representations showing the probability of an event over a respective time interval (e.g., survival rate)).
e. comparison to identify benefits of treatment (e.g., auto-linking to clinical trials or clinical data in order to examine the relative benefits of specific types of treatment, e.g., medication therapy with statins vs. PCSK9 inhibitors; medication treatment vs. percutaneous intervention; PCI vs. surgical bypass).
f. calculations of previously published (or unpublished) scores, e.g., CONFIRM score, SYNTAX score, etc.
g. comparisons from serial studies.
h. auto-links to EMR or patient-entered data to enable patient-specific explanation of medications and other treatments.
i. can include a "test" or "quiz" at the end to promote patient engagement and ensure patient literacy.
j. interactive patient satisfaction surveys.
k. interactivity with patients through patient input 1875, allowing a patient to select which information they want to view and better understand.
l. ethnic, racial, and gender diversity, allowing dynamic changes in the language and content used to convey the report to the patient based upon gender, race, and ethnicity; and
m. adaptations for age, allowing changes in language and content based upon age and the timeframe born (millennial vs. baby boomer).
In some embodiments, the medical report generator 1850 can be configured to check for and receive updates over time (e.g., auto-updating) such that the medical reports change over time and include the latest available information. In some embodiments, the medical report generator 1850 can communicate via a network or web-based portal to include information from other medical or wearable devices. In some embodiments, the medical report generator 1850 can be configured to provide the patient with patient-specific education based upon published scientific evidence and specifically curated to the patient's medical report, and to auto-update the report based upon serial changes. FIG. 18F is a diagram illustrating a representation of an example of a system 1881 having multiple structures for storing and accessing associated information that is used in a medical report, the information associated with a patient based on one or more of characteristics of the patient, the patient's medical condition, or an input from the patient and/or a medical practitioner. In some embodiments, the system 1881 is a representation of how the information used for generating a medical report is stored in the systems of FIG. 16, 17, or 18E. In FIG. 18F, information is described as being stored in a plurality of databases. As used herein, a database refers to a way of storing information such that the information can be referenced by one or more values (e.g., other information) associated with the stored information. In various embodiments, a "database" can be, for example, a database, a data storage structure, a linked list, a lookup table, etc. In some embodiments, the database can be configured to store structured information (e.g., information of a predetermined size, for example, a name, age, gender, or other information with a predetermined maximum field size). In some embodiments, a database can be configured to store structured or unstructured information (e.g., information that may or may not be predetermined, e.g., an image or a video). Stored information may be associated with any other information of the patient. For example, stored information can be associated with one or more of a characteristic of a patient (e.g., name, age, gender, ethnicity, geographic origin, education, weight, and/or height), one or more medical conditions of a patient, a prognosis for a patient's medical condition, medical treatments, etc. Although the example system 1881 in FIG. 18F is illustrated as having 13 different databases (e.g., for clarity of the description), in other embodiments such systems can have more or fewer databases, or certain information stored in the illustrated databases can be combined with other information and stored together in the same database.
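As an illustration of the kind of association lookup such stored structures support — for example, the trial-eligibility auto-calculation mentioned above — the following is a minimal, hedged Python sketch; the criteria, field names, and trial identifier are invented for illustration only.

```python
# Hypothetical sketch of an eligibility auto-calculation against stored criteria.
TRIAL_CRITERIA = {"example_statin_trial": {"min_age": 45, "max_age": 75,
                                           "requires": {"diagnosis": "CAD"}}}

def eligible(patient: dict, trial: str) -> bool:
    c = TRIAL_CRITERIA[trial]
    if not (c["min_age"] <= patient["age"] <= c["max_age"]):
        return False
    return all(patient.get(k) == v for k, v in c["requires"].items())

print(eligible({"age": 58, "diagnosis": "CAD"}, "example_statin_trial"))  # True
```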
System 1881 includes a communication bus 1897, which allows the components to communicate with each other as needed. One or more portions of the communication bus 1897 can be implemented as a wired communication bus or as a wireless communication bus. In various embodiments, the communication bus 1897 includes a plurality of communication networks, or one or more types of networks (e.g., a local area network (LAN), a wide area network (WAN), the Internet, or a local wireless network (e.g., Bluetooth)). System 1881 also includes a medical report generator 1894, which is in communication with the communication bus 1897. The medical report generator 1894 is also in communication with one or more input components 1895, which can be used by a patient and/or a medical practitioner to interface with the medical report generator 1894 using a computer (e.g., a desktop computer, a laptop computer, a tablet computer, or a mobile device, e.g., a smart phone). The medical report generator 1894 can communicate with any of the databases or data structures using the communication bus 1897. In various embodiments, medical report generator 1894 can use information from one or more of the illustrated databases in a workflow for generating a patient-specific report that includes patient identification, patient preferences, medical image findings, patient diagnosis, prognostication, clinical decision making, health literacy, patient education, image generation/display, and post-report education. Patient identification is used by the medical report generator 1894 for generating an avatar that will be included in the medical report, for example, to be displayed during at least a portion of the medical report, or to be displayed and to "present" at least a portion of the medical report to the patient. Determining patient information can be based upon either active or passive methods.
Passive
In some embodiments, a medical report generator 1894 can be configured to automatically communicate with an electronic medical record (EMR) database 1893 to ascertain (for a certain patient) patient demographic characteristics to determine patient age, gender, ethnicity, and other potentially relevant characteristics, and to understand patient biometrics (e.g., height, weight). In some embodiments, the medical report generator 1894 can be configured to automatically query a proprietary or web-based name origin database 1883 containing names and ethnic origins of names to determine, wholly or in part, a patient's gender and ethnicity based on the patient's name and/or other patient information.
Active
In some embodiments, the medical report generator 1894 can receive input information from an interface system 1895, and the input information can be used to generate portions of the medical report. For example, a patient, family member or friend, or medical professional can enter patient age, gender, ethnicity, and other potentially relevant characteristics. This can be done, for example, at the time of receiving the report and in advance of playing the report, or at the time of registration of the patient into the system. In some embodiments, the medical report generator 1894 can receive a picture of the patient through an interface system 1895, or via the communication bus 1897, and the picture can be used to generate portions of the medical report. For example, a picture of the patient can be input into the system or be taken (e.g., input as an electronic image, or input by scanning in a photograph), and the picture can be used by the medical report generator (or a system coupled to the medical report generator) to automatically morph the picture into a relevant avatar (e.g., relevant to the patient). The determination of characteristics of the avatar can be done using linked image-based algorithms that determine or choose an avatar from a repository of avatars that exist within the data system, the avatar selected at least partially based on the picture of the patient. In some embodiments, a QR code can be used for all products related to a company (e.g., Cleerly-related products) that can house information about the patient that can be used to generate the avatar.
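A minimal, hedged Python sketch of the passive path just described follows: demographics are pulled from an EMR record and, if ethnicity is missing, a name-origin lookup is consulted. Both data sources, their field names, and the lookup contents are hypothetical stand-ins, not the described databases 1893 and 1883.

```python
# Illustrative sketch: passive determination of demographics for avatar selection.
NAME_ORIGIN = {"garcia": {"ethnicity": "hispanic"},
               "nguyen": {"ethnicity": "vietnamese"}}

def passive_patient_profile(emr_record: dict) -> dict:
    profile = {"age": emr_record.get("age"),
               "gender": emr_record.get("gender"),
               "ethnicity": emr_record.get("ethnicity")}
    if profile["ethnicity"] is None:
        surname = emr_record.get("name", "").split()[-1].lower()
        profile["ethnicity"] = NAME_ORIGIN.get(surname, {}).get("ethnicity")
    return profile

print(passive_patient_profile({"name": "Ana Garcia", "age": 54, "gender": "female"}))
```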
Patient Preferences
In some embodiments, in this step the medical report generator 1894 can be configured to receive input from a patient or a medical practitioner (e.g., via the interface system 1895) to identify the ideal or desired educational method to maximize patient understanding of the medical report. In some embodiments, the system generates graphical user interfaces (GUIs) that include options that can be selected by a patient. In some embodiments, the GUIs can include one or more fields in which a user (e.g., a patient, medical practitioner, or another) can enter data related to a preference (e.g., the length of the report in minutes). Examples of inputs that can be received by a system are illustrated below:
Method of delivery—The patient may choose to view their medical report as a movie, in mixed reality (AR/VR), as holography, or as a podcast. In other embodiments, the method of delivery is determined at least in part by patient information.
Length of report—Some patients are more detail-oriented than others and would like more vs. less information. Patients can select the length of their report (e.g., <5 minutes, 5-10 minutes, >10 minutes). In other embodiments, the length of the report is determined automatically at least in part using patient information.
Popularity of report—If patients do not know what type of report they want, the patients can select the "most popular" options. In other embodiments, the type of report is determined automatically at least in part using patient information.
Effectiveness of the report—If patients do not have a preference of what type of report they want, they can choose "most educational," which can be linked to report methods that have been demonstrated by patient voting or by scientific study to maximize health literacy. In other embodiments, the "effectiveness" of the report is determined automatically at least in part based on patient information.
Report delivery voice—Patients can select what type of voice they would like to hear for the report.
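The following is a hypothetical Python representation of the preference inputs listed above; the default values and the fallback rule that fills in an unspecified length from patient information are illustrative assumptions.

```python
# Hedged sketch of the patient preference inputs and an automatic fallback.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReportPreferences:
    delivery: str = "movie"               # movie, AR/VR, holography, podcast
    length: Optional[str] = None          # "<5 min", "5-10 min", ">10 min", None -> auto
    selection_mode: str = "most_popular"  # or "most_educational"
    voice: Optional[str] = None           # preferred narration voice, None -> auto

def resolve_length(prefs: ReportPreferences, patient_info: dict) -> str:
    if prefs.length:
        return prefs.length
    # Fallback: derive a length from patient information (illustrative rule).
    return ">10 min" if patient_info.get("prefers_detail") else "5-10 min"

print(resolve_length(ReportPreferences(), {"prefers_detail": True}))
```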
Image processing algorithms process the heart and heart arteries from a CT scan to segment:
Coronary arteries—atherosclerosis, vascular morphology, ischemia
Cardiovascular structures—left ventricular mass, left ventricular volume, atrial volumes, aortic dimensions, epicardial fat, fatty liver, valves
Heart and heart artery findings are quantified by, for example, the following:
Coronary artery plaque—e.g., plaque burden, volume; plaque type, percent atheroma volume, location, directionality, etc.
Vascular morphology—e.g., lumen volume, vessel volume, arterial remodeling, anomaly, aneurysm, bridging, dissection, etc.
Left ventricular mass—in grams or indexed to body surface area or body mass index
Left ventricular volume—in ml or indexed to body surface area or body mass index
Atrial volumes—in ml or indexed to body surface area or body mass index
Aortic dimensions—in ml or indexed to body surface area or body mass index
Epicardial fat—in ml or indexed to body surface area or body mass index
Fatty liver—Hounsfield unit density alone or in relevance to spleen
Quantified heart and heart artery findings are automatically sent to a medical image quantitative findings database1885that has well-defined areas for classification of each of these findings. In some embodiments, the medical image quantitative findings database1885has an algorithm that links together relevant findings that comprise syndromes over single disease states. In an example, the presence of left ventricular volume elevation, along with the presence of left atrial volume elevation, along with thickening of the mitral valve, along with a normal right atrial volume, may suggest a patient with significant mitral regurgitation (or leaky mitral valve). In another example, the presence of an increased aortic dimension and increased left ventricular mass may suggest a person has hypertension. The medical image quantitative findings database1885can link to other electronic data sources (e.g., company database, electronic health record, etc.) to identify potential associative relationships between study findings. For example, perhaps the electronic health record indicates the patient has hypertension, in which case the report will automatically curate a health report card for patients specifically with hypertension, i.e., normality of left ventricular mass, atrial volume, ventricular volume, and aortic dimensions. The medical image quantitative findings database1885can link to the Internet to perform a medical imaging finding-specific search (i.e., a search based upon the image data curation as described above), to retrieve information that may link relevant findings that comprise syndromes.
Diagnosis: morphologic classification of heart and heart artery findings
Morphologic classification can be based upon comparison to a population-based normative reference range database1886which includes ranges that have the mean/95% confidence interval, median/interquartile interval, deciles for normality, quintiles of normality, etc. These data can also be reported in the medical report in “ages.” For example, perhaps a patient's biological age is 50 years, while their heart age is 70 years based upon comparison to the age- and gender-based normative reference range database. If the population-based normative reference range database1886does not exist in a system1881, in some embodiments the system1881can search the Internet looking for these normative ranges, e.g., in a PubMed search and by natural language processing and “reading” of the scientific papers.
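As a minimal sketch of how quantitative findings might be linked into a syndrome suggestion, the rule below encodes the mitral regurgitation and hypertension examples given above as boolean combinations of findings. The key names and the simple rule structure are illustrative assumptions; a real implementation would draw thresholds and rules from the quantitative findings database.

# Hypothetical rule linking quantitative findings into syndrome suggestions,
# following the examples in the text. Key names are illustrative assumptions.

def suggest_syndromes(findings):
    suggestions = []
    if (findings.get("lv_volume_elevated")
            and findings.get("la_volume_elevated")
            and findings.get("mitral_valve_thickened")
            and not findings.get("ra_volume_elevated")):
        suggestions.append("possible significant mitral regurgitation (leaky mitral valve)")
    if findings.get("aortic_dimension_increased") and findings.get("lv_mass_increased"):
        suggestions.append("possible hypertension")
    return suggestions

example = {
    "lv_volume_elevated": True,
    "la_volume_elevated": True,
    "mitral_valve_thickened": True,
    "ra_volume_elevated": False,
    "aortic_dimension_increased": False,
    "lv_mass_increased": False,
}
print(suggest_syndromes(example))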
Classification grades can be done in many ways:
presence/absent
normal, mild, moderate, severe
elevated or reduced
percentile for age, gender and ethnicity
any of the above categorization systems, also accounting for other patient conditions (e.g., if a patient has hypertension, their expected plaque volume may be higher than for a patient without hypertension)
Temporal/Dynamic changes can be determined and integrated into the medical report by automatic comparison of findings with a patient's prior study, which exists in a specific prior exams database1887, e.g., reporting the change that has occurred, and by direct comparison to the population-based normative reference range database1886to determine whether this change in disease is expectedly normal, mild, moderate, severe, etc. (or another classification grading method). Temporal/Dynamic changes may be determined by comparison of >2 studies (e.g., 4 studies) in the database of the patient's studies, in which changes can be reported as absolute changes, relative % changes, changes along a regression line, or by another mathematical transformation, with these findings compared to the population-based normative reference range database.
Prognostication
Automatic prognostication of patient outcomes can be done by integrating the medical imaging findings (±coupled to other patient data±coupled to the normative reference range database) by direct interrogation of a prognosis database1888that exists with patient-level outcomes. The prognosis database1888may be a single database (e.g., of coronary plaque findings), or multiple databases (e.g., one database for coronary plaque, one database for ventricular findings, one database for non-coronary vascular findings, etc.). In some embodiments, several separate databases may exist for different types of prognosis, e.g., one database may exist for auto-calculation of risk of major adverse cardiovascular events (MACE), while another database may exist for auto-calculation of rapid disease progression. These databases may be interrogated sequentially, or they may be interactive with each other (e.g., a person who has a higher rate of rapid disease progression may also have a higher risk of MACE, but the presence of rapid disease progression may increase risk of MACE beyond that of someone who does not experience rapid disease progression). Prognostic findings can be reported into the movie report as: elevated/reduced, % risk, hazard ratio, time-to-event Kaplan-Meier curves, and others.
Clinical Decision Making
Automatic recommendation of treatments can be done by integrating the above findings with a treatment database1889. The treatment database1889can house scientific and clinical evidence data to which a patient's medical image findings, diagnosis, syndromes and prognosis can be linked. Based upon these findings—as well as clinical trial inclusion/exclusion/eligibility criteria—a treatment recommendation can be given for a specific medication or procedure that may improve the patient's condition. For example, perhaps a patient had a specific amount of plaque on the patient's 1st study and that plaque progressed significantly on the patient's 2nd study. The system will report the change as high, normal, or low based upon query of the normative reference range database and the prior studies database and, based upon this, render a prognosis. The system could then query the EMR database to see which medications the patient is currently taking, and might find that the patient is taking a statin.
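A minimal sketch of one of the classification-grade options above, grading a finding by its percentile within a normative reference range, is shown below. The percentile cutoffs and the example reference values are invented for illustration; a real system would draw them from the normative reference range database.

# Sketch of grading a finding against a normative reference range. The
# breakpoints and reference values below are illustrative assumptions only.
import bisect

def percentile_of(value, sorted_reference_values):
    """Approximate percentile of `value` within a sorted normative sample."""
    idx = bisect.bisect_left(sorted_reference_values, value)
    return 100.0 * idx / len(sorted_reference_values)

def grade(percentile):
    # Illustrative cutoffs mapping a percentile to an ordinal grade.
    if percentile < 75:
        return "normal"
    if percentile < 90:
        return "mild"
    if percentile < 97.5:
        return "moderate"
    return "severe"

reference = sorted([45, 52, 58, 60, 63, 66, 70, 74, 80, 95])  # e.g., indexed LV mass values
p = percentile_of(88, reference)
print(f"percentile={p:.0f}, grade={grade(p)}")

The same lookup could be applied to a change between two studies rather than to a single measurement, mirroring the temporal/dynamic comparison described above.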
The system could then examine the databases that would let the system know that adding a PCSK9 inhibitor medication on top of the statin medication would be associated with an XX % relative risk reduction. A similar example would be a patient being considered for an invasive procedure. In many cases, a treatment path is not 100% clear because there is benefit as well as risk in a specific kind of therapy. In this case, the system can query the shared decision database1890, which lists the scientific evidence for treatment options, and lists all of the benefits as well as limitations of these approaches. The “pros” and “cons” of the different treatment approaches can be integrated into the patient medical report. For example, based upon the medical image findings, normative reference comparison, prognosis evaluation and treatment query, perhaps an 81-year-old woman would highly benefit from a medication whose side effect is worsening of osteoporosis. In this case, the woman may have severe osteoporosis and, for her, the benefits of the medication outweigh the risk, as is illustrated and communicated through the shared decision making database. For these types of cases, an alternative may be provided. For example, the shared decision making database may show comparative effectiveness of treatments, similar to the way Consumer Reports or amazon.com product options are listed, so that the patient can understand the options, pros and cons. The system1881can also include a health literacy database1891. This portion of the workflow to produce a medical report can be an interactive “quiz” to ensure that the patient understood the study findings, the diagnosis, the prognosis, and the treatment decision making. If the patient fails the “quiz”, then the system would automatically curate content into progressively simpler terms so that the patient does understand their condition. Thus, the health literacy database1891can be a tiered database of movies ranging from simple to complex, and would be tailored to the patient's preferences as well as their score on the “quiz”. This information can be kept for future movies for that patient. The opposite can also occur. As an example, perhaps a patient passes the “quiz” and the system asks the patient whether they would like to know more about the condition. If the patient answers ‘yes’, then the system can extract progressively more complex movies for display to the patient. In this way, the health literacy database1891is multilevel and interactive. The system1881can also include an education database1892, which has educational materials that are based upon science and medicine, and are redundant in content but different in delivery method. As an example, if the system notes that the patient has a certain finding, the system can inquire with the patient whether they would like to learn more about a specific condition. If the patient indicates ‘yes,’ then the system can inquire whether the patient would like to see a summary infographic page, a slide presentation, a movie, etc. The system1881can also include an image display database1893that includes images that the medical report generator1894uses to morph medical images into cartoon formats, or simpler formats, that a patient can better understand. The system can also include a post-report education database1896that continually uploads new information in real time related to specific medical conditions.
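A minimal sketch of how the "pros" and "cons" of treatment options from a shared decision database might be assembled into a comparative listing for the report is shown below. The option names, benefits, and limitations are placeholders for illustration only, not clinical guidance.

# Sketch of building a comparative pros/cons summary from shared-decision data.
treatment_options = [
    {"name": "Medication A",
     "benefits": ["reduces event risk"],
     "limitations": ["may worsen osteoporosis"]},
    {"name": "Medication B (alternative)",
     "benefits": ["no known bone effects"],
     "limitations": ["less evidence in this population"]},
]

def comparative_summary(options):
    lines = []
    for opt in options:
        lines.append(f"{opt['name']}:")
        lines.append("  pros: " + "; ".join(opt["benefits"]))
        lines.append("  cons: " + "; ".join(opt["limitations"]))
    return "\n".join(lines)

print(comparative_summary(treatment_options))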
The medical report generator1894can query this post-report education database1896, and curate educational content (e.g., scientific articles, publications, presentations, etc.) that exists on the internet, and then modify it through the post-report education database1896into information that the patient would like to see, for example, as determined by the patient information or by a user input. The medical report generator1894can be interactive, not just passive. Different types of reports and information can be generated as a set of information for a medical report, and a user can interactively select what information to view using the interface system1895(e.g., a computer system of the user), and can select other information to be presented/displayed by providing input to the medical report generator1894.
Systems and Methods for Imaging Methods of Non-Contiguous, or Different, Arterial Beds for Determining Atherosclerotic Cardiovascular Disease (ASCVD)
This portion of the disclosure relates to systems and methods for assessing atherosclerotic cardiovascular disease risk using sequential non-contiguous arterial bed imaging. Various embodiments described herein relate to quantification and characterization of sequential non-contiguous arterial bed images to generate an ASCVD assessment, or ASCVD risk score. Any risk score generated can be a suggested risk score, and a medical practitioner can use the suggested ASCVD risk score to provide an ASCVD risk score for a patient. In various embodiments, a suggested ASCVD risk score can be used to provide an ASCVD risk score to a patient based on the suggested ASCVD risk score, or with additional information. In some embodiments, the ASCVD risk score is a calculation of your risk of having a cardiovascular problem over a duration of time (for example, 1 year, 3 years, 5 years, 10 years, or longer). In some embodiments, the cardiovascular problem can include one or more of a heart attack or stroke. However, other cardiovascular problems can also be included, that is, assessed as a risk. In some embodiments, this risk estimate considers age, sex, race, cholesterol levels, blood pressure, medication use, diabetic status, and/or smoking status. In some embodiments, the ASCVD risk score is given as a percentage. This is your chance of having heart disease or stroke in the next 10 years. There are different treatment recommendations depending on your risk score. As an example, an ASCVD risk score of 0.0 to 4.9 percent risk can be considered low. Eating a healthy diet and exercising will help keep your risk low. Medication is not recommended unless your LDL, or “bad” cholesterol, is greater than or equal to 190. An ASCVD risk score of 5.0 to 7.4 percent risk can be considered borderline. Use of a statin medication may be recommended if you have certain conditions, which may be referred to as “risk enhancers.” These conditions may increase your risk of heart disease or stroke. Talk with your primary care provider to see if you have any of the risk enhancers. An ASCVD risk score of 7.5 to 20 percent risk can be considered intermediate. Typically for a patient with a score in this range, it is recommended that a moderate-intensity statin therapy is started. An ASCVD risk score of greater than 20 percent risk can be considered high. When the ASCVD risk score indicates a high risk, it may be recommended that the patient start a high-intensity statin therapy.
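The category thresholds described above map directly to a simple lookup; the sketch below follows the ranges stated in the text (0.0-4.9 low, 5.0-7.4 borderline, 7.5-20 intermediate, >20 high), while the returned notes are short paraphrases of the accompanying guidance, not medical advice.

# Sketch of mapping an ASCVD risk percentage to the categories described above.
def categorize_ascvd_risk(risk_percent):
    if risk_percent < 5.0:
        return "low", "lifestyle measures; medication generally not recommended unless LDL >= 190"
    if risk_percent < 7.5:
        return "borderline", "statin may be recommended if risk enhancers are present"
    if risk_percent <= 20.0:
        return "intermediate", "moderate-intensity statin therapy is typically recommended"
    return "high", "high-intensity statin therapy may be recommended"

for score in (3.2, 6.1, 12.5, 24.0):
    category, note = categorize_ascvd_risk(score)
    print(f"{score:>5.1f}% -> {category}: {note}")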
Various embodiments described herein also relate to systems and methods for quantifying and characterizing ASCVD of different arterial beds, e.g., from a single imaging examination. In some embodiments, the systems and methods can quantify and characterize ASCVD of different arterial beds from two or more imaging examinations. Any of the imaging performed can be done in conjunction with a normalization device, described elsewhere herein. Various embodiments described herein also relate to systems and methods for determining an integrated metric to prognosticate ASCVD events by weighting findings from each arterial bed. Examples of systems and methods are described for quantifying and characterizing ASCVD burden, type and progression to logically guide clinical decision making through improved diagnosis, prognostication, and tracking of CAD after medical therapy or lifestyle changes. As such, some systems and methods can provide both holistic patient-level ASCVD risk assessment and arterial bed-specific ASCVD burden, type and progression. As an example relating to imaging of non-contiguous arterial beds that is done in conjunction with a normalization device, a normalization device is configured to normalize a medical image of a coronary region of a subject for an algorithm-based medical imaging analysis. In an example, a normalization device includes a substrate configured in size and shape to be placed in a medical imager along with a patient so that the normalization device and the patient can be imaged together such that at least a region of interest of the patient and the normalization device appear in a medical image taken by the medical imager, a plurality of compartments positioned on or within the substrate, wherein an arrangement of the plurality of compartments is fixed on or within the substrate, and a plurality of samples, each of the plurality of samples positioned within one of the plurality of compartments, and wherein a volume, an absolute density, and a relative density of each of the plurality of samples is known. The plurality of samples can include a set of contrast samples, each of the contrast samples comprising a different absolute density than absolute densities of the others of the contrast samples, a set of calcium samples, each of the calcium samples comprising a different absolute density than absolute densities of the others of the calcium samples, and a set of fat samples, each of the fat samples comprising a different absolute density than absolute densities of the others of the fat samples. The set of contrast samples can be arranged within the plurality of compartments such that the set of calcium samples and the set of fat samples surround the set of contrast samples. In an example, a computer-implemented method for generating a risk assessment of atherosclerotic cardiovascular disease (ASCVD) uses a normalization device (as described herein) to improve accuracy of the algorithm-based imaging analysis.
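Because the normalization device's samples have known densities, their measured values in a given scan can be used to fit a per-scan correction that is then applied to patient measurements. The sketch below illustrates that idea with a simple least-squares linear fit; the use of a linear fit and the numeric values shown are assumptions for illustration, not the disclosed normalization algorithm.

# Hedged sketch of per-scan normalization using the device's known-density samples.
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = mean_y - a * mean_x
    return a, b

# Measured values of the device samples in this scan (invented), paired with
# the samples' known reference densities.
measured_in_scan = [35.0, 210.0, 480.0, -80.0]
known_density = [40.0, 200.0, 500.0, -90.0]

a, b = linear_fit(measured_in_scan, known_density)

def normalize(value):
    """Map a measured value from this scan onto the device's reference scale."""
    return a * value + b

print(round(normalize(300.0), 1))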
In some embodiments, the medical imaging method includes receiving a first set of images of a first arterial bed and a first set of images of a second arterial bed, the second arterial bed being noncontiguous with the first arterial bed, and wherein at least one of the first set of images of the first arterial bed and the first set of images of the second arterial bed are normalized using the normalization device, quantifying ASCVD in the first arterial bed using the first set of images of the first arterial bed, quantifying ASCVD in the second arterial bed using the first set of images of the second arterial bed, and determining a first ASCVD risk score based on the quantified ASCVD in the first arterial bed and the quantified ASCVD in the second arterial bed. In some embodiments, the method further includes determining a first weighted assessment of the first arterial bed based on the quantified ASCVD of the first arterial bed and weighted adverse events for the first arterial bed, and determining a second weighted assessment of the second arterial bed based on the quantified ASCVD of the second arterial bed and weighted adverse events for the second arterial bed. In such embodiments, determining the first ASCVD risk score further comprises determining the ASCVD risk score based on the first weighted assessment and the second weighted assessment. In some embodiments, a method can further include receiving a second set of images of the first arterial bed and a second set of images of the second arterial bed, the second set of images of the first arterial bed generated subsequent to generating the first set of images of the first arterial bed, and the second set of images of the second arterial bed generated subsequent to generating the first set of images of the second arterial bed, quantifying ASCVD in the first arterial bed using the second set of images of the first arterial bed, quantifying ASCVD in the second arterial bed using the second set of images of the second arterial bed, and determining a second ASCVD risk score based on the quantified ASCVD in the first arterial bed using the second set of images, and the quantified ASCVD in the second arterial bed using the second set of images. Determining the second ASCVD risk score can be further based on the first ASCVD risk score. In some embodiments, the first arterial bed includes arteries of one of the aorta, carotid arteries, lower extremity arteries, renal arteries, or cerebral arteries. The second arterial bed includes arteries of one of the aorta, carotid arteries, lower extremity arteries, renal arteries, or cerebral arteries that are different than the arteries of the first arterial bed. Any of the methods described herein can be based on imaging using a normalization device to improve quality of the automatic image assessment of the generated images. In an embodiment, an output of these methods can be a single patient-level risk score that can improve arterial bed-specific event-free survival in a personalized fashion. In some embodiments, any of the quantification and characterization techniques and processes described in U.S. patent application Ser. No. 17/142,120, filed Jan. 5, 2020, titled “Systems, Methods, and Devices for Medical Image Analysis, Risk Stratification, Decision Making and/or Disease Tracking” (which is incorporated by reference herein), can be employed, in whole or in part, to generate an ASCVD risk assessment.
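An illustrative sketch of the two-bed workflow described above is given below: quantify ASCVD in each noncontiguous arterial bed, weight each bed by its bed-specific adverse events, and combine the results into a single patient-level score. The quantify_ascvd() stub and the weighted-sum combination are assumptions for illustration, not the disclosed algorithm.

# Sketch of the two-bed ASCVD workflow under stated assumptions.
def quantify_ascvd(images):
    # Placeholder for an image-analysis step (e.g., a trained segmentation model);
    # here it simply returns a mock plaque-burden value derived from the input.
    return {"plaque_burden": float(len(images))}

def weighted_assessment(quantified, event_weight):
    return quantified["plaque_burden"] * event_weight

def first_ascvd_risk_score(first_bed_images, second_bed_images,
                           first_bed_event_weight, second_bed_event_weight):
    q1 = quantify_ascvd(first_bed_images)
    q2 = quantify_ascvd(second_bed_images)
    w1 = weighted_assessment(q1, first_bed_event_weight)
    w2 = weighted_assessment(q2, second_bed_event_weight)
    return w1 + w2  # assumed combination: a simple weighted sum

# Example usage with dummy image lists and illustrative bed weights.
score = first_ascvd_risk_score(["img"] * 12, ["img"] * 7, 0.6, 0.4)
print(score)

A follow-up study would re-run the same pipeline on the second set of images and could fold the first risk score into the updated one, mirroring the serial assessment described above.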
Traditional cardiovascular imaging using 3D imaging by computed tomography, magnetic resonance imaging, nuclear imaging or ultrasound has relied upon imaging single vascular beds (or territories) as regions of interest. Sometimes, multiple body parts may be imaged if they are contiguous, for example, chest-abdomen-pelvis CT, carotid and cerebral artery imaging, etc. Multi-body part imaging can be useful to identify disease processes that affect adjoining or geographically close anatomic regions. Multi-body part imaging can be used to enhance diagnosis, prognostication and guide clinical decision making of therapeutic interventions (e.g., medications, percutaneous interventions, surgery, etc.). Additionally, methods that employ multi-body part imaging of non-contiguous arterial beds can be advantageous for enhancing diagnosis, prognostication and clinical decision making of atherosclerotic cardiovascular disease (ASCVD). ASCVD is a systemic disease that can affect all vessel beds, including coronary arteries, carotid arteries, aorta, renal arteries, lower extremity arteries, cerebral arteries and upper extremity arteries. While historically considered as a single diagnosis, the relative prevalence, extent, severity and type of ASCVD (and its consequent effects on vascular morphology) can exhibit very high variance between different arterial beds. As an example, patients with severe carotid artery atherosclerosis may have no coronary artery atherosclerosis. Alternatively, patients with severe coronary artery atherosclerosis may have milder forms of lower extremity atherosclerosis. As with the prevalence, extent and severity, so too can the types of atherosclerosis differ amongst vascular beds. A significant body of research now clarifies the clinical significance of atherosclerotic cardiovascular disease (ASCVD) burden, type and progression, as quantified and characterized by advanced imaging. As an example, coronary computed tomographic angiography (CCTA) now allows for quantitative assessment of ASCVD and vascular morphology in all major vascular territories. Several research trials have demonstrated that not only the amount (or burden) of ASCVD, but also the type of plaque is important in risk stratification; in particular, low attenuation plaques (LAP) and non-calcified plaques which exhibit positive arterial remodeling are implicated in greater incidence of future major adverse cardiovascular events (MACE); calcified plaques and, in particular, calcified plaques of higher density appear to be more stable. Studies that have evaluated this concept have included both observational studies and randomized controlled trials. Further, medication use has been associated with a reduction in LAP and an acceleration in calcified plaque formation in populations, with within-person estimates not yet reported. Medications such as statins, icosapent ethyl, and colchicine have been observed by coronary computed tomography angiography (CCTA) to be associated with modification of ASCVD in the coronary arteries. Similar findings relating to the complexity or type of ASCVD in the carotid arteries have been espoused as an explanation for stroke, as well as for the renal arteries and lower extremity arteries.
Accordingly, understanding the presence, extent, severity and type of ASCVD in each of the vascular arterial beds improves understanding of future risk of adverse cardiovascular events as well as the types of adverse cardiovascular events that will occur (e.g., heart attack versus stroke versus amputation, etc.), and can allow tracking of the effects of salutary medication and lifestyle modifications on the disease process in multiple arterial beds. Further, integrating the findings from non-contiguous arterial beds into a single prediction model can improve holistic assessment of an individual's risk and response to therapy over time in a personalized, precision-based fashion. In some examples, such assessments can include integrating an assessment of coronary arteries with an assessment of one or more other arterial beds, for example, one or more of the aorta, carotid arteries, lower extremity arteries, upper extremity arteries, renal arteries, and cerebral arteries. In some examples, such assessments can include integrating an assessment of any of the aorta, carotid arteries, lower extremity arteries, upper extremity arteries, renal arteries, or cerebral arteries with a different one of the aorta, carotid arteries, lower extremity arteries, upper extremity arteries, renal arteries, or cerebral arteries. Various embodiments described herein relate to systems and methods for determining assessments that may be used for reducing cardiovascular risk and/or events. For example, such assessments can be used to, at least partly, determine or generate lifestyle, medication and/or interventional therapies based upon actual atherosclerotic cardiovascular disease (ASCVD) burden, ASCVD type, and/or ASCVD progression. In some embodiments, the systems and methods described herein are configured to dynamically and/or automatically analyze medical image data, such as for example non-invasive CT, MRI, and/or other medical imaging data of the arterial beds of a patient, to generate one or more measurements indicative of or associated with the actual ASCVD burden, ASCVD type, and/or ASCVD progression, for example using one or more artificial intelligence (AI) and/or machine learning (ML) algorithms. The arterial beds can include, for example, coronary arteries, carotid arteries, lower extremity arteries, renal arteries, and/or cerebral arteries. In some embodiments, the systems and methods described herein can further be configured to automatically and/or dynamically generate assessments that can be used in one or more patient-specific treatments and/or medications based on the actual ASCVD burden, ASCVD type, and/or ASCVD progression, for example using one or more artificial intelligence (AI) and/or machine learning (ML) algorithms. In some embodiments, the systems and methods described herein are configured to utilize one or more CCTA algorithms and/or one or more medical treatment algorithms on two or more arterial bodies to quantify the presence, extent, severity and/or type of ASCVD, such as for example its localization and/or peri-lesion tissues. In some embodiments, the one or more medical treatment algorithms are configured to analyze any medical images obtained from any imaging modality, such as for example computed tomography (CT), magnetic resonance (MR), ultrasound, nuclear medicine, molecular imaging, and/or others.
In some embodiments, the systems and methods described herein are configured to utilize one or more medical treatment algorithms that are personalized (rather than population-based), treat actual disease (rather than surrogate markers of disease, such as risk factors), and/or are guided by changes in CCTA-identified ASCVD over time (such as for example, progression, regression, transformation, and/or stabilization). In some embodiments, the one or more CCTA algorithms and/or the one or more medical treatment algorithms are computer-implemented algorithms and/or utilize one or more AI and/or ML algorithms. In some embodiments, the systems and methods are configured to assess a baseline ASCVD in an individual using two or more arterial bodies. In some embodiments, the systems and methods are configured to evaluate ASCVD by utilizing coronary CT angiography (CCTA). In some embodiments, the systems and methods are configured to identify and/or analyze the presence, location, extent, severity, type of atherosclerosis, peri-lesion tissue characteristics, and/or the like. In some embodiments, the method of ASCVD evaluation can be dependent upon quantitative imaging algorithms that perform analysis of coronary, carotid, and/or other vascular beds (such as, for example, lower extremity, aorta, renal, and/or the like). In some embodiments, the systems and methods are configured to categorize ASCVD into specific categories based upon risk. Some examples of such categories include: Stage 0, Stage I, Stage II, Stage III; or none, minimal, mild, moderate; or primarily calcified vs. primarily non-calcified; or X units of low density non-calcified plaque; or X % of NCP as a function of overall volume or burden. In some embodiments, the systems and methods can be configured to quantify ASCVD continuously. In some embodiments, the systems and methods can be configured to define categories by levels of future ASCVD risk of events, such as heart attack, stroke, amputation, dissection, and/or the like. In some embodiments, one or more other non-ASCVD measures may be included to enhance risk assessment, such as for example cardiovascular measurements (left ventricular hypertrophy for hypertension; atrial volumes for atrial fibrillation; fat; etc.) and/or non-cardiovascular measurements that may contribute to ASCVD (e.g., emphysema). In some embodiments, these measurements can be quantified using one or more CCTA algorithms. In some embodiments, the systems and methods described herein can be configured to generate a personalized or patient-specific treatment based on an assessment of two or more arterial bodies. More specifically, in some embodiments, the systems and methods can be configured to generate therapeutic recommendations based upon ASCVD presence, extent, severity, and/or type. In some embodiments, rather than utilizing risk factors (such as, for example, cholesterol, diabetes), the treatment algorithm can comprise and/or utilize a tiered approach that intensifies medical therapy, lifestyle, and/or interventional therapies based upon ASCVD directly in a personalized fashion. In some embodiments, the treatment algorithm can be configured to generally ignore one or more conventional markers of success—such as lowering cholesterol, hemoglobin A1C, etc.—and instead leverage ASCVD presence, extent, severity, and/or type of disease to guide therapeutic decisions of medical therapy intensification.
In some embodiments, the treatment algorithm can be configured to combine one or more conventional markers of success—such as lowering cholesterol, hemoglobin A1C, etc.—with ASCVD presence, extent, severity, and/or type of disease to guide therapeutic decisions of medical therapy intensification. In some embodiments, the treatment algorithm can be configured to combine one or more novel markers of success—such as genetics, transcriptomics, or other 'omic measurements—with ASCVD presence, extent, severity, and/or type of disease to guide therapeutic decisions of medical therapy intensification. In some embodiments, the treatment algorithm can be configured to combine one or more other imaging markers of success—such as carotid ultrasound imaging, abdominal aortic ultrasound or computed tomography, lower extremity arterial evaluation, and others—with ASCVD presence, extent, severity, and/or type of disease to guide therapeutic decisions of medical therapy intensification. In some embodiments, the systems and methods are configured to update personalized treatment based upon response assessment of two or more arterial bodies. In particular, in some embodiments, based upon the change in ASCVD between the baseline and follow-up CCTA, personalized treatment can be updated and intensified if worsening occurs or de-escalated/kept constant if improvement occurs. As a non-limiting example, if stabilization has occurred, this can be evidence of the success of the current medical regimen. Alternatively, as another non-limiting example, if stabilization has not occurred and ASCVD has progressed, this can be evidence of the failure of the current medical regimen, and an algorithmic approach can be taken to intensify medical therapy. To facilitate an understanding of the systems and methods discussed herein, several terms are described below. These terms, as well as other terms used herein, should be construed to include the provided descriptions, the ordinary and customary meanings of the terms, and/or any other implied meaning for the respective terms, wherein such construction is consistent with context of the term. Thus, the descriptions below do not limit the meaning of these terms, but only provide example descriptions.
Presence of ASCVD: This can be the presence vs. absence of plaque; or the presence vs. absence of non-calcified plaque; or the presence vs. absence of low attenuation plaque.
Extent of ASCVD: This can include the following:
Total ASCVD Volume
Percent atheroma volume (atheroma volume/vessel volume×100)
Total atheroma volume normalized to vessel length (TAVnorm)
Diffuseness (% of vessel affected by ASCVD)
Severity of ASCVD: This can include the following:
ASCVD severity can be linked to population-based estimates normalized to age, gender, ethnicity, and/or CAD risk factors
Angiographic stenosis ≥70% or ≥50% in none, 1-, 2-, or 3-VD
Type of ASCVD: This can include the following:
Proportion (ratio, %, etc.) of plaque that is non-calcified vs. calcified
Proportion of plaque that is low attenuation non-calcified vs. non-calcified vs. low density calcified vs. high-density calcified
Absolute amount of non-calcified plaque and calcified plaque
Absolute amount of plaque that is low attenuation non-calcified vs. non-calcified vs. low density calcified vs. high-density calcified
Continuous grey-scale measurement of plaques without ordinal classification
Vascular remodeling imposed by plaque as positive remodeling (≥1.10 or ≥1.05 ratio of vessel diameter/normal reference diameter; or vessel area/normal reference area; or vessel volume/normal reference volume) vs. negative remodeling (≤1.10 or ≤1.05)
Vascular remodeling imposed by plaque as a continuous ratio
ASCVD Progression:
Progression can be defined as rapid vs. non-rapid, with thresholds to define rapid progression (e.g., >1.0% percent atheroma volume, >200 mm3 plaque, etc.)
Serial changes in ASCVD can include rapid progression, progression with primarily calcified plaque formation, progression with primarily non-calcified plaque formation, and regression
Categories of Risk:
Stages: 0, I, II, or III based upon plaque volumes associated with angiographic severity (none, non-obstructive, and obstructive 1VD, 2VD and 3VD)
Percentile for age and gender and ethnicity and presence of risk factor (e.g., diabetes, hypertension, etc.)
% calcified vs. % non-calcified as a function of overall plaque volume
X units of low density non-calcified plaque
Continuous 3D histogram analysis of grey scales of plaque by lesion, by vessel and by patient
Risk can be defined in a number of ways, including risk of MACE, risk of angina, risk of ischemia, risk of rapid progression, risk of medication non-response, etc.
Certain features in embodiments of systems and methods relating to determining an assessment of non-contiguous arterial beds are described below.
Medical Imaging of Non-Contiguous Arterial Beds
Systems and methods described herein also relate to medical imaging of non-contiguous arterial beds, for example, imaging of non-contiguous arterial beds in a single imaging examination. In other embodiments, non-contiguous arterial beds are imaged in two or more imaging examinations, and the information from the generated images can be used to determine information relating to a patient's health. As an example, the coronary arteries and carotid arteries are imaged using the same contrast bolus. In this case, the coronary arteries can be imaged by CCTA. Immediately after CCTA image acquisition, the CT table moves and images the carotid artery using the same or a supplemental contrast dose. The example here is given for CT imaging in a single examination, but can also be applied to combining information from multiple imaging examinations, or to multimodality imaging integration (e.g., ultrasound of the carotid; computed tomography of the coronary).
Automated Arterial Bed-Specific Risk Assessment
This is accomplished by an automated method for quantification and characterization of ASCVD in individual artery territories for improved diagnosis, prognostication, clinical decision making and tracking of disease changes over time. These findings may be arterial bed-specific. As an example, conversion of non-calcified plaque to calcified plaque may be a feature that is considered beneficial and a sign of effective medical therapy in the coronaries, but may be considered to be a pathologic process in the lower extremity arteries. Further, the prognostication enabled by the quantification and characterization of ASCVD in different artery territories may differ. As an example, untoward findings in the carotid arteries may prognosticate future stroke, while untoward findings in the coronary arteries may prognosticate future myocardial infarction.
Partial overlap of risk may occur, e.g., wherein adverse findings in the carotid arteries may be associated with an increase in coronary artery events.
Patient-Specific Risk Assessment
By combining the findings from each arterial bed, along with relative weighting of arterial bed findings, risk stratification, clinical decision making and disease tracking can be done with greater precision in a personalized fashion. Thus, patient-level prediction models are based upon understanding the ASCVD findings of non-contiguous arterial beds but communicated as a single integrated metric (e.g., 1-100, mild/moderate/severe risk, etc.).
Longitudinal Updating of Arterial Bed- and Patient-Specific Risk
By longitudinal serial imaging after treatment changes (e.g., medication, lifestyle, and others), changes in ASCVD can be quantified and characterized, and both arterial bed-specific as well as patient-level risk can be updated based upon the changes as well as the most contemporary ASCVD image findings. FIG.19Aillustrates an example of a process1900for determining a risk assessment using sequential imaging of noncontiguous arterial beds of a patient, according to some embodiments. At block1905, sequential imaging of noncontiguous arterial beds of a patient may be performed. As an example, sequential imaging of a noncontiguous first arterial bed and a second arterial bed may be performed. In some embodiments, the first arterial bed includes one of aorta, carotid arteries, lower extremity arteries, upper extremity arteries, renal arteries, or cerebral arteries, and the second arterial bed includes a different one of aorta, carotid arteries, lower extremity arteries, upper extremity arteries, renal arteries, or cerebral arteries. In some embodiments a third arterial bed may be imaged. In some embodiments a fourth arterial bed may be imaged. The third and fourth arterial beds may include one of aorta, carotid arteries, lower extremity arteries, upper extremity arteries, renal arteries, or cerebral arteries. The sequential imaging of the noncontiguous arterial beds may be done using the same settings on the imaging machine, at different times, or with different imaging modalities (for example, CT and ultrasound). At block1910, the process1900automatically quantifies and characterizes ASCVD in the imaged arterial beds. In some embodiments, the ASCVD in the first arterial bed and the second arterial bed are quantified and characterized using any of the quantifications and characterizations disclosed herein. For example, images of the first arterial bed are analyzed by a system configured with a machine learning program that has been trained on a plurality of arterial bed images and annotated features of arterial bed images. In other embodiments, the ASCVD in the first arterial bed and second arterial bed is quantified using other types of quantifications and characterizations. At block1915, the process1900generates a prognostic assessment of arterial bed specific adverse events. As an example, for the coronary arteries the adverse event can be a heart attack. In another example, for the carotid arteries the adverse event is a stroke. In another example, for the lower extremity arteries the adverse event is amputation. The adverse events can be determined from patient information that is accessible to the system performing the assessment, for example, from archived patient medical information (e.g., patient medical information1602illustrated inFIG.16) or any other stored information of a previous adverse event.
Each event can be associated with a weight based on a predetermined scheme. The weights can be, for example, a value between 0.00 and 1.00. The weights associated with different adverse events can be stored in a non-transient storage medium, for example, a database. For a patient, a weighted assessment of each particular occurrence of an adverse event can be determined. In some embodiments, the weights are multiplied by the event. For example, for a 1st occurrence of event 1 that has a weight of 0.05, one occurrence of that event results in a weighted assessment of 0.05. A second occurrence of event 1 may have the same weight, or a different weight, for example, an increased weight. In one example, a second occurrence of event 1 has a weight of 0.15, such that when two occurrences of the event occur the weighted assessment is the sum of the weights of the first and second occurrence (for example, 0.20). Other events can have different weights, and the weighted assessment can include the sum of all of the weights for all of the events that occurred. At block1920, the process1900uses the arterial bed specific risk assessment to determine a patient level risk score, for example, an ASCVD risk score. In an example, the ASCVD risk score is based on a weighted assessment of an arterial bed. In an example, the ASCVD risk score is based on a weighted assessment of an arterial bed and other patient information. At block1925, the process1900tracks changes in ASCVD based upon treatment and lifestyle to determine beneficial or adverse changes in ASCVD. In some embodiments, as indicated in block1930, the process1900uses additional sequential imaging, taken at a different time (e.g., days, weeks, months or years later), of one or more noncontiguous arterial beds, and the process1900updates arterial bed and patient level risk assessments and determines an updated ASCVD score based on the additional imaging. The baseline and updated assessment can also integrate non-imaging findings that are associated with arterial bed- and patient-specific risk. These may include laboratory tests (e.g., troponin, b-type natriuretic peptide, etc.); medication type, dose and duration (e.g., lovastatin 20 mg per day for 6 years); interactions between multiple medications (e.g., lovastatin alone versus lovastatin plus ezetimibe); biometric information (e.g., heart rate, heart rate variability, pulse oximetry, etc.); and patient history (e.g., symptoms, family history, etc.). By monitoring the ASCVD score and correlating changes in the ASCVD score with patient treatment(s) and patient lifestyle changes, better treatment protocols and lifestyle choices for that patient may be determined. FIG.19Billustrates an example where sequential, non-contiguous arterial bed imaging is performed. In this example, sequential, non-contiguous arterial bed imaging is performed for the (1) coronary arteries, and for the (2) carotid arteries. As can be seen in the quantification and characterization of the atherosclerosis in both the coronary and carotid arteries, the phenotypic makeup of the disease process is highly variable, with the coronary artery cross-sections showing both blue (calcified) and red (low-density non-calcified) plaque, and the carotid artery cross-sections only showing yellow (non-calcified) and red (low-density non-calcified) plaque. Further, the amount of atherosclerosis is higher in the coronary arteries than the carotid arteries, indicating a differential risk of heart attack and stroke, respectively.
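The occurrence-weighting arithmetic described above (block 1915/1920) reduces to a small summation; the sketch below mirrors the example weights in the text (0.05 for a first occurrence of event 1, 0.15 for a second, summing to 0.20). The dictionary layout and the reuse of the last listed weight for later occurrences are illustrative assumptions.

# Sketch of the adverse-event occurrence weighting described in the text.
EVENT_WEIGHTS = {
    # event name -> weights for the 1st, 2nd, ... occurrence
    "event 1": [0.05, 0.15],
}

def weighted_event_assessment(occurrences):
    """occurrences: mapping of event name -> number of times it occurred."""
    total = 0.0
    for event, count in occurrences.items():
        weights = EVENT_WEIGHTS.get(event, [])
        for i in range(count):
            # Reuse the last listed weight if there are more occurrences than weights.
            total += weights[min(i, len(weights) - 1)] if weights else 0.0
    return total

print(round(weighted_event_assessment({"event 1": 1}), 2))  # 0.05
print(round(weighted_event_assessment({"event 1": 2}), 2))  # 0.05 + 0.15 = 0.2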
FIG.19Cis an example of a process1950for determining a risk assessment of atherosclerotic cardiovascular disease (ASCVD) using sequential imaging of non-contiguous arterial beds, according to some embodiments. At block1952a first arterial bed of a patient is imaged. In some embodiments, the first arterial bed includes one of an aorta, carotid arteries, lower extremity arteries, upper extremity arteries, renal arteries, or cerebral arteries. In some embodiments, the imaging used can be digital subtraction angiography (DSA), duplex ultrasound (DUS), computed tomography (CT), magnetic resonance angiography (MRA), ultrasound, or magnetic resonance imaging (MRI), or another type of imaging that generates a representation of the arterial bed. At block1954the process1950images a second arterial bed. The imaging of the second arterial bed is noncontiguous with the first arterial bed. In some embodiments, the second arterial bed can be one of an aorta, carotid arteries, lower extremity arteries, upper extremity arteries, renal arteries, or cerebral arteries and is different than the first arterial bed. In some embodiments, imaging the second arterial bed can be performed by a DSA, DUS, CT, MRA, ultrasound, or MRI imaging process, or another imaging process. At block1956the process1950automatically quantifies ASCVD in the first arterial bed. At block1958, the process1950automatically quantifies ASCVD in the second arterial bed. The quantification of ASCVD in the first arterial bed and the second arterial bed can be done using any of the quantification disclosed herein (e.g., using a neural network trained with annotated images) or other quantification. At block1960, the process1950determines a first weighted assessment of the first arterial bed, the first weighted assessment associated with arterial bed specific adverse events for the first arterial bed. At block1962the process1950determines a second weighted assessment of the second arterial bed, the second weighted assessment associated with arterial bed specific adverse events for the second arterial bed. At block1964the process1950generates an ASCVD patient risk score based on the first weighted assessment and the second weighted assessment. FIG.19Dis an example of a process1970for determining a risk assessment using sequential imaging of non-contiguous arterial beds, according to some embodiments. At block1972, the process1970receives images of a first arterial bed and a second arterial bed, the second arterial bed being noncontiguous with the first arterial bed and different than the first arterial bed. In some embodiments, the first arterial bed can be one of an aorta, carotid arteries, lower extremity arteries, upper extremity arteries, renal arteries, or cerebral arteries. The imaging of the second arterial bed is noncontiguous with the first arterial bed. In some embodiments, the images of the first arterial bed were generated by a DSA, DUS, CT, MRA, ultrasound, or MRI imaging process, or another imaging process. In some embodiments, the images of the second arterial bed were generated by a DSA, DUS, CT, MRA, ultrasound, or MRI imaging process, or another imaging process. In some embodiments, the second arterial bed can be one of an aorta, carotid arteries, lower extremity arteries, upper extremity arteries, renal arteries, or cerebral arteries, and is different than the first arterial bed.
In some embodiments, the images of the first arterial bed and the second arterial bed may be received from a computer storage medium that is configured to store patient images. In some embodiments, the images of the first arterial bed and the second arterial bed may be received directly from a facility which generates the images. In some embodiments, the images of the first arterial bed and second arterial bed may be received indirectly from a facility which generates the images. In some embodiments, images of the first arterial bed may be received from a different source than images of the second arterial bed. At block1974the process1970automatically quantifies ASCVD in the first arterial bed. At block1976, the process1970automatically quantifies ASCVD in the second arterial bed. The quantification of ASCVD in the first arterial bed and the second arterial bed can be done using any of the quantification disclosed herein, or other quantification. At block1978the process1970determines a first weighted assessment of the first arterial bed, the first weighted assessment associated with arterial bed specific adverse events for the first arterial bed. At block1980the process1970determines a second weighted assessment of the second arterial bed, the second weighted assessment associated with arterial bed specific adverse events for the second arterial bed. At block1982the process1970generates an ASCVD patient risk score based on the first weighted assessment and the second weighted assessment. FIG.19Eis a block diagram depicting an embodiment of a computer hardware system1985configured to run software for implementing one or more embodiments of systems and methods for determining a risk assessment using sequential imaging of noncontiguous arterial beds of a patient. In some embodiments, the systems, processes, and methods described herein are implemented using a computing system, such as the one illustrated inFIG.19E. The example computer system1985is in communication with one or more computing systems1994and/or one or more data sources1995via one or more networks1993. WhileFIG.19Eillustrates an embodiment of a computing system1985, it is recognized that the functionality provided for in the components and modules of computer system1985can be combined into fewer components and modules, or further separated into additional components and modules. The computer system1985can comprise a Quantification, Weighting, and Assessment Engine1991that carries out the functions, methods, acts, and/or processes described herein, for example, in some embodiments, the functions of blocks1956,1958,1960,1962, and1964ofFIG.19C, and in some embodiments, the functions of blocks1972,1974,1976,1978,1980, and1982ofFIG.19D. The Quantification, Weighting, and Assessment Engine1991is executed on the computer system1985by a central processing unit1989discussed further below. In general the word “engine,” as used herein, refers to logic embodied in hardware or firmware or to a collection of software instructions, having entry and exit points. Such “engines” may also be referred to as modules, and are written in a programming language, such as JAVA, C, or C++, or the like. Software modules can be compiled or linked into an executable program, installed in a dynamic link library, or can be written in an interpreted language such as BASIC, PERL, LUA, PHP or Python and any such languages. Software modules can be called from other modules or from themselves, and/or can be invoked in response to detected events or interruptions.
Modules implemented in hardware include connected logic units such as gates and flip-flops, and/or can include programmable units, such as programmable gate arrays or processors. Generally, the modules described herein refer to logical modules that can be combined with other modules or divided into sub-modules despite their physical organization or storage. The modules are executed by one or more computing systems, and can be stored on or within any suitable computer readable medium, or implemented in-whole or in-part within specially designed hardware or firmware. Not all calculations, analyses, and/or optimizations require the use of computer systems, though any of the above-described methods, calculations, processes, or analyses can be facilitated through the use of computers. Further, in some embodiments, process blocks described herein can be altered, rearranged, combined, and/or omitted. The computer system1985includes one or more processing units (CPU, GPU, TPU)1989, which can comprise a microprocessor. The computer system1985further includes a physical memory1990, such as random access memory (RAM) for temporary storage of information, a read only memory (ROM) for permanent storage of information, and a mass storage device1986, such as a backing store, hard drive, rotating magnetic disks, solid state disks (SSD), flash memory, phase-change memory (PCM), 3D XPoint memory, diskette, or optical media storage device. Alternatively, the mass storage device can be implemented in an array of servers. Typically, the components of the computer system1985are connected to the computer using a standards-based bus system. The bus system can be implemented using various protocols, such as Peripheral Component Interconnect (PCI), Micro Channel, SCSI, Industry Standard Architecture (ISA) and Extended ISA (EISA) architectures. The computer system1985includes one or more input/output (I/O) devices and interfaces1988, such as a keyboard, mouse, touch pad, and printer. The I/O devices and interfaces1988can include one or more display devices, such as a monitor, that allows the visual presentation of data to a user. More particularly, a display device provides for the presentation of GUIs as application software data, and multi-media presentations, for example. The I/O devices and interfaces1988can also provide a communications interface to various external devices. The computer system1985can comprise one or more multi-media devices1985, such as speakers, video cards, graphics accelerators, and microphones, for example.
Computing System Device/Operating System
The computer system1985can run on a variety of computing devices, such as a server, a Windows server, a Structured Query Language server, a Unix Server, a personal computer, a laptop computer, and so forth. In other embodiments, the computer system1985can run on a cluster computer system, a mainframe computer system and/or other computing system suitable for controlling and/or communicating with large databases, performing high volume transaction processing, and generating reports from large databases. The computing system1985is generally controlled and coordinated by operating system software, such as z/OS, Windows, Linux, UNIX, BSD, PHP, SunOS, Solaris, MacOS, ICloud services or other compatible operating systems, including proprietary operating systems.
Operating systems control and schedule computer processes for execution, perform memory management, provide file system, networking, and I/O services, and provide a user interface, such as a graphical user interface (GUI), among other things.
Network
The computer system1985illustrated inFIG.19Eis coupled to a network1993, such as a LAN, WAN, or the Internet via a communication link1992(wired, wireless, or a combination thereof). Network1993communicates with various computing devices and/or other electronic devices. Network1993is in communication with one or more computing systems1994and one or more data sources1995. For example, the computer system1985can receive image information (e.g., including images of arteries or an arterial bed, information associated with the images, etc.) from computing systems1994and/or data source1995via the network1993and store the received image information in the mass storage device1986. The Quantification, Weighting, and Assessment Engine1991can then access the mass storage device1986as needed. In some embodiments, the Quantification, Weighting, and Assessment Engine1991can access computing systems1994and/or data sources1995, or be accessed by computing systems1994and/or data sources1995, through a web-enabled user access point. Connections can be a direct physical connection, a virtual connection, or another connection type. The web-enabled user access point can comprise a browser module that uses text, graphics, audio, video, and other media to present data and to allow interaction with data via the network1993. The output module can be implemented as a combination of an all-points addressable display such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, or other types and/or combinations of displays. The output module can be implemented to communicate with input devices1988and can also include software with the appropriate interfaces which allow a user to access data through the use of stylized screen elements, such as menus, windows, dialogue boxes, tool bars, and controls (for example, radio buttons, check boxes, sliding scales, and so forth). Furthermore, the output module can communicate with a set of input and output devices to receive signals from the user.
Other Systems
The computing system1985can include one or more internal and/or external data sources (for example, data sources1995). In some embodiments, one or more of the data repositories and the data sources described above can be implemented using a relational database, such as DB2, Sybase, Oracle, CodeBase, and Microsoft® SQL Server, as well as other types of databases such as a flat-file database, an entity relationship database, an object-oriented database, and/or a record-based database. The computer system1985can also access one or more databases1995. The data sources1995can be stored in a database or data repository. The computer system1985can access the one or more data sources1995through a network1993or can directly access the database or data repository through I/O devices and interfaces1988. The data repository storing the one or more data sources1995can reside within the computer system1985.
Additional Detail—General
In connection with any of the features and/or embodiments described herein, in some embodiments, the system can be configured to analyze, characterize, track, and/or utilize one or more plaque features derived from a medical image.
For example, in some embodiments, the system can be configured to analyze, characterize, track, and/or utilize one or more dimensions of plaque and/or an area of plaque, in two dimensions, three dimensions, and/or four dimensions, for example over time or changes over time. In addition, in some embodiments, the system can be configured to rank one or more areas of plaque and/or utilize such ranking for analysis. In some embodiments, the ranking can be binary, ordinal, continuous, and/or mathematically transformed. In some embodiments, the system can be configured to analyze, characterize, track, and/or utilize the burden or one or more geometries of plaque and/or an area of plaque. For example, in some embodiments, the one or more geometries can comprise spatial mapping in two dimensions, three dimensions, and/or four dimensions over time. As another example, in some embodiments, the system can be configured to analyze transformation of one or more geometries. In some embodiments, the system can be configured to analyze, characterize, track, and/or utilize diffuseness of plaque regions, such as spotty v. continuous. For example, in some embodiments, pixels or voxels within a region of interest can be compared to pixels or voxels outside of the region of interest to gain more information. In particular, in some embodiments, the system can be configured to analyze a plaque pixel or voxel with another plaque pixel or voxel. In some embodiments, the system can be configured to compare a plaque pixel or voxel with a fat pixel or voxel. In some embodiments, the system can be configured to compare a plaque pixel or voxel with a lumen pixel or voxel. In some embodiments, the system can be configured to analyze, characterize, track, and/or utilize location of plaque or one or more areas of plaque. For example, in some embodiments, the location of plaque determined and/or analyzed by the system can include whether the plaque is within the left anterior descending (LAD), left circumflex artery (LCx), and/or the right coronary artery (RCA). In particular, in some embodiments, plaque in the proximal LAD can influence plaque in the mid-LAD, and plaque in the LCx can influence plaque in the LAD, such as via mixed effects modeling. As such, in some embodiments, the system can be configured to take into account neighboring structures. In some embodiments, the location can be based on whether it is in the proximal, mid, or distal portion of a vessel. In some embodiments, the location can be based on whether a plaque is in the main vessel or a branch vessel. In some embodiments, the location can be based on whether the plaque is myocardial facing or pericardial facing (for example as an absolute binary dichotomization or as a continuous characterization around 360 degrees of an artery), whether the plaque is juxtaposed to fat or epicardial fat or not juxtaposed to fat or epicardial fat, subtending a substantial amount of myocardium or subtending small amounts of myocardium, and/or the like. For example, arteries and/or plaques that subtend large amounts of subtended myocardium can behave differently than those that do not. As such, in some embodiments, the system can be configured to take into account the relation to the percentage of subtended myocardium. In connection with any of the features and/or embodiments described herein, in some embodiments, the system can be configured to analyze, characterize, track, and/or utilize one or more peri-plaque features derived from a medical image. 
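As a concrete illustration of the pixel/voxel comparisons and plaque ranking described above, the following minimal Python sketch compares the mean radiodensity of an assumed plaque region of interest against neighboring lumen and fat voxels and ranks hypothetical plaque regions by approximate volume; the function names, voxel values, and voxel size are illustrative assumptions rather than the patented implementation.

```python
# Minimal sketch (not the patented implementation): comparing voxels inside a
# plaque region of interest with neighboring lumen and fat voxels, and ranking
# plaque regions by volume. All values below are invented for illustration.
import numpy as np

def roi_contrast(plaque_hu, neighbor_hu):
    """Difference in mean radiodensity (HU) between a plaque ROI and a
    neighboring tissue ROI (e.g., lumen or peri-coronary fat)."""
    return float(np.mean(plaque_hu) - np.mean(neighbor_hu))

def rank_plaques_by_volume(plaque_voxel_counts, voxel_volume_mm3=0.25):
    """Rank plaque regions (largest first) by approximate volume in mm^3."""
    volumes = {k: n * voxel_volume_mm3 for k, n in plaque_voxel_counts.items()}
    return sorted(volumes.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    plaque = np.array([45.0, 60.0, 75.0, 30.0])   # assumed non-calcified plaque HU samples
    lumen = np.array([350.0, 400.0, 380.0])       # assumed contrast-enhanced lumen HU samples
    fat = np.array([-80.0, -95.0, -70.0])         # assumed peri-coronary fat HU samples
    print("plaque vs lumen contrast:", roi_contrast(plaque, lumen))
    print("plaque vs fat contrast:", roi_contrast(plaque, fat))
    print("ranking:", rank_plaques_by_volume({"LAD_prox": 120, "LCx_mid": 40, "RCA_dist": 75}))
```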
In particular, in some embodiments, the system can be configured to analyze lumen, for example in two dimensions in terms of area, three dimensions in terms of volume, and/or four dimensions across time. In some embodiments, the system can be configured to analyze the vessel wall, for example in two dimensions in terms of area, three dimensions in terms of volume, and/or four dimensions across time. In some embodiments, the system can be configured to analyze peri-coronary fat. In some embodiments, the system can be configured to analyze the relationship to myocardium, such as for example a percentage of subtended myocardial mass. In connection with any of the features and/or embodiments described herein, in some embodiments, the system can be configured to analyze and/or use medical images obtained using different image acquisition protocols and/or variables. In some embodiments, the system can be configured to characterize, track, analyze, and/or otherwise use such image acquisition protocols and/or variables in analyzing images. For example, image acquisition parameters can include one or more of mA, kVp, spectral CT, photon counting detector CT, and/or the like. Also, in some embodiments, the system can be configured to take into account ECG gating parameters, such as retrospective v. prospective ECG helical. Another example can be prospective axial v. no gating. In addition, in some embodiments, the system can be configured to take into account whether medication was used to obtain the image, such as for example with or without a beta blocker, with or without contrast, with or without nitroglycerin, and/or the like. Moreover, in some embodiments, the system can be configured to take into account the presence or absence of a contrast agent used during the image acquisition process. For example, in some embodiments, the system can be configured to normalize an image based on a contrast type, contrast-to-noise ratio, and/or the like. Further, in some embodiments, the system can be configured to take into account patient biometrics when analyzing a medical image. For example, in some embodiments, the system can be configured to normalize an image to Body Mass Index (BMI) of a subject, normalize an image to signal-to-noise ratio, normalize an image to image noise, normalize an image to tissue within the field of view, and/or the like. In some embodiments, the system can be configured to take into account the image type, such as for example CT, non-contrast CT, MRI, x-ray, nuclear medicine, ultrasound, and/or any other imaging modality mentioned herein. In connection with any of the features and/or embodiments described herein, in some embodiments, the system can be configured to normalize any analysis and/or results, whether or not based on image processing. For example, in some embodiments, the system can be configured to standardize any reading or analysis of a subject, such as those derived from a medical image of the subject, to a normative reference database. Similarly, in some embodiments, the system can be configured to standardize any reading or analysis of a subject, such as those derived from a medical image of the subject, to a diseased database, such as for example patients who experienced heart attack, patients who are ischemic, and/or the like. In some embodiments, the system can be configured to utilize a control database for comparison, standardization, and/or normalization purposes. 
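The standardization of a subject-level reading against a normative or diseased reference database described above can be illustrated with a simple z-score, as in the following sketch; the reference values and the choice of plaque volume as the measurement are assumptions for illustration only.

```python
# Minimal sketch, not the system's actual normalization: standardizing a
# subject-level measurement (e.g., total plaque volume) against a reference
# database using a z-score. The reference statistics are invented.
from statistics import mean, stdev

def z_score(value, reference_values):
    """Standardize `value` against a list of reference measurements."""
    mu = mean(reference_values)
    sigma = stdev(reference_values)
    return (value - mu) / sigma

if __name__ == "__main__":
    normative_db = [110.0, 95.0, 130.0, 120.0, 105.0]   # assumed reference plaque volumes (mm^3)
    diseased_db = [380.0, 290.0, 450.0, 410.0, 330.0]   # assumed post-heart-attack cohort
    subject_value = 260.0
    print("z vs normative:", round(z_score(subject_value, normative_db), 2))
    print("z vs diseased:", round(z_score(subject_value, diseased_db), 2))
```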
For example, a control database can comprise data derived from a combination of subjects, such as 50% of subjects who experienced heart attack and 50% who did not, and/or the like. In some embodiments, the system can be configured to normalize any analysis, result, or data by applying a mathematical transform, such as a linear, logarithmic, exponential, and/or quadratic transform. In some embodiments, the system can be configured to normalize any analysis, result, or data by applying a machine learning algorithm. In connection with any of the features and/or embodiments described herein, in some embodiments, the term “density” can refer to radiodensity, such as in Hounsfield units. In connection with any of the features and/or embodiments described herein, in some embodiments, the term “density” can refer to absolute density, such as for example when analyzing images obtained from imaging modalities such as dual energy, spectral, photon counting CT, and/or the like. In some embodiments, one or more images analyzed and/or accessed by the system can be normalized to contrast-to-noise ratio. In some embodiments, one or more images analyzed and/or accessed by the system can be normalized to signal-to-noise ratio. In some embodiments, one or more images analyzed and/or accessed by the system can be normalized across the length of a vessel, such as for example along a transluminal attenuation gradient. In some embodiments, one or more images analyzed and/or accessed by the system can be mathematically transformed, for example by applying a logarithmic, exponential, and/or quadratic transformation. In some embodiments, one or more images analyzed and/or accessed by the system can be transformed using machine learning. In connection with any of the features and/or embodiments described herein, in some embodiments, the term “artery” can include any artery, such as for example, coronary, carotid, cerebral, aortic, renal, lower extremity, and/or upper extremity. In connection with any of the features and/or embodiments described herein, in some embodiments, the system can utilize additional information obtained from various sources in analyzing and/or deriving data from a medical image. For example, in some embodiments, the system can be configured to obtain additional information from patient history and/or physical examination. In some embodiments, the system can be configured to obtain additional information from other biometric data, such as those which can be gleaned from wearable devices, which can include for example heart rate, heart rate variability, blood pressure, oxygen saturation, sleep quality, movement, physical activity, chest wall impedance, chest wall electrical activity, and/or the like. In some embodiments, the system can be configured to obtain additional information from clinical data, such as for example from Electronic Medical Records (EMR). In some embodiments, additional information used by the system can be linked to serum biomarkers, such as for example markers of cholesterol, renal function, inflammation, myocardial damage, and/or the like. In some embodiments, additional information used by the system can be linked to other omics markers, such as for example transcriptomics, proteomics, genomics, metabolomics, microbiomics, and/or the like.
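The linear, logarithmic, exponential, and quadratic transforms mentioned above can be applied to radiodensity values as in the following minimal sketch; the coefficients and offsets are illustrative assumptions, not values prescribed by the system.

```python
# Minimal sketch of linear, logarithmic, exponential, and quadratic transforms
# applied to a series of radiodensity values. Coefficients are illustrative.
import math

def linear(x, a=1.0, b=0.0):
    return a * x + b

def logarithmic(x, offset=1024.0):
    # The offset shifts HU values (which can be negative) into a positive range.
    return math.log(x + offset)

def exponential(x, scale=0.001):
    return math.exp(scale * x)

def quadratic(x, a=1.0, b=0.0, c=0.0):
    return a * x * x + b * x + c

if __name__ == "__main__":
    hu_values = [-80.0, 30.0, 150.0, 400.0, 900.0]
    for hu in hu_values:
        print(hu, round(linear(hu, 0.5, 10), 2), round(logarithmic(hu), 3),
              round(exponential(hu), 3), round(quadratic(hu, 0.001), 2))
```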
In connection with any of the features and/or embodiments described herein, in some embodiments, the system can utilize medical image analysis to derive and/or generate assessment of a patient and/or provide assessment tools to guide patient assessment, thereby adding clinical importance and use. In some embodiments, the system can be configured to generate risk assessment at the plaque-level (for example, will this plaque cause heart attack and/or does this plaque cause ischemia), vessel-level (for example, will this vessel be the site of a future heart attack and/or does this vessel exhibit ischemia), and/or patient level (for example, will this patient experience heart attack and/or the like). In some embodiments, the summation or weighted summation of plaque features can contribute to segment-level features, which in turn can contribute to vessel-level features, which in turn can contribute to patient-level features. In some embodiments, the system can be configured to generate a risk assessment of future major adverse cardiovascular events, such as for example heart attack, stroke, hospitalizations, unstable angina, stable angina, coronary revascularization, and/or the like. In some embodiments, the system can be configured to generate a risk assessment of rapid plaque progression, medication non-response (for example if plaque progresses significantly even when medications are given), benefit (or lack thereof) of coronary revascularization, new plaque formation in a site that does not currently have any plaque, development of symptoms (such as angina, shortness of breath) that is attributable to the plaque, ischemia and/or the like. In some embodiments, the system can be configured to generate an assessment of other artery consequences, such as for example carotid (stroke), lower extremity (claudication, critical limb ischemia, amputation), aorta (dissection, aneurysm), renal artery (hypertension), cerebral artery (aneurysm, rupture), and/or the like. Additional Detail—Determination of Non-Calcified Plaque from a Medical Image(s) As discussed herein, in some embodiments, the system can be configured to determine non-calcified plaque from a medical image, such as a non-contrast CT image and/or image obtained using any other image modality as those mentioned herein. Also, as discussed herein, in some embodiments, the system can be configured to utilize radiodensity as a parameter or measure to distinguish and/or determine non-calcified plaque from a medical image. In some embodiments, the system can utilize one or more other factors, which can be in addition to and/or used as an alternative to radiodensity, to determine non-calcified plaque from a medical image. For example, in some embodiments, the system can be configured to utilize absolute material densities via dual energy CT, spectral CT or photon-counting detectors. In some embodiments, the system can be configured to analyze the geometry of the spatial maps that “look” like plaque, for example compared to a known database of plaques. In some embodiments, the system can be configured to utilize smoothing and/or transform functions to get rid of image noise and heterogeneity from a medical image to help determine non-calcified plaque. In some embodiments, the system can be configured to utilize auto-adjustable and/or manually adjustable thresholds of radiodensity values based upon image characteristics, such as for example signal-to-noise ratios, body morph (for example obesity can introduce more image noise), and/or the like. 
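The roll-up from plaque-level features to segment-level, vessel-level, and patient-level features by summation or weighted summation, as described above, might be sketched as follows; the segment names, scores, and weights are hypothetical.

```python
# Minimal sketch, under assumed weights, of aggregating plaque-level scores
# into segment-level, vessel-level, and patient-level scores.
def roll_up(plaques_by_segment, segments_by_vessel, plaque_w=1.0, segment_w=1.0, vessel_w=1.0):
    """Aggregate plaque scores to segment, vessel, and patient level."""
    segment_scores = {seg: plaque_w * sum(scores) for seg, scores in plaques_by_segment.items()}
    vessel_scores = {ves: segment_w * sum(segment_scores[s] for s in segs)
                     for ves, segs in segments_by_vessel.items()}
    patient_score = vessel_w * sum(vessel_scores.values())
    return segment_scores, vessel_scores, patient_score

if __name__ == "__main__":
    # Hypothetical plaque risk scores grouped by coronary segment.
    plaques = {"LAD_prox": [0.4, 0.2], "LAD_mid": [0.1], "RCA_prox": [0.3]}
    vessels = {"LAD": ["LAD_prox", "LAD_mid"], "RCA": ["RCA_prox"]}
    print(roll_up(plaques, vessels))
```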
In some embodiments, the system can be configured to utilize different thresholds based upon different arteries. In some embodiments, the system can be configured to account for potential artifacts, such as beam hardening artifacts that may preferentially affect certain arteries (for example, the spine may affect right coronary artery in some instances). In some embodiments, the system can be configured to account for different image acquisition parameters, such as for example, prospective vs. retrospective ECG gating, how much mA and kvP, and/or the like. In some embodiments, the system can be configured to account for different scanner types, such as for example fast-pitch helical vs. traditional helical. In some embodiments, the system can be configured to account for patient-specific parameters, such as for example heart rate, scan volume in imaged field of view, and/or the like. In some embodiments, the system can be configured to account for prior knowledge. For example, in some embodiments, if a patient had a contrast-enhanced CT angiogram in the past, the system can be configured to leverage findings from the previous contrast-enhanced CT angiogram for a non-contrast CT image(s) of the patient moving forward. In some embodiments, in cases where epicardial fat is not present outside an artery, the system can be configured to leverage other Hounsfield unit threshold ranges to depict the outer artery wall. In some embodiments, the system can be configured to utilize a normalization device, such as those described herein, to account for differences in scan results (such as for example density values, etc.) between different scanners, scan parameters, and/or the like. Additional Detail—Determination of Cause of Change in Calcium As discussed herein, in some embodiments, the system can be configured to determine a cause of change in calcium level of a subject by analyzing one or more medical images. In some embodiments, the change in calcium level can be by some external force, such as for example, medication treatment, lifestyle change (such as improved diet, physical activity), stenting, surgical bypass, and/or the like. In some embodiments, the system is configured to include one or more assessments of treatment and/or recommendations of treatment based upon these findings. In some embodiments, the system can be configured to determine a cause of change in calcium level of a subject and use the same for prognosis. In some embodiments, the system can be configured to enable improved diagnosis of atherosclerosis, stenosis, ischemia, inflammation in the peri-coronary region, and/or the like. In some embodiments, the system can be configured to enable improved prognostication, such as for example forecasting of some clinical event, such as major adverse cardiovascular events, rapid progression, medication non-response, need for revascularization, and/or the like. In some embodiments, the system can be configured to enable improved prediction, such as for example enabling identification of who will benefit from what therapy and/or enabling monitoring of those changes over time. In some embodiments, the system can be configured to enable improved clinical decision making, such as for example which medications may be helpful, which lifestyle interventions might be helpful, which revascularization or surgical procedures may be helpful, and/or the like. 
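One way an auto-adjustable radiodensity threshold could respond to image characteristics such as noise, as discussed in this passage, is sketched below; the baseline cutoff and scaling rule are assumptions for illustration and not the system's actual rule.

```python
# Minimal sketch (an assumption, not the system's actual rule): widening a
# radiodensity threshold used to detect non-calcified plaque when image noise
# is higher, e.g., for scans of patients with larger body habitus.
def adjusted_hu_threshold(base_threshold_hu, image_noise_hu, noise_factor=2.0):
    """Widen the detection threshold proportionally to measured image noise."""
    return base_threshold_hu + noise_factor * image_noise_hu

if __name__ == "__main__":
    base = 30.0                          # assumed baseline HU cutoff for low-density plaque
    for noise in (5.0, 15.0, 30.0):      # assumed noise levels (std. dev. in HU)
        print(noise, "->", adjusted_hu_threshold(base, noise))
```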
In some embodiments, the system can be configured to enable comparison to one or more normative databases in order to standardize findings to a known ground truth database. In some embodiments, a change in calcium level can be linear, non-linear, and/or transformed. In some embodiments, a change in calcium level can be on its own or in other words involve just calcium. In some embodiments, a change in calcium level can be in relation to one or more other constituents, such as for example, other non-calcified plaque, vessel volume/area, lumen volume/area, and/or the like. In some embodiments, a change in calcium level can be relative. For example, in some embodiments, the system can be configured to determine whether a change in calcium level is above or below an absolute threshold, whether a change in calcium level comprises a continuous change upwards or downwards, whether a change in calcium level comprises a mathematical transform upwards or downwards, and/or the like. As discussed herein, in some embodiments, the system can be configured to analyze one or more variables or parameters, such as those relating to plaque, in determining the cause of a change in calcium level. For example, in some embodiments, the system can be configured to analyze one or more plaque parameters, such as a ratio or function of volume or surface area, heterogeneity index, geometry, location, directionality, and/or radiodensity of one or more regions of plaque within the coronary region of the subject at a given point in time. As discussed herein, in some embodiments, the system can be configured to characterize a change in calcium level between two points in time. For example, in some embodiments, the system can be configured to characterize a change in calcium level as one of positive, neutral, or negative. In some embodiments, the system can be configured to characterize a change in calcium level as positive when the ratio of volume to surface area of a plaque region has decreased, as this can be indicative of how homogeneous and compact the structure is. In some embodiments, the system can be configured to characterize a change in calcium level as positive when the size of a plaque region has decreased. In some embodiments, the system can be configured to characterize a change in calcium level as positive when the density of a plaque region has increased or when an image of the region of plaque comprises more pixels with higher density values, as this can be indicative of stable plaque. In some embodiments, the system can be configured to characterize a change in calcium level as positive when there is a reduced diffuseness. For example, if three small regions of plaque converge into one contiguous plaque, that can be indicative of non-calcified plaque calcifying along the entire plaque length. In some embodiments, the system can be configured to characterize a change in calcium level as negative when the system determines that a new region of plaque has formed. In some embodiments, the system can be configured to characterize a change in calcium level as negative when more vessels with calcified plaque appear. In some embodiments, the system can be configured to characterize a change in calcium level as negative when the ratio of volume to surface area has increased. In some embodiments, the system can be configured to characterize a change in calcium level as negative when there has been no increase in Hounsfield density per calcium pixel. 
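A minimal rule-based sketch of the positive/neutral/negative characterization of a change in calcium described above is shown below; the field names, rules, and example values are assumptions chosen to mirror the examples in the text rather than the patented logic.

```python
# Minimal rule-based sketch of characterizing a change in calcium level as
# positive, neutral, or negative between two scans. Rules and values are
# illustrative assumptions based on the examples given in the text.
def characterize_calcium_change(prev, curr):
    """prev/curr are dicts with 'volume', 'surface_area', 'mean_density',
    and 'num_regions' for the tracked calcified plaque."""
    if curr["num_regions"] > prev["num_regions"]:
        return "negative"          # new calcified region(s) formed
    prev_ratio = prev["volume"] / prev["surface_area"]
    curr_ratio = curr["volume"] / curr["surface_area"]
    if curr_ratio < prev_ratio and curr["mean_density"] >= prev["mean_density"]:
        return "positive"          # more compact and at least as dense
    if curr_ratio > prev_ratio and curr["mean_density"] <= prev["mean_density"]:
        return "negative"          # larger ratio with no density gain
    return "neutral"

if __name__ == "__main__":
    scan_1 = {"volume": 90.0, "surface_area": 60.0, "mean_density": 420.0, "num_regions": 3}
    scan_2 = {"volume": 85.0, "surface_area": 62.0, "mean_density": 510.0, "num_regions": 3}
    print(characterize_calcium_change(scan_1, scan_2))
```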
In some embodiments, the system can be configured to utilize a normalization device, such as those described herein, to account for differences in scan results (such as for example density values, etc.) between different scanners, scan parameters, and/or the like. Additional Detail—Quantification of Plaque, Stenosis, and/or CAD-RADS Score As discussed herein, in some embodiments, the system can be configured to generate quantifications of plaque, stenosis, and/or CAD-RADS scores from a medical image. In some embodiments, as part of such quantification analysis, the system can be configured to determine a percentage of higher or lower density plaque within a plaque region. For example, in some embodiments, the system can be configured to classify higher density plaque as pixels or voxels that comprise a Hounsfield density unit above 800 and/or 1000. In some embodiments, the system can be configured to classify lower density plaque as pixels or voxels that comprise a Hounsfield density unit below 800 and/or 1000. In some embodiments, the system can be configured to utilize other thresholds. In some embodiments, the system can be configured to report measures on a continuous scale, an ordinal scale, and/or a mathematically transformed scale. In some embodiments, the system can be configured to utilize a normalization device, such as those described herein, to account for differences in scan results (such as for example density values, etc.) between different scanners, scan parameters, and/or the like. Additional Detail—Disease Tracking As discussed herein, in some embodiments, the system can be configured to track the progression and/or regression of an arterial and/or plaque-based disease, such as atherosclerosis, stenosis, ischemia, and/or the like. For example, in some embodiments, the system can be configured to track the progression and/or regression of a disease over time by analyzing one or more medical images obtained from two different points in time. As an example, in some embodiments, one or more normal regions from an earlier scan can turn into abnormal regions in the second scan or vice versa. In some embodiments, the one or more medical images obtained from two different points in time can be obtained from the same modality and/or different modalities. For example, scans from both points in time can be CT, whereas in some cases the earlier scan can be CT while the later scan can be ultrasound. Further, in some embodiments, the system can be configured to track the progression and/or regression of disease by identifying and/or tracking a change in density of one or more pixels and/or voxels, such as for example Hounsfield density and/or absolute density. In some embodiments, the system can be configured to track change in density of one or more pixels or voxels on a continuous basis and/or dichotomous basis. For example, in some embodiments, the system can be configured to classify an increase in density as stabilization of a plaque region and/or classify a decrease in density as destabilization of a plaque region. In some embodiments, the system can be configured to analyze surface area and/or volume of a region of plaque, ratio between the two, absolute values of surface area and/or volume, gradient(s) of surface area and/or volume, mathematical transformation of surface area and/or volume, directionality of a region of plaque, and/or the like. 
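The percentage split of higher-density versus lower-density plaque voxels around a Hounsfield-unit cutoff such as 800 or 1000, as described above, can be computed as in the following sketch; the sample voxel values are invented.

```python
# Minimal sketch: percentage of "higher density" vs "lower density" plaque
# voxels relative to a Hounsfield-unit cutoff (e.g., 800 or 1000).
def density_split(voxel_hu_values, cutoff_hu=1000.0):
    """Return the percentage of voxels at/above and below the cutoff."""
    total = len(voxel_hu_values)
    higher = sum(1 for hu in voxel_hu_values if hu >= cutoff_hu)
    return 100.0 * higher / total, 100.0 * (total - higher) / total

if __name__ == "__main__":
    plaque_voxels = [650, 720, 810, 990, 1040, 1210, 1325, 880]   # invented HU samples
    for cutoff in (800.0, 1000.0):
        hi, lo = density_split(plaque_voxels, cutoff)
        print(f"cutoff {cutoff}: {hi:.1f}% higher-density, {lo:.1f}% lower-density")
```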
In some embodiments, the system can be configured to track the progression and/or regression of disease by analyzing vascular morphology. For example, in some embodiments, the system can be configured to analyze and/or track the effects of the plaque on the outer vessel wall getting bigger or smaller, the effects of the plaque on the inner vessel lumen getting smaller or bigger, and/or the like. In some embodiments, the system can be configured to utilize a normalization device, such as those described herein, to account for differences in scan results (such as for example density values, etc.) between different scanners, scan parameters, and/or the like. Global Ischemia Index Some embodiments of the systems, devices, and methods described herein are configured to determine a global ischemia index that is representative of risk of ischemia for a particular subject. For example, in some embodiments, the system is configured to generate a global ischemia index for a subject based at least in part on analysis of one or more medical images and/or contributors of ischemia as well as consequences and/or associated factors to ischemia along the temporal ischemic cascade. In some embodiments, the generated global ischemia index can be used by the systems, methods, and devices described herein for determining and/or predicting the outcome of one or more treatments and/or generating or guiding a recommended medical treatment, therapy, medication, and/or procedure for the subject. In particular, in some embodiments, the systems, devices, and methods described herein can be configured to automatically and/or dynamically analyze one or more medical images and/or other data to identify one or more features, such as plaque, fat, and/or the like, for example using one or more machine learning, artificial intelligence (AI), and/or regression techniques. In some embodiments, one or more features identified from medical image data can be inputted into an algorithm, such as a second-tier algorithm which can be a regression algorithm or multivariable regression equation, for automatically and/or dynamically generating a global ischemia index. In some embodiments, the AI algorithm for determining a global ischemia index can be configured to utilize one or more variables as input, such as different temporal stages of the ischemia cascade as described herein, and compare the same to an output, such as myocardial blood flow, as a ground truth. In some embodiments, the output, such as myocardial blood flow, can be indicative of the presence or absence of ischemia as a binary measure and/or one or more moderations of ischemia, such as none, mild, moderate, severe, and/or the like. In some embodiments, the system can be configured to utilize a normalization device, such as those described herein, to account for differences in scan results (such as for example density values, etc.) between different scanners, scan parameters, and/or the like. 
In some embodiments, by utilizing one or more computer-implemented algorithms, such as for example one or more machine learning and/or regression techniques, the systems, devices, and methods described herein can be configured to analyze one or more medical images and/or other data to generate a global ischemia index and/or a recommended treatment or therapy within a clinical reasonable time, such as for example within about 1 minute, about 2 minutes, about 3 minutes, about 4 minutes, about 5 minutes, about 10 minutes, about 20 minutes, about 30 minutes, about 40 minutes, about 50 minutes, about 1 hour, about 2 hours, about 3 hours, and/or within a time period defined by two of the aforementioned values. In generating the global ischemia index, in some embodiments, the systems, devices, and methods described herein are configured to: (a) temporally integrate one or more variables along the “ischemic” pathway and weight their input differently based upon their temporal sequence in the development and worsening of coronary ischemia; and/or (b) integrate the contributors, associated factors and consequences of ischemia to improve diagnosis of ischemia. Furthermore, in some embodiments, the systems, devices, and methods described herein transcend analysis beyond just the coronary arteries or just the left ventricular myocardium, and instead can include a combination one or more of: coronary arteries; coronary arteries after nitroglycerin or vasodilator administration; relating coronary arteries to the fractional myocardial mass; non-cardiac cardiac examination; relationship of the coronary-to-non-coronary cardiac; and/or non-cardiac examinations. In addition, in some embodiments, the systems, devices, and methods described herein can be configured to determine the fraction of myocardial mass or subtended myocardial mass to vessel or lumen volume, for example in combination with any of the other features described herein such as the global ischemia index, to further determine and/or guide a recommended medical treatment or procedure, such as revascularization, stenting, surgery, medication such as statins, and/or the like. As such, in some embodiments, the systems, devices, and methods described herein are configured to evaluate ischemia and/or provide recommended medical treatment for the same in a manner that does not currently exist today, accounting for the totality of information contributing to ischemia. In some embodiments, the system can be configured to differentiate between micro and macro vascular ischemia, for example based on analysis of one or more of epicardial coronaries, measures of myocardium densities, myocardium mass, volume of epicardial coronaries, and/or the like. In some embodiments, by differentiating between micro and macro vascular ischemia, the system can be configured to generate different prognostic and/or therapeutic approaches based on such differentiation. In some embodiments, when a medical image(s) of a patient is obtained, such as for example using CT, MRI, and/or any other modality, not only information relating to coronary arteries but other information is also obtained, which can include information relating to the vascular system and/or the rest of the heart and/or chest area that is within the frame of reference. 
While certain technologies may simply focus on the information relating to coronary arteries from such medical scans, some embodiments described herein are configured to leverage more of the information that is inherently obtained from such images to obtain a more global indication of ischemia and/or use the same to generate and/or guide medical therapy. In particular, in some embodiments, the systems, devices, and methods described herein are configured to examine both the contributors as well as consequences and associated factors to ischemia, rather than focusing only on either contributors or consequences. In addition, in some embodiments, the systems, devices, and methods described herein are configured to consider the entirety and/or a portion of temporal sequence of ischemia or the “ischemic pathway.” Moreover, in some embodiments, the systems, devices, and methods described herein are configured to consider the non-coronary cardiac consequences as well as the non-cardiac associated factors that contribute to ischemia. Further, in some embodiments, the systems, devices, and methods described herein are configured to consider the comparison of pre- and post-coronary vasodilation. Furthermore, in some embodiments, the systems, devices, and methods described herein are configured to consider a specific list of variables, rather than a general theme, appropriately weighting their contribution to ischemia. Also, in some embodiments, the systems, devices, and methods described herein can be validated against multiple “measurements” of ischemia, including absolutely myocardial blood flow, myocardial perfusion, and/or flow ratios. Generally speaking, ischemia diagnosis is currently evaluated by either stress tests (myocardial ischemia) or flow ratios in the coronary artery (coronary ischemia), the latter of which can include fractional flow reserve, instantaneous wave-free pressure ratio, hyperemic resistance, coronary flow, and/or the like. However, coronary ischemia can be thought of as only an indirect measure of what is going on in the myocardium, and myocardial ischemia can be thought of as only an indirect measure of what is going on in the coronary arteries. Further certain tests measure only individual components of ischemia, such as contributors of ischemia (such as, stenosis) or sequelae of ischemia (such as, reduced myocardial perfusion or blood flow). However, there are numerous other contributors to ischemia beyond stenosis, numerous associated factors that increase likelihood of ischemia, and many other early and late consequences of ischemia. One technical shortcoming of such existing techniques is that if you only look at factors that contribute or are associated with ischemia, then you are always too early—i.e., in the pre-ischemia stage. Conversely, if you only look at factors that are consequences/sequelae of ischemia, then you are always too late—i.e., in the post-ischemia stage. And ultimately, if you do not look at everything (including associative factors, contributors, early and late consequences), you will not understand where an individual exists on the continuum of coronary ischemia. This may have very important implications in the type of therapy an individual should undergo—such as for example medical therapy, intensification of medical therapy, coronary revascularization by stenting, and/or coronary revascularization by coronary artery bypass surgery. 
As such, in some embodiments described herein, the systems, methods, and devices are configured to generate or determine a global ischemia index for a particular patient based at least in part on analysis of one or more medical images or data of the patient, wherein the generated global ischemia index is a measure of ischemia for the patient along the continuum of coronary ischemia or the ischemic cascade as described in further detail below. In other words, in some embodiments, unlike in existing technologies or techniques, the global ischemia index generated by the system can be indicative of a stage or risk or development of ischemia of a particular patient along the continuum of coronary ischemia or the ischemic cascade. Further, there can be a relationship between the things that contribute/cause ischemia and the consequences/sequelae of ischemia that occur in a continuous and overlapping fashion. Thus, it can be much more accurate to identify ischemic individuals by combining various factors that contribute/cause ischemia with factors that are consequences/sequelae of ischemia. As such, in some embodiments described herein, the systems, devices, and methods are configured to analyze one or more associative factors, contributors, as well as early and late consequences of ischemia in generating a global ischemia index, which can provide a more global indication of the risk of ischemia. Further, in some embodiments described herein, the systems, devices, and methods are configured to use such generated global ischemia index to determine and/or guide a type of therapy an individual should undergo, such as for example medical therapy, intensification of medical therapy, coronary revascularization by stenting, and/or coronary revascularization by coronary artery bypass surgery. As discussed herein, in some embodiments, the systems, devices, and methods are configured to generate a global ischemia index indicative and/or representative of a risk of ischemia for a particular subject based on one or more medical images and/or other data. More specifically, in some embodiments, the system can be configured to generate a global ischemia index as a measurement of myocardial ischemia. In some embodiments, the generated global ischemia index provides a much more accurate and/or direct measurement of myocardial ischemia compared to existing techniques. Ischemia, by its definition, is an inadequate blood supply to an organ or part of the body. By this definition, the diagnosis of ischemia can be best performed by examining the relationship of the coronary arteries (blood supply) to the heart (organ or part of the body). However, this is not the case as current generation tests measure either the coronary arteries (e.g., FFR, iFR) or the heart (e.g. stress testing by nuclear SPECT, PET, CMR or echo). Because current generation tests fail to examine the relationships of the coronary arteries, they do not account for the temporal sequence of events that occurs in the evolution of ischemia (from none-to-some, as well as from mild-to-moderate-to-severe) or the “ischemic pathway,” as will be described in more detail herein. Quantifying the relationship of the coronary arteries to the heart and other non-coronary structures to the manifestation of ischemia, as well as the temporal findings associated with the stages of ischemia in the ischemic cascade, can improve our accuracy of diagnosis—as well as our understanding of ischemia severity—in a manner not possible with current generation tests. 
As discussed above, no test currently exists for directly measuring ischemia; rather, existing tests only measure certain specific factors or surrogate markers associated with ischemia, such as for example hypoperfusion or fractional flow reserve (FFR) or wall motion abnormalities. In other words, the current approaches to ischemia evaluation are entirely too simplistic and do not consider all of the variables. Ischemia has historically been “measured” by stress tests. The possible stress tests that exist include: (a) exercise treadmill ECG testing without imaging; (b) stress testing by single photon emission computed tomography (SPECT); (c) stress testing by positron emission tomography (PET); (d) stress testing by computed tomography perfusion (CTP); (e) stress testing by cardiac magnetic resonance (CMR) perfusion; and (f) stress testing by echocardiography. Also, SPECT, PET, CTP and CMR can measure relative myocardial perfusion, in that you compare the most normal appearing portion of the left ventricular myocardium to the abnormal-appearing areas. PET and CTP can have the added capability of measuring absolute myocardial blood flow and using these quantitative measures to assess the normality of blood supply to the left ventricle. In contrast, exercise treadmill ECG testing measures ST-segment depression as an indirect measure of subendocardial ischemia (reduced blood supply to the inner portion of the heart muscle), while stress echocardiography evaluates the heart for stress-induced regional wall motion abnormalities of the left ventricle. Abnormal relative perfusion, absolute myocardial blood flow, ST segment depression and regional wall motion abnormalities occur at different points in the “ischemic pathway.” Furthermore, in contrast to myocardial measures of the left ventricle, alternative methods to determine ischemia involve direct evaluation of the coronary arteries with pressure or flow wires. The most common 2 measurements are fractional flow reserve (FFR) or iFR. These techniques can compare the pressure distal to a given coronary stenosis to the pressure proximal to the stenosis. While easy to understand and potentially intuitive, these techniques do not account for important parameters that can contribute to ischemia, including diffuseness of “mild” stenoses, types of atherosclerosis causing stenosis; and these techniques take into account neither the left ventricle in whole nor the % left ventricle subtended by a given artery. In some embodiments, the global ischemia index is a measure of myocardial ischemia, and leverages the quantitative information regarding the contributors, associated factors and consequences of ischemia. Further, in some embodiments, the system uses these factors to augment ischemia prediction by weighting their contribution accordingly. In some embodiments, the global ischemia index is aimed to serve as a direct measure of both myocardial perfusion and coronary pressure and to integrate these findings to improve ischemia diagnosis. In some embodiments, unlike existing ischemia “measurement” techniques that focus only on a single factor or a single point in the ischemic pathway, the systems, devices, and methods described herein are configured to analyze and/or use as inputs one or more factors occurring at different points in the ischemic pathway in generating the global ischemia index. 
In other words, in some embodiments, the systems, devices, and methods described herein are configured to take into account the whole temporal ischemic cascade in generating a global ischemia index for assessing the risk of ischemia and/or generating a recommended treatment or therapy for a particular subject. FIG.20Aillustrates one or more features of an example ischemic pathway. While the ischemic pathway is not definitively proven, it is thought to be as shown inFIG.20A. Having said this, this ischemic pathway may not actually occur in this exact sequence. The ischemic pathway may in fact occur in a different order, or many of the events may occur simultaneously and overlap. Nonetheless, the different points along the ischemic pathway can occur at different points in time, thereby adding a temporal aspect to the development of ischemia that some embodiments described herein consider. As illustrated inFIG.20A, the ischemic pathway can illustrate different conditions that can occur when a blockage in a heart artery reduces blood supply to the heart muscle. In other words, the ischemic pathway can illustrate a sequence of pathophysiologic events caused by coronary artery disease. As illustrated inFIG.20A, ischemia can occur or gradually develop in a number of different steps rather than as a binary event. The ischemic pathway illustrates different conditions that may arise as a patient gets more and more ischemic. Different existing tests can show ischemia at different stages along the ischemic pathway. For example, a nuclear stress test can show ischemia sooner than an echo test, because nuclear imaging probes hypoperfusion, which is an earlier event in the ischemic pathway, whereas stress echocardiography probes a later event such as systolic dysfunction. Further, exercise treadmill ECG testing can show ischemia only sometime after an echo stress test would, because the ECG changes it detects arise at a later stage of the cascade. In addition, a PET scan can measure flow maldistribution, and as such can show signs of ischemia before nuclear stress tests. As such, different tests exist for measuring different conditions and steps along the ischemic cascade. However, there does not exist a global technique that takes into account all of these different conditions that arise throughout the course of the ischemic pathway. As such, in some embodiments herein, the systems, devices and methods are configured to analyze multiple different measures along the temporal ischemic pathway and/or weight them differently in generating a global ischemia index, which can be used to diagnose ischemia and/or provide a recommended therapy and/or treatment. In some embodiments, such multiple measures along the temporal ischemic pathway can be weighted differently in generating the global ischemia index; for example, certain measures that come earlier can be weighted less than those measures that arise later in the ischemic cascade in some embodiments. More specifically, in some embodiments, one or more measures of ischemia can be weighted from less to more heavily in the following general order: flow maldistribution, hypoperfusion, diastolic dysfunction, systolic dysfunction, ECG changes, angina, and/or regional wall motion abnormality. In some embodiments, the system can be configured to take the temporal sequence of the ischemic pathway and integrate and weight various conditions or events accordingly in generating the global ischemia index.
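The idea of weighting findings along the temporal ischemic cascade, with later-stage findings weighted more heavily than earlier ones, might be sketched as follows; the weights and the example findings are assumptions, since the actual weights would be assigned or learned by the system.

```python
# Minimal sketch, with invented weights, of combining findings along the
# temporal ischemic cascade so that later-stage findings count more heavily.
CASCADE_ORDER = [
    "flow_maldistribution", "hypoperfusion", "diastolic_dysfunction",
    "systolic_dysfunction", "ecg_changes", "angina", "regional_wall_motion_abnormality",
]
# Monotonically increasing weights; the actual values would be learned or assigned.
WEIGHTS = {name: (i + 1) / len(CASCADE_ORDER) for i, name in enumerate(CASCADE_ORDER)}

def cascade_score(findings):
    """findings maps cascade stage -> 0/1 (absent/present) or a graded value."""
    return sum(WEIGHTS[name] * findings.get(name, 0.0) for name in CASCADE_ORDER)

if __name__ == "__main__":
    patient = {"hypoperfusion": 1.0, "diastolic_dysfunction": 1.0}   # hypothetical findings
    print(round(cascade_score(patient), 3))
```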
Further, in some embodiments, the system can be configured to identify certain conditions or “associative factors” well before actual signs ischemia occur, such as for example fatty liver which is associated with diabetes which is associated with coronary disease. In other words, in some embodiments, the system can be configured to integrate one or more factors that are associated, causal, contributive, and/or consequential to ischemia, take into account the temporal sequence of the same and weight them accordingly to generate an index representative of and/or predicting risk of ischemia and/or generating a recommend treatment. As discussed herein, the global ischemia index generated by some embodiments provide substantial technical advantages over existing techniques for assessing ischemia, which have a number of shortcomings. For example, coronary artery examination alone does not consider the wealth of potential contributors to ischemia, including for example: (1) 3D flow (lumen, stenosis, etc.); (2) endothelial function/vasodilation/vasoconstrictive ability of the coronary artery (e.g., plaque type, burden, etc.); (3) inflammation that may influence the vasodilation/vasoconstrictive ability of the coronary artery (e.g., epicardial adipose tissue surrounding the heart); and/or (4) location (plaques that face the myocardium are further away from the epicardial fat, and may be less influenced by the inflammatory contribution of the fat. Plaques that are at the bifurcation, trifurcation or proximal/ostial location may influence the likelihood of ischemia more than those that are not at the bifurcation, trifurcation or proximal/ostial location). One important consideration is that current methods for determining ischemia by CT rely primarily on computational fluid dynamics which, by its definition, does not include fluid-structure interactions (FSI). However, the use of FSI requires the understanding of the material densities of coronary artery vessels and their plaque constituents, which is not known well. Thus, in some embodiments described herein, one important component is that the lateral boundary conditions in the coronary arteries (lumen wall, vessel wall, plaque) can be known in a relative fashion by setting Hounsfield unit thresholds that represent different material densities or setting absolute material densities to pixels based upon comparison to a known material density (i.e., normalization device in our prior patent). By doing so, and coupling to a machine learning algorithm, some embodiments herein can improve upon the understanding of fluid-structure interactions without having to understand the exact material density, which may inform not only ischemia (blood flow within the vessel) but the ability of a plaque to “fatigue” over time. In addition, in some embodiments, the system is configured to take into account non-coronary cardiac examination and data in addition to coronary cardiac data. The coronary arteries supply blood to not only the left ventricle but also the other chambers of the heart, including the left atrium, the right ventricle and the right atrium. While perfusion is not well measured in these chambers by current generation stress tests, in some embodiments, the end-organ effects of ischemia can be measured in these chambers by determining increases in blood volume or pressure (i.e., size or volumes). 
Further, if blood volume or pressure increases in these chambers, they can have effects of “backing up” blood flow due to volume overload into the adjacent chambers or vessels. So, as a chain reaction, increases in left ventricular volume may increase volumes in sequential order of: (1) left atrium; (2) pulmonary vein; (3) pulmonary arteries; (4) right ventricle; (5) right atrium; (6) superior vena cava or inferior vena cava. In some embodiments, by taking into account non-coronary cardiac examination, the system can be configured to differentiate the role of ischemia on the heart chambers based upon how “upstream” or “downstream” they are in the ischemic pathway. Moreover, in some embodiments, the system can be configured to take into account the relationship of coronary arteries and non-coronary cardiac examination. Existing methods of ischemia determination limit their examination to either the coronary arteries (e.g., FFR, iFR) or the heart left ventricular myocardium. However, in some embodiments herein, the relationship of the coronary arteries with the heart chambers may act synergistically to improve our diagnosis of ischemia. Further, in some embodiments, the system can be configured to take into account non-cardiac examination. At present, no method of coronary/myocardial ischemia determination accounts for the effects of clinical contributors (e.g., hypertension, diabetes) on the likelihood of ischemia. However, these clinical contributors can manifest several image-based end-organ effects which may increase the likelihood of an individual to manifest ischemia. These can include such image-based signs such as aortic dimension (aneurysms are a common end-organ effect of hypertension) and/or non-alcoholic steatohepatitis (fatty liver is a common end-organ effect of diabetes or pre-diabetes). As such, in some embodiments, the system can be configured to account for these features to augment the likelihood of ischemia diagnosis on a scan-specific, individualized manner. Furthermore, at present, no method of myocardial ischemia determination incorporates other imaging findings that may not be ascertainable by a single method, but can be determined through examination by other methods. For example, the ischemia pathway is often thought to occur, in sequential order, from metabolic alterations (laboratory tests), perfusion abnormalities (stress perfusion), diastolic dysfunction (echocardiogram), systolic dysfunction (echocardiogram or stress test), ECG changes (ECG) and then angina (chest pain, human patient report). In some embodiments, the system can be configured to integrate these factors with the image-based findings of the CT scan and allow for improvements in ischemia determination by weighting these variables in accordance with their stage of the ischemic cascade. As described herein, in some embodiments, the systems, methods, and devices are configured to generate a global ischemia index to diagnose ischemia. In some embodiments, the global ischemia index considers the totality of findings that contribute to ischemia, including, for example one or more of: coronary arteries+nitroglycerin/vasodilator administration+relating coronary arteries to the fractional myocardial mass+non-cardiac cardiac examination+relationship of the coronary-to-non-coronary cardiac+non-cardiac examinations, and/or a subset thereof. 
In some embodiments, the global ischemia index weights the contribution of each variable to ischemia based upon where the corresponding image-based finding falls in the pathophysiology of ischemia. In some embodiments, in generating the global ischemia index, the system is configured to input into a regression model one or more factors that are associative, contributive, causal, and/or consequential to ischemia to optimally diagnose whether a subject is ischemic or not. FIG.20Bis a block diagram depicting one or more contributors and one or more temporal sequences of consequences of ischemia utilized by an example embodiment(s) described herein. As illustrated inFIG.20B, in some embodiments, the system can be configured to analyze a number of factors, including contributors, associated factors, causal factors, and/or consequential factors of ischemia and/or use the same as input for generating the global ischemia index. Some such factors can include those conditions shown inFIG.20B. For example, signs of a fatty liver and/or emphysema in the lungs can be associated factors used by the system as inputs for generating the global ischemia index. Some examples of contributors used as an input(s) by the system can include the inability to vasodilate with nitric oxide and/or nitroglycerin, low density non-calcified plaque, a small artery, and/or the like. Some examples of early consequences of ischemia used as an input(s) by the system can include reduced perfusion in the heart muscle and an increase in the volume of the heart. An example of late consequences of ischemia used as an input(s) by the system can include blood starting to back up into other chambers of the heart in addition to the left ventricle. In some embodiments, the global ischemia index accounts for the direct contributors to ischemia, the early consequences of ischemia, the late consequences of ischemia, the factors associated with ischemia, and other test findings in relation to ischemia. In some embodiments, one or more of these factors can be identified and/or derived automatically, semi-automatically, and/or dynamically using one or more algorithms, such as a machine learning algorithm. Some example algorithms for identifying such features are described in more detail below. Without such trained algorithms, it can be difficult, if not impossible, to take into account all of these factors in generating the global ischemia index within a reasonable time. In some embodiments, these factors, weighted differently and appropriately, can improve diagnosis of ischemia. FIG.20Cis a block diagram depicting one or more features of an example embodiment(s) for determining ischemia by weighting different factors differently. In some embodiments, in generating the global ischemia index, the system is configured to take into account the temporal aspect of the ischemic cascade and weight one or more factors according to the temporal aspect, for example where early signs of ischemia can be weighted less heavily compared to later signs of ischemia. In some embodiments, the system can automatically and/or dynamically determine the different weights for each factor, for example using a regression model. In some embodiments, the system can be configured to derive one or more appropriate weighting factors based on previous analysis of data to determine which factor should be more or less heavily weighted compared to others. In some embodiments, a user can guide and/or otherwise provide input for weighting different factors.
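A regression model that takes associative, contributive, causal, and consequential factors as inputs and outputs an ischemia classification, as described above, might be sketched with a logistic regression as follows; the features, sample data, and labels (standing in for a ground truth such as measured myocardial blood flow) are invented, and scikit-learn is assumed to be available.

```python
# Minimal sketch (not the patented model): logistic regression on a handful of
# hypothetical contributor/consequence features to classify ischemia vs no
# ischemia against an assumed ground-truth label.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns (all invented): [stenosis_%, low_density_plaque_volume_mm3,
#                          perfusion_deficit_flag, diastolic_dysfunction_flag]
X = np.array([
    [70, 120, 1, 1],
    [30,  10, 0, 0],
    [55,  60, 1, 0],
    [20,   5, 0, 0],
    [80, 150, 1, 1],
    [40,  20, 0, 1],
])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = ischemia per hypothetical blood-flow ground truth

model = LogisticRegression().fit(X, y)
new_patient = np.array([[65, 90, 1, 0]])
print("predicted class:", model.predict(new_patient)[0])
print("probability of ischemia:", round(model.predict_proba(new_patient)[0, 1], 3))
```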
As described herein, in some embodiments, the global ischemia index can be generated by a machine learning algorithm and/or a regression algorithm that condenses this multidimensional information into an output of “ischemia” or “no ischemia” when compared to a “gold standard” of ischemia, as measured by myocardial blood flow, myocardial perfusion or flow ratios. In some embodiments, the system can be configured to output an indication of moderation of ischemia, such none, mild, moderate, severe, and/or the like. In some embodiments, the output indication of ischemia can be on a continuous scale. FIG.20Dis a block diagram depicting one or more features of an example embodiment(s) for calculating a global ischemia index. As illustrated inFIG.20D, in some embodiments, the system can be configured to validate the outputted global ischemia index against absolute myocardial blood flow, which can be measured for example by PET and/or CT scans to measure different regions of the heart to see if there are different flows of blood within different regions. As absolute myocardial blood flow can provide an absolute value of volume per time, in some embodiments, the system can be configured to compare the absolute myocardial blood flow of one region to another region, which would not be possible using relative measurements, such as for example using nuclear stress testing. As discussed herein, in some embodiments, the systems, devices, and methods can be configured to utilize a machine learning algorithm and/or regression algorithm for analyzing and/or weighting different factors for generating the global ischemia index. By doing so, in some embodiments, the system can be configured to take into account one or more statistical and/or machine learning considerations. More specifically, in some embodiments, the system can be configured to deliberately duplicate the contribution of particular variables. For example, in some embodiments, non-calcified plaque (NCP), low density non-calcified plaque (LD-NCP), and/or high-risk plaque (HRP) may all contribute to ischemia. In traditional statistics, collinearity could be a reason to select only one out of these three variables, but by utilizing machine learning in some embodiments, the system may allow for data driven exploration of the contribution of multiple variables, even if they share a specific feature. In addition, in some embodiments, the system may take into account certain temporal considerations when training and/or applying an algorithm for generating the global ischemia index. For example, in some embodiments, the system can be configured to give greater weight to consequences/sequelae rather than causes/contributors, as the consequences/sequelae have already occurred. In addition, in some situations, coronary vasodilation is induced before a coronary CT scan because it allows the coronary arteries to be maximum in their size/volume. Nitroglycerin is an endothelium-independent vasodilator as compared to, for example, nitric oxide, which is an endothelium-dependent vasodilator. As nitroglycerin-induced vasodilation occurs in the coronary arteries—and, because a “timing” iodine contrast bolus is often administered before the actual coronary CT angiogram, comparison of the volume of coronary arteries before and after a nitroglycerin administration may allow a direct evaluation of coronary vasodilatory capability, which may significantly augment accurate ischemia diagnosis. 
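The comparison of coronary artery volume before and after nitroglycerin administration as an indication of vasodilatory capability, as described above, reduces to a simple fractional change; the volumes in the sketch below are invented.

```python
# Minimal sketch, using invented numbers, of estimating vasodilatory capability
# from coronary artery volume before and after vasodilator administration.
def vasodilatory_capacity(volume_pre_mm3, volume_post_mm3):
    """Fractional change in coronary volume after vasodilator administration."""
    return (volume_post_mm3 - volume_pre_mm3) / volume_pre_mm3

if __name__ == "__main__":
    pre, post = 820.0, 940.0    # assumed total coronary artery volumes (mm^3)
    capacity = vasodilatory_capacity(pre, post)
    print(f"vasodilatory capacity: {capacity:.1%}")   # larger change suggests preserved vasodilatory response
```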
Alternatively, an endothelium-dependent vasodilator, like nitric oxide or carbon dioxide, may allow for augmentation of coronary artery size in a manner that can be either replaced or coupled to endothelium-independent vasodilation (by nitroglycerin) to maximize understanding of the ability of coronary arteries to vasodilate. In some embodiments, the system can be configured to measure vasodilatory effects, for example by measuring the diameter of one or more arteries before and/or after administration of nitroglycerin and/or nitric oxide, and use such vasodilatory effects as a direct measurement or indication of ischemia. Alternatively and/or in addition to the foregoing, in some embodiments, the system can be configured to measure such vasodilatory effects and use the same as an input in determining or generating the global ischemia index and/or developing a recommended medical therapy or treatment for the subject. Further, in some embodiments, the system can be configured to relate the coronary arteries to the heart muscle that they provide blood to. In other words, in some embodiments, the system can be configured to take into account fractional myocardial mass when generating a global ischemia index. For ischemia diagnosis, stress testing can be, at present, limited to the left ventricle. For example, in stress echocardiogram (ultrasound), the effects of stress-induced left ventricular regional wall motion abnormalities are examined, while in SPECT, PET and cardiac MRI, the effects of stress-induced left ventricular myocardial perfusion are examined. However, no currently existing technique relates the size (volume), geometry, path, and relation to other vessels of an artery to the % fractional myocardial mass subtended by that artery. Further, one assumes that the coronary artery distribution is optimal but, in many people, it may not be. Therefore, understanding an optimization platform to compute optimal blood flow through the coronary arteries may be useful in guiding treatment decisions. As such, in some embodiments, the system is configured to determine the fractional myocardial mass or the relationship of coronary arteries to the left ventricular myocardium that they subtend. In particular, in some embodiments, the system is configured to determine and/or take into account the subtended mass of myocardium relative to the volume of the arterial vessel. Historically, myocardial perfusion evaluation for myocardial ischemia has been performed using stress tests, such as nuclear SPECT, PET, cardiac MRI or cardiac CT perfusion. These methods have relied upon a 17-segment myocardial model, which classifies perfusion defects by location. There can be several limitations to this, including: (1) assuming that all 17 segments have the same size; (2) assuming that all 17 segments have the same prognostic importance; and (3) not relating the myocardial segments to the coronary arteries that provide blood supply to them. As such, to address such shortcomings, in some embodiments, the system can be configured to analyze fractional myocardial mass (FMM). Generally speaking, FMM aims to relate the coronary arteries to the amount of myocardium that they subtend. This can have important implications for prognostication and treatment. For example, a patient may have a 70% stenosis in an artery, which has been a historical cut point where coronary revascularization (stenting) is considered.
However, there may be very important prognostic and therapeutic implications for patients who have a 70% stenosis in an artery that subtends 1% of the myocardium vs. a 70% stenosis in an artery that subtends 15% of the myocardium. This FMM has been historically calculated using a “stem-and-crown” relationship between the myocardium on CT scans and the coronary arteries on CT scans and has been reported to have the following relationship: M=kL^(3/4), where M=mass, k=constant, and L=length. However, this relationship, while written about quite frequently, has not been validated extensively. Nor have there been any cut points that can effectively guide therapy. The guidance of therapy can come in many regards, including: (1) decision to perform revascularization: high FMM, perform revascularization to improve event-free survival; low FMM, medical therapy alone without revascularization; (2) different medical therapy regimens: high FMM, give several medications to improve event-free survival; low FMM, give few medications; (3) prognostication: high FMM, poor prognosis; low FMM, good prognosis. Further, in the era of 3D imaging, the M=kL^(3/4) relationship should be expanded to an M=kV relationship, where V=volume of the vessel or volume of the lumen. As such, in some embodiments, the system is configured to (1) describe the allometric scaling law in 3 dimensions, i.e., M=kV^n; (2) use FMM as a cut point to guide coronary revascularization; and/or (3) use FMM cut points for clinical decision making, including (a) use of medications vs. not, (b) different types of medications (cholesterol lowering, vasodilators, heart rate slowing medications, etc.) based upon FMM cut points; (c) number of medications based upon FMM cut points; and/or (d) prognostication based upon FMM cut points. In some embodiments, the use of FMM cut points by 3D FMM calculations can improve decision making in a manner that improves event-free survival. As described above, in some embodiments, the system can be configured to utilize one or more contributors or causes of ischemia as inputs for generating a global ischemia index. An example of a contributor or cause of ischemia that can be utilized as input and/or analyzed by the system can include vessel caliber. In particular, in some embodiments, the system can be configured to analyze and/or utilize as an input the percentage diameter of stenosis, wherein the greater the stenosis, the more likely the ischemia. In addition, in some embodiments, the system can be configured to analyze and/or utilize as an input lumen volume, wherein the smaller the lumen volume, the more likely the ischemia. In some embodiments, the system can be configured to analyze and/or utilize as an input lumen volume indexed to % fractional myocardial mass, body surface area (BSA), body mass index (BMI), left ventricle (LV) mass, overall heart size, wherein the smaller the lumen volume, the more likely the ischemia. In some embodiments, the system can be configured to analyze and/or utilize as an input vessel volume, wherein the smaller the vessel volume, the more likely the ischemia. In some embodiments, the system can be configured to analyze and/or utilize as an input minimal luminal diameter (MLD), minimal luminal area (MLA), and/or a ratio between MLD and MLA, such as MLD/MLA. Another example contributor or cause of ischemia that can be utilized as input and/or analyzed by the system can include plaque, which may have marked effects on the ability of an artery to vasodilate/vasoconstrict.
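The allometric relationship and the cut-point logic discussed above can be sketched as follows in Python; the constant k, the exponent n, the 10% FMM cut point, and the example vessel volume and LV mass are purely illustrative assumptions rather than validated values from any embodiment.

# Hypothetical sketch of the 3D allometric relationship M = k * V**n relating
# subtended myocardial mass to vessel (or lumen) volume, and of an FMM cut point
# used to guide therapy. k, n, the 10% cut point, and the inputs are assumed values.
def subtended_mass(vessel_volume_mm3, k=0.1, n=0.75):
    return k * vessel_volume_mm3 ** n

def fractional_myocardial_mass(vessel_volume_mm3, total_lv_mass_g):
    return 100.0 * subtended_mass(vessel_volume_mm3) / total_lv_mass_g

def therapy_suggestion(fmm_percent, cut_point=10.0):
    # High FMM: consider revascularization; low FMM: medical therapy alone.
    return "consider revascularization" if fmm_percent >= cut_point else "medical therapy alone"

fmm = fractional_myocardial_mass(vessel_volume_mm3=450.0, total_lv_mass_g=120.0)
print(round(fmm, 1), therapy_suggestion(fmm))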
In particular, in some embodiments, the system can be configured to analyze and/or utilize as an input non-calcified plaque (NCP), which may cause greater endothelial dysfunction and inability to vasodilate to hyperemia. In some embodiments, the system may utilize one or more arbitrary cutoffs for analyzing NCP, such as binary, trinary, and/or the like for necrotic core, fibrous, and/or fibrofatty. In some embodiments, the system may utilize continuous density measures for NCP. Further, in some embodiments, the system may analyze NCP for dual energy, monochromatic, and/or material basis decomposition. In some embodiments, the system can be configured to analyze and/or identify plaque geometry and/or plaque heterogeneity and/or other radiomics features. In some embodiments, the system can be configured to analyze and/or identify plaque facing the lumen and/or plaque facing epicardial fat. In some embodiments, the system can be configured to derive and/or identify imaging-based information, which can be provided directly to the algorithm for generating the global ischemia index. In some embodiments, the system can be configured to analyze and/or utilize as an input low density NCP, which may cause greater endothelial dysfunction and inability to vasodilate to hyperemia, for example using one or more specific techniques described above in relation to NCP. In some embodiments, the system can be configured to analyze and/or utilize as an input calcified plaque (CP), which may cause more laminar flow, less endothelial dysfunction and less ischemia. In some embodiments, the system may utilize one or more arbitrary cutoffs, such as 1K plaque (plaques >1000 Hounsfield units), and/or continuous density measures for CP. In some embodiments, the system can be configured to analyze and/or utilize as an input the location of plaque. In particular, the system may determine that myocardial facing plaque may be associated with reduced ischemia due to its proximity to myocardium (e.g., myocardial bridging rarely has atherosclerosis). In some embodiments, the system may determine that pericardial facing plaque may be associated with increased ischemia due to its proximity to peri-coronary adipose tissue. In some embodiments, the system may determine that bifurcation and/or trifurcation lesions may be associated with increased ischemia due to disruptions in laminar flow. In some embodiments, visualization of three-dimensional plaques can be generated and/or provided by the system to a user to improve the human observer's understanding of where plaques are in relationship to each other and/or to the myocardium and the pericardium. For example, in this vein, the system may be configured to allow the visualization of all the plaques on a single 2D image. As such, in some embodiments, the system can allow for all of the plaques to be visualized in a single view, with color-coded and/or shadowed labels and/or other labels applied to plaques depending on whether they are in the 2D field of view, or whether they are further away from the 2D field of view. This can be analogous to the maximum intensity projection view, which highlights the lumen that is filled with contrast agent, but applies an intensity projection (maximum, minimum, average, ordinal) to plaques at different distances from the field of view or of different densities. In some embodiments, the system can be configured to visualize plaque using maximum intensity projection (MIP) techniques.
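The projection idea described above can be illustrated with a short Python/NumPy sketch; the synthetic volume, the Hounsfield values, and the choice of projection axis are hypothetical stand-ins for an actual segmented plaque volume and are not part of the embodiments themselves.

# Hypothetical sketch of a maximum-intensity-style projection applied to a segmented
# plaque volume so that all plaques can be inspected on a single 2D image.
import numpy as np

def plaque_projection(volume_hu, mode="max", axis=0):
    # volume_hu: 3D array of Hounsfield units containing segmented plaque voxels.
    if mode == "max":
        return volume_hu.max(axis=axis)      # analogous to MIP of the contrast-filled lumen
    if mode == "min":
        return volume_hu.min(axis=axis)
    return volume_hu.mean(axis=axis)         # "average" projection

volume = np.zeros((8, 16, 16))
volume[2, 4:6, 4:6] = 900.0    # calcified plaque voxels (bright)
volume[6, 10:12, 3:5] = 60.0   # non-calcified plaque voxels (darker)
mip_2d = plaque_projection(volume, mode="max")
print(mip_2d.shape, mip_2d.max())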
In some embodiments, the system can be configured to visualize plaque in 2D, 3D, and/or 4D, for example using MIP techniques and/or other techniques, such as volume rendering techniques (VRT). More specifically, for 4D, in some embodiments, the system can be configured to visualize progression of plaque in terms of time. In some embodiments, the system can be configured to visualize on an image and/or on a video and/or other digital support the lumen and/or the addition of plaque in 2D, 3D, and/or 4D. In some embodiments, the system can be configured to show changes in time or 4D. In some embodiments, the system can be configured to use multiple scans taken from different points in time and/or integrate all or some of the information with therapeutics. In some embodiments, based on the same, the system can be configured to decide on changes in therapy and/or determine prognostic information, for example assessing for therapy success. Another example contributor or cause of ischemia that can be utilized as input and/or analyzed by the system can include fat. In some embodiments, the system can be configured to analyze and/or utilize as an input peri-coronary adipose tissue, which may cause ischemia due to inflammatory properties that cause endothelial dysfunction. In some embodiments, the system can be configured to analyze and/or utilize as an input epicardial adipose tissue, which may be a cause of overall heart inflammation. In some embodiments, the system can be configured to analyze and/or utilize as input epicardial fat and/or radiomics or imaging-based information provided directly to the algorithm, such as for example heterogeneity, density, density change away from the vessel, volume, and/or the like. As described above, in some embodiments, the system can be configured to utilize one or more consequences or sequelae of ischemia as inputs for generating a global ischemia index. An example consequence or sequela of ischemia that can be utilized as input and/or analyzed by the system can be related to the left ventricle. For example, in some embodiments, the system can be configured to analyze the perfusion and/or Hounsfield unit density of the left ventricle, which can be global and/or related to the percentage of fractional myocardial mass. In some embodiments, the system can be configured to analyze the mass of the left ventricle, wherein the greater the mass, the greater the potential mismatch between lumen volume and LV mass, which can be global as well as related to the percentage of fractional myocardial mass. In some embodiments, the system can be configured to analyze the volume of the left ventricle, wherein an increase in the left ventricle volume can be a direct sign of ischemia. In some embodiments, the system can be configured to analyze and/or utilize as input density measurements of the myocardium, which can be absolute and/or relative, for example using a sticker or normalization device. In some embodiments, the system can be configured to analyze and/or use as input regional and/or global changes in densities. In some embodiments, the system can be configured to analyze and/or use as input endo, mid-wall, and/or epicardial changes in densities. In some embodiments, the system can be configured to analyze and/or use as input thickness, presence of fat and/or localization thereof, presence of calcium, heterogeneity, radiomic features, and/or the like.
Another example consequence or sequela of ischemia that can be utilized as input and/or analyzed by the system can be related to the right ventricle. For example, in some embodiments, the system can be configured to analyze the perfusion and/or Hounsfield unit density of the right ventricle, which can be global and/or related to the percentage of fractional myocardial mass. In some embodiments, the system can be configured to analyze the mass of the right ventricle, wherein the greater the mass, the greater the potential mismatch between lumen volume and LV mass, which can be global as well as related to the percentage of fractional myocardial mass. In some embodiments, the system can be configured to analyze the volume of the right ventricle, wherein an increase in the right ventricle volume can be a direct sign of ischemia. Another example consequence or sequela of ischemia that can be utilized as input and/or analyzed by the system can be related to the left atrium. For example, in some embodiments, the system can be configured to analyze the volume of the left atrium, in which an increased left atrium volume can occur in patients who become ischemic and go into heart failure. Another example consequence or sequela of ischemia that can be utilized as input and/or analyzed by the system can be related to the right atrium. For example, in some embodiments, the system can be configured to analyze the volume of the right atrium, in which an increased right atrium volume can occur in patients who become ischemic and go into heart failure. Another example consequence or sequela of ischemia that can be utilized as input and/or analyzed by the system can be related to one or more aortic dimensions. For example, an increased aortic size, as a consequence of long-standing hypertension, may be associated with the end-organ effects of hypertension on the coronary arteries (resulting in more disease) and the LV mass (resulting in more LV mass-coronary lumen volume mismatch). Another example consequence or sequela of ischemia that can be utilized as input and/or analyzed by the system can be related to the pulmonary veins. For example, for patients with volume overload, engorgement of the pulmonary veins may be a significant sign of ischemia. As described above, in some embodiments, the system can be configured to utilize one or more associated factors of ischemia as inputs for generating a global ischemia index. An example associated factor of ischemia that can be utilized as input and/or analyzed by the system can be related to the presence of fatty liver or non-alcoholic steatohepatitis, which is a condition that can be diagnosed by placing regions of interest (ROIs) in the liver to measure Hounsfield unit densities. Another example associated factor of ischemia that can be utilized as input and/or analyzed by the system can be related to emphysema, which is a condition that can be diagnosed by placing regions of interest in the lung to measure Hounsfield unit densities. Another example associated factor of ischemia that can be utilized as input and/or analyzed by the system can be related to osteoporosis, which is a condition that can be diagnosed by placing regions of interest in the spine. Another example associated factor of ischemia that can be utilized as input and/or analyzed by the system can be related to mitral annular calcification, which is a condition that can be diagnosed by identifying calcium (e.g., HU>350, etc.) in the mitral annulus.
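The ROI-based Hounsfield unit measurements described above for associated factors can be illustrated with the following Python sketch; the thresholds (below 40 HU for fatty liver, below -950 HU for emphysema, above 350 HU for mitral annular calcium) are commonly cited example values used here only as assumptions, and the synthetic image is a placeholder for an actual CT slice.

# Hypothetical sketch: mean Hounsfield units inside an ROI, plus example threshold
# checks for associated factors such as fatty liver, emphysema, and mitral annular
# calcification. Thresholds and data are illustrative assumptions.
import numpy as np

def roi_mean_hu(image_hu, center_rc, radius=3):
    r, c = center_rc
    patch = image_hu[r - radius:r + radius + 1, c - radius:c + radius + 1]
    return float(patch.mean())

def flag_findings(liver_hu, lung_hu, annulus_hu):
    return {
        "fatty_liver": liver_hu < 40.0,
        "emphysema": lung_hu < -950.0,
        "mitral_annular_calcification": annulus_hu > 350.0,
    }

image = np.full((64, 64), 55.0)          # synthetic liver-window slice
print(flag_findings(roi_mean_hu(image, (32, 32)), lung_hu=-960.0, annulus_hu=420.0))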
Another example associated factor of ischemia that can be utilized as input and/or analyzed by the system can be related to aortic valve calcification, which is a condition that can be diagnosed by identifying calcium in the aortic valve. Another example associated factor of ischemia that can be utilized as input and/or analyzed by the system can be related to aortic enlargement, which is often seen in hypertension and can reveal an enlargement in the proximal aorta due to long-standing hypertension. Another example associated factor of ischemia that can be utilized as input and/or analyzed by the system can be related to mitral valve calcification, which can be diagnosed by identifying calcium in the mitral valve. As discussed herein, in some embodiments, the system can be configured to utilize one or more inputs or variables for generating a global ischemia index, for example by inputting the like into a regression model or other algorithm. In some embodiments, the system can be configured to use as input one or more radiomics features and/or imaging-based deep learning. In some embodiments, the system can be configured to utilize as input one or more of patient height, weight, sex, ethnicity, body surface, previous medication, genetics, and/or the like. In some embodiments, the system can be configured to analyze and/or utilize as input calcium, separate calcium densities, localization of calcium relative to the lumen, volume of calcium, and/or the like. In some embodiments, the system can be configured to analyze and/or utilize as input contrast vessel attenuation. In particular, in some embodiments, the system can be configured to analyze and/or utilize as input average contrast in the lumen at the beginning of a segment and/or average contrast in the lumen at the end of that segment. In some embodiments, the system can be configured to analyze and/or utilize as input average contrast in the lumen from the beginning of the vessel to the beginning of the distal segment of that vessel, for example because the end can be too small in some instances. In some embodiments, the system can be configured to analyze and/or utilize as input plaque heterogeneity. In particular, in some embodiments, the system can be configured to analyze and/or utilize as input calcified plaque volume versus non-calcified plaque volume. In some embodiments, the system can be configured to analyze and/or utilize as input standard deviation of one or more of the 3 different components of plaque. In some embodiments, the system can be configured to analyze and/or utilize as input one or more vasodilation metrics. In particular, in some embodiments, the system can be configured to analyze and/or utilize as input the highest remodeling index of a plaque. In some embodiments, the system can be configured to analyze and/or utilize as input the highest, average, and/or smallest thickness of plaque, for example for its calcified and/or non-calcified components. In some embodiments, the system can be configured to analyze and/or utilize as input the highest remodeling index and/or lumen area. In some embodiments, the system can be configured to analyze and/or utilize as input the lesion length and/or segment length of plaque. In some embodiments, the system can be configured to analyze and/or utilize as input bifurcation lesion, such as for example the presence or absence thereof.
In some embodiments, the system can be configured to analyze and/or utilize as input coronary dominance, for example left dominance, right dominance, and/or codominance. In particular, in some embodiments, if left dominance, the system can be configured to disregard and/or weight less one or more right coronary metrics. Similarly, if right dominance, the system can be configured to disregard and/or weight less one or more left coronary metrics. In some embodiments, the system can be configured to analyze and/or utilize as input one or more vascularization metrics. In particular, in some embodiments, the system can be configured to analyze and/or utilize as input the volume of the lumen of one or more, some, or all vessels. In some embodiments, the system can be configured to analyze and/or utilize as input the volume of the lumen of one or more secondary vessels, such as for example, non-right coronary artery (non-RCA), left anterior descending artery (LAD) vessel, circumflex (CX) vessel, and/or the like. In some embodiments, the system can be configured to analyze and/or utilize as input the volume of vessel and/or volume of plaque and/or a ratio thereof. In some embodiments, the system can be configured to analyze and/or utilize as input one or more inflammation metrics. In particular, in some embodiments, the system can be configured to analyze and/or utilize as input the average density of one or more pixels outside a lesion, such as for example 5 pixels and/or 3 or 4 pixels of 5, disregarding the first 1 or 2 pixels. In some embodiments, the system can be configured to analyze and/or utilize as input the average density of one or more pixels outside a lesion including the first ⅔ of each vessel that is not a lesion or plaque. In some embodiments, the system can be configured to analyze and/or utilize as input one or more pixels outside a lesion and/or the average of the same pixels on a 3 mm section above the proximal right coronary artery (R1) if there is no plaque in that place. In some embodiments, the system can be configured to analyze and/or utilize as input one or more ratios of any factors and/or variables described herein. As described above, in some embodiments, the system can be configured to utilize one or more machine learning algorithms in identifying, deriving, and/or analyzing one or more inputs for generating the global ischemia index, including for example one or more direct contributors to ischemia, early consequences of ischemia, late consequences of ischemia, associated factors with ischemia, and other test findings in relation to ischemia. In some embodiments, one or more such machine learning algorithms can provide fully automated quantification and/or characterization of such factors. As an example, in some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze inferior vena cava from one or more medical images. Measures of inferior vena cava can be of high importance in patients with right-sided heart failure and tricuspid regurgitation. In addition, in some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the interatrial septum from one or more medical images. Interatrial septum dimensions can be vital for patients undergoing left-sided transcatheter procedures. 
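One of the inflammation metrics described above, the average density of pixels just outside a lesion with the first one or two pixels adjacent to the border disregarded, can be sketched as follows in Python; the radial-profile data structure, the pixel counts, and the example HU values are illustrative assumptions rather than the actual peri-coronary sampling scheme of any embodiment.

# Hypothetical sketch of a peri-lesion density metric: average the HU values of a band
# of pixels just outside a lesion, skipping the first one or two pixels at the border.
def peri_lesion_density(radial_profiles_hu, band=5, skip=2):
    # radial_profiles_hu: per-ray HU values ordered outward from the lesion edge.
    samples = []
    for profile in radial_profiles_hu:
        samples.extend(profile[skip:band])   # e.g., pixels 3-5 of the first 5
    return sum(samples) / len(samples) if samples else float("nan")

rays = [[-20, -35, -60, -70, -75, -80], [-10, -30, -55, -65, -72, -78]]
print(peri_lesion_density(rays))  # mean HU of the peri-coronary fat band outside the lesion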
In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze descending thoracic aorta from one or more medical images. Measures of descending thoracic aorta can be of critical importance in patients with aortic aneurysms, and for population-based screening in long-time smokers. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the coronary sinus from one or more medical images. Coronary sinus dimensions can be vital for patients with heart failure who are undergoing biventricular pacing. In some embodiments, by analyzing the coronary sinus, the system can be configured to derive all or some of the myocardial blood flow, which can be related to coronary volume and myocardial mass. In addition, in some embodiments, the system can be configured to analyze, derive, and/or identify hypertrophic cardiomyopathy (HCM), other hypertrophies, ischemia, and/or the like to derive ischemia and/or microvascular ischemia. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the anterior mitral leaflet from one or more medical images. For a patient being considered for surgical or transcatheter mitral valve repair or replacement, no method currently exists to measure anterior mitral leaflet dimensions. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the left atrial appendage from one or more medical images. Left atrial appendage morphologies are linked to stroke in patients with atrial fibrillation, but no automated characterization solution exists today. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the left atrial free wall mass from one or more medical images. No current method exists to accurately measure left atrial free wall mass, which may be important in patients with atrial fibrillation. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the left ventricular mass from one or more medical images. Certain methods of measuring left ventricular hypertrophy as an adverse consequence of hypertension rely upon echocardiography, which employs a 2D estimated formula that is highly imprecise. 3D imaging by magnetic resonance imaging (MRI) or computed tomography (CT) is much more accurate, but current software tools are time-intensive and imprecise. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the left atrial volume from one or more medical images. Determination of left atrial volume can improve diagnosis and risk stratification in patients with and at risk of atrial fibrillation. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the left ventricular volume from one or more medical images. Left ventricular volume measurements can enable determination of individuals with heart failure or at risk of heart failure. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the left ventricular papillary muscle mass from one or more medical images.
No method currently exists to measure left ventricular papillary muscle mass. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the posterior mitral leaflet from one or more medical images. For patients being considered for surgical or transcatheter mitral valve repair or replacement, no method currently exists to measure posterior mitral leaflet dimensions. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze pulmonary veins from one or more medical images. Measures of pulmonary vein dimensions can be of critical importance in patients with atrial fibrillation, heart failure and mitral regurgitation. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze pulmonary arteries from one or more medical images. Measures of pulmonary artery dimensions can be of critical importance in patients with pulmonary hypertension, heart failure and pulmonary emboli. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the right atrial free wall mass from one or more medical images. No current method exists to accurately measure right atrial free wall mass, which may be important in patients with atrial fibrillation. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the right ventricular mass from one or more medical images. Methods of measuring right ventricular hypertrophy as an adverse consequence of pulmonary hypertension and/or heart failure do not currently exist. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the proximal ascending aorta from one or more medical images. Aortic aneurysms can require highly precise measurements of the aorta, which are more accurate by 3D techniques such as CT and MRI. At present, current algorithms do not allow for highly accurate automated measurements. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the right atrial volume from one or more medical images. Determination of right atrial volume can improve diagnosis and risk stratification in patients with and at risk of atrial fibrillation. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the right ventricular papillary muscle mass from one or more medical images. No method currently exists to measure right ventricular papillary muscle mass. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the right ventricular volume from one or more medical images. Right ventricular volume measurements can enable determination of individuals with heart failure or at risk of heart failure. In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, and/or analyze the superior vena cava from one or more medical images. No reliable method exists to date to measure superior vena cava dimensions, which may be important in patients with tricuspid valve insufficiency and heart failure.
In some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, analyze, segment, and/or quantify one or more cardiac structures from one or more medical images, such as the left and right ventricular volume (LVV, RVV), left and right atrial volume (LAV, RAV), and/or left ventricular myocardial mass (LVM). Further, in some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, analyze, segment, and/or quantify one or more cardiac structures from one or more medical images, such as the proximal ascending and descending aorta (PAA, DA), superior and inferior vena cava (SVC, IVC), pulmonary artery (PA), coronary sinus (CS), right ventricular wall (RVW), and left atrial wall (LAW). In addition, in some embodiments, the system can be configured to utilize one or more machine learning algorithms to identify, derive, analyze, segment, and/or quantify one or more cardiac structures from one or more medical images, such as the left atrial appendage, left atrial wall, coronary sinus, descending aorta, superior vena cava, inferior vena cava, pulmonary artery, right ventricular wall, sinuses of Valsalva, left ventricular volume, left ventricular wall, right ventricular volume, left atrial volume, right atrial volume, and/or proximal ascending aorta. FIG.20Eis a flowchart illustrating an overview of an example embodiment(s) of a method for generating a global ischemia index for a subject and using the same to assist assessment of risk of ischemia for the subject. As illustrated inFIG.20E, in some embodiments, the system can be configured to access one or more medical images of a subject at block202, in any manner and/or in connection with any feature described above in relation to block202. In some embodiments, the system is configured to identify one or more vessels, plaque, and/or fat in the one or more medical images at block2002. For example, in some embodiments, the system can be configured to use one or more AI and/or ML algorithms and/or other image processing techniques to identify one or more vessels, plaque, and/or fat. In some embodiments, the system at block2004is configured to analyze and/or access one or more contributors to ischemia of the subject, including any contributors to ischemia described herein, for example based on the accessed one or more medical images and/or other medical data. In some embodiments, the system at block2006is configured to analyze and/or access one or more consequences of ischemia of the subject, including any consequences of ischemia described herein, including early and/or late consequences, for example based on the accessed one or more medical images and/or other medical data. In some embodiments, the system at block2008is configured to analyze and/or access one or more associated factors to ischemia of the subject, including any associated factors to ischemia described herein, for example based on the accessed one or more medical images and/or other medical data. In some embodiments, the system at block2010is configured to analyze and/or access one or more results from other testing, such as for example invasive testing, non-invasive testing, image-based testing, non-image based testing, and/or the like. 
In some embodiments, the system at block2012can be configured to generate a global ischemia index based on one or more parameters, such as for example one or more contributors to ischemia, one or more consequences of ischemia, one or more associated factors to ischemia, one or more other testing results, and/or the like. In some embodiments, the system is configured to generate a global ischemia index for the subject by generating a weighted measure of one or more parameters. For example, in some embodiments, the system is configured to weight one or more parameters differently and/or equally. In some embodiments, the system can be configured to weight one or more parameters logarithmically, algebraically, and/or utilizing another mathematical transform. In some embodiments, the system is configured to generate a weighted measure using only some or all of the parameters. In some embodiments, at block2014, the system is configured to verify the generated global ischemia index. For example, in some embodiments, the system is configured to verify the generated global ischemia index by comparison to one or more blood flow parameters such as those discussed herein. In some embodiments, at block2016, the system is configured to generate user assistance to help a user determine an assessment of risk of ischemia for the subject based on the generated global ischemia index, for example graphically through a user interface and/or otherwise.
CAD Score(s)
Some embodiments of the systems, devices, and methods described herein are configured to generate one or more coronary artery disease (CAD) scores representative of a risk of CAD for a particular subject. In some embodiments, the risk score can be generated by analyzing and/or combining one or more aspects or characteristics relating to plaque and/or cardiovascular features, such as for example plaque volume, plaque composition, vascular remodeling, high-risk plaque, lumen volume, plaque location (proximal v. middle v. distal), plaque location (myocardial v. pericardial facing), plaque location (at bifurcation or trifurcation v. not at bifurcation or trifurcation), plaque location (in main vessel v. branch vessel), stenosis severity, percentage coronary blood volume, percentage fractional myocardial mass, percentile for age and/or gender, constant or other correction factor to allow for control of within-person, within-vessel, inter-plaque, plaque-myocardial relationships, and/or the like. In some embodiments, a CAD risk score(s) can be generated based on automatic and/or dynamic analysis of one or more medical images, such as for example a CT scan or an image obtained from any other modality mentioned herein. In some embodiments, data obtained from analyzing one or more medical images of a patient can be normalized in generating a CAD risk score(s) for that patient. In some embodiments, the systems, devices, and methods described herein can be configured to generate a CAD risk score(s) for different vessels, vascular territories, and/or patients. In some embodiments, the systems, devices, and methods described herein can be configured to generate a graphical visualization of risk of CAD of a patient based on a vessel basis, vascular territory basis, and/or patient basis. In some embodiments, based on the generated CAD risk score(s), the systems, methods, and devices described herein can be configured to generate one or more recommended treatments for a patient.
In some embodiments, the system can be configured to utilize a normalization device, such as those described herein, to account for differences in scan results (such as for example density values, etc.) between different scanners, scan parameters, and/or the like. In some embodiments, the systems, devices, and methods described herein can be configured to assess patients with suspected coronary artery disease (CAD) by use of one or more of a myriad of different diagnostic and prognostic tools. In particular, in some embodiments, the systems, devices, and methods described herein can be configured to use a risk score for cardiovascular care for patients without known CAD. As a non-limiting example, in some embodiments, the system can be configured to generate an Atherosclerotic Cardiovascular Disease (ASCVD) risk score, which can be based upon a combination of age, gender, race, blood pressure, cholesterol (total, HDL and LDL), diabetes status, tobacco use, hypertension, and/or medical therapy (such as for example, statin and aspirin). As another non-limiting example, in some embodiments, the system can be configured to generate a Coronary Artery Calcium Score (CACS), which can be based upon a non-contrast CT scan wherein coronary arteries are visualized for the presence of calcified plaque. In some embodiments, an Agatston (e.g., a measure of calcium in a coronary CT scan) score may be used to determine the CACS. In particular, in some embodiments, a CACS score can be calculated by: Agatston score=surface area×Hounsfield unit density (with brighter, higher-density plaques receiving a higher score). However, in some embodiments, there may be certain limitations with a CACS score. For example, in some embodiments, because surface area to volume ratio decreases as a function of the overall volume, more spherical plaques can be incorrectly weighted as less contributory to the Agatston score. In addition, in some embodiments, because Hounsfield unit density is inversely proportional to risk of major adverse cardiac events (MACE), weighting the HU density higher can score a lower risk plaque as having a higher score. Moreover, in some embodiments, 2.5-3 mm thick CT “slices” can miss smaller calcified plaques, and/or lack of beta blocker use can result in significant motion artifact, which can artificially increase the calcium score. In some embodiments, for symptomatic patients undergoing coronary CT angiography, the system can be configured to generate and/or utilize one or more additional risk scores, such as a Segment Stenosis Score, Segment Involvement Score, Segments-at-Risk Score, Duke Prognostic Index, CTA Score, and/or the like. More specifically, in some embodiments, a Segment Stenosis Score weights specific stenoses (0=0%, 1=1-24%, 2=25-49%, 3=50-69%, 4=≥70%) across all 18 coronary segments, resulting in a total possible score of 72. In some embodiments, a Segment Involvement Score counts the number of plaques located in the 18 segments and has a total possible score of 18. In some embodiments, a Segments-at-Risk Score reflects the potential susceptibility of all distal coronary segments subtended by severe proximal plaque. Thus, in some embodiments, all segments subtended by severe proximal plaque can be scored as severe as well, then summated over 18 segments to create a segments-at-risk score.
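The Agatston-style calculation referenced above can be sketched in Python as follows; the lesion areas and peak HU values are illustrative, and the per-lesion density weighting bins (1 through 4 for 130-199, 200-299, 300-399, and 400+ HU) reflect the commonly used Agatston convention rather than a specific embodiment of this disclosure.

# Hypothetical sketch: each calcified lesion contributes (area in mm^2) x (a density
# weight based on its peak Hounsfield unit), summed over all qualifying lesions.
def agatston_density_weight(peak_hu):
    if peak_hu >= 400: return 4
    if peak_hu >= 300: return 3
    if peak_hu >= 200: return 2
    if peak_hu >= 130: return 1
    return 0

def agatston_score(lesions):
    # lesions: list of (area_mm2, peak_hu) for calcified plaques at or above 130 HU
    return sum(area * agatston_density_weight(hu) for area, hu in lesions)

print(agatston_score([(4.0, 310), (2.5, 450), (1.2, 150)]))  # 4*3 + 2.5*4 + 1.2*1 = 23.2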
For example, if the proximal portion of the LCx is considered severely obstructive, the segments-at-risk score for the LCx can be proximal circumflex (=3)+mid circumflex (=3)+distal circumflex (=3)+proximal obtuse marginal (=3)+mid obtuse marginal (=3)+distal obtuse marginal (=3), for a total circumflex segments-at-risk score of 18. In this individual, if the LAD exhibits mild plaque in the proximal portion (=1) and moderate plaque in the midportion (=2), the LAD segments-at-risk score can be 3. If the RCA exhibits moderate plaque in the proximal portion (=2), the RCA segments-at-risk score can be 2. Thus, for this individual, the total segments-at-risk score can be 23 out of a possible 48. In some embodiments, a Duke Prognostic Index can be a reflection of the coronary artery plaque severity considering plaque location. In some embodiments, a modified Duke CAD index can consider overall plaque extent relating it to coexistent plaque in the left main or proximal LAD. In some embodiments, using this scoring system, individuals can be categorized into six distinct groups: no evident coronary artery plaque; ≥2 mild plaques with proximal plaque in any artery or 1 moderate plaque in any artery; 2 moderate plaques or 1 severe plaque in any artery; 3 moderate coronary artery plaques or 2 severe coronary artery plaques or isolated severe plaque in the proximal LAD; 3 severe coronary artery plaques or 2 severe coronary artery plaques with proximal LAD plaque; moderate or severe left main plaque. In some embodiments, a CT angiography (CTA) Score can be calculated by determining CAD in each segment, such as for example proximal RCA, mid RCA, distal RCA, R-PDA, R-PLB, left main, proximal LAD, mid LAD, distal LAD, D1, D2, proximal LCX, distal LCX, IM/AL, OM, L-PL, L-PDA, and/or the like. In particular, for each segment, when plaque is absent, the system can be configured to assign a score of 0, and when plaque is present, the system can be configured to assign a score of 1.1, 1.2 or 1.3 according to plaque composition (such as calcified, non-calcified and mixed plaque, respectively). In some embodiments, these scores can be multiplied by a weight factor for the location of the segment in the coronary artery tree (for example, 0.5-6 according to vessel, proximal location and system dominance). In some embodiments, these scores can also be multiplied by a weight factor for stenosis severity (for example, 1.4 for ≥50% stenosis and 1.0 for stenosis <50%). In some embodiments, the final score can be calculated by addition of the individual segment scores. In some embodiments, the systems, devices, and methods described herein can be configured to utilize and/or perform improved quantification and/or characterization of many parameters on CT angiography that were previously very difficult to measure. For example, in some embodiments, the system can be configured to determine stenosis severity leveraging a proximal/distal reference and report on a continuous scale, for example from 0-100%, by diameter, area, and/or volumetric stenosis.
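The per-segment CTA score calculation described above can be illustrated with a short Python sketch; the composition scores (1.1, 1.2, 1.3) and stenosis weights (1.4 for 50% or greater, 1.0 otherwise) follow the description in the text, while the location weights and example segments are illustrative assumptions.

# Hypothetical sketch of the per-segment CTA score: composition score x location weight
# x stenosis weight, summed over segments. Location weights and inputs are assumed.
COMPOSITION_SCORE = {"none": 0.0, "calcified": 1.1, "non_calcified": 1.2, "mixed": 1.3}

def segment_score(composition, location_weight, stenosis_pct):
    stenosis_weight = 1.4 if stenosis_pct >= 50 else 1.0
    return COMPOSITION_SCORE[composition] * location_weight * stenosis_weight

def cta_score(segments):
    return sum(segment_score(*seg) for seg in segments)

segments = [
    ("mixed", 6.0, 60),          # e.g., left main, heavily weighted location
    ("calcified", 1.5, 30),      # e.g., mid RCA
    ("none", 1.0, 0),            # segment without plaque contributes 0
]
print(round(cta_score(segments), 2))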
In some embodiments, the system can be configured to determine total atheroma burden, reported in volumes or as a percent of the overall vessel volume (PAV), including for example non-calcified plaque volume (for example, as a continuous variable, ordinal variable or single variable), calcified plaque volume (for example, as a continuous variable, ordinal variable or single variable), and/or mixed plaque volume (for example, as a continuous variable, ordinal variable or single variable). In some embodiments, the system can be configured to determine low attenuation plaque, for example reported either as a yes/no binary or a continuous variable based upon HU density. In some embodiments, the system can be configured to determine vascular remodeling, for example reported as ordinal negative, intermediate or positive (for example, <0.90, 0.90-1.10, or >1.10) or continuous. In some embodiments, the system can be configured to determine and/or analyze various locations of plaque, such as for example proximal/mid/distal, myocardial facing vs. pericardial facing, at bifurcation v. not at bifurcation, in main vessel vs. branch vessel, and/or the like. In some embodiments, the system can be configured to determine percentage coronary blood volume, which can report out the volume of the lumen (and downstream subtended vessels in some embodiments) as a function of the entire coronary vessel volume (for example, either measured or calculated as hypothetically normal). In some embodiments, the system can be configured to determine percentage fractional myocardial mass, which can relate the coronary lumen or vessel volume to the percentage downstream subtended myocardial mass. In some embodiments, the system can be configured to determine the relationship of all or some of the above to each other, for example on a plaque-plaque basis to influence vessel behavior/risk or on a vessel-vessel basis to influence patient behavior/risk. In some embodiments, the system can be configured to utilize one or more comparisons of the same, for example to normal age- and/or gender-based reference values. In some embodiments, one or more of the metrics described herein can be calculated on a per-segment basis. In some embodiments, one or more of the metrics calculated on a per-segment basis can then be summed across a vessel, vascular territory, and/or patient level. In some embodiments, the system can be configured to visualize one or more of such metrics, whether on a per-segment basis and/or on a vessel, vascular territory, and/or patient basis, on a geographical scale. For example, in some embodiments, the system can be configured to visualize one or more such metrics on a graphical scale using 3D and/or 4D histograms. Further, in some embodiments, cardiac CT angiography enables quantitative assessment of a myriad of cardiovascular structures beyond the coronary arteries, which may both contribute to coronary artery disease as well as other cardiovascular diseases.
For example, these measurements can include those of one or more of: (1) left ventricle—e.g., left ventricular mass, left ventricular volume, left ventricle Hounsfield unit density as a surrogate marker of ventricular perfusion; (2) right ventricle—e.g., right ventricular mass, right ventricular volume; (3) left atrium—e.g., volume, size, geometry; (4) right atrium—e.g., volume, size, geometry; (5) left atrial appendage—e.g., morphology (e.g., chicken wing, windsock, etc.), volume, angle, etc.; (6) pulmonary vein—e.g., size, shape, angle of takeoff from the left atrium, etc.; (7) mitral valve—e.g., volume, thickness, shape, length, calcification, anatomic orifice area, etc.; (8) aortic valve—e.g., volume, thickness, shape, length, calcification, anatomic orifice area, etc.; (9) tricuspid valve—e.g., volume, thickness, shape, length, calcification, anatomic orifice area, etc.; (10) pulmonic valve—e.g., volume, thickness, shape, length, calcification, anatomic orifice area, etc.; (11) pericardial and pericoronary fat—e.g., volume, attenuation, etc.; (12) epicardial fat—e.g., volume, attenuation, etc.; (13) pericardium—e.g., thickness, mass, volume; and/or (14) aorta—e.g., dimensions, calcifications, atheroma. Given the multitude of measurements that can help characterize cardiovascular risk, certain existing scores can be limited in their holistic assessment of the patient and may not account for many key parameters that may influence patient outcome. For example, certain existing scores may not take into account the entirety of data that is needed to effectively prognosticate risk. In addition, the data that will precisely predict risk can be multi-dimensional, and certain scores do not consider the relationship of plaques to one another, or vessels to one another, or plaques-vessels-myocardium relationships or all of those relationships to the patient-level risk. Also, in certain existing scores, the data may categorize plaques, vessels and patients, thus losing the granularity of pixel-wise data that are summarized in these scores. In addition, in certain existing scores, the data may not reflect the normal age- and gender-based reference values as a benchmark for determining risk. Moreover, certain scores may not consider a number of additional items that can be gleaned from quantitative assessment of coronary artery disease, vascular morphology and/or downstream ventricular mass. Further, within-person relationships of plaques, segments, vessels, vascular territories may not be considered within certain risk scores. Furthermore, no risk score to date that utilizes imaging normalizes these risks to a standard that accounts for differences in scanner make/model, contrast type, contrast injection rate, heart rate/cardiac output, patient characteristics, contrast-to-noise ratio, signal-to-noise ratio, and/or image acquisition parameters (for example, single vs. dual vs. spectral energy imaging; retrospective helical vs. prospective axial vs. fast-pitch helical; whole-heart imaging versus non-whole-heart [i.e., non-volumetric] imaging; etc.). In some embodiments described herein, the systems, methods, and devices overcome such technical shortcomings.
In particular, in some embodiments, the systems, devices, and methods described herein can be configured to generate and/or utilize a novel CAD risk score that addresses the aforementioned limitations by considering one or more of: (1) total atheroma burden, normalized for density, such as absolute density or Hounsfield unit (HU) density (e.g., can be categorized as total volume or relative volume, i.e., plaque volume/vessel volume×100%); (2) plaque composition by density or HU density (e.g., can be categorized continuously, ordinally or binarily); (3) low attenuation plaque (e.g., can be reported as yes/no binary or continuous variable based upon density or HU density); (4) vascular remodeling (e.g., can be reported as ordinal negative, intermediate or positive (<0.90, 0.90-1.10, or >1.10) or continuous); (5) plaque location—proximal v. mid v. distal; (6) plaque location—which vessel or vascular territory; (7) plaque location—myocardial facing v. pericardial facing; (8) plaque location—at bifurcation v. not at bifurcation; (9) plaque location—in main vessel v. branch vessel; (10) stenosis severity; (11) percentage coronary blood volume (e.g., this metric can report out the volume of the lumen (and downstream subtended vessels) as a function of the entire coronary vessel volume (e.g., either measured or calculated as hypothetically normal)); (12) percentage fractional myocardial mass (e.g., this metric can relate the coronary lumen or vessel volume to the percentage downstream subtended myocardial mass); (13) consideration of normal age- and/or gender-based reference values; and/or (14) statistical relationships of all or some of the above to each other (e.g., on a plaque-plaque basis to influence vessel behavior/risk or on a vessel-vessel basis to influence patient behavior/risk). In some embodiments, the system can be configured to determine a baseline clinical assessment(s), including for such factors as one or more of: (1) age; (2) gender; (3) diabetes (e.g., presence, duration, insulin-dependence, history of diabetic ketoacidosis, end-organ complications, which medications, how many medications, and/or the like); (4) hypertension (e.g., presence, duration, severity, end-organ damage, left ventricular hypertrophy, number of medications, which medications, history of hypertensive urgency or emergency, and/or the like); (5) dyslipidemia (e.g., including low-density lipoprotein (LDL), triglycerides, total cholesterol, lipoprotein(a) Lp(a), apolipoprotein B (ApoB), and/or the like); (6) tobacco use (e.g., including what type, for what duration, how much use, and/or the like); (7) family history (e.g., including which relative, at what age, what type of event, and/or the like); (8) peripheral arterial disease (e.g., including what type, duration, severity, end-organ damage, and/or the like); (9) cerebrovascular disease (e.g., including what type, duration, severity, end-organ damage, and/or the like); (10) obesity (e.g., including how obese, how long, is it associated with other metabolic derangements, such as hypertriglyceridemia, centripetal obesity, diabetes, and/or the like); (11) physical activity (e.g., including what type, frequency, duration, exertional level, and/or the like); and/or (12) psychosocial state (e.g., including depression, anxiety, stress, sleep, and/or the like). In some embodiments, a CAD risk score is calculated for each segment, such as for example for segment 1, segment 2, or for some or all segments.
In some embodiments, the score is calculated by combining (e.g., by multiplying or applying any other mathematical transform or generating a weighted measure of) one or more of: (1) plaque volume (e.g., absolute volume such as in mm3 or PAV; may be weighted); (2) plaque composition (e.g., NCP/CP, Ordinal NCP/Ordinal CP; Continuous; may be weighted); (3) vascular remodeling (e.g., Positive/Intermediate/Negative; Continuous; may be weighted); (4) high-risk plaques (e.g., positive remodeling+low attenuation plaque; may be weighted); (5) lumen volume (e.g., may be absolute volume such as in mm3 or relative to vessel volume or relative to hypothetical vessel volume; may be weighted); (6) location—proximal/mid/distal (may be weighted); (7) location—myocardial vs. pericardial facing (may be weighted); (8) location—at bifurcation/trifurcation vs. not at bifurcation/trifurcation (may be weighted); (9) location—in main vessel vs. branch vessel (may be weighted); (10) stenosis severity (e.g., ><70%, < >50%, 1-24, 25-49, 50-69, >70%; 0, 1-49, 50-69, >70%; continuous; may use diameter, area or volume; may be weighted); (11) percentage Coronary Blood Volume (may be weighted); (12) percentage fractional myocardial mass (e.g., may include total vessel volume-to-LV mass ratio; lumen volume-to-LV mass ratio; may be weighted); (13) percentile for age- and gender; (14) constant/correction factor (e.g., to allow for control of within-person, within-vessel, inter-plaque, and/or plaque-myocardial relationships). As a non-limiting example, if Segment 1 has no plaque, then it can be weighted as 0 in some embodiments. In some embodiments, to determine risk (which can be defined as risk of future myocardial infarction, major adverse cardiac events, ischemia, rapid progression, insufficient control on medical therapy, progression to angina, and/or progression to need of target vessel revascularization), all or some of the segments are added up on a per-vessel, per-vascular territory and per-patient basis. In some embodiments, by using plots, the system can be configured to visualize and/or quantify risk based on a vessel basis, vascular territory basis, and patient-basis. In some embodiments, the score can be normalized in a patient- and scan-specific manner by considering items such as for example: (1) patient body mass index; (2) patient thorax density; (3) scanner make/model; (4) contrast density along the Z-axis and along vessels and/or cardiovascular structures; (5) contrast-to-noise ratio; (6) signal-to-noise ratio; (7) method of ECG gating (e.g., retrospective helical, prospective axial, fast-pitch helical); (8) energy acquisition (e.g., single, dual, spectral, photon counting); (9) heart rate; (10) use of pre-CT medications that may influence cardiovascular structures (e.g., nitrates, beta blockers, anxiolytics); (11) mA; and/or (12) kvp. In some embodiments, without normalization, cardiovascular structures (coronary arteries and beyond) may have markedly different Hounsfield units for the same structure (e.g., if 100 vs. 120 kvp is used, a single coronary plaque may exhibit very different Hounsfield units). Thus, in some embodiments, this “normalization” step is needed, and can be performed based upon a database of previously acquired images and/or can be performed prospectively using an external normalization device, such as those described herein. In some embodiments, the CAD risk score can be communicated in several ways by the system to a user. 
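A minimal Python sketch of the per-segment combination and per-vessel summation described above is given below; the factor names, the already-weighted numeric values, and the simple multiplicative form are illustrative assumptions rather than the exact weighting scheme of any embodiment.

# Hypothetical sketch: combine weighted factors multiplicatively per segment, then sum
# segment scores on a per-vessel (or per-territory, per-patient) basis. A segment with
# no plaque is weighted as 0, as in the non-limiting example above.
from math import prod

def segment_cad_score(factors):
    # factors: dict of already-weighted, non-zero terms for a segment.
    if factors.get("plaque_volume", 0) == 0:
        return 0.0
    return prod(factors.values())

def vessel_cad_score(segment_factor_list):
    return sum(segment_cad_score(f) for f in segment_factor_list)

lad_segments = [
    {"plaque_volume": 120.0, "composition": 1.2, "remodeling": 1.1,
     "stenosis": 1.4, "location": 1.5, "pct_fmm": 1.2},
    {"plaque_volume": 0},                      # no plaque: weighted as 0
]
print(round(vessel_cad_score(lad_segments), 1))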
For example, in some embodiments, a generated CAD risk score can be normalized to a scale, such as a 100 point scale in which 90-100 can refer to excellent prognosis, 80-90 for good prognosis, 70-80 for satisfactory prognosis, 60-70 for below average prognosis, <60 for poor prognosis, and/or the like. In some embodiments, the system can be configured to generate and/or report to a user based on the CAD risk score(s) vascular age vs. biological age of the subject. In some embodiments, the system can be configured to characterize risk of CAD of a subject as one or more of normal, mild, moderate, and/or severe. In some embodiments, the system can be configured to generate one or more color heat maps based on a generated CAD risk score, such as red, yellow, green, for example in ordinal or continuous display. In some embodiments, the system can be configured to characterize risk of CAD for a subject as high risk vs. non-high-risk, and/or the like. As a non-limiting example, in some embodiments, the generated CAD risk score for Lesion 1 can be calculated as (Vol×Composition (HU)×RI×HRP×Lumen Volume×Location×Stenosis %×% CBV×% FMM×Age-/Gender Normal Value %×Correction Constant)×Correction factor for scan- and patient-specific parameters×Normalization factor to communicate severity of findings. Similarly, in some embodiments, the generated CAD risk score for Lesion 2 can be calculated as (Vol×Composition (HU)×RI×HRP×Lumen Volume×Location×Stenosis %×% CBV×% FMM×Age-/Gender Normal Value %×Correction Constant)×Correction factor for scan- and patient-specific parameters×Normalization factor to communicate severity of findings. In some embodiments, the generated CAD risk score for Lesion 3 can be calculated as (Vol×Composition (HU)×RI×HRP×Lumen Volume×Location×Stenosis %×% CBV×% FMM×Age-/Gender Normal Value %×Correction Constant)×Correction factor for scan- and patient-specific parameters×Normalization factor to communicate severity of findings. In some embodiments, the generated CAD risk score for Lesion 4 can be calculated as (Vol×Composition (HU)×RI×HRP×Lumen Volume×Location×Stenosis %×% CBV×% FMM×Age-/Gender Normal Value %×Correction Constant)×Correction factor for scan- and patient-specific parameters×Normalization factor to communicate severity of findings. In some embodiments, a CAD risk score can similarly be generated for any other lesions. In some embodiments, the CAD risk score can be adapted to other disease states within the cardiovascular system, including for example: (1) coronary artery disease and its downstream risk (e.g., myocardial infarction, acute coronary syndromes, ischemia, rapid progression, progression despite medical therapy, progression to angina, progression to need for target vessel revascularization, and/or the like); (2) heart failure; (3) atrial fibrillation; (4) left ventricular hypertrophy and hypertension; (5) aortic aneurysm and/or dissection; (6) valvular regurgitation or stenosis; (7) sudden coronary artery dissection, and/or the like. FIG.21is a flowchart illustrating an overview of an example embodiment(s) of a method for generating a coronary artery disease (CAD) Score(s) for a subject and using the same to assist assessment of risk of CAD for the subject. As illustrated inFIG.21, in some embodiments, the system is configured to conduct a baseline clinical assessment of a subject at block2102.
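A minimal sketch of mapping a score normalized to the 100-point reporting scale described above onto the stated prognosis bands (the cut points are those given in the text; the function name is illustrative):

```python
def prognosis_band(normalized_score: float) -> str:
    """Map a CAD risk score on a 100-point scale to the reporting bands described above."""
    if normalized_score >= 90:
        return "excellent prognosis"
    if normalized_score >= 80:
        return "good prognosis"
    if normalized_score >= 70:
        return "satisfactory prognosis"
    if normalized_score >= 60:
        return "below average prognosis"
    return "poor prognosis"
```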
In particular, in some embodiments, the system can be configured to take into account one or more clinical assessment factors associated with the subject, such as for example age, gender, diabetes, hypertension, dyslipidemia, tobacco use, family history, peripheral arterial disease, cerebrovascular disease, obesity, physical activity, psychosocial state, and/or any details of the foregoing described herein. In some embodiments, one or more baseline clinical assessment factors can be accessed by the system from a database and/or derived from non-image-based and/or image-based data. In some embodiments, at block202, the system can be configured to access one or more medical images of the subject, in any manner and/or in connection with any feature described above in relation to block202. In some embodiments, the system is configured to identify one or more segments, vessels, plaque, and/or fat in the one or more medical images at block2104. For example, in some embodiments, the system can be configured to use one or more AI and/or ML algorithms and/or other image processing techniques to identify one or more segments, vessels, plaque, and/or fat. In some embodiments, the system at block2106is configured to analyze and/or access one or more plaque parameters. For example, in some embodiments, one or more plaque parameters can include plaque volume, plaque composition, plaque attenuation, plaque location, and/or the like. In particular, in some embodiments, plaque volume can be based on absolute volume and/or PAV. In some embodiments, plaque composition can be determined by the system based on density of one or more regions of plaque in a medical image, such as absolute density and/or Hounsfield unit density. In some embodiments, the system can be configured to categorize plaque composition binarily, for example as calcified or non-calcified plaque, and/or continuously based on calcification levels of plaque. In some embodiments, plaque attenuation can similarly be categorized binarily by the system, for example as high attenuation or low attenuation based on density, or continuously based on attenuation levels of plaque. In some embodiments, plaque location can be categorized by the system as one or more of proximal, mid, or distal along a coronary artery vessel. In some embodiments, the system can analyze plaque location based on the vessel in which the plaque is located. In some embodiments, the system can be configured to categorize plaque location based on whether it is myocardial facing, pericardial facing, located at a bifurcation, located at a trifurcation, not located at a bifurcation, and/or not located at a trifurcation. In some embodiments, the system can be configured to analyze plaque location based on whether it is in a main vessel or in a branch vessel. In some embodiments, the system at block2108is configured to analyze and/or access one or more vessel parameters, such as for example stenosis severity, lumen volume, percentage of coronary blood volume, percentage of fractional myocardial mass, and/or the like. In some embodiments, the system is configured to categorize or determine stenosis severity based on one or more predetermined ranges of percentage stenosis, for example based on diameter, area, and/or volume. In some embodiments, the system is configured to determine lumen volume based on absolute volume, volume relative to a vessel volume, volume relative to a hypothetical volume, and/or the like.
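To illustrate the block2106-style plaque parameter categorizations described above, a short sketch follows; the 30 HU low-attenuation cut point is taken from the FIG.22A discussion later in this document, while the 350 HU calcified/non-calcified cut point and the tercile location rule are assumptions for illustration only.

```python
def attenuation_category(density_hu: float, low_attenuation_cutoff_hu: float = 30.0) -> str:
    """Binary attenuation categorization based on density."""
    return "low attenuation" if density_hu < low_attenuation_cutoff_hu else "high attenuation"

def composition_binary(density_hu: float, calcified_cutoff_hu: float = 350.0) -> str:
    """Binary composition categorization (the cutoff here is an assumed example value)."""
    return "calcified" if density_hu >= calcified_cutoff_hu else "non-calcified"

def longitudinal_location(distance_from_ostium_mm: float, vessel_length_mm: float) -> str:
    """Proximal / mid / distal by fractional position along the vessel (tercile rule assumed)."""
    fraction = distance_from_ostium_mm / vessel_length_mm
    if fraction < 1.0 / 3.0:
        return "proximal"
    if fraction < 2.0 / 3.0:
        return "mid"
    return "distal"
```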
In some embodiments, the system is configured to determine percentage of coronary blood volume based on determining a volume of lumen as a function of an entire coronary vessel volume. In some embodiments, the system is configured to determine percentage of fractional myocardial mass as a ratio of total vessel volume to left ventricular mass, a ratio of lumen volume to left ventricular mass, and/or the like. In some embodiments, the system at block2110is configured to analyze and/or access one or more clinical parameters, such as for example percentile condition for age, percentile condition for gender of the subject, and/or any other clinical parameter described herein. In some embodiments, the system at block2112is configured to generate a weighted measure of one or more parameters, such as for example one or more plaque parameters, one or more vessel parameters, and/or one or more clinical parameters. In some embodiments, the system is configured to generate a weighted measure of one or more parameters for each segment. In some embodiments, the system can be configured to generate the weighted measure logarithmically, algebraically, and/or utilizing another mathematical transform. In some embodiments, the system can be configured to generate the weighted measure by applying a correction factor or constant, for example to allow for control of within-person, within-vessel, inter-plaque, and/or plaque-myocardial relationships. In some embodiments, the system at block2114is configured to generate one or more CAD risk scores for the subject. For example, in some embodiments, the system can be configured to generate a CAD risk score on a per-vessel, per-vascular territory, and/or per-subject basis. In some embodiments, the system is configured to generate one or more CAD risk scores of the subject by combining the generated weighted measure of one or more parameters. In some embodiments, the system at block2116can be configured to normalize the generated one or more CAD scores. For example, in some embodiments, the system can be configured to normalize the generated one or more CAD scores to account for differences due to the subject, scanner, and/or scan parameters, including those described herein. In some embodiments, the system at block2118can be configured to generate a graphical plot of the generated one or more per-vessel, per-vascular territory, or per-subject CAD risk scores for visualizing and quantifying risk of CAD for the subject. For example, in some embodiments, the system can be configured to generate a graphical plot of one or more CAD risk scores on a per-vessel, per-vascular territory, and/or per-subject basis. In some embodiments, the graphical plot can include a 2D, 3D, or 4D representation, such as for example a histogram. In some embodiments, the system at block2120can be configured to assist a user to generate an assessment of risk of CAD for the subject based on the analysis. For example, in some embodiments, the system can be configured to generate a scaled CAD risk score for the subject. In some embodiments, the system can be configured to determine a vascular age for the subject. In some embodiments, the system can be configured to categorize risk of CAD for the subject, for example as normal, mild, moderate, or severe. In some embodiments, the system can be configured to generate one or more colored heat maps. In some embodiments, the system can be configured to categorize risk of CAD for the subject as high risk or low risk.
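Similarly, for the block2108 vessel parameters described above, a minimal sketch using one of the ordinal stenosis binnings mentioned earlier (0, 1-24, 25-49, 50-69, >=70%) and the volume ratios described in this section; the function names are illustrative.

```python
def stenosis_category(stenosis_percent: float) -> str:
    """Bin percent stenosis into one of the ordinal ranges mentioned above."""
    if stenosis_percent <= 0:
        return "0"
    if stenosis_percent < 25:
        return "1-24"
    if stenosis_percent < 50:
        return "25-49"
    if stenosis_percent < 70:
        return "50-69"
    return ">=70"

def percent_coronary_blood_volume(lumen_volume_mm3: float, coronary_vessel_volume_mm3: float) -> float:
    """Lumen (and downstream subtended) volume as a percentage of total coronary vessel volume."""
    return 100.0 * lumen_volume_mm3 / coronary_vessel_volume_mm3

def vessel_volume_to_lv_mass_ratio(total_vessel_volume_mm3: float, lv_mass_g: float) -> float:
    """One fractional-myocardial-mass style ratio mentioned above (vessel volume to LV mass)."""
    return total_vessel_volume_mm3 / lv_mass_g
```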
Treat to the Image
Some embodiments of the systems, devices, and methods described herein are configured to track progression of a disease, such as a coronary artery disease (CAD), based on image analysis and use the results of such tracking to determine treatment for a patient. In other words, in some embodiments, the systems, methods, and devices described herein are configured to treat a patient or subject to the image. In particular, in some embodiments, the system can be configured to track progression of a disease in response to a medical treatment by analyzing one or more medical images over time and use the same to determine whether the medical treatment is effective or not. For example, in some embodiments, if the prior medical treatment is determined to be effective based on tracking of disease progression based on image analysis, the system can be configured to propose continued use of the same treatment. On the other hand, in some embodiments, if the prior medical treatment is determined to be neutral or non-effective based on tracking of disease progression based on image analysis, the system can be configured to propose a modification of the prior treatment and/or a new treatment for the subject. In some embodiments, the treatment can include medication, lifestyle changes or actions, and/or revascularization procedures. In particular, some embodiments of the systems, devices, and methods described herein are configured to determine one or more of the progression, regression or stabilization, and/or destabilization of coronary artery disease or other vascular disease over time in a manner that will reduce adverse coronary events. For example, in some embodiments, the systems, devices, and methods described herein are configured to provide medical analysis and/or treatment based on plaque attenuation tracking. In some embodiments, the systems, devices, and methods described herein can be configured to utilize a computer system and/or an artificial intelligence platform to track the attenuation of plaque, wherein an automatically detected transformation from low attenuation plaque to high attenuation plaque on a medical image, rather than regression of plaque, can be used as the main basis for generating a plaque attenuation score or status, which can be representative of the rate of progression and/or rate of increased/decreased risk of coronary disease. As such, in some embodiments, the systems, devices, and methods described herein can be configured to provide response assessment of medical therapy, lifestyle interventions, and/or coronary revascularization along the life course of an individual. In some embodiments, the system can be configured to utilize coronary computed tomography angiography (CCTA). Generally speaking, coronary computed tomography angiography (CCTA) can enable evaluation of presence, extent, severity, location and/or type of atherosclerosis in the coronary and other arteries. These factors can change with medical therapy and lifestyle modifications and coronary interventions. As a non-limiting example, in some cases, Omega-3 fatty acids, after 38.6 months, can lower high-risk plaque prevalence, number of high-risk plaques, and/or napkin-ring sign. Also, the CT density of plaque can be higher in the omega-3 fatty acid group. As another non-limiting example, in some cases, icosapent ethyl can result in reduced low attenuation plaque (LAP) volume by 17% and overall plaque volume by 9% compared to baseline and placebo.
In addition, as another non-limiting example, in some cases of HIV-positive patients, non-calcified and high-risk plaque burden can be higher on anti-retroviral therapy and can involve higher cardiovascular risk. Further, as another non-limiting example, in some cases of patients taking statins, there can be a slower rate of percent atheroma progression with more rapid progression of calcified percent atheroma volume. Other changes in plaque can also occur due to some other exposure. Importantly, in some instances, patients may often be taking combinations of these medications and/or living healthy or unhealthy lifestyles that may contribute multi-factorially to the changes in plaque over time in a manner that is not predictable, but can be measurable, for example utilizing one or more embodiments described herein. In some embodiments, the systems, methods, and devices described herein can be configured to analyze dichotomous and/or categorical changes in plaque (e.g., from non-calcified to calcified, high-risk to non-high-risk, and/or the like) and burden of plaque (e.g., volume, percent atheroma volume, and/or the like), as well as analyze serial continuous changes over time. In addition, in some embodiments, the systems, methods, and devices described herein can be configured to leverage the continuous change of a plaque's features as a longitudinal method for guiding need for intensification of medical therapy, change in lifestyle, and/or coronary revascularization. Further, in some embodiments, the systems, methods, and devices described herein can be configured to leverage the difference in these changes over time as a method to guide therapy in a manner that improves patient-specific event-free survival. As such, in some embodiments, the systems, methods, and devices described herein can be configured to determine the progression, regression or stabilization, and/or destabilization of coronary artery disease and/or other vascular disease over time, for example in response to a medical treatment, in a manner that will reduce adverse coronary events. In particular, in some embodiments, the systems, methods, and devices described herein can be configured to analyze the density/signal intensity, vascular remodeling, location of plaques, plaque volume/disease burden, and/or the like. In some embodiments, the system can be configured to utilize a normalization device, such as those described herein, to account for differences in scan results (such as for example density values, etc.) between different scanners, scan parameters, and/or the like. In some embodiments, the system can be configured to track imaging density (CT) and/or signal intensity (MRI) of coronary atherosclerotic lesions over time by serial imaging. In some embodiments, the system can be configured to leverage directionality changes in coronary lesions over time (e.g., lower-to-higher CT density, higher-to-even higher CT density, etc.) as measurements of stabilization of plaque. In some embodiments, the system can be configured to leverage directionality changes to link to risk of disease events (e.g., high CT density is associated with lower risk of heart attack). In some embodiments, the system can be configured to guide decision making as to whether to add another medication/intensify medical therapy. For example, if there is no change in density/signal intensity for a patient after 1 year, the system can be configured to propose addition of another medication.
In some embodiments, the system can be configured to guide decision making in the above manner in order to reduce adverse coronary events (e.g., acute coronary syndrome, rapid progression, ischemia, and/or the like). FIG.22Aillustrates an example(s) of tracking the attenuation of plaque for analysis and/or treatment of coronary artery and/or other vascular disease. As a non-limiting example,FIG.22Aillustrates example cross sections of arteries from a CT image. In the illustrated example embodiment, the yellow circles are the lumen, the orange circles are the outer vessel wall and everything in between is plaque tissue or similar. In the illustrated example embodiment, the “high-risk plaques” by CT are indicated to the left, where they are classified as such by having low attenuation plaque (e.g., <30 Hounsfield units) and positive (>1) vascular remodeling (e.g., cross-sectional area or diameter at the site of maximum plaque compared to cross-sectional area at the most proximal normal appearing cross-section). In some embodiments, positive arterial remodeling can be defined as >1.05 or >1.10. As illustrated in the example embodiment ofFIG.22A, in some embodiments, plaques can be of continuously different density. In the left most cross-section of the illustrated example embodiment, the plaque is black, and turns progressively gray and then lighter and then brighter until it becomes very bright white, with a Hounsfield unit density of >1000 in the right most cross-section of the illustrated example embodiment. In some embodiments, this density can be reported out continuously as Hounsfield unit densities or other depending on the acquisition mode of the CT image, which can include single-energy, dual energy, spectral, and/or photon counting imaging. In some embodiments, using imaging methods (e.g., by CT), darker plaques (e.g., with lower Hounsfield unit densities) can represent higher risk (e.g., of myocardial infarction, of causing ischemia, of progressing rapidly, and/or the like), while brighter plaques (e.g., with higher Hounsfield unit density) can represent lower risk. In some embodiments, the system is configured to leverage the continuous scale of the plaque composition density as a marker for increased stabilization of plaque after treatment, and to leverage this information to continually update prognostic risk stratification for future coronary events (e.g., acute coronary syndromes, ischemia, etc.). Thus, in some embodiments, an individual's risk of a heart attack can be dependent on the density of the plaque, and changes in the density after treatment can attenuate that risk, increase that risk, and/or have no effect on risk. In some embodiments, the system can be configured to generate and/or suggest treatment in a number of different forms, which may include: medications (e.g., statins, human immunodeficiency virus (HIV) medications, icosapent ethyl, bempedoic acid, rivaroxaban, aspirin, proprotein convertase subtilisin/kexin type 9 (PCSK-9) inhibitors, inclisiran, sodium-glucose cotransporter-2 (SGLT-2) inhibitors, glucagon-like peptide-1 (GLP-1) receptor agonists, low-density lipoprotein (LDL) apheresis, etc.); lifestyle (increased exercise, aerobic exercise, anaerobic exercise, cessation of smoking, changes in diet, etc.); and/or revascularization (after bypass grafting, stenting, bioabsorbable scaffolds, etc.). 
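As a concrete illustration of the FIG.22A definition above (low attenuation plaque below roughly 30 Hounsfield units together with positive remodeling), a minimal sketch follows; the positive remodeling cut point is configurable because, as noted above, >1, >1.05, or >1.10 may be used.

```python
def is_high_risk_plaque(lowest_density_hu: float,
                        remodeling_index: float,
                        lap_cutoff_hu: float = 30.0,
                        positive_remodeling_cutoff: float = 1.10) -> bool:
    """Flag a 'high-risk plaque' as low attenuation plaque plus positive remodeling."""
    return lowest_density_hu < lap_cutoff_hu and remodeling_index > positive_remodeling_cutoff
```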
In some embodiments, the system can be configured to generate and/or provide a “treat to the image” continuous approach that offers clinicians and patients a method for following plaque changes over time to ensure that the plaque is stabilizing and the prognosis is improving. For example, in some embodiments, a patient may be started on a statin medication after their CT scan. Over time (e.g., months), a plaque may change in Hounsfield unit density from 30 to 45 HUs. In some embodiments, this may represent a beneficial outcome of plaque stabilization and connote the efficacy of the statin medications on the plaque. Alternatively, over time, a plaque may not change in Hounsfield unit density, staying at 30 HU over time. In this case, in some embodiments, this may represent an adverse outcome wherein the statin medication is ineffective in stabilizing the plaque. In some embodiments, should a plaque not stabilize to medical therapy (e.g., HU density remains low, or is very slow to rise), then another medication (e.g., PCSK-9 inhibitor) may be added, as the constancy in the HU can be a titratable biomarker that is used to guide medical therapy intensification and, ultimately, improve patient outcomes (e.g., by reducing myocardial infarction, rapid progression, ischemia, and/or other adverse event). In some embodiments, densities of plaques may be influenced by a number of factors that can include one or more of: scanner type, image acquisition parameters (e.g., mA, kVp, etc.), energy (e.g., single-, dual-, spectral, photon counting, etc.), gating (e.g., axial vs. retrospective helical, etc.), contrast, age, patient body habitus, surrounding cardiac structures, plaque type (e.g., calcium may cause partial volume artifact, etc.), and/or others. As such, in some embodiments, the system can be configured to normalize one or more of these factors to further standardize comparisons in plaque types over time. In some embodiments, the system can be configured to track vascular remodeling of coronary atherosclerotic lesions over time using image analysis techniques. In some embodiments, the system can be configured to leverage directionality changes in remodeling (e.g., outward, intermediate, inward, and/or the like). In some embodiments, the system can be configured to evaluate directionality on a patient, vessel, segment, lesion and/or cross section basis. In some embodiments, the system can be configured to leverage directionality changes to link to risk of disease events. For example, in some embodiments, more outward remodeling can be indicative of a higher risk of heart attack, and/or the like. In some embodiments, the system can be configured to guide decision making as to whether to add another medication/intensify medical therapy and/or perform coronary revascularization based upon worsening or new positive remodeling. In some embodiments, the system can be configured to guide decision making in the above manner in order to reduce adverse coronary events (e.g., acute coronary syndrome, rapid progression, ischemia, and/or the like). In some embodiments, a similar analogy for plaque composition can be applied to measures of vascular remodeling in a specific coronary lesion and/or across all coronary lesions within the coronary vascular tree. In particular, in some embodiments, the remodeling index can be a continuous measure and can be reported by one or more of diameter, area, and/or volume.
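As a concrete illustration of the density-based “treat to the image” logic described above (a plaque moving from 30 HU to 45 HU under statin therapy versus staying at 30 HU), a minimal sketch follows; the 5 HU minimum rise used to call a change is an assumption for illustration only.

```python
def density_response(baseline_hu: float, follow_up_hu: float, minimum_rise_hu: float = 5.0) -> str:
    """Classify serial change in plaque Hounsfield unit density after treatment."""
    if follow_up_hu - baseline_hu >= minimum_rise_hu:
        return "stabilizing - current therapy appears effective"
    return "not stabilizing - consider intensifying therapy (e.g., adding another agent)"

# Hypothetical usage: density_response(30.0, 45.0) -> "stabilizing - ..."
#                     density_response(30.0, 30.0) -> "not stabilizing - ..."
```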
As positive remodeling can be associated with lesions at the time of acute coronary syndrome and negative remodeling may not, in some embodiments, serial imaging (e.g., CT scans, etc.) can be followed across time to determine whether the plaque is causing more or less positive remodeling. In some embodiments, cessation and/or slowing of positive remodeling can be favorable sign that can be used to prognostically update an individual or a lesion's risk of myocardial infarction or other adverse coronary event (e.g., ischemia, etc.). In some embodiments, the system can be configured to provide a “treat to the image” continuous approach that offers clinicians and patients a method for following plaque changes over time to ensure that the plaque is stabilizing and the prognosis is improving. For example, in some embodiments, a patient may be started on a statin medication after their CT scan. Over time (e.g., months, etc.), a plaque may change in remodeling index from 1.10 to 1.08. In some embodiments, this may represent a beneficial outcome of plaque stabilization and connote the efficacy of the statin medications on the plaque. Alternatively, over time, a plaque may not change in remodeling index over time, staying at 1.10. In this case, in some embodiments, this may represent an adverse outcome wherein the statin medication is ineffective in stabilizing the plaque. In some embodiments, should a plaque not stabilize to medical therapy (for example if the remodeling index remains high or is very slow to decrease), then another medication (e.g., PCSK-9 inhibitor, etc.) may be added, as the constancy in the remodeling can be a titratable biomarker that is used to guide medical therapy intensification and, ultimately, improve patient outcomes (e.g., by reducing myocardial infarction, rapid progression, ischemia, and/or other adverse event). In some embodiments, remodeling indices of plaques may be influenced by a number of factors that can include one or more of: scanner type, image acquisition parameters (e.g., mA, kVp, etc.), energy (e.g., single-, dual-, spectral, photon counting, etc.), gating (e.g., axial vs. retrospective helical, etc.), contrast, age, patient body habitus, surrounding cardiac structures, plaque type (e.g., calcium may cause partial volume artifact, etc.), and/or the like. In some embodiments, the system can be configured to normalize to one or more of these factors to further standardize comparisons in plaque types over time. In some embodiments, the system can be configured to track location of one or more regions of plaque over time. For example, in some embodiments, the system can be configured to track the location of one or more regions of plaque based on one or more of: myocardial facing vs. pericardial facing; at a bifurcation or trifurcation; proximal vs. mid vs. distal; main vessel vs. branch vessel; and/or the like. In some embodiments, the system can be configured to evaluate directionality on a patient, vessel, segment, lesion and/or cross section basis. In some embodiments, the system can be configured to leverage directionality changes to link to risk of disease events (e.g. more outward remodeling, higher risk of heart attack, and/or the like). In some embodiments, the system can be configured to guide decision making as to whether to add another medication/intensify medical therapy or perform coronary revascularization, and/or the like. 
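Analogously, returning to the remodeling index example above (an index falling from 1.10 to 1.08 versus staying at 1.10), a minimal sketch follows; the ordinal cut points reuse the <0.90 / 0.90-1.10 / >1.10 categories mentioned earlier in this document, and the function names are illustrative.

```python
def remodeling_category(remodeling_index: float) -> str:
    """Ordinal remodeling category using the cut points mentioned earlier."""
    if remodeling_index < 0.90:
        return "negative"
    if remodeling_index <= 1.10:
        return "intermediate"
    return "positive"

def remodeling_response(baseline_index: float, follow_up_index: float) -> str:
    """Serial assessment: a falling index is treated as a favorable sign, per the example above."""
    if follow_up_index < baseline_index:
        return "positive remodeling slowing - favorable response"
    return "no improvement in remodeling - consider intensifying therapy"
```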
In some embodiments, the system can be configured to guide decision making in the above manner in order to reduce adverse coronary events (e.g., acute coronary syndrome, rapid progression, ischemia, and/or the like). In some embodiments, the system can be configured to identify and/or correlate certain coronary events as being associated with increased risk over time. For example, in some embodiments, pericardial facing plaque may have a higher rate of being a culprit lesion at the time of myocardial infarction than myocardial facing plaques. In some embodiments, bifurcation lesions can appear to have a higher rate of being a culprit lesion at the time of myocardial infarction than non-bifurcation/trifurcation lesions. In some embodiments, proximal lesions can tend to be more common than distal lesions and can also be most frequently the site of myocardial infarction or other adverse coronary event. In some embodiments, the system can be configured to track each or some one of these individual locations of plaque and, based upon their presence, extent and severity, assign a baseline risk. In some embodiments, after treatment with medication, lifestyle or intervention, serial imaging (e.g., by CT, etc.) can be performed to determine changes in these features, which can be used to update risk assessment. In some embodiments, the system can be configured to provide a “treat to the image” continuous approach that offers clinicians and patients a method for following plaque changes in location over time to ensure that the plaque is stabilizing and the prognosis is improving. For example, in some embodiments, a patient may be started on a statin medication after their CT scan. Over time (e.g., months, etc.), a plaque may regress in the pericardial-facing region but remain in the myocardial facing region. In some embodiments, this may represent a beneficial outcome of plaque stabilization and connote the efficacy of the statin medications on the plaque. Alternatively, over time, a plaque may not change in location over time and remain pericardial-facing. In this case, in some embodiments, this may represent an adverse outcome wherein the statin medication is ineffective in stabilizing the plaque. In some embodiments, should a plaque not stabilize to medical therapy (for example if the location of plaque remains pericardial-facing or is very slow to change), then another medication (e.g., PCSK-9 inhibitor or other) may be added, as the constancy in the location of plaque can be a titratable biomarker that is used to guide medical therapy intensification and, ultimately, improve patient outcomes (e.g., by reducing myocardial infarction, rapid progression, ischemia, or other adverse event). In some embodiments, the CT appearance of plaque location may be influenced by a number of factors that may include one or more of: scanner type, image acquisition parameters (e.g., mA, kVp, etc.), energy (e.g., single-, dual-, spectral, photon counting, etc.), gating (e.g., axial vs. retrospective helical, etc.), contrast, age, patient body habitus, surrounding cardiac structures, plaque type (e.g., calcium may cause partial volume artifact, etc.), and/or others. In some embodiments, the system can be configured to normalize to one or more of these factors to further standardize comparisons in plaque types over time. In some embodiments, the system can be configured to track plaque volume and/or plaque volume as a function of vessel volume (e.g., percent atheroma volume or PAV, etc.). 
In some embodiments, plaque volume and/or PAV can be tracked on a per-patient, per-vessel, per-segment or per-lesion basis. In some embodiments, the system can be configured to evaluate directionality of plaque volume or PAV (e.g., increasing, decreasing or staying the same). In some embodiments, the system can be configured to leverage directionality changes to link to risk of disease events. For example, in some embodiments, an increase in plaque volume or PAV can be indicative of higher risk. Similarly, in some embodiments, slowing of plaque progression can be indicative of lower risk and/or the like. In some embodiments, the system can be configured to guide decision making as to whether to add another medication/intensify medical therapy or perform coronary revascularization. For example, in some embodiments, in response to increasing plaque volume or PAV, the system can be configured to propose increased/intensified medical therapy, other treatment, increased medication dosage, and/or the like. In some embodiments, the system can be configured to guide decision making in order to reduce adverse coronary events (e.g., acute coronary syndrome, rapid progression, ischemia, and/or the like). In some embodiments, the system can be configured to identify and/or correlate certain adverse coronary events as being associated with increased risk over time. For example, in some embodiments, higher plaque volume and/or higher PAV can result in high risk of CAD events. In some embodiments, the system can be configured to track plaque volume and/or PAV and assign a baseline risk based at least in part on its presence, extent, and/or severity. In some embodiments, after treatment with medication, lifestyle or intervention, serial imaging (e.g., by CT) can be performed to determine changes in these features, which can be used to update risk assessment. In some embodiments, the system can be configured to provide a “treat to the image” continuous approach that offers clinicians and patients a method for following plaque changes in volume over time to ensure that the plaque is stabilizing and the prognosis is improving. For example, in some embodiments, a patient may be started on a statin medication after their CT scan. Over time (e.g., months, etc.), a plaque may increase in volume or PAV. In some embodiments, this may represent an adverse outcome and connote the inefficacy of statin medications. Alternatively, over time, the volume of plaque may not change. In this case, in some embodiments, this may represent a beneficial outcome wherein the statin medication is effective in stabilizing the plaque. In some embodiments, should a plaque not stabilize to medical therapy (e.g., if plaque volume or PAV increases), then another medication (e.g., PCSK-9 inhibitor and/or other) may be added, as the constancy in the plaque volume or PAV can be a titratable biomarker that is used to guide medical therapy intensification and, ultimately, improve patient outcomes (e.g., by reducing myocardial infarction, rapid progression, ischemia, and/or other adverse event). In some embodiments, the CT appearance of plaque volume may be influenced by a number of factors that may include one or more of: scanner type, image acquisition parameters (e.g., mA, kVp, etc.), energy (e.g., single-, dual-, spectral, photon counting, etc.), gating (e.g., axial vs.
retrospective helical, etc.), contrast, age, patient body habitus, surrounding cardiac structures, plaque type (e.g., calcium may cause partial volume artifact, etc.), and/or others. In some embodiments, the system can be configured to normalize to one or more of these factors to further standardize comparisons in plaque types over time. In some embodiments, the system can be configured to analyze and/or report one or more of the overall changes described above related to plaque composition, vascular remodeling, and/or other features on a per-patient, per-vessel, per-segment, and/or per-lesion basis, for example to provide prognostic risk stratification either in isolation (e.g., just composition, etc.) and/or in combination (e.g., composition+remodeling+location, etc.). In some embodiments, the system can be configured to update risk assessment and/or guide medical therapy, lifestyle changes, and/or interventional therapy based on image analysis and/or disease tracking. In particular, in some embodiments, the system can be configured to report in a number of ways changes to arteries/plaques that occur on a continuous basis as a method for tracking disease stabilization or worsening. In some embodiments, as a method of tracking disease, the system can be configured to report the risk of adverse coronary events. For example, in some embodiments, based upon imaging-based changes, a quantitative risk score can be updated from baseline at follow-up. In some embodiments, the system can be configured to utilize a 4-category method that analyzes: (1) progression—entails worsening (e.g., lower attenuation, greater positive remodeling, etc.); (2) regression—entails diminution (e.g., higher attenuation, lower positive remodeling, etc.); (3) mixed response—progression, but of more prognostically beneficial findings (e.g., higher volume of plaque over time, but with calcified 1K plaque dominant) (mixed response can also include plaque remodeling and location); and/or (4) mixed response—progression, but of more prognostically adverse findings (higher volume of plaque over time, but with more non-calcified low attenuation plaques) (mixed response can also include plaque remodeling and location). In some embodiments, for tracking disease as a method to guide therapy, intensification of medical therapy and/or institution of lifestyle changes or coronary revascularization may occur and be prompted by increased risk of adverse coronary events or being in the “progression” or “mixed response—progression of calcified plaque” categories for example. Further, in some embodiments, serial tracking of disease and appropriate intensification of medical therapy, lifestyle changes or coronary revascularization based upon composition, remodeling and/or location changes, can be provided as a guide to reduce adverse coronary events. FIG.22Bis a flowchart illustrating an overview of an example embodiment(s) of a method for treating to the image. As illustrated inFIG.22B, in some embodiments, the system is configured to access a first set of plaque and/or vascular parameters of a subject, such as for example relating to the coronaries, at block2202. In some embodiments, one or more plaque and/or vascular parameters can be accessed from a plaque and/or vascular parameter database2204. In some embodiments, one or more plaque and/or vascular parameters can be derived and/or analyzed from one or more medical images being stored in a medical image database100. 
The one or more plaque parameters and/or vascular parameters can include any such parameters described herein. As a non-limiting example, the one or more plaque parameters can include one or more of density, location, or volume of one or more regions of plaque. The density can be absolute density, Hounsfield unit density, and/or the like. The location of one or more regions of plaque can be determined as one or more of myocardial facing, pericardial facing, at a bifurcation, at a trifurcation, proximal, mid, or distal along a vessel, or in a main vessel or branch vessel, and/or the like. The volume can be absolute volume, PAV, and/or the like. Further, the one or more vascular parameters can include vascular remodeling or any other vascular parameter described herein. For example, vascular remodeling can include directionality changes in remodeling, such as outward, intermediate, or inward. In some embodiments, vascular remodeling can include vascular remodeling of one or more coronary atherosclerotic lesions. In some embodiments, at block2206, the subject can be treated with some medical treatment to address a disease, such as CAD. In some embodiments, the treatment can include one or more medications, lifestyle changes or conditions, revascularization procedures, and/or the like. For example, in some embodiments, medication can include statins, human immunodeficiency virus (HIV) medications, icosapent ethyl, bempedoic acid, rivaroxaban, aspirin, proprotein convertase subtilisin/kexin type 9 (PCSK-9) inhibitors, inclisiran, sodium-glucose cotransporter-2 (SGLT-2) inhibitors, glucagon-like peptide-1 (GLP-1) receptor agonists, low-density lipoprotein (LDL) apheresis, and/or the like. In some embodiments, lifestyle changes or condition can include increased exercise, aerobic exercise, anaerobic exercise, cessation of smoking, change in diet, and/or the like. In some embodiments, revascularization can include bypass grafting, stenting, use of a bioabsorbable scaffold, and/or the like. In some embodiments, at block2208, the system can be configured to access one or more medical images of the subject taken after the subject is treated with the medical treatment for some time. The medical image can include any type of image described herein, such as for example, CT, MRI, and/or the like. In some embodiments, at block2210, the system can be configured to identify one or more regions of plaque on the one or more medical images, for example using one or more image analysis techniques described herein. In some embodiments, at block2212, the system can be configured to analyze the one or more medical images to determine a second set of plaque and/or vascular parameters. The second set of plaque and/or vascular parameters can be stored and/or accessed from the plaque and/or vascular parameter database2204in some embodiments. The second set of plaque and/or vascular parameters can include any parameters described herein, including for example those of the first set of plaque and/or vascular parameters. In some embodiments, the system at block2214can be configured to normalize one or more of the first set of plaque parameters, first set of vascular parameters, second set of plaque parameters, and/or second set of vascular parameters. As discussed herein, one or more such parameters or quantification thereof can depend on the scanner type or scan parameter used to obtain a medical image from which such parameters were derived from. 
As such, in some embodiments, it can be advantageous to normalize for such differences. To do so, in some embodiments, the system can be configured to utilize readings obtained from a normalization device as described herein. In some embodiments, the system at block2216can be configured to analyze one or more changes between the first set of plaque parameters and the second set of plaque parameters. For example, in some embodiments, the system can be configured to analyze changes between a specific type of plaque parameter. In some embodiments, the system can be configured to generate a first weighted measure of one or more of the first set of plaque parameters and a second weighted measure of one or more of the second set of plaque parameters and analyze changes between the first weighted measure and the second weighted measure. The weighted measure can be generated in some embodiments by applying a mathematical transform or any other technique described herein. In some embodiments, the system at block2218can be configured to analyze one or more changes between the first set of vascular parameters and the second set of vascular parameters. For example, in some embodiments, the system can be configured to analyze changes between a specific type of vascular parameter. In some embodiments, the system can be configured to generate a first weighted measure of one or more of the first set of vascular parameters and a second weighted measure of one or more of the second set of vascular parameters and analyze changes between the first weighted measure and the second weighted measure. The weighted measure can be generated in some embodiments by applying a mathematical transform or any other technique described herein. In some embodiments, at block2220, the system can be configured to track the progression of a disease, such as CAD, based on the analyzed changes between one or more plaque parameters and/or vascular parameters. In some embodiments, the system can be configured to determine progression of a disease based on analyzing changes between a weighted measure of one or more plaque parameters and/or vascular parameters as described herein. In some embodiments, the system can be configured to determine progression of a disease based on analyzing changes between one or more specific plaque parameters and/or vascular parameters. In particular, in some embodiments, an increase in density of the one or more regions of plaque can be indicative of disease stabilization. In some embodiments, a change in location of a region of plaque from pericardial facing to myocardial facing is indicative of disease stabilization. In some embodiments, an increase in volume of the one or more regions of plaque between the first point in time and the second point in time is indicative of disease progression. In some embodiments, more outward remodeling between the first point in time and the second point in time is indicative of disease progression. In some embodiments, disease progression is tracked on one or more of a per-subject, per-vessel, per-segment, or per-lesion basis. In some embodiments, disease progression can be determined by the system as one or more of progression, regression, mixed response—progression of calcified plaque, or mixed response—progression of non-calcified plaque. In some embodiments, at block2222, the system can be configured to determine the efficacy of the medical treatment, for example based on the tracked disease progression.
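As one minimal sketch of how the block2220 categories above might be assigned from serial volume changes (the decision rules here are simplified assumptions and are not specified in this passage):

```python
def progression_category(calcified_volume_change_mm3: float,
                         non_calcified_volume_change_mm3: float) -> str:
    """Simplified assignment of the block2220 categories described above."""
    total_change = calcified_volume_change_mm3 + non_calcified_volume_change_mm3
    if total_change < 0:
        return "regression"
    if total_change == 0:
        return "no measurable change"
    if calcified_volume_change_mm3 > 0 and non_calcified_volume_change_mm3 <= 0:
        return "mixed response - progression of calcified plaque"
    if non_calcified_volume_change_mm3 > 0 and calcified_volume_change_mm3 <= 0:
        return "mixed response - progression of non-calcified plaque"
    return "progression"
```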
As such, in some embodiments, changes in one or more plaque and/or vascular parameters as derived from one or more medical images using image analysis techniques can be used as a biomarker for assessing treatment. In some embodiments, the system can be configured to determine efficacy of a treatment based on analyzing changes between a weighted measure of one or more plaque parameters and/or vascular parameters as described herein. In some embodiments, the system can be configured to determine efficacy of a treatment based on analyzing changes between one or more specific plaque parameters and/or vascular parameters. In particular, in some embodiments, an increase in density of the one or more regions of plaque can be indicative of a positive efficacy of the medical treatment. In some embodiments, a change in location of a region of plaque from pericardial facing to myocardial facing is indicative of a positive efficacy of the medical treatment. In some embodiments, an increase in volume of the one or more regions of plaque between the first point in time and the second point in time is indicative of a negative efficacy of the medical treatment. In some embodiments, more outward remodeling between the first point in time and the second point in time is indicative of a negative efficacy of the medical treatment. In some embodiments, at block2224, the system is configured to generate a proposed medical treatment for the subject based on the determined efficacy of the prior treatment. For example, if the prior treatment is determined to be positive or stabilizing the disease, the system can be configured to propose the same treatment. In some embodiments, if the prior treatment is determined to be negative or not stabilizing the disease, the system can be configured to propose a different treatment. The newly proposed treatment can include any of the types of treatment discussed herein, for example including those discussed in connection with the prior treatment at block2206.
Determining Treatment(s) for Reducing Cardiovascular Risk and/or Events
Some embodiments of the systems, devices, and methods described herein are configured to determine a treatment(s) for reducing cardiovascular risk and/or events. In particular, some embodiments of the systems and methods described herein are configured to automatically and/or dynamically determine or generate lifestyle, medication and/or interventional therapies based upon actual atherosclerotic cardiovascular disease (ASCVD) burden, ASCVD type, and/or ASCVD progression. As such, some systems and methods described herein can provide personalized medical therapy based upon CCTA-characterized ASCVD. In some embodiments, the systems and methods described herein are configured to dynamically and/or automatically analyze medical image data, such as for example non-invasive CT, MRI, and/or other medical imaging data of the coronary region of a patient, to generate one or more measurements indicative of or associated with the actual ASCVD burden, ASCVD type, and/or ASCVD progression, for example using one or more artificial intelligence (AI) and/or machine learning (ML) algorithms. In some embodiments, the systems and methods described herein can further be configured to automatically and/or dynamically generate one or more patient-specific treatments and/or medications based on the actual ASCVD burden, ASCVD type, and/or ASCVD progression, for example using one or more artificial intelligence (AI) and/or machine learning (ML) algorithms.
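Bridging the block2224 proposal step above with the ASCVD-guided intensification described below, a minimal and heavily simplified sketch follows; the efficacy flag, the escalation list, and the returned strings are illustrative assumptions, not clinical guidance.

```python
def propose_treatment(prior_treatment: str, treatment_effective: bool,
                      escalation_options: list) -> str:
    """Propose continuing the prior treatment when it appears effective; otherwise
    propose a modification (e.g., adding or switching therapy), per block2224 above."""
    if treatment_effective:
        return f"continue {prior_treatment}"
    if escalation_options:
        return f"modify therapy: {prior_treatment} + {escalation_options[0]}"
    return "re-evaluate therapy and lifestyle; consider revascularization assessment"

# Hypothetical usage:
# propose_treatment("statin", False, ["PCSK-9 inhibitor", "icosapent ethyl"])
# -> "modify therapy: statin + PCSK-9 inhibitor"
```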
In some embodiments, the system can be configured to utilize a normalization device, such as those described herein, to account for differences in scan results (such as for example density values, etc.) between different scanners, scan parameters, and/or the like. In some embodiments of cardiovascular risk assessment of asymptomatic individuals, the system can be configured to use one or more risk factors to guide risk stratification and treatment. For example, some cardiovascular risk factors can include measurements of surrogate measures of coronary artery disease (CAD) or of clinical states that contribute to CAD, including dyslipidemia, hypertension, diabetes, and/or the like. In some embodiments, such factors can form the basis of treatment recommendations in professional societal guidelines, which can have defined goals for medical treatment and lifestyle based upon these surrogate markers of CAD, such as total and LDL cholesterol (blood biomarkers), blood pressure (biometric) and hemoglobin A1C (blood biomarker). In some embodiments, this approach can improve population-based survival and reduce the incidence of heart attacks and strokes. However, in some embodiments, these methods also suffer from a lack of specificity, wherein treatment can be more effective in populations but may not pinpoint individual persons who harbor residual risk. As an example, LDL has been found in population-based studies to explain only 29% of future heart attacks and, even in the pivotal statin treatment trials, those individuals treated effectively with statins still retain 70-75% residual risk of heart attacks. As such, some embodiments described herein address such technical shortcomings by leveraging lifestyle, medication and/or interventional therapies based upon actual atherosclerotic cardiovascular disease (ASCVD) burden, ASCVD type, and/or ASCVD progression. Given the multitude of medications available to target the ASCVD process through atherosclerosis, thrombosis and inflammatory pathways, in some embodiments, such a direct precision-medicine ASCVD diagnosis and treatment approach can be more effective than treating surrogate markers of ASCVD at the individual level. In some embodiments, the systems and methods described herein are configured to automatically and/or dynamically determine or generate lifestyle, medication and/or interventional therapies based upon actual atherosclerotic cardiovascular disease (ASCVD) burden, ASCVD type, and/or ASCVD progression. In particular, in some embodiments, the systems and methods are configured to use coronary computed tomographic angiography (CCTA) for quantitative assessment of ASCVD in one or more or all vascular territories, including for example coronary, carotid, aortic, lower extremity, cerebral, renal arteries, and/or the like. In some embodiments, the systems and methods are configured to analyze and/or utilize not only the amount (or burden) of ASCVD, but also the type of plaque in risk stratification. For example, in some embodiments, the systems and methods are configured to associate low attenuation plaques (LAP) and/or non-calcified plaques (NCP) of certain densities with future major adverse cardiovascular events (MACE), whilst associating calcified plaques and, in particular, calcified plaques of higher density with greater stability.
Further, in some embodiments, the systems and methods are configured to generate a patient-specific treatment plan that can include use of medication that has been associated with a reduction in LAP or NCP of certain densities and/or an acceleration in calcified plaque formation in populations, i.e., a transformation of plaque by compositional burden. In some embodiments, the systems and methods are configured to generate a patient-specific treatment plan that can include use of medications which can be observed by CCTA to be associated with modification of ASCVD in the coronary arteries, carotid arteries, and/or other arteries, such as for example statins, PCSK9 inhibitors, GLP-1 receptor agonists, icosapent ethyl, and/or colchicine, amongst others. As described herein, in some embodiments, the systems and methods are configured to leverage ASCVD burden, type, and/or progression to logically guide clinical decision making. In particular, in some embodiments, the systems and methods described herein are configured to leverage, analyze, and/or utilize ASCVD burden, type, and/or progression to guide medical therapy to reduce adverse ASCVD events and/or improve patient-specific event-free survival in a personalized fashion. For example, in some embodiments, the system can be configured to analyze and/or utilize ASCVD type, such as peri-lesion tissue atmosphere, localization, and/or the like. More specifically, in some embodiments, the systems and methods described herein are configured to utilize one or more CCTA algorithms and/or one or more medical treatment algorithms that quantify the presence, extent, severity and/or type of ASCVD, such as for example its localization and/or peri-lesion tissues. In some embodiments, the one or more medical treatment algorithms are configured to analyze any medical images obtained from any imaging modality, such as for example computed tomography (CT), magnetic resonance (MR), ultrasound, nuclear medicine, molecular imaging, and/or others. In some embodiments, the systems and methods described herein are configured to utilize one or more medical treatment algorithms that are personalized (rather than population-based), treat actual disease (rather than surrogate markers of disease, such as risk factors), and/or are guided by changes in CCTA-identified ASCVD over time (such as for example, progression, regression, transformation, and/or stabilization). In some embodiments, the one or more CCTA algorithms and/or the one or more medical treatment algorithms are computer-implemented algorithms and/or utilize one or more AI and/or ML algorithms. In some embodiments, the systems and methods are configured to assess a baseline ASCVD in an individual. In some embodiments, the systems and methods are configured to evaluate ASCVD by utilizing coronary CT angiography (CCTA). In some embodiments, the systems and methods are configured to identify and/or analyze the presence, localization, extent, severity, and/or type of atherosclerosis, peri-lesion tissue characteristics, and/or the like. In some embodiments, the method of ASCVD evaluation can be dependent upon quantitative imaging algorithms that perform analysis of coronary, carotid, and/or other vascular beds (such as, for example, lower extremity, aorta, renal, and/or the like). In some embodiments, the systems and methods are configured to categorize ASCVD into specific categories based upon risk.
For example, such categories can include: Stage 0, Stage I, Stage II, Stage III; or none, minimal, mild, moderate/severe; or primarily calcified vs. primarily non-calcified; or X units of low density non-calcified plaque; or X % of NCP as a function of overall volume or burden. In some embodiments, the systems and methods can be configured to quantify ASCVD continuously. In some embodiments, the systems and methods can be configured to define categories by levels of future ASCVD risk of events, such as heart attack, stroke, amputation, dissection, and/or the like. In some embodiments, one or more other non-ASCVD measures may be included to enhance risk assessment, such as for example cardiovascular measurements (e.g., left ventricular hypertrophy for hypertension; atrial volumes for atrial fibrillation; fat; etc.) and/or non-cardiovascular measurements that may contribute to ASCVD (e.g., emphysema, etc.). In some embodiments, these measurements can be quantified using one or more CCTA algorithms. In some embodiments, the systems and methods described herein can be configured to generate a personalized or patient-specific treatment. More specifically, in some embodiments, the systems and methods can be configured to generate therapeutic recommendations based upon ASCVD presence, extent, severity, and/or type. In some embodiments, rather than utilizing risk factors (such as, for example, cholesterol, diabetes), the treatment algorithm can comprise and/or utilize a tiered approach that intensifies medical therapy, lifestyle, and/or interventional therapies based upon ASCVD directly in a personalized fashion. In some embodiments, the treatment algorithm can be configured to generally ignore one or more conventional markers of success (e.g., lowering cholesterol, hemoglobin A1C, etc.) and instead leverage ASCVD presence, extent, severity, and/or type of disease to guide therapeutic decisions of medical therapy intensification. In some embodiments, the treatment algorithm can be configured to combine one or more conventional markers of success (e.g., lowering cholesterol, hemoglobin A1C, etc.) with ASCVD presence, extent, severity, and/or type of disease to guide therapeutic decisions of medical therapy intensification. In some embodiments, the treatment algorithm can be configured to combine one or more novel markers of success (e.g., such as genetics, transcriptomics, or other 'omics measurements, etc.) with ASCVD presence, extent, severity, and/or type of disease to guide therapeutic decisions of medical therapy intensification. In some embodiments, the treatment algorithm can be configured to combine one or more other imaging markers of success (e.g., such as carotid ultrasound imaging, abdominal aortic ultrasound or computed tomography, lower extremity arterial evaluation, and/or others) with ASCVD presence, extent, severity, and/or type of disease to guide therapeutic decisions of medical therapy intensification. In some embodiments, the systems and methods are configured to perform a response assessment. In particular, in some embodiments, the systems and methods are configured to perform repeat and/or serial CCTA in order to determine the efficacy of therapy on a personalized basis, and to determine progression, stabilization, transformation, and/or regression of ASCVD. In some embodiments, progression can be defined as rapid or non-rapid.
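As a concrete illustration of the rapid versus non-rapid distinction, a minimal sketch follows; the example thresholds (>1.0% change in percent atheroma volume or >200 mm3 of plaque) are those mentioned later in this document, and annualizing by the scan interval is an added assumption.

```python
def is_rapid_progression(pav_change_percent: float,
                         plaque_volume_change_mm3: float,
                         years_between_scans: float = 1.0) -> bool:
    """Rapid vs. non-rapid ASCVD progression using the example thresholds noted
    later in this document; per-year normalization is an illustrative assumption."""
    annual_pav_change = pav_change_percent / years_between_scans
    annual_volume_change = plaque_volume_change_mm3 / years_between_scans
    return annual_pav_change > 1.0 or annual_volume_change > 200.0
```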
In some embodiments, stabilization can be defined as transformation of ASCVD from non-calcified to calcified, or reduction of low attenuation plaque, or reduction of positive arterial remodeling. In some embodiments, regression of ASCVD can be defined as a decrease in ASCVD volume or burden or a decrease in specific plaque types, such as non-calcified or low attenuation plaque. In some embodiments, the systems and methods are configured to update personalized treatment based upon response assessment. In particular, in some embodiments, based upon the change in ASCVD between the baseline and follow-up CCTA, personalized treatment can be updated and intensified if worsening occurs or de-escalated/kept constant if improvement occurs. As a non-limiting example, if stabilization has occurred, this can be evidence of the success of the current medical regimen. Alternatively, as another non-limiting example, if stabilization has not occurred and ASCVD has progressed, this can be evidence of the failure of the current medical regimen, and an algorithmic approach can be taken to intensify medical therapy. In some embodiments, the intensification regimen employs lipid lowering agents in a tiered fashion, and considers ASCVD presence, extent, severity, type, and/or progression. In some embodiments, the intensification regimen considers local and/or peri-lesion tissue. In some embodiments, the intensification regimen and use of the medications therein can be guided also by LDL cholesterol and triglyceride (TG) and Lp(a) and Apo(B) levels; or cholesterol particle density and size. For example,FIGS.23F-Gillustrate an example embodiment(s) of a treatment(s) employing lipid lowering medication(s) and/or treatment(s) generated by an example embodiment(s) of systems and methods for determining treatments for reducing cardiovascular risk and/or events. In some embodiments, given the multidimensional nature of MACE contributors that include ASCVD, inflammation and thrombosis, the intensification regimen can incorporate anti-inflammatory medications (e.g., colchicine) and/or anti-thrombotic medications (e.g., rivaroxaban and aspirin) in order to control the ASCVD progress. In some embodiments, new diabetic medications that have salient effects on reducing MACE events—including SGLT2 inhibitors and GLP1R agonists—can also be incorporated. For example,FIGS.23H-Iillustrate an example embodiment(s) of a treatment(s) employing diabetic medication(s) and/or treatment(s) generated by an example embodiment(s) of systems and methods for determining treatments for reducing cardiovascular risk and/or events. FIG.23Aillustrates an example embodiment(s) of systems and methods for determining treatments for reducing cardiovascular risk and/or events. In some embodiments, the systems and methods described herein are configured to analyze coronaries. In some embodiments, the systems and methods can also be applied to other arterial bed as well, such as the aorta, carotid, lower extremity, renal artery, cerebral artery, and/or the like. In some embodiments, the system can be configured to determine and/or utilize in its analysis the presence of ASCVD, which can be the presence vs. absence of plaque, the presence vs. absence of non-calcified plaque, the presence vs. absence of low attenuation plaque, and/or the like. 
In some embodiments, the system can be configured to determine and/or utilize in its analysis the extent of ASCVD, which can include the total ASCVD volume, percent atheroma volume (atheroma volume/vessel volume×100), total atheroma volume normalized to vessel length (TAVnorm), diffuseness (% of vessel affected by ASCVD), and/or the like. In some embodiments, the system can be configured to determine and/or utilize in its analysis severity of ASCVD. In some embodiments, ASCVD severity can be linked to population-based estimates normalized to age-, gender-, ethnicity-, CAD risk factors, and/or the like. In some embodiments, ASCVD severity can include angiographic stenosis >70% or >50% in none, 1-, 2-, and/or 3-VD. In some embodiments, the system can be configured to determine and/or utilize in its analysis the type of ASCVD, which can include for example the proportion (ratio, %, etc.) of plaque that is non-calcified vs. calcified, proportion of plaque that is low attenuation non-calcified vs. non-calcified vs. low density calcified vs. high-density calcified, absolute amount of non-calcified plaque and calcified plaque, absolute amount of plaque that is low attenuation non-calcified vs. non-calcified vs. low density calcified vs. high-density calcified, continuous grey-scale measurement of plaques without ordinal classification, radiomic features of plaque, including heterogeneity and others, vascular remodeling imposed by plaque as positive remodeling (>1.10 or >1.05 ratio of vessel diameter/normal reference diameter; or vessel area/normal reference area; or vessel volume/normal reference volume) vs. negative remodeling (<1.10 or <1.05), vascular remodeling imposed by plaque as a continuous ratio, and/or the like. In some embodiments, the system can be configured to determine and/or utilize in its analysis the locality of plaque, such as for example in the arterial bed, regarding vessel, segment, bifurcation, and/or the like. In some embodiments, the system can be configured to determine and/or utilize in its analysis the peri-lesion tissue environment, such as for example density of the peri-plaque tissues such as fat, amount of fat in the peri-vascular space, radiomic features of peri-lesion tissue, including heterogeneity and others, and/or the like. In some embodiments, the system can be configured to determine and/or utilize in its analysis ASCVD progression. In some embodiments, progression can be defined as rapid vs. non-rapid, with thresholds to define rapid progression (e.g., >1.0% percent atheroma volume, >200 mm3 plaque, etc.). In some embodiments, serial changes in ASCVD can include rapid progression, progression with primarily calcified plaque formation, progression with primarily non-calcified plaque formation, and regression. In some embodiments, the system can be configured to determine and/or utilize in its analysis one or more categories of risk. In some embodiments, the system can be configured to utilize one or more stages, such as 0, I, II, or III based upon plaque volumes associated with angiographic severity (such as, for example, none, non-obstructive, and obstructive 1VD, 2VD and 3VD). In some embodiments, the system can be configured to utilize one or more percentiles, for example taking into account age, gender, ethnicity, and/or presence of one or more risk factors (such as, diabetes, hypertension, etc.). In some embodiments, the system can be configured to determine a percentage of calcified plaque vs. 
percentage of non-calcified plaque as a function of overall plaque volume. In some embodiments, the system can be configured to determine the number of units of low density non-calcified plaque. In some embodiments, the system can be configured to generate a continuous 3D histogram and/or geospatial map (for plaque geometry) analysis of grey scales of plaque by lesion, by vessel, and/or by patient. In some embodiments, risk can be defined in a number of ways, including for example risk of MACE, risk of angina, risk of ischemia, risk of rapid progression, risk of medication non-response, and/or the like. In some embodiments, treatment recommendations can be based upon ASCVD presence, extent, severity, type of disease, ASCVD progression, and/or the like. For example,FIGS.23F-Gillustrate an example embodiment(s) of a treatment(s) employing lipid lowering medication(s) and/or treatment(s) andFIGS.23H-Iillustrate an example embodiment(s) of a treatment(s) employing diabetic medication(s) and/or treatment(s) generated by an example embodiment(s) of systems and methods for determining treatments for reducing cardiovascular risk and/or events. In some embodiments, the generated treatment protocols are aimed (e.g., based upon CCTA-based ASCVD characterization) to properly treat at the right point in time with medications aimed at ASCVD stabilization, inflammation reduction, and/or reduction of thrombosis potential. In some embodiments, the rationale behind this is that ASCVD events can be an inflammatory atherothrombotic phenomenon, but serum biomarkers, biometrics and conventional measures of angiographic stenosis severity can be inadequate to optimally define risk and guide clinical decision making. As such, some systems and methods described herein can provide personalized medical therapy based upon CCTA-characterized ASCVD. In some embodiments, the system can be configured to generate a risk score that combines one or more traditional risk factors, such as the ones described herein, together with one or more quantified ASCVD measures. In some embodiments, the system can be configured to generate a risk score that combines one or more genetic analyses with one or more quantified ASCVD measures, as some medications may work better on some people and/or people with particular genes. In addition, in some embodiments, the system can be configured to exclude or deduct certain plaque from the rest of disease. For example, in some embodiments, the system can be configured to ignore or exclude high density calcium that is so stable that the risk of having it can be better than having a disease without it, such that the existence of such plaque may impact risk negatively. FIGS.23B-Cillustrate an example embodiment(s) of definitions or categories of atherosclerosis severity used by an example embodiment(s) of systems and methods for determining treatments for reducing cardiovascular risk and/or events. FIG.23Dillustrates an example embodiment(s) of definitions or categories of disease progression, stabilization, and/or regression used by an example embodiment(s) of systems and methods for determining treatments for reducing cardiovascular risk and/or events. FIG.23Eillustrates an example embodiment(s) of a time-to-treatment goal(s) for an example embodiment(s) of systems and methods for determining treatments for reducing cardiovascular risk and/or events. 
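As a concrete illustration of the extent and remodeling measures described above (percent atheroma volume, total atheroma volume normalized to vessel length, diffuseness, and the remodeling ratio), the following is a minimal sketch in Python; the VesselMeasurements container, its field names, the example numbers, and the 10 mm normalization length are hypothetical choices for illustration rather than values specified in this document.

```python
from dataclasses import dataclass

@dataclass
class VesselMeasurements:
    """Per-vessel quantities assumed to come from an upstream CCTA segmentation step."""
    atheroma_volume_mm3: float      # total plaque (atheroma) volume in the vessel
    vessel_volume_mm3: float        # total vessel volume
    vessel_length_mm: float         # analyzed vessel length
    diseased_length_mm: float       # length of vessel containing any plaque
    vessel_diameter_mm: float       # vessel diameter at the lesion
    reference_diameter_mm: float    # normal reference diameter

def percent_atheroma_volume(m: VesselMeasurements) -> float:
    # PAV = atheroma volume / vessel volume x 100
    return 100.0 * m.atheroma_volume_mm3 / m.vessel_volume_mm3

def tav_norm(m: VesselMeasurements, normalization_length_mm: float = 10.0) -> float:
    # Total atheroma volume normalized to vessel length (per an assumed 10 mm of vessel).
    return m.atheroma_volume_mm3 / m.vessel_length_mm * normalization_length_mm

def diffuseness_percent(m: VesselMeasurements) -> float:
    # Percent of the vessel length affected by ASCVD.
    return 100.0 * m.diseased_length_mm / m.vessel_length_mm

def remodeling_category(m: VesselMeasurements, threshold: float = 1.10) -> str:
    # Positive remodeling if the diameter ratio exceeds the chosen threshold (1.10 or 1.05).
    ratio = m.vessel_diameter_mm / m.reference_diameter_mm
    return "positive" if ratio > threshold else "negative"

m = VesselMeasurements(120.0, 900.0, 85.0, 32.0, 4.2, 3.7)
print(percent_atheroma_volume(m), tav_norm(m), diffuseness_percent(m), remodeling_category(m))
```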
FIG.23Jis a flowchart illustrating an overview of an example embodiment(s) of a method for determining treatments for reducing cardiovascular risk and/or events. As illustrated inFIG.23J, in some embodiments, the system is configured to determine a proposed personalized treatment for a subject to lower ASCVD risk based on CCTA analysis using one or more quantitative image analysis techniques and/or algorithms. In particular, in some embodiments, the system can be configured to access one or more medical images taken from a first point in time at block2302, for example from a medical image database100. The one or more medical images can include images obtained using any imaging modality described herein. In some embodiments, the one or more medical images can include one or more arteries, such as for example coronary, carotid, lower extremity, upper extremity, aorta, renal, and/or the like. In some embodiments, the system at block2304can be configured to analyze the one or more medical images. More specifically, in some embodiments, the system can be configured to utilize CCTA analysis and/or quantitative imaging algorithms to identify and/or derive one or more parameters from the medical image. In some embodiments, the system can be configured to store one or more identified and/or derived parameters in a parameter database2306. In some embodiments, the system can be configured to access one or more such parameters from a parameter database2306. In some embodiments, the system can be configured to analyze one or more plaque parameters, vascular parameters, atherosclerosis parameters, and/or perilesional tissue parameters. The plaque parameters and/or vascular parameters can include any one or more such parameters discussed herein. In some embodiments, at block2308, the system can be configured to assess a baseline ASCVD risk of the subject based on one or more such parameters. In some embodiments, at block2310, the system can be configured to categorize the baseline ASCVD risk of the subject. In some embodiments, the system can be configured to categorize the baseline ASCVD risk into one or more predetermined categories. For example, in some embodiments, the system can be configured to categorize the baseline ASCVD risk as one of Stage 0, I, II, or III. In some embodiments, the system can be configured to categorize the baseline ASCVD risk as one of none, minimal, mild, or moderate. In some embodiments, the system can be configured to categorize the baseline ASCVD risk as one of primarily calcified or primarily non-calcified plaque. In some embodiments, the system can be configured to categorize the baseline ASCVD risk based on units of low density non-calcified plaque identified from the image. In some embodiments, the system is configured to categorize the baseline ASCVD risk on a continuous scale. In some embodiments, the system is configured to categorize the baseline ASCVD risk based on risk of future ASCVD events, such as heart attack, stroke, amputation, dissection, and/or the like. In some embodiments, the system is configured to categorize the baseline ASCVD risk based on one or more non-ASCVD measures, which can be quantified using one or more CCTA algorithms. For example, non-ASCVD measures can include one or more cardiovascular measurements (e.g., left ventricular hypertrophy for hypertension or atrial volumes for atrial fibrillation, and/or the like) or non-cardiovascular measurements that may contribute to ASCVD (e.g., emphysema, etc.). 
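The following is a minimal sketch, under stated assumptions, of how quantified plaque burden might be mapped to the coarse baseline categories described above (Stage 0 through Stage III, none/minimal/mild/moderate-severe, and primarily calcified vs. primarily non-calcified); the stage cut-points and the 0.5 non-calcified fraction are placeholders for illustration, not thresholds taken from this document.

```python
def categorize_baseline_ascvd(total_plaque_volume_mm3: float,
                              noncalcified_fraction: float,
                              stage_thresholds_mm3=(0.0, 50.0, 250.0)) -> dict:
    """Map quantified plaque burden to coarse baseline categories.

    The stage cut-points and the 0.5 non-calcified fraction below are placeholders;
    an actual system would derive them from population data or clinical guidance.
    """
    if total_plaque_volume_mm3 <= stage_thresholds_mm3[0]:
        stage, label = "Stage 0", "none"
    elif total_plaque_volume_mm3 <= stage_thresholds_mm3[1]:
        stage, label = "Stage I", "minimal"
    elif total_plaque_volume_mm3 <= stage_thresholds_mm3[2]:
        stage, label = "Stage II", "mild"
    else:
        stage, label = "Stage III", "moderate/severe"
    composition = ("primarily non-calcified"
                   if noncalcified_fraction > 0.5 else "primarily calcified")
    return {"stage": stage, "severity": label, "composition": composition}

print(categorize_baseline_ascvd(total_plaque_volume_mm3=180.0, noncalcified_fraction=0.62))
```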
In some embodiments, the system at block2312can be configured to determine an initial proposed treatment for the subject. In some embodiments, the system can be configured to determine an initial proposed treatment with or without analysis of cholesterol or hemoglobin A1C. In some embodiments, the system can be configured to determine an initial proposed treatment with or without analysis of low-density lipoprotein (LDL) cholesterol or triglyceride (TG) levels of the subject. In some embodiments, the initial proposed treatment can include medical therapy, lifestyle therapy, and/or interventional therapy. For example, medical therapy can include one or more medications, such as lipid-lowering medications, anti-inflammatory medications (e.g., colchicine, etc.), anti-thrombotic medications (e.g., rivaroxaban, aspirin, etc.), diabetic medications (e.g., sodium-glucose cotransporter-2 (SGLT2) inhibitors, glucagon-like peptide-1 receptor (GLP1R) agonists, etc.), and/or the like. Lifestyle therapy and/or interventional therapy can include any one or more such therapies discussed herein. In some embodiments, at block2314, the subject can be treated with one or more such medical treatments. In some embodiments, the system at block2316can be configured to access one or more medical images taken from a second point in time after the subject is treated with the initial treatment, for example from a medical image database100. The one or more medical images can include images obtained using any imaging modality described herein. In some embodiments, the one or more medical images can include one or more arteries, such as for example coronary, carotid, lower extremity, upper extremity, aorta, renal, and/or the like. In some embodiments, the system at block2318can be configured to analyze the one or more medical images taken at the second point in time. More specifically, in some embodiments, the system can be configured to utilize CCTA analysis and/or quantitative imaging algorithms to identify and/or derive one or more parameters from the medical image. In some embodiments, the system can be configured to store one or more identified and/or derived parameters in a parameter database2306. In some embodiments, the system can be configured to access one or more such parameters from a parameter database2306. In some embodiments, the system can be configured to analyze one or more plaque parameters, vascular parameters, atherosclerosis parameters, and/or perilesional tissue parameters. The plaque parameters and/or vascular parameters can include any one or more such parameters discussed herein. In some embodiments, at block2320, the system can be configured to assess an updated ASCVD risk of the subject based on one or more such parameters. In some embodiments, at block2322, the system can be configured to categorize the updated ASCVD risk of the subject. In some embodiments, the system can be configured to categorize the updated ASCVD risk into one or more predetermined categories. For example, in some embodiments, the system can be configured to categorize the updated ASCVD risk as one of Stage 0, I, II, or III. In some embodiments, the system can be configured to categorize the updated ASCVD risk as one of none, minimal, mild, or moderate. In some embodiments, the system can be configured to categorize the updated ASCVD risk as one of primarily calcified or primarily non-calcified plaque. 
In some embodiments, the system can be configured to categorize the updated ASCVD risk based on units of low density non-calcified plaque identified from the image. In some embodiments, the system is configured to categorize the updated ASCVD risk on a continuous scale. In some embodiments, the system is configured to categorize the updated ASCVD risk based on risk of future ASCVD events, such as heart attack, stroke, amputation, dissection, and/or the like. In some embodiments, the system is configured to categorize the updated ASCVD risk based on one or more non-ASCVD measures, which can be quantified using one or more CCTA algorithms. For example, non-ASCVD measures can include one or more cardiovascular measurements (e.g., left ventricular hypertrophy for hypertension or atrial volumes for atrial fibrillation, and/or the like) or non-cardiovascular measurements that may contribute to ASCVD (e.g., emphysema, etc.). In some embodiments, the system at block2324can be configured to assess the subject's response to the initial proposed treatment. For example, in some embodiments, the system can be configured to compare differences or changes in ASCVD risk and/or categorized ASCVD risk between the first point in time and the second point in time. In some embodiments, the subject response is assessed based on one or more of progression, stabilization, or regression of ASCVD. In some embodiments, progression can include rapid and/or non-rapid progression. In some embodiments, stabilization can include transformation of ASCVD from non-calcified to calcified, reduction of low attenuation plaque, and/or reduction of positive arterial remodeling. In some embodiments, regression can include decrease in ASCVD volume or burden, decrease in non-calcified plaque, and/or decrease in low attenuation plaque. In some embodiments, the system at block2326can be configured to determine a continued proposed treatment for the subject, for example based on the subject response to the initial treatment. In particular, in some embodiments, if the system determines that there was progression in ASCVD risk in response to the initial treatment, the system can be configured to propose a higher tiered treatment compared to the initial treatment. In some embodiments, if the system determines that there was stabilization or regression in ASCVD risk in response to the initial treatment, the system can be configured to propose the same initial treatment or a same or similar tiered alternative treatment or a lower tiered treatment compared to the initial treatment. In some embodiments, the system can be configured to determine a continued proposed treatment with or without analysis of cholesterol or hemoglobin A1C. In some embodiments, the system can be configured to determine a continued proposed treatment with or without analysis of low-density lipoprotein (LDL) cholesterol or triglyceride (TG) levels of the subject. In some embodiments, the continued proposed treatment can include medical therapy, lifestyle therapy, and/or interventional therapy. For example, medical therapy can include one or more medications, such as lipid-lowering medications, anti-inflammatory medications (e.g., colchicine, etc.), anti-thrombotic medications (e.g., rivaroxaban, aspirin, etc.), diabetic medications (e.g., sodium-glucose cotransporter-2 (SGLT2) inhibitors, glucagon-like peptide-1 receptor (GLP1R) agonists, etc.), and/or the like. 
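The following is a minimal sketch of the response assessment and tier adjustment logic described above, assuming per-scan summary values for percent atheroma volume, total plaque volume, and low attenuation plaque volume; the rapid-progression cut-offs mirror the thresholds mentioned earlier (>1.0% percent atheroma volume, >200 mm3), and the tier numbering is a hypothetical stand-in for the tiered regimens illustrated in FIGS. 23F-I.

```python
def classify_response(baseline: dict, followup: dict) -> str:
    """Classify serial change between two CCTA assessments.

    Each dict is assumed to hold 'pav' (percent atheroma volume), 'plaque_volume_mm3',
    and 'low_attenuation_volume_mm3'. The rapid-progression cut-offs (>1.0% PAV change
    or >200 mm3 plaque change) follow the thresholds mentioned above.
    """
    d_volume = followup["plaque_volume_mm3"] - baseline["plaque_volume_mm3"]
    d_pav = followup["pav"] - baseline["pav"]
    d_lap = followup["low_attenuation_volume_mm3"] - baseline["low_attenuation_volume_mm3"]
    if d_volume < 0:
        return "regression"
    if d_pav > 1.0 or d_volume > 200.0:
        return "rapid progression"
    if d_volume > 0:
        return "progression"
    return "stabilization" if d_lap <= 0 else "progression"

def adjust_treatment_tier(current_tier: int, response: str, max_tier: int = 4) -> int:
    # Progression -> intensify (higher tier); stabilization -> keep; regression -> keep or de-escalate.
    if "progression" in response:
        return min(current_tier + 1, max_tier)
    if response == "regression":
        return max(current_tier - 1, 0)
    return current_tier

baseline = {"pav": 12.0, "plaque_volume_mm3": 300.0, "low_attenuation_volume_mm3": 20.0}
followup = {"pav": 13.4, "plaque_volume_mm3": 340.0, "low_attenuation_volume_mm3": 22.0}
response = classify_response(baseline, followup)
print(response, adjust_treatment_tier(current_tier=2, response=response))
```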
Lifestyle therapy and/or interventional therapy can include any one or more such therapies discussed herein. In some embodiments, the system can be configured to repeat one or more processes described in connection withFIG.23Jat different points in time. In other words, in some embodiments, the system can be configured to apply serial analysis and/or tracking of treatments to continue to monitor ASCVD of a subject and the subject's response to treatment for continued treatment of the subject. Determining Treatment(s) for Reducing Cardiovascular Risk and/or Events Some embodiments of the systems, devices, and methods described herein are configured to determine stenosis severity and/or vascular remodeling in the presence of atherosclerosis. In particular, some embodiments of the systems, devices, and methods described herein are configured to determine stenosis severity and vascular remodeling, for example whilst accounting for presence of plaque, natural artery tapering, and/or 3D volumes. In some embodiments, the systems, devices, and methods described herein are configured to determine % fractional blood volume, for example for determining the contribution of specific arteries and/or branches to important pathophysiologic processes (such as, risk of size of myocardial infarction; ischemia, and/or the like), whilst accounting for the presence of plaque in non-normal arteries. In some embodiments, the systems, methods, and devices described herein are configured to determine ischemia, for example by applying the continuity equation, whilst accounting for blood flow across a range of physiologically realistic ranges (e.g., ranges for rest, mild/moderate/extreme exercise, and/or the like). Generally speaking, coronary artery imaging can be a key component for diagnosis, prognostication and/or clinical decision making of patients with suspected or known coronary artery disease (CAD). More specifically, in some embodiments, an array of coronary artery imaging parameters can be useful for guiding and informing these clinical tasks and can include such measures as arterial narrowing (stenosis) and vascular remodeling. In some embodiments, the system can be configured to define relative arterial narrowing (stenosis) due to coronary artery atherosclerotic lesions. In some embodiments, these measures can largely rely upon (1) comparisons of diseased regions to normal regions of coronary vessels, and/or (2) 2D measures of diameter or area reduction due to coronary artery lesions. However, limitations can exist in such embodiments. For example, in some of such embodiments, relative narrowing can be difficult to determine in diseased vessels. Specifically, in some embodiments, coronary stenosis can be reported as a relative narrowing, i.e., Diameter disease/Diameter normal reference×100% or Area disease/Area normal reference×100%. However, in some instances, coronary vessels are diffusely diseased, which can render comparison of diseased, stenotic regions to “normal” regions of the vessel problematic and difficult when there is no normal region of the vessel without disease to compare to. In addition, in some of such embodiments, stenosis measurements can be reported in 2D, not 3D. Specifically, some embodiments rely upon imaging methods which are two-dimensional in nature and thus, report out stenoses as relative % area narrowing (2D) or relative % diameter narrowing (2D). 
Some of such embodiments do not account for the marked irregularity in coronary artery lesions that are often present and do not provide information about the coronary artery lesion across the length of a vessel. In particular, if the x-axis is considered the axial distance along a coronary vessel, the y-axis the width of an artery wall, and the z-axis the irregular topology of plaque along the length of a vessel, then it can become evident that a single % area narrowing or a single % diameter narrowing is inadequate to communicate the complexity of the coronary lesion. In some of such embodiments, because % area and % diameter stenosis are based upon 2D measurements, certain methods that define stenosis severity can rely upon maximum % stenosis rather than the stenosis conferred by three-dimensional coronary lesions that demonstrate heterogeneity in length and degree of narrowing across their length (i.e., volume). As such, in some of such embodiments, tracking over time can be difficult (e.g., monitoring the effects of therapy) where changes in 2D would be much less accurate. A similar analogy is the evaluation of changes in a pulmonary nodule while the patient is in follow-up, which can be much more accurate in 3D than in 2D. Furthermore, in some of such embodiments, the natural tapering of arteries may not be accounted for in any and/or all forms of imaging. As illustrated inFIG.24A, the coronary arteries can naturally get smaller along their length. This can be problematic for % area and % diameter measurements, as these approaches may not take into account that a normal coronary artery tapers gradually along its length. Hence, in some of such embodiments, the comparison to a normal reference diameter or normal reference area has been to use the most normal appearing vessel segment/cross-section proximal to a lesion. In this case, because the proximal cross-section is naturally larger (due to the tapering), the computed % narrowing (by area or diameter) can be higher than it actually is. As such, in some of such embodiments, there are certain limitations to grading of coronary artery stenosis. Thus, it can be advantageous to account for the diffuseness of disease in a volumetric fashion, whilst accounting for natural vessel tapering, as in certain other embodiments described below. Instead, in some of such embodiments described above, certain formulas can be used to evaluate these phenomena in 2 dimensions rather than 3 dimensions, in which the relative degree of narrowing, also called stenosis or maximum diameter reduction, is determined by measuring the narrowest lumen diameter in the diseased segment and comparing it to the lumen diameter in the closest adjacent proximal disease-free section. In some of such embodiments, this is because with plaque present it can be no longer possible to measure directly what the lumen diameter at that point was originally. Similarly, in some of such embodiments, the remodeling index can be problematic. In particular, in some of such embodiments, the remodeling index is determined by measuring the outer diameter of the vessel and this is compared to the diameter in the closest adjacent proximal disease-free section. In some of such embodiments, on CT imaging, the normal coronary artery wall is not resolved as its thickness of ˜0.3 mm is beyond what can be depicted on CT due to resolution limitations. Some examples of these problems in some of such embodiments are illustrated inFIGS.24B-Gand accompanying text. 
For example,FIG.24Billustrates such an embodiment(s) of determining % stenosis and remodeling index. In the illustrated embodiment(s), it is assumed that the diameter of the closest adjacent proximal disease-free section (R) accurately reflects what the diameter at the point of stenosis or outward remodeling would be. However, this simple formula may significantly overestimate the actual stenosis and underestimate the remodeling index. In particular, these simple formulas may not take into account that a normal coronary artery tapers gradually along its length as depicted inFIG.24C. As illustrated inFIG.24C, the coronary diameter may not be constant, but rather the vessel can taper gradually along its course. For example, the distal artery diameter (D2) may be less than 50% or more of the proximal diameter (D1). Further, when there is a long atherosclerotic plaque present, the reference diameter R0 measured in a “normal” proximal part of the vessel may have a significantly larger diameter than the diameter that was initially present, especially when the measured stenosis or remodeling index is positioned far from the beginning of the plaque. This can introduce error into the Stenosis % equation, resulting in a percent diameter stenosis larger and remodeling index significantly lower than it should be. As illustrated inFIG.24D, when there is a long plaque positioned proximal to the point of maximal stenosis (Lx) or positive remodeling (Wx), in some of such embodiments, the reference diameter R0 can be currently measured in the closest normal part of the vessel; however at this point the vessel can be significantly larger than it would have initially been at position x, introducing error. Generally speaking, clinical decision making in cardiology is often guideline driven and decisions often take the quantitative percent stenosis or remodeling index into account. For example, in the case of percent stenosis, a threshold of 50 or 70% can be used to determine if additional diagnostic testing or intervention is required. As a non-limiting example,FIG.24Edepicts how an inaccurately estimated R0 could significantly affect the resulting percent stenosis and remodeling index. As illustrated inFIG.24E, if the estimated R0 is larger than the true lumen at the site of stenosis or positive remodeling, significant error can be introduced. In some embodiments, with current technology by imaging (including but not limited to CT, MRI and others), the internal lumen (L) and outer (W) is continuously measurable along the entire length of a coronary artery. In some embodiments, when the lumen diameter is equal to the wall diameter, there is no atherosclerotic plaque present, the vessel is “normal.” Conversely, in some embodiments, when the wall diameter is greater than the lumen diameter, plaque is present. This is illustrated inFIG.24F. As illustrated inFIG.24F, in some embodiments, both the lumen diameter and outer wall diameter are continuously measured using current imaging techniques, such as CT. In some embodiments, when L=W there is no plaque present. In some embodiments, an estimated reference diameter can be calculated continuously at every point in the vessel where plaque is present. For example, by using the R0 just before plaque, and a Rn just after the end of the plaque, the degree of tapering along the length of the plaque can be calculated. 
In some embodiments, this degree of tapering is, in most cases, linear, but the vessel may also taper in other mathematically-predictable fashions (log, quadratic, etc.) and hence, the measurements may be transformed by certain mathematical equations, as illustrated inFIG.24G. In some embodiments, using the formula inFIG.24G, an Rx can then be determined at any position along the plaque's length. In some embodiments, this assumes that the “normal” vessel would have tapered in a linear (or other mathematically predictable) manner across its length. As illustrated inFIG.24G, in some embodiments, the reference diameter can be better estimated continuously along the length of the diseased portion of the vessel as long as the diameter just before the plaque R0 and just after the plaque Rn is known. In some embodiments, once the continuous Rx reference diameter is determined, a continuous percent stenosis and/or remodeling index across the plaque can be easily calculated, for example using the following: % Stenosisx = ((Rx − Lx)/Rx) × 100, and Remodeling Index RIx = Wx/Rx. More specifically, in some embodiments, since the continuous lumen diameter Lx and wall diameter Wx are already known, continuous values for percent stenosis and remodeling index can be easily calculated once the Rx values have been generated. As described above, in some embodiments, there are certain limitations to calculating stenosis severity and remodeling index in two dimensions. Further, even as improved upon with the accounting of the vessel taper and presence of plaque in some embodiments, these approaches may still be limited in that they are reliant upon 2D (areas, diameters) rather than 3D measurements (e.g., volume). Thus, as described in some embodiments herein, an improvement to this approach may be to calculate volumetric stenosis, volumetric remodeling, and/or comparisons of compartments of the coronary artery to each other in a volumetric fashion. As such, in some embodiments, the systems, devices, and methods described herein are configured to calculate volumetric stenosis, volumetric remodeling, and/or comparisons of compartments of the coronary artery to each other in a volumetric fashion, for example by applying one or more image analysis techniques to one or more medical images obtained from a subject using one or more medical imaging scanning modalities. In some embodiments, the system can be configured to utilize a normalization device, such as those described herein, to account for differences in scan results (such as for example density values, etc.) between different scanners, scan parameters, and/or the like. In particular, in some embodiments, volumetric stenosis is calculated as illustrated inFIGS.24H and24I. As illustrated inFIGS.24H and24I, in some embodiments, the system can be configured to analyze a medical image of a subject to identify one or more boundaries along a vessel. For example, in some embodiments, the system can be configured to identify the theoretically or hypothetically normal boundaries of the artery wall had plaque not been present. In some embodiments, the system can be configured to identify the lumen wall and, in the absence of plaque, the vessel wall. In some embodiments, the system can be configured to identify an area of interrogation (e.g., site of maximum obstruction). In some embodiments, the system can be configured to determine a segment with the plaque. 
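The following is a minimal sketch of the continuous reference diameter interpolation and the per-position stenosis and remodeling index formulas given above, assuming the "normal" vessel tapers linearly between R0 and Rn; the example diameters, plaque length, and evaluation position are hypothetical.

```python
def interpolated_reference_diameter(r0: float, rn: float, x: float, plaque_length: float) -> float:
    """Linearly interpolate the reference diameter Rx at distance x into the plaque,
    assuming the 'normal' vessel would have tapered linearly from R0 (just before
    the plaque) to Rn (just after it), as described above."""
    return r0 + (rn - r0) * (x / plaque_length)

def percent_stenosis(rx: float, lx: float) -> float:
    # % Stenosis_x = ((Rx - Lx) / Rx) x 100
    return 100.0 * (rx - lx) / rx

def remodeling_index(wx: float, rx: float) -> float:
    # RI_x = Wx / Rx
    return wx / rx

# Hypothetical example: a 12 mm long plaque, evaluated 7 mm into the lesion.
r0, rn = 3.6, 3.0          # mm, diameters just before and just after the plaque
lx, wx = 1.9, 4.1          # mm, measured lumen and outer wall diameters at position x
rx = interpolated_reference_diameter(r0, rn, x=7.0, plaque_length=12.0)
print(round(rx, 2), round(percent_stenosis(rx, lx), 1), round(remodeling_index(wx, rx), 2))
```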
Thus, in some embodiments as illustrated inFIG.24I, % volumetric stenosis can be calculated by the following equation, which accounts for the 3D irregularity of contribution of the plaque to narrowing the lumen volume, whilst considering the normal vessel taper and hypothetically normal vessel wall boundary: Lumen volume accounting for plaque (which can be measured)/Volume of hypothetically normal vessel (which can be calculated)×100%=Volumetric % stenosis. In some embodiments, an alternative method for % volume stenosis can be to include the entire vessel volume (i.e., that which is measured rather than that which is hypothetical). This can be governed by the following equation: Lumen volume accounting for plaque (which can be measured)/Volume of vessel (which can be measured)×100%=Volumetric % stenosis. In some embodiments, another alternative method for determining % volumetric stenosis is to include the entire artery (i.e., that which is before, at the site of, and after a narrowing), as illustrated inFIG.24I. In some embodiments, the systems, devices, and methods described herein are configured to calculate volumetric remodeling. In particular, in some embodiments, volumetric remodeling can account for the natural tapering of a vessel, the 3D nature of the lesion, and/or the comparison to a proper reference standard.FIG.24Jis a schematic illustration of an embodiment(s) of determining volumetric remodeling. In the example ofFIG.24J, the remodeling index of Lesion #1, that is 5.2 mm in length, is illustrated. As illustrated inFIG.24J, in some embodiments, the system can be configured to identify from a medical image a length of Lesion #1 in which a region of plaque is present (note the natural 8% taper by area, diameter or volume). In some embodiments, the system can be configured to identify a lesion length immediately before Lesion #1 in a normal part of the vessel (note the natural 12% taper by area, diameter or volume). In some embodiments, the system can be configured to identify a lesion length immediately after Lesion #1 in a normal part of the vessel (note the natural 6% taper by area, diameter or volume). In some embodiments, the system can be configured to identify one or more regions of plaque. In some embodiments, the system can be configured to identify or determine a 3D volume of the vessel across the lesion length of 5.2 mm immediately before and/or after Lesion #1 and/or in Lesion #1. In some embodiments, the system can be configured to calculate a Volumetric Remodeling Index by the following: (Volume within Lesion #1 had plaque not been present+Volume of plaque in Lesion #1 exterior to the vessel wall)/Volume within Lesion #1 had plaque not been present. By utilizing this formula, in some embodiments, the resulting volumetric remodeling index can take into account tapering, as the volume within lesion #1 had plaque not been present takes into account any effect of tapering. In some embodiments, the Volumetric Remodeling Index can be calculated using other methods, such as: Volume within Lesion #1 had plaque not been present/Proximal normal volume immediately proximal to Lesion #1×100%, mathematically adjusted for the natural vessel tapering. This volumetric remodeling index uses the proximal normal volume as the reference standard. 
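The following is a minimal sketch that transcribes the volumetric % stenosis and volumetric remodeling index formulas above; the lesion-level volumes and function names are hypothetical and assumed to come from an upstream segmentation step.

```python
def volumetric_percent_stenosis(lumen_volume_mm3: float,
                                hypothetical_normal_vessel_volume_mm3: float) -> float:
    """Volumetric % stenosis as defined above: lumen volume accounting for plaque
    divided by the volume of the hypothetically normal vessel, x 100%."""
    return 100.0 * lumen_volume_mm3 / hypothetical_normal_vessel_volume_mm3

def volumetric_remodeling_index(normal_lesion_volume_mm3: float,
                                plaque_volume_outside_wall_mm3: float) -> float:
    """(Volume within the lesion had plaque not been present + volume of plaque exterior
    to the hypothetically normal vessel wall) / volume had plaque not been present."""
    return (normal_lesion_volume_mm3 + plaque_volume_outside_wall_mm3) / normal_lesion_volume_mm3

# Hypothetical lesion-level volumes derived from segmentation of the vessel and plaque.
print(volumetric_percent_stenosis(lumen_volume_mm3=55.0,
                                  hypothetical_normal_vessel_volume_mm3=80.0))
print(volumetric_remodeling_index(normal_lesion_volume_mm3=80.0,
                                  plaque_volume_outside_wall_mm3=12.0))
```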
Alternatively, in some embodiments, a volumetric remodeling index that does not directly model the natural vessel tapering can be calculated as Volume within Lesion #1 had plaque not been present/((Proximal normal volume immediately proximal to Lesion #1+Distal normal volume immediately distal to Lesion #1)/2), where averaging the proximal and distal normal volumes approximately accounts for the natural tapering. Further, in some embodiments, with the ability to evaluate coronary vessels in 3D, along with the ability to determine the hypothetically-normal boundaries of the vessel wall even in the presence of plaque, the systems, methods, and devices described herein can be configured to either measure (in the absence of plaque) or calculate the normal coronary vessel blood volume. For example, in some embodiments, this coronary vessel blood volume can be assessed by one or more of the following: (1) Total coronary volume (which represents the total volume in all coronary arteries and branches); (2) Territory- or Artery-specific volume, or % fractional blood volume (which represents the volume in a specific artery or branch); (3) Segment-specific volume (which represents the volume in a specific coronary segment, of which there are generally considered 18 segments); and/or (4) within-artery % fractional blood volume (which represents the volume in a portion of a vessel or branch, i.e., in the region of the artery before a lesion, in the region of the artery at the site of a lesion, in the region of the artery after a lesion, etc.). FIG.24Killustrates an embodiment(s) of coronary vessel blood volume assessment based on total coronary volume.FIG.24Lillustrates an embodiment(s) of coronary vessel blood volume assessment based on territory or artery-specific volume. For example, in the illustrated embodiment, the right coronary artery territory volume would be the volume within #1, #2, #3, #4, and #5, while the right coronary artery volume would be the volume within #1, #2, and #3. As an example of segment-specific volume-based assessment of coronary vessel blood volume, a segment-specific volume (e.g., mid-right coronary artery) can be the volume in #2.FIG.24Millustrates an embodiment(s) of coronary vessel blood volume assessment based on within-artery % fractional blood volume, where the proximal and distal regions comprise portions of the artery fractional blood volume. Numerous advantages exist for assessing fractional blood volume. In some embodiments, because this method allows for determination of the hypothetically-normal boundaries of the vessel wall even in the presence of plaque, these approaches allow for calculation of the % blood volume conferring potential risk to the myocardium, whether that blood volume is measured directly (in the absence of plaque) or calculated (in the presence of plaque).FIG.24Nillustrates an embodiment(s) of assessment of coronary vessel blood volume. In some embodiments, based on one or more metrics described above, as well as the ability to determine the hypothetically normal boundaries of the vessel, the systems, devices, and methods described herein can be configured to determine the ischemia-causing nature of a vessel by a number of different methods. In particular, in some embodiments, the system can be configured to determine % vessel volume stenosis, for example by: Measured lumen volume/Hypothetically normal vessel volume×100%. This is depicted inFIG.24O. 
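The following is a minimal sketch of the % fractional blood volume calculations described above; the segment labels and per-segment blood volumes are hypothetical, and a real system would derive them from the patient's segmented coronary tree.

```python
def fractional_blood_volume_percent(component_volumes_mm3: dict, keys: list) -> float:
    """% fractional blood volume: blood volume of the selected components divided by the
    total coronary blood volume, x 100. The component labels (#1..#5 etc.) are hypothetical."""
    total = sum(component_volumes_mm3.values())
    selected = sum(component_volumes_mm3[k] for k in keys)
    return 100.0 * selected / total

# Hypothetical per-component coronary blood volumes (mm3) for a right-dominant coronary tree.
volumes = {"#1": 310.0, "#2": 270.0, "#3": 240.0, "#4": 120.0, "#5": 95.0,
           "LAD": 820.0, "LCx": 610.0}
print(fractional_blood_volume_percent(volumes, ["#1", "#2", "#3", "#4", "#5"]))  # RCA territory volume
print(fractional_blood_volume_percent(volumes, ["#2"]))                          # segment-specific volume
```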
In some embodiments, the system can be configured to determine pressure difference across a lesion using hypothetically normal artery, continuity equation and naturally occurring coronary flow rate ranges and/or other physiologic parameters. This is illustrated inFIG.24P. In the embodiment illustrated inFIG.24P, there is a plaque that extends into the lumen and narrows the lumen (at the maximum narrowing, it is R0). In some embodiments, the system can compare R0 to R-1, R-2, R-3 or any cross-section before the lesion. In some embodiments, using this comparison, the system can apply the continuity equation either using actual measurements (e.g., at lines inFIG.24P) or the hypothetically normal diameter of the vessel. The continuity equation applied to the coronary arteries is illustrated inFIG.24Q. As illustrated inFIG.24Q, in some embodiments, the system, by using imaging (CT, MRI, etc.), can be configured to determine the cross-sectional area of artery at a defined point before the site of maximum narrowing (A1) and the cross-sectional area of artery at the site of maximum narrowing (A2) with high accuracy. However, in some embodiments, velocity and velocity time integral are unknown. Thus, in some embodiments, the velocity time integral (VTI) at a defined point before the site of maximum narrowing (V1) and the VTI at a defined point after the site of maximum narrowing (V2) are provided, for example in categorical outputs based upon what has been empirically measured for people at rest and during exertion (mild, moderate and extreme). As a non-limiting example, at rest, the total coronary blood flow can be about ˜250 ml/min (˜0.8 ml/min*g of heart muscle), which represents ˜5% of cardiac output. At increasing levels of exertion, the coronary blood flow can increase up to 5 times its amount (˜1250 ml/min). Thus, in some embodiments, the system can categorize the flow into about 250 ml/min, about 250-500 ml/min, about 500-750 ml/min, about 750-1000 ml/min, and/or about 1250 ml/min. Other categorizations can exist, and these numbers can be reported in continuous, categorical, and/or binary expressions. Further, based upon the observations of blood flow, these relationships may not necessarily be linear, and can be transformed by mathematical operations (such as log transform, quadratic transform, etc.). Further, in some embodiments, other factors can be calculated based upon ranges, binary expressions, and/or continuous values, such as for example heart rate, aortic blood pressure and downstream myocardial resistance, arterial wall/plaque resistance, blood viscosity, and/or the like. Empirical measurements of fluid behavior in these differing conditions can allow for putting together a titratable input for the continuity equation. Further, in some embodiments, because imaging allows for evaluation of the artery across the entire cardiac cycle, measured (or assumed) coronary vasodilation can allow for time-averaged A1 and A2 measurements. As such, in some embodiments, the system can be configured to utilize one or more of the following equations: (1) Q=area×velocity @ site of maximum obstruction (across a range of flows observed in empirical measurements); and (2) Q=area×velocity @ site proximal to maximum obstruction (across a range of flows observed in empirical measurements). From the assumed flows and measured areas, in some embodiments, the system can then back-calculate the velocity. 
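The following is a minimal sketch that back-calculates velocities from the continuity relation Q = area × velocity across the flow states noted above, and then applies the simplified Bernoulli step described in the next paragraph; the cross-sectional areas, the nominal flow values, and the treatment of the assumed flow as the flow through the interrogated artery are assumptions for illustration.

```python
# Representative total coronary flow states noted above (ml/min); treating these as
# single nominal values per exertion level is an assumption for this sketch.
FLOW_STATES_ML_MIN = {"rest": 250.0, "mild exertion": 500.0,
                      "moderate exertion": 750.0, "extreme exertion": 1250.0}

def velocity_cm_s(flow_ml_min: float, area_mm2: float) -> float:
    """Back-calculate mean velocity from Q = area x velocity.
    ml/min -> cm3/s and mm2 -> cm2, so the result is in cm/s."""
    q_cm3_s = flow_ml_min / 60.0
    area_cm2 = area_mm2 / 100.0
    return q_cm3_s / area_cm2

def pressure_drop_mmHg(v1_cm_s: float, v2_cm_s: float) -> float:
    """Simplified Bernoulli step described in the next paragraph:
    delta-P = 4(V2^2 - V1^2), with velocities in m/s and pressure change in mmHg."""
    v1, v2 = v1_cm_s / 100.0, v2_cm_s / 100.0
    return 4.0 * (v2 ** 2 - v1 ** 2)

a1_mm2, a2_mm2 = 9.0, 2.5   # hypothetical areas proximal to and at the maximum narrowing
for state, q in FLOW_STATES_ML_MIN.items():
    v1 = velocity_cm_s(q, a1_mm2)
    v2 = velocity_cm_s(q, a2_mm2)
    print(state, round(v1, 1), round(v2, 1), round(pressure_drop_mmHg(v1, v2), 2))
```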
Then, the system can apply the simplified or full Bernoulli equation; in the simplified form, Pressure change = 4(V2² − V1²), with velocities expressed in m/s and the pressure change in mmHg. From this, in some embodiments, the system can calculate the pressure drop across a lesion and, of equal import, can assess this pressure change across physiologically-realistic parameters that a patient will face in real life (e.g., rest, mild/moderate/extreme exertion). Further, in some embodiments, the system can apply a volumetric continuity equation to account for a volume of blood before and after a lesion narrowing, such as for example: (1) Q=volume×velocity @ site of maximum obstruction (across a range of flows observed in empirical measurements); and (2) Q=volume×velocity @ site proximal to maximum obstruction (across a range of flows observed in empirical measurements). From the assumed flows and measured volumes, in some embodiments, the system can then back-calculate the velocity and, if assuming or measuring heart rate, the system can then back-calculate the velocity time integral. FIG.24Ris a flowchart illustrating an overview of an example embodiment(s) of a method for determining volumetric stenosis and/or volumetric vascular remodeling. As illustrated inFIG.24R, in some embodiments, at block2402the system is configured to access one or more medical images, for example from a medical image database100. The one or more medical images can be obtained using any one or more of the imaging modalities discussed herein. In some embodiments, at block2404, the system can be configured to identify one or more segments of arteries and/or regions of plaque by analyzing the medical image. In some embodiments, the system at block2406can be configured to determine a lumen wall boundary in the one or more segments where plaque is present. In some embodiments, the system at block2406can be configured to determine a hypothetical normal artery boundary if plaque were not present. In some embodiments, the system at block2408can be configured to quantify the lumen volume with plaque and/or a hypothetical normal vessel volume had plaque not been present. In some embodiments, using the foregoing, the system at block2410can be configured to determine volumetric stenosis of the one or more segments, taking into account tapering and true assessment of the vessel morphology based on image analysis. In some embodiments, the system at block2412can be configured to quantify the volume of one or more regions of plaque. For example, in some embodiments, the system can be configured to quantify for a segment or lesion the total volume of plaque, volume of plaque inside the hypothetical normal artery boundary, volume of plaque outside the hypothetical normal artery boundary, and/or the like. In some embodiments, the system at block2414can be configured to utilize the foregoing to determine a volumetric remodeling index. For example, in some embodiments, the system can be configured to determine a volumetric remodeling index by dividing the sum of the hypothetical normal vessel volume and the plaque volume outside the hypothetical normal artery boundary by the hypothetical normal vessel volume. In some embodiments, the system at block2416can be configured to determine a risk of CAD for the subject, for example based on one or more of the determined volumetric stenosis and/or volumetric vascular remodeling index. FIG.24Sis a flowchart illustrating an overview of an example embodiment(s) of a method for determining ischemia. 
As illustrated inFIG.24S, in some embodiments, the system can access a medical image at block2402, identify one or more segments of arteries and/or region of plaque at block2404, and/or determine the lumen wall boundary while taking into account the present plaque and/or a hypothetical normal artery boundary if plaque were not present at block2406. In some embodiments, at block2418, the system can be configured to quantify a proximal and/or distal cross-sectional area and/or volume along an artery. For example, in some embodiments, the system can be configured to quantify a proximal cross-sectional area and/or volume at a lesion that is proximal to a lesion of interest. In some embodiments, the lesion of interest can include plaque and/or a maximum narrowing of a vessel. In some embodiments, the system can be configured to quantify a distal cross-sectional area and/or volume of the lesion of interest. In some embodiments, the system can be configured to apply an assumed velocity of blood flow at the proximal section at block2420. In some embodiments, the assumed velocity of blood flow can be prestored or predetermined, for example based on different states, such as at rest, during mild exertion, during moderate exertion, during extreme exertion, and/or the like. In some embodiments, at block2422, the system can be configured to quantify the velocity of blood flow at the distal section, for example at the lesion that includes plaque and/or maximum narrowing of the vessel. In some embodiments, the system is configured to quantify the velocity of blood flow at the distal section by utilizing the continuity equation. In some embodiments, the system is configured to quantify the velocity of blood flow at the distal section by utilizing one or more of the quantified proximal cross-sectional area or volume, quantified distal cross-sectional area or volume, and/or assumed velocity of blood flow at the proximal section. In some embodiments, the system at block2424is configured to determine a change in pressure between the proximal and distal sections, for example based on the assumed velocity of blood flow at the proximal section, the quantified velocity of blood flow at the distal section, the cross-sectional area at the proximal section, and/or the cross-sectional area at the distal section. In some embodiments, at block2426, the system is configured to determine a velocity time integral (VTI) at the distal section, for example based on the quantified velocity of blood flow at the distal section. In some embodiments, the system at block2428is configured to determine ischemia for the subject, for example based on one or more of the determined change in pressure between the proximal and distal sections and/or VTI at the distal section. Determining Myocardial Infarction Risk and Severity The systems and methods described herein can also be used for determining myocardial infarction risk and severity from image-based quantification and characterization of coronary atherosclerosis. For example, various embodiments described herein relate to systems, methods, and devices for determining patient-specific indications of myocardial infarction risk and severity risk from image-based quantification and characterization of coronary atherosclerosis burden, type, and/or rate of progression. 
One innovation includes a computer-implemented method of determining a myocardial risk factor via an algorithm-based medical imaging analysis, the method comprising performing a comprehensive atherosclerosis and vascular morphology characterization of a portion of the coronary vasculature of a patient using information extracted from medical images of the portion of the coronary vasculature of the patient, performing a characterization of the myocardium of the patient using information extracted from medical images of the myocardium of the patient, correlating the characterized vascular morphology of the patient with the characterized myocardium of the patient, and determining a myocardial risk factor indicative of a degree of myocardial risk from at least one atherosclerotic lesion. Performing the comprehensive atherosclerosis and vascular morphology characterization of the portion of the coronary vasculature of the patient can include identifying the location of the at least one atherosclerotic lesion. Determining the myocardial risk factor indicative of the degree of myocardial risk from the at least one atherosclerotic lesion can include determining a percentage of the myocardium at risk from the at least one atherosclerotic lesion. Determining a percentage of the myocardium at risk from the at least one atherosclerotic lesion can include determining the percentage of the myocardium subtended by the at least one atherosclerotic lesion. Determining the myocardial risk factor indicative of the degree of myocardial risk from the at least one atherosclerotic lesion can include determining an indicator reflective of a likelihood that the at least one atherosclerotic lesion will contribute to a myocardial infarction. Performing the characterization of the myocardium of the patient can include performing a characterization of the left ventricular myocardium of the patient. The method can further include correlating the determined myocardial risk factor to at least one risk of a severe clinical event. The method can further include comparing the determined myocardial risk factor to a second myocardial risk factor indicative of a degree of myocardial risk to the patient at a previous point in time. Another innovation includes a computer-implemented method of determining a segmental myocardial risk factor via an algorithm-based medical imaging analysis, the method comprising characterizing vascular morphology of the coronary vasculature of a patient using information extracted from medical images of the coronary vasculature of the patient, identifying at least one atherosclerotic lesion within the coronary vasculature of the patient using information extracted from medical images of the portion of the coronary vasculature of the patient, characterizing a plurality of segments of the myocardium of the patient to generate a segmented myocardial characterization using information extracted from medical images of the myocardium of the patient, correlating the characterized vascular morphology of the patient with the segmented myocardial characterization of the patient, and generating an indicator of segmented myocardial risk from the at least one atherosclerotic lesion. Generating an indicator of segmented myocardial risk can include generating a discrete indicator of myocardial risk for at least a subset of the plurality of segments of the myocardium. 
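The following is a minimal sketch of how a percentage of subtended myocardium might be computed for the artery containing a lesion; the artery-to-segment mapping over the AHA 17-segment model and the per-segment masses are hypothetical, and a real system would derive both patient-specifically from the characterized vasculature and myocardium.

```python
# Hypothetical mapping from coronary arteries to AHA 17-segment model territories and
# hypothetical per-segment myocardial masses (g); both would be patient-specific in practice.
ARTERY_TO_SEGMENTS = {
    "LAD": [1, 2, 7, 8, 13, 14, 17],
    "LCx": [5, 6, 11, 12, 16],
    "RCA": [3, 4, 9, 10, 15],
}
SEGMENT_MASS_G = {s: 8.0 for s in range(1, 17)}
SEGMENT_MASS_G[17] = 6.0  # apical segment assumed slightly smaller

def percent_myocardium_subtended(lesion_artery: str) -> float:
    """Percentage of total left ventricular myocardium subtended by the artery containing a lesion."""
    total = sum(SEGMENT_MASS_G.values())
    subtended = sum(SEGMENT_MASS_G[s] for s in ARTERY_TO_SEGMENTS[lesion_artery])
    return 100.0 * subtended / total

for artery in ARTERY_TO_SEGMENTS:
    print(artery, round(percent_myocardium_subtended(artery), 1))
```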
Generating an indicator of segmented myocardial risk can include generating a discrete indicator of myocardial risk for each of the plurality of segments of the myocardium. Correlating the characterized vascular morphology of the patient with the segmented myocardial characterization of the patient can include identifying for each of the myocardial segments a coronary artery primarily responsible for supplying oxygenated blood to that myocardial segment. The segmented myocardial characterization can be segmented into 17 segments according to a standard AHA 17-segment model. In another innovation, a computer-implemented method of determining a segmental myocardial risk factor via an algorithm-based medical imaging analysis is provided, the method comprising applying at least a first algorithm to a first plurality of images of the coronary vasculature of a patient obtained using a first imaging technology to characterize the vascular morphology of the coronary vasculature of the patient and to identify a plurality of atherosclerotic plaque lesions, applying at least a second algorithm to a first plurality of images of the myocardium of the patient obtained using a second imaging technology to characterize the myocardium of the patient, applying at least a third algorithm to relate the characterized vascular morphology of the patient with the characterized myocardium of the patient, and calculating a percentage of subtended myocardium at risk from at least one of the plurality of identified atherosclerotic plaque lesions. The method can additionally include applying an algorithm to a second plurality of images of the coronary vasculature of the patient obtained using a third imaging technology to characterize the vascular morphology of the coronary vasculature of the patient and to identify a plurality of atherosclerotic plaque lesions. Applying an algorithm to a second plurality of images of the coronary vasculature of the patient can include applying the first algorithm to the second plurality of images of the coronary vasculature of the patient. The method can additionally include applying an algorithm to a second plurality of images of the myocardium of the patient obtained using a third imaging technology to characterize the myocardium of the patient. Applying at least the first algorithm to the first plurality of images of the coronary vasculature of a patient obtained using the first imaging technology can additionally include determining characteristics of the identified plurality of atherosclerotic plaque lesions. The method can additionally include determining a risk of the identified plurality of atherosclerotic plaque lesions contributing to a myocardial infarction, and determining an overall risk indicator based on the determined risk of the identified plurality of atherosclerotic plaque lesions contributing to a myocardial infarction and the calculated percentage of subtended myocardium at risk from the identified plurality of atherosclerotic plaque lesions. The method can additionally include relating the calculated percentage of subtended myocardium at risk from at least one of the plurality of identified atherosclerotic plaque lesions to a risk of at least one adverse clinical event. Overview Various embodiments described herein relate to systems, methods, and devices for determining patient-specific myocardial infarction (MI) risk indicators from image-based analysis of arterial atherosclerotic lesion(s). 
The heart includes epicardial coronary arteries, vessels which transmit oxygenated blood from the aorta to the myocardium of the heart. Within these epicardial coronary arteries, atherosclerotic lesions can build up due to plaque accumulation. These atherosclerotic lesions can erode or rupture, dislodging plaque and leading to thrombotic occlusion of a blood vessel at a location distal of the atherosclerotic lesion location, leading to a myocardial infarction (MI) or major adverse cardiovascular events (MACE), also known as a heart attack. During a heart attack, flow of oxygenated blood to the myocardium is impeded by the thrombotic occlusion of the blood vessel, leading to damage, including irreversible damage, of the myocardium. Myocardial damage may directly impact the ability of the heart muscle to contract and/or relax normally, a condition which may lead to clinically manifest heart failure. Heart failure is a complex syndrome which may affect a patient in a number of ways. The quality of life may be impaired, due to shortness of breath or other symptoms, and mortality may be accelerated. The contractile function of the heart may be impaired in one or more aspects, including reduced ejection fraction, elevated left ventricular volumes, left ventricular non-viability, and myocardial stunning, as well as abnormal heart rhythms, such as ventricular tachyarrhythmias. Surgical intervention, including coronary artery bypass surgery and heart transplants, may be needed, along with other invasive procedures, such as stent procedures. The likelihood that a given atherosclerotic lesion may lead to a myocardial infarction or other major adverse cardiovascular event may be dependent, at least in part, on the properties of the lesion, including the nature of the accumulated plaque. The presence of fatty plaque buildup can inhibit blood flow therethrough to a greater extent than calcified plaque build-up. When an artery contains “good” or stable plaque, or plaque comprising hardened calcified content, the lesion may be less likely to result in a life-threatening condition such as a myocardial infarction. In contrast, atherosclerotic lesions containing “bad” or unstable plaque or plaque comprising fatty material can be more likely to rupture within arteries, releasing the fatty material into the arteries. Such a fatty material release in the blood stream can cause inflammation that may result in a blood clot. A blood clot in the artery can cause a stoppage of blood flow to the heart tissue, which can result in a heart attack or other cardiac event. Evaluation of the nature of a given atherosclerotic lesion may be used to make a determination as to whether a lesion contains “high-risk plaque” or “vulnerable plaque” which is likely to contribute to a future myocardial infarction. Although such predictions are not exact, evaluation of various characteristics of a given atherosclerotic lesion may be used to classify the atherosclerotic lesion as being a high-risk plaque. These characteristics include, but are not limited to, atherosclerosis burden, composition, vascular remodeling, diffuseness, location, direction, and napkin-ring sign, among other characteristics. The evaluation may be based on medical imagery indicative of the cardiovascular system of a patient. Various medical imaging processes may be used in the analyses described herein. In some embodiments, invasive medical imaging may be used to gather information regarding a given atherosclerotic lesion. 
In other embodiments, however, non-invasive medical imaging may be used, such as coronary computed tomographic angiography (CCTA), which allows direct visualization of coronary arteries in a non-invasive fashion. In some embodiments, the characterization of atherosclerosis and vascular morphology may include the analysis of a series of CCTA images or any other suitable images, and the generation of a three-dimensional model of a portion of the patient's cardiovascular system. This analysis can include the generation of one or more quantified measurements of vessels from the raw medical image, such as for example diameter, volume, morphology, and/or the like. This analysis may segment the vessels in a predetermined manner, or in a dynamic manner, in order to provide a more detailed overview of the vascular morphology of the patient. In particular, in some embodiments, the system can be configured to utilize one or more AI and/or ML algorithms to automatically and/or dynamically identify one or more arteries, including for example coronary arteries, although other portions of a patient's cardiovascular system may also be identified. In some embodiments, one or more AI and/or ML algorithms use a convolutional neural network (CNN) that is trained with a set of medical images (e.g., CT scans) on which arteries and features (e.g., plaque, lumen, perivascular tissue, and/or vessel walls) have been identified, thereby allowing the AI and/or ML algorithm to automatically identify arteries directly from a medical image. In some embodiments, the arteries are identified by size and/or location. This analysis can also include the identification and classification of plaque within the cardiovascular system of the patient. In some embodiments, the system can be configured to identify a vessel wall and a lumen wall for each of the identified coronary arteries in the medical image. In some embodiments, the system is then configured to determine the volume in between the vessel wall and the lumen wall as plaque. In some embodiments, the system can be configured to identify regions of plaque based on the radiodensity values typically associated with plaque, for example by setting a predetermined threshold or range of radiodensity values that are typically associated with plaque with or without normalizing using a normalization device. In some embodiments, the characterization of atherosclerosis may include the generation of one or more quantified measurements from a raw medical image, such as for example radiodensity of one or more regions of plaque, identification of stable plaque and/or unstable plaque, volumes thereof, surface areas thereof, geometric shapes, heterogeneity thereof, and/or the like. Using this plaque identification and classification, the overall plaque volume may be determined, as well as the amount of calcified stable plaque and the amount of uncalcified plaque. In some embodiments, more detailed classification of atherosclerosis than a binary assessment of calcified vs. non-calcified plaque may be made. For example, the plaque may be classified ordinally, with plaque classified as dense calcified plaque, calcified plaque, fibrous plaque, fibrofatty plaque, necrotic core, or admixtures of plaque types. The plaque may also be classified continuously, by attenuation density on a scale such as a Hounsfield unit scale or a similar classification system. The information which can be obtained in the characterization of atherosclerosis may be dependent upon the type of imaging being performed.
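By way of a non-limiting illustration, the ordinal classification by attenuation density described above can be sketched in a few lines of Python; the Hounsfield unit thresholds, the array-based representation of the scan, and the function names below are assumptions made only for purposes of illustration, not values or interfaces required by any embodiment.

import numpy as np

# Assumed, illustrative Hounsfield unit ranges for the ordinal plaque classes
# discussed above; actual thresholds depend on the scanner, protocol, and any
# normalization device that is used.
PLAQUE_CLASSES = [
    ("necrotic core", -np.inf, 30.0),
    ("fibrofatty plaque", 30.0, 130.0),
    ("fibrous plaque", 130.0, 350.0),
    ("calcified plaque", 350.0, 700.0),
    ("dense calcified plaque", 700.0, np.inf),
]

def classify_plaque_voxels(hu_volume, plaque_mask):
    # hu_volume: 3D array of radiodensity values (Hounsfield units).
    # plaque_mask: boolean array of the same shape marking voxels between the
    # lumen wall and the vessel wall that were identified as plaque.
    plaque_hu = hu_volume[plaque_mask]
    return {name: int(np.count_nonzero((plaque_hu >= lo) & (plaque_hu < hi)))
            for name, lo, hi in PLAQUE_CLASSES}

def plaque_volumes_mm3(class_counts, voxel_volume_mm3):
    # Convert per-class voxel counts to volumes using the scan's voxel size.
    return {name: count * voxel_volume_mm3 for name, count in class_counts.items()}

Summing the calcified and dense calcified classes against the remaining classes would yield the binary calcified versus non-calcified assessment mentioned above, while keeping the per-class counts preserves the more detailed ordinal classification.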
As an example of this dependence on the imaging type, when the CCTA images are created using a single-energy CT process, the relative material density of the plaque relative to the surrounding tissue can be determined, but the absolute material density may be unknown. In contrast, when the CCTA images are created using a multi-energy CT process, the absolute material density of the plaque and other surrounding tissue can be measured. The characterization of atherosclerosis and vascular morphology may include in particular the identification and classification of atherosclerotic lesions within the cardiovascular system of the patient, and in certain embodiments within the coronary arteries of the patient. This may include the calculation or determination of a binary or numerical indicator regarding one or more parameters of an atherosclerotic lesion, based on the quantified and/or classified atherosclerosis derived from the medical image. The system may be configured to calculate such indicators regarding one or more parameters of an atherosclerotic lesion using the one or more vascular morphology parameters and/or quantified plaque parameters derived from the medical image of a coronary region of the patient. In some embodiments, the system is configured to dynamically identify an atherosclerotic lesion within an artery, and calculate information regarding the atherosclerotic lesion and the adjacent section of the vessel, such as vessel parameters including diameter, curvature, local vascular morphology, and the shape of the vessel wall and the lumen wall in the area of the atherosclerotic lesion. Calculation of Myocardial Risk FIG.25Ais a flowchart illustrating a process2500for determining an indicator of risk that an atherosclerotic lesion will contribute to a myocardial infarction or other major adverse cardiovascular event. At block2505, a system can access a plurality of images indicative of a portion of a cardiovascular system of a patient. These images can be, for example, CCTA images or any other suitable images, and can be generated at a medical facility. These images can be reflective of, for example, a portion of the cardiovascular system of a patient including the coronary arteries, and can be representative of at least an entire cardiac cycle. In some embodiments, these CCTA images may be reflective of the portion of the cardiovascular system of a patient both prior to and after exposure of the patient to a vasodilatory substance, such as nitroglycerin or iodinated contrast. These CCTA images can be reflective of one or more known physiologic conditions of the patient, such as an at rest state or a hyperemic state. At block2510, the system can analyze the images to identify at least one atherosclerotic lesion (e.g., artery abnormalities) within the portion of the cardiovascular system of the patient. Atherosclerotic lesions may develop predominantly at branches, bends, and bifurcations in the arterial tree. Identifying the at least one atherosclerotic lesion within the portion of the cardiovascular system can include determining information on characteristics and parameters of the atherosclerotic lesion using any of the functionality described herein, for example, information on plaque and its characteristics/parameters, lesion size, lesion location, vessel and/or lumen size and shape information, etc. This identification may be, for example, part of a broader characterization of atherosclerosis and vascular morphology based on the plurality of images.
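As a further illustrative sketch, the degree of luminal narrowing at a dynamically identified lesion can be summarized from lumen diameters sampled along a vessel centerline; the data structure, reference-diameter convention, and function name below are hypothetical and are not intended to describe the specific algorithm of any embodiment.

from dataclasses import dataclass
from typing import List

@dataclass
class CenterlinePoint:
    position_mm: float        # distance along the vessel centerline
    lumen_diameter_mm: float  # lumen diameter measured at this point

def percent_diameter_stenosis(points: List[CenterlinePoint],
                              lesion_start_mm: float,
                              lesion_end_mm: float) -> float:
    # Assumed formulation: the reference diameter is the mean lumen diameter
    # outside the lesion interval, and the minimal lumen diameter inside the
    # interval defines the stenosis.
    inside = [p.lumen_diameter_mm for p in points
              if lesion_start_mm <= p.position_mm <= lesion_end_mm]
    outside = [p.lumen_diameter_mm for p in points
               if p.position_mm < lesion_start_mm or p.position_mm > lesion_end_mm]
    if not inside or not outside:
        raise ValueError("lesion interval must lie within the sampled centerline")
    reference = sum(outside) / len(outside)
    return 100.0 * (1.0 - min(inside) / reference)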
A characterization of atherosclerosis can include the identification of the location, volume and/or type of plaque throughout the portion of the cardiovascular system of the patient. At block2515, the system can apply an algorithm that analyzes characteristics/parameters of the identified atherosclerotic lesion to determine an indicator of risk that an atherosclerotic lesion will contribute to a myocardial infarction or other major adverse cardiovascular event. This analysis can include, for example, any of atherosclerosis burden, composition, vascular remodeling, diffuseness, location, direction, and napkin-ring sign, among other characteristics, as well as any combination thereof. The napkin-ring sign refers to a rupture-prone plaque in a coronary artery, comprising a necrotic core covered by a thin fibrous cap (a thin-cap fibroatheroma). In some embodiments, the indicator of risk may be a binary indicator, and the system may designate one or more analyzed atherosclerotic lesions as either being high-risk for a heart attack (myocardial infarction (MI)) or other major cardiac event, or not being high-risk for an MI or other major cardiac event. In other embodiments, the indicator may be a numerical indicator providing a more granular indication of the degree of risk presented by a given atherosclerotic lesion. For example, a number from 1.0 (low) to 10.0 (high), or in another example, from 1 (low) to 100 (high). In some embodiments, multiple analyses may be used, using different combinations of parameters and/or different weightings of parameters, and multiple analyses of the same atherosclerotic lesion may be used in making an aggregate assessment of risk. For example, if any of the multiple analyses classify an atherosclerotic lesion as being high risk, the atherosclerotic lesion may be designated as high risk out of caution. In other embodiments, the indicators of risk from the various analyses may be averaged or otherwise combined into an aggregate indicator of risk. While such an analysis may be used to provide a binary or numerical indication of a risk that a given atherosclerotic lesion may contribute to a myocardial infarction or other major adverse cardiovascular event, such an indicator, in isolation, may not provide an indication of a level of risk associated with a myocardial infarction or other major adverse cardiovascular event which would be caused by that atherosclerotic lesion. An important factor in the overall level of risk to the health of a patient presented by a given atherosclerotic lesion is the location of that atherosclerotic lesion relative to the surrounding portions of the cardiovascular system. FIG.25Bis a schematic illustration of a human heart, illustrating certain coronary arteries. The heart muscle2520is supplied with oxygenated blood from the aorta2521by the coronary vasculature, which includes a complex network of vessels ranging from large arteries to arterioles, capillaries, venules, veins, etc. Like all other tissues in the body, the heart muscle2520needs oxygen-rich blood to function. Also, oxygen-depleted blood must be carried away. The coronary arteries wrap around the outside of the heart muscle2520. Small branches extend into the heart muscle2520to bring it blood. The coronary arteries include the right coronary artery (RCA)2525which extends from the aorta2521downward along the right side of the heart2520, and the left main coronary artery (LMCA)2522which extends from the aorta2521downward on the left side of the heart2520.
The RCA2525supplies blood to the right ventricle, the right atrium, and the SA (sinoatrial) and AV (atrioventricular) nodes, which regulate the heart rhythm. The RCA2525divides into smaller branches, including the right posterior descending artery and the acute marginal artery. Together with the left anterior descending artery2524, the RCA2525helps supply blood to the middle or septum of the heart. The LMCA2522branches into two arteries, the anterior interventricular branch of the left coronary artery, also known as the left anterior descending (LAD) artery2524and the circumflex branch of the left coronary artery2523. The LAD artery2524supplies blood to the front of the left side of the heart2520. The circumflex branch of the left coronary artery2523encircles the heart muscle. The circumflex branch of the left coronary artery2523supplies blood to the outer side and back of the heart, following the left part of the coronary sulcus, running first to the left and then to the right, reaching nearly as far as the posterior longitudinal sulcus. Because the various coronary arteries supply blood to particular regions of the heart, the impact of an interruption in the amount of oxygenated blood passing through a given vessel caused by stenosis or occlusion is dependent upon the location of the vessel at which the stenosis or occlusion occurs. A stenosis or occlusion proximal the aorta and/or located in a larger vessel can impact a larger percentage of the heart muscle2520, and in particular the myocardium, than a stenosis or occlusion distal the aorta and/or located in a smaller vessel. In some embodiments, the ischemic impact of a stenosis in a coronary artery can be evaluated by relating blood flow within the coronary arteries of a patient to the corresponding myocardium that the coronary arteries subtend. In such ischemia imaging processes, coronary stenosis can be evaluated to identify regions that may impede blood flow within the epicardial coronary arteries, and relate that impediment to blood flow to the percentage of myocardium that is at risk of becoming ischemic, or otherwise impacted by reduced blood supply. The evaluation of impacted myocardium can be combined with the evaluation of the risk that a given lesion may cause a myocardial infarction or other major adverse cardiovascular event in order to provide an indicator of the risk to the broader cardiac health of a patient posed by a given atherosclerotic lesion. In some embodiments, this can be expressed in terms of a percentage of subtended myocardium at risk (referred to herein as % SMAR), linking a given coronary atherosclerotic plaque lesion location within a coronary artery to the myocardium subtended by the coronary artery distal of the lesion location. FIG.25Cis a flowchart illustrating a process2530for determining an indicator of a myocardial risk posed by an atherosclerotic lesion. At block2531, a system can access a plurality of images indicative of a portion of a cardiovascular system of a patient. These images can be, for example, the result of a contrast-enhanced CT scan performed of the patient's heart and cardiac arteries. In other embodiments, however, these images may be generated using a wide variety of other imaging techniques, including but not limited to ultrasound, magnetic resonance imaging, or nuclear testing. In addition, multiple imaging modalities can be used to enhance the analysis, as discussed in greater detail herein.
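The % SMAR quantity introduced above can be illustrated with a short sketch; the tree representation of the coronary arteries, the per-branch subtended-mass values, and the example numbers are hypothetical assumptions used only to show the form of the calculation.

from dataclasses import dataclass, field
from typing import List

@dataclass
class VesselSegment:
    name: str
    subtended_mass_g: float                 # myocardial mass supplied directly by this segment
    children: List["VesselSegment"] = field(default_factory=list)

def distal_subtended_mass(segment: VesselSegment) -> float:
    # Myocardial mass subtended by this segment and everything distal to it.
    return segment.subtended_mass_g + sum(distal_subtended_mass(c) for c in segment.children)

def percent_smar(lesion_segment: VesselSegment, total_myocardial_mass_g: float) -> float:
    # % SMAR for a lesion in 'lesion_segment': the share of the total myocardial
    # mass that is subtended at and distal to the lesion location.
    return 100.0 * distal_subtended_mass(lesion_segment) / total_myocardial_mass_g

# Hypothetical example: a lesion in the proximal LAD.
distal_lad = VesselSegment("distal LAD", 18.0)
mid_lad = VesselSegment("mid LAD", 25.0, [distal_lad])
diagonal = VesselSegment("first diagonal", 12.0)
proximal_lad = VesselSegment("proximal LAD", 10.0, [diagonal, mid_lad])
print(round(percent_smar(proximal_lad, total_myocardial_mass_g=150.0), 1))  # 43.3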
At block2532, the system can determine a characterization of atherosclerosis and vascular morphology based on the plurality of accessed images. The characterization of the vascular morphology can include, for example, the automated extraction and labeling of the coronary arteries, including the various branches and segments thereof. As an example, this labeling can include the identification and labeling of the centerlines of the various vessel segments to facilitate the extraction and labeling of the various segments. As another example, this labeling can include the identification and labeling of the lumen and vessel walls of the various vessel segments to facilitate the extraction and labeling of the various segments. This characterization of the vascular morphology provides a patient-specific characterization of the vascular morphology of the patient. In particular, it can be used to provide a patient-specific characterization of the coronary artery tree. The system can also determine a characterization of atherosclerosis within the coronary vessels. In particular, the characterization of atherosclerosis can include the automated identification of atherosclerotic plaque lesions within the vasculature of the patient. A number of characteristic parameters of the identified atherosclerotic plaque lesions can be automatically calculated by the system, including but not limited to their volume, their composition, their remodeling, their location, and their relation to the myocardium of the patient. The use, for example, of a contrast-enhanced CT scan allows the identification by the system of the composition of the various identified atherosclerotic plaque lesions, such as by identifying them as primarily fatty plaque build-up or primarily calcified plaque build-up, as well as an indication of the density of the plaque build-up. Positive remodeling of the surrounding vessel in the location of the identified atherosclerotic plaque lesions can also provide an indication of the risk posed by the identified atherosclerotic plaque lesions. Although illustrated as a single block2532, the characterization of the vascular morphology can be performed in separate steps and in any suitable order. For example, in some embodiments, further characterization of the vascular morphology may be performed based at least in part on the characterization of atherosclerosis, with additional analysis applied to portions of the vasculature of the patient affected by the identified atherosclerotic plaque lesions. At block2533, the system can determine a characterization of the myocardium of the patient based on a plurality of accessed images. In some embodiments, the myocardium of the patient may be characterized using one or more of the same accessed images used for the characterization of atherosclerosis and vascular morphology, while in other embodiments, medical imagery obtained via a different imaging technique may be used in the characterization of the myocardium. In some embodiments, a cardiac MRI or other imaging technique may be used to generate images used for characterization of the myocardium. In some embodiments, the characterization of the myocardium may be a characterization of only a portion of the myocardium of the patient, or may be a characterization which focuses primarily on certain regions of the myocardium, such as the left ventricular myocardium, due to the increased thickness of the myocardium in the left ventricle.
This characterization may include, for example, the relative and absolute size of the ventricular mass, as well as the overall shape of the ventricular mass. At block2534, the system relates the characterization of the vascular morphology to the characterization of the myocardium to provide a patient-specific characterization of the relationship between the patient-specific vascular morphology characterization and the patient-specific myocardium characterization. Because there can be significant differences between patients in terms of the blood supply from specific coronary arteries to various portions of the myocardium, the relation of the patient-specific vascular morphology characterization to the characterization of the myocardium can be used to more accurately predict the impact on the myocardium of an occlusion or other stenosis at a given location within the patient-specific vasculature. This relation between the patient-specific vascular morphology characterization and the patient-specific myocardium characterization can include relating the identified atherosclerotic plaque lesions within the vasculature of the patient to the characterization of the myocardium. The relation can include, for example, one or more atherosclerosis metrics, including the volume and composition of the atherosclerotic plaque lesions, as well as the percent atheroma volume, that is, the percentage of total vessel wall occupied by the atherosclerotic plaque. The remodeling of the surrounding vessel wall may also be taken into account in this analysis. At block2535, the system determines an indicator of the amount of the myocardium at risk for a given atherosclerotic plaque lesion. This indicator may be, for example, a measure of the subtended myocardium at risk from that atherosclerotic plaque lesion. The myocardium at risk may be calculated or estimated based on the percentage of the myocardium that is subtended by the coronary artery at and distal the point of the atherosclerotic plaque lesion. In other embodiments, this indicator may be a binary or numerical indicator which may be based on the percentage of the subtended myocardium at risk, but may also take into account other factors, such as a likelihood that a given atherosclerotic plaque lesion will lead to an MI or similar event. By determining an indicator based at least in part on the subtended myocardium percentage, a broader indication of the risk to a patient's cardiovascular health can be provided. The use of such an indicator allows further tailoring of patient diagnosis and treatment based upon a patient-specific indication of the degree of risk posed by an MI or other major cardiac event caused by a given atherosclerotic lesion. If a given atherosclerotic lesion represents a high risk of resulting in an MI or other major cardiac event, but only a small percentage of the myocardium, such as 2% of the myocardium (or e.g., less than 5%), is subtended by the lesion and at risk, less drastic medical treatment, such as medical therapy, may be prescribed to the patient, rather than invasive percutaneous procedures such as stent placement or bypass surgery.
In contrast, if a given atherosclerotic lesion subtends a comparatively high percentage of the myocardium, such as 20% of the myocardium (or e.g., more than 20%), percutaneous intervention to seal or bypass the lesion may be prescribed, as the intervention would be expected to result in a significant reduction of risk of adverse consequences associated with an MI or other severe event. This may be the case even when the risk of such an MI or other severe event is comparatively low, due to the danger to a substantial percentage of the myocardium posed by that atherosclerotic lesion. In some embodiments, the characterization of the myocardium may include a segmented analysis of specific segments of the myocardium.FIG.25Dis a flowchart illustrating a process2540for determining an indicator of a segmental myocardial risk posed by an atherosclerotic lesion. At block2541, a system can access a plurality of images indicative of a portion of a cardiovascular system of a patient. These images can be, for example, the result of a contrast-enhanced CT scan performed of the patient's heart and cardiac arteries, but may also include images generated by another imaging technique. At block2542, the system can determine a characterization of atherosclerosis and vascular morphology based on the plurality of accessed images. The characterization of the vascular morphology can include, for example, the automated extraction and labeling of the coronary arteries, including the various branches and segments thereof, as well as the automatic identification of atherosclerotic plaque lesions within the vascular morphology. At block2543, the system can determine a characterization of one or more segments of the myocardium of the patient based on a plurality of accessed images. In some embodiments, the myocardium of the patient may be characterized using the same accessed images used for the characterization of atherosclerosis and vascular morphology, while in other embodiments, medical imagery obtained via a different imaging technique may be used in the characterization of the myocardium segments, such as a cardiac MRI or intracardiac echocardiography. In some embodiments, the myocardium may be segmented according to a standard AHA 17-segment model. The AHA 17-segment model divides the left ventricle vertically into a basal section, a mid-cavity section, and an apical section, each of which is radially subdivided into additional segments. The basal section is divided into six radial segments, the basal anterior, basal anteroseptal, basal inferoseptal, basal inferior, basal inferolateral, and basal anterolateral. The mid-cavity is similarly divided into six radial segments, the mid-anterior, mid-anteroseptal, mid-inferoseptal, mid-inferior, mid-inferolateral, and mid-anterolateral. The tapered apical section is divided into four radial segments, the apical anterior, apical septal, apical inferior, and apical lateral. The apical cap, or apex, is analyzed as a single contiguous segment. The AHA 17-segment model is one example of a segmentation model which can be used to characterize the myocardium, although any other suitable segmentation model may also be used. Due to the symmetrical radial segmentation, segmental characterization of the myocardium according to the AHA 17-segment model can provide a reproducible segmentation which can be used to monitor changes in the myocardium of a patient over time, compare the myocardial characteristics in various states for a given patient, and compare patients to one another.
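A minimal sketch of the standard AHA 17-segment layout, together with a commonly assumed default assignment of segments to coronary territories, is shown below; because the segment-to-artery relationship is patient-specific, as discussed above, the default mapping is only a hypothetical starting point that a patient-specific correlation would override.

AHA_17_SEGMENTS = [
    # basal ring (segments 1-6)
    "basal anterior", "basal anteroseptal", "basal inferoseptal",
    "basal inferior", "basal inferolateral", "basal anterolateral",
    # mid-cavity ring (segments 7-12)
    "mid-anterior", "mid-anteroseptal", "mid-inferoseptal",
    "mid-inferior", "mid-inferolateral", "mid-anterolateral",
    # apical section (segments 13-16) and apical cap (segment 17)
    "apical anterior", "apical septal", "apical inferior", "apical lateral",
    "apex",
]

# Commonly assumed default territories; individual patients can differ, and the
# apex in particular may be supplied by any of the three arteries.
DEFAULT_TERRITORY = {
    "LAD": [1, 2, 7, 8, 13, 14, 17],
    "RCA": [3, 4, 9, 10, 15],
    "LCX": [5, 6, 11, 12, 16],
}

def segments_at_risk(culprit_artery, patient_specific_map=None):
    # Return the 1-based AHA segment numbers considered at risk for a lesion in
    # the given artery, preferring a patient-specific mapping when one exists.
    mapping = patient_specific_map or DEFAULT_TERRITORY
    return sorted(mapping.get(culprit_artery, []))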
The regular segmentation can also facilitate the analysis of prior myocardial characterizations, even if not generated using the same system. Under the standard AHA model, certain segments of the myocardium can be considered to generally be provided with blood by a specific one of the left anterior descending artery, right coronary artery, and left circumflex artery, with a larger percentage of the segments being considered to be provided with blood by the left anterior descending artery. For example, occlusion of the left anterior descending artery is often called the widow-maker infarction, due to the severe impact it can have on the operation of the heart. However, there can be significant variation on a patient-by-patient basis due to the specific cardiovascular anatomy of each patient. For example, the apex segment can be provided with blood by any of the left anterior descending artery, right coronary artery, and left circumflex artery. Other segments can be primarily provided with oxygenated blood by different coronary arteries in different patients. In other embodiments, alternative segmentation patterns may be used, and in some embodiments, the myocardium may be dynamically segmented for the purposes of characterization. Such dynamic segmentation may, for example, take into account the patient-specific vasculature characterization to identify segments of the myocardium within which a given vessel is likely to supply the majority of the oxygenated blood. Such dynamic segmentation can also be used as part of an iterative process once the vasculature characterization is related to an initial myocardial characterization. At block2544, the system relates the characterization of the vascular morphology to the segmented characterization of the myocardium to provide a patient-specific characterization of the relationship between the patient-specific vascular morphology characterization and the patient-specific characterization of at least one segment of the myocardium. In some embodiments, the characterizations of all segments of the myocardium are related to the characterization of the vascular morphology. By providing a patient-specific relation of the characterization of the vascular morphology to the segmented characterization of the myocardium, the system may be able to more accurately model the impacted regions of the myocardium of a given patient than would be possible using a standardized association between the myocardial segments and the coronary vessels. At block2545, the system determines an indicator of the segmental myocardial risk for a given atherosclerotic plaque lesion. In some embodiments, the indicator of the segmental myocardial risk may include an identification of the myocardial segments which are at least partially subtended by the atherosclerotic plaque lesion, and at risk from an MI or other severe cardiac event caused by the atherosclerotic plaque lesion. In some embodiments, a percentage of subtended myocardium at risk (e.g., “% SMAR”) for each of the analyzed myocardial segments may be generated, which may provide a more precise indication of the risks posed by a given atherosclerotic plaque lesion. In addition to or in place of risk indicators relating to the risks posed by given atherosclerotic plaque lesions, overall risk factors may also be determined which are indicative of the risks posed by a plurality of atherosclerotic plaque lesions, or by all identified atherosclerotic plaque lesions.
In some embodiments, such an overall risk factor may include a cumulative % SMAR value for all identified atherosclerotic plaque lesions. In some embodiments, risk indicators associated with the various identified atherosclerotic plaque lesions may be weighted or otherwise used in the calculation of a cumulative risk indicator. In some embodiments, the % SMAR or other risk indicator based thereon may be related to a risk of adverse clinical events.FIG.25Eis a flowchart illustrating a process2550for determining a risk of adverse clinical events caused by an atherosclerotic lesion. At block2551, a system determines characterizations of atherosclerosis, vascular morphology, and myocardium of a patient based on one or more pluralities of accessed images. The characterization of atherosclerosis can include the identification of at least one atherosclerotic lesion within the vasculature of a patient. The characterization of the myocardium may include the characterization of discrete sections of the myocardium. At block2552, the system correlates the characterization of the vascular morphology to the characterization of the myocardium to provide a patient-specific characterization of the relationship between the vascular morphology characterization and the myocardial characterization. At block2553, the system calculates a percentage of myocardium at risk from at least one atherosclerotic plaque lesion. In some embodiments, the calculated percentage is reflective of the percentage of the entire myocardium at risk. In some embodiments, the calculated percentage is reflective of the percentage of one or more segments of the myocardium. At block2554, the system can relate the calculated percentage of myocardium at risk to a risk of one or more adverse clinical events. In some embodiments, the risk may be calculated for each of a plurality of adverse clinical events. In some embodiments, the adverse clinical events may include reductions in quality of life, such as shortness of breath. In some embodiments, these adverse clinical events may include severe clinical events such as accelerated mortality, a need for percutaneous coronary revascularization such as a stent procedure, or a need for heart transplant or coronary artery bypass surgery. In some embodiments, these adverse clinical effects may relate to reduced contractile function, such as low ejection fraction, elevated left ventricular volumes, left ventricular non-viability, and myocardial stunning. In some embodiments, these adverse clinical events may include abnormal heart rhythms such as ventricular tachyarrhythmias. In some embodiments, a risk indicator based on percentage of myocardium at risk may be reevaluated after some time has elapsed, or after treatment has been carried out.FIG.25Fis a flowchart illustrating a process2560for updating a risk of adverse clinical events caused by an atherosclerotic lesion. At block2561, a system accesses information indicative of the state of a portion of a cardiovascular system of a patient at a first point in time, as well as a plurality of images indicative of the state of the portion of the cardiovascular system of the patient at a second point in time after the first point in time. In some embodiments, the second point in time may be a recent point in time, such that the reevaluated risk indicator will be indicative of the risk of the patient at the current point in time. 
In some embodiments, the information indicative of the state of the portion of the cardiovascular system of the patient at the first point in time can include a previously calculated risk indicator. In other embodiments, the information indicative of the state of the portion of the cardiovascular system of the patient at the first point in time can include a plurality of images indicative of the state of the portion of the cardiovascular system of the patient at the first point in time, and a risk factor indicative of the state of the portion of the cardiovascular system of the patient at the first point in time can be determined at the same time as the updated risk factor reflective of the state of the patient at the second point in time. At block2562, the system determines characterizations of atherosclerosis, vascular morphology, and myocardium of the patient based on the plurality of accessed images indicative of the state of the patient at the second point in time. If the information indicative of the state of the portion of the cardiovascular system of the patient at the first point in time includes a plurality of images indicative of the state of the portion of the cardiovascular system of the patient at the first point in time, the system may also determine characterizations of atherosclerosis, vascular morphology, and myocardium of the patient based on the plurality of accessed images indicative of the state of the patient at the first point in time. In an embodiment in which the system uses an AI or ML algorithm to determine these characterizations, redetermination of the characteristics of the patient at the first point in time can ensure consistency between these determinations, in the event that the AI or ML algorithm has been updated or otherwise altered, such as due to the analysis of additional data, in the intervening time. At block2563, the system correlates the characterization of the vascular morphology to the characterization of the myocardium to provide a patient-specific characterization of the relationship between the vascular morphology characterization and the myocardial characterization at the second point in time. If the information indicative of the state of the portion of the cardiovascular system of the patient at the first point in time includes a plurality of images indicative of the state of the portion of the cardiovascular system of the patient at the first point in time, the system may also correlate the characterization of the vascular morphology to the characterization of the myocardium to provide a patient-specific characterization of the relationship between the vascular morphology characterization and the myocardial characterization at the first point in time. In an embodiment in which two such correlations are made at substantially the same point in time, or in which the information indicative of the state of the portion of the cardiovascular system of the patient at the first point in time includes an indication of a previously determined correlation, the system may compare the correlation at the first point in time to the correlation at the second point in time, to determine if the vasculature or myocardium of the patient has significantly changed. If so, additional analysis regarding the cause for such a change may be performed, either by the system itself, or by a clinical practitioner evaluating the patient who can be alerted to this discrepancy by the system.
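The comparison between the first and second points in time described in this process can be sketched as follows; the snapshot data structure and the change threshold are hypothetical assumptions, and any real implementation would choose its own representation and criteria.

from dataclasses import dataclass
from typing import Dict

@dataclass
class RiskSnapshot:
    study_date: str                          # e.g., an ISO date for the imaging study
    percent_smar_by_lesion: Dict[str, float]

def compare_snapshots(first: RiskSnapshot, second: RiskSnapshot,
                      change_threshold_pct: float = 5.0):
    # Flag lesions whose % SMAR changed by more than an assumed threshold
    # (default 5 percentage points) between the two studies, plus new lesions.
    flagged = {}
    for lesion_id, later_value in second.percent_smar_by_lesion.items():
        earlier_value = first.percent_smar_by_lesion.get(lesion_id)
        if earlier_value is None:
            flagged[lesion_id] = ("new lesion", later_value)
        elif abs(later_value - earlier_value) > change_threshold_pct:
            flagged[lesion_id] = ("changed", later_value - earlier_value)
    return flagged

A lesion flagged in this way corresponds to the kind of significant change that, as described above, could be surfaced to a clinical practitioner for further analysis.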
At block2564, the system calculates a percentage of myocardium at risk from at least one atherosclerotic plaque lesion at the second point in time. If the information indicative of the state of the portion of the cardiovascular system of the patient at the first point in time includes a plurality of images indicative of the state of the portion of the cardiovascular system of the patient at the first point in time, the system may also calculate a percentage of myocardium at risk from at least one atherosclerotic plaque lesion at the first point in time. At block2565, the system compares the percentage of the myocardium at risk at the first point in time to the calculated percentage of the myocardium at risk at the second point in time. In some embodiments, this comparison may provide a practitioner with information regarding the efficacy of an intervening treatment of the patient, such as a stent procedure or the use of statins which can solidify previously fatty plaque deposits. In some embodiments, this comparison may provide a practitioner with information regarding an updated prognosis for the patient based upon more recent characterizations of the atherosclerosis, vascular morphology, and/or myocardium of the patient. In some embodiments, the process may proceed to an additional step where the risk of one or more adverse clinical events can be updated based upon the updated calculated percentage of the myocardium subtended by a given lesion or a plurality of lesions. In addition, where prior imaging information is available, images from different points in time may be fused together or otherwise used to generate a composite image or other representation indicative of changes over time. These changes can in some embodiments be due to interventions such as medication, exercise, or other medical procedures. In some embodiments, as discussed herein, different imaging techniques may be used to characterize the atherosclerosis and vascular morphology than those used to characterize the myocardium of the patient. However, in other embodiments, multiple imaging techniques may be used in any of these individual characterizations, as well. For example, the system may analyze CT imagery to extract information indicative of atherosclerosis, while the system may analyze information extracted from positron emission tomography (PET) imagery to extract information indicative of inflammation. By synthesizing information from multiple imaging modalities, the disclosed technology can be used to enhance the phenotypic richness of the particular portion of the body being characterized. Although described herein primarily in the context of imaging and analysis of the coronary arteries, the systems, methods and devices of the disclosed technology can also be used in the context of other portions of the body, including other arterial beds. For example, the disclosed technology can be used with ultrasound imagery of the carotid arterial bed, the aorta, and the arterial beds of the lower extremities, among other portions of the cardiovascular system of the patient. The disclosed technology may be used with any suitable imaging technology or combination of imaging technologies, including but not limited to CT, ultrasound, MRI, PET, and nuclear testing. Computer System In some embodiments, the systems, processes, and methods described herein are implemented using a computing system, such as the one illustrated inFIG.25G.
The example computer system2572is in communication with one or more computing systems2590and/or one or more data sources2592via one or more networks2586. WhileFIG.25Gillustrates an embodiment of a computing system2572, it is recognized that the functionality provided for in the components and modules of computer system2572can be combined into fewer components and modules, or further separated into additional components and modules. The computer system2572can comprise a Patient-Specific Myocardial Risk Determination Module2584that carries out the functions, methods, acts, and/or processes described herein. The Patient-Specific Myocardial Risk Determination Module2584is executed on the computer system2572by a central processing unit (e.g., one or more hardware processors)2576discussed further below. In general, the word “module,” as used herein, refers to logic embodied in hardware or firmware or to a collection of software instructions, having entry and exit points. Modules are written in a programming language, such as JAVA, C, or C++, or the like. Software modules can be compiled or linked into an executable program, installed in a dynamic link library, or can be written in an interpreted language such as BASIC, PERL, Lua, PHP, or Python, or other such languages. Software modules can be called from other modules or from themselves, and/or can be invoked in response to detected events or interruptions. Modules implemented in hardware include connected logic units such as gates and flip-flops, and/or can include programmable units, such as programmable gate arrays or processors. Generally, the modules described herein refer to logical modules that can be combined with other modules or divided into sub-modules despite their physical organization or storage. The modules are executed by one or more computing systems, and can be stored on or within any suitable computer readable medium, or implemented in whole or in part within specially designed hardware or firmware. Not all calculations, analyses, and/or optimizations require the use of computer systems, though any of the above-described methods, calculations, processes, or analyses can be facilitated through the use of computers. Further, in some embodiments, process blocks described herein can be altered, rearranged, combined, and/or omitted. The computer system2572includes one or more processing units (CPU)2576, which can comprise a microprocessor. The computer system2572further includes a physical memory2580, such as random access memory (RAM) for temporary storage of information, a read only memory (ROM) for permanent storage of information, and a mass storage device2574, such as a backing store, hard drive, rotating magnetic disks, solid state disks (SSD), flash memory, phase-change memory (PCM), 3D XPoint memory, diskette, or optical media storage device. Alternatively, the mass storage device can be implemented in an array of servers. Typically, the components of the computer system2572are connected to the computer using a standards-based bus system. The bus system can be implemented using various protocols, such as Peripheral Component Interconnect (PCI), Micro Channel, SCSI, Industry Standard Architecture (ISA) and Extended ISA (EISA) architectures. The computer system2572includes one or more input/output (I/O) devices and interfaces2582, such as a keyboard, mouse, touch pad, and printer. The I/O devices and interfaces2582can include one or more display devices, such as a monitor, that allow the visual presentation of data to a user.
More particularly, a display device provides for the presentation of GUIs, application software data, and multi-media presentations, for example. The I/O devices and interfaces2582can also provide a communications interface to various external devices. The computer system2572can comprise one or more multi-media devices2578, such as speakers, video cards, graphics accelerators, and microphones, for example. Computing System Device/Operating System The computer system2572can run on a variety of computing devices, such as a server, a Windows server, a Structured Query Language server, a Unix Server, a personal computer, a laptop computer, and so forth. In other embodiments, the computer system2572can run on a cluster computer system, a mainframe computer system and/or other computing system suitable for controlling and/or communicating with large databases, performing high volume transaction processing, and generating reports from large databases. The computing system2572is generally controlled and coordinated by operating system software, such as z/OS, Windows, Linux, UNIX, BSD, PHP, SunOS, Solaris, macOS, iCloud services or other compatible operating systems, including proprietary operating systems. Operating systems control and schedule computer processes for execution, perform memory management, provide file system, networking, and I/O services, and provide a user interface, such as a graphical user interface (GUI), among other things. Network The computer system2572illustrated inFIG.25Gis coupled to a network2588, such as a LAN, WAN, or the Internet via a communication link2586(wired, wireless, or a combination thereof). Network2588communicates with various computing devices and/or other electronic devices. Network2588is in communication with one or more computing systems2590and one or more data sources2592. The Patient-Specific Myocardial Risk Determination Module2584can access or can be accessed by computing systems2590and/or data sources2592through a web-enabled user access point. Connections can be a direct physical connection, a virtual connection, or another connection type. The web-enabled user access point can comprise a browser module that uses text, graphics, audio, video, and other media to present data and to allow interaction with data via the network2588. The output module can be implemented as a combination of an all-points addressable display such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, or other types and/or combinations of displays. The output module can be implemented to communicate with input devices2582and can also include software with the appropriate interfaces which allow a user to access data through the use of stylized screen elements, such as menus, windows, dialogue boxes, tool bars, and controls (for example, radio buttons, check boxes, sliding scales, and so forth). Furthermore, the output module can communicate with a set of input and output devices to receive signals from the user. Other Systems The computing system2572can include one or more internal and/or external data sources (for example, data sources2592). In some embodiments, one or more of the data repositories and the data sources described above can be implemented using a relational database, such as DB2, Sybase, Oracle, CodeBase, and Microsoft® SQL Server as well as other types of databases such as a flat-file database, an entity relationship database, an object-oriented database, and/or a record-based database.
The computer system2572can also access one or more data sources (or databases)2592. The databases2592can be stored in a database or data repository. The computer system2572can access the one or more databases2592through a network2588or can directly access the database or data repository through I/O devices and interfaces2582. The data repository storing the one or more databases2592can reside within the computer system2572. Examples of Embodiments Relating to Myocardial Infarction Risk and Severity from Image-Based Quantification and Characterization of Coronary Atherosclerosis: The following are non-limiting examples of certain embodiments of systems and methods for determining myocardial infarction risk and severity and/or other related features. Other embodiments may include one or more other features, or different features, that are discussed herein. Embodiment 1: A computer-implemented method of determining a myocardial risk factor via an algorithm-based medical imaging analysis, comprising: performing an atherosclerosis and vascular morphology characterization of a portion of the coronary vasculature of a patient using information extracted from medical images of the portion of the coronary vasculature of the patient; performing a characterization of the myocardium of the patient using information extracted from medical images of the myocardium of the patient; correlating the characterized vascular morphology of the patient with the characterized myocardium of the patient; and determining a myocardial risk factor indicative of a degree of myocardial risk from at least one atherosclerotic lesion. Embodiment 2: The method of embodiment 1, wherein performing the atherosclerosis and vascular morphology characterization of the portion of the coronary vasculature of the patient comprises identifying the location of the at least one atherosclerotic lesion. Embodiment 3: The method of embodiment 1 or 2, wherein determining the myocardial risk factor indicative of the degree of myocardial risk from the at least one atherosclerotic lesion comprises determining a percentage of the myocardium at risk from the at least one atherosclerotic lesion. Embodiment 4: The method of embodiment 3, wherein determining a percentage of the myocardium at risk from the at least one atherosclerotic lesion comprises determining the percentage of the myocardium subtended by the at least one atherosclerotic lesion. Embodiment 5: The method of embodiment 3 or 4, wherein determining the myocardial risk factor indicative of the degree of myocardial risk from the at least one atherosclerotic lesion comprises determining an indicator reflective of a likelihood that the at least one atherosclerotic lesion will contribute to a myocardial infarction. Embodiment 6: The method of any one of embodiments 1-5, wherein performing the characterization of the myocardium of the patient comprises performing a characterization of the left ventricular myocardium of the patient. Embodiment 7: The method of any one of embodiments 1-6, further comprising correlating the determined myocardial risk factor to at least one risk of a severe clinical event, and/or correlating the determined myocardial risk factor to the severity of an event (for example, ST-elevation myocardial infarction, non-ST-elevation myocardial infarction, unstable angina, stable angina, and the like).
Embodiment 8: The method of any one of embodiments 1-7, further comprising comparing the determined myocardial risk factor to a second myocardial risk factor indicative of a degree of myocardial risk to the patient at a previous point in time. Embodiment 9: A computer-implemented method of determining a segmental myocardial risk factor via an algorithm-based medical imaging analysis, comprising: characterizing vascular morphology of the coronary vasculature of a patient using information extracted from medical images of the coronary vasculature of the patient; identifying at least one atherosclerotic lesion within the coronary vasculature of the patient using information extracted from medical images of the portion of the coronary vasculature of the patient; characterizing a plurality of segments of the myocardium of the patient to generate a segmented myocardial characterization using information extracted from medical images of the myocardium of the patient; correlating the characterized vascular morphology of the patient with the segmented myocardial characterization of the patient; and generating an indicator of segmented myocardial risk from the at least one atherosclerotic lesion. Embodiment 10: The method of embodiment 9, wherein generating an indicator of segmented myocardial risk comprises generating a discrete indicator of myocardial risk for at least a subset of the plurality of segments of the myocardium. Embodiment 11: The method of embodiment 9 or 10, wherein generating an indicator of segmented myocardial risk comprises generating a discrete indicator of myocardial risk for each of the plurality of segments of the myocardium. Embodiment 12: The method of embodiment 9 or 11, wherein correlating the characterized vascular morphology of the patient with the segmented myocardial characterization of the patient comprises identifying for each of the myocardial segments a coronary artery primarily responsible for supplying oxygenated blood to that myocardial segment. Embodiment 13: The method of any one of embodiments 9-12, wherein the segmented myocardial characterization is segmented into 17 segments according to a standard AHA 17-segment model. Embodiment 14: A computer-implemented method of determining a segmental myocardial risk factor via an algorithm-based medical imaging analysis, comprising: applying at least a first algorithm to a first plurality of images of the coronary vasculature of a patient obtained using a first imaging technology to characterize the vascular morphology of the coronary vasculature of the patient and to identify a plurality of atherosclerotic plaque lesions; applying at least a second algorithm to a first plurality of images of the myocardium of the patient obtained using a second imaging technology to characterize the myocardium of the patient; applying at least a third algorithm to relate the characterized vascular morphology of the patient with the characterized myocardium of the patient; and calculating a percentage of subtended myocardium at risk from at least one of the plurality of identified atherosclerotic plaque lesions. Embodiment 15: The method of embodiment 14, additionally comprising applying an algorithm to a second plurality of images of the coronary vasculature of the patient obtained using a third imaging technology to characterize the vascular morphology of the coronary vasculature of the patient and to identify a plurality of atherosclerotic plaque lesions. 
The third imaging technology can be, for example, intracardiac echocardiography, MRI, or any other suitable technology that can generate images that depict the vascular morphology of the coronary vasculature of the patient and allow identification of a plurality of atherosclerotic plaque lesions. Embodiment 16: The method of embodiment 15, wherein applying an algorithm to a second plurality of images of the coronary vasculature of the patient comprises applying the first algorithm to the second plurality of images of the coronary vasculature of the patient. Embodiment 17: The method of embodiment 14, additionally comprising applying an algorithm to a second plurality of images of the myocardium of the patient obtained using a third imaging technology to characterize the myocardium of the patient. Embodiment 18: The method of any one of embodiments 14-17, wherein applying at least the first algorithm to the first plurality of images of the coronary vasculature of a patient obtained using the first imaging technology additionally comprises determining characteristics of the identified plurality of atherosclerotic plaque lesions. Embodiment 19: The method of embodiment 18, additionally comprising determining a risk of the identified plurality of atherosclerotic plaque lesions contributing to a myocardial infarction, and determining an overall risk indicator based on the determined risk of the identified plurality of atherosclerotic plaque lesions contributing to a myocardial infarction and the calculated percentage of subtended myocardium at risk from the identified plurality of atherosclerotic plaque lesions. Embodiment 20: The method of any one of embodiments 14-19, additionally comprising relating the calculated percentage of subtended myocardium at risk from at least one of the plurality of identified atherosclerotic plaque lesions to a risk of at least one adverse clinical event. Combining CFD-Based Evaluation with Atherosclerosis and Vascular Morphology Various embodiments described herein relate to systems, methods, and devices for medical image analysis, diagnosis, risk stratification, decision making and/or disease tracking. One innovation includes a computer-implemented method of identifying a presence and/or degree of ischemia via an algorithm-based medical imaging analysis, the method including performing a computational fluid dynamics (CFD) analysis of a portion of the coronary vasculature of a patient using imaging data of the portion of the coronary vasculature of the patient, performing a comprehensive atherosclerosis and vascular morphology characterization of the portion of the coronary vasculature of the patient using coronary computed tomographic angiography (CCTA) of the portion of the coronary vasculature of the patient, applying an algorithm that integrates the CFD analysis and the atherosclerosis and vascular morphology characterization to provide an indication of the presence and/or degree of ischemia within the portion of the coronary vasculature of the patient on a pixel-by-pixel basis, the algorithm providing an indication of the presence and/or degree of ischemia for a given pixel based upon an analysis of the given pixel, the surrounding pixels, and a vessel of the portion of the coronary vasculature of the patient with which the pixel is associated.
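A simplified sketch of a per-pixel integration of this kind is shown below; the particular features, weights, logistic combination, and neighborhood smoothing are assumptions chosen only to illustrate how a CFD-derived quantity and local atherosclerosis characteristics could be combined into a single ischemia indication per pixel, and they do not describe the specific algorithm of any embodiment.

import numpy as np
from scipy.ndimage import uniform_filter

def ischemia_likelihood_map(ffr_map, plaque_burden_map, vessel_id_map, per_vessel_stenosis):
    # ffr_map: 2D array of CFD-derived, FFR-like values in [0, 1]
    # plaque_burden_map: 2D array of local plaque burden in [0, 1]
    # vessel_id_map: 2D integer array assigning each pixel to a vessel (0 = none)
    # per_vessel_stenosis: dict mapping vessel id -> percent diameter stenosis
    # All weights below are illustrative assumptions.
    stenosis_map = np.zeros_like(ffr_map)
    for vessel_id, stenosis_pct in per_vessel_stenosis.items():
        stenosis_map[vessel_id_map == vessel_id] = stenosis_pct / 100.0

    # Account for the surrounding pixels with a simple 3x3 local average.
    local_burden = uniform_filter(plaque_burden_map, size=3)

    # Lower FFR, higher local plaque burden, and higher stenosis in the associated
    # vessel all increase the score; a logistic function maps it to [0, 1].
    score = 3.0 * (1.0 - ffr_map) + 1.5 * local_burden + 2.0 * stenosis_map
    likelihood = 1.0 / (1.0 + np.exp(-(score - 2.0)))
    return np.where(vessel_id_map > 0, likelihood, 0.0)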
Performing a computational fluid dynamics (CFD) analysis can include generating a model of the portion of the coronary vasculature of the patient based at least in part on coronary computed tomographic angiography (CCTA) of the portion of the coronary vasculature of the patient. Performing a CFD analysis can include generating a model of the portion of the coronary vasculature of the patient based at least in part on the atherosclerosis and vascular morphology characterization of the portion of the coronary vasculature of the patient. Performing a CFD analysis can include computing a fractional flow reserve model of the portion of the coronary vasculature of the patient. Performing a comprehensive atherosclerosis and vascular morphology characterization of the portion of the coronary vasculature of the patient can include determining one or more vascular morphology parameters and a set of quantified plaque parameters. Performing a CFD analysis of a portion of the coronary vasculature of a patient can include generating a CFD-based indication of the presence and/or degree of ischemia within the portion of the coronary vasculature of the patient on a pixel-by-pixel basis. Applying the algorithm that integrates the CFD analysis and the atherosclerosis and vascular morphology characterization to provide an indication of the presence and/or degree of ischemia within the portion of the coronary vasculature of the patient on a pixel-by-pixel basis can include providing an indication of agreement with the CFD-based indication of the presence and/or degree of ischemia within the portion of the coronary vasculature of the patient on a pixel-by-pixel basis. In some embodiments, information generated from the CFD analysis and information related to one or more vascular morphology parameters and/or a set of quantified plaque parameters can be input into a ML algorithm to assess the risk of CAD or MI. In an example, the ML algorithm compares information from the CFD analysis and/or the information related to one or more vascular morphology parameters and/or a set of quantified plaque parameters to a database of patient information to assess or determine a risk of CAD or MI. In an example, the ML algorithm compares information from the CFD analysis and/or the information related to one or more vascular morphology parameters and/or a set of quantified plaque parameters to a database of patient information to assess or determine the presence and/or severity of ischemia. In an example, the ML algorithm can also use patient specific information that can include age, gender, race, BMI, medication, blood pressure, heart rate, weight, height, body habitus, smoking, diabetes, hypertension, prior CAD, family history, and/or lab test results to compare CFD and one or more vascular morphology parameters and/or a set of quantified plaque parameters of the patient being evaluated to patients in a database to assess or determine the presence and/or severity of ischemia, and/or to assess or determine a risk of CAD or MI. Applying an algorithm that integrates the CFD analysis and the atherosclerosis and vascular morphology characterization to provide an indication of the presence and/or degree of ischemia within the portion of the coronary vasculature of the patient on a pixel-by-pixel basis can include analyzing variation in coronary volume, area, and/or diameter over the entirety of a cardiac cycle. 
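As a sketch of how CFD-derived measures and quantified plaque and vascular morphology parameters might be assembled into a feature vector and compared against a database of prior patients, consider the following minimal Python illustration. It is hypothetical only: the feature names, the synthetic reference database, and the choice of a simple k-nearest-neighbor vote are assumptions made for the example and are not asserted to be the ML algorithm of the described embodiments.

```python
# Minimal sketch (assumptions): combine CFD-derived and plaque/morphology
# features into one vector and compare against a reference "database" of
# prior patients using a k-nearest-neighbor vote. Feature names, values,
# and the k-NN choice are illustrative only.
import numpy as np

# Hypothetical feature order: [min FFR from CFD, total plaque volume (mm^3),
# low-density non-calcified plaque volume (mm^3), lumen volume (mm^3), age]
patient = np.array([0.78, 310.0, 42.0, 1450.0, 61.0])

# Synthetic reference database: rows are prior patients, labels 1 = ischemia.
database = np.array([
    [0.92, 120.0, 5.0, 1800.0, 54.0],
    [0.74, 420.0, 60.0, 1300.0, 66.0],
    [0.81, 260.0, 30.0, 1500.0, 59.0],
    [0.70, 510.0, 85.0, 1200.0, 70.0],
    [0.95, 90.0, 2.0, 1900.0, 48.0],
])
labels = np.array([0, 1, 0, 1, 0])

# Z-score normalize so no single feature dominates the distance.
mu, sigma = database.mean(axis=0), database.std(axis=0) + 1e-9
db_z = (database - mu) / sigma
pt_z = (patient - mu) / sigma

# k-nearest neighbors by Euclidean distance, majority vote on the label.
k = 3
dist = np.linalg.norm(db_z - pt_z, axis=1)
nearest = np.argsort(dist)[:k]
risk_score = labels[nearest].mean()
print(f"Nearest neighbors: {nearest.tolist()}, ischemia risk score: {risk_score:.2f}")
```

In a real system the reference database and the comparison model would of course be far larger and trained, but the structure of the input, a single vector mixing CFD-derived and plaque-derived values, is the point of the sketch.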
Analyzing variation in coronary volume, area, and/or diameter over the entirety of a cardiac cycle can include analyzing an effect of identified atherosclerotic plaque within a wall of an artery on the deformation of the artery. In one aspect, a computer-implemented method for non-invasively estimating blood flow characteristics to assess the severity of plaque and/or stenotic lesions using contrast distribution predictions and measurements is provided, the method including generating and outputting an initial indicia of a severity of the plaque or stenotic lesion using one or more calculated blood flow characteristics, where generating and outputting the initial indicia of a severity of the plaque or stenotic lesion includes receiving one or more patient-specific images and/or anatomical characteristics of at least a portion of a patient's vasculature, receiving images reflecting a measured distribution of a contrast agent delivered through the patient's vasculature, projecting one or more contrast values of the measured distribution of the contrast agent to one or more points of a patient-specific anatomic model of the patient's vasculature generated using the received patient-specific images and/or the received anatomical characteristics, thereby creating a patient-specific measured model indicative of the measured distribution, defining one or more physiological and boundary conditions of a blood flow to non-invasively simulate a distribution of the contrast agent through the patient-specific anatomic model of the patient's vasculature, simulating, using a processor, the distribution of the contrast agent through the one or more points of the patient-specific anatomic model using the defined one or more physiological and boundary conditions and the received patient-specific images and/or anatomical characteristics, thereby creating a patient-specific simulated model indicative of the simulated distribution, comparing, using a processor, the patient-specific measured model and the patient-specific simulated model to determine whether a similarity condition is satisfied, updating the defined physiological and boundary conditions and re-simulating the distribution of the contrast agent through the one or more points of the patient-specific anatomic model until the similarity condition is satisfied, calculating, using a processor, one or more blood flow characteristics of blood flow through the patient-specific anatomic model using the updated physiological and boundary conditions, and generating and outputting the initial indicia of a severity of the plaque or stenotic lesion using the one or more blood flow characteristics of blood flow that were calculated using the updated physiological and boundary conditions, performing a comprehensive atherosclerosis and vascular morphology characterization of the portion of the patient's vasculature using coronary computed tomographic angiography (CCTA) of the portion of the patient's vasculature, and applying an algorithm that integrates the initial indicia of a severity of the plaque or stenotic lesion and the atherosclerosis and vascular morphology characterization to provide an indication of the presence and/or degree of ischemia within the portion of the patient's vasculature on a pixel-by-pixel basis.
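The iterative structure of the method just described, simulating the contrast distribution, comparing it with the measured distribution, and updating the physiological and boundary conditions until a similarity condition is satisfied, can be sketched in a few lines. This is a toy illustration only: the simulate_contrast function stands in for the actual transport simulation, and the single boundary-flow parameter, the derivative-free update rule, and the tolerance are assumptions; no claim is made that this reflects the optimization actually used in the described embodiments.

```python
# Toy sketch of the simulate/compare/update loop (assumptions throughout):
# a single boundary-flow parameter is tuned until the simulated contrast
# profile matches a "measured" profile within a specified tolerance.
import numpy as np

positions = np.linspace(0.0, 1.0, 50)          # normalized centerline position
measured = np.exp(-2.0 * positions)            # stand-in for measured contrast

def simulate_contrast(boundary_flow, x):
    """Placeholder for the contrast-transport simulation; the decay rate
    depends on the assumed boundary-flow parameter."""
    return np.exp(-boundary_flow * x)

boundary_flow = 0.5      # initial physiological/boundary condition (assumed)
tolerance = 1e-3         # similarity condition: RMS difference threshold
step = 0.25

for iteration in range(200):
    simulated = simulate_contrast(boundary_flow, positions)
    rms_diff = np.sqrt(np.mean((simulated - measured) ** 2))
    if rms_diff < tolerance:
        break
    # Simple derivative-free update: probe both directions, keep the better one.
    up = np.sqrt(np.mean((simulate_contrast(boundary_flow + step, positions) - measured) ** 2))
    down = np.sqrt(np.mean((simulate_contrast(boundary_flow - step, positions) - measured) ** 2))
    boundary_flow += step if up < down else -step
    step *= 0.9          # shrink the probe so the search settles

final_rms = np.sqrt(np.mean((simulate_contrast(boundary_flow, positions) - measured) ** 2))
print(f"boundary_flow={boundary_flow:.3f}, RMS difference={final_rms:.4f} after {iteration + 1} iterations")
```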
The algorithm can provide an indication of the presence and/or degree of ischemia for a given pixel based upon an analysis of the given pixel, the surrounding pixels, and a vessel of the portion of the coronary vasculature of the patient with which the pixel is associated. Prior to simulating the distribution of the contrast agent in the patient-specific anatomic model for the first time, defining one or more physiological and boundary conditions can include finding form or functional relationships between the vasculature represented by the anatomic model and physiological characteristics found in populations of patients with a similar vascular anatomy. Prior to simulating the distribution of the contrast agent in the patient-specific anatomic model for the first time, defining one or more physiological and boundary conditions can include one or more of assigning an initial contrast distribution, or assigning boundary conditions related to a flux of the contrast agent (i) at one or more of vessel walls, outlet boundaries, or inlet boundaries, or (ii) near plaque and/or stenotic lesions. The blood flow characteristics can include one or more of, a blood flow velocity, a blood pressure, a heart rate, a fractional flow reserve (FFR) value, a coronary flow reserve (CFR) value, a shear stress, or an axial plaque stress. Receiving one or more patient-specific images can include receiving one or more images from coronary angiography, biplane angiography, 3D rotational angiography, computed tomography (CT) imaging, magnetic resonance (MR) imaging, ultrasound imaging, or a combination thereof. The patient-specific anatomic model can be a reduced-order model in the two-dimensional anatomical domain, and projecting the one or more contrast values can include averaging one or more contrast values over one or more cross sectional areas of a vessel. The patient-specific anatomic model can include information related to the vasculature, including one or more of a geometrical description of a vessel, including the length or diameter, a branching pattern of a vessel, one or more locations of any stenotic lesions, plaque, occlusions, or diseased segments, or one or more characteristics of diseases on or within vessels, including material properties of stenotic lesions, plaque, occlusions, or diseased segments. The physiological conditions can be measured, obtained, or derived from computational fluid dynamics or the patient-specific anatomic model, and can include one or more of, blood pressure flux, blood velocity flux, the flux of the contrast agent, baseline heart rate, geometrical and material characteristics of the vasculature, or geometrical and material characteristics of plaque and/or stenotic lesions, and where the boundary conditions define physiological relationships between variables at boundaries of a region of interest, where the boundaries can include one or more of, inflow boundaries, outflow boundaries, vessel wall boundaries, or boundaries of plaque and/or stenotic lesions.
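As one concrete illustration of the projection step mentioned above, averaging contrast values over cross-sectional areas when the anatomic model is reduced-order, the following hypothetical sketch averages per-voxel contrast values that have already been assigned to numbered cross-sections along a vessel centerline. The data layout and the section labels are assumptions made for the example.

```python
# Sketch (assumed data layout): each voxel sample carries a cross-section
# index along the vessel centerline and a contrast value; the reduced-order
# model keeps one averaged contrast value per cross-section.
import numpy as np

# (cross_section_id, contrast_value) pairs, e.g. extracted from CCTA voxels.
section_ids = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2, 3, 3])
contrast = np.array([310., 325., 298., 290., 305., 260., 255., 270., 262., 240., 238.])

n_sections = section_ids.max() + 1
sums = np.bincount(section_ids, weights=contrast, minlength=n_sections)
counts = np.bincount(section_ids, minlength=n_sections)
mean_contrast_per_section = sums / counts

for i, value in enumerate(mean_contrast_per_section):
    print(f"cross-section {i}: mean contrast {value:.1f}")
```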
The simulating, using the processor, of the distribution of the contrast agent for the one or more points in the patient-specific anatomic model using the defined one or more physiological and boundary conditions can include one or more of determining scalar advection-diffusion equations governing the transport of the contrast agent in the patient-specific anatomic model, the equations governing the transport of the contrast agent reflecting any changes in a ratio of flow to lumen area at or near a stenotic lesion or plaque, or computing a concentration of the contrast agent for the one or more points of the patient-specific anatomic model, where computing the concentration requires assignment of an initial contrast distribution and initial physiological and boundary conditions. Satisfying a similarity condition can include specifying, prior to simulating the distribution of the contrast agent, a tolerance for measuring differences between the measured distribution of the contrast agent and the simulated distribution of the contrast agent, and determining whether the difference between the measured distribution of the contrast agent and the simulated distribution of the contrast agent falls within the specified tolerance, the similarity condition being satisfied if the difference falls within the specified tolerance. Updating the defined physiological and boundary conditions and re-simulating the distribution of the contrast agent can include mapping a concentration of the contrast agent along vessels with one or more of features derived from an analytic approximation of an advection-diffusion equation describing the transport of fluid in one or more vessels of the patient-specific anatomic model, features describing the geometry of the patient-specific anatomic model, including, one or more of, a lumen diameter of a plaque or stenotic lesion, a length of a segment afflicted with a plaque or stenotic lesion, a vessel length, or the area of a plaque or stenotic lesion, or features describing a patient-specific dispersivity of the contrast agent. Updating the defined physiological and boundary conditions and re-simulating the distribution of the contrast agent can include using one or more of a derivative-free optimization based on nonlinear ensemble filtering, or a gradient-based optimization that uses finite difference or adjoint approximation. The method can further include, upon a determination that the measured distribution of the contrast agent and the simulated distribution of the contrast agent satisfy the similarity condition, enhancing the received patient-specific images using the simulated distribution of the contrast agent, and outputting the enhanced images as one or more medical images to an electronic storage medium or display. Enhancing the received patient-specific images can include one or more of replacing pixel values with the simulated distribution of the contrast agent, or using the simulated distribution of the contrast agent to de-noise the received patient-specific images via a conditional random field. The method can further include,
upon a determination that the measured distribution of the contrast agent and the simulated distribution of the contrast agent satisfies the similarity condition, using the calculated blood flow characteristics associated with the simulated distribution of the contrast agent to simulate perfusion of blood in one or more areas of the patient-specific anatomic model, generating a model or medical image representing the perfusion of blood in one or more areas of the patient-specific anatomic model, and outputting the model or medical image representing the perfusion of blood in one or more areas of the patient-specific anatomic model to an electronic storage medium or display. The patient-specific anatomic model can be represented in a three-dimensional anatomical domain, and projecting the one or more contrast values can include assigning contrast values for each point of a three-dimensional finite element mesh. Performing a comprehensive atherosclerosis and vascular morphology characterization of the portion of the patient's vasculature using coronary computed tomographic angiography (CCTA) of the portion of the patient's vasculature can include generating image information for the patient, the image information including image data of computed tomography (CT) scans along a vessel of the patient, and radiodensity values of coronary plaque and radiodensity values of perivascular tissue located adjacent to the coronary plaque, and determining, using the image information of the patient, coronary plaque information of the patient, where determining the coronary plaque information can include quantifying, using the image information, radiodensity values in a region of coronary plaque of the patient, quantifying, using the image information, radiodensity values in a region of perivascular tissue adjacent to the region of coronary plaque of the patient, and generating metrics of coronary plaque of the patient using the quantified radiodensity values in the region of coronary plaque and the quantified radiodensity values in the region of perivascular tissue adjacent to the region of coronary plaque. The method can further include accessing a database of coronary plaque information and characteristics of other people, the coronary plaque information in the database including metrics generated from radiodensity values of a region of coronary plaque in the other people and radiodensity values of perivascular tissue adjacent to the region of coronary plaque in the other people, and the characteristics of the other people including information at least of age, sex, race, diabetes, smoking, and prior coronary artery disease, and characterizing the coronary plaque information of the patient by comparing the metrics of the coronary plaque information and characteristics of the patient to the metrics of the coronary plaque information of other people in the database having one or more of the same characteristics, where characterizing the coronary plaque information can include identifying the coronary plaque as a high risk plaque. Characterizing the coronary plaque can include identifying the coronary plaque as a high risk plaque if it is likely to cause ischemia based on a comparison of the coronary plaque information and characteristics of the patient to the coronary plaque information and characteristics of the other people in the database. 
The characterization of coronary plaque as high risk plaque can be used to provide an indication of the presence and/or degree of ischemia within a portion of the patient's vasculature in at least one pixel adjacent the coronary plaque. Characterizing the coronary plaque can include identifying the coronary plaque as a high risk plaque if it is likely to cause vasospasm based on a comparison of the coronary plaque information and characteristics of the patient to the coronary plaque information and characteristics of the other people in the database. Characterizing the coronary plaque can include identifying the coronary plaque as a high risk plaque if it is likely to rapidly progress based on a comparison of the coronary plaque information and characteristics of the patient to the coronary plaque information and characteristics of the other people in the database. Generating metrics using the quantified radiodensity values in the region of coronary plaque and the quantified radiodensity values in a region of perivascular tissue adjacent to the region of coronary plaque of the patient can include determining, along a line, a slope value of the radiodensity values of the coronary plaque and a slope value of the radiodensity values of the perivascular tissue adjacent to the coronary plaque. Generating metrics can further include determining a ratio of the slope value of the radiodensity values of the coronary plaque and a slope value of the radiodensity values of the perivascular tissue adjacent to the coronary plaque. Generating metrics using the quantified radiodensity values in the region of coronary plaque and the quantified radiodensity values in a region of perivascular tissue adjacent to the region of coronary plaque of the patient can include generating, using the image information, a ratio between quantified radiodensity values of the coronary plaque and quantified radiodensity values of the corresponding perivascular tissue. The perivascular tissue can be perivascular fat, and generating metrics using the quantified radiodensity values in the region of coronary plaque and the quantified radiodensity values in the region of perivascular tissue adjacent to the region of coronary plaque of the patient can include generating a ratio of a density of the coronary plaque and a density of the perivascular fat. The perivascular tissue can be a coronary artery, and generating metrics using the quantified radiodensity values in the region of coronary plaque and the quantified radiodensity values in the region of perivascular tissue adjacent to the region of coronary plaque of the patient can include generating a ratio of a density of the coronary plaque and a density of the coronary artery. Generating the ratio can include generating the ratio of a maximum radiodensity value of the coronary plaque and a maximum radiodensity value of the perivascular fat. Generating the ratio can include generating a ratio of a minimum radiodensity value of the coronary plaque and a minimum radiodensity value of the perivascular fat. Generating the ratio can include generating a ratio of a maximum radiodensity value of the coronary plaque and a minimum radiodensity value of the perivascular fat. Generating the ratio can include generating a ratio of a minimum radiodensity value of the coronary plaque and a maximum radiodensity value of the perivascular fat. Various examples described elsewhere herein are directed to systems, methods, and devices for medical image analysis, diagnosis, risk stratification, decision making and/or disease tracking.
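A short sketch of the kinds of metrics described above, a radiodensity slope along a line through the plaque, the corresponding slope through the adjacent perivascular tissue, the ratio of those slopes, and max/min density ratios, is given below. The sampled Hounsfield-unit values are fabricated for illustration, and the particular metric definitions shown are assumptions rather than the exact metrics of the embodiments.

```python
# Sketch (illustrative values): slope and ratio metrics from radiodensity
# samples taken along a line through coronary plaque and through the
# adjacent perivascular tissue (e.g., perivascular fat).
import numpy as np

distance_mm = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])   # position along the line
plaque_hu = np.array([60.0, 72.0, 85.0, 95.0, 110.0, 118.0])
perivascular_hu = np.array([-80.0, -76.0, -73.0, -70.0, -66.0, -62.0])

# Slopes (HU per mm) from a least-squares line fit along the sampling line.
plaque_slope = np.polyfit(distance_mm, plaque_hu, 1)[0]
perivascular_slope = np.polyfit(distance_mm, perivascular_hu, 1)[0]
slope_ratio = plaque_slope / perivascular_slope

# Ratios of extreme radiodensity values (plaque vs. perivascular tissue).
max_to_max = plaque_hu.max() / perivascular_hu.max()
min_to_min = plaque_hu.min() / perivascular_hu.min()

print(f"plaque slope: {plaque_slope:.1f} HU/mm, perivascular slope: {perivascular_slope:.1f} HU/mm")
print(f"slope ratio: {slope_ratio:.2f}, max/max ratio: {max_to_max:.2f}, min/min ratio: {min_to_min:.2f}")
```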
In some embodiments, the systems, devices, and methods described are configured to utilize non-invasive medical imaging technologies, such as a CT image for example, which can be inputted into a computer system configured to automatically and/or dynamically analyze the medical image to identify one or more coronary arteries and/or plaque within the same. For example, in some embodiments, the system can be configured to utilize one or more machine learning and/or artificial intelligence algorithms to automatically and/or dynamically analyze a medical image to identify, quantify, and/or classify one or more coronary arteries and/or plaque. In some embodiments, the system can be further configured to utilize the identified, quantified, and/or classified one or more coronary arteries and/or plaque to generate a treatment plan, track disease progression, and/or generate a patient-specific medical report, for example using one or more artificial intelligence and/or machine learning algorithms. In some embodiments, the system can be further configured to dynamically and/or automatically generate a visualization of the identified, quantified, and/or classified one or more coronary arteries and/or plaque, for example in the form of a graphical user interface. Further, in some embodiments, to calibrate medical images obtained from different medical imaging scanners and/or different scan parameters or environments, the system can be configured to utilize a normalization device comprising one or more compartments of one or more materials. As will be discussed in further detail, the systems, devices, and methods described allow for automatic and/or dynamic quantified analysis of various parameters relating to plaque, cardiovascular arteries, and/or other structures. More specifically, in some embodiments described herein, a medical image of a patient, such as a coronary CT image, can be taken at a medical facility. Rather than having a physician eyeball or make a general assessment of the patient, the medical image is, in some embodiments, transmitted to a backend main server that is configured to conduct one or more analyses thereof in a reproducible manner. As such, in some embodiments, the systems, methods, and devices described herein can provide a quantified measurement of one or more features of a coronary CT image using automated and/or dynamic processes. For example, in some embodiments, the main server system can be configured to identify one or more vessels, plaque, and/or fat from a medical image. Based on the identified features, in some embodiments, the system can be configured to generate one or more quantified measurements from a raw medical image, such as for example radiodensity of one or more regions of plaque, identification of stable plaque and/or unstable plaque, volumes thereof, surface areas thereof, geometric shapes, heterogeneity thereof, and/or the like. In some embodiments, the system can also generate one or more quantified measurements of vessels from the raw medical image, such as for example diameter, volume, morphology, and/or the like. Based on the identified features and/or quantified measurements, in some embodiments, the system can be configured to generate a risk assessment and/or track the progression of a plaque-based disease or condition, such as for example atherosclerosis, stenosis, and/or ischemia, using raw medical images.
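To make the idea of such quantified measurements concrete, the sketch below computes a few of the values mentioned above, volumes of plaque falling into assumed "stable" (calcified) and "unstable" (low-attenuation) radiodensity ranges and their fraction of the total vessel volume, from labeled voxel arrays. The Hounsfield-unit cutoffs, the voxel spacing, and the synthetic masks are assumptions for illustration only and are not the thresholds of the described embodiments.

```python
# Sketch (assumptions: HU cutoffs, voxel spacing, synthetic data): quantify
# plaque volumes by radiodensity range within a segmented vessel volume.
import numpy as np

rng = np.random.default_rng(0)
hu = rng.normal(150, 200, size=(20, 20, 20))     # synthetic HU volume

vessel_mask = np.zeros_like(hu, dtype=bool)
vessel_mask[5:15, 5:15, :] = True                # pretend vessel segmentation
plaque_mask = np.zeros_like(hu, dtype=bool)
plaque_mask[7:12, 7:12, 5:15] = True             # pretend plaque region between
                                                 # vessel wall and lumen wall

voxel_volume_mm3 = 0.4 * 0.4 * 0.5               # assumed voxel spacing

# Illustrative radiodensity ranges (not clinical definitions):
low_attenuation = plaque_mask & (hu < 30)        # "unstable" low-attenuation plaque
calcified = plaque_mask & (hu >= 350)            # "stable" calcified plaque

total_vessel_mm3 = vessel_mask.sum() * voxel_volume_mm3
unstable_mm3 = low_attenuation.sum() * voxel_volume_mm3
stable_mm3 = calcified.sum() * voxel_volume_mm3

print(f"vessel volume: {total_vessel_mm3:.1f} mm^3")
print(f"low-attenuation plaque: {unstable_mm3:.1f} mm^3 ({unstable_mm3 / total_vessel_mm3:.1%} of vessel)")
print(f"calcified plaque: {stable_mm3:.1f} mm^3 ({stable_mm3 / total_vessel_mm3:.1%} of vessel)")
```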
Further, in some embodiments, the system can be configured to generate a visualization or GUI of one or more identified features and/or quantified measurements, such as a quantized color mapping of different features. In some embodiments, the systems, devices, and methods described herein are configured to utilize medical image-based processing to assess for a subject his or her risk of a cardiovascular event, major adverse cardiovascular event (MACE), rapid plaque progression, and/or non-response to medication. In particular, in some embodiments, the system can be configured to automatically and/or dynamically assess such health risk of a subject by analyzing only non-invasively obtained medical images. In some embodiments, one or more of the processes can be automated using an AI and/or ML algorithm. In some embodiments, one or more of the processes described herein can be performed within minutes in a reproducible manner. This is in stark contrast to existing measures today which do not produce reproducible prognosis or assessment, take extensive amounts of time, and/or require invasive procedures. As such, in some embodiments, the systems, devices, and methods described are able to provide physicians and/or patients specific quantified and/or measured data relating to a patient's plaque that do not exist today. For example, in some embodiments, the system can provide a specific numerical value for the volume of stable and/or unstable plaque, the ratio thereof against the total vessel volume, percentage of stenosis, and/or the like, using for example radiodensity values of pixels and/or regions within a medical image. In some embodiments, such a detailed level of quantified plaque parameters from image processing and downstream analytical results can provide more accurate and useful tools for assessing the health and/or risk of patients in completely novel ways. Additional information regarding the quantification of detailed plaque data information is described in U.S. Pat. No. 10,813,612 (for example, including but not limited to description relating to FIGS. 6, 7, and 9-12) which is incorporated by reference herein. The characterization of atherosclerosis and vascular morphology, and other data indicative of the state of the vessels of the patient and the behavior of those vessels, can be combined with or otherwise used to augment or improve various types of cardiovascular analysis or monitoring. By providing a detailed level of quantified plaque parameters, a more precise patient-specific model can be generated and used in conjunction with computational fluid dynamics (CFD) and/or fluid-structure interaction (FSI) analysis to evaluate patient-specific coronary pressure and flow.
Overview of Ischemia Identification
Patients with coronary artery disease (CAD) are susceptible to coronary ischemia, in which a coronary vessel exhibits reduced coronary pressure and/or flow. In patients with symptoms suggestive of coronary artery disease, the identification of coronary ischemia, or the exclusion of coronary ischemia, can be helpful in evaluating the coronary artery disease and determining a recommended treatment. In particular, the identification of coronary ischemia can indicate a need for invasive treatment, such as invasive coronary angiography with intended coronary revascularization. Historically, the presence, extent, and severity of ischemia have been determined through stress testing.
This stress testing can be performed without imaging, or can be performed in conjunction with imaging of the patient. In this stress testing, surrogate or actual measures of myocardial blood flow taken when the patient is at rest are compared to measures taken when the patient is in a 'stress' state. These stress states can be achieved through exercise, or can be brought about by pharmacologic vasodilation. Recently, coronary computed tomographic angiography (CCTA) has been introduced as an alternative to stress testing. CCTA allows direct visualization of coronary arteries in a non-invasive fashion. CCTA demonstrates high diagnostic performance for the detection or exclusion of high-grade coronary lesions, such as coronary stenoses where the vessel is abnormally narrowed, which may be the cause of ischemia. Using diagnostic catheterization, the fractional flow reserve (FFR) of an observed lesion can be directly measured. Prior studies have demonstrated a high rate of "false positives" when severe lesions are detected by CCTA and used as an indicator of coronary ischemia. In such cases, these detected lesions are not functionally significant, and do not, in fact, cause ischemia as assessed by invasive fractional flow reserve. Because these "false positives" may result in the use of invasive and unnecessary procedures for the purposes of confirming and treating these lesions, it is desirable to improve the diagnostic performance of CCTA-based analysis in the detection of coronary ischemia and other conditions. A variety of techniques have been introduced which leverage CCTA findings for the determination of coronary ischemia. In some techniques, CCTA can be used in conjunction with stress testing. In some techniques, CCTA can be used to calculate a transarterial attenuation gradient, which can be used to determine an estimate of a pressure gradient or fractional flow reserve for the patient. In some techniques, computational fluid dynamics can be applied to CCTA in order to provide a three-dimensional evaluation of coronary pressure and/or flow in a patient-specific fashion.
Computational Fluid Dynamics (CFD) Analysis
CFD can be used to evaluate coronary pressure and/or flow for a given vessel geometry and boundary conditions based on the solving of the Navier-Stokes equations, or similar analysis. This information can be used, for example, to determine the functional significance of a coronary lesion, such as whether the lesion impacts blood flow, and the degree to which the blood flow is impacted by the lesion. In addition, this information can be used in a predictive manner, such as to predict changes in coronary blood flow, pressure, or myocardial perfusion under other states such as during exercise or when the patient is otherwise under a stress state. This information can also be used to predict the outcome of treatments or other interventions. Early CFD-based analysis of the cardiovascular system was used to model complex cerebral vasculature. An overview of the early development of computerized fluid dynamics analysis as applied to the evaluation of cerebral circulation is described in U.S. Pat. No. 7,191,110, which is incorporated by reference herein in its entirety.
In addition to the fluid dynamics modules that were used to model vasculature, including cerebral vasculature, electrical models were also built based on the similarity of the governing equations of electrical circuits and one-dimensional linear flow, due to the suitability of electrical networks for simulating networks with capacitance and resistance. Transmission line equations similar to the linearized Navier-Stokes equation and vessel wall deformation were used to simulate the pulsatile flow and flexible vessel wall. The limitations of computing capacity during early use of CFD-based analysis placed significant restrictions on the detail that could be included in a practical implementation of CFD-based analysis for a given patient. As a result, early CFD-based analysis of portions of the cardiovascular system of a patient included assumptions which simplified the overall model, such as treating the vessel walls as a rigid tube, and treating the blood as a non-compressible Newtonian fluid. Similar methods were applied to the modeling and evaluation of blood flow in the coronary arteries and adjacent portions of the cardiovascular system. For example, U.S. Pat. No. 8,386,188, which is incorporated by reference herein in its entirety, describes methods for modeling portions of the cardiovascular system of a patient using patient-specific imaging data (for example, including but not limited to, as described in reference to FIG. 2) and generating a three-dimensional model representing at least a portion of the patient's cardiovascular system using the patient-specific data (for example, including but not limited to, as described in reference to FIG. 3-24). The CFD analysis can be based at least in part on a three-dimensional model of a portion of the cardiovascular system of the patient, such as a portion of the patient's heart. For example, the three-dimensional model can include the aorta, some or all of the main coronary arteries, and/or other vessels downstream of the main coronary arteries. In some embodiments, the three-dimensional model can include, or can be used to generate, a volumetric mesh such as a finite element mesh or a finite volume mesh. In some embodiments, this model can be generated using information obtained from a CCTA, although other imaging techniques, such as magnetic resonance imaging or ultrasound can also be used. The model can be dynamic, indicative of the changes in vessel shape over a cardiac cycle. The geometric dimensions of the model can be used to determine the boundary conditions of the vessel walls. In addition, the boundary conditions at the inlet and the outlet of the section(s) to be analyzed can also be assigned in any suitable manner, such as by coupling a model to the boundary. Noninvasive measurements such as cardiac output, blood pressure, and myocardial mass can be used in assigning the inlet or outlet boundary conditions. As described in U.S. Pat. Nos. 7,191,110 and 8,386,188, reduced order models of portions of the patient's vasculature may be generated and used in the CFD analysis, to reduce computing load and to determine boundary conditions for more robustly modeled portions of the patient's vasculature. The CFD analysis can be used to determine blood flow characteristics for the entire modeled portion of the cardiovascular system of the patient, or for one or more sections within the modeled portion. 
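While the full CFD analysis described above solves the Navier-Stokes equations over a volumetric mesh, the reduced-order idea can be illustrated with a drastically simplified one-dimensional sketch in which each vessel segment is treated as a Poiseuille resistance and a pressure drop is accumulated along the vessel for an assumed flow rate. The geometry, viscosity, flow rate, and inlet pressure below are all assumptions, and this toy calculation is not a substitute for the modeling described in the incorporated references.

```python
# Reduced-order sketch (assumptions: Poiseuille flow, rigid walls, constant
# flow rate): accumulate pressure drop along vessel segments and report the
# ratio of distal to inlet (aortic) pressure.
import math

mu = 0.0035                      # blood viscosity, Pa*s (assumed Newtonian)
flow_m3_s = 3.0e-6               # assumed hyperemic segment flow rate, m^3/s
p_aorta_mmhg = 93.0              # assumed mean aortic pressure

# (length in m, radius in m) per segment; the narrow third segment mimics a stenosis.
segments = [(0.02, 1.6e-3), (0.02, 1.5e-3), (0.01, 0.5e-3), (0.02, 1.4e-3)]

p_mmhg = p_aorta_mmhg
for i, (length, radius) in enumerate(segments):
    resistance = 8.0 * mu * length / (math.pi * radius ** 4)    # Pa*s/m^3
    drop_pa = resistance * flow_m3_s
    p_mmhg -= drop_pa / 133.322                                  # Pa -> mmHg
    print(f"segment {i}: radius {radius * 1e3:.2f} mm, pressure {p_mmhg:.1f} mmHg")

print(f"distal-to-aortic pressure ratio: {p_mmhg / p_aorta_mmhg:.2f}")
```

Poiseuille resistance underestimates the losses across a real stenosis, which is one reason the full three-dimensional analysis described in the text is used in practice.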
In some embodiments, the determined blood flow characteristics can include some or all of the blood flow velocity, pressure, flow rate, or FFR at various locations throughout the modeled portion of the cardiovascular system of the patient. Other conditions and parameters may also be calculated, such as shear stresses throughout the modeled portion of the cardiovascular system of the patient. The inlet and outlet boundary conditions may be assigned and/or varied based on a variety of physiologic conditions, including for a state of rest, or a state of maximum stress or maximum hyperemia, to determine blood flow characteristics under a variety of physiologic conditions. In some embodiments, a simulated blood pressure model can be generated, where the simulated blood pressure model provides information regarding the pressure at various locations along the modeled portion of the cardiovascular system of the patient. Such a simulated blood pressure model can be used, in turn, to generate an FFR model of the modeled portion of the cardiovascular system of the patient, where the FFR model can be calculated as the ratio of the blood pressure at a given location in the cardiovascular system to the blood pressure in the aorta under conditions of maximum stress, or hyperemia, resulting in increased coronary blood flow. The CFD model may be segmented based upon the geometry of the various segments of the modeled portion of the cardiovascular system of the patient, including both the overall vessel shape and arrangement, as well as any local variations in geometry. For example, a diseased portion which has a narrow cross-section, a lesion, and/or a stenosis may be modeled in one or more distinct segments. The cross-sectional area and local minimum of the cross-sectional area of the diseased portions and stenoses may be measured and used in the CFD analysis. The determined blood flow characteristics, and in particular the local values of the calculated FFR model, can be used to provide an indication of the presence of a functionally significant lesion or other feature which may require treatment. In particular, if the calculated FFR at a given location is below a threshold level, the local drop in FFR is indicative of the presence of a functionally significant lesion located upstream of the low FFR point. In some embodiments, an indication of the calculated FFR throughout the modeled portion of the cardiovascular system can be provided as a result, and the location of any functionally significant lesions can be identified by a user. In other embodiments, the upstream geometry of the modeled portion of the cardiovascular system of the patient can be analyzed and the location of any functionally significant lesions can be identified by a computer system as part of the CFD analysis or as a separate analysis. U.S. Pat. No. 10,433,740, which is incorporated by reference herein in its entirety, broadly describes an example of machine learning as part of the analysis of a geometric model of a patient in addition to one or more measured or estimated physiological parameters. The described parameters may include global parameters, such as blood pressure, blood viscosity, patient age, patient gender, mass of the supplied tissue, or may be local, such as an estimated density of the vessel wall at a particular location. The system described in U.S. Pat. No.
10,433,740, and other similar systems, may create, for each point at which there is a value of a blood flow characteristic, a feature vector describing the patient-specific geometry at that point and estimates of physiological or phenotypic parameters of the patient. Such systems as described in U.S. Pat. No. 10,433,740 may train a machine learning algorithm to predict the blood flow characteristics, such as FFR, at the various points from the feature vectors. The system may then, in turn, use the estimate of FFR to classify a vessel or patient as ischemia positive or negative based on the estimation of FFR. U.S. Pat. No. 10,307,131, which is incorporated by reference herein in its entirety, describes systems which may utilize more accurate estimations of boundary conditions to improve the accuracy of FFR computed tomography used to noninvasively determine FFR. The computed blood flow characteristics may be determined in an iterative fashion, by comparing a predicted contrast distribution and a measured contrast distribution until the solution converges, and the computed blood flow characteristics may then be used to generate a model used in a biochemical analysis. However, systems such as those described in U.S. Pat. Nos. 8,386,188, 10,433,740, and 10,307,131, are directed primarily to the use of additional analysis to improve the accuracy of the calculation of blood flow characteristics such as FFR, and to use those FFR calculations or estimations to provide more accurate predictions of the functional severity of stenoses or the presence of ischemia. In some embodiments, further analysis may be performed based on a CFD model of at least a portion of the cardiovascular system of a patient. In some embodiments, the CFD model described may be updated as described in U.S. Pat. No. 8,386,188, to reflect possible treatments, such as the insertion of a stent, and the CFD analysis performed based on the updated model to determine blood flow characteristics for at least a portion of the updated CFD model. Such a system can attempt to reduce the likelihood of a false positive by improving the FFR analysis, but does not, for example, provide an independent assessment of the presence or degree of a condition such as ischemia as a check against potential false positives generated using FFR analysis. In some embodiments, this CFD analysis can model the coronary artery and/or other vessels or portions of the cardiovascular system as a rigid tube. In other embodiments, this CFD analysis can model the cardiovascular system as a compliant tube, and the elastodynamic equations for wall dynamics may be solved together with the Navier-Stokes equations. This CFD analysis can model the blood as a non-compressible Newtonian fluid, although the blood may also be modeled as a non-Newtonian or multiphase fluid. In addition, this CFD analysis also requires certain assumptions in modeling both the boundary conditions and the vessel behavior, such as coronary vasodilation under hyperemia. In some embodiments, the model used for the CFD analysis can be developed using or based at least in part on a characterization of atherosclerosis and vascular morphology as described in U.S. Pat. No. 7,191,110. The detail and precision with respect to the atherosclerosis and vascular morphology information which can be determined using the described analysis can increase the accuracy of the CFD analysis by more precisely modeling the modeled portion of the cardiovascular system of the patient.
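The general feature-vector idea discussed above can be sketched in a few lines: per-point geometric and global features are assembled into vectors, a simple model is fit to reference values of a blood flow characteristic, and the fitted model is then used to estimate that characteristic at new points and to flag points under a cutoff. The features, the synthetic training data, the linear least-squares model, and the 0.80 cutoff (a commonly used FFR threshold) are all assumptions made for illustration; they are not the trained models of the referenced systems.

```python
# Sketch (assumptions: synthetic data, linear least squares in place of the
# trained models discussed in the text): map per-point feature vectors to an
# estimated FFR-like value and flag points under an assumed 0.80 cutoff.
import numpy as np

rng = np.random.default_rng(1)

def make_features(n):
    """Hypothetical per-point features: [local radius (mm), distance from
    ostium (mm), minimum upstream radius (mm), mean aortic pressure (mmHg)]."""
    radius = rng.uniform(1.0, 2.0, n)
    distance = rng.uniform(0.0, 120.0, n)
    min_upstream = radius * rng.uniform(0.4, 1.0, n)
    pressure = np.full(n, 93.0)
    return np.column_stack([radius, distance, min_upstream, pressure])

# Synthetic "ground truth": the FFR-like value falls with distance and with
# upstream narrowing (a purely illustrative relationship).
X_train = make_features(200)
y_train = (1.0 - 0.001 * X_train[:, 1]
           - 0.25 * (X_train[:, 0] - X_train[:, 2])
           + rng.normal(0.0, 0.01, len(X_train)))

# Fit a linear model with an intercept via least squares.
A = np.column_stack([X_train, np.ones(len(X_train))])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# Estimate at new points and flag potentially ischemia-causing locations.
X_new = make_features(5)
ffr_est = np.column_stack([X_new, np.ones(len(X_new))]) @ coef
for features, value in zip(X_new, ffr_est):
    flag = "below cutoff" if value < 0.80 else "ok"
    print(f"radius {features[0]:.2f} mm, distance {features[1]:5.1f} mm -> est. {value:.2f} ({flag})")
```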
In some embodiments, the information regarding atherosclerosis and vascular morphology can be used to provide a model more indicative of the physical parameters of the modeled portion of the cardiovascular system of the patient, particularly the physical parameters which are affected by the presence, type, and volume of plaque. Similarly, U.S. Pat. No. 10,052,031, which is incorporated by reference herein in its entirety, describes the computation of hemodynamic qualities indicative of the functional severity of stenosis, which can be used in the treatment and/or assessment of coronary artery disease. The system can be used to identify lesion specific ischemia using a combination of perfusion scanning data, anatomical imaging of coronary vessels, and computational fluid dynamics. Like the system described in U.S. Pat. No. 10,307,131, however, the system described in U.S. Pat. No. 10,052,031 is directed to improving the computed hemodynamic quantity indicative of the functional severity of the stenosis through iterative comparison of a simulated perfusion map to a measured perfusion map obtained by perfusion scanning of a patient. U.S. Pat. No. 10,888,234, which is incorporated by reference herein in its entirety, describes a system for machine learning based non-invasive functional assessment of coronary artery stenosis from medical image data. Like other systems in the references incorporated herein, the system described in U.S. Pat. No. 10,888,234 is directed towards improvement of the determination of an FFR value or other hemodynamic index value. The system of U.S. Pat. No. 10,888,234 utilizes machine learning as an alternative to more computationally-intensive physics-based modeling of portions of the cardiovascular system of the patient, although mechanistic modeling may also be used to compute an FFR value for use in the analysis. In some embodiments, fluid-structure interaction (FSI) analysis may be performed in addition to or in conjunction with the CFD analysis. The characterization of atherosclerosis and vascular morphology provided by the technology disclosed in U.S. Pat. No. 7,191,110 can allow a more accurate model of the portion of the cardiovascular system of the patient. By modeling the portion of the cardiovascular system of the patient as a deformable structure, greater accuracy can be obtained in the output models generated by the CFD analysis.
Atherosclerosis and Vascular Morphology Characterization
In some embodiments, the characterization of atherosclerosis and vascular morphology provided by the technology disclosed in U.S. Pat. No. 7,191,110 can be performed either before or after the performance of the CFD analysis discussed above. This process may include taking one or more medical images of a patient, such as a CCTA, at a medical facility. These images may, in some embodiments, be transmitted to a backend main server that is configured to conduct one or more analyses thereof in a reproducible manner. This analysis may include the use of artificial intelligence (AI), machine learning (ML) and/or other algorithms. In some embodiments, the systems, methods, and devices described herein can provide a quantified measurement of one or more features of a coronary CT image using automated and/or dynamic processes.
In certain embodiments, the characterization of atherosclerosis and vascular morphology may be performed prior to the performance of the CFD analysis, and the resulting characterization, or information derived therefrom, may be used as part of the generation of a model of a portion of the cardiovascular system of the patient. In some embodiments, the characterization of atherosclerosis and vascular morphology may include the analysis of a series of CCTA images or any other suitable images, and the generation of a three-dimensional model of the patient's cardiovascular system. This analysis can include the generation of one or more quantified measurements of vessels from the raw medical image, such as for example diameter, volume, morphology, and/or the like. This analysis may segment the vessels in a predetermined manner, or in a dynamic manner, in order to provide a more detailed overview of the vascular morphology of the patient. In particular, in some embodiments, the system can be configured to utilize one or more AI and/or ML algorithms to automatically and/or dynamically identify one or more arteries, including for example coronary arteries, carotid arteries, aorta, renal artery, lower extremity artery, and/or cerebral artery. In some embodiments, one or more AI and/or ML algorithms use a convolutional neural network (CNN) that is trained with a set of medical images (e.g., CT scans) on which arteries and features (e.g., plaque, lumen, perivascular tissue, and/or vessel walls) have been identified, thereby allowing the AI and/or ML algorithm to automatically identify arteries directly from a medical image. In some embodiments, the arteries are identified by size and/or location. This analysis can also include the identification and classification of plaque within the cardiovascular system of the patient. In some embodiments, the system can be configured to identify a vessel wall and a lumen wall for each of the identified coronary arteries in the medical image. In some embodiments, the system is then configured to determine the volume in between the vessel wall and the lumen wall as plaque. In some embodiments, the system can be configured to identify regions of plaque based on the radiodensity values typically associated with plaque, for example by setting a predetermined threshold or range of radiodensity values that are typically associated with plaque with or without normalizing using a normalization device. In some embodiments, the characterization of atherosclerosis may include the generation of one or more quantified measurements from a raw medical image, such as for example radiodensity of one or more regions of plaque, identification of stable plaque and/or unstable plaque, volumes thereof, surface areas thereof, geometric shapes, heterogeneity thereof, and/or the like. Using this plaque identification and classification, the overall plaque volume may be determined, as well as the amount of calcified stable plaque and the amount of uncalcified plaque. In some embodiments, more detailed classification of atherosclerosis than a binary assessment of calcified vs. non-calcified plaque may be made. For example, the plaque may be classified ordinally, with plaque classified as dense calcified plaque, calcified plaque, fibrous plaque, fibrofatty plaque, necrotic core, or admixtures of plaque types. The plaque may also be classified continuously, by attenuation density on a scale such as a Hounsfield unit scale or a similar classification system.
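As a toy illustration of the ordinal classification mentioned above, the sketch below bins plaque voxels by attenuation on a Hounsfield-unit scale into categories such as low-attenuation/necrotic-core-like, fibrofatty, fibrous, calcified, and densely calcified, and reports a per-class volume. The specific HU cutoffs are assumptions chosen for the example and are not asserted to be the ranges used by the described embodiments.

```python
# Sketch (assumed HU cutoffs): ordinal classification of plaque voxels by
# attenuation density, plus a per-class volume summary.
import numpy as np

rng = np.random.default_rng(2)
plaque_hu = rng.normal(200, 250, size=500)       # synthetic HU values for plaque voxels
voxel_volume_mm3 = 0.4 * 0.4 * 0.5               # assumed voxel spacing

# Illustrative ordinal bins (low-attenuation -> densely calcified).
edges = [-np.inf, 30, 130, 350, 700, np.inf]
names = ["low-attenuation/necrotic-core-like", "fibrofatty", "fibrous",
         "calcified", "densely calcified"]

labels = np.digitize(plaque_hu, edges[1:-1])     # class index 0..4 per voxel
for idx, name in enumerate(names):
    volume = (labels == idx).sum() * voxel_volume_mm3
    print(f"{name}: {volume:.1f} mm^3")
```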
The information which can be obtained in the characterization of atherosclerosis may be dependent upon the type of imaging being performed. For example, when the CCTA images are created using a single-energy CT process, the relative material density of the plaque relative to the surrounding tissue can be determined, but the absolute material density may be unknown. In contrast, when the CCTA images are created using a multi-energy CT process, the absolute material density of the plaque and other surrounding tissue can be measured. The characterization of atherosclerosis and vascular morphology may include in particular the identification and classification of stenoses within the cardiovascular system of the patient. This may include the calculation or determination of a numerical representation of coronary stenosis based on the quantified and/or classified atherosclerosis derived from the medical image. The system may be configured to calculate stenosis using the one or more vascular morphology parameters and/or quantified plaque parameters derived from the medical image of a coronary region of the patient. In some embodiments, the system is configured to dynamically identify an area of stenosis within an artery, and calculate information regarding the area of stenosis, such as vessel parameters including diameter, curvature, local vascular morphology, and the shape of the vessel wall and the lumen wall in the area of stenosis. The identified stenoses may be used in the generation of a model of a portion of the cardiovascular system of the patient. The use of the quantified stenosis information may include the modeling of the vessel boundary conditions. The quantified stenosis information may also be used to determine a segmentation of the model for use in the CFD analysis or subsequent processing, or to alter the relative density of the nodes of a three-dimensional mesh used as a CFD model, with increased node density at and around identified stenoses. In an embodiment in which the CFD analysis is primarily focused on the identification of functionally significant stenoses, providing additional detail in a calculated FFR in regions expected to be of particular interest can improve the CFD analysis without significantly increasing the overall computational load. This may be of particular utility when the CFD analysis is augmented or replaced with a more computationally-intensive FSI analysis of at least a portion of the modeled portion of the cardiovascular system of the patient. In other embodiments, the CFD modeling and CFD analysis may be performed partially or wholly independent of the characterization of atherosclerosis and vascular morphology. In such an embodiment, the CFD modeling and analysis may be performed prior to, or in parallel with, the characterization of atherosclerosis and vascular morphology.
CFD Analysis Verification Using Atherosclerosis and Vascular Morphology Characterization
In addition to the identification of functionally significant stenoses or lesions using CFD analysis, the characterization of atherosclerosis and vascular morphology can be used to provide an independent assessment of functionally significant stenoses and whether a given vessel is ischemic.
In particular, the determined data and calculations resulting from the atherosclerosis characterization can be analyzed to detect characteristics of atherosclerosis and vascular morphology which increase the likelihood of a vessel being ischemic. These characteristics indicative of vessel ischemia include, but are not limited to, the presence and/or volume of non-calcified plaque, and in particular low-density non-calcified plaque. Other characteristics which can be analyzed to provide an indication of vessel ischemia include lumen volume and positive remodeling of vessels in the area of lesions or stenoses. The analysis of these characteristics can be combined with the CFD analysis to improve the discrimination of vessels as ischemic or not ischemic. The analysis of these characteristics can also be used to augment the information used to generate the CFD model and perform the CFD analysis. Because CCTA images can be acquired over the entire cardiac cycle, differences in coronary volume, area, and/or diameter can be observed and measured as the coronary arteries dilate and/or constrict. The relationship of the atherosclerotic plaque within the wall of the artery, coupled to its relative ability to dilate and/or constrict, can also provide information on the effects of the atherosclerotic plaque on normal coronary vasomotor function. Even when the absolute material density of the identified plaque is unknown, such as due to the use of a single-energy CT process to acquire the CCTA, information regarding the structural properties of the identified plaque can be determined by observation of the ability of a given portion of a vessel to dilate and/or constrict in comparison to surrounding portions of the vessel. In addition to analyzing variance in vessel dimensions over the course of a cardiac cycle, the physiologic condition of the patient may vary over the course of a CCTA acquisition process. For example, nitroglycerin may often be administered immediately before CCTA acquisition, and may also be administered after non-contrast CCTA acquisitions. Because both nitroglycerin and iodinated contrast are known to have vasodilatory properties, the coronary lumen volume will increase after administration, to a volume larger than the coronary lumen volume in the absence of administration. Nitroglycerin-dependent coronary vasodilation is an endothelial-dependent process. Because ischemia is preceded by endothelial dysfunction, areas of non-dilation may be an anatomic/physiologic indicator of coronary health. Areas of non-dilation may be identified, such as by comparison of the vascular morphology pre-dilation and post-dilation, and can be analyzed in conjunction with the atherosclerosis characterization of the plaque in the identified areas of poor dilation. FIG.26is a flowchart illustrating a process2600for analyzing a CFD-based indication of ischemia using a characterization of atherosclerosis and vascular morphology. At block2605, a system can access a plurality of images obtained of a patient at a medical facility. These images can be CCTA images or any other suitable images generated using any other suitable imaging method discussed herein or in the attached appendices. These images can be reflective of a portion of the cardiovascular system of a patient, and can be representative of at least an entire cardiac cycle.
In some embodiments, these CCTA images may be reflective of the portion of the cardiovascular system of a patient both prior to and after exposure of the patient to a vasodilatory substance, such as nitroglycerin or iodinated contrast. These CCTA images can be reflective of one or more known physiologic conditions of the patient, such as an at rest state or a hyperemic state. At block2610, the system can perform a computational fluid dynamics (CFD) analysis based on the plurality of CCTA images. This CFD analysis can include the evaluation of the CCTA images to generate a model of a portion of the cardiovascular system of a patient shown in the images, and can include the generation of a three-dimensional mesh. This CFD analysis can include assigning boundary conditions to the CFD model indicative of the input flow and output flow(s) at the edges of the modeled portion of the cardiovascular system. These boundary conditions can be assigned at least in part on the basis of non-invasive measurements of the patient, such as myocardial mass, cardiac output, and blood pressure. These boundary conditions can also be assigned based on an analysis of the CCTA images. This CFD analysis may result in the determination of blood flow characteristics of some or all of the modeled portion of the cardiovascular system of the patient. In some embodiments, the determined blood flow characteristics can include some or all of the blood flow velocity, pressure, or flow rate at various locations throughout the modeled portion of the cardiovascular system of the patient. In addition, models may be calculated based on the determined blood flow characteristics, such as an FFR model indicative of FFR at various locations throughout the modeled portion of the cardiovascular system of the patient, or shear stresses throughout the modeled portion of the cardiovascular system of the patient. At block2615, the system can determine a CFD-based indication of ischemia-causing stenosis based on the CFD analysis. This CFD-based indication of ischemia-causing stenosis may include, for example, the comparison of an FFR model to a predetermined threshold to identify regions of the FFR model at which the calculated FFR is below the threshold. Such an area of low FFR is indicative of a functionally significant lesion or stenosis upstream of the low FFR area, and can be used to identify a severe stenosis or otherwise diseased portion of a blood vessel as likely causing the vessel to be ischemic. At block2620, the system can determine a characterization of atherosclerosis and vascular morphology based on a plurality of CCTA images. These CCTA images may be the images used to perform the CFD analysis, or may be a different set of images. The characterization of atherosclerosis can include the identification of the location, volume and/or type of plaque throughout the portion of the cardiovascular system of the patient. In some embodiments, the characterization of atherosclerosis and vascular morphology can be determined prior to the CFD analysis, and at least some of the determined information can be used as part of the CFD analysis, such as in generating a geometric model of the portion of the cardiovascular system of the patient.
At block2625, the system can apply an algorithm that integrates the CFD analysis and the characterization of atherosclerosis and vascular morphology to provide an indication of the presence and/or degree of ischemia within the portion of the coronary vasculature of the patient on a pixel-by-pixel basis. For example, the algorithm may map both the CFD-based indication of ischemia-causing stenosis and the characterization of atherosclerosis and vascular morphology to a common image. As a result, some or all of the pixels in the vessels of the analyzed portion of the cardiovascular system of the patient can be designated as depicting or not depicting a functionally significant ischemia-causing stenosis. In some embodiments, only certain of the pixels of the blood vessels may be assigned such an indication. For example, rather than assigning a negative indication to certain pixels, the pixels depicting a functionally significant ischemia-causing stenosis, or a representative subset thereof, may be designated with such an indicator, while other pixels which do not depict a functionally significant ischemia-causing stenosis are not assigned an indication. The algorithm may make a determination, on a pixel-by-pixel basis, of the accuracy of the CFD-based indication of the presence of an ischemia-causing stenosis. This determination may be made, for example, by analyzing characteristics of the characterized atherosclerosis and vascular morphology mapped to that pixel and adjacent pixels, such as those mapped to a common vessel. A determination can be made as to whether those characteristics are consistent with the likelihood of the associated vessel being ischemic. Depending on the data available from the CCTA, additional comparisons may be made as part of this determination. For example, where the CCTA is reflective of at least one complete cardiac cycle, the relative ability of a portion of a vessel wall to dilate and/or constrict can be analyzed in conjunction with the atherosclerosis characterization to provide information on the effects of the atherosclerotic plaque on normal coronary vasomotor function. As another example, where the CCTA is reflective of the cardiovascular system of the patient both before and after exposure to a vasodilating substance, the CCTA images can be compared to identify areas of non-dilation or other features, responses, or behaviors indicative of endothelial dysfunction. In some embodiments, the algorithm may make a binary yes/no determination as to whether the CFD-based indication of the presence of an ischemia-causing stenosis is accurate. In other embodiments, one or both of the CFD-based indication of the presence of an ischemia-causing stenosis and the algorithmic determination of agreement with that CFD-based determination may not be a binary yes/no decision. In some particular embodiments, one or both of the CFD-based indication of the presence of an ischemia-causing stenosis and the algorithmic determination of agreement with that CFD-based determination may be a probability assigned to a given pixel, or a probabilistic modeling applied to all of the pixels that comprise a given vessel, and to all of the pixels that comprise the analyzed portion of the cardiovascular system of the patient. 
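The integration step described above can be pictured, under simplifying assumptions, as fusing two per-pixel probability maps on a common image grid. The sketch below assumes the CFD result and the atherosclerosis/vascular-morphology characterization have each already been reduced to a per-pixel ischemia probability; the log-odds weighting and the 0.5 decision cutoff are illustrative choices, not the algorithm of this disclosure.

```python
# Minimal sketch of fusing a CFD-derived and a plaque-derived ischemia probability map.
import numpy as np

def _logit(p, eps=1e-6):
    p = np.clip(p, eps, 1 - eps)
    return np.log(p / (1 - p))

def combine_probabilities(p_cfd, p_plaque, vessel_mask, w_cfd=0.6, w_plaque=0.4):
    """Fuse two per-pixel ischemia probability maps inside the vessel mask."""
    fused_logit = w_cfd * _logit(p_cfd) + w_plaque * _logit(p_plaque)
    fused = 1.0 / (1.0 + np.exp(-fused_logit))
    fused[~vessel_mask] = 0.0             # only vessel pixels receive an indication
    return fused, (fused > 0.5) & vessel_mask

# Toy example on a 64 x 64 grid with a square "vessel" region.
p_cfd = np.random.rand(64, 64)
p_plaque = np.random.rand(64, 64)
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True
probability_map, ischemic_pixels = combine_probabilities(p_cfd, p_plaque, mask)
```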
In some embodiments, the algorithm may apply this analysis only to those pixels for which the CFD analysis indicated the presence of an ischemia-causing stenosis, such that the analysis of the characterization of atherosclerosis and vascular morphology is performed only to filter out potential false positives. In other embodiments, however, the algorithm may apply this analysis to some or all of the pixels for which there is no indication of the presence of an ischemia-causing stenosis, to identify potentially ischemic vessels which might not be identified by the CFD analysis. Although described primarily with respect to coronary vessels, the disclosed technology may be used in the analysis of other vessels elsewhere in the body of a patient. Examples of Embodiments Relating to Combining CFD-Based Evaluation with Atherosclerosis and Vascular Morphology: The following are non-limiting examples of certain embodiments of systems and methods for CFD-based evaluation with atherosclerosis and vascular morphology and/or other related features. Other embodiments may include one or more other features, or different features, that are discussed herein. Various embodiments described herein relate to systems, methods, and devices for medical image analysis, diagnosis, risk stratification, decision making and/or disease tracking. In the embodiments illustrated below, in some examples of other embodiments, instead of being performed on a “pixel” or a pixel-by-pixel basis as indicated, the embodiments relate to analysis per lesion, stenosis, per segment, per vessel, and/or per patient, that is, on a lesion-by-lesion basis, a stenosis-by-stenosis basis, a segment-by-segment basis, a vessel-by-vessel basis, or a patient-by-patient basis. Embodiment 1: A computer-implemented method of identifying a presence and/or degree of ischemia via an algorithm-based medical imaging analysis, comprising: performing a computational fluid dynamics (CFD) analysis of a portion of the coronary vasculature of a patient using imaging data of the portion of the coronary vasculature of the patient; performing a comprehensive atherosclerosis and vascular morphology characterization of the portion of the coronary vasculature of the patient using coronary computed tomographic angiography (CCTA) of the portion of the coronary vasculature of the patient; and applying an algorithm that integrates the CFD analysis and the atherosclerosis and vascular morphology characterization to provide an indication of the presence and/or degree of ischemia within the portion of the coronary vasculature of the patient on a pixel-by-pixel basis, the algorithm providing an indication of the presence and/or degree of ischemia for a given pixel based upon an analysis of the given pixel, the surrounding pixels, and a vessel of the portion of the coronary vasculature of the patient with which the pixel is associated. In other examples, instead of and/or in addition to a pixel-by-pixel basis, an indication of the presence and/or degree of ischemia within the portion of the coronary vasculature of the patient can be on a lesion-by-lesion basis, a stenosis-by-stenosis basis, a segment-by-segment basis, a vessel-by-vessel basis, or a patient-by-patient basis. 
Embodiment 2: The method of embodiment 1, wherein performing a computational fluid dynamics (CFD) analysis comprises generating a model of the portion of the coronary vasculature of the patient based at least in part on coronary computed tomographic angiography (CCTA) of the portion of the coronary vasculature of the patient. Embodiment 3: The method of embodiment 1, wherein performing a computational fluid dynamics (CFD) analysis comprises generating a model of the portion of the coronary vasculature of the patient based at least in part on the atherosclerosis and vascular morphology characterization of the portion of the coronary vasculature of the patient. Embodiment 4: The method of embodiment 1, wherein performing a computational fluid dynamics (CFD) analysis comprises computing a fractional flow reserve model of the portion of the coronary vasculature of the patient. Embodiment 5: The method of embodiment 1, wherein performing a comprehensive atherosclerosis and vascular morphology characterization of the portion of the coronary vasculature of the patient comprises determining one or more vascular morphology parameters and a set of quantified plaque parameters. Embodiment 6: The method of embodiment 1, wherein performing a computational fluid dynamics (CFD) analysis of a portion of the coronary vasculature of a patient comprises (i) generating a CFD-based indication of the presence and/or degree of ischemia within the portion of the coronary vasculature of the patient on a pixel-by-pixel basis, (and/or on a lesion-by-lesion basis, a stenosis-by-stenosis basis, a segment-by-segment basis, a vessel-by-vessel basis, or a patient-by-patient basis). Embodiment 7: The method of any one of embodiments 1-6, wherein applying the algorithm that integrates the CFD analysis and the atherosclerosis and vascular morphology characterization to provide an indication of the presence and/or degree of ischemia within the portion of the coronary vasculature of the patient on a pixel-by-pixel basis comprises providing an indication of agreement with the CFD-based indication of the presence and/or degree of ischemia within the portion of the coronary vasculature of the patient on a pixel-by-pixel basis. Instead of, or in addition to, a pixel-by-pixel basis, this process can be performed on a lesion-by-lesion basis, a stenosis-by-stenosis basis, a segment-by-segment basis, a vessel-by-vessel basis, or a patient-by-patient basis. Embodiment 8: The method of any one of embodiments 1-7, wherein applying the algorithm that integrates the CFD analysis and the atherosclerosis and vascular morphology characterization to provide an indication of the presence and/or degree of ischemia within the portion of the coronary vasculature of the patient on a pixel-by-pixel basis comprises analyzing variation in coronary volume, area, and/or diameter over the entirety of a cardiac cycle. Instead of, or in addition to, a pixel-by-pixel basis, this process can be performed on a lesion-by-lesion basis, a stenosis-by-stenosis basis, a segment-by-segment basis, a vessel-by-vessel basis, or a patient-by-patient basis. Embodiment 9: The method of embodiment 8, wherein analyzing variation in coronary volume, area, and/or diameter over the entirety of a cardiac cycle comprises analyzing an effect of identified atherosclerotic plaque within a wall of an artery on the deformation of the artery. 
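As a rough illustration of the cardiac-cycle analysis in Embodiments 8 and 9, the sketch below computes the fractional change in lumen cross-sectional area across cardiac phases and flags segments that barely deform, which may correspond to plaque-stiffened wall. The synthetic phase data, the 10% cutoff, and all names are assumptions for illustration only.

```python
# Illustrative sketch: lumen area variation over the cardiac cycle.
import numpy as np

def lumen_compliance(areas_by_phase):
    """areas_by_phase: array of shape (n_phases, n_positions) of lumen areas (mm^2).
    Returns the fractional area change (max - min) / min at each centerline position."""
    areas = np.asarray(areas_by_phase, dtype=float)
    return (areas.max(axis=0) - areas.min(axis=0)) / areas.min(axis=0)

def poorly_dilating_segments(areas_by_phase, cutoff=0.10):
    change = lumen_compliance(areas_by_phase)
    return np.flatnonzero(change < cutoff), change

# Example: a stiff (plaque-laden) segment between indices 15 and 25 barely changes area.
rng = np.random.default_rng(0)
base = np.full(61, 8.0)
phases = np.stack([base * (1.0 + 0.15 * np.sin(2 * np.pi * k / 10)) for k in range(10)])
phases[:, 15:26] = 7.0 + 0.1 * rng.random((10, 11))     # near-constant area -> stiff
stiff_idx, frac_change = poorly_dilating_segments(phases)
```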
Embodiment 10: A computer implemented method for non-invasively estimating blood flow characteristics to assess the severity of plaque and/or stenotic lesions using blood flow predictions and measurements, the method comprising: generating and outputting an initial indicia of a severity of the plaque or stenotic lesion using one or more calculated blood flow characteristics, where generating and outputting the initial indicia of a severity of the plaque or stenotic lesion comprises: receiving one or more patient-specific images and/or anatomical characteristics of at least a portion of a patient's vasculature; receiving images reflecting a measured blood distribution through the patient's vasculature; projecting one or more values of the measured distribution to one or more points of a patient-specific anatomic model of the patient's vasculature generated using the received patient-specific images and/or the received anatomical characteristics, thereby creating a patient-specific measured model indicative of the measured distribution; defining one or more physiological and boundary conditions of a blood flow to non-invasively simulate a distribution of the blood flow through the patient-specific anatomic model of the patient's vasculature; simulating, using a processor, the distribution of the blood flow through the one or more points of the patient-specific anatomic model using the defined one or more physiological and boundary conditions and the received patient-specific images and/or anatomical characteristics, thereby creating a patient-specific simulated model indicative of the simulated distribution; comparing, using a processor, the patient-specific measured model and the patient-specific simulated model to determine whether a similarity condition is satisfied; updating the defined physiological and boundary conditions and re-simulating the distribution of the blood flow through the one or more points of the patient-specific anatomic model until the similarity condition is satisfied; calculating, using a processor, one or more blood flow characteristics of blood flow through the patient-specific anatomic model using the updated physiological and boundary conditions; and generating and outputting the initial indicia of a severity of the plaque or stenotic lesion using the one or more blood flow characteristics of blood flow that were calculated using the updated physiological and boundary conditions; performing a comprehensive atherosclerosis and vascular morphology characterization of the portion of the patient's vasculature using coronary computed tomographic angiography (CCTA) of the portion of the patient's vasculature; and applying an algorithm that integrates the initial indicia of a severity of the plaque or stenotic lesion and the atherosclerosis and vascular morphology characterization to provide an indication of the presence and/or degree of ischemia within the portion of the patient's vasculature on a pixel-by-pixel basis. 
Alternate Embodiment 10 using contrast agent (note: any of the embodiments listed below that refer to “Embodiment 10” or reference Embodiment 10 are intended to be practiced with Embodiment 10 and/or Alternate Embodiment 10): A computer implemented method for non-invasively estimating blood flow characteristics to assess the severity of plaque and/or stenotic lesions using contrast distribution predictions and measurements, the method comprising: generating and outputting an initial indicia of a severity of the plaque or stenotic lesion using one or more calculated blood flow characteristics, where generating and outputting the initial indicia of a severity of the plaque or stenotic lesion comprises: receiving one or more patient-specific images and/or anatomical characteristics of at least a portion of a patient's vasculature; receiving images reflecting a measured distribution of a contrast agent delivered through the patient's vasculature; projecting one or more contrast values of the measured distribution of the contrast agent to one or more points of a patient-specific anatomic model of the patient's vasculature generated using the received patient-specific images and/or the received anatomical characteristics, thereby creating a patient-specific measured model indicative of the measured distribution; defining one or more physiological and boundary conditions of a blood flow to non-invasively simulate a distribution of the contrast agent through the patient-specific anatomic model of the patient's vasculature; simulating, using a processor, the distribution of the contrast agent through the one or more points of the patient-specific anatomic model using the defined one or more physiological and boundary conditions and the received patient-specific images and/or anatomical characteristics, thereby creating a patient-specific simulated model indicative of the simulated distribution; comparing, using a processor, the patient-specific measured model and the patient-specific simulated model to determine whether a similarity condition is satisfied; updating the defined physiological and boundary conditions and re-simulating the distribution of the contrast agent through the one or more points of the patient-specific anatomic model until the similarity condition is satisfied; calculating, using a processor, one or more blood flow characteristics of blood flow through the patient-specific anatomic model using the updated physiological and boundary conditions; and generating and outputting the initial indicia of a severity of the plaque or stenotic lesion using the one or more blood flow characteristics of blood flow that were calculated using the updated physiological and boundary conditions; performing a comprehensive atherosclerosis and vascular morphology characterization of the portion of the patient's vasculature using coronary computed tomographic angiography (CCTA) of the portion of the patient's vasculature; and applying an algorithm that integrates the initial indicia of a severity of the plaque or stenotic lesion and the atherosclerosis and vascular morphology characterization to provide an indication of the presence and/or degree of ischemia within the portion of the patient's vasculature on a pixel-by-pixel basis. 
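The compare-and-update loop recited in Embodiment 10 and Alternate Embodiment 10 can be sketched, in highly simplified form, as an iterative calibration that re-simulates until a similarity condition is met. In the toy code below an exponential washout stands in for the simulated contrast distribution and a single decay rate stands in for the physiological and boundary conditions; a real implementation would run a CFD or advection-diffusion solve at each iteration, and all names and tolerances here are assumptions.

```python
# Toy calibrate-until-similar loop: bisect on a surrogate "boundary condition" parameter.
import numpy as np

def simulate_contrast(decay_rate, positions):
    """Toy surrogate for a simulated contrast distribution along a vessel."""
    return np.exp(-decay_rate * positions)

def calibrate(measured, positions, lo=0.0, hi=1.0, tolerance=1e-4, max_iters=100):
    """Update the parameter until simulated and measured distributions agree
    (similarity condition here: mean contrast difference below tolerance)."""
    target = float(measured.mean())
    for _ in range(max_iters):
        mid = 0.5 * (lo + hi)
        simulated = simulate_contrast(mid, positions)
        diff = float(simulated.mean()) - target
        if abs(diff) < tolerance:         # similarity condition satisfied
            break
        if diff > 0:                      # too much contrast remains: decay too slow
            lo = mid
        else:
            hi = mid
    return mid, simulated

positions = np.linspace(0.0, 100.0, 101)
measured = simulate_contrast(0.035, positions)   # stands in for the projected image data
fitted_rate, fitted_distribution = calibrate(measured, positions)
```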
Embodiment 11: The method of embodiment 10, wherein the algorithm provides an indication of the presence and/or degree of ischemia for a given pixel based upon an analysis of the given pixel, the surrounding pixels, and a vessel of the portion of the coronary vasculature of the patient with which the pixel is associated. Instead of, or in addition to, a pixel basis, this process can be performed on a lesion basis, a stenosis basis, a segment basis, a vessel basis, or a patient basis. Embodiment 12: The method of embodiments 10 or 11, wherein, prior to simulating the distribution of the contrast agent in the patient-specific anatomic model for the first time, defining one or more physiological and boundary conditions includes finding form or functional relationships between the vasculature represented by the anatomic model and physiological characteristics found in populations of patients with a similar vascular anatomy. Embodiment 13: The method of embodiments 10 or 11, wherein, prior to simulating the distribution of the contrast agent in the patient-specific anatomic model for the first time, defining one or more physiological and boundary conditions includes, one or more of: assigning an initial contrast distribution; or assigning boundary conditions related to a flux of the contrast agent (i) at one or more of vessel walls, outlet boundaries, or inlet boundaries, or (ii) near plaque and/or stenotic lesions. Embodiment 14: The method of any one of embodiments 10-13, wherein the blood flow characteristics include one or more of, a blood flow velocity, a blood pressure, a heart rate, a fractional flow reserve (FFR) value, a coronary flow reserve (CFR) value, a shear stress, or an axial plaque stress. Embodiment 15: The method of any one of embodiments 10-14, wherein receiving one or more patient-specific images includes receiving one or more images from coronary angiography, biplane angiography, 3D rotational angiography, computed tomography (CT) imaging, magnetic resonance (MR) imaging, ultrasound imaging, or a combination thereof. Embodiment 16: The method of any one of embodiments 10-15, wherein the patient-specific anatomic model is a reduced-order model in the two-dimensional anatomical domain, and wherein projecting the one or more contrast values includes averaging one or more contrast values over one or more cross sectional areas of a vessel. Embodiment 17: The method of any one of embodiments 10-16, wherein the patient-specific anatomic model includes information related to the vasculature, including one or more of: a geometrical description of a vessel, including the length or diameter; a branching pattern of a vessel; one or more locations of any stenotic lesions, plaque, occlusions, or diseased segments; or one or more characteristics of diseases on or within vessels, including material properties of stenotic lesions, plaque, occlusions, or diseased segments. 
Embodiment 18: The method of any one of embodiments 10-17, wherein the physiological conditions are measured, obtained, or derived from computational fluid dynamics or the patient-specific anatomic model, including, one or more of, blood pressure flux, blood velocity flux, the flux of the contrast agent, baseline heart rate, geometrical and material characteristics of the vasculature, or geometrical and material characteristics of plaque and/or stenotic lesions; and wherein the boundary conditions define physiological relationships between variables at boundaries of a region of interest, the boundaries including, one or more of, inflow boundaries, outflow boundaries, vessel wall boundaries, or boundaries of plaque and/or stenotic lesions. Embodiment 19: The method of any one of embodiments 10-18, wherein simulating, using the processor, the distribution of the contrast agent for the one or more points in the patient-specific anatomic model using the defined one or more physiological and boundary conditions includes one or more of: determining scalar advection-diffusion equations governing the transport of the contrast agent in the patient-specific anatomic model, the equations governing the transport of the contrast agent reflecting any changes in a ratio of flow to lumen area at or near a stenotic lesion or plaque; or computing a concentration of the contrast agent for the one or more points of the patient-specific anatomic model, wherein computing the concentration requires assignment of an initial contrast distribution and initial physiological and boundary conditions. Embodiment 20: The method of any one of embodiments 10-19, wherein satisfying a similarity condition comprises: specifying a tolerance that can measure differences between the measured distribution of the contrast agent and the simulated distribution of the contrast agent, prior to simulating the distribution of the contrast agent; and determining whether the difference between the measured distribution of the contrast agent and the simulated distribution of the contrast agent falls within the specified tolerance, the similarity condition being satisfied if the difference falls within the specified tolerance. Embodiment 21: The method of any one of embodiments 10-20, wherein updating the defined physiological and boundary conditions and re-simulating the distribution of the contrast agent includes mapping a concentration of the contrast agent along vessels with one or more of: features derived from an analytic approximation of an advection-diffusion equation describing the transport of fluid in one or more vessels of the patient-specific anatomic model; features describing the geometry of the patient-specific anatomic model, including, one or more of, a lumen diameter of a plaque or stenotic lesion, a length of a segment afflicted with a plaque or stenotic lesion, a vessel length, or the area of a plaque or stenotic lesion; or features describing a patient-specific dispersivity of the contrast agent. Embodiment 22: The method of any one of embodiments 10-21, wherein updating the defined physiological and boundary conditions and re-simulating the distribution of the contrast agent includes using one or more of a derivative-free optimization based on nonlinear ensemble filtering, or a gradient-based optimization that uses finite difference or adjoint approximation. 
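Embodiment 19 refers to scalar advection-diffusion equations governing contrast transport, i.e. roughly dc/dt + u dc/dx = D d2c/dx2 in one dimension. The following toy explicit finite-difference solve is offered only to make that transport step concrete; the velocity, diffusivity, grid spacing, and boundary treatment are illustrative assumptions rather than the solver contemplated by the disclosure.

```python
# Toy 1D scalar advection-diffusion solve for contrast concentration c(x, t).
import numpy as np

def advect_diffuse(c, u=10.0, D=0.5, dx=0.5, dt=0.01, steps=500, inlet=1.0):
    """Explicit upwind advection plus central diffusion with a fixed inlet concentration.
    Stability (illustrative values): u*dt/dx = 0.2 and D*dt/dx^2 = 0.02, both safe."""
    c = c.copy()
    for _ in range(steps):
        adv = -u * (c[1:-1] - c[:-2]) / dx                     # upwind (u > 0)
        dif = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        c[1:-1] += dt * (adv + dif)
        c[0] = inlet                                           # inlet boundary condition
        c[-1] = c[-2]                                          # zero-gradient outlet
    return c

n = 201                            # ~100 mm of vessel at dx = 0.5 mm
concentration = advect_diffuse(np.zeros(n))
```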
Embodiment 23: The method of any one of embodiments 10-22, further comprising: upon a determination that the measured distribution of the contrast agent and the simulated distribution of the contrast agent satisfy the similarity condition, enhancing the received patient-specific images using the simulated distribution of the contrast agent; and outputting the enhanced images as one or more medical images to an electronic storage medium or display. Embodiment 24: The method of embodiment 23, wherein enhancing the received patient-specific images comprises one or more of: replacing pixel values with the simulated distribution of the contrast agent; or using the simulated distribution of the contrast agent to de-noise the received patient-specific images via a conditional random field. Embodiment 25: The method of any one of embodiments 10-24, further comprising: upon a determination that the measured distribution of the contrast agent and the simulated distribution of the contrast agent satisfies the similarity condition, using the calculated blood flow characteristics associated with the simulated distribution of the contrast agent to simulate perfusion of blood in one or more areas of the patient-specific anatomic model; generating a model or medical image representing the perfusion of blood in one or more areas of the patient-specific anatomic model; and outputting the model or medical image representing the perfusion of blood in one or more areas of the patient-specific anatomic model to an electronic storage medium or display. Embodiment 26: The method of any one of embodiments 10-25, wherein the patient-specific anatomic model is represented in a three-dimensional anatomical domain, and wherein projecting the one or more contrast values includes assigning contrast values for each point of a three-dimensional finite element mesh. Embodiment 27: The method of any one of embodiments 10-26, wherein performing a comprehensive atherosclerosis and vascular morphology characterization of the portion of the patient's vasculature using coronary computed tomographic angiography (CCTA) of the portion of the patient's vasculature comprises: generating image information for the patient, the image information including image data of computed tomography (CT) scans along a vessel of the patient, and radiodensity values of coronary plaque and radiodensity values of perivascular tissue located adjacent to the coronary plaque; and determining, using the image information of the patient, coronary plaque information of the patient, wherein determining the coronary plaque information comprises quantifying, using the image information, radiodensity values in a region of coronary plaque of the patient, quantifying, using the image information, radiodensity values in a region of perivascular tissue adjacent to the region of coronary plaque of the patient, and generating metrics of coronary plaque of the patient using the quantified radiodensity values in the region of coronary plaque and the quantified radiodensity values in the region of perivascular tissue adjacent to the region of coronary plaque. 
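The radiodensity quantification recited in Embodiment 27 can be sketched as simple per-region summary statistics, assuming an already-loaded CT volume in Hounsfield units and pre-segmented boolean masks for the coronary plaque region and the adjacent perivascular region. The statistics and names below are illustrative, not the specific claimed metrics.

```python
# Sketch: quantify radiodensity in a plaque region and its adjacent perivascular region.
import numpy as np

def quantify_regions(hu_volume, plaque_mask, perivascular_mask):
    plaque_hu = hu_volume[plaque_mask]
    peri_hu = hu_volume[perivascular_mask]
    return {
        "plaque": {"mean": float(plaque_hu.mean()), "min": float(plaque_hu.min()),
                   "max": float(plaque_hu.max()), "voxels": int(plaque_hu.size)},
        "perivascular": {"mean": float(peri_hu.mean()), "min": float(peri_hu.min()),
                         "max": float(peri_hu.max()), "voxels": int(peri_hu.size)},
    }

# Small synthetic example.
rng = np.random.default_rng(1)
hu = rng.normal(0, 50, size=(8, 64, 64))
plaque = np.zeros_like(hu, dtype=bool); plaque[3:5, 20:30, 20:30] = True
peri = np.zeros_like(hu, dtype=bool); peri[3:5, 30:40, 20:30] = True
metrics = quantify_regions(hu, plaque, peri)
```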
Embodiment 28: The method of embodiment 27, further comprising: accessing a database of coronary plaque information and characteristics of other people, the coronary plaque information in the database including metrics generated from radiodensity values of a region of coronary plaque in the other people and radiodensity values of perivascular tissue adjacent to the region of coronary plaque in the other people, and the characteristics of the other people including information at least of age, sex, race, diabetes, smoking, and prior coronary artery disease; and characterizing the coronary plaque information of the patient by comparing the metrics of the coronary plaque information and characteristics of the patient to the metrics of the coronary plaque information of other people in the database having one or more of the same characteristics, wherein characterizing the coronary plaque information includes identifying the coronary plaque as a high risk plaque. Embodiment 29: The method of embodiment 28, wherein characterizing the coronary plaque comprises identifying the coronary plaque as a high risk plaque if it is likely to cause ischemia based on a comparison of the coronary plaque information and characteristics of the patient to the coronary plaque information and characteristics of the other people in the database. Embodiment 30: The method of embodiment 29, wherein the characterization of coronary plaque as high risk plaque is used to provide an indication of the presence and/or degree of ischemia within a portion of the patient's vasculature in at least one pixel adjacent the coronary plaque. Embodiment 31: The method of embodiment 28, wherein characterizing the coronary plaque comprises identifying the coronary plaque as a high risk plaque if it is likely to cause vasospasm based on a comparison of the coronary plaque information and characteristics of the patient to the coronary plaque information and characteristics of the other people in the database. Embodiment 32: The method of embodiment 28, wherein characterizing the coronary plaque comprises identifying the coronary plaque as a high risk plaque if it is likely to rapidly progress based on a comparison of the coronary plaque information and characteristics of the patient to the coronary plaque information and characteristics of the other people in the database. Embodiment 33: The method of any one of embodiments 10-32, wherein generating metrics using the quantified radiodensity values in the region of coronary plaque and the quantified radiodensity values in a region of perivascular tissue adjacent to the region of the patient comprises determining, along a line, a slope value of the radiodensity values of the coronary plaque and a slope value of the radiodensity values of the perivascular tissue adjacent to the coronary plaque. Embodiment 34: The method of embodiment 33, wherein generating metrics further comprises determining a ratio of the slope value of the radiodensity values of the coronary plaque and a slope value of the radiodensity values of the perivascular tissue adjacent to the coronary plaque. 
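For the slope-based metrics of Embodiments 33 and 34, a minimal sketch is a least-squares line fit to radiodensity sampled along a line through each region, followed by the ratio of the two slopes. The sampling positions and HU values below are invented for illustration.

```python
# Sketch: slope of radiodensity along a sampling line, and the ratio of two slopes.
import numpy as np

def radiodensity_slope(distances_mm, hu_values):
    """Least-squares slope (HU per mm) of radiodensity along a sampling line."""
    slope, _intercept = np.polyfit(distances_mm, hu_values, deg=1)
    return float(slope)

plaque_slope = radiodensity_slope([0, 1, 2, 3, 4], [420, 390, 345, 300, 260])
perivascular_slope = radiodensity_slope([0, 1, 2, 3, 4], [-70, -74, -79, -85, -90])
slope_ratio = plaque_slope / perivascular_slope
```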
Embodiment 35: The method of any one of embodiments 10-34, wherein generating metrics using the quantified radiodensity values in the region of coronary plaque and the quantified radiodensity values in a region of perivascular tissue adjacent to the region of the patient comprises generating, using the image information, a ratio between quantified radiodensity values of the coronary plaque and quantified radiodensity values of the corresponding perivascular tissue. Embodiment 36: The method of any one of embodiments 10-35, wherein the perivascular tissue is perivascular fat, and generating metrics using the quantified radiodensity values in the region of coronary plaque and the quantified radiodensity values in the region of perivascular tissue adjacent to the region of coronary plaque of the patient comprises generating a ratio of a density of the coronary plaque and a density of the perivascular fat. Embodiment 37: The method of any one of embodiments 10-35, wherein the perivascular tissue is a coronary artery, and generating metrics using the quantified radiodensity values in the region of coronary plaque and the quantified radiodensity values in the region of perivascular tissue adjacent to the region of coronary plaque of the patient comprises generating a ratio of a density of the coronary plaque and a density of the coronary artery. Embodiment 38: The method of embodiment 37, wherein generating the ratio comprises generating the ratio of a maximum radiodensity value of the coronary plaque and a maximum radiodensity value of the perivascular fat. Embodiment 39: The method of embodiment 37, wherein generating the ratio comprises generating a ratio of a minimum radiodensity value of the coronary plaque and a minimum radiodensity value of the perivascular fat. Embodiment 40: The method of embodiment 37, wherein generating the ratio comprises generating a ratio of a maximum radiodensity value of the coronary plaque and a minimum radiodensity value of the perivascular fat. Embodiment 41: The method of embodiment 37, wherein generating the ratio comprises generating a ratio of a minimum radiodensity value of the coronary plaque and a maximum radiodensity value of the perivascular fat. Individualized/Subject-Specific CAD Risk Factor Goals Various embodiments described herein relate to systems, methods, and devices for determining individualized and/or patient or subject-specific coronary artery disease (CAD) risk factor goals from image-based phenotyping of atherosclerosis. In particular, in some embodiments, the systems, methods, and devices are configured to analyze a medical image of a subject comprising one or more arteries and analyze the same to perform quantitative phenotyping of atherosclerosis or plaque. For example, quantitative phenotyping can comprise determination of atherosclerosis burden or volume, type, composition, rate of progression or stabilization, and/or the like. In some embodiments, the systems, methods, and devices described herein can be configured to correlate the phenotyping of atherosclerosis to a CAD risk factor level of the subject to determine an individualized and/or subject or patient-specific CAD risk factor goal for that particular subject. For example, a CAD risk factor goal can be based on LDL or other cholesterol level, blood pressure, diabetes, tobacco usage, inflammation level, and/or the like. 
As such, in some embodiments this approach of personalized phenotyping for risk factor goals can allow for development of specific treatment targets on a person-by-person basis in a manner that can reduce ASCVD events that has not been done to date. Traditionally, coronary artery disease (CAD) prevention has relied upon the use of surrogate markers of CAD that have, in population-based studies, generally been associated with increased CAD events, such as myocardial infarction and sudden coronary death. These surrogate markers of CAD can include cholesterol, blood pressure, diabetes mellitus, tobacco use, and family history of premature CAD, amongst others. However, while these approaches can be somewhat effective in discriminating different populations at risk, they tend to show significantly reduced efficacy for pinpointing individuals who will experience future heart attacks and other atherosclerotic cardiovascular disease (ASCVD) events. Indeed, certain prior studies have demonstrated that the coronary lesions that are responsible for heart attacks can be missed by sole reliance of elevated cholesterol levels in up to 80% of individuals who will suffer heart attack. Further, tracking of risk factors, e.g., cholesterol levels, following administration of medical therapy with such agents as statin medications can miss 75% of individuals who retain “residual risk” despite effective cholesterol lowering and medical treatment. These findings highlight the need for more effective measures of CAD that can be effectively tracked and used to determine personalized goals of treatment on an individual, patient-by-patient, or subject-by-subject basis. An additional limitation to traditional CAD risk factors is that it can be more than the presence or absence of a risk factor that connotes risk of future ASCVD events. Indeed, the presence, extent, severity, duration, treatment, and treatment response can all contribute together to whether a specific CAD risk factor may influence the coronary arteries in a deleterious manner, either alone or in combination with other CAD risk factors. Finally, there are likely an array of unobserved (and heretofore unknown variables) that may contribute to CAD events, including psychosocial, metabolic, inflammatory, environmental, and/or genetic causes. Thus, there is an urgent unmet need to identify more precise and/or individualized measures of CAD risk, particularly one that can integrate the lifetime exposure and treatment effects to the overall manifestation of CAD. To date, there has not been a singular metric that incorporates all of these factors into a single disease metric that can be used to diagnose, prognosticate risk, guide therapy selection and most importantly, provide goals for determining need of additional therapy or adequacy of current therapies. As such, in some embodiments, the systems, devices, and methods described herein are configured to address one or more of the shortcomings described above. In particular, in some embodiments, the systems, devices, and methods described herein are configured to incorporate one or more of such CAD risk factors described above to generate a metric or measure of patient-specific CAD risk. In some embodiments, the systems, methods, and devices described herein are configured to correlate one or more such CAD risk factors with a current disease or plaque state of a subject to determine a personalized CAD risk factor goal. 
For example, rather than setting the same cholesterol or other CAD risk factor goal for everyone, which may not be an accurate measure of plaque, atherosclerosis, or disease, some embodiments described herein are configured to determine a patient or subject-specific, personalized CAD risk factor goal, such as a cholesterol level goal, that more accurately tracks the state of plaque, atherosclerosis, or disease. More specifically, in some embodiments, the systems, methods, and devices described herein can be configured to analyze the state of plaque, atherosclerosis, or disease of a subject and correlate the same to one or more CAD risk factors, such as cholesterol, which can then be used to determine a personalized CAD risk factor goal for the subject which is specifically derived for that subject and has more meaningful correlation to the state of disease for that individual. Further, based on one or more such analyses, in some embodiments, the systems, devices, and methods described herein can be used to diagnose, prognosticate risk, guide therapy selection, and provide goals for determining need of additional therapy or adequacy of current therapies. As discussed herein, in some embodiments, the systems, devices, and methods are configured to determine patient-specific coronary artery disease (CAD) risk factor goals from image-based quantified phenotyping of atherosclerosis of plaque, which can include for example quantification and characterization of coronary atherosclerosis burden, type, and/or rate of progression. In particular, in some embodiments, systems, methods, and devices described herein allow for determining individualized therapeutic goals for CAD risk factor control that are disease phenotype-based (e.g., burden, type, and/or rate of progression of disease). In some embodiments, this approach of personalized phenotyping for risk factor goals allows for development of specific treatment targets on a person-by-person basis in a manner that can reduce ASCVD events that has not been done to date. FIG.27is a block diagram illustrating an example embodiment(s) of systems, devices, and methods for determining patient-specific and/or subject-specific coronary artery disease (CAD) risk factor goals from image-based quantified phenotyping of atherosclerosis. As illustrated inFIG.27, in some embodiments, the system can be configured to access and/or determine the level of a CAD risk factor of an individual, subject, or patient at block2702. For example, in some embodiments, the CAD risk factor can comprise low-density lipoprotein (LDL) cholesterol, high-density lipoprotein (HDL) cholesterol level, cholesterol particle size and fluffiness, other measures and/or types of cholesterol, inflammation, glycosylated hemoglobin, blood pressure, and/or the like. In some embodiments, the CAD risk factor can include any other factor that is used to diagnose and/or correlate with CAD. In some embodiments, the system can be configured to access a medical image of the individual, subject, or patient at block2704. In some embodiments, the medical image can include one or more arteries, such as coronary, carotid, aorta, lower extremity, and/or other arteries of the subject. In some embodiments, the medical image can be stored in a medical image database2706. In some embodiments, the medical image database2706can be locally accessible by the system and/or can be located remotely and accessible through a network connection. 
The medical image can comprise an image obtained by one or more modalities, such as computed tomography (CT), contrast-enhanced CT, non-contrast CT, x-ray, ultrasound, echocardiography, intravascular ultrasound (IVUS), MR imaging, optical coherence tomography (OCT), nuclear medicine imaging, positron-emission tomography (PET), single photon emission computed tomography (SPECT), and/or near-field infrared spectroscopy (NIRS). In some embodiments, the medical image comprises one or more of a contrast-enhanced CT image, non-contrast CT image, MR image, and/or an image obtained using any of the modalities described above. In some embodiments, at block2708, the system can be configured to perform quantitative phenotyping of atherosclerosis for the individual, subject, or patient. For example, in some embodiments, the quantitative phenotyping can be of atherosclerosis burden, volume, type, composition, and/or rate of progression for the individual or patient. In some embodiments, the system can be configured to utilize one or more image processing, artificial intelligence (AI), and/or machine learning (ML) algorithms to automatically and/or dynamically perform quantitative phenotyping of atherosclerosis. For example, in some embodiments, the system can be configured to automatically and/or dynamically identify one or more arteries, vessels, and/or a portion thereof on the medical image, identify one or more regions of plaque, and/or perform quantitative phenotyping of plaque. In some embodiments, as part of quantitative phenotyping, the system can be configured to identify and/or characterize different types and/or regions of plaque, for example based on density, absolute density, material density, relative density, and/or radiodensity. For example, in some embodiments, the system can be configured to characterize a region of plaque into one or more sub-types of plaque. For example, in some embodiments, the system can be configured to characterize a region of plaque as one or more of low density non-calcified plaque, non-calcified plaque, or calcified plaque. In some embodiments, calcified plaque can correspond to plaque having a highest density range, low density non-calcified plaque can correspond to plaque having a lowest density range, and non-calcified plaque can correspond to plaque having a density range between calcified plaque and low density non-calcified plaque. For example, in some embodiments, the system can be configured to characterize a particular region of plaque as low density non-calcified plaque when the radiodensity of an image pixel or voxel corresponding to that region of plaque is between about −189 and about 30 Hounsfield units (HU). In some embodiments, the system can be configured to characterize a particular region of plaque as non-calcified plaque when the radiodensity of an image pixel or voxel corresponding to that region of plaque is between about 31 and about 350 HU. In some embodiments, the system can be configured to characterize a particular region of plaque as calcified plaque when the radiodensity of an image pixel or voxel corresponding to that region of plaque is between about 351 and about 2500 HU. 
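A minimal sketch of the density-based sub-typing described in this paragraph is shown below, using the example ranges given above (about −189 to 30 HU for low-density non-calcified plaque, about 31 to 350 HU for non-calcified plaque, and about 351 to 2500 HU for calcified plaque). The inputs (an HU volume plus a pre-segmented plaque mask) and the voxel volume are assumptions for illustration.

```python
# Sketch: classify plaque voxels by Hounsfield unit range and report volumes per subtype.
import numpy as np

def classify_plaque_voxels(hu_volume, plaque_mask, voxel_volume_mm3=0.25):
    hu = hu_volume[plaque_mask]
    low_density = (hu >= -189) & (hu <= 30)
    non_calcified = (hu >= 31) & (hu <= 350)
    calcified = (hu >= 351) & (hu <= 2500)
    return {
        "low_density_non_calcified_mm3": float(low_density.sum() * voxel_volume_mm3),
        "non_calcified_mm3": float(non_calcified.sum() * voxel_volume_mm3),
        "calcified_mm3": float(calcified.sum() * voxel_volume_mm3),
        "total_plaque_mm3": float(plaque_mask.sum() * voxel_volume_mm3),
    }
```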
In some embodiments, the lower and/or upper Hounsfield unit boundary threshold for determining whether a plaque corresponds to one or more of low density non-calcified plaque, non-calcified plaque, and/or calcified plaque can be about −1000 HU, about −900 HU, about −800 HU, about −700 HU, about −600 HU, about −500 HU, about −400 HU, about −300 HU, about −200 HU, about −190 HU, about −180 HU, about −170 HU, about −160 HU, about −150 HU, about −140 HU, about −130 HU, about −120 HU, about −110 HU, about −100 HU, about −90 HU, about −80 HU, about −70 HU, about −60 HU, about −50 HU, about −40 HU, about −30 HU, about −20 HU, about −10 HU, about 0 HU, about 10 HU, about 20 HU, about 30 HU, about 40 HU, about 50 HU, about 60 HU, about 70 HU, about 80 HU, about 90 HU, about 100 HU, about 110 HU, about 120 HU, about 130 HU, about 140 HU, about 150 HU, about 160 HU, about 170 HU, about 180 HU, about 190 HU, about 200 HU, about 210 HU, about 220 HU, about 230 HU, about 240 HU, about 250 HU, about 260 HU, about 270 HU, about 280 HU, about 290 HU, about 300 HU, about 310 HU, about 320 HU, about 330 HU, about 340 HU, about 350 HU, about 360 HU, about 370 HU, about 380 HU, about 390 HU, about 400 HU, about 410 HU, about 420 HU, about 430 HU, about 440 HU, about 450 HU, about 460 HU, about 470 HU, about 480 HU, about 490 HU, about 500 HU, about 510 HU, about 520 HU, about 530 HU, about 540 HU, about 550 HU, about 560 HU, about 570 HU, about 580 HU, about 590 HU, about 600 HU, about 700 HU, about 800 HU, about 900 HU, about 1000 HU, about 1100 HU, about 1200 HU, about 1300 HU, about 1400 HU, about 1500 HU, about 1600 HU, about 1700 HU, about 1800 HU, about 1900 HU, about 2000 HU, about 2100 HU, about 2200 HU, about 2300 HU, about 2400 HU, about 2500 HU, about 2600 HU, about 2700 HU, about 2800 HU, about 2900 HU, about 3000 HU, about 3100 HU, about 3200 HU, about 3300 HU, about 3400 HU, about 3500 HU, and/or about 4000 HU. In some embodiments, the system can be configured to determine and/or characterize the burden of atherosclerosis based at least part on volume of plaque. In some embodiments, the system can be configured to analyze and/or determine total volume of plaque and/or volume of low-density non-calcified plaque, non-calcified plaque, and/or calcified plaque. In some embodiments, the system can be configured to perform phenotyping of plaque by determining a ratio of one or more of the foregoing volumes of plaque, for example within an artery, lesion, vessel, and/or the like. In some embodiments, the system can be configured to analyze the progression of plaque. For example, in some embodiments, the system can be configured to analyze the progression of one or more particular regions of plaque and/or overall progression and/or lesion and/or artery-specific progression of plaque. In some embodiments, in order to analyze the progression of plaque, the system can be configured to analyze one or more serial images of the subject for phenotyping atherosclerosis. In some embodiments, tracking the progression of plaque can comprise analyzing changes and/or lack thereof in total plaque volume and/or volume of low-density non-calcified plaque, non-calcified plaque, and/or calcified plaque. In some embodiments, tracking the progression of plaque can comprise analyzing changes and/or lack thereof in density of a particular region of plaque and/or globally. 
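Serial progression tracking of the kind described above can be sketched as a comparison of per-subtype plaque volumes between a baseline and a follow-up scan, for example using the output of the classification sketch shown earlier. The field names are assumptions carried over from that sketch, not terms defined by the disclosure.

```python
# Sketch: absolute and relative change in per-subtype plaque volume between two scans.
def plaque_progression(baseline, followup):
    changes = {}
    for key in baseline:
        delta = followup[key] - baseline[key]
        changes[key] = {
            "absolute_change_mm3": delta,
            "percent_change": 100.0 * delta / baseline[key] if baseline[key] else None,
        }
    return changes
```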
In some embodiments, at block2710, the system can be configured to determine a correlation of the baseline risk factor level of the subject with the quantitative phenotyping of atherosclerosis. In some embodiments, the system can be configured to utilize one or more multivariable regression analyses, artificial intelligence (AI), and/or machine learning (ML) algorithms to automatically and/or dynamically determine a correlation between the risk factor level of the subject with results of quantitative phenotyping of atherosclerosis. For example, there can be a correlation between one or more quantitative plaque phenotyping variables and one or more CAD risk level factors. Such correlation can be subject-dependent, meaning that such correlation can be different and/or the same among different subjects. In some embodiments, the system can utilize an AI and/or ML algorithm trained on a plurality of subject data sets with known one or more quantitative plaque phenotyping variables and one or more CAD risk level factors to determine one or more distinct patterns which can be applied to a new subject. Generally speaking, even if two people have the exact same quantified plaque phenotyping, whether based on volume, composition, rate of progression, and/or the like, they can still show different CAD risk factor levels, such as for example different LDL cholesterol levels. As such, subjecting everyone to the same CAD risk factor level goal, such as for example a particular LDL cholesterol level, may not have the same desired effect on atherosclerosis which can be thought of as the actual disease. As such, some systems, devices, and methods described herein provide for individualized, subject-specific CAD risk factor goals that will actually have a meaningful impact on atherosclerosis and risk of CAD. In particular, it can be advantageous to maintain the total amount or volume of plaque while hardening existing plaque, for example by changing more low-density non-calcified plaque and/or non-calcified plaque into calcified plaque. By being able to estimate how a change in a particular CAD risk factor level will actually affect a quantified plaque measure or variable for a subject, in some embodiments, the system can be used to generate and/or facilitate generation of effective patient-specific or subject-specific treatment(s). As discussed herein, in some embodiments, one or more quantified atherosclerosis phenotyping and/or measures and/or variables can be correlated to one or more CAD risk factor levels of a particular subject. In some embodiments, the system can be configured to access a reference values database2716to facilitate determination of such correlation. In some embodiments, the reference values database2716can be locally accessible by the system and/or can be located remotely and accessible through a network connection. In some embodiments, the reference values database2716can comprise a plurality of CAD risk factor levels and/or quantified atherosclerosis phenotyping derived from a plurality of subjects, from which the system can be configured to determine the correlation between one or more quantified atherosclerosis phenotyping and one or more CAD risk factors for the subject. In some embodiments, the system can be configured to utilize such correlation to estimate the effect of how much a particular change in a particular CAD risk level factor will affect a particular quantified atherosclerosis phenotyping for that subject. 
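A deliberately simplified version of the correlation step at block 2710 is a single-variable least-squares fit between one CAD risk factor (here LDL) and one phenotyping variable (here non-calcified plaque volume) over reference subjects; the disclosure itself contemplates multivariable regression and/or AI/ML models. All numbers below are invented for illustration.

```python
# Sketch: fit plaque_volume ~= slope * LDL + intercept over reference subjects.
import numpy as np

def fit_risk_factor_to_phenotype(ldl_levels, plaque_volumes):
    slope, intercept = np.polyfit(ldl_levels, plaque_volumes, deg=1)
    return float(slope), float(intercept)

ldl = np.array([70, 90, 110, 130, 150, 170, 190])          # mg/dL (illustrative)
volume = np.array([20, 35, 60, 80, 115, 140, 170])          # mm^3 (illustrative)
slope, intercept = fit_risk_factor_to_phenotype(ldl, volume)
```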
In some embodiments, at block2712, the system can be configured to determine a threshold and/or thresholds of one or more quantitative atherosclerosis phenotyping measures or variables that will cause the subject to be considered to have elevated and/or normal risk of CAD. For example, in some embodiments, one or more threshold values of one or more quantitative phenotyping measures or variables can be tied to normal, low, medium, or high risk of CAD. In some embodiments, one or more threshold values of one or more quantitative phenotyping measures or variables can be tied to a percentage and/or normal distribution of risk of CAD among a wider population, such as for example the average, 75th percentile, 90th percentile, and/or the like. In some embodiments, the percentage and/or normal distribution of CAD risk can be for asymptomatic and/or symptomatic population at large and/or for an age and/or gender group of the subject and/or other group determined by another clinical factor. In some embodiments, the system can be configured to determine a threshold that is specific for that particular individual or patient, rather than one that applies to the population at large. In some embodiments, the determined threshold can be applicable to a number or a group of individuals, for example of those that share one or more common characteristics. For example, for a particular subject, the system can determine that a particular volume of total plaque, non-calcified plaque, low-density non-calcified plaque, calcified plaque, and/or a ratio thereof corresponds to a particular elevated or normal risk of CAD for the subject. In doing so, in some embodiments, the system can be configured to access the reference values database2716. In some embodiments, the system can be configured to utilize one or more AI and/or ML algorithms to determine one or more subject-specific thresholds of one or more quantitative phenotyping of atherosclerosis to lower the risk of CAD for the subject. In some embodiments, at block2714, the system can be configured to set or determine a CAD risk factor level goal for the individual or patient, for example based on the determined one or more thresholds of quantitative phenotyping of atherosclerosis. As discussed herein, in some embodiments, the determined CAD risk factor goal can be individualized and/or patient-specific. For example, in some embodiments, the system can be configured to set a patient-specific or subject-specific LDL cholesterol goal for that individual that is expected to lower one or more quantified atherosclerosis phenotyping to a desired level. In some embodiments, the system can be configured to access the reference values database2716in determining a subject-specific CAD risk factor level goal. In some embodiments, the system can be configured to utilize one or more AI and/or ML algorithms to determine a subject-specific CAD risk factor level goal. In some embodiments, at block2718, the system can be configured to determine a proposed treatment for the individual based on the set risk factor goal, which can be used to treat the patient. For example, in some embodiments, the system can generate a proposed treatment for treating the patient to an LDL cholesterol level that is associated with normal or low atherosclerosis burden, type, and/or rate of progression and/or any other type of quantified phenotyping of atherosclerosis. 
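Blocks 2712 and 2714 can then be pictured, under the same simplifying assumptions, as choosing a subject-specific phenotype threshold and inverting the fitted relationship to obtain a risk factor goal. The 25th-percentile target, the LDL floor of 40 mg/dL, and the example coefficients below are illustrative choices only.

```python
# Sketch: invert a fitted risk-factor/phenotype relationship to obtain a risk factor goal.
import numpy as np

def risk_factor_goal(slope, intercept, phenotype_threshold, floor=40.0):
    """Solve threshold = slope * goal + intercept for the risk factor goal,
    clamped to an assumed physiologic floor (here 40 mg/dL for LDL)."""
    goal = (phenotype_threshold - intercept) / slope
    return max(float(goal), floor)

reference_volumes = np.array([20, 35, 60, 80, 115, 140, 170])      # mm^3 (illustrative)
threshold = float(np.percentile(reference_volumes, 25))            # stricter target
ldl_goal = risk_factor_goal(slope=1.25, intercept=-70.0, phenotype_threshold=threshold)
```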
In some embodiments, the system can be configured to access a risk/treatment database2720in determining a proposed treatment for the subject. In some embodiments, the risk/treatment database2720can be locally accessible by the system and/or can be located remotely and accessible through a network connection. In some embodiments, the risk/treatment database2720can comprise a plurality of treatments that were given to patients for lowering risk of CAD, with or without longitudinal treatment results, and/or one or more quantified atherosclerosis phenotyping variables and/or one or more CAD risk factor level data. In some embodiments, the system can be configured to utilize one or more AI and/or ML algorithms to determine a subject-specific proposed treatment for lowering risk of CAD. In some embodiments, the proposed treatment can include one or more of medical intervention, such as a stent implantation or other procedure, medical treatment, such as prescription of statins or some other pharmaceutical, and/or lifestyle change, such as exercise or dietary changes. In some embodiments, as atherosclerosis burden, volume, composition, type, and/or rate of progression may be dynamic, the system can be configured to perform serial quantified phenotyping of atherosclerosis and re-calibrate and/or update the threshold of a risk factor for the patient, such as for example LDL. As such, in some embodiments, the system can be configured to repeat one or more processes described in relation to blocks2702-2720. As such, in some embodiments, the systems, devices, and methods described herein can be configured to leverage quantitative disease phenotyping to determine individual thresholds of risk factor control vs. lack of control. Further, in some embodiments, armed with this information, treatment targets for risk factors can be custom-made for individuals rather than relying on population-based estimates that average across a group of individuals. Computer System In some embodiments, the systems, processes, and methods described herein are implemented using a computing system, such as the one illustrated inFIG.27B. The example computer system2730is in communication with one or more computing systems2750and/or one or more data sources2752via one or more networks2748. WhileFIG.27Billustrates an embodiment of a computing system2730, it is recognized that the functionality provided for in the components and modules of computer system2730can be combined into fewer components and modules, or further separated into additional components and modules. The computer system2730can comprise a Patient-Specific Risk Factor Goal Determination and/or Tracking Module2744that carries out the functions, methods, acts, and/or processes described herein. The Patient-Specific Risk Factor Goal Determination and/or Tracking Module2744is executed on the computer system2730by a central processing unit2736discussed further below. In general, the word “module,” as used herein, refers to logic embodied in hardware or firmware or to a collection of software instructions, having entry and exit points. Modules are written in a programming language, such as JAVA, C, C++, or the like. Software modules can be compiled or linked into an executable program, installed in a dynamic link library, or can be written in an interpreted language such as BASIC, PERL, Lua, PHP, or Python, or any similar language. 
Software modules can be called from other modules or from themselves, and/or can be invoked in response to detected events or interruptions. Modules implemented in hardware include connected logic units such as gates and flip-flops, and/or can include programmable units, such as programmable gate arrays or processors. Generally, the modules described herein refer to logical modules that can be combined with other modules or divided into sub-modules despite their physical organization or storage. The modules are executed by one or more computing systems, and can be stored on or within any suitable computer readable medium, or implemented in-whole or in-part within special designed hardware or firmware. Not all calculations, analysis, and/or optimization require the use of computer systems, though any of the above-described methods, calculations, processes, or analyses can be facilitated through the use of computers. Further, in some embodiments, process blocks described herein can be altered, rearranged, combined, and/or omitted. The computer system2730includes one or more processing units (CPU)2736, which can comprise a microprocessor. The computer system2730further includes a physical memory2740, such as random access memory (RAM) for temporary storage of information, a read only memory (ROM) for permanent storage of information, and a mass storage device2734, such as a backing store, hard drive, rotating magnetic disks, solid state disks (SSD), flash memory, phase-change memory (PCM), 3D XPoint memory, diskette, or optical media storage device. Alternatively, the mass storage device can be implemented in an array of servers. Typically, the components of the computer system2730are connected to the computer using a standards based bus system. The bus system can be implemented using various protocols, such as Peripheral Component Interconnect (PCI), Micro Channel, SCSI, Industrial Standard Architecture (ISA) and Extended ISA (EISA) architectures. The computer system2730includes one or more input/output (I/O) devices and interfaces2742, such as a keyboard, mouse, touch pad, and printer. The I/O devices and interfaces2742can include one or more display devices, such as a monitor, that allows the visual presentation of data to a user. More particularly, a display device provides for the presentation of GUIs as application software data, and multi-media presentations, for example. The I/O devices and interfaces2742can also provide a communications interface to various external devices. The computer system2730can comprise one or more multi-media devices208, such as speakers, video cards, graphics accelerators, and microphones, for example. Computing System Device/Operating System The computer system2730can run on a variety of computing devices, such as a server, a Windows server, a Structure Query Language server, a Unix Server, a personal computer, a laptop computer, and so forth. In other embodiments, the computer system2730can run on a cluster computer system, a mainframe computer system and/or other computing system suitable for controlling and/or communicating with large databases, performing high volume transaction processing, and generating reports from large databases. The computing system2730is generally controlled and coordinated by an operating system software, such as z/OS, Windows, Linux, UNIX, BSD, PHP, SunOS, Solaris, MacOS, ICloud services or other compatible operating systems, including proprietary operating systems. 
Operating systems control and schedule computer processes for execution, perform memory management, provide file system, networking, and I/O services, and provide a user interface, such as a graphical user interface (GUI), among other things. Network The computer system2730illustrated inFIG.27Bis coupled to a network2748, such as a LAN, WAN, or the Internet, via a communication link2746(wired, wireless, or a combination thereof). Network2748communicates with various computing devices and/or other electronic devices. Network2748also communicates with the one or more computing systems2750and the one or more data sources2752. The Patient-Specific Risk Factor Goal Determination and/or Tracking Module2744can access or can be accessed by computing systems2750and/or data sources2752through a web-enabled user access point. Connections can be a direct physical connection, a virtual connection, or another connection type. The web-enabled user access point can comprise a browser module that uses text, graphics, audio, video, and other media to present data and to allow interaction with data via the network2748. The output module can be implemented as a combination of an all-points addressable display such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, or other types and/or combinations of displays. The output module can be implemented to communicate with input devices2742and can also include software with the appropriate interfaces that allow a user to access data through the use of stylized screen elements, such as menus, windows, dialogue boxes, tool bars, and controls (for example, radio buttons, check boxes, sliding scales, and so forth). Furthermore, the output module can communicate with a set of input and output devices to receive signals from the user. Other Systems The computing system2730can include one or more internal and/or external data sources (for example, data sources2752). In some embodiments, one or more of the data repositories and the data sources described above can be implemented using a relational database, such as DB2, Sybase, Oracle, CodeBase, and Microsoft® SQL Server, as well as other types of databases such as a flat-file database, an entity-relationship database, an object-oriented database, and/or a record-based database. The computer system2730can also access one or more data sources (or databases)2752. The databases2752can be stored in a database or data repository. The computer system2730can access the one or more databases2752through a network2748or can directly access the database or data repository through I/O devices and interfaces2742. The data repository storing the one or more databases2752can reside within the computer system2730. URLs and Cookies In some embodiments, one or more features of the systems, methods, and devices described herein can utilize a URL and/or cookies, for example for storing and/or transmitting data or user information. A Uniform Resource Locator (URL) can include a web address and/or a reference to a web resource that is stored on a database and/or a server. The URL can specify the location of the resource on a computer and/or a computer network. The URL can include a mechanism to retrieve the network resource. The source of the network resource can receive a URL, identify the location of the web resource, and transmit the web resource back to the requestor. A URL can be converted to an IP address, and a Domain Name System (DNS) can look up the URL and its corresponding IP address.
URLs can be references to web pages, file transfers, emails, database accesses, and other applications. The URLs can include a sequence of characters that identify a path, domain name, a file extension, a host name, a query, a fragment, scheme, a protocol identifier, a port number, a username, a password, a flag, an object, a resource name and/or the like. The systems disclosed herein can generate, receive, transmit, apply, parse, serialize, render, and/or perform an action on a URL. Examples of Embodiments Relating to Determining Patient Specific Risk Factor Goals from Image-Based Quantification: The following are non-limiting examples of certain embodiments of systems and methods for determining patient specific risk factor goals and/or other related features. Other embodiments may include one or more other features, or different features, that are discussed herein. Embodiment 1: A computer-implemented method for determining patient-specific coronary artery disease (CAD) risk factor goals based on quantification of coronary atherosclerosis and vascular morphology features using non-invasive medical image analysis, the method comprising: accessing, by a computer system, a CAD risk factor level for a subject; accessing, by the computer system, a medical image of the subject, the medical image comprising one or more coronary arteries; analyzing, by the computer system, the medical image of the subject to perform quantitative phenotyping of atherosclerosis and vascular morphology, the quantitative phenotyping of atherosclerosis comprising analysis of one or more of plaque volume, plaque composition, or plaque progression; determining, by the computer system, correlation of the CAD risk factor level with the quantitative phenotyping of atherosclerosis and vascular morphology; determining, by the computer system, an individualized CAD risk factor level threshold of elevated risk of CAD for the subject based at least in part on the CAD risk factor level and the determined correlation of the CAD risk factor level with the quantitative phenotyping of atherosclerosis and vascular morphology; and determining, by the computer system, a subject-specific goal for the CAD risk factor level based at least in part on the determined individualized CAD risk factor level threshold of elevated risk of CAD for the subject, wherein the determined subject-specific goal for the CAD risk factor level is configured to be used to determine an individualized treatment for the subject, wherein the computer system comprises a computer processor and an electronic storage medium. Embodiment 2: The computer-implemented method of Embodiment 1, wherein the CAD risk factor level comprises one or more of cholesterol level, low-density lipoprotein (LDL) cholesterol level, high-density lipoprotein (HDL) cholesterol level, cholesterol particle size and fluffiness, inflammation level, glycosylated hemoglobin, or blood pressure. Embodiment 3: The computer-implemented method of Embodiments 1 or 2, wherein the quantitative phenotyping of atherosclerosis is performed based at least in part on analysis of density values of one or more pixels of the medical image corresponding to plaque. Embodiment 4: The computer-implemented method of any one of Embodiments 1-3, wherein the plaque volume comprises one or more of total plaque volume, calcified plaque volume, non-calcified plaque volume, or low-density non-calcified plaque volume. 
Embodiment 5: The computer-implemented method of Embodiment 3, wherein the density values comprise radiodensity values. Embodiment 6: The computer-implemented method of any one of Embodiments 3-5, wherein the plaque composition comprises composition of one or more of calcified plaque, non-calcified plaque, or low-density non-calcified plaque. Embodiment 7: The computer-implemented method of Embodiment 6, wherein one or more of the calcified plaque, non-calcified plaque, or low-density non-calcified plaque is identified based at least in part on radiodensity values of one or more pixels of the medical image corresponding to plaque. Embodiment 8: The computer-implemented method of Embodiment 7, wherein calcified plaque comprises one or more pixels of the medical image with radiodensity values of between about 351 and about 2500 Hounsfield units, non-calcified plaque comprises one or more pixels of the medical image with radiodensity values of between about 31 and about 250 Hounsfield units, and low-density non-calcified plaque comprises one or more pixels of the medical image with radiodensity values of between about −189 and about 30 Hounsfield units. Embodiment 9: The computer-implemented method of any one of Embodiments 1-8, wherein the plaque progression is determined by: accessing, by the computer system, one or more serial medical images of the patient, the one or more serial medical images comprising one or more coronary arteries; and analyzing, by the computer system, the one or more serial medical images of the patient to determine plaque progression based at least in part on a serial change in plaque volume. Embodiment 10: The computer-implemented method of Embodiment 9, wherein the serial change in plaque volume is based on one or more of total plaque volume, calcified plaque volume, non-calcified plaque volume, or low-density non-calcified plaque volume. Embodiment 11: The computer-implemented method of any one of Embodiments 1-10, wherein the vascular morphology comprises one or more of absolute minimum lumen diameter or area, lumen diameter, cross-sectional lumen area, vessel volume, lumen volume, arterial remodeling, vessel or lumen geometry, or vessel or lumen curvature. Embodiment 12: The computer-implemented method of any one of Embodiments 1-11, wherein the correlation of the CAD risk factor level with the quantitative phenotyping of atherosclerosis is determined based at least in part by multivariable regression analysis. Embodiment 13: The computer-implemented method of any one of Embodiments 1-12, wherein the correlation of the CAD risk factor level with the quantitative phenotyping of atherosclerosis is determined based at least in part by a machine learning algorithm. Embodiment 14: The computer-implemented method of any one of Embodiments 1-13, wherein the medical image comprises a Computed Tomography (CT) image. Embodiment 15: The computer-implemented method of any one of Embodiments 1-14, wherein the medical image is obtained using an imaging technique comprising one or more of CT, x-ray, ultrasound, echocardiography, intravascular ultrasound (IVUS), MR imaging, optical coherence tomography (OCT), nuclear medicine imaging, positron-emission tomography (PET), single photon emission computed tomography (SPECT), or near-field infrared spectroscopy (NIRS).
Embodiment 16: The computer-implemented method of any one of Embodiments 1-15, wherein the treatment for cardiovascular disease comprises medical intervention, medical treatment, or lifestyle interventions, including but not limited to changes in diet, physical activity, anxiety and stress level, sleep, and others. Embodiment 17: The computer-implemented method of any one of Embodiments 1-16, further comprising: accessing, by the computer system, a second medical image of the subject, the second medical image obtained at a later point in time than the medical image; analyzing, by the computer system, the second medical image of the subject to perform quantitative phenotyping of atherosclerosis; recalibrating, by the computer system, the individualized CAD risk factor level threshold of elevated risk of CAD for the subject based at least in part on the quantitative phenotyping of atherosclerosis of the second medical image; and updating, by the computer system, the subject-specific goal for the CAD risk factor level based at least in part on the recalibrated individualized CAD risk factor level threshold of elevated risk of CAD for the subject, wherein the updated subject-specific goal for the CAD risk factor level is configured to be used to change or maintain the individualized treatment for the subject. Embodiment 18: A system for determining patient-specific coronary artery disease (CAD) risk factor goals based on quantification of coronary atherosclerosis using non-invasive medical image analysis, the system comprising: one or more computer readable storage devices configured to store a plurality of computer executable instructions; and one or more hardware computer processors in communication with the one or more computer readable storage devices and configured to execute the plurality of computer executable instructions in order to cause the system to: access a CAD risk factor level for a subject; access a medical image of the subject, the medical image comprising one or more coronary arteries; analyze the medical image of the subject to perform quantitative phenotyping of atherosclerosis, the quantitative phenotyping of atherosclerosis comprising analysis of one or more of plaque volume, plaque composition, or plaque progression; determine correlation of the CAD risk factor level with the quantitative phenotyping of atherosclerosis; determine an individualized CAD risk factor level threshold of elevated risk of CAD for the subject based at least in part on the CAD risk factor level and the determined correlation of the CAD risk factor level with the quantitative phenotyping of atherosclerosis; and determine a subject-specific goal for the CAD risk factor level based at least in part on the determined individualized CAD risk factor level threshold of elevated risk of CAD for the subject, wherein the determined subject-specific goal for the CAD risk factor level is configured to be used to determine an individualized treatment for the subject. Embodiment 19: The system of Embodiment 18, wherein the CAD risk factor level comprises one or more of cholesterol level, low-density lipoprotein (LDL) cholesterol level, high-density lipoprotein (HDL) cholesterol level, cholesterol particle size and fluffiness, inflammation level, glycosylated hemoglobin, or blood pressure.
Embodiment 20: The system of Embodiments 18 or 19, wherein the quantitative phenotyping of atherosclerosis is performed based at least in part on analysis of density values of one or more pixels of the medical image corresponding to plaque. Embodiment 21: The system of any one of Embodiments 18-20, wherein the density values comprise radiodensity values. Embodiment 22: The system of any one of Embodiments 18-21, wherein the plaque volume comprises one or more of total plaque volume, calcified plaque volume, non-calcified plaque volume, or low-density non-calcified plaque volume. Embodiment 23: The system of Embodiment 21, wherein the plaque composition comprises composition of one or more of calcified plaque, non-calcified plaque, or low-density non-calcified plaque. Embodiment 24: The system of any one of Embodiments 18-23, wherein the plaque progression is determined by: accessing, by the computer system, one or more serial medical images of the patient, the one or more serial medical images comprising one or more coronary arteries; and analyzing, by the computer system, the one or more serial medical images of the patient to determine plaque progression based at least in part on a serial change in plaque volume. Embodiment 25: The system of any one of Embodiments 22-24, wherein the serial change in plaque volume is based on one or more of total plaque volume, calcified plaque volume, non-calcified plaque volume, or low-density non-calcified plaque volume. Embodiment 26: The system of any one of Embodiments 18-25, wherein the correlation of the CAD risk factor level with the quantitative phenotyping of atherosclerosis is determined based at least in part by multivariable regression analysis. Embodiment 27: The system of any one of Embodiments 18-26, wherein the correlation of the CAD risk factor level with the quantitative phenotyping of atherosclerosis is determined based at least in part by a machine learning algorithm. Embodiment 28: The system of any one of Embodiments 18-27, wherein the medical image comprises a Computed Tomography (CT) image. Embodiment 29: The system of any one of Embodiments 18-28, wherein the medical image is obtained using an imaging technique comprising one or more of CT, x-ray, ultrasound, echocardiography, intravascular ultrasound (IVUS), MR imaging, optical coherence tomography (OCT), nuclear medicine imaging, positron-emission tomography (PET), single photon emission computed tomography (SPECT), or near-field infrared spectroscopy (NIRS). Embodiment 30: The system of any one of Embodiments 18-29, wherein the treatment for cardiovascular disease comprises medical intervention, medical treatment, or lifestyle change.
Embodiment 31: The system of any one of Embodiments 18-30, wherein the system is further caused to: access a second medical image of the subject, the second medical image obtained at a later point in time than the medical image; analyze the second medical image of the subject to perform quantitative phenotyping of atherosclerosis; recalibrate the individualized CAD risk factor level threshold of elevated risk of CAD for the subject based at least in part on the quantitative phenotyping of atherosclerosis of the second medical image; and update the subject-specific goal for the CAD risk factor level based at least in part on the recalibrated individualized CAD risk factor level threshold of elevated risk of CAD for the subject, wherein the updated subject-specific goal for the CAD risk factor level is configured to be used to change or maintain the individualized treatment for the subject. Embodiment 32: The system of any one of Embodiments 18-25, wherein the system is further caused to analyze the medical image of the subject to perform phenotyping of vascular morphology, wherein the subject-specific goal for the CAD risk factor level is further determined based at least in part on the phenotyping of vascular morphology, the vascular morphology comprising one or more of absolute minimum lumen diameter or area, lumen diameter, cross-sectional lumen area, vessel volume, lumen volume, arterial remodeling, vessel or lumen geometry, or vessel or lumen curvature. Automated Diagnosis, Risk Assessment, and Characterization of Heart Disease Generally speaking, heart disease or a major adverse cardiovascular event (MACE) or arterial disease, such as coronary artery disease (CAD) or peripheral artery disease (PAD), can be extremely difficult to diagnose until a patient becomes very symptomatic. This can be due to the fact that existing methods focus on detecting severe and/or physical symptoms, which typically arise only in later stages of heart disease, such as for example active chest pain, active heart attack, cardiogenic shock, and/or the like. In addition, risk of heart disease or MACE can be dependent on a number of different factors and/or variables, making it difficult to diagnose, characterize, and/or predict. As used herein, MACE can refer to one or more of a stroke, myocardial infarction, cardiovascular death, admission for heart failure, ischemic cardiovascular events, cardiac death, hospitalization for heart failure, angina pain, cardiovascular-related illness, cardiac arrest, heart attack, and/or the like. In some embodiments, the systems, devices, and methods described herein address such technical shortcomings by providing an image-based and/or non-invasive approach to diagnose, characterize, predict, and/or otherwise assess risk of MACE or arterial disease of a subject by taking into account one or more analyses, for example of coronary atherosclerosis, aortic atherosclerosis, and/or emphysema. Coronary atherosclerosis, aortic atherosclerosis, and/or emphysema can all be considered a cause, factor, and/or variable in risk of MACE or arterial disease.
However, existing technologies fail to provide a comprehensive solution that can take such multiple factors into consideration in assessing risk of MACE or arterial disease. In addition, the interrelation between coronary atherosclerosis, aortic atherosclerosis, and/or emphysema when assessing risk of MACE or arterial disease can be difficult to ascertain. As such, in some embodiments, the systems, methods, and devices are configured to determine a likelihood and/or risk of MACE or arterial disease based on inputs of one or more of coronary atherosclerosis, aortic atherosclerosis, and/or emphysema, for example utilizing one or more machine learning (ML) and/or artificial intelligence (AI) algorithms. In some embodiments, the combination of analyzing coronary atherosclerosis, aortic atherosclerosis, and emphysema can provide synergistic effects in more accurately determining the risk of MACE and/or arterial disease. Moreover, it can be advantageous to non-invasively determine risk of MACE or arterial disease instead of using invasive measures, such as for example a stress test and/or the like. As such, in some embodiments, the systems, methods, and devices can be configured to analyze one or more images obtained non-invasively to derive, phenotype, characterize, quantify, and/or otherwise analyze coronary atherosclerosis, aortic atherosclerosis, and/or emphysema, the results of which can then be used to diagnose, assess, and/or characterize risk of MACE or arterial disease for a subject, thereby providing a multi-factor and/or non-invasive approach to MACE or arterial disease risk assessment. Such risk assessment can further be used to generate a proposed treatment for a subject for lowering and/or maintaining risk of MACE or arterial disease. In some embodiments, the system can be configured to analyze just coronary and aortic atherosclerosis, and not emphysema, in assessing risk of MACE or arterial disease. In some embodiments, the system can be configured to analyze just coronary atherosclerosis and emphysema in assessing risk of MACE or arterial disease. In some embodiments, the system can be configured to analyze coronary atherosclerosis, aortic atherosclerosis, and emphysema in assessing risk of MACE or arterial disease. In some embodiments, the system can be configured to utilize a reference database with risk assessments of MACE or arterial disease based on one or more of coronary atherosclerosis, aortic atherosclerosis, and/or emphysema to generate a population-based percentage of risk of MACE or arterial disease for a subject. In some embodiments, the population-based percentage can be based on one or more other factors, such as for example age, gender, ethnicity, and/or risk factors. In particular, in some embodiments, the systems, devices, and methods described herein are configured to diagnose, characterize, assess the risk of, and/or augment or enhance the diagnosis of MACE, heart disease, coronary heart disease, coronary atherosclerotic disease, arterial disease, and/or the like on a sub-clinical level. Further, in some embodiments, the systems, devices, and methods described herein are configured to diagnose, characterize, assess the risk of, and/or augment or enhance the diagnosis of MACE, arterial disease, heart disease, coronary heart disease, coronary atherosclerotic disease, and/or the like utilizing one or more image analysis techniques and/or processes.
In some embodiments, the systems, devices, and methods described herein are configured to diagnose, characterize, assess the risk of, and/or augment or enhance the diagnosis of MACE, arterial disease, heart disease, coronary heart disease, coronary atherosclerotic disease, and/or the like even when the subject has not experienced any physical symptoms, such as active chest pain, active heart attack, cardiogenic shock, and/or the like. In some embodiments, the systems, devices, and methods described herein are configured to diagnose, characterize, assess the risk of, and/or augment or enhance the diagnosis of MACE, arterial disease, heart disease, coronary heart disease, coronary atherosclerotic disease, and/or the like without the need to analyze any such physical symptoms. In some embodiments, the systems, devices, and methods described herein are configured to diagnose, characterize, assess the risk of, and/or augment or enhance the diagnosis of MACE, arterial disease, heart disease, coronary heart disease, coronary atherosclerotic disease, and/or the like utilizing one or more image analysis techniques and/or processes and optionally supplementing the same based on a history of physical symptoms experienced by the subject, such as for example active chest pain, active heart attack, cardiogenic shock, and/or the like. As such, in some embodiments, the systems, methods, and devices described herein can be configured to diagnose, characterize, assess the risk of, and/or augment or enhance the diagnosis of asymptomatic atherosclerosis, such as asymptomatic aortic atherosclerosis and/or asymptomatic coronary atherosclerosis, and/or emphysema. As discussed herein, in some embodiments, the systems, devices, and methods described herein can be configured to utilize one or more image analysis and/or processing techniques to diagnose, characterize, assess the risk of, and/or augment or enhance the diagnosis of MACE, arterial disease, and/or heart disease, whether symptomatic or asymptomatic, such as for example based on aortic atherosclerosis, coronary atherosclerosis, emphysema and/or the like. For example, in some embodiments, the systems, methods, and devices can be configured to analyze one or more medical images of a subject, such as a coronary CT angiography (CCTA), using one or more image processing, artificial intelligence, and/or machine learning techniques. In some embodiments, the systems, methods, and devices described herein can be configured to diagnose, characterize, assess the risk of, and/or augment or enhance the diagnosis of heart disease by analyzing one or more medical images, such as for example a contrast-enhanced CCTA, non-contrast CT, non-contrast coronary calcium scoring, non-gated contrast or contrast chest CT scans, abdominal CT scan, MRI angiography, x-ray fluoroscopy, and/or the like. In some embodiments, the systems, methods, and devices described herein can be configured to utilize analyses of coronary atherosclerosis, aortic atherosclerosis, and/or emphysema to identify high-risk subjects of MACE or arterial disease. For example, in some embodiments, the system can be configured to utilize analyses of coronary atherosclerosis, aortic atherosclerosis, and/or emphysema to identify new formers of plaque or non-calcified plaque, rapid progressors of plaque or non-calcified plaque, and/or non-responders to medicine or treatment. 
More specifically, in some embodiments, the system can be configured to utilize one or more plaque parameters, quantified plaque phenotyping, and/or the like described herein, as applied to a coronary and/or aortic artery, and/or image-based analysis of emphysema for such analyses. In some embodiments, the system can be configured to analyze just coronary and aortic atherosclerosis, and not emphysema, to identify new formers of plaque or non-calcified plaque, rapid progressors of plaque or non-calcified plaque, and/or non-responders to medicine or treatment. In some embodiments, the system can be configured to analyze just coronary atherosclerosis and emphysema, and not aortic atherosclerosis, to identify new formers of plaque or non-calcified plaque, rapid progressors of plaque or non-calcified plaque, and/or non-responders to medicine or treatment. In some embodiments, the system can be configured to analyze coronary atherosclerosis, aortic atherosclerosis, and emphysema to identify new formers of plaque or non-calcified plaque, rapid progressors of plaque or non-calcified plaque, and/or non-responders to medicine or treatment. In some embodiments, the systems, methods, and devices described herein can be configured to utilize analyses of coronary atherosclerosis, aortic atherosclerosis, and/or emphysema to determine the likelihood of peripheral artery disease (PAD). PAD has a worldwide prevalence of more than 200 million, with an estimated 8-12 million Americans affected. The prevalence of PAD is expected to increase as the population ages, smoking status persists, and the prevalence of diabetes, hypertension, and obesity grow. Although awareness has improved, PAD is still associated with significant morbidity, mortality, and quality of life impairment. Given the substantial prevalence of PAD, it can be imperative that a screening program be undertaken to identify those with high risk of PAD, which can be done utilizing one or more systems, devices, and methods described herein. As a non-limiting example,FIGS.28A-28Billustrate an example embodiment of identification of coronary and aortic disease/atherosclerosis identified on a coronary CT angiogram (CCTA) utilizing embodiments of the systems, devices, and methods described herein. As illustrated inFIG.28A, one or more coronary arteries can be imaged as part of a CCTA and, as illustrated inFIG.28B, the aorta can also be imaged as part of a CCTA. As such, analysis of coronary and aortic atherosclerosis can be performed together based on a single CCTA in some embodiments that are configured to analyze CCTAs using one or more image analysis techniques as described herein, including for example analysis of one or more plaque, fat, and/or vessel parameters. For example, the one or more plaque parameters can include plaque volume, composition, attenuation, location, geometry, and/or any other plaque parameters described herein. In some embodiments, the systems, devices, and methods described herein can be configured to utilize the diagnosis, characterization, and/or risk assessment of heart disease of a subject, such as for example coronary and/or aortic atherosclerosis, to further generate a report, treatment, and/or prognosis and/or identify or track resource utilization for the subject. By utilizing such techniques, in some embodiments, the systems, devices, and methods described herein can allow for early diagnosis and/or treatment of heart disease prior to the subject experiencing physical symptoms. 
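As a non-limiting illustration of how the quantified plaque parameters described above (for example, per-territory plaque volumes by sub-type) might be organized for the downstream analyses in this section, the following Python sketch defines a simple container; the class name, field names, and the derived ratio are illustrative assumptions rather than a required data model.

    from dataclasses import dataclass

    @dataclass
    class PlaquePhenotype:
        """Illustrative container for quantified plaque parameters of one territory
        (for example, the coronary arteries or the aorta) derived from a CCTA."""
        territory: str                      # assumed label, e.g. "coronary" or "aorta"
        total_volume_mm3: float             # total plaque volume
        calcified_volume_mm3: float         # plaque in roughly the 351-2500 HU range
        non_calcified_volume_mm3: float     # plaque in roughly the 31-350 HU range
        low_density_ncp_volume_mm3: float   # plaque in roughly the -189 to 30 HU range

        def low_density_fraction(self) -> float:
            """Fraction of total plaque volume that is low-density non-calcified plaque."""
            if self.total_volume_mm3 == 0:
                return 0.0
            return self.low_density_ncp_volume_mm3 / self.total_volume_mm3

A per-territory container of this kind can hold the plaque volume, composition, and related parameters mentioned above, and can be populated once for the coronary arteries and once for the aorta from a single CCTA.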
For example, in some embodiments, the systems, devices, and methods described herein can be configured to automatically and/or dynamically place a subject in a particular vascular or heart disease category based at least in part on the diagnosis, characterization, and/or risk assessment of heart disease of the subject based on image analysis. In some embodiments, the systems, devices, and methods described herein can be configured to further assign a risk-adjusted weight for the subject to anticipate prognosis and/or resource utilization for the subject, for example based at least in part on the diagnosis, characterization, and/or risk assessment of heart disease of the subject based on image analysis and/or the particular vascular or heart disease category determined for the subject. FIG.28Cis a flowchart illustrating an example embodiment(s) of systems, devices, and methods for image-based diagnosis, risk assessment, and/or characterization of a major adverse cardiovascular event. As illustrated inFIG.28C, in some embodiments, the system can be configured to analyze one or more of coronary atherosclerosis, aortic atherosclerosis, and/or emphysema, for example from a medical image, to determine risk of MACE or arterial disease (AD), such as PAD, for a subject. In some embodiments, at block2802, the system can be configured to access and/or modify one or more medical images. In some embodiments, the medical image can include one or more arteries, such as coronary, aorta, carotid, and/or other arteries and/or one or more portions of the lungs of a subject. In some embodiments, the medical image can comprise a CCTA. In some embodiments, the medical image can comprise an image field that is typically acquired during a CCTA. In some embodiments, the medical image can comprise a larger image field than that is typically acquired during a CCTA, for example to capture one or more portions of the aorta and/or lungs. In some embodiments, the system can be configured to access multiple images, one or more of which captures one or more portions of the coronary arteries, aorta, and/or lungs. For example, in some embodiments, the system can be configured to access one medical image that comprises one or more portions of the coronary arteries and/or aorta of the subject and a separate image that comprises one or more portions of the lungs. In some embodiments, the system can be configured to access one medical image that comprises one or more portions of the coronary arteries, one medical image that comprises one or more portions of the aorta, and one medical image that comprises one or more portions of the lungs. In some embodiments, the system can be configured to access a single medical image that comprises one or more portions of the coronary arteries, aorta, and the lungs. For example, in some embodiments, the system can be configured to access a single image acquired from a single image acquisition to analyze one or more portions of the coronary arteries, aorta, and the lungs to determine risk of MACE or arterial disease, such as PAD, for a subject. In some embodiments, the medical image can be stored in a medical image database2804. In some embodiments, the medical image database2804can be locally accessible by the system and/or can be located remotely and accessible through a network connection. 
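As a non-limiting sketch of how a system might access a CT volume for the analyses at blocks2806,2808, and2810, the following Python example reads a DICOM series with the SimpleITK library and returns an array of Hounsfield-unit values; the library choice, function name, and directory path are illustrative assumptions, and retrieval from a remote medical image database over a network connection would require additional logic not shown here.

    import numpy as np
    import SimpleITK as sitk

    def load_ct_volume_hu(dicom_dir: str) -> np.ndarray:
        """Read a DICOM series from a local directory and return a 3-D array of HU values."""
        reader = sitk.ImageSeriesReader()
        series_files = reader.GetGDCMSeriesFileNames(dicom_dir)  # sorted slice files of the series
        reader.SetFileNames(series_files)
        image = reader.Execute()
        # For CT data, the reader typically applies the DICOM rescale slope/intercept,
        # so the resulting voxel values are already expressed in Hounsfield units.
        return sitk.GetArrayFromImage(image).astype(np.int16)

    # Example with a hypothetical path: volume_hu = load_ct_volume_hu("/data/ccta/subject_001")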
The medical image can comprise an image obtained using one or more modalities such as, for example, CT, Dual-Energy Computed Tomography (DECT), Spectral CT, photon-counting CT, x-ray, ultrasound, echocardiography, intravascular ultrasound (IVUS), Magnetic Resonance (MR) imaging, optical coherence tomography (OCT), nuclear medicine imaging, positron-emission tomography (PET), single photon emission computed tomography (SPECT), or near-field infrared spectroscopy (NIRS). In some embodiments, the medical image comprises one or more of a contrast-enhanced CT image, non-contrast CT image, MR image, and/or an image obtained using any of the modalities described above. In some embodiments, the system can be configured to automatically and/or dynamically perform one or more analyses of the medical image as discussed herein. For example, in some embodiments, at block2806, the system can be configured to identify, analyze, and/or quantify coronary atherosclerosis. In some embodiments, the system can be configured to perform quantified phenotyping of coronary atherosclerosis. For example, in some embodiments, the quantitative phenotyping can be of atherosclerosis burden, volume, type, composition, and/or rate of progression for the individual or patient. In some embodiments, the system can be configured to utilize one or more image processing, artificial intelligence (AI), and/or machine learning (ML) algorithms to automatically and/or dynamically perform quantitative phenotyping of atherosclerosis. For example, in some embodiments, the system can be configured to automatically and/or dynamically identify one or more arteries, vessels, and/or a portion thereof on the medical image, identify one or more regions of plaque, and/or perform quantitative phenotyping of plaque. In some embodiments, the system can be configured to identify and/or characterize different types and/or regions of coronary atherosclerosis or plaque, for example based on density, absolute density, material density, relative density, and/or radiodensity. In some embodiments, the system can be configured to characterize a region of plaque into one or more sub-types of plaque. For example, in some embodiments, the system can be configured to characterize a region of plaque as one or more of low density non-calcified plaque, non-calcified plaque, or calcified plaque. In some embodiments, calcified plaque can correspond to plaque having a highest density range, low density non-calcified plaque can correspond to plaque having a lowest density range, and non-calcified plaque can correspond to plaque having a density range between calcified plaque and low density non-calcified plaque. For example, in some embodiments, the system can be configured to characterize a particular region of plaque as low density non-calcified plaque when the radiodensity of an image pixel or voxel corresponding to that region of plaque is between about −189 and about 30 Hounsfield units (HU). In some embodiments, the system can be configured to characterize a particular region of plaque as non-calcified plaque when the radiodensity of an image pixel or voxel corresponding to that region of plaque is between about 31 and about 350 HU. In some embodiments, the system can be configured to characterize a particular region of plaque as calcified plaque when the radiodensity of an image pixel or voxel corresponding to that region of plaque is between about 351 and about 2500 HU.
In some embodiments, the lower and/or upper Hounsfield unit boundary threshold for determining whether a plaque corresponds to one or more of low density non-calcified plaque, non-calcified plaque, and/or calcified plaque can be about −1000 HU, about −900 HU, about −800 HU, about −700 HU, about −600 HU, about −500 HU, about −400 HU, about −300 HU, about −200 HU, about −190 HU, about −180 HU, about −170 HU, about −160 HU, about −150 HU, about −140 HU, about −130 HU, about −120 HU, about −110 HU, about −100 HU, about −90 HU, about −80 HU, about −70 HU, about −60 HU, about −50 HU, about −40 HU, about −30 HU, about −20 HU, about −10 HU, about 0 HU, about 10 HU, about 20 HU, about 30 HU, about 40 HU, about 50 HU, about 60 HU, about 70 HU, about 80 HU, about 90 HU, about 100 HU, about 110 HU, about 120 HU, about 130 HU, about 140 HU, about 150 HU, about 160 HU, about 170 HU, about 180 HU, about 190 HU, about 200 HU, about 210 HU, about 220 HU, about 230 HU, about 240 HU, about 250 HU, about 260 HU, about 270 HU, about 280 HU, about 290 HU, about 300 HU, about 310 HU, about 320 HU, about 330 HU, about 340 HU, about 350 HU, about 360 HU, about 370 HU, about 380 HU, about 390 HU, about 400 HU, about 410 HU, about 420 HU, about 430 HU, about 440 HU, about 450 HU, about 460 HU, about 470 HU, about 480 HU, about 490 HU, about 500 HU, about 510 HU, about 520 HU, about 530 HU, about 540 HU, about 550 HU, about 560 HU, about 570 HU, about 580 HU, about 590 HU, about 600 HU, about 700 HU, about 800 HU, about 900 HU, about 1000 HU, about 1100 HU, about 1200 HU, about 1300 HU, about 1400 HU, about 1500 HU, about 1600 HU, about 1700 HU, about 1800 HU, about 1900 HU, about 2000 HU, about 2100 HU, about 2200 HU, about 2300 HU, about 2400 HU, about 2500 HU, about 2600 HU, about 2700 HU, about 2800 HU, about 2900 HU, about 3000 HU, about 3100 HU, about 3200 HU, about 3300 HU, about 3400 HU, about 3500 HU, and/or about 4000 HU. In some embodiments, the system can be configured to determine and/or characterize the burden of coronary atherosclerosis based at least part on volume of plaque. In some embodiments, the system can be configured to analyze and/or determine total volume of coronary plaque and/or volume of low-density non-calcified plaque, non-calcified plaque, and/or calcified plaque in the analyzed coronaries. In some embodiments, the system can be configured to perform phenotyping of coronary atherosclerosis by determining a ratio of one or more of the foregoing volumes of plaque, for example within an artery, lesion, vessel, and/or the like. In some embodiments, the system can be configured to analyze the progression of coronary atherosclerosis. For example, in some embodiments, the system can be configured to analyze the progression of one or more particular regions of plaque and/or overall progression and/or lesion and/or artery-specific progression of plaque. In some embodiments, in order to analyze the progression of plaque, the system can be configured to analyze one or more serial images of the subject for phenotyping atherosclerosis. In some embodiments, tracking the progression of plaque can comprise analyzing changes and/or lack thereof in total plaque volume and/or volume of low-density non-calcified plaque, non-calcified plaque, and/or calcified plaque. In some embodiments, tracking the progression of plaque can comprise analyzing changes and/or lack thereof in density of a particular region of plaque and/or globally. 
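As a non-limiting illustration of the characterization described above, the following Python sketch classifies plaque voxels into the three sub-types using the example Hounsfield-unit ranges given in this section and computes per-sub-type volumes; the plaque segmentation mask and helper names are assumptions, since in practice the mask identifying plaque voxels would come from the image-processing, AI, and/or ML algorithms described herein, and any of the alternative boundary thresholds listed above could be substituted.

    import numpy as np

    # Example HU ranges from this section (other listed boundary thresholds could be used instead).
    LD_NCP_RANGE = (-189, 30)   # low-density non-calcified plaque
    NCP_RANGE = (31, 350)       # non-calcified plaque
    CP_RANGE = (351, 2500)      # calcified plaque

    def plaque_volumes_mm3(volume_hu, plaque_mask, voxel_volume_mm3):
        """Return per-sub-type plaque volumes (in mm^3) for the voxels flagged as plaque.

        volume_hu: 3-D array of Hounsfield units; plaque_mask: boolean array of the same
        shape marking plaque voxels; voxel_volume_mm3: physical volume of one voxel.
        """
        hu = volume_hu[plaque_mask]  # HU values of plaque voxels only

        def volume_in(lo, hi):
            return float(np.count_nonzero((hu >= lo) & (hu <= hi))) * voxel_volume_mm3

        volumes = {
            "low_density_non_calcified": volume_in(*LD_NCP_RANGE),
            "non_calcified": volume_in(*NCP_RANGE),
            "calcified": volume_in(*CP_RANGE),
        }
        volumes["total"] = sum(volumes.values())
        return volumes

The same routine can be re-used for the aortic analysis at block2808and, applied to serial scans, the per-sub-type volumes can be differenced to track the progression described above.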
In some embodiments, at block2812, the system can be configured to determine a risk of MACE and/or arterial disease, such as PAD, based at least in part on the results of coronary atherosclerosis analysis and/or quantified phenotyping. In determining risk of MACE and/or arterial disease, in some embodiments, the system can be configured to access one or more reference values of quantified phenotyping and/or other analyses of coronary atherosclerosis as compared to risk of MACE or arterial disease, which can be stored on a coronary atherosclerosis risk database2814. In some embodiments, the one or more reference values of quantified phenotyping and/or other analyses of coronary atherosclerosis as compared to risk of MACE or arterial disease can be derived from a population with varying states of coronary atherosclerosis as compared to risk of MACE and/or arterial disease. In some embodiments, the coronary risk database2814can be locally accessible by the system and/or can be located remotely and accessible through a network connection. In some embodiments, the system can be configured to utilize one or more artificial intelligence (AI) and/or machine learning (ML) algorithms to automatically and/or dynamically determine risk of MACE or arterial disease based on coronary plaque analysis. In some embodiments, at block2808, the system can be configured to identify, analyze, and/or quantify aortic atherosclerosis. In some embodiments, the system can be configured to perform quantified phenotyping of aortic atherosclerosis. For example, in some embodiments, the quantitative phenotyping can be of atherosclerosis burden, volume, type, composition, and/or rate of progression for the individual or patient. In some embodiments, the system can be configured to utilize one or more image processing, artificial intelligence (AI), and/or machine learning (ML) algorithms to automatically and/or dynamically perform quantitative phenotyping of aortic atherosclerosis. For example, in some embodiments, the system can be configured to automatically and/or dynamically identify one or more arteries, vessels, and/or a portion thereof on the medical image, identify one or more regions of plaque, and/or perform quantitative phenotyping of plaque. In some embodiments, the system can be configured to identify and/or characterize different types and/or regions of aortic atherosclerosis or plaque, for example based on density, absolute density, material density, relative density, and/or radiodensity. In some embodiments, the system can be configured to characterize a region of aortic plaque into one or more sub-types of plaque. For example, in some embodiments, the system can be configured to characterize a region of plaque as one or more of low density non-calcified plaque, non-calcified plaque, or calcified plaque. In some embodiments, calcified plaque can correspond to plaque having a highest density range, low density non-calcified plaque can correspond to plaque having a lowest density range, and non-calcified plaque can correspond to plaque having a density range between calcified plaque and low density non-calcified plaque. For example, in some embodiments, the system can be configured to characterize a particular region of plaque as low density non-calcified plaque when the radiodensity of an image pixel or voxel corresponding to that region of plaque is between about −189 and about 30 Hounsfield units (HU). 
In some embodiments, the system can be configured to characterize a particular region of plaque as non-calcified plaque when the radiodensity of an image pixel or voxel corresponding to that region of plaque is between about 31 and about 350 HU. In some embodiments, the system can be configured to characterize a particular region of plaque as calcified plaque when the radiodensity of an image pixel or voxel corresponding to that region of plaque is between about 351 and about 2500 HU. In some embodiments, the lower and/or upper Hounsfield unit boundary threshold for determining whether an aortic plaque corresponds to one or more of low density non-calcified plaque, non-calcified plaque, and/or calcified plaque can be about −1000 HU, about −900 HU, about −800 HU, about −700 HU, about −600 HU, about −500 HU, about −400 HU, about −300 HU, about −200 HU, about −190 HU, about −180 HU, about −170 HU, about −160 HU, about −150 HU, about −140 HU, about −130 HU, about −120 HU, about −110 HU, about −100 HU, about −90 HU, about −80 HU, about −70 HU, about −60 HU, about −50 HU, about −40 HU, about −30 HU, about −20 HU, about −10 HU, about 0 HU, about 10 HU, about 20 HU, about 30 HU, about 40 HU, about 50 HU, about 60 HU, about 70 HU, about 80 HU, about 90 HU, about 100 HU, about 110 HU, about 120 HU, about 130 HU, about 140 HU, about 150 HU, about 160 HU, about 170 HU, about 180 HU, about 190 HU, about 200 HU, about 210 HU, about 220 HU, about 230 HU, about 240 HU, about 250 HU, about 260 HU, about 270 HU, about 280 HU, about 290 HU, about 300 HU, about 310 HU, about 320 HU, about 330 HU, about 340 HU, about 350 HU, about 360 HU, about 370 HU, about 380 HU, about 390 HU, about 400 HU, about 410 HU, about 420 HU, about 430 HU, about 440 HU, about 450 HU, about 460 HU, about 470 HU, about 480 HU, about 490 HU, about 500 HU, about 510 HU, about 520 HU, about 530 HU, about 540 HU, about 550 HU, about 560 HU, about 570 HU, about 580 HU, about 590 HU, about 600 HU, about 700 HU, about 800 HU, about 900 HU, about 1000 HU, about 1100 HU, about 1200 HU, about 1300 HU, about 1400 HU, about 1500 HU, about 1600 HU, about 1700 HU, about 1800 HU, about 1900 HU, about 2000 HU, about 2100 HU, about 2200 HU, about 2300 HU, about 2400 HU, about 2500 HU, about 2600 HU, about 2700 HU, about 2800 HU, about 2900 HU, about 3000 HU, about 3100 HU, about 3200 HU, about 3300 HU, about 3400 HU, about 3500 HU, and/or about 4000 HU. In some embodiments, the system can be configured to determine and/or characterize the burden of aortic atherosclerosis based at least part on volume of plaque. In some embodiments, the system can be configured to analyze and/or determine total volume of aortic plaque and/or volume of low-density non-calcified plaque, non-calcified plaque, and/or calcified plaque in the analyzed portion of the aorta. In some embodiments, the system can be configured to perform phenotyping of aortic atherosclerosis by determining a ratio of one or more of the foregoing volumes of plaque, for example within a portion of the aorta, lesion, vessel, and/or the like. In some embodiments, the system can be configured to analyze the progression of aortic atherosclerosis. For example, in some embodiments, the system can be configured to analyze the progression of one or more particular regions of plaque and/or overall progression and/or lesion and/or artery-specific progression of plaque. 
In some embodiments, in order to analyze the progression of plaque, the system can be configured to analyze one or more serial images of the subject for phenotyping atherosclerosis. In some embodiments, tracking the progression of plaque can comprise analyzing changes and/or lack thereof in total plaque volume and/or volume of low-density non-calcified plaque, non-calcified plaque, and/or calcified plaque. In some embodiments, tracking the progression of plaque can comprise analyzing changes and/or lack thereof in density of a particular region of plaque and/or globally. In some embodiments, at block2816, the system can be configured to determine a risk of MACE and/or arterial disease, such as PAD, based at least in part on the results of aortic atherosclerosis analysis and/or quantified phenotyping. In determining risk of MACE and/or arterial disease, in some embodiments, the system can be configured to access one or more reference values of quantified phenotyping and/or other analyses of aortic atherosclerosis as compared to risk of MACE or arterial disease, which can be stored on an aortic atherosclerosis risk database2818. In some embodiments, the one or more reference values of quantified phenotyping and/or other analyses of aortic atherosclerosis as compared to risk of MACE or arterial disease can be derived from a population with varying states of aortic atherosclerosis as compared to risk of MACE and/or arterial disease. In some embodiments, the aortic plaque risk database2818can be locally accessible by the system and/or can be located remotely and accessible through a network connection. In some embodiments, the system can be configured to utilize one or more artificial intelligence (AI) and/or machine learning (ML) algorithms to automatically and/or dynamically determine risk of MACE or arterial disease based on aortic plaque analysis. In some embodiments, at block2810, the system can be configured to identify, analyze, and/or quantify emphysema. In some embodiments, the system can be configured to perform quantified phenotyping of emphysema. For example, in some embodiments, the quantitative phenotyping can be of emphysema burden, volume, type, composition, and/or rate of progression for the individual or patient. In some embodiments, the system can be configured to utilize one or more image processing, artificial intelligence (AI), and/or machine learning (ML) algorithms to automatically and/or dynamically perform quantitative phenotyping of emphysema. For example, in some embodiments, the system can be configured to automatically and/or dynamically identify one or more pixels corresponding to emphysema and/or different levels of emphysema and/or risk thereof for quantitative phenotyping. In some embodiments, the system can be configured to identify and/or characterize different types and/or regions and/or risk levels of emphysema, for example based on density, absolute density, material density, relative density, and/or radiodensity. For example, the system can be configured to ascertain different risk levels of emphysema based at least in part on the darkness and/or brightness of pixels corresponding to areas of the lungs, wherein a darker pixel can represent a higher risk of emphysema. In some embodiments, the system can be configured to utilize one or more Hounsfield unit thresholds for characterizing different risk levels of emphysema. 
For example, in some embodiments, the system can be configured to identify one or more pixels of the lungs of a subject as corresponding to emphysema and/or a particular type or risk of emphysema when the Hounsfield unit is above, below, and/or between one or more of the following Hounsfield units: about −1500 HU, about −1400 HU, about −1300 HU, about −1200 HU, about −1100 HU, about −1000 HU, about −990 HU, about −980 HU, about −970 HU, about −960 HU, about −950 HU, about −940 HU, about −930 HU, about −920 HU, about −910 HU, about −900 HU, about −800 HU, about −700 HU, about −600 HU, and/or about −500 HU. In some embodiments, the system can be configured to determine and/or characterize the burden of emphysema based at least in part on volume of emphysema. In some embodiments, the system can be configured to analyze and/or determine total volume of emphysema and/or volume of a particular risk level of emphysema. In some embodiments, the system can be configured to analyze the progression of emphysema. For example, in some embodiments, the system can be configured to analyze the progression of one or more particular regions of emphysema and/or overall progression of emphysema. In some embodiments, in order to analyze the progression of emphysema, the system can be configured to analyze one or more serial images of the subject for phenotyping emphysema. In some embodiments, tracking the progression of emphysema can comprise analyzing changes and/or lack thereof in total emphysema volume and/or volume of a particular risk-level of emphysema. In some embodiments, tracking the progression of emphysema can comprise analyzing changes and/or lack thereof in density of a particular region of emphysema and/or globally. In some embodiments, at block2820, the system can be configured to determine a risk of MACE and/or arterial disease, such as PAD, based at least in part on the results of emphysema analysis and/or quantified phenotyping. In determining risk of MACE and/or arterial disease, in some embodiments, the system can be configured to access one or more reference values of quantified phenotyping and/or other analyses of emphysema as compared to risk of MACE or arterial disease, which can be stored on an emphysema risk database2822. In some embodiments, the one or more reference values of quantified phenotyping and/or other analyses of emphysema as compared to risk of MACE or arterial disease can be derived from a population with varying states of emphysema as compared to risk of MACE and/or arterial disease. In some embodiments, the emphysema risk database2822can be locally accessible by the system and/or can be located remotely and accessible through a network connection. In some embodiments, the system can be configured to utilize one or more artificial intelligence (AI) and/or machine learning (ML) algorithms to automatically and/or dynamically determine risk of MACE or arterial disease based on emphysema analysis. In some embodiments, at block2824, the system can be configured to generate a weighted measure of one or more determined risk levels of MACE and/or arterial disease, such as PAD. For example, in some embodiments, the system can be configured to generate a weighted measure of risk levels of MACE and/or arterial disease derived from analysis of one or more of coronary atherosclerosis, aortic atherosclerosis, and/or emphysema.
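As a non-limiting sketch of how reference values such as those in the risk databases2814,2818, and2822might be applied at blocks2812,2816, and2820to derive the individual risk levels that feed the weighted measure of block2824, the following Python example converts a subject's quantified burden into a population percentile and an illustrative risk band; the storage of reference values as a simple list, the percentile cut-offs, and the band labels are assumptions and not the databases' actual schema or thresholds.

    import bisect

    def burden_percentile(subject_burden, reference_burdens):
        """Percentile rank of a subject's quantified burden within a population reference list."""
        ref = sorted(reference_burdens)
        rank = bisect.bisect_right(ref, subject_burden)
        return 100.0 * rank / len(ref)

    def risk_band(percentile):
        """Map a percentile to an illustrative qualitative MACE/arterial-disease risk band."""
        if percentile < 50.0:
            return "low"
        if percentile < 75.0:
            return "moderate"
        return "high"

    # Example with hypothetical reference data (plaque or emphysema volumes in mm^3):
    # band = risk_band(burden_percentile(312.0, [10.0, 85.0, 150.0, 290.0, 410.0]))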
In some embodiments, the system can be configured to weight one or more individually derived risk levels of MACE and/or arterial disease the same or differently, for example between 0 and 100%. For example, in some embodiments, the system can be configured to weight a particular MACE and/or arterial disease risk level derived from one of coronary atherosclerosis, aortic atherosclerosis, and emphysema 100% while discounting the other two. In some embodiments, at block2826, the system can be configured to determine a subject-level multifactor risk of MACE and/or arterial disease, such as PAD. For example, in some embodiments, in determining the subject-level multifactor risk of MACE and/or arterial disease, the system can be configured to access one or more reference values of weighted measures of one or more MACE and/or arterial disease and/or PAD risks, which can be stored on a subject-level MACE or arterial disease risk database2828. In some embodiments, the one or more reference values of weighted measures of one or more MACE and/or arterial disease risks can be derived from a population with varying levels of risk of MACE and/or arterial disease, such as PAD. In some embodiments, the subject-level MACE or arterial disease risk database2828can be locally accessible by the system and/or can be located remotely and accessible through a network connection. In some embodiments, the system can be configured to utilize one or more artificial intelligence (AI) and/or machine learning (ML) algorithms to automatically and/or dynamically determine a subject-level multifactor risk of MACE or arterial disease, such as PAD. In some embodiments, at block2830, the system can be configured to determine a proposed treatment for the subject based on the determined subject-level multifactor risk of MACE or arterial disease, such as PAD. For example, in some embodiments, the proposed treatment can include one or more of lifestyle change, exercise, diet, medication, and/or invasive procedure. In some embodiments, in determining a proposed treatment for the subject, the system can be configured to access one or more reference treatments previously utilized for subjects with varying levels of subject-level multifactor risks of MACE or arterial disease, which can be stored on a treatment database2832. In some embodiments, the one or more reference treatments can be derived from a population with varying levels of subject-level multifactor risks of MACE or arterial disease, such as PAD. In some embodiments, the treatment database2832can be locally accessible by the system and/or can be located remotely and accessible through a network connection. In some embodiments, the system can be configured to utilize one or more artificial intelligence (AI) and/or machine learning (ML) algorithms to automatically and/or dynamically determine a proposed treatment for a subject based on a determined subject-level multifactor risk of MACE or arterial disease, such as PAD. In some embodiments, at block2834, the system can be configured to generate a graphical representation and/or report presenting one or more findings and/or analyses described herein in connection withFIG.28C. For example, in some embodiments, the system can be configured to generate or cause generation of a display presenting results of quantified phenotyping and/or other analysis of coronary atherosclerosis and/or determined risk of MACE and/or arterial disease, such as PAD, based on the same. 
In some embodiments, the system can be configured to generate or cause generation of a display presenting results of quantified phenotyping and/or other analysis of aortic atherosclerosis and/or determined risk of MACE and/or arterial disease, such as PAD, based on the same. Further, in some embodiments, the system can be configured to generate or cause generation of a display presenting results of quantified phenotyping and/or other analysis of emphysema and/or determined risk of MACE and/or arterial disease, such as PAD, based on the same. In some embodiments, the system can be configured to generate or cause generation of a display presenting results of a generated weighted measure of risk of MACE or arterial disease based on one or more individual analyses, subject-level multifactor risk of MACE or arterial disease, and/or a proposed treatment for a subject. In some embodiments, the system can be configured to repeat one or more processes described in relation to blocks2802-2834, for example for one or more other vessels, segment, regions of plaque, different subjects, and/or for the same subject at a different time. As such, in some embodiments, the system can provide for longitudinal tracking of risk of MACE or arterial disease and/or personalized treatment for a subject. FIG.28Dis a flowchart illustrating an example embodiment(s) of systems, devices, and methods for image-based diagnosis, risk assessment, and/or characterization of a major adverse cardiovascular event. As illustrated inFIG.28D, in some embodiments, the system can be configured to access one or more medical images and identify, analyze, and/or quantify one or more of coronary atherosclerosis, aortic atherosclerosis, and/or emphysema utilizing one or more processes and/or features described above in relation toFIG.28C, such as in connection with blocks2802,2804,2806,2808, and/or2810. In contrast to the embodiments described inFIG.28C, however, in some embodiments as illustrated inFIG.28D, the system can be configured to generate a weighted measure of one or more analysis results of coronary atherosclerosis, aortic atherosclerosis, and/or emphysema at block2836which can be used to directly determine a subject-level multifactor risk of MACE or arterial disease, such as PAD, at block2838, as opposed to determining individual risk levels of MACE or arterial disease based on a single factor of coronary atherosclerosis, aortic atherosclerosis, and/or emphysema. In particular, in some embodiments, the system can be configured to weight one or more analysis results of coronary atherosclerosis, aortic atherosclerosis, and/or emphysema the same or differently, for example between 0 and 100%. For example, in some embodiments, the system can be configured to weight the analysis results of one of coronary atherosclerosis, aortic atherosclerosis, and/or emphysema more due to predicted accuracy levels of one over another, while discounting others. In some embodiments, at block2838, the system can be configured to determine a subject-level multifactor risk of MACE and/or arterial disease, such as PAD, based on the generated weighted measure of one or more analysis results of coronary atherosclerosis, aortic atherosclerosis, and/or emphysema. 
For example, in some embodiments, in determining the subject-level multifactor risk of MACE and/or arterial disease, the system can be configured to access one or more reference values of weighted measures of analysis results of coronary atherosclerosis, aortic atherosclerosis, and/or emphysema, which can be stored on a subject-level MACE or arterial disease risk database2840. In some embodiments, the one or more reference values of weighted measures of analysis results of coronary atherosclerosis, aortic atherosclerosis, and/or emphysema can be derived from a population with varying levels of risk of MACE and/or arterial disease. In some embodiments, the subject-level MACE or arterial disease risk database2840can be locally accessible by the system and/or can be located remotely and accessible through a network connection. In some embodiments, the system can be configured to utilize one or more artificial intelligence (AI) and/or machine learning (ML) algorithms to automatically and/or dynamically determine a subject-level multifactor risk of MACE or arterial disease. In some embodiments, the system can further be configured to determine a proposed treatment and/or generate a graphical representation or report as discussed herein in connection with blocks2830,2832, and2834. In some embodiments, the system can be configured to repeat one or more processes described in relation toFIG.28D, for example for one or more other vessels, segment, regions of plaque, different subjects, and/or for the same subject at a different time. As such, in some embodiments, the system can provide for longitudinal tracking of risk of MACE or arterial disease and/or personalized treatment for a subject. FIG.28Eis a flowchart illustrating an example embodiment(s) of systems, devices, and methods for image-based diagnosis, risk assessment, and/or characterization of a major adverse cardiovascular event. As illustrated inFIG.28E, in some embodiments, the system can be configured to utilize one or more databases or datasets comprising a plurality of predetermined diagnoses, medical conditions, risk scores, and/or candidate treatments to effectively transform results of quantified phenotyping based on a medical image to a risk score and/or candidate treatments. For example, in some embodiments, the system can be configured to automatically and/or dynamically perform quantified phenotyping of a medical image to analyze coronary atherosclerosis, aortic atherosclerosis, and/or emphysema and/or utilize the results of such quantified phenotyping to determine a health risk assessment of the subject and/or one or more candidate treatments. In order to facilitate effective transformation of such quantified phenotyping results to a health risk assessment and/or one or more candidate treatments, the system can be configured to utilize one or more such databases and/or datasets. By utilizing such databases and/or datasets, in some embodiments, the system can be configured to efficiently process the results of quantified phenotyping in a repeatable and/or automated manner, thereby saving computing resources and/or reducing the need for human intervention. As such, in some embodiments, the system can be configured to automatically and/or dynamically analyze a medical image, perform quantified phenotyping, and process the results through an automated triage process to automatically assess a health risk and/or treatment for a subject.
More specifically, in some embodiments, at block2802, the system can be configured to access one or more medical images, for example from a medical image database2804as discussed above in relation toFIGS.28C-28D. In some embodiments, at block2842, the system can be configured to analyze the one or more medical images to perform phenotyping, such as quantified phenotyping. In particular, in some embodiments, the system can be configured to identify one or more regions of interest for phenotyping, such as for example one or more portions of the coronary arteries, aortic arteries, and/or lungs of the subject. In some embodiments, the system can be configured to perform quantified phenotyping of one or more of coronary atherosclerosis, aortic atherosclerosis, and/or emphysema, for example utilizing one or more processes described herein in relation toFIGS.28C-28D. In some embodiments, based on results of the quantified phenotyping, the system at block2844can be configured to determine if a corresponding diagnosis exists in a database or dataset of predetermined diagnoses2846. In some embodiments, in order to efficiently and/or effectively disregard healthy subjects, the predetermined diagnoses can correspond only to a subset of quantified phenotyping results. In other words, in some embodiments, not all quantified phenotyping results may correspond to a predetermined diagnosis. In some embodiments, if the quantified phenotyping result does not correspond to a predetermined diagnosis, the process can then be completed, as no further analysis is warranted. In contrast, if a corresponding preset or predetermined diagnosis is found to exist for the quantified phenotyping results, then the system can be configured to further analyze the results. In some embodiments, if a corresponding predetermined diagnosis is found to exist for the quantified phenotyping results, the system at block2848can be configured to determine if a corresponding medical condition exists in a database or dataset of predetermined medical conditions2850. In some embodiments, in order to efficiently and/or effectively disregard healthy subjects, the predetermined medical conditions can correspond only to a subset of predetermined diagnoses. In other words, in some embodiments, not all predetermined diagnoses may correspond to a predetermined medical condition. In some embodiments, if the diagnosis derived from quantified phenotyping does not correspond to a predetermined medical condition, the process can then be completed, as no further analysis is warranted. In contrast, if a corresponding preset or predetermined medical condition is found to exist for the diagnosis derived from the quantified phenotyping results, then the system can be configured to further analyze the results. In some embodiments, if a corresponding predetermined medical condition is found to exist, the system at block2852can be configured to determine a health risk score for the subject, for example by accessing a risk database2854. The risk database2854can comprise one or more different risk levels and/or scores corresponding to different predetermined medical conditions. In some embodiments, the system can be configured to determine one or more proposed and/or candidate treatments for the subject at block2830, for example utilizing one or more treatments stored on a treatment database2832, as described in more detail in relation toFIGS.28C-28D. 
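The dataset-driven triage flow described above can be summarized with the following non-limiting Python sketch; the dictionary contents and key names are purely hypothetical stand-ins for the predetermined diagnoses2846, medical conditions2850, risk scores2854, and treatments2832, and the actual system need not be implemented this way.

```python
# Hypothetical, simplified stand-ins for the predetermined datasets described above.
DIAGNOSES = {"high_lowdensity_plaque_burden": "obstructive_cad_suspected"}    # phenotype -> diagnosis
CONDITIONS = {"obstructive_cad_suspected": "coronary_artery_disease"}         # diagnosis -> condition
RISK_SCORES = {"coronary_artery_disease": 0.8}                                # condition -> risk score
TREATMENTS = {"coronary_artery_disease": ["medication", "lifestyle change"]}  # condition -> treatments

def triage(phenotype_key):
    """Walk the diagnosis -> medical condition -> risk score -> treatment chain.

    Returns None as soon as no predetermined entry exists, mirroring the
    'no further analysis is warranted' branches of the flow above.
    """
    diagnosis = DIAGNOSES.get(phenotype_key)
    if diagnosis is None:
        return None
    condition = CONDITIONS.get(diagnosis)
    if condition is None:
        return None
    return {
        "diagnosis": diagnosis,
        "medical_condition": condition,
        "risk_score": RISK_SCORES.get(condition),
        "candidate_treatments": TREATMENTS.get(condition, []),
    }
```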
In some embodiments, the system can be configured to generate a graphical representation and/or report at block2834, for example displaying the results of one or more of the quantified phenotyping, corresponding diagnosis, corresponding medical condition, determined risk score, and/or proposed or candidate treatment(s), as described in more detail in relation toFIGS.28C-28D. In some embodiments, the system can be configured to repeat one or more processes described in relation to blocks2802-2854, for example for one or more other vessels, segment, regions of plaque, different subjects, and/or for the same subject at a different time. As such, in some embodiments, the system can provide for longitudinal tracking of a subject's health risk derived automatically from quantified phenotyping of serial medical images and utilizing one or more predetermined datasets of diagnoses, medical conditions, and/or risk scores for efficient and/or effective processing.
Computer System
In some embodiments, the systems, processes, and methods described herein are implemented using a computing system, such as the one illustrated inFIG.28F. The example computer system2872is in communication with one or more computing systems2890and/or one or more data sources2892via one or more networks2888. WhileFIG.28Fillustrates an embodiment of a computing system2872, it is recognized that the functionality provided for in the components and modules of computer system2872can be combined into fewer components and modules, or further separated into additional components and modules. The computer system2872can comprise a Risk Assessment Module2884that carries out the functions, methods, acts, and/or processes described herein. The Risk Assessment Module2884is executed on the computer system2872by a central processing unit2876, discussed further below. In general, the word “module,” as used herein, refers to logic embodied in hardware or firmware or to a collection of software instructions, having entry and exit points. Modules are written in a programming language, such as JAVA, C, C++, or the like. Software modules can be compiled or linked into an executable program, installed in a dynamic link library, or can be written in an interpreted language such as BASIC, PERL, LUA, PHP, or Python, or any such languages. Software modules can be called from other modules or from themselves, and/or can be invoked in response to detected events or interruptions. Modules implemented in hardware include connected logic units such as gates and flip-flops, and/or can include programmable units, such as programmable gate arrays or processors. Generally, the modules described herein refer to logical modules that can be combined with other modules or divided into sub-modules despite their physical organization or storage. The modules are executed by one or more computing systems, and can be stored on or within any suitable computer readable medium, or implemented in whole or in part within specially designed hardware or firmware. Not all calculations, analyses, and/or optimizations require the use of computer systems, though any of the above-described methods, calculations, processes, or analyses can be facilitated through the use of computers. Further, in some embodiments, process blocks described herein can be altered, rearranged, combined, and/or omitted. The computer system2872includes one or more processing units (CPU)2876, which can comprise a microprocessor.
The computer system2872further includes a physical memory2880, such as random access memory (RAM) for temporary storage of information, a read only memory (ROM) for permanent storage of information, and a mass storage device2874, such as a backing store, hard drive, rotating magnetic disks, solid state disks (SSD), flash memory, phase-change memory (PCM), 3D XPoint memory, diskette, or optical media storage device. Alternatively, the mass storage device can be implemented in an array of servers. Typically, the components of the computer system2872are connected to the computer using a standards-based bus system. The bus system can be implemented using various protocols, such as Peripheral Component Interconnect (PCI), Micro Channel, SCSI, Industry Standard Architecture (ISA), and Extended ISA (EISA) architectures. The computer system2872includes one or more input/output (I/O) devices and interfaces2882, such as a keyboard, mouse, touch pad, and printer. The I/O devices and interfaces2882can include one or more display devices, such as a monitor, that allow the visual presentation of data to a user. More particularly, a display device provides for the presentation of GUIs, application software data, and multi-media presentations, for example. The I/O devices and interfaces2882can also provide a communications interface to various external devices. The computer system2872can comprise one or more multi-media devices2878, such as speakers, video cards, graphics accelerators, and microphones, for example.
Computing System Device/Operating System
The computer system2872can run on a variety of computing devices, such as a server, a Windows server, a Structured Query Language server, a Unix Server, a personal computer, a laptop computer, and so forth. In other embodiments, the computer system2872can run on a cluster computer system, a mainframe computer system, and/or other computing system suitable for controlling and/or communicating with large databases, performing high volume transaction processing, and generating reports from large databases. The computing system2872is generally controlled and coordinated by operating system software, such as z/OS, Windows, Linux, UNIX, BSD, PHP, SunOS, Solaris, MacOS, iCloud services, or other compatible operating systems, including proprietary operating systems. Operating systems control and schedule computer processes for execution, perform memory management, provide file system, networking, and I/O services, and provide a user interface, such as a graphical user interface (GUI), among other things.
Network
The computer system2872illustrated inFIG.28Fis coupled to a network2888, such as a LAN, WAN, or the Internet, via a communication link2886(wired, wireless, or a combination thereof). Network2888communicates with various computing devices and/or other electronic devices. Network2888is in communication with one or more computing systems2890and one or more data sources2892. The Risk Assessment Module2884can access or can be accessed by computing systems2890and/or data sources2892through a web-enabled user access point. Connections can be a direct physical connection, a virtual connection, or another connection type. The web-enabled user access point can comprise a browser module that uses text, graphics, audio, video, and other media to present data and to allow interaction with data via the network2888.
The output module can be implemented as a combination of an all-points addressable display such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, or other types and/or combinations of displays. The output module can be implemented to communicate with input devices2882and can also include software with the appropriate interfaces that allow a user to access data through the use of stylized screen elements, such as menus, windows, dialogue boxes, tool bars, and controls (for example, radio buttons, check boxes, sliding scales, and so forth). Furthermore, the output module can communicate with a set of input and output devices to receive signals from the user.
Other Systems
The computing system2872can include one or more internal and/or external data sources (for example, data sources2892). In some embodiments, one or more of the data repositories and the data sources described above can be implemented using a relational database, such as DB2, Sybase, Oracle, CodeBase, and Microsoft® SQL Server, as well as other types of databases such as a flat-file database, an entity relationship database, an object-oriented database, and/or a record-based database. The computer system2872can also access one or more databases2892. The databases2892can be stored in a database or data repository. The computer system2872can access the one or more databases2892through a network2888or can directly access the database or data repository through I/O devices and interfaces2882. The data repository storing the one or more databases2892can reside within the computer system2872.
URLs and Cookies
In some embodiments, one or more features of the systems, methods, and devices described herein can utilize a URL and/or cookies, for example for storing and/or transmitting data or user information. A Uniform Resource Locator (URL) can include a web address and/or a reference to a web resource that is stored on a database and/or a server. The URL can specify the location of the resource on a computer and/or a computer network. The URL can include a mechanism to retrieve the network resource. The source of the network resource can receive a URL, identify the location of the web resource, and transmit the web resource back to the requestor. A URL can be converted to an IP address, and a Domain Name System (DNS) can look up the URL and its corresponding IP address. URLs can be references to web pages, file transfers, emails, database accesses, and other applications. The URLs can include a sequence of characters that identify a path, a domain name, a file extension, a host name, a query, a fragment, a scheme, a protocol identifier, a port number, a username, a password, a flag, an object, a resource name, and/or the like. The systems disclosed herein can generate, receive, transmit, apply, parse, serialize, render, and/or perform an action on a URL. A cookie, also referred to as an HTTP cookie, a web cookie, an internet cookie, or a browser cookie, can include data sent from a website and/or stored on a user's computer. This data can be stored by a user's web browser while the user is browsing. The cookies can include useful information for websites to remember prior browsing information, such as a shopping cart on an online store, clicking of buttons, login information, and/or records of web pages or network resources visited in the past. Cookies can also include information that the user enters, such as names, addresses, passwords, credit card information, etc.
Cookies can also perform computer functions. For example, authentication cookies can be used by applications (for example, a web browser) to identify whether the user is already logged in (for example, to a web site). The cookie data can be encrypted to provide security for the consumer. Tracking cookies can be used to compile long-term records of individuals' browsing histories. Systems disclosed herein can generate and use cookies to access data of an individual. Systems can also generate and use JSON web tokens to store authenticity information, HTTP authentication as an authentication protocol, IP addresses to track session or identity information, URLs, and the like.
Examples of Embodiments Relating to Automatically Determining a Diagnosis, Risk Assessment, and Characterization of Heart Disease
The following are non-limiting examples of certain embodiments of systems and methods for determining a diagnosis, risk assessment, and characterization of heart disease and/or other related features. Other embodiments may include one or more other features, or different features, that are discussed herein. Embodiment 1: A computer-implemented method of facilitating assessment of risk of heart disease for a subject based on multi-dimensional information derived from non-invasive medical image analysis, the method comprising: accessing, by a computer system, one or more medical images of a subject, wherein the medical image of the subject is obtained non-invasively; analyzing, by the computer system, the one or more medical images of the subject to identify one or more portions of coronary arteries, aorta, and lungs of the subject; identifying, by the computer system, one or more regions of plaque in the identified one or more portions of the coronary arteries; analyzing, by the computer system, the identified one or more regions of plaque in the coronary arteries to perform quantified phenotyping of coronary atherosclerosis comprising total plaque volume, low-density non-calcified plaque volume, non-calcified plaque volume, and calcified plaque volume in the one or more portions of coronary arteries; identifying, by the computer system, one or more regions of plaque in the identified one or more portions of the aorta; analyzing, by the computer system, the identified one or more regions of plaque in the aorta to perform quantified phenotyping of aortic atherosclerosis comprising total plaque volume, low-density non-calcified plaque volume, non-calcified plaque volume, and calcified plaque volume in the one or more portions of the aorta; analyzing, by the computer system, the identified one or more portions of the lungs of the subject to determine presence or state of emphysema; and causing, by the computer system, display of a graphical representation comprising results of the quantified phenotyping of coronary atherosclerosis, results of the quantified phenotyping of aortic atherosclerosis, and presence or state of emphysema to facilitate assessment of risk of heart disease for the subject based on multidimensional analysis of coronary atherosclerosis, aortic atherosclerosis, and emphysema, wherein the computer system comprises a computer processor and an electronic storage medium. Embodiment 2: The computer-implemented method of Embodiment 1, wherein the one or more medical images comprises a single medical image on which the one or more portions of the coronary arteries, aorta, and lungs appear.
Embodiment 3: The computer-implemented method of Embodiments 1 or 2, wherein the one or more medical images comprises a plurality of medical images. Embodiment 4: The computer-implemented method of any one of Embodiments 1 to 3, wherein one or more of the quantitative phenotyping of coronary atherosclerosis or the quantitative phenotyping of aortic atherosclerosis is performed based at least in part on analysis of density values of one or more pixels of the one or more medical images corresponding to plaque. Embodiment 5: The computer-implemented method of Embodiment 4, wherein the density values comprise radiodensity values. Embodiment 6: The computer-implemented method of any one of Embodiments 1 to 5, wherein the presence or state of emphysema is determined based at least in part on analysis of density values of one or more pixels of the one or more medical images corresponding to the one or more portions of the lungs. Embodiment 7: The computer-implemented method of Embodiment 6, wherein the density values comprise radiodensity values. Embodiment 8: The computer-implemented method of any one of Embodiments 1 to 7, wherein the one or more regions of plaque are identified as low density non-calcified plaque when a radiodensity value is between about −189 and about 30 Hounsfield units. Embodiment 9: The computer-implemented method of any one of Embodiments 1 to 8, wherein the one or more regions of plaque are identified as non-calcified plaque when a radiodensity value is between about 30 and about 350 Hounsfield units. Embodiment 10: The computer-implemented method of any one of Embodiments 1 to 9, wherein the one or more regions of plaque are identified as calcified plaque when a radiodensity value is between about 351 and 2500 Hounsfield units. Embodiment 11: The computer-implemented method of any one of Embodiments 1 to 10, wherein the one or more medical images comprise a Computed Tomography (CT) image. Embodiment 12: The computer-implemented method of any one of Embodiments 1 to 11, wherein the one or more medical images are obtained using an imaging technique comprising one or more of CT, x-ray, ultrasound, echocardiography, MR imaging, optical coherence tomography (OCT), nuclear medicine imaging, positron-emission tomography (PET), single photon emission computed tomography (SPECT), or near-field infrared spectroscopy (NIRS). Embodiment 13: The computer-implemented method of any one of Embodiments 1 to 12, further comprising generating, by the computer system, a multifactor assessment of risk of heart disease for the subject based at least in part on analysis of coronary atherosclerosis, aortic atherosclerosis, and emphysema. Embodiment 14: The computer-implemented method of any one of Embodiments 1 to 13, wherein the assessment of risk of heart disease is generated utilizing a machine learning algorithm. Embodiment 15: The computer-implemented method of any one of Embodiments 1 to 14, further comprising generating, by the computer system, a recommended treatment for the subject based at least in part on the generated assessment of risk of heart disease for the subject. 
Embodiment 16: The computer-implemented method of Embodiment 13, wherein the assessment of risk of heart disease is generated at least in part by: comparing results of the quantified phenotyping of coronary atherosclerosis to a set of reference values of quantified phenotyping of coronary atherosclerosis corresponding to different levels of risk of heart disease; comparing results of the quantified phenotyping of aortic atherosclerosis to a set of reference values of quantified phenotyping of aortic atherosclerosis corresponding to different levels of risk of heart disease; and comparing the presence or state of emphysema to a set of reference values of state of emphysema corresponding to different levels of risk of heart disease. Embodiment 17: The computer-implemented method of Embodiment 16, wherein one or more of the set of reference values of quantified phenotyping of coronary atherosclerosis, set of reference values of quantified phenotyping of aortic atherosclerosis, or set of reference values of state of emphysema is derived from a reference population with varying levels of risk of heart disease. Embodiment 18: The computer-implemented method of any one of Embodiments 1 to 17, wherein the reference population is selected based on one or more of age, gender, or ethnicity of the subject. Embodiment 19: The computer-implemented method of any one of Embodiments 1 to 13, wherein the assessment of risk of heart disease is generated at least in part by: assessing risk of heart disease based on the results of quantified phenotyping of coronary atherosclerosis; assessing risk of heart disease based on the results of the quantified phenotyping of aortic atherosclerosis; assessing risk of heart disease based on the presence or state of emphysema; generating a weighted measure of the risk of heart disease assessed based on the results of quantified phenotyping of coronary atherosclerosis, the results of the quantified phenotyping of aortic atherosclerosis, and the presence or state of emphysema; and generating the multifactor assessment of heart disease based on the weighted measure. 
Embodiment 20: A computer-implemented method of assessing risk of heart disease for a subject based on multi-dimensional information derived from non-invasive medical image analysis, the method comprising: accessing, by a computer system, results of quantified phenotyping of coronary atherosclerosis of a subject at a first point in time, the quantified phenotyping of coronary atherosclerosis comprising total plaque volume, low-density non-calcified plaque volume, non-calcified plaque volume, and calcified plaque volume in one or more portions of coronary arteries of the subject; accessing, by a computer system, results of quantified phenotyping of aortic atherosclerosis of the subject at the first point in time, the quantified phenotyping of aortic atherosclerosis comprising total plaque volume, low-density non-calcified plaque volume, non-calcified plaque volume, and calcified plaque volume in one or more portions of the aorta of the subject; accessing, by the computer system, a medical image of the subject, wherein the medical image of the subject is obtained at a second point in time, the medical image comprising the one or more portions of coronary arteries and the one or more portions of the aorta of the subject; performing, by the computer system, quantitative phenotyping of coronary atherosclerosis at the second point in time; performing, by the computer system, quantitative phenotyping of aortic atherosclerosis at the second point in time; analyzing, by the computer system, progression of coronary atherosclerosis based at least in part on comparing the results of quantitative phenotyping of coronary atherosclerosis between the first point in time and the second point in time; analyzing, by the computer system, progression of aortic atherosclerosis based at least in part on comparing the results of quantitative phenotyping of aortic atherosclerosis between the first point in time and the second point in time; and assessing, by the computer system, a risk of heart disease for the subject based at least in part on the analysis of the progression of coronary atherosclerosis and the progression of aortic atherosclerosis, wherein the computer system comprises a computer processor and an electronic storage medium. Embodiment 21: The computer-implemented method of Embodiment 20, wherein the risk of heart disease for the subject is assessed to be high when the volume of non-calcified plaque in one or more of the coronary arteries or aorta is higher at the second point in time than at the first point in time. Embodiment 22: The computer-implemented method of any one of Embodiments 20 or 21, wherein the risk of heart disease for the subject is assessed to be high when the total plaque volume in one or more of the coronary arteries or aorta is higher at the second point in time than at the first point in time. Embodiment 23: The computer-implemented method of any one of Embodiments 20 to 22, wherein the risk of heart disease for the subject is assessed to be high when the subject was non-responsive to a medication prescribed to the subject at the first point in time to stabilize atherosclerosis. 
Embodiment 24: A computer-implemented method of assessing risk of heart disease for a subject based on multi-dimensional information derived from non-invasive medical image analysis, the method comprising: accessing, by a computer system, results of quantified phenotyping of coronary atherosclerosis of a subject at a first point in time, the quantified phenotyping of coronary atherosclerosis comprising total plaque volume, low-density non-calcified plaque volume, non-calcified plaque volume, and calcified plaque volume in one or more portions of coronary arteries of the subject; accessing, by a computer system, a state of emphysema of the subject analyzed at the first point in time; accessing, by the computer system, a medical image of the subject, wherein the medical image of the subject is obtained at a second point in time, the medical image comprising the one or more portions of coronary arteries and lungs of the subject; analyzing, by the computer system, the medical image to perform quantitative phenotyping of coronary atherosclerosis at the second point in time; analyzing, by the computer system, the medical image to determine a state of emphysema at the second point in time; analyzing, by the computer system, progression of coronary atherosclerosis based at least in part on comparing the results of quantitative phenotyping of coronary atherosclerosis between the first point in time and the second point in time; analyzing, by the computer system, progression of emphysema based at least in part on comparing the state of emphysema between the first point in time and the second point in time; and assessing, by the computer system, a risk of heart disease for the subject based at least in part on the analysis of the progression of coronary atherosclerosis and the progression of emphysema, wherein the computer system comprises a computer processor and an electronic storage medium. Embodiment 25: The computer-implemented method of Embodiment 24, wherein the risk of heart disease for the subject is assessed to be high when the volume of non-calcified plaque in the one or more portions of coronary arteries is higher at the second point in time than at the first point in time. Embodiment 26: The computer-implemented method of any one of Embodiments 24 to 25, wherein the risk of heart disease for the subject is assessed to be high when the total plaque volume in the one or more portions of coronary arteries is higher at the second point in time than at the first point in time. Embodiment 27: The computer-implemented method of any one of Embodiments 24 to 26, wherein the risk of heart disease for the subject is assessed to be high when the subject was non-responsive to a medication prescribed to the subject at the first point in time to stabilize atherosclerosis. 
Embodiment 28: A computer-implemented method of assessing risk of peripheral artery disease (PAD) for a subject based on multi-dimensional information derived from non-invasive medical image analysis, the method comprising: accessing, by a computer system, one or more medical images of a subject, wherein the medical image of the subject is obtained non-invasively; analyzing, by the computer system, the one or more medical images of the subject to identify one or more coronary arteries of the subject; identifying, by the computer system, one or more regions of plaque in the identified one or more coronary arteries; analyzing, by the computer system, the identified one or more regions of plaque in the coronary arteries to perform quantified phenotyping of coronary atherosclerosis comprising total plaque volume, low-density non-calcified plaque volume, non-calcified plaque volume, and calcified plaque volume in the one or more coronary arteries; comparing, by the computer system, results of the quantified phenotyping of coronary atherosclerosis to a set of reference values of quantified phenotyping of coronary atherosclerosis corresponding to different levels of risk of PAD; and generating, by the computer system, an assessment of risk of PAD for the subject based at least in part on the comparison of the results of the quantified phenotyping of coronary atherosclerosis to the set of reference values, wherein the computer system comprises a computer processor and an electronic storage medium. Embodiment 29: The computer-implemented method of Embodiment 28, further comprising: identifying, by the computer system, one or more portions of the aorta of the subject on the medical image; identifying, by the computer system, one or more regions of plaque in the identified one or more portions of the aorta; analyzing, by the computer system, the identified one or more regions of plaque in the aorta to perform quantified phenotyping of aortic atherosclerosis comprising total plaque volume, low-density non-calcified plaque volume, non-calcified plaque volume, and calcified plaque volume in the one or more portions of the aorta; and comparing, by the computer system, results of the quantified phenotyping of aortic atherosclerosis to a set of reference values of quantified phenotyping of aortic atherosclerosis corresponding to different levels of risk of PAD, wherein the assessment of risk of PAD for the subject is further generated based at least in part on the comparison of the results of the quantified phenotyping of aortic atherosclerosis to the set of reference values of quantified phenotyping of aortic atherosclerosis. Embodiment 30: The computer-implemented method of Embodiments 28 or 29, further comprising: identifying, by the computer system, one or more portions of the lungs of the subject on the medical image; analyzing, by the computer system, the identified one or more portions of the lungs of the subject to determine a state of emphysema for the subject; and comparing, by the computer system, the determined state of emphysema for the subject to a set of reference values of states of emphysema corresponding to different levels of risk of PAD, wherein the assessment of risk of PAD for the subject is further generated based at least in part on the comparison of the results of the determined state of emphysema for the subject to the set of reference values of states of emphysema.
Embodiment 31: A computer-implemented method of assessing a health risk of a subject based on quantitative phenotyping derived from non-invasive medical image analysis, the method comprising: accessing, by a computer system, one or more medical images of a subject, wherein the medical image of the subject is obtained non-invasively; analyzing, by the computer system, the one or more medical images of the subject to identify one or more regions of interest, the one or more regions of interest comprising one or more portions of portions of coronary arteries, aorta, or lungs of the subject; automatically analyzing, by the computer system, the one or more regions of interest to perform quantified phenotyping, the quantified phenotyping comprising one or more of coronary atherosclerosis, aortic atherosclerosis, or emphysema; accessing, by the computer system, a first dataset comprising a plurality of predetermined diagnoses to determine presence of an applicable predetermined diagnosis corresponding to results of the quantified phenotyping; accessing, by the computer system, when an applicable predetermined diagnosis corresponding to results of the quantified phenotyping is present, a second dataset comprising a plurality of predetermined medical conditions to determine presence of an applicable predetermined medical condition corresponding to the applicable predetermined diagnosis; automatically determining, by the computer system, when an applicable predetermined medical condition corresponding to the applicable predetermined diagnosis is present, a third database comprising a plurality of health risk scores to determine an applicable health risk score for the subject corresponding to the applicable predetermined medical condition, wherein the applicable health risk score is derived from the quantified phenotyping of the one or more medical images; and determining, by the computer system, one or more candidate treatments for the subject based on the applicable health risk score, wherein the computer system comprises a computer processor and an electronic storage medium. Embodiment 32: The computer-implemented method of Embodiment 31, further comprising causing, by the computer system, generation of a graphical representation of the determined one or more candidate treatments for the subject. Embodiment 33: The computer-implemented method of Embodiments 31 or 32, wherein the quantitative phenotyping is performed based at least in part on analysis of density values of one or more pixels of the one or more medical images. Embodiment 34: The computer-implemented method of any one of Embodiments 31-33, wherein the density values comprise radiodensity values. Embodiment 35: The computer-implemented method of any one of Embodiments 31-34, wherein the one or more medical images are obtained using an imaging technique comprising one or more of CT, x-ray, ultrasound, echocardiography, MR imaging, optical coherence tomography (OCT), nuclear medicine imaging, positron-emission tomography (PET), single photon emission computed tomography (SPECT), or near-field infrared spectroscopy (NIRS). Improving Accuracy of CAD Measurements Various embodiments described herein relate to systems, devices, and methods for improving the accuracy of CAD measurements in non-invasive imaging. 
While the primary examples described in this section relate to approaches for improving the accuracy of CAD measurements by non-invasive CT angiography, these techniques can be applied to any imaging modality of any anatomical structure that exhibits motion (or other artifacts) across a series of acquired images. In this way, the features described herein are broadly applicable, and this disclosure should not be limited to the particular examples described herein. As an example, in some embodiments, a CT scan is performed of the heart, with multiple “phases” or “series” acquired during the cardiac cycle (e.g., as the heart is contracting or expanding). Each phase or series can comprise an image or a plurality of images (e.g., a video) captured during a different portion of the cardiac cycle. In some embodiments, the systems, methods, and devices described herein are configured to identify where, in the different phases or series acquired during the cardiac cycle, the optimal image quality for each artery, branch, or segment is present. The phase or series that provides the highest image quality for any particular artery, branch, or segment can then be used to perform vision-based or other forms of CAD measurement. For example, in some embodiments, the phase or series that provides the highest image quality for any particular artery can be analyzed to provide, for example, quantitative phenotyping of atherosclerosis. The quantitative phenotyping of atherosclerosis can include, for example, analysis of one or more of plaque volume, plaque composition, or plaque progression. In this way, the systems, methods, and devices described herein are further configured to provide the capability to “mix-and-match” these arteries across different points in the cardiac cycle to ensure that measurements of coronary atherosclerosis and vascular morphology are being done on the images at a “phase” or “series” that represents the ideal image quality for that particular artery, branch, or segment. Additionally or alternatively, in some embodiments, the different phases or series that provide the highest image quality for each of the different arteries, branches, or segments can also be combined into a composite image that provides improved visualization of the heart. These features can provide a significant improvement over conventional imaging and analysis modalities and can provide a solution to one or more drawbacks associated with the same. Coronary Computed Tomography Angiography (CCTA) has developed into a clinically useful, guideline-directed non-invasive imaging modality for diagnosis of coronary artery disease (CAD). Improvements in CT technology now enable near motion free images of the coronary arteries, which allows for accurate measurements of atherosclerosis burden and type, and vascular morphology. However, CCTA is still susceptible to significant imaging artifacts, owing to such common contributors as coronary artery motion, poor contrast opacification and beam hardening artifacts. For the first issue, coronary artery motion, common solutions have been to lower patients' heart rates using oral or intravenous beta blocker medications so that the limited temporal resolution of the current generation CT scanners can still produce relatively motion free images. Yet, even with slowing a patient's heart rate and maximizing temporal resolution on latest-generation scanners, the different coronary arteries move unpredictably during the cardiac cycle (e.g., as the heart is contracting and expanding). 
Imaging across the cardiac cycle can demonstrate this motion (and its associated motion artifacts) for each artery and its branches. Often, one artery is visualized with high image quality at one point of the cardiac cycle, while a different artery is visualized with high image quality at another point of the cardiac cycle. The same can be observed for contrast enhancement or beam hardening, with image quality differing across the cardiac cycle. At present, common clinical practice in image interpretation is to select the “phase” or “series” within the cardiac cycle that overall represents the best image quality with motion-free images of the heart arteries. However, this approach may allow for the analysis of the majority of vessels which exhibit ideal image quality, but does not necessarily allow for analysis of each and every vessel at the point in the cardiac cycle when it is of highest quality. That is, one artery may be of ideal image quality in one phase or series, while another artery may be of ideal image quality in another phase or series. This observation, which is noted for arteries, can also be applied to artery branches and artery segments. Currently, an imaging physician cognitively reunites the information from different reconstructions, acquisitions, or series of images and qualitatively makes an interpretation. This is not ideal because it is prone to error, it is done qualitatively (and not quantitatively), and it is very dependent on the expertise of the doctor. To address this need, this application describes systems, methods, and devices that are configured to identify optimal image quality on an artery-by-artery, branch-by-branch, or segment-by-segment basis, and that can provide the capability to “mix-and-match” these arteries across different points in the cardiac cycle to ensure that measurements of coronary atherosclerosis and vascular morphology are being done on the images at the phase or series that represents the ideal image quality for that particular artery, branch, or segment. In some embodiments, the inventions provided herein describe novel approaches to improving the accuracy of CAD measurements by non-invasive CT angiography, but this technique can be applied to any imaging modality of any anatomic structure that exhibits motion (or other artifacts) across a series of acquired images. As discussed herein, in some embodiments, the systems, devices, and methods described herein are configured for improving the accuracy of CAD measurements in non-invasive imaging. In particular, in some embodiments, a CT scan is performed of the heart, with multiple “phases” or “series” acquired during the cardiac cycle (e.g., as the heart is contracting or expanding). In some embodiments, the systems, methods, and devices described herein are configured to identify where, in the different phases or series acquired during the cardiac cycle, the optimal image quality for each artery, branch, or segment is present. Then, in some embodiments, the systems, methods, and devices described herein are configured to provide the capability to “mix-and-match” these arteries across different points in the cardiac cycle to ensure that measurements of coronary atherosclerosis and vascular morphology are being done on the images at a “phase” or “series” that represents the ideal image quality for that particular artery, branch, or segment.
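To make the "mix-and-match" selection concrete, the following non-limiting Python sketch picks, for each artery, branch, or segment, the cardiac phase with the highest image-quality score; the phase labels, artery names, and scores are hypothetical, and how the quality scores themselves are produced (e.g., by a trained model) is outside the scope of this sketch.

```python
# quality[phase_id][artery_id] -> image-quality score (higher is better).
def select_best_phases(quality):
    """Pick, for each artery/branch/segment, the cardiac phase with the highest
    image quality, allowing different arteries to come from different phases
    ("mix-and-match")."""
    best = {}
    for phase_id, artery_scores in quality.items():
        for artery_id, score in artery_scores.items():
            current = best.get(artery_id)
            if current is None or score > quality[current][artery_id]:
                best[artery_id] = phase_id
    return best

# Example (hypothetical scores): the RCA is clearest at 75% of the R-R interval
# while the LAD is clearest at 40%.
phase_quality = {
    "40%": {"LAD": 0.92, "RCA": 0.61},
    "75%": {"LAD": 0.70, "RCA": 0.95},
}
assert select_best_phases(phase_quality) == {"LAD": "40%", "RCA": "75%"}
```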
FIG.29Ais a block diagram illustrating an example embodiment of a system, device, and method for improving the accuracy of CAD measurements in non-invasive imaging. As illustrated inFIG.29A, in some embodiments, the system can receive (e.g., capture or otherwise acquire) image data of a heart of an individual or patient at block2902. The image can include multiple phases or series acquired during the cardiac cycle. For example, the image can include phases or series representing the heart as it contracts or expands during the cardiac cycle. Each phase or series can include, for example, a single image or a plurality of images (such as a video). In some embodiments, each phase or series can correspond to a portion or sub-portion of the cardiac cycle. In some embodiments, the system can be configured to receive the image of the individual or patient from a medical imaging device. For example, the image can comprise an image obtained by one or more modalities, such as computed tomography (CT), contrast-enhanced CT, non-contrast CT, x-ray, ultrasound, echocardiography, intravascular ultrasound (IVUS), MR imaging, optical coherence tomography (OCT), nuclear medicine imaging, positron-emission tomography (PET), single photon emission computed tomography (SPECT), and/or near-field infrared spectroscopy (NIRS). In some embodiments, the image can be stored on and/or received from a medical image database2914. In some embodiments, at block2904, the system can be configured to analyze the image data received at block2902to label one or more of the coronary arteries, branches, or segments of the heart. For example, in some embodiments, an algorithm is developed, validated, and applied that is configured to auto-extract and auto-label the coronary arteries, their branches, and the coronary segments. In some embodiments, the system can be configured to utilize one or more image processing, artificial intelligence (AI), and/or machine learning (ML) algorithms to automatically and/or dynamically identify and/or label one or more arteries, vessels, and/or a portion thereof within each phase or series of the image data. At block2906, the system can be configured to identify one or more anatomical landmarks of the heart that are identifiable across the multiple phases or series of the image. For example, identification of these landmarks can be used to allow comparison of the same structure, or part of the structure, in different acquisitions or reconstructions. In some embodiments, an algorithm can be developed, validated, and applied in order to identify the one or more anatomical landmarks. In some embodiments, the system can be configured to utilize one or more image processing, artificial intelligence (AI), and/or machine learning (ML) algorithms to automatically and/or dynamically identify the one or more anatomical landmarks associated with the one or more arteries, vessels, and/or portions thereof within each phase or series of the image data. In some embodiments, the anatomical landmarks can comprise beginning points and/or endpoints associated with the one or more arteries, vessels, and/or portions thereof. In some embodiments, the anatomical landmarks can comprise branches associated with the one or more arteries, vessels, and/or portions thereof. In some embodiments, other anatomical landmarks can be used. At block2908, the system can be configured to, for one or more (or all) of the coronary arteries, branches, or segments, rank image quality for each of the phases or series.
For example, in some embodiments, an algorithm is developed, validated, and applied that is configured to, for one or more of the coronary arteries, branches, or segments, rank image quality across the phases or series. In some embodiments, the system can be configured to utilize one or more image processing, artificial intelligence (AI), and/or machine learning (ML) algorithms to automatically and/or dynamically determine an image quality rank. Determining an image quality rank can be based on one or more of a number of factors, including, for example, clarity and/or sharpness of a representation of the coronary arteries, branches, or segments within a phase or series of the image data. At block2940, the phase or series which shows a coronary artery, branch, or segment with the highest image quality can be identified. Notably, different coronary arteries, branches, or segments may be shown with the highest image quality in different phases or series. Blocks2908and2940can be repeated for each of the coronary arteries, branches, or segments, or for as many of them as desired. Advantageously, this allows for the identification of which phase or series of the image data provides the best (e.g., clearest or sharpest) image of each of the identified coronary arteries. After the phase or series representing the best image quality for each coronary artery is identified, at block2911, that phase or series can be analyzed to determine CAD measurements and/or vascular morphology for the associated coronary artery. For example, at block2708, the system can be configured to perform quantitative phenotyping of atherosclerosis for the particular coronary artery using the phase or series that has been identified to correspond to the highest image quality. For example, in some embodiments, the quantitative phenotyping can be of atherosclerosis burden, volume, type, composition, and/or rate of progression for the individual or patient. In some embodiments, the system can be configured to utilize one or more image processing, artificial intelligence (AI), and/or machine learning (ML) algorithms to automatically and/or dynamically perform quantitative phenotyping of atherosclerosis. In some embodiments, as part of quantitative phenotyping, the system can be configured to identify and/or characterize different types and/or regions of plaque, for example based on density, absolute density, material density, relative density, and/or radiodensity. For example, in some embodiments, the system can be configured to characterize a region of plaque into one or more sub-types of plaque. For example, in some embodiments, the system can be configured to characterize a region of plaque as one or more of low density non-calcified plaque, non-calcified plaque, or calcified plaque. In some embodiments, calcified plaque can correspond to plaque having a highest density range, low density non-calcified plaque can correspond to plaque having a lowest density range, and non-calcified plaque can correspond to plaque having a density range between calcified plaque and low density non-calcified plaque. For example, in some embodiments, the system can be configured to characterize a particular region of plaque as low density non-calcified plaque when the radiodensity of an image pixel or voxel corresponding to that region of plaque is between about −189 and about 30 Hounsfield units (HU).
In some embodiments, the system can be configured to characterize a particular region of plaque as non-calcified plaque when the radiodensity of an image pixel or voxel corresponding to that region of plaque is between about 31 and about 350 HU. In some embodiments, the system can be configured to characterize a particular region of plaque as calcified plaque when the radiodensity of an image pixel or voxel corresponding to that region of plaque is between about 351 and about 2500 HU. In some embodiments, the lower and/or upper Hounsfield unit boundary threshold for determining whether a plaque corresponds to one or more of low density non-calcified plaque, non-calcified plaque, and/or calcified plaque can be about −1000 HU, about −900 HU, about −800 HU, about −700 HU, about −600 HU, about −500 HU, about −400 HU, about −300 HU, about −200 HU, about −190 HU, about −180 HU, about −170 HU, about −160 HU, about −150 HU, about −140 HU, about −130 HU, about −120 HU, about −110 HU, about −100 HU, about −90 HU, about −80 HU, about −70 HU, about −60 HU, about −50 HU, about −40 HU, about −30 HU, about −20 HU, about −10 HU, about 0 HU, about 10 HU, about 20 HU, about 30 HU, about 40 HU, about 50 HU, about 60 HU, about 70 HU, about 80 HU, about 90 HU, about 100 HU, about 110 HU, about 120 HU, about 130 HU, about 140 HU, about 150 HU, about 160 HU, about 170 HU, about 180 HU, about 190 HU, about 200 HU, about 210 HU, about 220 HU, about 230 HU, about 240 HU, about 250 HU, about 260 HU, about 270 HU, about 280 HU, about 290 HU, about 300 HU, about 310 HU, about 320 HU, about 330 HU, about 340 HU, about 350 HU, about 360 HU, about 370 HU, about 380 HU, about 390 HU, about 400 HU, about 410 HU, about 420 HU, about 430 HU, about 440 HU, about 450 HU, about 460 HU, about 470 HU, about 480 HU, about 490 HU, about 500 HU, about 510 HU, about 520 HU, about 530 HU, about 540 HU, about 550 HU, about 560 HU, about 570 HU, about 580 HU, about 590 HU, about 600 HU, about 700 HU, about 800 HU, about 900 HU, about 1000 HU, about 1100 HU, about 1200 HU, about 1300 HU, about 1400 HU, about 1500 HU, about 1600 HU, about 1700 HU, about 1800 HU, about 1900 HU, about 2000 HU, about 2100 HU, about 2200 HU, about 2300 HU, about 2400 HU, about 2500 HU, about 2600 HU, about 2700 HU, about 2800 HU, about 2900 HU, about 3000 HU, about 3100 HU, about 3200 HU, about 3300 HU, about 3400 HU, about 3500 HU, and/or about 4000 HU. In some embodiments, the system can be configured to determine and/or characterize the burden of atherosclerosis based at least in part on volume of plaque. In some embodiments, the system can be configured to analyze and/or determine total volume of plaque and/or volume of low-density non-calcified plaque, non-calcified plaque, and/or calcified plaque. In some embodiments, the system can be configured to perform phenotyping of plaque by determining a ratio of one or more of the foregoing volumes of plaque, for example within an artery, lesion, vessel, and/or the like. In some embodiments, the system can be configured to analyze the progression of plaque. For example, in some embodiments, the system can be configured to analyze the progression of one or more particular regions of plaque and/or overall progression and/or lesion- and/or artery-specific progression of plaque. In some embodiments, in order to analyze the progression of plaque, the system can be configured to analyze one or more serial images of the subject for phenotyping atherosclerosis.
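As a non-limiting illustration of the plaque sub-type characterization and volume quantification described above, the following Python sketch bins already-segmented plaque voxels by the approximate Hounsfield-unit ranges stated above; the function name, input conventions, and the use of a single fixed range per sub-type are assumptions of this sketch, and applying it to serial scans would yield the volume changes used for tracking progression.

```python
import numpy as np

# Approximate radiodensity ranges stated above (Hounsfield units).
LD_NCP_RANGE = (-189, 30)    # low-density non-calcified plaque
NCP_RANGE = (31, 350)        # non-calcified plaque
CP_RANGE = (351, 2500)       # calcified plaque

def phenotype_plaque(plaque_hu, voxel_volume_mm3):
    """Quantify plaque volumes by sub-type from voxel radiodensities.

    plaque_hu        -- 1D numpy array of HU values for voxels already segmented as plaque
    voxel_volume_mm3 -- volume of a single voxel in cubic millimeters
    """
    def volume(lo, hi):
        return float(((plaque_hu >= lo) & (plaque_hu <= hi)).sum()) * voxel_volume_mm3

    ld_ncp = volume(*LD_NCP_RANGE)
    ncp = volume(*NCP_RANGE)
    cp = volume(*CP_RANGE)
    total = ld_ncp + ncp + cp
    return {
        "total_plaque_volume_mm3": total,
        "low_density_non_calcified_mm3": ld_ncp,
        "non_calcified_mm3": ncp,
        "calcified_mm3": cp,
        # One example of a phenotyping ratio; guard against an empty segmentation.
        "calcified_fraction": cp / total if total else 0.0,
    }
```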
In some embodiments, tracking the progression of plaque can comprise analyzing changes and/or lack thereof in total plaque volume and/or volume of low-density non-calcified plaque, non-calcified plaque, and/or calcified plaque. In some embodiments, tracking the progression of plaque can comprise analyzing changes and/or lack thereof in density of a particular region of plaque and/or globally. Additionally or alternatively, in some embodiments, at block 2912, the coronary arteries, branches, or segments can be visualized according to the images identified at block 2940, e.g., those that show the coronary arteries, branches, or segments with the highest image quality. In some embodiments, the coronary arteries, branches, or segments can be visualized together in a "mix-and-match" approach (e.g., combining images from different phases or series). Visualization can be performed according to various methods, including volume-rendered techniques, multiplanar reformation or reconstructions (MPRs), tabular forms, or others. In some embodiments, the visualization can use the landmarks identified at block 2906 to align and generate a composite image. In some embodiments, the visualization can be stored in the medical image database 2914. In the example provided above, this approach has been described within the context of CT imaging of the coronary arteries. However, the methods, systems, and devices described herein can also be used with other imaging modalities and other anatomical structures as well. For example, the methods, systems, and devices described herein can also be used with ultrasound imaging (for example, of other arterial beds (e.g., carotid, aorta, lower extremity, etc.)), MRI, or nuclear testing, among others. Thus, the methods, systems, and devices described herein can also be applied to image reconstructions in other forms (e.g., reconstruction of an acquired CT volume with different thickness, different kernel, or in acquisitions with EKG synchronization, such as different timing after the R wave of the EKG). The methods, systems, and devices described herein can also be applied to merge imaging information from different types of image acquisitions (single energy CT vs. spectral CT) so as to be able to reconstruct a specific structure with a mix and/or aggregation of different information (fusion) obtained from all those different components (including change through time). In some embodiments, the methods, systems, and devices described herein can also be applied to depict "multi-phase" or "multi-series" information in a virtual 4D way. The methods, systems, and devices described herein can also be applied to enhance the phenotypic richness of the artery/branch/segment (or other, such as structure/organ/patient) by combining methods for image visualization from multiple imaging modalities (e.g., CT for atherosclerosis, PET for inflammation, or other). The methods, systems, and devices described herein can be used to fuse information from previous images to illustrate the change over time after such interventions as medications, exercise, or others. The methods, systems, and devices described herein can be used to predict the future response, such as from pharmacologic treatment or aging. In some embodiments, the systems, processes, and methods described herein are implemented using a computing system, such as the one illustrated in FIG. 29B. The example computer system 2932 is in communication with one or more computing systems 2950 and/or one or more data sources 2952 via one or more networks 2948.
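The following is a sketch of the "mix-and-match" composite described above, assuming the phases or series have already been aligned to a common frame (for example, using the identified landmarks) and that a best phase has already been chosen per segment; the data structures and function name are assumptions for illustration only.

```python
import numpy as np

def composite_from_best_phases(phases, segment_masks, best_phase):
    """
    Build a 'mix-and-match' volume: each segment's voxels are copied from the phase or
    series ranked highest for that segment. Assumes all phases are already registered
    to a common frame (e.g., via the previously identified landmarks).
    phases: dict phase_name -> ndarray (all arrays share the same shape).
    segment_masks: dict segment_name -> boolean ndarray in the common frame.
    best_phase: dict segment_name -> phase_name chosen by the quality-ranking step.
    """
    reference = next(iter(phases.values()))
    composite = np.array(reference, dtype=float, copy=True)  # background from any one phase
    for seg, mask in segment_masks.items():
        composite[mask] = phases[best_phase[seg]][mask]
    return composite
```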
WhileFIG.2illustrates an embodiment of a computing system2932, it is recognized that the functionality provided for in the components and modules of computer system2932can be combined into fewer components and modules, or further separated into additional components and modules. The computer system2932can comprise an improved CAD measurement module2944that carries out the functions, methods, acts, and/or processes described herein. The improved CAD measurement and/or visualization module2944is executed on the computer system2932by a central processing unit2936discussed further below. In general the word “module,” as used herein, refers to logic embodied in hardware or firmware or to a collection of software instructions, having entry and exit points. Modules are written in a program language, such as JAVA, C, or C++, or the like. Software modules can be compiled or linked into an executable program, installed in a dynamic link library, or can be written in an interpreted language such as BASIC, PERL, LAU, PHP, or Python and any such languages. Software modules can be called from other modules or from themselves, and/or can be invoked in response to detected events or interruptions. Modules implemented in hardware include connected logic units such as gates and flip-flops, and/or can include programmable units, such as programmable gate arrays or processors. Generally, the modules described herein refer to logical modules that can be combined with other modules or divided into sub-modules despite their physical organization or storage. The modules are executed by one or more computing systems, and can be stored on or within any suitable computer readable medium, or implemented in-whole or in-part within special designed hardware or firmware. Not all calculations, analysis, and/or optimization require the use of computer systems, though any of the above-described methods, calculations, processes, or analyses can be facilitated through the use of computers. Further, in some embodiments, process blocks described herein can be altered, rearranged, combined, and/or omitted. The computer system2932includes one or more processing units (CPU)206, which can comprise a microprocessor. The computer system2932further includes a physical memory210, such as random access memory (RAM) for temporary storage of information, a read only memory (ROM) for permanent storage of information, and a mass storage device2934, such as a backing store, hard drive, rotating magnetic disks, solid state disks (SSD), flash memory, phase-change memory (PCM), 3D XPoint memory, diskette, or optical media storage device. Alternatively, the mass storage device can be implemented in an array of servers. Typically, the components of the computer system2932are connected to the computer using a standards-based bus system. The bus system can be implemented using various protocols, such as Peripheral Component Interconnect (PCI), Micro Channel, SCSI, Industrial Standard Architecture (ISA) and Extended ISA (EISA) architectures. The computer system2932includes one or more input/output (I/O) devices and interfaces2942, such as a keyboard, mouse, touch pad, and printer. The I/O devices and interfaces2942can include one or more display devices, such as a monitor, that allows the visual presentation of data to a user. More particularly, a display device provides for the presentation of GUIs as application software data, and multi-media presentations, for example. 
The I/O devices and interfaces2942can also provide a communications interface to various external devices. The computer system2932can comprise one or more multi-media devices2938, such as speakers, video cards, graphics accelerators, and microphones, for example. The computer system2932can run on a variety of computing devices, such as a server, a Windows server, a Structure Query Language server, a Unix Server, a personal computer, a laptop computer, and so forth. In other embodiments, the computer system2932can run on a cluster computer system, a mainframe computer system and/or other computing system suitable for controlling and/or communicating with large databases, performing high volume transaction processing, and generating reports from large databases. The computing system2932is generally controlled and coordinated by an operating system software, such as z/OS, Windows, Linux, UNIX, BSD, PHP, SunOS, Solaris, MacOS, ICloud services or other compatible operating systems, including proprietary operating systems. Operating systems control and schedule computer processes for execution, perform memory management, provide file system, networking, and I/O services, and provide a user interface, such as a graphical user interface (GUI), among other things. The computer system2932illustrated inFIG.29Bis coupled to a network2948, such as a LAN, WAN, or the Internet via a communication link2946(wired, wireless, or a combination thereof). Network2948communicates with various computing devices and/or other electronic devices. Network2948is communicating with one or more computing systems2950and one or more data sources2952. The improved CAD measurement and/or visualization module2944can access or can be accessed by computing systems2950and/or data sources2952through a web-enabled user access point. Connections can be a direct physical connection, a virtual connection, and other connection type. The web-enabled user access point can comprise a browser module that uses text, graphics, audio, video, and other media to present data and to allow interaction with data via the network2948. The output module can be implemented as a combination of an all-points addressable display such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, or other types and/or combinations of displays. The output module can be implemented to communicate with input devices2942and they also include software with the appropriate interfaces which allow a user to access data through the use of stylized screen elements, such as menus, windows, dialogue boxes, tool bars, and controls (for example, radio buttons, check boxes, sliding scales, and so forth). Furthermore, the output module can communicate with a set of input and output devices to receive signals from the user. The computing system2932can include one or more internal and/or external data sources (for example, data sources2952). In some embodiments, one or more of the data repositories and the data sources described above can be implemented using a relational database, such as DB2, Sybase, Oracle, CodeBase, and Microsoft® SQL Server as well as other types of databases such as a flat-file database, an entity relationship database, and object-oriented database, and/or a record-based database. The computer system2932can also access one or more databases2952. The databases2952can be stored in a database or data repository. 
The computer system2932can access the one or more databases2952through a network2948or can directly access the database or data repository through I/O devices and interfaces2942. The data repository storing the one or more databases2952can reside within the computer system2932. Examples of Embodiments Relating to Improving Accuracy of CAD Measurements The following are non-limiting examples of certain embodiments of systems and methods for improving accuracy of CAD measurements and/or other related features. Other embodiments may include one or more other features, or different features, that are discussed herein. Embodiment 1: A computer-implemented method for improving accuracy of coronary artery disease measurements in non-invasive imaging analysis, the method comprising: accessing, by a computer system, image data of a heart of a patient, wherein the image data comprises multiple phases or series acquired during a cardiac cycle; identifying, by the computer system and based on the image data, one or more coronary arteries, branches, or segments associated with the heart; determining, by the computer system, an image quality rank for each of the one or more coronary arteries, branches, or segments for each of the phases or series of the image data; determining, by the computer system, which phase or series of the image data provides the highest image quality rank for each of the one or more coronary arteries, branches, or segments; and determining, by the computer system, for each of the one more coronary arteries, branches, or segments, one or more CAD measurements or vascular morphology based on the phase or series of the image data that provides the highest image quality rank, wherein the computer system comprises a computer processor and an electronic storage medium. Embodiment 2: The computer-implemented method of Embodiment 1, wherein determining the one or more CAD measurements or vascular morphology based on the phase or series of the image data that provides the highest image quality rank comprises analyzing, by the computer system, the phase or series to perform quantitative phenotyping of atherosclerosis. Embodiment 3: The computer-implemented method of Embodiment 2, the quantitative phenotyping of atherosclerosis comprises analysis of one or more of plaque volume, plaque composition, or plaque progression. Embodiment 4: The computer-implemented method of Embodiment 3, wherein the quantitative phenotyping of atherosclerosis is performed based at least in part on analysis of density values of one or more pixels of the medical image data corresponding to plaque. Embodiment 5: The computer-implemented method of Embodiment 4, wherein the plaque volume comprises one or more of total plaque volume, calcified plaque volume, non-calcified plaque volume, or low-density non-calcified plaque volume. Embodiment 6: The computer-implemented method of Embodiment 4, wherein the density values comprise radiodensity values. Embodiment 7: The computer-implemented method of Embodiment 4, wherein the plaque composition comprises composition of one or more of calcified plaque, non-calcified plaque, or low-density non-calcified plaque. Embodiment 8: The computer-implemented method of Embodiment 7, wherein one or more of the calcified plaque, non-calcified plaque, of low-density non-calcified plaque is identified based at least in part on radiodensity values of one or more pixels of the medical image data corresponding to plaque. 
Embodiment 9: The computer-implemented method of any of Embodiments 1 to 8, further comprising visualizing, by the computer system, the coronary arteries, branches, and segments based on the identified phases or series. Embodiment 10: The computer-implemented method of Embodiment 9, wherein visualizing the coronary arteries, branches, and segments comprises generating, by the computing system, a composite image from the phases or series having the highest image quality rank. Embodiment 11: The computer-implemented method of any of Embodiment 1 to 10, further comprising identifying, by the computer system, one or more landmarks within each phase or series. Embodiment 12: The computer-implemented method of Embodiment 11, wherein the landmarks comprise anatomical landmarks associated with the coronary arteries, branches, and segments. Embodiment 13: The computer-implemented method of any of Embodiments 1 to 12, wherein the medical image data is obtained using an imaging technique comprising one or more of computed tomography (CT), x-ray, ultrasound, echocardiography, intravascular ultrasound (IVUS), MR imaging, optical coherence tomography (OCT), nuclear medicine imaging, positron-emission tomography (PET), single photon emission computed tomography (SPECT), or near-field infrared spectroscopy (NIRS). Embodiment 14: The computer-implemented method of any of Embodiments 1 to 13, wherein visualizing the coronary arteries, branches, and segments based on the selected phases or series comprises presenting an image of each coronary arteries, branches, and segments based on the selected images corresponding to the phase or series associated with the highest image quality for that coronary artery, branch, or segment. Embodiment 15: A system for improving accuracy of coronary artery disease measurements in non-invasive imaging analysis, the system comprising: one or more computer readable storage devices configured to store a plurality of computer executable instructions; and one or more hardware computer processors in communication with the one or more computer readable storage devices and configured to execute the plurality of computer executable instructions in order to cause the system to: access image data of a heart of a patient, wherein the image data comprises multiple phases or series acquired during a cardiac cycle; identify based on the image data, one or more coronary arteries, branches, or segments associated with the heart; determine an image quality rank for each of the one or more coronary arteries, branches, or segments for each of the phases or series of the image data; determine which phase or series of the image data provides the highest image quality rank for each of the one or more coronary arteries, branches, or segments; and determine for each of the one more coronary arteries, branches, or segments, one or more CAD measurements or vascular morphology based on the phase or series of the image data that provides the highest image quality rank, Embodiment 16: The system of Embodiment 15, wherein determining the one or more CAD measurements or vascular morphology based on the phase or series of the image data that provides the highest image quality rank comprises analyzing, by the computer system, the phase or series to perform quantitative phenotyping of atherosclerosis. Embodiment 17: The system of Embodiment 16, wherein the quantitative phenotyping of atherosclerosis comprises analysis of one or more of plaque volume, plaque composition, or plaque progression. 
Embodiment 18: The system of Embodiment 17, wherein the quantitative phenotyping of atherosclerosis is performed based at least in part on analysis of density values of one or more pixels of the medical image data corresponding to plaque. Embodiment 19: The system of Embodiment 18, wherein the plaque volume comprises one or more of total plaque volume, calcified plaque volume, non-calcified plaque volume, or low-density non-calcified plaque volume. Embodiment 20: The system of Embodiment 18, wherein the density values comprise radiodensity values. Embodiment 21: The system of Embodiment 18, wherein the plaque composition comprises composition of one or more of calcified plaque, non-calcified plaque, or low-density non-calcified plaque. Embodiment 22: The system of Embodiment 21, wherein one or more of the calcified plaque, non-calcified plaque, of low-density non-calcified plaque is identified based at least in part on radiodensity values of one or more pixels of the medical image corresponding to plaque. Embodiment 23: The system of any of Embodiments 15 to 22, further comprising visualizing, by the computer system, the coronary arteries, branches, and segments based on the identified images. Embodiment 24: The system of Embodiment 23, wherein visualizing the coronary arteries, branches, and segments comprises generating, by the computing system, a composite image from the phases or series having the highest image quality rank. Embodiment 25: The system of any of Embodiments 15 to 24, further comprising identifying, by the computer system, one or more landmarks within each phase or series. Embodiment 26: The system of Embodiment 25, wherein the landmarks comprise anatomical landmarks associated with the coronary arteries, branches, and segments. Embodiment 27: The system of any of Embodiments 15 to 26, wherein the medical image is obtained using an imaging technique comprising one or more of computed tomography (CT), x-ray, ultrasound, echocardiography, intravascular ultrasound (IVUS), MR imaging, optical coherence tomography (OCT), nuclear medicine imaging, positron-emission tomography (PET), single photon emission computed tomography (SPECT), or near-field infrared spectroscopy (NIRS). Embodiment 28: The system of any of Embodiments 15 to 27, wherein visualizing the coronary arteries, branches, and segments based on the selected images comprises presenting an image of each coronary arteries, branches, and segments based on the selected images corresponding to the phase or series associated with the highest image quality for that coronary artery, branch, or segment. Longitudinal Diagnosis, Risk Assessment, Characterization of Heart Disease Various embodiments described herein relate to systems, devices, and methods for longitudinal image-based phenotyping to enhance drug discovery or development. For example, some embodiments relate to image-based phenotyping of high-risk atherosclerosis features to accelerate drug discovery or development for coronary artery disease (CAD) or the like. Historically, the process for developing new drugs has been a lengthy process involving much trial and error. In order to develop a new drug, one must first identify a target for the drug, the target being, for example, a cellular or molecular target for the drug to act upon in order to achieve a desired outcome in preventing and/or treating a disease. Example, targets for drugs for treating CAD include, LDL receptors, PCSK9, Factor VII, among others. 
Each of these cellular or biological targets plays, for example, a role in the process of clotting blood. By affecting one or more of these targets, the associated step in the clotting process may be affected as a way of treating CAD. Identifying a drug target using current methods is often imprecise and requires considerable time (e.g., years or decades) for several reasons. Currently, several methodologies exist for identifying a target for a drug. Historically, in the drug development process, researchers have gone after risk factors associated with the disease they are attempting to treat. In the case of CAD, researchers have considered the mechanisms associated with high cholesterol, high blood pressure, and/or high glucose. Each of these has been statistically correlated with an increased risk of CAD, and accordingly, by endeavoring to affect the mechanisms associated with these risk factors, one can hope to identify a target for treating and/or preventing CAD by considering the mechanisms of CAD risk factors as a surrogate. A problem with using surrogates in identifying drug targets is that the specific mechanisms associated with the disease are not identified. That is, there is no guarantee that the surrogate factor is associated with a cause of the disease, and not merely a correlated effect. Another way that targets have been sought is by considering patient outcomes over considerable lengths of time (e.g., 3 years, 5 years, 10 years, 20 years, or longer). For example, studies can be performed that follow large groups of patients (e.g., 10,000 people) over long time periods (e.g., 10 years). Members of the patient population that experience CAD events can be identified, and biological markers (e.g., collected through blood samples or other assays) of these patients can be compared with similar biological markers in members of the patient population that have not experienced CAD events. Differences between the biological markers of the patients who experience adverse events and those who do not can be useful in identifying targets for drug development. However, this process is lengthy, as patient populations must be studied over significant lengths of time. Additionally, even with specifically identifying those patients that experience the disease, it can be difficult to identify targets associated with the cause of the disease. An improved method for identifying a target for drug development can include examining the blood or other biological specimens of those who currently experience the disease. This can be done in a variety of ways. For example, biological samples of those with the disease (cases) can be compared with those that do not have the disease (controls). Another example can be examining those patients at the extremes. For example, one can examine biological samples from patients who, for various reasons, one would expect to suffer from the disease, but who do not. Similarly, it may be extremely valuable to examine biological samples from people who have the disease, but do not have any risk factors commonly associated with the disease. In order to gain valuable insight by studying biological samples from those who do or do not have the disease, it is important to be able to accurately understand and characterize the level of disease in those patients.
Accordingly, this application contemplates leveraging the image-based CAD measurement and analysis tools described herein to establish baseline and/or follow-up imaging that can be used to characterize and quantify a patient's disease. This imaging can be coupled with bioassay analysis to determine relationships between the bioassay analysis and the disease as a way to identify targets (e.g., molecular or cellular targets) for drug discovery and development. This can be done in several ways. In one example, image-based CAD phenotyping can be used to identify and quantify the CAD of various patients. The same patients can undergo bioassay analysis. The results of the image-based CAD phenotyping and bioassay analysis can be related for each patient. The results can be compared between patients with high levels of CAD (cases) and patients with low levels of CAD (controls). Examining the differences in the bioassays between the case and control groups can be useful in identifying targets for drug discovery and development. In another example, patients can undergo image-based CAD phenotyping and associated bioassay analysis at different points in time. For example, first image-based CAD phenotyping and associated bioassay analysis at a first time may establish a baseline for a patient. At a later time, for example, 1 year, 5 years, or 10 years later, or at a time when the patient's CAD has developed or progressed, additional image-based CAD phenotyping and associated bioassay analysis can be performed. Comparison of the changes in CAD and the changes in the bioassay analysis between the two time periods can be used to identify targets for drug discovery and development. This type of dynamic evaluation can be accomplished in several ways. In one example, upon determining that a patient's CAD has progressed (or improved) between the two time points, the bioassay from the initial, first time point can be analyzed to determine targets for drug discovery or development. In another example, upon determining that a patient's CAD has progressed (or improved) between the two time points, the bioassay from the initial, first time point and the bioassay from the later time point can be examined to determine targets for drug discovery or development. One can examine the association between bioassay at the initial timepoint to baseline burden or changes in disease over time. Alternatively or additionally, one can look at the bioassay from the later time point and, upon identification of an individual who rapidly progresses, regresses, transforms, one can look at the bioassay after the change has occurred. Or, one can examine the changes between the initial timepoint and the later timepoint (as a parallel marker of change), for example, to examine the changes in disease in relationship to the changes in bioassay. As described herein and shown, for example, inFIG.30A, in some embodiments, a potential drug target for treatment of coronary atherosclerosis is identified, and administered to an individual (block3052). At the same time, a control individual who is not administered the potential drug candidate is also identified (block3052). At block3054, both the test case and the control individual can undergo contrast-enhanced CT imaging of the heart and heart arteries. At block106, a computer system can be configured to extract atherosclerosis features and vascular morphology characteristics for each individual (the test case and the control). 
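One hedged way to express the "parallel marker of change" idea above is to correlate per-patient changes in an imaging phenotype (for example, plaque volume between baseline and follow-up) with per-patient changes in each assay output, and then rank the assay outputs by the strength of that association. The Pearson correlation, the data layout, and the function names below are illustrative assumptions, not a prescribed analysis.

```python
import numpy as np
from scipy.stats import pearsonr

def rank_candidate_targets(delta_plaque_volume, delta_assays):
    """
    delta_plaque_volume: 1D array of per-patient change in a plaque measure (e.g., total or
                         non-calcified plaque volume) between baseline and follow-up imaging.
    delta_assays: dict marker_name -> 1D array of per-patient changes in that assay output
                  (e.g., a proteomic signature) over the same interval.
    Returns a list of (marker, r, p) sorted by |correlation| with plaque progression, as a
    crude screen for candidate drug-discovery targets.
    """
    results = []
    for marker, deltas in delta_assays.items():
        r, p = pearsonr(np.asarray(deltas, float), np.asarray(delta_plaque_volume, float))
        results.append((marker, float(r), float(p)))
    return sorted(results, key=lambda t: abs(t[1]), reverse=True)
```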
Additionally, at block3058, biological specimens are obtained from the test case and control individuals. Such biological specimens can include, for example, saliva, blood, stool and others. Assays can be performed to determine the relationship of coronary atherosclerosis and vascular morphology parameters to biological specimens, including for genetics, proteomics, transcriptomics, metabolomics, microbiomics, and others. At block3060, a computer system can be configured to associate the atherosclerosis features and vascular morphology characteristics by CT to the output of the biological specimen assays (e.g., specific proteomic signatures). These atherosclerosis features and vascular morphology features can be specific and associated with clinically-manifest adverse events (e.g., MACE, MI, or death), and disease features include volume, composition, remodeling, location, diffuseness, and direction. In some embodiments, at block3062, based upon the output of the biological specimen assays associated with the second algorithm (e.g., coronary atherosclerosis burden, high-risk plaque), biological specimen assay outputs are identified as “targets” for drug discovery or development. In some embodiments, the principles described above can further be extended to image-based phenotyping of high-risk atherosclerosis progression to accelerate drug discovery or development. For example, the principles can be extended by performing serial CT imaging for changes in atherosclerosis and vascular morphology. An example method can include, for example, repeat CT scans performed in the future (e.g., 1 month, 1 year, 2 years). The atherosclerosis features and vascular morphology characteristics are quantified by the aforementioned algorithms. Afterwards, a computer system can be configured to relate the change in atherosclerosis features and vascular morphology characteristics to the biological specimen assays. In some embodiments, the computer system can further be developed to quantify the quantified changes to the biological specimen assay output (e.g., specific proteomic signatures). Based upon the output of the biological specimen assays (e.g., proteomic signatures) that are common to both the second and the fourth algorithms (e.g., coronary plaque progression, non-reduction in high-risk plaque), biological specimen assay outputs are identified as “targets” for drug discovery or development. In some embodiments, these principles can still be extended even further to image-based phenotyping of atherosclerosis stabilization or progression to identify optimal drug responders or non-responders. For example, the principles can be extended by performing serial CT imaging for changes in atherosclerosis and vascular morphology. An example method can include, for individuals treated with a specific drug, repeating CT scans in the future (e.g., 1 month, 1 year, 2 years). The atherosclerosis features and vascular morphology characteristics are quantified as described above. The computer system can further be configured to relate the change in atherosclerosis features and vascular morphology characteristics to the biological specimen assays. Individuals treated with this specific drug are classified as responders (e.g., reduced plaque progression) versus non-responders (e.g., continued plaque progression, continued high-risk plaque features, new high-risk plaques, etc.). The computer system can further relate responders versus non-responders to the biological specimen assay outputs. 
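The responder versus non-responder comparison described above might, for example, be sketched as a simple group split on plaque progression followed by a nonparametric test on each assay output. The zero-progression threshold and the Mann-Whitney test below are illustrative assumptions only; any clinically meaningful definition of response and any appropriate statistical comparison could be substituted.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def compare_responders(progression_mm3, assay_values, threshold=0.0):
    """
    Split treated patients into responders (plaque progression at or below `threshold`,
    e.g., no net growth) and non-responders, then test whether each assay output differs
    between the two groups.
    progression_mm3: 1D array of per-patient change in plaque volume on therapy.
    assay_values: dict marker_name -> 1D array of per-patient assay outputs.
    """
    progression = np.asarray(progression_mm3, float)
    responders = progression <= threshold
    if not responders.any() or responders.all():
        raise ValueError("Need at least one responder and one non-responder")
    report = {}
    for marker, values in assay_values.items():
        v = np.asarray(values, float)
        stat, p = mannwhitneyu(v[responders], v[~responders], alternative="two-sided")
        report[marker] = {
            "responder_median": float(np.median(v[responders])),
            "non_responder_median": float(np.median(v[~responders])),
            "p_value": float(p),
        }
    return report
```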
The approach described herein can be used with: multivariable adjustment of CAD risk factors and treatment and patient demographics/biometrics; protein, serum or urine markers, cytologic or histologic information, diet, exercise, digital wearables; combination targets of genomics and proteomics and/or microbiomics and metabolomics, etc.; combining different image features, and/or different information from different image modalities (e.g., liver steatosis from an ultrasound, delayed enhancement from an MRI). In some embodiments, the systems, processes, and methods described herein are implemented using a computing system, such as the one illustrated inFIG.2. The example computer system3002is in communication with one or more computing systems3020and/or one or more data sources3022via one or more networks3018. WhileFIG.2illustrates an embodiment of a computing system3002, it is recognized that the functionality provided for in the components and modules of computer system3002can be combined into fewer components and modules, or further separated into additional components and modules. The computer system3002can comprise an image-based phenotyping module3014that carries out the functions, methods, acts, and/or processes described herein. The image-based phenotyping module3014is executed on the computer system3002by a central processing unit3006discussed further below. In general the word “module,” as used herein, refers to logic embodied in hardware or firmware or to a collection of software instructions, having entry and exit points. Modules are written in a program language, such as JAVA, C, or C++, or the like. Software modules can be compiled or linked into an executable program, installed in a dynamic link library, or can be written in an interpreted language such as BASIC, PERL, LAU, PHP, or Python and any such languages. Software modules can be called from other modules or from themselves, and/or can be invoked in response to detected events or interruptions. Modules implemented in hardware include connected logic units such as gates and flip-flops, and/or can include programmable units, such as programmable gate arrays or processors. Generally, the modules described herein refer to logical modules that can be combined with other modules or divided into sub-modules despite their physical organization or storage. The modules are executed by one or more computing systems, and can be stored on or within any suitable computer readable medium, or implemented in-whole or in-part within special designed hardware or firmware. Not all calculations, analysis, and/or optimization require the use of computer systems, though any of the above-described methods, calculations, processes, or analyses can be facilitated through the use of computers. Further, in some embodiments, process blocks described herein can be altered, rearranged, combined, and/or omitted. The computer system3002includes one or more processing units (CPU)3006, which can comprise a microprocessor. The computer system3002further includes a physical memory3010, such as random access memory (RAM) for temporary storage of information, a read only memory (ROM) for permanent storage of information, and a mass storage device3004, such as a backing store, hard drive, rotating magnetic disks, solid state disks (SSD), flash memory, phase-change memory (PCM), 3D XPoint memory, diskette, or optical media storage device. Alternatively, the mass storage device can be implemented in an array of servers. 
Typically, the components of the computer system3002are connected to the computer using a standards based bus system. The bus system can be implemented using various protocols, such as Peripheral Component Interconnect (PCI), Micro Channel, SCSI, Industrial Standard Architecture (ISA) and Extended ISA (EISA) architectures. The computer system3002includes one or more input/output (I/O) devices and interfaces3012, such as a keyboard, mouse, touch pad, and printer. The I/O devices and interfaces3012can include one or more display devices, such as a monitor, that allows the visual presentation of data to a user. More particularly, a display device provides for the presentation of GUIs as application software data, and multi-media presentations, for example. The I/O devices and interfaces3012can also provide a communications interface to various external devices. The computer system3002can comprise one or more multi-media devices3008, such as speakers, video cards, graphics accelerators, and microphones, for example. The computer system3002can run on a variety of computing devices, such as a server, a Windows server, a Structure Query Language server, a Unix Server, a personal computer, a laptop computer, and so forth. In other embodiments, the computer system3002can run on a cluster computer system, a mainframe computer system and/or other computing system suitable for controlling and/or communicating with large databases, performing high volume transaction processing, and generating reports from large databases. The computing system3002is generally controlled and coordinated by an operating system software, such as z/OS, Windows, Linux, UNIX, BSD, PHP, SunOS, Solaris, MacOS, ICloud services or other compatible operating systems, including proprietary operating systems. Operating systems control and schedule computer processes for execution, perform memory management, provide file system, networking, and I/O services, and provide a user interface, such as a graphical user interface (GUI), among other things. The computer system3002illustrated inFIG.30Bis coupled to a network3018, such as a LAN, WAN, or the Internet via a communication link3016(wired, wireless, or a combination thereof). Network3018communicates with various computing devices and/or other electronic devices. Network3018is communicating with one or more computing systems3020and one or more data sources3022. The image-based phenotyping module3014can access or can be accessed by computing systems3020and/or data sources3022through a web-enabled user access point. Connections can be a direct physical connection, a virtual connection, and other connection type. The web-enabled user access point can comprise a browser module that uses text, graphics, audio, video, and other media to present data and to allow interaction with data via the network3018. The output module can be implemented as a combination of an all-points addressable display such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, or other types and/or combinations of displays. The output module can be implemented to communicate with input devices3012and they also include software with the appropriate interfaces which allow a user to access data through the use of stylized screen elements, such as menus, windows, dialogue boxes, tool bars, and controls (for example, radio buttons, check boxes, sliding scales, and so forth). Furthermore, the output module can communicate with a set of input and output devices to receive signals from the user. 
The computing system3002can include one or more internal and/or external data sources (for example, data sources3022). In some embodiments, one or more of the data repositories and the data sources described above can be implemented using a relational database, such as DB2, Sybase, Oracle, CodeBase, and Microsoft® SQL Server as well as other types of databases such as a flat-file database, an entity relationship database, and object-oriented database, and/or a record-based database. The computer system3002can also access one or more databases3022. The databases3022can be stored in a database or data repository. The computer system3002can access the one or more databases3022through a network3018or can directly access the database or data repository through I/O devices and interfaces3012. The data repository storing the one or more databases3022can reside within the computer system3002. Examples of Embodiments Relating to Longitudinal Diagnosis, Risk Assessment, Characterization of Heart Disease The following are non-limiting examples of certain embodiments of systems and methods for determining longitudinal diagnosis, risk assessment, characterization of heart disease and/or other related features. Other embodiments may include one or more other features, or different features, that are discussed herein. Embodiment 1: A computer-implemented method for image-based phenotyping to enhance drug discovery or development, the method comprising: accessing, by a computer system, a first medical image of a test case patient; analyzing, by the computer system, the first medical image of the test case patient to perform quantitative phenotyping of atherosclerosis associated with the test case patient, the quantitative phenotyping of atherosclerosis comprising analysis of one or more of plaque volume, plaque composition, or plaque progression; accessing, by the computer system, a second medical image of a control patient; analyzing, by the computer system, the second medical image of the test case patient to perform quantitative phenotyping of atherosclerosis associated with the control patient, the quantitative phenotyping of atherosclerosis comprising analysis of one or more of plaque volume, plaque composition, or plaque progression; relating, by the computer system, outputs of assays performed on biological specimens obtained from the test case patient and the control patient to the atherosclerosis features and vascular morphology characteristics associated with the test case patient and the control patient, respectively; and based on the related outputs of the assays and atherosclerosis features and vascular morphology characteristics, identifying, by the computer system, biological specimen assay outputs as targets for drug discovery or development, wherein the computer system comprises a computer processor and an electronic storage medium. Embodiment 2: The computer-implemented method of Embodiment 1, wherein the targets for drug discovery and development are identified based on comparison of the test case patient to the control patient. Embodiment 3: The computer-implemented method of Embodiment 2, wherein the comparison of the test case patient to the control patient is based on comparing the quantitative phenotyping of atherosclerosis of the test case patient and the control patient. 
Embodiment 4: The computer-implemented method of Embodiment 3, wherein the comparison of the test case patient to the control patient is based on comparing changes of the quantitative phenotyping of atherosclerosis of the test case patient and the control patient over time. Embodiment 5: The computer-implemented method of Embodiment 4, wherein the changes are evaluated based on quantitative phenotyping performed at greater than two points of time. Embodiment 6: The computer-implemented method of any of Embodiments 1 to 5, wherein the comparison of the test case patient to the control patient is based on comparing the outputs of assays performed on biological specimens obtained from the test case patient and the control patient. Embodiment 7: The computer-implemented method of Embodiment 6, wherein the comparison of the test case patient to the control patient is based on comparing changes of the outputs of assays performed on biological specimens obtained from the test case patient and the control patient over time. Embodiment 8: The computer-implemented method of Embodiment 4, wherein the changes are evaluated based on quantitative phenotyping performed at greater than two points of time. Embodiment 9: The computer-implemented method of any of Embodiments 1 to 8, wherein the biological specimen assay outputs as targets for drug discovery or development comprise one or more of genomics, proteomics, transcriptomics, metabolomics, microbiomics, and epigenetics. Embodiment 10: The computer-implemented method of any of Embodiments 1 to 9, wherein the quantitative phenotyping is further comprises an analysis of one or more of plaque remodeling, plaque location, plaque diffuseness, and plaque direction. Embodiment 11: The computer-implemented method of any of Embodiments 1 to 10, wherein the quantitative phenotyping of atherosclerosis is performed based at least in part on analysis of density values of one or more pixels of the medical image corresponding to plaque. Embodiment 12: The computer-implemented method of Embodiment 11, wherein the plaque volume comprises one or more of total plaque volume, calcified plaque volume, non-calcified plaque volume, or low-density non-calcified plaque volume. Embodiment 13: The computer-implemented method of Embodiment 11, wherein the density values comprise radiodensity values. Embodiment 14: The computer-implemented method of Embodiment 11, wherein the plaque composition comprises composition of one or more of calcified plaque, non-calcified plaque, or low-density non-calcified plaque. Embodiment 15: The computer-implemented method of Embodiment 14, wherein one or more of the calcified plaque, non-calcified plaque, of low-density non-calcified plaque is identified based at least in part on radiodensity values of one or more pixels of the medical image corresponding to plaque. Embodiment 16: The computer-implemented method of any of Embodiments 1 to 15, wherein the biologic specimens are obtained from one or more of the following: saliva, blood, or stool. Embodiment 17: The computer-implemented method of any of Embodiments 1 to 16, wherein the biologic specimens are analyzed to determine one or more of genetics, proteomics, transcriptomics, metabolomics, microbiomics. Embodiment 18: The computer-implemented method of any of Embodiments 1 to 17, wherein the medical image comprises a Computed Tomography (CT) image. 
Embodiment 19: The computer-implemented method of any of Embodiments 1 to 10, wherein the medical image is obtained using an imaging technique comprising one or more of CT, x-ray, ultrasound, echocardiography, intravascular ultrasound (IVUS), MR imaging, optical coherence tomography (OCT), nuclear medicine imaging, positron-emission tomography (PET), single photon emission computed tomography (SPECT), or near-field infrared spectroscopy (NIRS). Embodiment 20: A system for improving accuracy of coronary artery disease measurements in non-invasive imaging analysis, the system comprising: one or more computer readable storage devices configured to store a plurality of computer executable instructions; and one or more hardware computer processors in communication with the one or more computer readable storage devices and configured to execute the plurality of computer executable instructions in order to cause the system to: access a first medical image of a test case patient; analyze first medical image of the test case patient to perform quantitative phenotyping of atherosclerosis associated with the test case patient, the quantitative phenotyping of atherosclerosis comprising analysis of one or more of plaque volume, plaque composition, or plaque progression; access a second medical image of a control patient; analyze the second medical image of the test case patient to perform quantitative phenotyping of atherosclerosis associated with the control patient, the quantitative phenotyping of atherosclerosis comprising analysis of one or more of plaque volume, plaque composition, or plaque progression; relate outputs of assays performed on biological specimens obtained from the test case patient and the control patient to the atherosclerosis features and vascular morphology characteristics associated with the test case patient and the control patient, respectively; and based on the related outputs of the assays and atherosclerosis features and vascular morphology characteristics, identify biological specimen assay outputs as targets for drug discovery or development Embodiment 21: The system of Embodiment 20, wherein the targets for drug discovery and development are identified based on comparison of the test case patient to the control patient. Embodiment 22: The system of Embodiment 21, wherein the comparison of the test case patient to the control patient is based on comparing the quantitative phenotyping of atherosclerosis of the test case patient and the control patient. Embodiment 23 The system of Embodiment 22, wherein the comparison of the test case patient to the control patient is based on comparing changes of the quantitative phenotyping of atherosclerosis of the test case patient and the control patient over time. Embodiment 24: The system of Embodiment 23, wherein the changes are evaluated based on quantitative phenotyping performed at greater than two points of time. Embodiment 25: The system of any of Embodiments 20 to 24, wherein the comparison of the test case patient to the control patient is based on comparing the outputs of assays performed on biological specimens obtained from the test case patient and the control patient. Embodiment 26: The system of Embodiment 25, wherein the comparison of the test case patient to the control patient is based on comparing changes of the outputs of assays performed on biological specimens obtained from the test case patient and the control patient over time. 
Embodiment 27: The system of Embodiment 26, wherein the changes are evaluated based on quantitative phenotyping performed at greater than two points of time. Embodiment 28: The system of any of Embodiments 20 to 27, wherein the biological specimen assay outputs as targets for drug discovery or development comprise one or more of genomics, proteomics, transcriptomics, metabolomics, microbiomics, and epigenetics. Embodiment 29: The system of any of Embodiments 20 to 28, wherein the quantitative phenotyping is further comprises an analysis of one or more of plaque remodeling, plaque location, plaque diffuseness, and plaque direction. Embodiment 30: The system of any of Embodiments 20 to 29, wherein the quantitative phenotyping of atherosclerosis is performed based at least in part on analysis of density values of one or more pixels of the medical image corresponding to plaque. Embodiment 31: The system of Embodiment 29, wherein the plaque volume comprises one or more of total plaque volume, calcified plaque volume, non-calcified plaque volume, or low-density non-calcified plaque volume. Embodiment 32: The system of Embodiment 29, wherein the density values comprise radiodensity values. Embodiment 33: The system of Embodiment 29, wherein the plaque composition comprises composition of one or more of calcified plaque, non-calcified plaque, or low-density non-calcified plaque. Embodiment 34: The system of Embodiment 33, wherein one or more of the calcified plaque, non-calcified plaque, of low-density non-calcified plaque is identified based at least in part on radiodensity values of one or more pixels of the medical image corresponding to plaque. Embodiment 35: The system of any of Embodiments 20 to 34, wherein the biologic specimens are obtained from one or more of the following: saliva, blood, or stool. Embodiment 36: The system of any of Embodiments 20 to 35, wherein the biologic specimens are analyzed to determine one or more of genetics, proteomics, transcriptomics, metabolomics, microbiomics. Embodiment 37: The system of any of Embodiments 20 to 36, wherein the medical image comprises a Computed Tomography (CT) image. Embodiment 38: The system of any of Embodiments 20 to 37, wherein the medical image is obtained using an imaging technique comprising one or more of CT, x-ray, ultrasound, echocardiography, intravascular ultrasound (IVUS), MR imaging, optical coherence tomography (OCT), nuclear medicine imaging, positron-emission tomography (PET), single photon emission computed tomography (SPECT), or near-field infrared spectroscopy (NIRS). Other Embodiment(s) Although this invention has been disclosed in the context of certain embodiments and examples, it will be understood by those skilled in the art that the invention extends beyond the specifically disclosed embodiments to other alternative embodiments and/or uses of the invention and obvious modifications and equivalents thereof. In addition, while several variations of the embodiments of the invention have been shown and described in detail, other modifications, which are within the scope of this invention, will be readily apparent to those of skill in the art based upon this disclosure. It is also contemplated that various combinations or sub-combinations of the specific features and aspects of the embodiments may be made and still fall within the scope of the invention. 
It should be understood that various features and aspects of the disclosed embodiments can be combined with, or substituted for, one another in order to form varying modes of the embodiments of the disclosed invention. Any methods disclosed herein need not be performed in the order recited. Thus, it is intended that the scope of the invention herein disclosed should not be limited by the particular embodiments described above. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment. The headings used herein are for the convenience of the reader only and are not meant to limit the scope of the inventions or embodiments. Further, while the methods and devices described herein may be susceptible to various modifications and alternative forms, specific examples thereof have been shown in the drawings and are herein described in detail. It should be understood, however, that the invention is not to be limited to the particular forms or methods disclosed, but, to the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the various implementations described and the appended embodiments. Further, the disclosure herein of any particular feature, aspect, method, property, characteristic, quality, attribute, element, or the like in connection with an implementation or embodiment can be used in all other implementations or embodiments set forth herein. Any methods disclosed herein need not be performed in the order recited. The methods disclosed herein may include certain actions taken by a practitioner; however, the methods can also include any third-party instruction of those actions, either expressly or by implication. The ranges disclosed herein also encompass any and all overlap, sub-ranges, and combinations thereof. Language such as “up to,” “at least,” “greater than,” “less than,” “between,” and the like includes the number recited. Numbers preceded by a term such as “about” or “approximately” include the recited numbers and should be interpreted based on the circumstances (e.g., as accurate as reasonably possible under the circumstances, for example ±5%, ±10%, ±15%, etc.). For example, “about 3.5 mm” includes “3.5 mm.” Phrases preceded by a term such as “substantially” include the recited phrase and should be interpreted based on the circumstances (e.g., as much as reasonably possible under the circumstances). For example, “substantially constant” includes “constant.” Unless stated otherwise, all measurements are at standard conditions including temperature and pressure. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: A, B, or C” is intended to cover: A, B, C, A and B, A and C, B and C, and A, B, and C. 
Conjunctive language such as the phrase “at least one of X, Y and Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to convey that an item, term, etc. may be at least one of X, Y or Z. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y, and at least one of Z to each be present. | 1,161,514 |
11861834 | DETAILED DESCRIPTION Thermography is able to detect malignant tumors in tissues in a body part, such as the breast, based on temperature variation due to increased vasculature (angiogenesis) and increased blood perfusion to the affected area. The terms Thermography, infrared imaging, IR imaging and IRI are used interchangeably to mean images obtained in the infrared frequency range. The temperature profiles that result from IR imaging show any temperature variation over the body part surface, including those resulting from random vasculature especially near the body part surface, tumors, hormonal changes, and outside influencing factors such as alcohol/coffee consumption, clothing changes, makeup and deodorant/lotion. The present disclosure deals with providing geometric and thermal identifiers, specific markers and techniques to identify the presence and size/location of malignant tumors from the infrared images of the body part. These images are obtained by an infrared (IR) camera. Although the disclosure is described in greater detail using the female breasts and breast cancer, the disclosure covers the presence of suspected malignancy in any tissue in the body. Suspected malignancy refers to an area of suspicious cellular activity indicative of cancer. These may be obtained under steady-state or dynamic conditions using steady-state or dynamic Infrared Imaging (IRI). This disclosure describes various methods for malignancy detection in tissue. Breast cancer is an ideal candidate for IR imaging because the breasts lie in between the chest wall and the environment. There is no empty space in the body for other factors to block or alter the resulting thermal images. IR imaging could be incredibly beneficial for other diseases with a similar ideal setting. Examples of other potential diseased tissue that can benefit from this technology are listed. Thyroid Cancer—Thyroid cancer is rarer in the United States compared to breast cancer and colon cancer. It is also highly treatable with a low mortality rate. Similar to breast cancer, it is often initially discovered through self-examination. A lump or nodule discovered in the thyroid leads to an ultrasound and biopsy before cancer is diagnosed. Detecting the early signs of thyroid cancer can be challenging. Although the prognosis of thyroid cancer is very good, early detectability is always an issue. Because the thyroid is located close to the neck surface, surface thermal imaging would be simple. Skin Cancer—Although much of skin cancer can be visually assessed, rarer forms of skin cancer can be harder to diagnose. Skin cancer involves lesions, bumps, inflammation, or internal disease. Determining the extent of the cancer and later stage metastasis such as lymph node involvement is where imaging techniques are needed and where IR imaging could particularly come into play. Testicular Cancer—Testicular cancer is a rarer form of cancer with only approximately 20,000 cases in the United States per year. Similar to breast cancer, detection is similar by feeling a lump in either testicle. Due to the similar properties of the breast and breast cancer, IRI could easily be implemented to screen for testicular cancer. Non-Cancerous Diseases—The heat and increased blood perfusion associated with malignant tissue provides a good environment for thermal imaging. Other diseases, such as inflammatory diseases, are also strong contenders for this diagnosis modality. 
Diseases such as neuropathy, joint inflammation and arthritis, and various bowel diseases have been imaged using IR imaging. Breast cancer-related mortality rates in the United States have decreased due to advancements in screening and treatment. The prevalence of breast cancer in the United States is higher than in the developing countries. However, the mortality to incidence ratio in developing countries is higher. Although some cancers are easily detected after progression and metastasis, the survival rates become much lower. An increase in annual breast screening could dramatically improve survival rates in developing countries. However, the current techniques for breast cancer screening are expensive, not portable, adversely affected by the presence of dense tissue and cannot be easily adapted in remote locations. Thermal imaging can easily be made portable for implementation in remote or rural setting. With today's updated technology, some of the infrared cameras are compact enough to connect to a smartphone. Potential future application could include the development of a mobile application for onsite analysis of infrared images during screening. The only additional needed equipment is a seat or stool and an added support system to screen someone in the prone position with the breasts hanging freely, without gravitational deformation often seen in supine or seated positions. Portable infrared imaging camera is used in specific orientations including but not limited to frontal, oblique views, downward looking, upward looking on the body part being imaged. A body part refers to any body part or organ in both humans and animals that can be clinically imaged. Resulting images can be transmitted to an image processing center for further evaluation. The images can be further processed using the technique described below, and suspicion is determined. The results can be discussed with the consulting physician and further evaluation can be prescribed for thermal abnormality identification and specification. Although prone position is desirable to image the breast, other positions used in IR imaging can be used for detecting cancer using this invention. The wireless connectivity and ability to transmit the images to a central station where the images are used for further analysis using detection software, including the ones from this disclosure will further improve the early detection rate. Throughout the IR imaging and detection process, there are many key steps. The patient enters the room and is screened. IRI involves the acclimation of the body part to be imaged in order for steady state to be reached. Acclimation includes quasi-steady state conditions where temperatures are reasonably steady over a sufficient time, including a ten to twenty minute duration, five to ten minute duration, or one to five minute duration. Once the imaging begins, it involves the capturing of from 1-20 images or more around the body part. The images are used to identify abnormalities associated with malignancy. The images can be further processed using software techniques and numerical simulation tools to characterize the tumor. Tumor characterization involves identifying at least one or at least several of the parameters such as tumor size, tumor location, tissue thermal properties, metabolic heat generation of the tumor, metabolic heat generation of the non-cancerous tissue, fat layer thickness, and fat layer thermal properties. Some further characterization involves shape of the tumor. 
An example of this method uses the breast and IRI to detect malignancy. The IR imaging table can also be redesigned for other bodily purposes, for medical reasons, for comfort, etc. including but not limited to, adjustments in the center of the table for subjects who are morbidly obese or pregnant, subjects who cannot be in the specific position, subjects who would rather be kneeling, etc. The specific position includes upright, supine, lying sideways, or prone position dependent on the body part being imaged. In the instance of screening for breast cancer, when a subject enters the infrared imaging room, she is requested to disrobe from the waist up. Although the subject is generally female, the technique can be used for other genders. The technique can be used for other living species including but not limited to dogs, cats, horses, etc. A hospital gown for privacy is given and worn with an opening in the front. The subject lies down on the table in the prone position with one breast placed in the opening in the table. A period of, for example, 10 minutes is allowed for proper acclimation from the surrounding room temperature. This period may be increased or decreased depending on the acclimation time needed for the breast surface to reach near thermal equilibrium state with the surroundings. Prior to imaging, subjects are asked a variety of questions to determine if any external factors will affect the resulting images. They may be asked some or all of the questions, or other questions designed to get the relevant information related to each question. Additional relevant questions that affect the thermal profile may also be asked. For example, when screening for breast cancer, the following questions are discussed with the patient:Have you sunbathed within five days prior to the exam?Have you used lotion/cream, makeup, deodorant on the breasts on the day of the exam?Have you exercised today?Have you smoked or had alcohol today?Have you had coffee or tea today?Were you wearing tight-fitted clothing?Are you post-menopausal? If not, what is the current day of your menstrual cycle? When was your last period? Do you have a regular cycle?Are you on birth control? If so, what type?Did you have a previous or a recent injury to your breast?Did you recently have surgery?Did you have any surgery on breast?Do you have a fever? When steady state is reached, infrared imaging begins. Images are taken, beginning at the head, separated by 45° looking up at 25° angle to vertical. The process of taking each image may only take about 30 seconds-1 minute. Shorter or longer duration may be needed depending on the camera and imaging setup. One example of a camera mount is shown inFIG.1but the camera setup can be created in a number of ways. The resolution of a suitable IR camera that is used is 640×512 pixels, with a thermal sensitivity of 0.02° C. (FLIR SC6700). Other cameras that provide the necessary thermal information may be used with different pixel sizes and resolution. The angle and focus of the camera varies based on the tissue being screened. Other angles to vertical orientation and angular separation may be used to obtain specific or detailed images of the body part. In order to obtain images of potentially suspicious tissue, an imaging table is used to facilitate screening. This table can be designed in a multitude of ways dependent on the tissue being imaged and can be modular to fit many body types or abnormalities. 
In the example of imaging breast cancer, using imaging positions such as the supine and upright positions, imaging the breasts with infrared imaging can cause unwanted thermal distortions in the inframammary fold, between the breast and the underside of the breasts. Examples of some table designs used to screen the female breast for breast cancer are discussed further. One such embodiment, seen inFIG.2, involves a table with a circular opening where the breasts are exposed to an IR camera mounted on a stand that moves on a circular track. The purpose of the table is to facilitate obtaining infrared images in the desired view of the breast. The imaging table is a retrofitted lab table with a 9-inch hole for the breast imaging. A 2-inch layer of foam is placed on top for comfort and a layer of disposable paper is used for cleanliness. A black curtain was added around the edge of the table to hide the camera equipment and ensure no reflection or stray thermal artifacts during imaging. Additional acclimation time may be given for the acclimation of the contralateral breast because the subject was lying on that breast on the table during the imaging of the first breast. This causes the breast to warm and takes longer for steady state to be reached. A change in the design of the imaging table is presented to avoid this and ensure a shorter acclimation period by allowing both breasts to hang with a separating fabric that is used to obtain clear view of the breast being imaged. In addition, the following techniques are disclosed to reduce the acclimation time. It is imperative with thermal imaging to observe the entirety of the breast surface and reduce any unwanted thermal alterations caused by the breasts touching each other or the breast touching the chest wall. The subject is imaged in the prone position, similar to an MRI. The original table design was meant to replicate tables used for stereotactic breast biopsy with one opening in the center to observe one breast at a time. However, the current table used for clinical imaging causes potential unwanted thermal alterations during imaging. The acclimation time needed to ensure steady state condition suitable for infrared imaging varies from 1 minute to twenty minutes. In some cases, a longer duration may be implemented. In an embodiment, all testing takes approximately 23 minutes. Changes in the protocols, imaging duration and intervals between images will cause changes in the total imaging time. The subject enters the room, disrobes from the waist up and places their right breast into the opening in the table while the left breast is tucked against the table. Ten minutes pass for proper acclimation and to ensure that the temperature on the breast reaches steady state. When the subject switches sides, the contralateral breast is significantly hotter. This requires a longer acclimation period to ensure that the temperature of the contralateral breast reaches steady state before imaging continues. Another embodiment of a clinical table for breast cancer screening is closer in design to a breast MRI table. Two holes are added, one for each breast, with a perforated cloth or a flexible fabric lying between them. The cloth is added to move one breast out of the way so the other breast can be imaged in a 360° view. InFIG.3, the first panel shows the squished breast lying underneath the patient while the contralateral breast hangs freely through the opening in the table. 
This is found in the current clinical design, causing a longer acclimation and imaging period. The second panel shows the side view of a redesign with two holes cut in the table and the added cloth in between the two. The final panel shows the cloth is swept to the side to move the second breast out of the way. Although one breast will be touching a cloth, the perforations will prevent the breast from heating and will reduce acclimation time considerably. In addition to the added perforated cloth in between the two breasts, adjustable holes are added to fit to the patient. The table and the imaging system are designed to accommodate different ranges in breast size, shoulder width, body contour, and weight. In order to have one table that matches all patients, it can be made adjustable. A top view and bottom view of the new table with modular design is seen inFIG.4. There are modular pieces in the horizontal and vertical directions for each breast hole. When the subject lies on the table, the modular pieces will be adjusted to fit their breast size and weight. Once everything is adjusted accordingly, the perforated cloth will pull one breast to the side (FIG.3C) and the acclimation period will begin. Extra padding will be added between the two modular holes to ensure subject comfort. The perforated cloth serves the function of pulling the non-imaged breast away so that clear access can be achieved for the IR camera. It needs to move the breast without causing significant thermal changes due to insulating effect. The perforations serve the purpose of providing air cooling while the breast is moved away. The perforated cloth may be replaced with a mesh, be made of different materials including wire, polyester, nylon, etc. Other potential modular designs could include a specially made gown for each patient with holes where the breasts lie and a larger slot within the imaging table. This would significantly reduce the influence of the chest wall on the resulting thermal images and would provide more comfort for the patient. Instead of altering the gown, a softer, cloth-made modular design could be implemented for the same purpose. Altering the clinical imaging table as opposed to the subject gowns will provide much more control and help with cleanliness. Acclimation Time—The desired acclimation time to reach steady-state is between 5 minutes and 20 minutes for efficient throughput with good imaging quality for accurate detection. A faster acclimation time may be reached by providing loosely fitting clothing or minimal clothing during wait period. The time used for the preliminary study was 10 minutes. The series of images inFIG.5shows the left breast, the breast without a tumor, of Patient 23. This patient had a lot of random vascular throughout the breast, some more striking or larger than other areas. The larger visible blood vessels running through the breast do not have a drastic temperature difference from the beginning of acclimation to the end. However, the smaller, deeper blood vessels do change over time and reflect differently on the surface. The change in acclimation time can change the visible thermal profiles on the breast surface, particularly where the blood vessels and tumor(s) are resting. One potential method of creating a more uniform temperature across the surface is by reducing the heat transfer rate from the breast to the environment. An insulating brassiere can be used to sit on the breasts for a certain amount of time to ensure even temperature is reached. 
When the bra is removed, the vasculature thermal profiles will change when exposed to ambient temperature. The resulting changes in vasculature temperature can be observed over a period of time. The observable cooling period for the breast once exposed to the ambient temperature can help in differentiating hot spots due to tumors vs. hot spots due to vasculature. Hot spots are also referred to as an area of increased hyperthermia. As the breast cools, the lines of vasculature become more defined and linear while the thermal regions induced by the tumor remain more diffuse. Some changes are expected when the tumor is closer to the skin surface and the intensity of the thermal changes is considered. After the screening process, whether screening breast tissue or other body parts, one or more of the images are analyzed and it is determined whether or not a tumor is potentially present. The regions of interest are identified where thermal profiles are indicative of abnormalities that may be associated with the presence of cancer. The terms cancer and malignancy are used interchangeably and cover all forms of cancerous tissues. Different methods used to analyze thermal regions of interest are presented and discussed in greater detail in Method A. Method B describes the generation of a digital model of the body part using clinical images. A digital model is a 3D computer representation of the body part. The process can generate a 3D digital model of a body part whether the body part has a single tumor, multiple tumors or is considered healthy (free of suspicion). This model can be used to generate surface and internal temperature distributions of tissues using thermal simulation software (for example ANSYS Fluent). The generated temperature distributions can be compared with the temperature distributions in surface infrared images for the same orientation. The method also describes criteria that can be used to decide a match between the generated phantom thermal images and surface infrared images with actual temperature distributions. The thermal matching criteria are generated using Method A and are described further in Method C. Method C describes a technique to identify tumor characteristics including size, location, metabolic heat generation rate, and thermal properties of different components of the body part including, but not limited to tissues and fat layer. Method C can be used before or after suspected malignancy is detected. Method C employs the use of mathematical characterization of the temperature distributions, comparison of thermal images from the simulation and from the infrared images and a method to determine some or all of the thermal characteristics of tumor and healthy tissue. Finally, images can be added to a digital library based on identified thermal and geometric identifiers to be used in further analysis and diagnosis, discussed in Method D. Similar procedures can be adapted for cancers of other body parts. If no regions of interest are identified, the patient may be referred back to a consulting radiologist or primary doctor. More details on the components of this patient workflow and the underlying mechanisms are described in greater detail below. METHOD A: A method for determining malignancy within tissue using infrared imaging. —A region of interest is identified as a region on the body part surface where further analysis is considered based on the thermal abnormalities observed in surface infrared images. 
Tumors within tissue will have a greater amount of heat propagation throughout the tissue whereas vasculature or other thermally altering features will have a more defined thermal abnormality such as narrow and longer lines of increased temperature. After an image is taken, a geometric map is created on the surface infrared image using various techniques. There are multiple techniques that are used to develop a geometric map in order to detect regions of interest within tissue. These methods involve gridlines, profile lines, thermal contours, visual correlations and statistical indicators. Other mathematical or statistical parameters can be employed. Using the various techniques, thermal indicators are generated based on criteria specific to the technique used. Regions of interest are identified based on the thermal indicators and analyzed for suspected malignancy. Care is taken to avoid thermal saturation in the image over the tissue so that accurate information on temperature profile and its variation is obtained. Surface infrared images may or may not represent the exact surface temperature due to emissivity correction needed in the image. However, assuming the surface emissivity is uniform, the temperatures indicated by the infrared image are representative of the temperature field which may be somewhat offset from the true value. Use of lotion and other creams, etc. may cause changes in emissivity and their use is discouraged prior to imaging. The following techniques are used for analysis. Statistical Indicators—There are several statistical indicators proposed in the techniques discussed below including but not limited to mean temperatures, minimum temperatures, maximum temperatures, standard deviation, variance, median, etc. in order to determine suspected malignancy. Qualitative and quantitative measures are derived based on the surface infrared image of the body part. Correlate with Visual—Mastectomies, lumpectomies and other forms of scar tissue may create variations in thermal images or create confusing distortion. Combining IR imaging with visual data could help in determining if various thermal distortions should be examined more closely. However, obtaining visual images can be a sensitive issue from privacy considerations. Instead of digital photography, the tissue surface may be digitally reproduced using MRI images. Digital reproduction of a body part can be accomplished through other imaging modalities, including but not limited to, infrared imaging, outline capture techniques, shadow techniques, etc. In one embodiment, the imaging operator could write down observations about the visual scars, abnormalities, imperfects, etc. that correlate with factors seen in collected infrared images. Temperature Distribution in a Thermal Grid—One such system of analysis would be the implementation of a grid along the body part. This grid could be sketched in multiple ways including a longitude-latitude type pattern with the longitudinal and latitudinal lines matching the contours of the body part. Other gridding systems could involve biased lines, changing line density based on features of focus or an alternate style of grid pattern. Although a few types of grids are described, any grid pattern that can be generated on the infrared image for further analysis or comparison will provide the needed information. The goal of implementing a grid is to enable defining the temperature variation in each segment of the body part. 
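As a concrete illustration of the gridding step, the sketch below lays a regular grid over a surface infrared image stored as a 2-D array of temperatures, averages each cell, and flags cells whose mean exceeds an adjacent cell by more than 0.3° C. (the adjacent-box difference used later in this disclosure). The grid size, the NumPy representation of the image and the function names are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np

def grid_cell_means(temp_map, rows=8, cols=8):
    """Mean temperature in each cell of a rows x cols grid laid over the image.

    temp_map is a 2-D array of surface temperatures (deg C) from the infrared
    image; pixels outside the body part may be set to np.nan and are ignored.
    """
    h, w = temp_map.shape
    row_edges = np.linspace(0, h, rows + 1, dtype=int)
    col_edges = np.linspace(0, w, cols + 1, dtype=int)
    means = np.full((rows, cols), np.nan)
    for i in range(rows):
        for j in range(cols):
            cell = temp_map[row_edges[i]:row_edges[i + 1],
                            col_edges[j]:col_edges[j + 1]]
            if np.any(np.isfinite(cell)):
                means[i, j] = np.nanmean(cell)
    return means

def flag_suspicious_cells(means, threshold_c=0.3):
    """Cells whose mean exceeds a 4-connected neighbour by more than threshold_c."""
    flagged = []
    rows, cols = means.shape
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols and means[i, j] - means[ni, nj] > threshold_c:
                    flagged.append((i, j))
                    break
    return flagged
```

A biased or variable-density grid of the kind described above can be obtained in the same way by passing non-uniform row and column edges instead of the evenly spaced ones used here.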
With an overall estimated average surface temperature, finding the variation in the mean temperature in each section of the body part can make it much easier to find abnormalities. Using a biased grid or a varying density grid can help for various purposes. It can also be used to single out various thermal abnormalities. Thermal abnormalities are defined as hot spots or regions of interest including but not limited to suspected malignancy, suspected benign masses, suspected vasculature or suspected scar tissue. With fewer grid boxes, it will be easier to find the boundaries of the thermal abnormality in order to determine if malignancy is a concern or if stray vasculature is creating additional heat signatures. The grid temperatures may be obtained as an average of one or more pixels at known locations in a grid that is mapped over the infrared image of the skin surface. Additional features such as elimination of outliers or similar procedures used in image processing may be applied. Temperature profiles—Another method to identify thermal features is to obtain the surface temperature profile along lines drawn on the skin surface. These lines can be the same as described for the generation of the thermal grid. The temperature profile along these lines can be treated without processing, or processed to mitigate small temperature variations. Such processing can be done through Median, Average, Gaussian, Moving Average, Savitzky-Golay, Regression, or any combination of these filters. If the sign of the temperature gradient along the distance from the chest wall changes from positive to negative, it may be used as a marker of the presence of cancer. The distance over which the gradient is obtained is an important consideration. If the slope changes in a certain region, while it may not change sign, it may also be indicative of cancer. If the slope changes by more than 10 percent over a certain distance, that change may be used as a threshold. In other cases, a change from 10-50 percent or higher may be used as a marker. The slope is calculated over a reasonable distance to avoid any image aberrations. Thermal abnormalities can be classified as a temperature difference between maximum and minimum temperatures higher than 0.2° C. within a range of 2 mm to 10 mm, a preferred range of 2 mm to 5 mm. Thermal abnormalities are classified as a tumor if the temperature difference between maximum and minimum temperatures is higher than 0.5° C. over a 5 mm to 40 mm range, a preferred range of 10 mm to 30 mm. Thermal abnormalities are classified as veins if the temperature difference between maximum and minimum temperatures is between 0.2° C. and 2.0° C. over a 1 mm to 5 mm range. If an abnormality is classified as both a vein and a tumor, it is classified as a tumor. Aspect Ratios—In order to distinguish between possible tumors and blood vessels in a surface infrared image, the aspect ratio of the region of interest can be used. Tumors have a larger area of diffusion from the center of the hot spot to the surrounding tissue. Other features such as veins have a more secluded, well-defined hot spot. By studying various lengths of hot spots on the skin surface, an aspect ratio can be calculated. This ratio, depending on the resulting value, will help indicate whether an area is suspicious of malignancy or denotes a different thermal feature such as vasculature.
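A minimal sketch of the profile-line criteria above is given below: the temperature samples along a drawn line are smoothed with a Savitzky-Golay filter (one of the filters listed above), and the maximum-minus-minimum temperature difference is evaluated over sliding windows of the stated lengths. The sampling spacing, the window handling and the function name are assumptions for illustration; the thresholds are those stated in this disclosure, with a tie resolved in favor of a tumor.

```python
import numpy as np
from scipy.signal import savgol_filter

def classify_profile(temps_c, spacing_mm):
    """Classify a temperature profile sampled along a line drawn on the skin.

    temps_c    : 1-D array of temperatures (deg C) along the line
    spacing_mm : distance between consecutive samples in millimetres
    Returns "tumor", "vein", "abnormality" or "normal".
    """
    temps_c = np.asarray(temps_c, dtype=float)
    wl = min(11, len(temps_c))
    wl -= (wl + 1) % 2                        # Savitzky-Golay needs an odd window length
    smoothed = savgol_filter(temps_c, window_length=wl, polyorder=min(2, wl - 1))

    def max_swing(lo_mm, hi_mm):
        """Largest max-minus-min over windows spanning lo_mm to hi_mm along the line."""
        n = len(smoothed)
        lo = max(1, int(round(lo_mm / spacing_mm)))
        hi = max(lo, int(round(hi_mm / spacing_mm)))
        best = 0.0
        for w in range(lo, min(hi, n - 1) + 1):       # w = number of sample intervals
            for start in range(n - w):
                seg = smoothed[start:start + w + 1]
                best = max(best, float(seg.max() - seg.min()))
        return best

    if max_swing(5.0, 40.0) > 0.5:            # tumor: > 0.5 deg C over 5-40 mm (checked first)
        return "tumor"
    if 0.2 <= max_swing(1.0, 5.0) <= 2.0:     # vein: 0.2-2.0 deg C over 1-5 mm
        return "vein"
    if max_swing(2.0, 10.0) > 0.2:            # generic abnormality: > 0.2 deg C over 2-10 mm
        return "abnormality"
    return "normal"
```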
Thermal Contours—Using thermal contours with the hot spot in question as the central point is another method of hot spot differentiation and detection. Similar to a topographical map, thermal contours can be drawn around the hot spot and the diffusion through the tissue measured. Because of the increase in blood perfusion, malignant tumors have a stronger heat presence that steadily warms the surrounding areas, dissimilar to vasculature. Using appropriate markers, it is possible to identify the thermal changes due to angiogenesis. This may be a result of observing specific vasculature patterns that are noted in angiogenesis and that are different from the regular vasculature. A comparison may also be made with infrared images obtained from a prior visit or visits of the same subject to observe the thermal artifacts. METHOD B: A method to generate a digital model and computer simulated temperature profiles on the skin surface of the body part from clinical images—A digital model is a digital entity that has the actual shape of the body part under analysis and can be manipulated and modified. Also, if necessary, a volumetric or surface mesh can be generated on the digital model. The digital model can be generated from any imaging or video modality of the body part under analysis, including but not limited to digital photographs, infrared images, magnetic resonance images, magnetic resonance angiograms, ultrasound images, mammograms either 2D or 3D, computed tomography scans, data from 3D scanners, laser scanners, depth sensors such as the Microsoft Kinect, video processing, or any combination of these and other imaging modalities. A tumor of known characteristics, including but not limited to size, shape, metabolic heat generation rate, and thermal properties of different tissue or fat, can be introduced within the digital model. The digital model can be used to conduct computer simulations to compute temperature distributions or profiles for various tumor locations and sizes. The resulting model, referred to as a phantom thermal model, is used for comparison with the surface infrared image. Appropriate thermal boundary conditions are employed in the thermal simulation, such as constant chest wall temperature, given heat transfer coefficient or coefficients on the skin surface, ambient temperature, and emissivity of the skin surface and temperature of surroundings if radiation effects are being considered. The digital model can be generated from the data obtained from the imaging modalities or by processing one or multiple individual images using techniques such as image filtering, edge detection, segmentation, intensity transformation, multiview reconstruction, photogrammetry, marching cubes, marching tetrahedrons or any combination of these methods, or any other method that results in a 3D representation of the body part. If desired, the digital model can include the internal structures such as blood vessels and skin and fat layers. The resulting model can be used in its current state or modified to remove or add texture features either using a Computer Aided Design (CAD) software, a modeling software, or any software in which the model can be modified or smoothed to include new features. The digital model can include one or multiple tumors with characteristics including size, shape, location, metabolic activity, thermal conductivity, perfusion rate, etc. Once the digital model contains the desired features, a mesh, either surface or volumetric, is generated in order to create smaller regions to solve the governing equations of heat transfer in the domain. The mesh can be generated by any software or procedure known in the art.
The governing heat transfer equations can be solved using available commercial thermal simulation software or open source software. Preferably, the minimum number of mesh elements is 1,000 for volumetric meshes and 100 for surface meshes. A higher resolution can be achieved using at least 100,000 elements for volumetric and 5,000 for surface meshes. Higher or lower number of elements can be implemented depending on the size of the region and desired overall computation speed or accuracy. The quality of the elements in the resulting mesh should be within recommended values for the software for accurate numerical computations. For example, the skewness of the mesh elements should be below 0.95, with preferred values below 0.7. Depending on the sophistication of the software used, the actual number of mesh elements can be smaller or larger. Once the mesh is generated in the digital model, the governing equations can be defined. The governing equations can be analytical, empirical, semi-empirical or any combination thereof. Some examples are the Pennes Bioheat Equation, the Countercurrent, and the Jiji models. These governing equations may or may not take into account the effect of the vasculature on the temperature calculations. This effect can be included either using models for the vasculature, from clinical images or artificially generated using software such as Vascusynth. Appropriate values of the tissue properties and parameters should be defined prior to conducting the simulations, some of the parameters include thermal conductivity, specific heat, density, blood perfusion rate, metabolic activity, etc. The overall goal of the digital model and the thermal simulation is to provide an accurate estimation of temperature profile on the surface of interest for a given digital model with given tumor characteristics. In order to solve the governing equations in the domain, either for a steady-state or transient formulations, boundary conditions are used to define the interactions of the computational domain with its environment. The surface is generally exposed to the still air, for which a convective boundary condition can be used. The value of the heat transfer coefficient considering a mixture of radiation, natural convection and evaporation from the skin is preferably in the range of 5-25 W/m2-K; however any value can be used. Other alternatives include forced convection or natural convection by modeling the surroundings, a fixed initial surface temperature, radiation effects or any combination of these to account for the heat transfer between the model and its surroundings. For the other surfaces of the domain, any relevant boundary condition can be used, for example, fixed temperature, known heat flux, known temperature distribution, temperature distribution from experimental or analytical data, symmetry, insulated faces. These conditions can be either stationary or time dependent. Once the digital model is meshed and the boundary conditions and tissue properties are set, the temperature field in the computational domain can be obtained from the thermal simulation software. The governing equation can be discretized in the software using the Finite Volume Method, the Finite Element Method, Finite Differences, The Boundary Element Method or any other suitable discretization method. The solution can be obtained using commercial software, open-access software, by developing in-house scripts/programs/algorithms, or any combinations of these. 
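For reference, the Pennes bioheat equation named above is commonly written, in steady state, as a balance of conduction, blood perfusion and metabolic heat generation, with a convective condition of the type described above applied on the exposed skin surface. The notation below is one common convention and is not taken verbatim from this disclosure.

```latex
% Steady-state Pennes bioheat equation in the tissue domain
\nabla \cdot \left( k \nabla T \right)
  + \rho_b c_b \,\omega_b \left( T_a - T \right) + Q_m = 0
% k             tissue thermal conductivity, W/(m K)
% \rho_b, c_b   density and specific heat of blood
% \omega_b      blood perfusion rate, 1/s
% T_a           arterial blood temperature
% Q_m           metabolic heat generation rate, W/m^3

% Convective boundary condition on the exposed skin surface
-k \, \frac{\partial T}{\partial n} = h \left( T_s - T_\infty \right)
% h             combined heat transfer coefficient (for example, 5-25 W/(m^2 K))
% T_s, T_\infty skin surface and ambient temperatures
```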
This software can run in parallel or serial mode either on a CPU or GPU (graphics processing unit). The solution can be obtained by running the routines in parallel, serial, multithread or single-thread processes in any architecture or processor, including, but not limited to CPU and GPU (graphics processing unit). Any other technique to obtain the thermal profile for a given digital model can be implemented. The tumor introduced in the digital model can have any shape, including but not limited to spheres, cubes, ellipsoids, cylinders, pyramids, prisms, irregular shapes, actual tumor shape from imaging and any combination of those. The thermophysical properties of the tissues such as thermal conductivity, specific heat, blood perfusion, metabolic activity, or any other, can be constant, variable in space, variable in time, or any combination of these. The variation can be defined by any continuous, discontinuous or piecewise mathematical function or combination of functions. An optional but very highly recommended step is to validate the temperature computations of the digital model. In order to compute accurate temperature distributions, the predictions from the thermal simulation software should match closely to the temperature observed from the phantom thermal images to the surface infrared images. Therefore, it may be necessary to validate the digital model. Method B can be used to validate the digital model using a case where the tumor characteristics are known and the infrared images are available. The process steps in one such embodiment may be as follows—Obtain a digital model of the body part of interest from the infrared images or other imaging techniques. If available, incorporate the tumor characteristics in the digital model, generate simulated temperature distributions using thermal simulation software, compare the simulated and actual IR temperature distributions and determine whether there is a good match using thermal matching criteria. In one embodiment, the phantom thermal images are generated using the thermal simulation software saved in views/orientations that reproduce the views/orientations that were used to capture the infrared images during clinical testing. To facilitate the comparison, the phantom thermal images and surface infrared images should correspond to the same view, orientation and angle of the body part. Image alignment also known as image registration or registration is an important step while comparing the phantom thermal image and surface infrared imaging in order to assure that the corresponding regions of interest are closely matched with each other. The surface infrared images can be analyzed, either by technicians, clinicians, or through software either automated, semi-automated or manual, to inspect them for abnormal features, including but not limited to, abnormal lumps, scars, missing tissue and deformation. The outcome from the analysis can be used while comparing the computed and infrared images. The validation can be conducted by comparing specific parameters in the computed and clinical thermal images at the pixel level (2D images), voxel level (3D images), or analyzing portions of the image or images, or the entire image or 3D model. The same or similar procedure can be used to compare images in other scenarios, such as comparing the results in an iterative procedure, outlined in Method C. The comparison is used to validate the digital breast model. 
It can also be used to verify the match between computed and infrared images, and update the values of the parameters in the iterative process. In one embodiment, the portion or regions of the images to analyze can be a gridding system along the body part. The grid can be sketched as longitudinal and latitudinal lines following the contours of the body part, horizontal and vertical lines, skewed lines, concentric and eccentric lines and any other gridding system to divide the part under analysis. The grid lines can have a constant spacing or variable spacing. The region to be analyzed can be further divided into subdivisions, preferably more than 4 subdivisions. A more preferred number of subdivisions is more than 16. In one embodiment, a central region representing between 1% and 100% of the image is used for comparison. In a preferred embodiment, a range between 5% and 75% is used, whereas, in a more preferred embodiment, a range between 25% and 50% is used. One of the factors in determining the range is the size of the region of interest, while other factors include, but are not limited to, the size of the image and the registration match between the images. These specific parameters can be individual indicators described in Method A, including but not limited to mean, median, variance, standard deviation, texture, entropy, maximum, minimum, moment, correlation, or any other first or second order statistical parameter or mathematical function of them. The parameters can also be distributions of values along specific paths such as the gridding system proposed in this invention, along regions of interest in the images or 3D models, or a combination of these. It is preferred to conduct the comparison between parameters after the 3D models or individual images are aligned/registered to ensure that the comparison is being conducted between corresponding regions/levels. Any suitable registration method, such as intensity-based, feature-based or a combination of them, can be used, although image registration is not strictly required. During registration, images may or may not be scaled to the same size, although scaling is preferred to facilitate the comparison. The comparison of parameters can be conducted by computing the difference, absolute error, averaged error, mean squared error, correlation, or any mathematical function of these or their combinations. The difference between the computed and clinical images in the thermal parameters that provide a representative temperature is called the convergence criteria and should be below 3° C. for accurate tumor detection, but preferably below 1° C., or more preferably below 0.5° C., and most preferably below 0.2° C. The accepted value is balanced between the competing needs to reduce false positives and improve accurate detection. The convergence criteria could be based on other indicators such as temperature gradients, where differences between computed and actual values should be below 3° C./cm, with values below 0.5° C./cm more preferred. Any other statistical parameter can be used to define the convergence criteria, either thermal or in terms of pixel intensities of individual pixels or clusters of pixels from the images.
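One way to express these convergence criteria in code is sketched below: a phantom thermal image and a surface infrared image, already registered to the same pixel grid, are compared over a central region using the mean absolute temperature difference and the mean temperature-gradient difference. The 50% central region and the 0.5° C. and 0.5° C./cm defaults are taken from the preferred values stated above; the array representation and the function name are illustrative assumptions.

```python
import numpy as np

def thermal_match(simulated_c, measured_c, pixel_mm,
                  central_fraction=0.5, temp_tol_c=0.5, grad_tol_c_per_cm=0.5):
    """Check whether a phantom thermal image matches a surface infrared image.

    Both inputs are 2-D temperature maps (deg C) on the same registered pixel
    grid; only the central `central_fraction` of the image is compared.
    """
    assert simulated_c.shape == measured_c.shape
    h, w = simulated_c.shape
    mh, mw = int(h * central_fraction), int(w * central_fraction)
    r0, c0 = (h - mh) // 2, (w - mw) // 2
    sim = simulated_c[r0:r0 + mh, c0:c0 + mw]
    meas = measured_c[r0:r0 + mh, c0:c0 + mw]

    # Representative-temperature criterion: mean absolute difference in deg C.
    mean_abs_err = float(np.nanmean(np.abs(sim - meas)))

    # Gradient criterion: mean difference of gradient magnitude in deg C per cm
    # (pixel spacing is given in mm, so divide by 10 to convert to cm).
    gy_s, gx_s = np.gradient(sim, pixel_mm / 10.0)
    gy_m, gx_m = np.gradient(meas, pixel_mm / 10.0)
    grad_err = float(np.nanmean(np.hypot(gy_s - gy_m, gx_s - gx_m)))

    return mean_abs_err <= temp_tol_c and grad_err <= grad_tol_c_per_cm
```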
METHOD C: A method to localize a tumor within tissue—The digital model can be used in software to localize tumor in terms of estimating relevant parameters such as thermal conductivity of tissues (skin, fat, gland, muscle, tumor, etc.), blood perfusion and metabolic activity of the tissues, location, size and shape of a tumor, or external conditions such as ambient temperature, heat transfer coefficient, or any other relevant parameter. First, initial values of the parameters required in the digital model are set using thermal simulation software along with appropriate boundary conditions to generate phantom thermal images. The details of the digital model generation and thermal simulation described in Method B can be employed in Method C. The surface infrared images are the target images to which the generated phantom thermal images are compared. The phantom thermal images and surface infrared images are processed and compared using any criteria for comparison, such as described in Method B. If the difference of parameter values between the phantom thermal images and the surface infrared images is below a convergence criteria determined by the user, the values of the parameters are accepted as the estimates from the software. If the difference is above the convergence criteria, the parameters are updated and new phantom thermal images are generated, the process is repeated until the convergence criteria are satisfied and the parameter values are accepted. The comparison between the phantom thermal images and the surface infrared images can be done using any of the techniques and algorithms described in Method B. In order to update the values of the parameters to estimate, any optimization procedure can be used such as the Levenberg-Marquardt algorithm, The Gradient Descent method, The Conjugate Gradient Method, The Simulated Annealing Method, Particle Swarm Optimization, Ant Colony Optimization, Sequential Quadratic Programming, Artificial Neural Networks, Support Vector Machines, Genetic Algorithms, any combination of them and any other existing or new method suitable to solve optimization problems. The localization of an area of suspected malignancy is defined as obtaining the location, size and other characteristics of a tumor. The location of the tumor can be measured in terms of a set of coordinates (x, y, z) from an origin to its center of gravity or to any other point inside the tumor outline or on its surface. Any coordinate system such as Cartesian, Cylindrical, Spherical, or any mapping and combination thereof can be used. Any combination and number of parameters can be estimated using the methods described herein. The estimation can be obtained from the phantom thermal images by comparing the thermal identifiers to a library of thermal identifiers. Other methods to conduct the estimation include Artificial Intelligence algorithms trained with data in the library of thermal identifiers including but not limited to Artificial Neural Networks, Support Vector Machines, Convolutional Neural Networks, Genetic Algorithms and any combination of these. Another method includes using a digital model prepared from any of the imaging modalities described herein. The origin of the coordinate system to locate the tumor can be defined as any point either internal, on the surface, or external to any part of the body. The procedure can also be used to locate multiple tumors. 
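The iterative matching loop of Method C can be sketched as a least-squares parameter estimation, one of the optimization approaches named above. In the sketch below a simple analytic hot-spot model stands in for the phantom-image generation step purely so that the example runs on its own; in the actual workflow each evaluation of the residual would instead regenerate a phantom thermal image with the thermal simulation software for the current tumor parameters. The parameterization, the baseline temperature and all names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(params, xx, yy, baseline_c=32.0):
    """Toy surrogate for a phantom thermal image: a Gaussian hot spot at (x0, y0)
    with spread s (loosely related to depth) and temperature rise a."""
    x0, y0, s, a = params
    return baseline_c + a * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * s ** 2))

def estimate_tumor(measured_c, extent_mm=(0.0, 90.0, 0.0, 90.0)):
    """Estimate hot-spot parameters by matching simulated and measured temperature maps."""
    ny, nx = measured_c.shape
    x = np.linspace(extent_mm[0], extent_mm[1], nx)
    y = np.linspace(extent_mm[2], extent_mm[3], ny)
    xx, yy = np.meshgrid(x, y)

    def residual(params):
        # In the full workflow this is where the thermal simulation would be re-run.
        return (forward_model(params, xx, yy) - measured_c).ravel()

    p0 = np.array([x.mean(), y.mean(), 10.0, 1.0])   # initial guess at the image centre
    fit = least_squares(residual, p0, method="lm")   # Levenberg-Marquardt update step
    return dict(zip(("x0_mm", "y0_mm", "spread_mm", "rise_c"), fit.x))

# Example: recover the parameters of a synthetic "measured" image with mild noise.
true_params = (55.0, 30.0, 8.0, 1.5)
grid = np.linspace(0.0, 90.0, 128)
XX, YY = np.meshgrid(grid, grid)
measured = forward_model(true_params, XX, YY) + 0.02 * np.random.default_rng(0).normal(size=XX.shape)
print(estimate_tumor(measured))
```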
The process for multiple tumor identification can be invoked when a single tumor has been identified and there is at least one region for which the clinical and computed temperatures differ by more than 0.3° C.; preferred values are above 1° C., and more preferred values are above 1.5° C. The convergence criteria could be based on other indicators such as temperature gradients, where regions with differences above 0.5° C./cm are identified, with values above 2.5° C./cm more preferred. Higher or lower values of any of the convergence criteria could be employed to improve the detection accuracy. Any statistical indicator can be used to define criteria for multiple tumors, such as thermal indicators or indicators in terms of pixel intensities of individual pixels or clusters of pixels from the images. Thermal indicators refer to temperature changes on the body part surface observed in surface infrared images or phantom thermal images. Other suitable values may be used based on accuracy or speed of simulation, although accuracy is of primary concern. The location of the identified tumor is fixed and the procedure is repeated until a convergence criteria is met for the second tumor location and size. In case of more regions of discrepancy, the procedure can be repeated as many times as needed. If multiple tumors are present, an iterative procedure can be further implemented by keeping the second tumor fixed and refining the first tumor. A similar strategy can be used for additional tumors. The process may be iteratively repeated to improve accuracy. The estimated convergence criteria and parameters can be refined by comparing the outcome with clinical data, including surface infrared images. The parameters that can be refined include, but are not limited to, tumor shape, aspect ratio, size, location, and other tumor characteristics, including early stages of cancer such as cell linings occurring in ductal carcinoma in situ. One optional but highly recommended step is to analyze the outcome values of the estimated parameters, preferably those referring to the location and size of the tumor. The analysis can be done by trained personnel such as clinicians, for example the examining radiologist, or software, either automated, manual or semi-automated. The analysis can distinguish between different scenarios, including but not limited to: tumor size and location falling within common ranges for the specific cancer under analysis; tumor not found in the domain under analysis or its location falling outside the domain; tumor too small and its location beyond common values; tumor too big; etc. For each of the possible scenarios, the entity analyzing the outcome will provide recommendations for further analysis. Method C is applicable to both steady-state and dynamic infrared imaging. METHOD D: A method to utilize a digital library for comparison—A digital library is generated from images such as clinical images, phantom thermal images, or surface infrared images of any body part in order to store relevant thermal and geometric data for future comparison with infrared imaging screening. The library is where information or data is stored in electronic or other media forms. When images of a body part are added to the digital library, geometric and thermal identifiers are generated. The geometric identifiers of images of a body part are compared with the geometric identifiers of images found in the digital library.
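A toy sketch of this comparison step is shown below: each library entry carries a few geometric identifiers and thermal identifiers, the candidate case is matched to its nearest geometric neighbor (normalized Euclidean distance), and the thermal identifiers are then compared against that match. The field names, the distance measure and the example values are illustrative assumptions rather than a prescribed library schema.

```python
import numpy as np

GEOM_KEYS = ("volume_cc", "circumference_cm", "nipple_to_chest_cm")
THERMAL_KEYS = ("t_max_c", "t_min_c", "max_gradient_c_per_cm")

def nearest_geometric_match(candidate, library):
    """Return the library entry whose geometric identifiers are closest to the candidate."""
    cand = np.array([candidate[k] for k in GEOM_KEYS], dtype=float)
    lib = np.array([[entry[k] for k in GEOM_KEYS] for entry in library], dtype=float)
    scale = lib.std(axis=0) + 1e-9            # normalise each identifier before comparing
    dists = np.linalg.norm((lib - cand) / scale, axis=1)
    return library[int(np.argmin(dists))]

def thermal_difference(candidate, match):
    """Differences in thermal identifiers between the candidate and its geometric match."""
    return {k: candidate[k] - match[k] for k in THERMAL_KEYS}

# Illustrative two-entry library and a candidate case.
library = [
    {"volume_cc": 550, "circumference_cm": 38, "nipple_to_chest_cm": 9,
     "t_max_c": 34.1, "t_min_c": 30.8, "max_gradient_c_per_cm": 0.4, "tumor_present": False},
    {"volume_cc": 760, "circumference_cm": 44, "nipple_to_chest_cm": 11,
     "t_max_c": 35.0, "t_min_c": 31.2, "max_gradient_c_per_cm": 1.1, "tumor_present": True},
]
candidate = {"volume_cc": 720, "circumference_cm": 43, "nipple_to_chest_cm": 10,
             "t_max_c": 34.8, "t_min_c": 31.0, "max_gradient_c_per_cm": 1.0}
match = nearest_geometric_match(candidate, library)
print(match["tumor_present"], thermal_difference(candidate, match))
```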
After a geometrically similar match is identified, the thermal identifiers are compared. The digital library may contain geometric identifiers, thermal identifiers, and infrared images from different orientations and distances, patient details, tumor identifiers including size, shape and histology, details regarding how the information was obtained, computer generated thermal images and their geometric identifiers, thermal identifiers and tumor identifiers. The geometric identifiers are identifiers that are related to the geometrical details of the body part. These libraries can be dynamic and trained with individual case studies. The tumor identifier may include information regarding whether a tumor is present or not, its size, shape, type of tumor and other tumor location details that enable location of the tumor within tissue. The thermal identifiers may include information on the tumor properties and how they affect the surface temperature profile. They may include information on maximum temperatures, minimum temperatures and the gradient throughout the tissue. In accordance with an aspect of the present disclosure, there is provided a breast cancer detection process using infrared images in which geometric and thermal identifiers are generated and compared with identifiers stored in a digital library to determine tumor characteristics. In an embodiment, a method for breast cancer detection includes:a. isolating the breast and obtaining thermal images using an infrared camera;b. comparing images to those in the digital library for geometrical identifiers including,looking for similarities in geometry, including but not limited to breast shape, breast size, breast circumference, distance from chest wall to nipple, volume, etc;c. comparing images to geometrically similar images in digital library for thermal identifiers including,utilizing thermal identification methods including but not limited to thermal contours, thermal profiles, statistical indicators, and gridlines;d. using thermal indicators to determine abnormality; ande. adding thermal images to digital library for future comparison. In accordance with an aspect of the present disclosure, there is provided a procedure to obtain clinical infrared images of the isolated breasts from multiple positions including:a. subjects either with or without breast cancer are recruited;b. subjects asked a series of questions about potential activities and factors that can influence thermal distribution of the breast;c. one breast is isolated and allowed to acclimate to room temperature;d. multiple images are taken around the circumference of the breast; ande. the process is repeated for the contralateral breast. In accordance with an aspect of the present disclosure, there is provided a procedure to generate geometric identifiers of the breast that contain relevant geometric information to define the shape, structure and topology of the breasts:a. isolating the breast and obtaining clinical images of multiple views including,clinical images obtained using methods including but not limited to infrared imaging, MRI, mammogram and x-ray;b. generating a 3D digital model using collected images;c. measuring multiple factors on the breast including but not limited to the circumference around the breast at multiple locations, the distance from the nipple to the chest wall, the size of the breast, the shape of the breast, horizontal and vertical dimensions of breast, etc.; andd. adding images to digital library for future comparison. 
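As one hedged example of computing such geometric identifiers from a digital breast model, the sketch below takes the model as a point cloud and derives an approximate volume, a cross-sectional circumference at a chosen height, and the chest-wall-to-nipple distance using a convex hull. The coordinate convention (chest wall at z = 0, nipple toward increasing z) and the use of scipy.spatial.ConvexHull are assumptions for illustration, not requirements of the method.

```python
import numpy as np
from scipy.spatial import ConvexHull

def geometric_identifiers(points_mm, slice_z_mm, slice_tol_mm=2.0):
    """Geometric identifiers from a digital breast model given as an (N, 3) point cloud.

    Assumed convention: z = 0 at the chest wall, z increasing toward the nipple,
    x and y spanning the cross-section, all coordinates in millimetres.
    """
    points_mm = np.asarray(points_mm, dtype=float)
    hull = ConvexHull(points_mm)
    volume_cc = hull.volume / 1000.0                    # mm^3 -> cm^3 (convex approximation)

    # Chest-wall-to-nipple distance: largest z in the cloud.
    nipple_to_chest_cm = points_mm[:, 2].max() / 10.0

    # Approximate circumference near the plane z = slice_z_mm: perimeter of the
    # 2-D convex hull (ConvexHull.area is the perimeter in 2-D) of the points
    # falling within +/- slice_tol_mm of that plane.
    band = points_mm[np.abs(points_mm[:, 2] - slice_z_mm) < slice_tol_mm][:, :2]
    circumference_cm = ConvexHull(band).area / 10.0 if len(band) >= 3 else float("nan")

    return {"volume_cc": volume_cc,
            "circumference_cm": circumference_cm,
            "nipple_to_chest_cm": nipple_to_chest_cm}
```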
In accordance with an aspect of the present disclosure, there is provided a procedure to generate thermal identifiers to identify regions of increased hyperthermia obtained from surface temperature information of a body part:a. multiple methods are used during post-processing to identify regions of interest and differentiate between vasculature and tumors, including but not limited to,i. gridlines including,1. gridlines including latitude and longitude drawn on the body part creating individual boxed regions on the body part,2. average temperature in each box calculated,3. temperatures differences between adjacent boxes greater than 0.3° C. are considered suspicious,4. additional lines drawn closer together in areas of increased hyperthermia to narrow region of interest,5. tumors or veins identified using increased areas of hyperthermia also known as hot spots, and6. veins and tumors differentiated,ii. profile lines including,1. lines drawn through regions of identified interest also known as regions of increased hyperthermia,2. the plots created correspond to temperature changes over distance along the profile lines,3. slope of the resulting plot lines measured and identified as abnormalities or normal thermal changes,4. thermal abnormalities are classified as a temperature difference between maximum and minimum temperatures higher than 0.2° C. within a range of 2 mm to 10 mm, a preferred range of 2 mm to 5 mm including,i. thermal abnormalities are classified as a tumor if the temperature difference between maximum and minimum temperatures is higher than 0.5° C. over a 5 mm to 40 mm range, a preferred range of 10 mm to 30 mm,ii. thermal abnormalities are classified as veins if the temperature difference between maximum and minimum temperatures is between 0.2° C. and 2.0° C. over a 1 mm to 5 mm,iii. if an abnormality is classified as both a vein and a tumor, it is classified as a tumor,iii. contours including,1. contours drawn around visible hot spots or temperature differences,2. heightened temperatures detected and identified as abnormalities,3. aspect ratios used to define regions of interest and differentiate veins and tumors; andb. abnormalities detected by gridding systems and profile lines are classified as thermal identifiers and added to digital library for future comparison. In accordance with an aspect of the present disclosure, there is provided a breast cancer detection process using infrared images and thermal images generated through numerical simulations in which a matching algorithm is used to determine the tumor characteristics:a. isolating the breast and obtain thermal images using an infrared camera;b. storing the individual images with image identifiers, including patient data, orientation of the camera and patient, geometrical identifiers;c. generating digital breast model;d. generating thermal images of the breast surface including,i. the digital model and initial tumor characteristics,ii. thermal simulation software. Alternatively, artificial intelligence algorithms including but not limited to neural networks and support vector machines can be used to generate the surface the temperature distributions; ande. using matching algorithm to determine tumor characteristics. In accordance with an aspect of the present disclosure, there is provided a procedure to generate a digital breast model from clinical or optical images of the breasts:a. 
a process to generate digital breast models from images including, but not limited to, digital photographs, infrared images, MRI images, ultrasound images, mammograms, either 2D or 3D, computed tomography scans, 3D scanners, laser scanners, depth sensors such as the Microsoft Kinect, or any other imaging modality or video capture from which the breast outline can be obtained;b. the digital breast model can be generated from the data obtained from the imaging modalities or by processing one or multiple individual images using techniques such as image filtering, edge detection, segmentation, intensity transformation, multiview reconstruction, photogrammetry, marching cubes, marching tetrahedrons or any combination of these methods, or any other method that results in a 3D representation of the breast. If desired, the digital breast model can include the internal breast structure such as lobules, blood vessels and milk ducts, where available. The resulting breast model can be used in its current state or modified to remove or add texture features either using a Computer Aided Design (CAD) software, a modeling software, or any software in which the model can be modified or smoothed to include new features;c. collected clinical images are formed into a 3D model through image analysis;one of the embodiments of image combination includes the following,1. remove artifacts in MRI images,2. segment the breast in the MRI images,3. images are stacked and a model is formed, and4. the model is smoothed to create a seamless digital model of the actual breast shape. In accordance with an aspect of the present disclosure, there is provided a procedure to generate thermal images using a digital breast model:a. using a digital model generated from clinical images;b. a mesh to divide the computational domain. The mesh can be generated by any software or procedure. It is desired that the minimum number of mesh elements is 1,000 for volumetric meshes and 100 for surface meshes. A better resolution can be achieved using at least 100,000 elements for volumetric and at least 5,000 for surface meshes. The quality of the elements in the resulting mesh must be within recommended values for accurate numerical computations. For example, the skewness of the mesh elements must be below 0.95, with preferred values below 0.7. Depending on the sophistication of the software used, the actual number of mesh elements can be smaller or larger;c. defining the governing equation for heat transfer in tissues. The governing equations can be analytical, empirical, semi-empirical or any combination and any number of these. Some examples are the Pennes Bioheat Equation, the Countercurrent, and the Jiji models. These governing equations may or may not take into account the effect of the vasculature on the temperature calculations. This effect can be included either using models for the vasculature, from clinical images or artificially generated using software such as Vascusynth;d. defining values of properties of the tissue and other thermal and biological factors; ande. defining boundary conditions. The surface of the breast is generally exposed to the still air, for which a convective boundary condition can be used. The value of the heat transfer coefficient considering a mixture of radiation, natural convection and evaporation from the skin is typically in the range of 5-25 W/m2-K; however any value can be used.
Another alternative is to include forced convection or natural convection by modeling the surroundings of the breast, a fixed initial surface temperature, radiation effects or any combination of these to account for the heat transfer between the model and its surroundings. For the other surfaces of the domain, any relevant boundary condition can be used, including but not limited to, fixed temperature, known heat flux, known temperature distribution, temperature distribution from experimental or analytical data, symmetry, insulated faces. These conditions can be either stationary or time dependent. In accordance with an aspect of the present disclosure, there is provided a procedure to utilize a matching algorithm to determine tumor characteristics:a. isolating the breast and obtaining thermal images using an infrared camera;b. generating a digital breast model;c. inputting tumor parameters, including but not limited to size, location, shape, aspect ratio, metabolic activity, blood perfusion. These values can be obtained from clinical images, patient data, or can be guessed as an initial value in an iterative procedure;d. generating thermal images of the breast surface using,i. a digital model and initial tumor characteristics,ii. thermal simulation software; alternatively, artificial intelligence algorithms including but not limited to neural networks and support vector machines can be used;e. selecting the region to be analyzed in the images and computing thermal identifiers. The thermal identifiers may include information on the tumor properties and how they affect the surface temperature profile. The thermal identifiers are used to characterize the thermal distribution and include but are not limited to information on maximum temperatures, minimum temperatures and the gradient throughout the tissue. The region to be analyzed can be further divided into subdivisions;f. comparing the infrared and computed thermal images using a cost function including but not limited to error, mean squared error, correlation, cross-entropy. The cost function is any mathematical function to measure discrepancies between the IR and computed thermal images;g. updating the value of the tumor characteristics in an iterative procedure using optimization methods including but not limited to, Levenberg-Marquardt, Gradient Descent, Newton, Steepest Descent, Particle Swarm Optimization, Simulated Annealing Method, Genetic Algorithms, Sequential Quadratic Programming;h. continuing the method until the cost function is below a predetermined threshold; andi. identifying the estimated tumor characteristics as the outcome from the algorithm. In accordance with an aspect of the present disclosure, there is provided a procedure to utilize wireless technology for remote, portable infrared imaging to identify suspected malignancy:a. a portable infrared imaging camera is used in specific orientations including but not limited to frontal, oblique views, downward looking, upward looking on the body part being imaged;b. transmitting these images to a processing center for further evaluation;c. images are processed using techniques described in Method A-Method D to identify suspected malignancy;d. further actions will be taken for further evaluation including,i. doctors consulted, andii. further imaging including but not limited to x-ray, mammography, IRI detection, ultrasound, MRI, CT scan, physical examination, etc.
In accordance with an aspect of the present disclosure, there is provided a procedure to monitor the usage and efficacy of chemotherapy and/or radiation for progression of treatment:
a. chemotherapy drugs or radiation are introduced into the growing tumor to shrink the mass;
b. cancer patients receive periodic infrared screening to observe the tumor's activity, including,
i. as treatment is given, the tumor should begin to shrink and its metabolic activity should reduce,
ii. if shrinkage is not observed, the treatment may be ineffective; and
c. the outcome is discussed with the consulting physician and treatment is altered accordingly.
The disclosure will be further illustrated with reference to the following specific examples. It is understood that these examples are given by way of illustration and are not meant to limit the disclosure or the claims to follow.
Example 1, Method A: One example of varying gridline patterns can be seen in FIG. 10. In FIG. 10A the grid is composed of latitudinal lines (parallels) and longitudinal lines (meridians) that match the contours of the breast; this grid is similar to the coordinate system used to locate points on the Earth's surface. The grid shown in FIG. 10B is composed of horizontal and vertical lines, similar to a Cartesian coordinate system. The grid can also be composed of oblique lines, circular segments, hyperbolic or parabolic lines, functions resulting from statistical fitting (quadratic, cubic, exponential, logarithmic, linear, etc.), or any combination of these. These lines can be matched to the breast outline. The points where the two types of lines intersect define the nodes of the grid. These nodes (for example A, B, C and D in FIG. 10A or A′, B′, C′ and D′ in FIG. 10B) define the regions (R1 or R1′) where the quantities of interest will be obtained.
Example 2, Method A: Another example gridding system, seen in FIG. 11, shows the importance of a finer mesh. The hot spot resulting from the tumor is on the left side of the breast and the hot spot resulting from a vein is on the right side of the breast. The grid system is drawn over the breast surface with horizontal (latitude) and vertical (longitude) lines. When a region of interest is identified, a finer meshing system is applied. The average temperature value in each square can be found to determine the significance of the hot spot. With a finer grid in specified regions of interest, the average temperature values will change. The temperature at the hot spot will be significantly higher than that of the surrounding tissue. Using a grid system can help identify potential regions of concern as well as differentiate between a tumor and a vein.
Example 3, Method A: As an example, the mean (Tmean), maximum (Tmax) and minimum (Tmin) temperatures and standard deviation (SD) were computed in the regions shown in FIG. 12A. The statistical parameters are shown only in six regions to illustrate the advantages of using the thermal grid. In order to conduct a complete analysis, all regions should be analyzed. In addition to such statistical parameters, other indicators include entropy, energy, histogram analysis, texture, variance, correlation, contrast, skew, kurtosis and any combination or product of these. FIG. 12B shows the values of these parameters. Region 1 is expected to have a high mean temperature relative to other regions; in addition, its temperature range is only 0.5° C. Region 2 is expected to have a mean temperature lower than Region 1; however, its mean temperature is 1.7° C. higher than Region 1. The temperature range in Region 2 is 2.7° C.
and the standard deviation is high, which indicates that a possible abnormality is present near Region 2. Region 3 also presents an elevated temperature, with a range of 1° C. Regions 4 and 5 present lower and more uniform temperatures than Regions 2 and 3. From the previous analysis, the possible abnormality is located in the vicinity of Regions 2 and 3. The density of lines can be increased in such regions to aid in the identification of abnormal temperatures. With smaller grid boxes, it will be easier to find the boundaries of the thermal abnormality in order to determine whether malignancy is a concern or stray vasculature is creating additional heat signatures. Although an example of the grid pattern is presented above, any other type of grid pattern can be used, and temperature calculations based on the temperature of the nodes, several pixels around the nodes, or the different regions identified by the grid patterns can be used to arrive at the thermal markers. The temperature elevation in a region over an unaffected region may be 3° C. or higher for an aggressive tumor or a tumor close to the surface, 1-3° C. for a smaller or deeper tumor, or between 0.1 and 1° C. for very small or very deep tumors. The terms small and large are somewhat qualitative: generally, small means less than 7 mm, medium between 0.7-2 cm, and large greater than 2 cm. These boundaries are not fixed and may be changed depending on the individual case and on the breast size. Tumor depths are also somewhat subjective: near surface is within around 10 mm, moderate is around 10 mm to 20 mm, and deep is beyond these values. These are also subject to variation depending on tumor location, breast size, etc., similar to the other classifications presented earlier.
Example 4, Method A: FIG. 13 shows the breast thermogram of an individual with breast cancer. Lines 1 and 2 are latitudinal lines; Lines 3 and 4 are longitudinal lines. FIG. 14A shows the temperature profiles for Lines 1 and 2; the filtered profiles for these lines are shown in FIG. 14B. The temperature of Line 1 starts from ~28° C. near the chest wall and decreases. After 1 cm, the profile flattens and shows a peak at around 5 cm; the temperature rise is ~0.5° C. with respect to the surrounding temperatures, which helps locate possible malignancy. The profile of Line 2 decreases from its maximum near the chest wall to its minimum at the tip of the breast. This profile indicates that no abnormality is observed along Line 2. FIG. 14C shows the temperature profiles for Lines 3 and 4; the filtered profiles for these lines are shown in FIG. 14D. The temperature along Line 3 increases continually and then decreases near 3 cm. This change in slope (temperature gradient) indicates the presence of an abnormality. The temperature of Line 4 is almost uniform with only slight variations, which indicates that no abnormalities are observed along Line 4. In summary, the temperature of Line 1 starts from ~28° C. near the chest wall and decreases; after 1 cm, the profile flattens and shows a peak at around 5 cm, with a temperature rise of ~0.5° C. with respect to the surrounding temperatures, which indicates possible malignancy. The profile of Line 2 decreases from its maximum near the chest wall to its minimum at the tip of the breast, which indicates that no abnormality is observed along Line 2.
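A minimal sketch of the gridline profile analysis of Example 4 follows: the temperature profile along one line is extracted, smoothed with a simple moving-average filter, and local peaks on the order of the ~0.5° C. rise discussed above are flagged. The filter width and the prominence threshold are illustrative assumptions; any filtering or peak-detection scheme could be substituted.

```python
# Minimal sketch: extract the temperature profile along one gridline of a
# thermogram, smooth it, and flag local temperature rises.
import numpy as np
from scipy.signal import find_peaks

def line_profile(temp_map, row):
    """Return the raw and smoothed temperature profile along a horizontal line."""
    raw = temp_map[row, :].astype(float)
    kernel = np.ones(7) / 7.0                      # simple moving-average filter
    smooth = np.convolve(raw, kernel, mode="same")
    return raw, smooth

def suspicious_peaks(smooth_profile, min_rise=0.3):
    """Flag local maxima at least `min_rise` deg C above their surroundings."""
    peaks, props = find_peaks(smooth_profile, prominence=min_rise)
    return peaks, props["prominences"]

if __name__ == "__main__":
    x = np.linspace(0, 8, 200)                     # distance from chest wall (cm)
    profile = 28.0 - 0.15 * x + 0.05 * np.random.randn(x.size)
    profile += 0.5 * np.exp(-((x - 5.0) ** 2) / 0.2)   # synthetic rise near 5 cm
    temp_map = np.tile(profile, (10, 1))
    raw, smooth = line_profile(temp_map, row=5)
    peaks, prominences = suspicious_peaks(smooth)
    print("peak positions (cm):", x[peaks], "rise (deg C):", prominences)
```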
Example 5, Method A: Both tumors and blood vessels cause local temperature rises. The aspect ratio D1/D2, where D1 is the largest dimension, is close to one, which indicates that the abnormality is likely caused by a tumor. The aspect ratio for possible tumors can be from 1 to 4. This value may be larger if the tumor is large, which will show up as a wider region in the normal direction, while blood vessels will not generally be wider than about 5 mm or 7 mm. The aspect ratio d1/d2 is ~15, which indicates that this feature is likely caused by a blood vessel. Aspect ratios larger than five (5) are indicative of blood vessels. In some cases, these values may be further refined based on the actual width of the enhanced thermal region, since tumors would be wider while blood vessels would be narrower. This needs to be further evaluated with consideration of DCIS, which may present an appearance somewhat similar to the effects of blood vessels. These results correlate with the tumor location obtained from the MRI images and the surface blood vessels from the MRI rendering shown in FIG. 15 for Subject 6. FIG. 16 shows temperature contours for the same subject. These temperature contours are warmer (red), more circular and closed in the region surrounding the tumor. In the region surrounding the blood vessel of interest, the contours have higher aspect ratios and are colder than in the region surrounding the tumor, which shows the potential of temperature contours to distinguish between possible tumors and blood vessels from infrared images. Thermal contours are also effective thermal markers of the cancer. If the contour plots show concentric regions that are indicative of a steep hill-type feature, the region may be suspect. If the gradient in this region is high, as discussed earlier, then it can be used as a further marker.
Example 6, Method B: A digital breast model was generated from MRI images of the breast and was used to generate and validate temperature distributions against clinical images. A succinct flowchart of the digital model process is seen in FIG. 17. The MRI study consisted of 178 images; the region containing the breast of interest was selected. Then, the tumor was measured and its location was stored for future steps. The tumor was modeled as a sphere with a diameter of 2.7 cm. A 3D median filter with dimensions (3, 3, 3) was applied. The outline of the breast was detected using a modified version of the Canny edge detector, which detected a continuous outline of the breast. Then, the breast was segmented by defining everything inside the breast outline as breast tissue and the region outside as the background. The breast surface was generated from the stack of MRI slices using the Marching Cubes algorithm, which results in a surface mesh composed of triangular elements. The resulting surface mesh was jagged and needed to be smoothed to represent the geometry of the breast more accurately. The surface mesh was smoothed using an algorithm that replaced the angle of a mesh face with the average angle of the neighboring faces, which is similar to applying an averaging filter to a 3D image. In the smoothed breast geometry, some regions of the mesh were further smoothed using the software Autodesk Recap Photo, applied only to the regions that needed it. The resulting surface mesh is seamless and accurate. The generated digital breast model is shown in FIG. 18. This model was used to compute the surface temperature.
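A minimal sketch of the digital-model pipeline of Example 6 using scipy and scikit-image is given below: the MRI stack is median-filtered, the breast is segmented, and a triangular surface mesh is extracted with the Marching Cubes algorithm. Simple Otsu thresholding stands in for the modified Canny outline detection, and Gaussian smoothing of the mask stands in for the face-angle averaging and Autodesk Recap Photo steps, so this is only an approximation of the described workflow.

```python
# Minimal sketch: MRI stack -> filtered volume -> breast mask -> surface mesh.
import numpy as np
from scipy import ndimage
from skimage import filters, measure

def breast_surface_from_mri(mri_volume, voxel_spacing=(1.0, 1.0, 1.0)):
    """mri_volume: 3D numpy array (slices stacked along axis 0)."""
    filtered = ndimage.median_filter(mri_volume, size=(3, 3, 3))   # 3D median filter
    mask = filtered > filters.threshold_otsu(filtered)             # segment breast tissue
    mask = ndimage.binary_fill_holes(mask)                         # close the outline
    smooth = ndimage.gaussian_filter(mask.astype(float), sigma=1.5)
    verts, faces, normals, _ = measure.marching_cubes(
        smooth, level=0.5, spacing=voxel_spacing)                  # triangular surface mesh
    return verts, faces, normals

if __name__ == "__main__":
    # Synthetic stand-in for an MRI stack: a noisy half-ellipsoid "breast".
    z, y, x = np.mgrid[0:60, 0:80, 0:80]
    phantom = ((z / 55.0) ** 2 + ((y - 40) / 35.0) ** 2 + ((x - 40) / 35.0) ** 2) < 1.0
    phantom = phantom.astype(float) + 0.05 * np.random.randn(*phantom.shape)
    verts, faces, _ = breast_surface_from_mri(phantom)
    print(verts.shape[0], "vertices,", faces.shape[0], "triangles")
```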
The commercial software ANSYS Fluent was used to predict the breast surface temperature profile. The solution is based on solving the underlying heat transfer equations with convective blood flow. Pennes' bioheat equation was used to account for the thermal interactions occurring within the breast and with the environment. Pennes' equation is given by:

ρt ct (∂Tt/∂t) = ∇·(kt ∇Tt) + ωb cb (Ta − Tt) + qm   (1)

where ρ, c and k are the density, specific heat and thermal conductivity, respectively. The subscripts t, b and a refer to tissue, blood and arteries, respectively, ω is the blood flow rate per unit tissue volume (perfusion rate in kg/m3-s) and qm is the metabolic activity within the tissue in W/m3. The computational domain used to compute the temperature distribution is shown in FIG. 19. The breast surfaces are subjected to a convection boundary condition, where values of the ambient temperature and heat transfer coefficient can be reasonably well estimated and entered in the software. The chest wall is considered to be at the core temperature of the body. To take into consideration the two blood perfusion and metabolic activity terms in the Pennes equation, these terms were defined as source terms in the software using User Defined Functions (UDFs). The UDFs were prepared to vary the location and size of the tumor without the need to mesh the tumor domain separately again. This offers flexibility if the position and size of the tumor is changed in the model because there is no need to re-mesh the domain; only the UDF is modified to account for the new tumor position and size.

TABLE 1. Values of the parameters used to compute the thermal images.
Parameter | Value | Unit
Thermal conductivity (k) | 0.42 | W m−1 K−1
Perfusion rate of healthy tissue (ωh) | 1.8 × 10−4 | s−1
Perfusion rate of tumor (ωt) | 9 × 10−3 | s−1
Metabolic activity of healthy tissue (qh) | 450 | W m−3
Metabolic activity of tumor (qt) | 6350 | W m−3
Temperature of arteries (Ta) | 37 | ° C.
Specific heat of blood (cb) | 3,840 | J kg−1 K−1
Density of blood (ρb) | 1,060 | kg m−3
Core temperature (Tc) | 37 | ° C.
Ambient temperature (T∞) | 21 | ° C.
Heat transfer coefficient | 10.5 | W m−2 K−1

The metabolic activity of the tumor was obtained using the formula developed by Pennes, where dt is the tumor diameter:

qt = 3.27 × 10^6 / (468.5 ln(100 dt) + 50)   (2)

where qt is the metabolic activity in W/m3 and dt = 0.027 m (2.7 cm) is the tumor diameter. Using (2) results in a value of the metabolic activity of 6,350 W/m3. Using the parameters listed in Table 1, the thermal images were obtained. FIG. 20 shows the comparison of the clinical infrared images and the computed thermal images for three different positions. The digital model accurately computed the temperature distribution and predicted the thermal trends observed in the clinical infrared images. The median temperature in each of the regions was computed, which allowed the effect of small blood vessels to be filtered out. The absolute error between the clinical and computed temperature distributions was computed using Eq. (3):

E = Σ |Texp,i − Tnum,i|, i = 1, …, nr × nc   (3)

where Texp and Tnum are the clinical and numerical temperature vectors, respectively, which contain all the individual temperature values of each of the individual regions. Table 2 lists the value of E, as well as the value of the mean absolute error per region.

TABLE 2.
E [° C.] | E/(nr × nc) [° C./region]
17.14 | 0.12

The mean absolute error per individual region is 0.12° C., which indicates that the model accurately captures the temperature distribution observed in the clinical images; therefore, the modeling approach is validated and can be used confidently to compute the temperature distribution for additional cases.
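As a worked illustration of the Pennes equation (1) with the Table 1 parameters, the sketch below solves a one-dimensional steady-state version of the equation between the chest wall (held at core temperature) and a convective skin surface. The 1D geometry, depth, and tumor placement are illustrative assumptions; the study itself solves the 3D problem in ANSYS Fluent.

```python
# Minimal 1D steady-state sketch of the Pennes bioheat equation (1):
# k*T'' + w*rho_b*c_b*(Ta - T) + q = 0, chest wall fixed, convective skin.
import numpy as np

L = 0.05                      # breast depth from chest wall to skin (m), assumed
N = 201                       # grid nodes
dx = L / (N - 1)
k = 0.42                      # W/m-K
rho_b, c_b = 1060.0, 3840.0   # blood density and specific heat
Ta, Tc, Tinf = 37.0, 37.0, 21.0
h = 10.5                      # W/m^2-K
w_h, q_h = 1.8e-4, 450.0      # healthy perfusion (1/s) and metabolism (W/m^3)
w_t, q_t = 9e-3, 6350.0       # tumor perfusion and metabolism from Eq. (2)

x = np.linspace(0.0, L, N)
tumor = (x > 0.02) & (x < 0.034)          # assumed tumor location along the depth
w = np.where(tumor, w_t, w_h)
q = np.where(tumor, q_t, q_h)

# Assemble the tridiagonal system A T = b for the discretized equation.
A = np.zeros((N, N))
b = np.zeros(N)
A[0, 0], b[0] = 1.0, Tc                   # chest wall at core temperature
for i in range(1, N - 1):
    A[i, i - 1] = A[i, i + 1] = k / dx**2
    A[i, i] = -2.0 * k / dx**2 - w[i] * rho_b * c_b
    b[i] = -q[i] - w[i] * rho_b * c_b * Ta
# Convective skin boundary: -k*(T[-1]-T[-2])/dx = h*(T[-1]-Tinf)
A[-1, -2] = -k / dx
A[-1, -1] = k / dx + h
b[-1] = h * Tinf
T = np.linalg.solve(A, b)
print("skin temperature: %.2f deg C" % T[-1])
```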
Example 7, Method C: The method described was used to simultaneously estimate five parameters, namely the thermal conductivity of healthy tissue kh, the tumor size (diameter) d, and the tumor position within the breast (xt, yt, zt), in a breast with cancer. The parameters were estimated using a digital breast model (d) generated from a sequence of MRI images (c). A set of initial parameters (a) was used as the current value of the parameters (b) to numerically solve (e) the governing equations to generate surface temperature profiles (f). The generated profiles were processed in Matlab (g, j). The temperature of various regions was extracted in both the numerical and target images (h, k). Both temperatures were compared using the Levenberg-Marquardt algorithm (l); if the convergence criterion (m) is not met, the parameters are updated (o) and the process is repeated. If the convergence criterion is met, the current value of the parameters is accepted as the estimated value of the parameters of interest. This process is outlined in FIG. 25. A digital model of the female breast in prone position was generated from sequential Magnetic Resonance Imaging (MRI) images. The MRI images were individually filtered to reduce noise. Then, the outline of the breast was identified and the breast was segmented in every slice. The sequential segmented images were stacked, which resulted in a digital breast model. This model was smoothed to generate a seamless and continuous breast model. The digital breast model was generated from the right breast of a 68-year-old woman with a grade III tumor and a diagnosis of invasive ductal carcinoma. The tumor was located at 12 o'clock, 2 cm from the nipple. The tumor volume was measured; although its shape is irregular, its volume was used to model a spherical tumor with a diameter of 2.7 cm and equivalent volume. Using Eq. (2) results in a tumor metabolic activity of Qt = 6350 W/m3. Pennes' bioheat equation (1) was used to account for thermal interactions within the breast and with the environment. The Pennes bioheat equation is subject to the following boundary conditions (FIG. 24): convection between the surface of the breast and the environment at Face F, Eq. (4); constant temperature at the chest wall temperature Tc at Face E, Eq. (5); and no heat flux across Faces A, B, C and D, Eq. (6). In our computational model, we consider only two different tissues, gland (healthy) and tumor.

−k ∂T/∂n |at F = h (Ts − T∞)   (4)
T |at E = Tc   (5)
∂T/∂n |at A, B, C, D = 0   (6)

The software ANSYS Fluent was used to numerically solve Pennes' bioheat equation (1) in the digital breast model. This software uses the finite volume method to discretize the governing equation and provide a numerical solution for the problem. The perfusion and metabolic generation terms in (1) were introduced as source terms through User Defined Functions (UDFs). The UDFs allow the tumor position and size to be varied without the need to recalculate the mesh in the computational domain, given a proper mesh; a mesh with 3.5 million elements was created for this purpose.
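A minimal sketch of the role played by the User Defined Functions described above: the perfusion and metabolic source terms of Eq. (1) are evaluated at a point, switching to tumor values inside a sphere of diameter d centered at (xt, yt, zt), so the tumor can be moved or resized without re-meshing. The tumor metabolic activity follows Eq. (2) and the remaining values follow Table 1 above and Table 3 below; the actual UDFs are compiled routines executed by ANSYS Fluent, not Python code.

```python
# Minimal sketch: evaluate the Pennes source term w*rho_b*c_b*(Ta - T) + q
# at one mesh point, with tumor values inside a sphere of diameter d.
import numpy as np

RHO_B, C_B, T_A = 1060.0, 3840.0, 37.0
W_H, Q_H = 1.8e-4, 450.0          # healthy perfusion (1/s) and metabolism (W/m^3)
W_T = 9e-3                        # tumor perfusion (1/s)

def tumor_metabolism(d_t):
    """Eq. (2): tumor metabolic activity (W/m^3) from its diameter d_t in meters."""
    return 3.27e6 / (468.5 * np.log(100.0 * d_t) + 50.0)

def source_terms(point, T_local, tumor_center, d_t):
    """Return the Pennes source term at one point of the computational domain."""
    inside = np.linalg.norm(np.asarray(point) - np.asarray(tumor_center)) <= d_t / 2.0
    w = W_T if inside else W_H
    q = tumor_metabolism(d_t) if inside else Q_H
    return w * RHO_B * C_B * (T_A - T_local) + q

if __name__ == "__main__":
    center, d = (0.10, 0.12, 0.11), 0.027             # 2.7 cm tumor, assumed center
    print("q_t = %.0f W/m^3" % tumor_metabolism(d))    # close to 6350, as in the text
    print("source at tumor center:", source_terms((0.10, 0.12, 0.11), 34.0, center, d))
```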
The value of the constant parameters used to simulate the temperature distribution is shown in Table 3.

TABLE 3. Value of the constant parameters used in the simulations.
Property | Value | Unit
Perfusion rate of healthy tissue (ωh) | 1.8 × 10−4 | s−1
Perfusion rate of tumor (ωt) | 9 × 10−3 | s−1
Metabolic activity of healthy tissue (Qh) | 450 | W m−3
Temperature of arteries (Ta) | 37 | ° C.
Specific heat of blood (cb) | 3,840 | J kg−1 K−1
Density of blood (ρb) | 1,060 | kg m−3
Core temperature (Tc) | 37 | ° C.
Ambient temperature (T∞) | 21 | ° C.
Heat transfer coefficient (h) | 13.5 | W m−2 K−1

A text file containing initial values of the five parameters was prepared to start the process of parameter estimation. The initial values were positive and in the range of values shown in Table 4. The range of values for the thermal conductivity was obtained from data reported in the literature; the minimum is for a completely fatty breast and the maximum is for an extremely dense breast. In the case of the tumor diameter, the minimum value (9.9 mm) results in the maximum metabolic activity reported by Gautherie; the maximum is a 7 cm tumor, which would be easily palpable. For the case of the tumor location, its center must lie within the computational domain.

TABLE 4. Range of values for the parameters to estimate.
Parameter | Minimum Value | Maximum Value | Units
kh | 0.15 | 0.8 | W m−1 K−1
d | 0.0099 | 0.07 | m
xt | 0.06 | 0.16 | m
yt | 0.08 | 0.16 | m
zt | 0.08 | 0.15 | m

The surface temperature for each set of parameters was obtained through numerical simulations using the generated digital breast model. The technique was tested by estimating the parameter values from the target images using the technique described. In order to obtain the temperature along the entire breast surface, eight different views around the breast model were generated, each separated by 45° clockwise in the XZ plane and oriented at 25° to the Y axis. The surface temperature distribution on the eight different views was exported as an image from ANSYS Fluent. The resulting images were read in MATLAB®. First, the region of the breast was isolated and the image intensity values were transformed to temperature values using an in-house code in each of the eight images. Only the central part of each breast image was analyzed to avoid analyzing the same region in more than one image. Then, a rectangular region of interest (ROI) was defined in each of the images as shown in FIG. 25. The ROI in each image was divided into 12 rows by 6 columns, resulting in nr × nc sub-regions, Ri, in each of the eight images. The mean temperature of the pixels in each of the sub-regions was computed and used to represent the temperature of each sub-region. An arbitrary location was selected as the origin and all breast and tumor coordinates were adjusted accordingly. The initial tumor location was placed inside the central region of the breast. We used the Levenberg-Marquardt algorithm to estimate the set of parameters β defined as:

β = [kh d xt yt zt]^T   (7)

The algorithm is used to minimize the objective function defined by:

S(β) = [Texp − T(β)]^T [Texp − T(β)]   (8)

The objective function (8) is the mean squared error between the experimental (target) temperature Texp and the current estimated temperature distribution T(β). The algorithm uses a damping parameter, μ, whose value changes at every iteration and had an initial value of 0.001.
A central difference scheme was used to compute each element of the Jacobian matrix, a matrix formed by the derivatives with respect to the parameters:

∂Ti(β)/∂βj ≈ [Ti(βj + εβj) − Ti(βj − εβj)] / (2εβj)   (9)

Once the Jacobian matrix is computed, the parameters are updated according to the equation:

βk+1 = βk + [(Jk)^T Jk + μk Ωk]^−1 (Jk)^T [Texp − T(βk)]   (10)

where the subscript k refers to the current value of the parameters, and the subscript k+1 refers to the updated value of the parameters. The matrix Ω is diagonal and its aim is to damp oscillations and instabilities by making its components large compared to those of (Jk)^T Jk if needed. The matrix Ω is defined as:

Ω = diag(J^T J)   (11)

The algorithm stops when at least one of three conditions is met. In the first condition (12), the algorithm runs for a maximum of kmax iterations, where kmax = 50 is used. In the second condition (13), the algorithm stops and accepts the current parameters as the best estimates when the objective function is lower than a small value ε1, where ε1 = 10−3. The last condition (14) implies that the algorithm stops when the norm of the difference between the current and updated parameters is lower than a value ε2, where ε2 = 10−10. In this case, the updated parameters are accepted as the best estimates of the actual values.

k > kmax   (12)
S(βk+1) < ε1   (13)
‖βk+1 − βk‖ < ε2   (14)

Using volumetric images of the breast with the tumor helps to visualize the location of the tumor within the breast. FIG. 27 shows the actual tumor within the breast, the estimated tumor, and the image registration between the actual and estimated tumors, where it is observed that the estimates closely match the actual location and size. The volumetric images with the estimated tumor can be useful in a clinical setting to aid in locating the tumor for biopsy.
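A minimal sketch of the Levenberg-Marquardt loop of Eqs. (7)-(14) is given below: a central-difference Jacobian (9), the damped update (10) with Ω = diag(J^T J) (11), and the three stopping conditions (12)-(14). The forward model simulate() used here is a synthetic stand-in for the ANSYS Fluent solution of the Pennes equation on the digital breast model, and the damping schedule is an illustrative assumption.

```python
# Minimal Levenberg-Marquardt sketch for estimating beta = [kh, d, xt, yt, zt].
import numpy as np

def jacobian(simulate, beta, eps=1e-3):
    """Central differences, Eq. (9): dT_i / d(beta_j)."""
    T0 = simulate(beta)
    J = np.zeros((T0.size, beta.size))
    for j in range(beta.size):
        dp, dm = beta.copy(), beta.copy()
        dp[j] += eps * beta[j]
        dm[j] -= eps * beta[j]
        J[:, j] = (simulate(dp) - simulate(dm)) / (2.0 * eps * beta[j])
    return J

def levenberg_marquardt(simulate, T_exp, beta0, mu=1e-3, k_max=50,
                        eps1=1e-3, eps2=1e-10):
    beta = beta0.astype(float)
    for k in range(k_max):                                # condition (12)
        r = T_exp - simulate(beta)
        if r @ r < eps1:                                  # condition (13), S(beta) = r^T r
            break
        J = jacobian(simulate, beta)
        Omega = np.diag(np.diag(J.T @ J))                 # Eq. (11)
        step = np.linalg.solve(J.T @ J + mu * Omega, J.T @ r)
        beta_new = beta + step                            # Eq. (10)
        if np.linalg.norm(beta_new - beta) < eps2:        # condition (14)
            beta = beta_new
            break
        beta, mu = beta_new, mu * 0.5                     # simple damping schedule
    return beta

if __name__ == "__main__":
    true_beta = np.array([0.42, 0.027, 0.10, 0.12, 0.11])   # [kh, d, xt, yt, zt]
    grid = np.linspace(0.0, 1.0, 72)                         # stand-in for 12 x 6 regions
    simulate = lambda b: (30.0 + b[0] * np.sin(8 * grid) + 50 * b[1] * grid
                          + b[2] * grid**2 + b[3] * np.cos(5 * grid) + b[4] * grid**3)
    T_target = simulate(true_beta)
    est = levenberg_marquardt(simulate, T_target, true_beta * 1.3)
    print("estimated parameters:", np.round(est, 4))
```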
Example 8, Method D: The geometrical parameters that can be used include, but are not limited to, the width (W), height (H) and length (L) of the breast as seen from the images, and any combination or function of these. A minimum of two images obtained from different views is necessary to categorize the breast. It is preferred to use images looking at the breast from the head (axial) and images looking at the breast from the side (sagittal), but any combination and number of images in any orientation can be used. The geometric parameters are measured from the base of the breast, which is a plane parallel to a coronal plane. The base of the breast is identified as a plane parallel to the imaging table at the surface of the chest contacting the imaging table; it can be located at any distance from the imaging table, and one example is that the base plane is coplanar with the bottom face of the imaging table. A rectangle surrounding the breast can be defined in any of the views to help in defining further indicators of the breast shape. The area defined by each of the rectangles is the maximum area that a breast with the same base and H dimensions can occupy in a scene. By computing the ratio of the actual area of the breast to the area of the rectangle, a fullness indicator is obtained. The fullness indicator, together with the other geometric identifiers, is used to geometrically classify the breasts. An example of how to obtain the geometric identifiers from breast images is shown in FIG. 28. From an axial view of the breast, the following parameters can be defined using equations (15) and (16). From a sagittal view of the breast, the following parameters can be defined using equations (17) and (18). Combining the information from the two views defines equation (19).

AAR = W/H   (15)
AF = (Breast area in view)/(W·H)   (16)
SAR = L/H   (17)
AF = (Breast area in view)/(L·H)   (18)
CAR = W/L   (19)

Any combination, function or power of these parameters can be used instead of those previously described. Other parameters that can be used to describe the breast shape and size can be obtained by mapping, fitting or matching the breast outline in any of the views to any algebraic, trigonometric, exponential or logarithmic function and providing the relevant parameters found. Although various embodiments have been depicted and described in detail herein, it will be apparent to those skilled in the relevant art that various modifications, additions, substitutions, and the like can be made without departing from the spirit of the disclosure and these are therefore considered to be within the scope of the disclosure as defined in the claims which follow. | 87,311
11861835 | DETAILED DESCRIPTION The present invention generally relates to methods and systems for automatic hemorrhage expansion detection from head CT (computed tomography) images. Embodiments of the present invention are described herein to give a visual understanding of such methods and systems. A digital image is often composed of digital representations of one or more objects (or shapes). The digital representation of an object is often described herein in terms of identifying and manipulating the objects. Such manipulations are virtual manipulations accomplished in the memory or other circuitry/hardware of a computer system. Accordingly, it is to be understood that embodiments of the present invention may be performed within a computer system using data stored within the computer system. Further, reference herein to pixels of an image may refer equally to voxels of an image and vice versa. Embodiments described herein provide for the automatic assessment of expansion of hemorrhages and other abnormalities. The expansion of hemorrhages is clinically referred to as hemorrhage expansion. Embodiments described herein apply hemorrhage segmentation systems to effectively differentiate pathological changes between a baseline input medical image and a follow-up input medical image. The segmentation results are combined and features are extracted from the baseline input medical image and the follow-up input medical image based on the combined segmentation results. A trained machine learning based classifier network is applied to assess expansion of the hemorrhage based on the extracted features. Advantageously, embodiments described herein provide for the automatic assessment of expansion of hemorrhages with higher accuracy as compared with conventional approaches. FIG. 1 shows a method 100 for assessing expansion of an abnormality, in accordance with one or more embodiments. FIG. 2 shows a workflow 200 for assessing expansion of a hemorrhage, in accordance with one or more embodiments. FIG. 1 and FIG. 2 will be described together. The steps of method 100 may be performed by one or more suitable computing devices, such as, e.g., computer 702 of FIG. 7. At step 102 of FIG. 1, a first input medical image of a patient depicting an abnormality at a first time and a second input medical image of the patient depicting the abnormality at a second time are received. In one embodiment, as in workflow 200 of FIG. 2, the abnormality is a hemorrhage. However, the abnormality may be any other abnormality of the patient, such as, e.g., lesions, nodules, and other abnormalities where tissue deformation and artifact are involved. The first input medical image may be a baseline input medical image of the abnormality and the second input medical image may be a follow-up input medical image of the abnormality. For example, as shown in workflow 200 of FIG. 2, the first input medical image may be baseline scan 202 of the head of the patient and the second input medical image may be follow-up scan 204 of the head of the patient. In one embodiment, the first input medical image and/or the second input medical image are CT images. However, the first input medical image and/or the second input medical image may comprise any other suitable modality, such as, e.g., MRI (magnetic resonance imaging), ultrasound, x-ray, or any other medical imaging modality or combinations of medical imaging modalities.
The first input medical image and/or the second input medical image may be 2D (two dimensional) images and/or 3D (three dimensional) volumes, and may comprise a single input medical image or a plurality of input medical images. In one embodiment, the first input medical image and/or the second input medical image comprise 2.5D (2D plus time) images. The first input medical image and/or the second input medical image may be received directly from an image acquisition device, such as, e.g., a CT scanner, as the medical images are acquired, or can be received by loading previously acquired medical images from a storage or memory of a computer system or receiving medical images that have been transmitted from a remote computer system. At step104ofFIG.1, the second input medical image is registered with the first input medical image. The registration spatially aligns the first input medical image and the second input medical image. In one example, baseline scan202and follow-up scan204in workflow200ofFIG.2are spatially registered at block206to generate an aligned image208of follow-up scan204. The second input medical image may be registered with the first input medical image using any suitable approach, such as, e.g., known rigid registration or linear registration techniques. At step106ofFIG.1, the abnormality is segmented from a) the first input medical image to generate a first segmentation map and b) the registered second input medical image to generate a second segmentation map. In one example, in workflow200ofFIG.2, hemorrhages are segmented from baseline scan202to generate bleeding map212and hemorrhages are segmented from aligned image208of follow-up scan204to generate bleeding map216. Bleeding map212and bleeding map216in workflow200may be binary segmentation maps where, for example, a voxel (or pixel) intensity value of 1 indicates that the abnormality is present at that voxel and a voxel intensity value of 0 indicates that the abnormality is not present at that voxel. In one embodiment, the segmentation is performed using a trained machine learning based segmentation network. The trained machine learning based segmentation network may be implemented using a U-Net, a Dense U-NET, or any other suitable machine learning based architecture. The trained machine learning based segmentation network is trained to segment the abnormality from medical images during a prior offline or training stage using ground truth annotated maps. Once trained, the trained machine learning based segmentation network is applied during an online or testing stage (e.g., at step106ofFIG.1). At step108ofFIG.1, the first segmentation map and the second segmentation map are combined to generate a combined map. For example, bleeding map212and bleeding map216in workflow200ofFIG.2are combined to generate an attention map218. In one embodiment, the first segmentation map and the second segmentation map are combined by applying a voxelwise (or pixelwise) OR operation to the first segmentation map and the second segmentation map such that, for example, a voxel value of 1 at corresponding voxels in either the first segmentation map or the second segmentation map results in a voxel value of 1 at that voxel in the combined map, and a voxel value of 0 otherwise. Other approaches for combining the first segmentation map and the second segmentation map are also contemplated. At step110ofFIG.1, features are extracted from the first input medical image and the registered second input medical image based on the combined map. 
The features may be extracted from the first input medical image and the registered second input medical image based on the combined map using any suitable approach. The combined map identifies specific regions where the abnormality is located in either the first segmentation map or the second segmentation map, thereby enabling features to be extracted from the first input medical image and the registered second input medical image with a focus on the specific regions identified by the combined map. In one embodiment, the features are extracted by first generating an input image based on the first input medical image, the registered second input medical image, and the combined map. The input image may be a 3-channel input image comprising the first input medical image, the registered second input medical image, and the combined map. For example, in workflow 200 of FIG. 2, a 3-channel input volume is constructed at block 220. 2D in-plane features are then extracted from the 3-channel input image. For example, in workflow 200 of FIG. 2, 2D in-plane features are extracted from the 3-channel input volume by 2D in-plane feature extractor 222. The 2D in-plane features comprise latent features extracted from each 2D slice of the 3-channel input image. The 3-channel input image is used as an attention map to focus the extraction of 2D in-plane features on regions identified in the combined map. The 2D in-plane features may be extracted using any suitable 2D machine learning based segmentation network, such as, e.g., a pre-trained 2D segmentation network or a Res-Net32/Res-Net50 network pretrained with public datasets (e.g., ImageNet). Sequential out-of-plane features are then extracted from the 2D in-plane features. For example, in workflow 200 of FIG. 2, sequential out-of-plane features are extracted from the 2D in-plane features by sequential out-of-plane feature extractor 224. The sequential out-of-plane features model the 3D context of the 3-channel input image. The sequential out-of-plane features may be extracted using any suitable sequential out-of-plane feature extractor trained to learn the relationship between the 2D in-plane features and the sequential out-of-plane features. The sequential out-of-plane feature extractor may be implemented using RNNs (recurrent neural networks) with LSTM (long short-term memory), BGRUs (bidirectional gated recurrent units), or any other suitable machine learning based network. At step 112 of FIG. 1, expansion of the abnormality is assessed based on the extracted features using a trained machine learning based network. The trained machine learning based network may be any suitable trained machine learning based classifier network. The trained machine learning based classifier network receives as input the extracted sequential out-of-plane features and generates an expansion score. For example, in workflow 200 of FIG. 2, expansion of the hemorrhage is assessed by HE (hemorrhage expansion) classifier 226 based on the sequential out-of-plane features to determine an HE score 228. The trained machine learning based classifier network first estimates a global latent feature vector from the extracted sequential out-of-plane features by max-pooling, global average pooling, or any other suitable pooling method. The expansion score is then predicted based on the global latent feature vector by fully-connected layers or fully-convolutional blocks.
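A minimal PyTorch sketch of the feature-extraction and classification chain described above follows: each slice of the 3-channel input (baseline, registered follow-up, combined map) passes through a small 2D encoder, a bidirectional GRU models the out-of-plane sequence, the sequence is max-pooled into a global latent vector, and fully-connected layers produce the expansion score. The layer sizes and the toy encoder are illustrative assumptions; the disclosure allows pre-trained ResNet encoders and LSTM/BGRU extractors instead.

```python
# Minimal sketch of the longitudinal hemorrhage-expansion pipeline.
import torch
import torch.nn as nn

class HemorrhageExpansionNet(nn.Module):
    def __init__(self, feat_dim=64, hidden=64):
        super().__init__()
        self.encoder2d = nn.Sequential(              # per-slice in-plane features
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Sequential(nn.Linear(2 * hidden, 32), nn.ReLU(),
                                  nn.Linear(32, 1))

    def forward(self, volume):
        # volume: (batch, slices, 3, H, W)
        b, s, c, h, w = volume.shape
        slice_feats = self.encoder2d(volume.view(b * s, c, h, w)).view(b, s, -1)
        seq_feats, _ = self.gru(slice_feats)          # sequential out-of-plane features
        latent = seq_feats.max(dim=1).values          # max-pool over slices
        return torch.sigmoid(self.head(latent))       # expansion score in [0, 1]

if __name__ == "__main__":
    baseline = torch.rand(1, 24, 1, 64, 64)
    followup = torch.rand(1, 24, 1, 64, 64)
    combined = ((baseline > 0.5) | (followup > 0.5)).float()   # voxelwise OR map
    x = torch.cat([baseline, followup, combined], dim=2)       # 3 channels per slice
    print("HE score:", HemorrhageExpansionNet()(x).item())
```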
The expansion score represents the likelihood of expansion of the abnormality between the first input medical image and the second input medical image. The expansion score may be compared with one or more threshold values to provide final results (e.g., expansion/no expansion or expansion/no expansion/uncertain). The trained machine learning based classifier network is trained during a prior offline or training stage using annotated pairs of training images. The pairs of training images may be annotated as being an expansion where the pairs of training images depict, for example, at least a 33% increase in volume of the abnormality. Any other threshold increase in volume of the abnormality may be selected for annotating the training images as depicting an expansion. Once trained, the trained machine learning based classifier network is applied during an online or testing stage (e.g., at step 112 of FIG. 1). At step 114 of FIG. 1, results of the assessment are output. For example, the results of the assessment can be output by displaying the results of the assessment on a display device of a computer system, storing the results of the assessment on a memory or storage of a computer system, or by transmitting the results of the assessment to a remote computer system. Advantageously, embodiments described herein model longitudinal image features in medical images acquired at different timepoints to thereby improve performance. Since the machine learning based networks for extracting 2D in-plane features and sequential out-of-plane features can be trained with 2D image slices to model 3D longitudinal radiological features, fewer 3D training images are required as compared to conventional systems, thus reducing the cost of data acquisition. Further, embodiments described herein may exploit existing pretrained machine learning based networks, resulting in faster training convergence and overfitting reduction while reducing development costs. In one embodiment, at least some of the machine learning based networks utilized in method 100 may be jointly trained. For example, a first trained machine learning based feature extraction network may be utilized for 2D in-plane feature extraction (at step 110), a second trained machine learning based feature extraction network may be utilized for sequential out-of-plane feature extraction (at step 110), and the first trained machine learning based feature extraction network, the second trained machine learning based feature extraction network, and the trained machine learning based network (utilized at step 112) may be jointly trained using an optimizer such as, e.g., Adam. FIG. 3 shows results of an assessment of expansion of a hemorrhage determined in accordance with one or more embodiments. First input medical image 302 shows a baseline scan of the head of a patient depicting hemorrhages at a first time and second input medical image 304 shows a follow-up scan of the head of the patient depicting hemorrhages at a second time. As shown in FIG. 3, first input medical image 302 was manually assessed to have a GT (ground truth) hemorrhage volume of 9.9 ml (milliliters) while second input medical image 304 was manually assessed to have a GT hemorrhage volume of 17.3 ml. First input medical image 302 and second input medical image 304 were assessed in accordance with embodiments described herein to have an HE score of 0.9756.
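A minimal sketch of the two decision rules mentioned above: labelling a training pair as an expansion when the segmented volume grows by at least 33%, and mapping a predicted expansion score to expansion/uncertain/no expansion with two thresholds. The threshold values 0.3 and 0.7 are illustrative assumptions, not values from the disclosure.

```python
# Minimal sketch: volume-based training label and score thresholding.
import numpy as np

def volume_ml(mask, voxel_volume_mm3):
    """Hemorrhage volume in milliliters from a binary segmentation map."""
    return mask.sum() * voxel_volume_mm3 / 1000.0

def expansion_label(baseline_mask, followup_mask, voxel_volume_mm3, rel_increase=0.33):
    v0 = volume_ml(baseline_mask, voxel_volume_mm3)
    v1 = volume_ml(followup_mask, voxel_volume_mm3)
    return v1 >= (1.0 + rel_increase) * v0

def classify_score(score, low=0.3, high=0.7):
    if score >= high:
        return "expansion"
    if score <= low:
        return "no expansion"
    return "uncertain"

if __name__ == "__main__":
    base = np.zeros((32, 64, 64), dtype=bool); base[10:20, 20:40, 20:40] = True
    follow = np.zeros_like(base); follow[8:22, 18:42, 18:42] = True
    print(expansion_label(base, follow, voxel_volume_mm3=0.5))   # True
    print(classify_score(0.9756))                                # "expansion"
```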
FIG.4shows a table400comparing a conventional segmentation based detection system and a longitudinal detection network in accordance with embodiments described herein. Table400compared the AUC (area under curve), SEN (sensitivity), SPC (specificity), precision, recall, and F1-score. Embodiments described herein are described with respect to the claimed systems as well as with respect to the claimed methods. Features, advantages or alternative embodiments herein can be assigned to the other claimed objects and vice versa. In other words, claims for the systems can be improved with features described or claimed in the context of the methods. In this case, the functional features of the method are embodied by objective units of the providing system. Furthermore, certain embodiments described herein are described with respect to methods and systems utilizing trained machine learning based networks (or models), as well as with respect to methods and systems for training machine learning based networks. Features, advantages or alternative embodiments herein can be assigned to the other claimed objects and vice versa. In other words, claims for methods and systems for training a machine learning based network can be improved with features described or claimed in context of the methods and systems for utilizing a trained machine learning based network, and vice versa. In particular, the trained machine learning based networks applied in embodiments described herein can be adapted by the methods and systems for training the machine learning based networks. Furthermore, the input data of the trained machine learning based network can comprise advantageous features and embodiments of the training input data, and vice versa. Furthermore, the output data of the trained machine learning based network can comprise advantageous features and embodiments of the output training data, and vice versa. In general, a trained machine learning based network mimics cognitive functions that humans associate with other human minds. In particular, by training based on training data, the trained machine learning based network is able to adapt to new circumstances and to detect and extrapolate patterns. In general, parameters of a machine learning based network can be adapted by means of training. In particular, supervised training, semi-supervised training, unsupervised training, reinforcement learning and/or active learning can be used. Furthermore, representation learning (an alternative term is “feature learning”) can be used. In particular, the parameters of the trained machine learning based network can be adapted iteratively by several steps of training. In particular, a trained machine learning based network can comprise a neural network, a support vector machine, a decision tree, and/or a Bayesian network, and/or the trained machine learning based network can be based on k-means clustering, Q-learning, genetic algorithms, and/or association rules. In particular, a neural network can be a deep neural network, a convolutional neural network, or a convolutional deep neural network. Furthermore, a neural network can be an adversarial network, a deep adversarial network and/or a generative adversarial network. FIG.5shows an embodiment of an artificial neural network500, in accordance with one or more embodiments. Alternative terms for “artificial neural network” are “neural network”, “artificial neural net” or “neural net”. 
Machine learning networks described herein, such as, e.g., the machine learning based networks utilized in method100ofFIG.1and workflow200ofFIG.2, may be implemented using artificial neural network500. The artificial neural network500comprises nodes502-522and edges532,534, . . . ,536, wherein each edge532,534, . . . ,536is a directed connection from a first node502-522to a second node502-522. In general, the first node502-522and the second node502-522are different nodes502-522, it is also possible that the first node502-522and the second node502-522are identical. For example, inFIG.5, the edge532is a directed connection from the node502to the node506, and the edge534is a directed connection from the node504to the node506. An edge532,534, . . . ,536from a first node502-522to a second node502-522is also denoted as “ingoing edge” for the second node502-522and as “outgoing edge” for the first node502-522. In this embodiment, the nodes502-522of the artificial neural network500can be arranged in layers524-530, wherein the layers can comprise an intrinsic order introduced by the edges532,534, . . . ,536between the nodes502-522. In particular, edges532,534, . . . ,536can exist only between neighboring layers of nodes. In the embodiment shown inFIG.5, there is an input layer524comprising only nodes502and504without an incoming edge, an output layer530comprising only node522without outgoing edges, and hidden layers526,528in-between the input layer524and the output layer530. In general, the number of hidden layers526,528can be chosen arbitrarily. The number of nodes502and504within the input layer524usually relates to the number of input values of the neural network500, and the number of nodes522within the output layer530usually relates to the number of output values of the neural network500. In particular, a (real) number can be assigned as a value to every node502-522of the neural network500. Here, x(n)idenotes the value of the i-th node502-522of the n-th layer524-530. The values of the nodes502-522of the input layer524are equivalent to the input values of the neural network500, the value of the node522of the output layer530is equivalent to the output value of the neural network500. Furthermore, each edge532,534, . . . ,536can comprise a weight being a real number, in particular, the weight is a real number within the interval [−1, 1] or within the interval [0, 1]. Here, w(m,n)i,jdenotes the weight of the edge between the i-th node502-522of the m-th layer524-530and the j-th node502-522of the n-th layer524-530. Furthermore, the abbreviation w(n)i,jis defined for the weight w(n,n+1)i,j. In particular, to calculate the output values of the neural network500, the input values are propagated through the neural network. In particular, the values of the nodes502-522of the (n+1)-th layer524-530can be calculated based on the values of the nodes502-522of the n-th layer524-530by xj(n+1)=f(Σixi(n)·wi,j(n)). Herein, the function f is a transfer function (another term is “activation function”). Known transfer functions are step functions, sigmoid function (e.g. the logistic function, the generalized logistic function, the hyperbolic tangent, the Arctangent function, the error function, the smoothstep function) or rectifier functions. The transfer function is mainly used for normalization purposes. 
In particular, the values are propagated layer-wise through the neural network, wherein the values of the input layer 524 are given by the input of the neural network 500, the values of the first hidden layer 526 can be calculated based on the values of the input layer 524, the values of the second hidden layer 528 can be calculated based on the values of the first hidden layer 526, etc. In order to set the values w(m,n)i,j for the edges, the neural network 500 has to be trained using training data. In particular, training data comprises training input data and training output data (denoted as ti). For a training step, the neural network 500 is applied to the training input data to generate calculated output data. In particular, the training data and the calculated output data comprise a number of values, said number being equal to the number of nodes of the output layer. In particular, a comparison between the calculated output data and the training data is used to recursively adapt the weights within the neural network 500 (backpropagation algorithm). In particular, the weights are changed according to

w′i,j(n) = wi,j(n) − γ·δj(n)·xi(n)

wherein γ is a learning rate, and the numbers δj(n) can be recursively calculated as

δj(n) = (Σk δk(n+1)·wj,k(n+1))·f′(Σi xi(n)·wi,j(n))

based on δj(n+1), if the (n+1)-th layer is not the output layer, and

δj(n) = (xj(n+1) − tj(n+1))·f′(Σi xi(n)·wi,j(n))

if the (n+1)-th layer is the output layer 530, wherein f′ is the first derivative of the activation function, and tj(n+1) is the comparison training value for the j-th node of the output layer 530.
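A minimal numpy sketch of the forward-propagation and backpropagation rules written above follows, for a single hidden layer and the logistic transfer function. The layer sizes, learning rate, and number of training steps are illustrative assumptions.

```python
# Minimal sketch: forward pass and backpropagation weight updates.
import numpy as np

def f(z):                     # logistic transfer function
    return 1.0 / (1.0 + np.exp(-z))

def f_prime(z):
    s = f(z)
    return s * (1.0 - s)

rng = np.random.default_rng(0)
x0 = np.array([0.2, 0.8])                 # values of the input layer
t = np.array([1.0])                       # training output value
W1 = rng.normal(size=(2, 3))              # weights input -> hidden layer
W2 = rng.normal(size=(3, 1))              # weights hidden -> output layer
gamma = 0.5                               # learning rate

for step in range(100):
    # forward pass: x_j^(n+1) = f( sum_i x_i^(n) * w_(i,j)^(n) )
    z1 = x0 @ W1; x1 = f(z1)
    z2 = x1 @ W2; x2 = f(z2)
    # output-layer delta: delta_j = (x_j - t_j) * f'(.)
    d2 = (x2 - t) * f_prime(z2)
    # hidden-layer delta: delta_j = (sum_k delta_k * w_(j,k)) * f'(.)
    d1 = (W2 @ d2) * f_prime(z1)
    # weight update: w'_(i,j) = w_(i,j) - gamma * delta_j * x_i
    W2 -= gamma * np.outer(x1, d2)
    W1 -= gamma * np.outer(x0, d1)

print("network output after training:", f(f(x0 @ W1) @ W2))
```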
In particular, the structure and the weights of the incoming edges are chosen such that the values xk(n) of the nodes 614 of the convolutional layer 604 are calculated as a convolution xk(n) = Kk * x(n−1) based on the values x(n−1) of the nodes 612 of the preceding layer 602, where the convolution * is defined in the two-dimensional case as

xk(n)[i,j] = (Kk * x(n−1))[i,j] = Σi′ Σj′ Kk[i′,j′]·x(n−1)[i−i′, j−j′].

Here the k-th kernel Kk is a d-dimensional matrix (in this embodiment a two-dimensional matrix), which is usually small compared to the number of nodes 612-618 (e.g. a 3×3 matrix, or a 5×5 matrix). In particular, this implies that the weights of the incoming edges are not independent, but chosen such that they produce said convolution equation. In particular, for a kernel being a 3×3 matrix, there are only 9 independent weights (each entry of the kernel matrix corresponding to one independent weight), irrespectively of the number of nodes 612-620 in the respective layer 602-610. In particular, for a convolutional layer 604, the number of nodes 614 in the convolutional layer is equivalent to the number of nodes 612 in the preceding layer 602 multiplied by the number of kernels. If the nodes 612 of the preceding layer 602 are arranged as a d-dimensional matrix, using a plurality of kernels can be interpreted as adding a further dimension (denoted as the "depth" dimension), so that the nodes 614 of the convolutional layer 604 are arranged as a (d+1)-dimensional matrix. If the nodes 612 of the preceding layer 602 are already arranged as a (d+1)-dimensional matrix comprising a depth dimension, using a plurality of kernels can be interpreted as expanding along the depth dimension, so that the nodes 614 of the convolutional layer 604 are arranged also as a (d+1)-dimensional matrix, wherein the size of the (d+1)-dimensional matrix with respect to the depth dimension is larger by a factor of the number of kernels than in the preceding layer 602. The advantage of using convolutional layers 604 is that the spatially local correlation of the input data can be exploited by enforcing a local connectivity pattern between nodes of adjacent layers, in particular by each node being connected to only a small region of the nodes of the preceding layer. In the embodiment shown in FIG. 6, the input layer 602 comprises 36 nodes 612, arranged as a two-dimensional 6×6 matrix. The convolutional layer 604 comprises 72 nodes 614, arranged as two two-dimensional 6×6 matrices, each of the two matrices being the result of a convolution of the values of the input layer with a kernel. Equivalently, the nodes 614 of the convolutional layer 604 can be interpreted as being arranged as a three-dimensional 6×6×2 matrix, wherein the last dimension is the depth dimension. A pooling layer 606 can be characterized by the structure and the weights of the incoming edges and the activation function of its nodes 616 forming a pooling operation based on a non-linear pooling function f. For example, in the two-dimensional case the values x(n) of the nodes 616 of the pooling layer 606 can be calculated based on the values x(n−1) of the nodes 614 of the preceding layer 604 as

x(n)[i,j] = f(x(n−1)[i·d1, j·d2], …, x(n−1)[i·d1+d1−1, j·d2+d2−1]).

In other words, by using a pooling layer 606, the number of nodes 614, 616 can be reduced, by replacing a number d1·d2 of neighboring nodes 614 in the preceding layer 604 with a single node 616 calculated as a function of the values of said number of neighboring nodes in the pooling layer. In particular, the pooling function f can be the max-function, the average or the L2-Norm.
In particular, for a pooling layer 606 the weights of the incoming edges are fixed and are not modified by training. The advantage of using a pooling layer 606 is that the number of nodes 614, 616 and the number of parameters are reduced. This leads to the amount of computation in the network being reduced and to a control of overfitting. In the embodiment shown in FIG. 6, the pooling layer 606 is a max-pooling layer, replacing four neighboring nodes with only one node, the value being the maximum of the values of the four neighboring nodes. The max-pooling is applied to each d-dimensional matrix of the previous layer; in this embodiment, the max-pooling is applied to each of the two two-dimensional matrices, reducing the number of nodes from 72 to 18. A fully-connected layer 608 can be characterized by the fact that a majority, in particular all, of the edges between the nodes 616 of the previous layer 606 and the nodes 618 of the fully-connected layer 608 are present, and wherein the weight of each of the edges can be adjusted individually. In this embodiment, the nodes 616 of the preceding layer 606 of the fully-connected layer 608 are displayed both as two-dimensional matrices and additionally as non-related nodes (indicated as a line of nodes, wherein the number of nodes was reduced for better presentability). In this embodiment, the number of nodes 618 in the fully-connected layer 608 is equal to the number of nodes 616 in the preceding layer 606. Alternatively, the number of nodes 616, 618 can differ. Furthermore, in this embodiment, the values of the nodes 620 of the output layer 610 are determined by applying the Softmax function to the values of the nodes 618 of the preceding layer 608. By applying the Softmax function, the sum of the values of all nodes 620 of the output layer 610 is 1, and all values of all nodes 620 of the output layer are real numbers between 0 and 1. A convolutional neural network 600 can also comprise a ReLU (rectified linear units) layer or activation layers with non-linear transfer functions. In particular, the number of nodes and the structure of the nodes contained in a ReLU layer are equivalent to the number of nodes and the structure of the nodes contained in the preceding layer. In particular, the value of each node in the ReLU layer is calculated by applying a rectifying function to the value of the corresponding node of the preceding layer. The input and output of different convolutional neural network blocks can be wired using summation (residual/dense neural networks), element-wise multiplication (attention) or other differentiable operators. Therefore, the convolutional neural network architecture can be nested rather than sequential if the whole pipeline is differentiable. In particular, convolutional neural networks 600 can be trained based on the backpropagation algorithm. For preventing overfitting, methods of regularization can be used, e.g. dropout of nodes 612-620, stochastic pooling, use of artificial data, weight decay based on the L1 or the L2 norm, or max norm constraints. Different loss functions can be combined for training the same neural network to reflect joint training objectives. A subset of the neural network parameters can be excluded from optimization to retain weights pretrained on other datasets. Systems, apparatuses, and methods described herein may be implemented using digital circuitry, or using one or more computers using well-known computer processors, memory units, storage devices, computer software, and other components.
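A minimal PyTorch sketch of the FIG. 6 architecture described above: a 6×6 input, a convolutional layer with two 3×3 kernels (two 6×6 feature maps, 72 nodes), 2×2 max-pooling down to 18 nodes, a fully-connected layer of the same size, and a Softmax output. The number of output classes is an illustrative assumption.

```python
# Minimal sketch of the convolution -> max-pool -> fully-connected -> softmax chain.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 2, kernel_size=3, padding=1),   # 1x6x6 -> 2x6x6 (72 nodes)
    nn.MaxPool2d(2),                             # 2x6x6 -> 2x3x3 (18 nodes)
    nn.Flatten(),
    nn.Linear(18, 18),                           # fully-connected layer, same size
    nn.Linear(18, 2),
    nn.Softmax(dim=1),                           # output values sum to 1
)

x = torch.rand(1, 1, 6, 6)                       # one 6x6 input image
print(model(x))                                  # class probabilities summing to 1
```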
Typically, a computer includes a processor for executing instructions and one or more memories for storing instructions and data. A computer may also include, or be coupled to, one or more mass storage devices, such as one or more magnetic disks, internal hard disks and removable disks, magneto-optical disks, optical disks, etc. Systems, apparatus, and methods described herein may be implemented using computers operating in a client-server relationship. Typically, in such a system, the client computers are located remotely from the server computer and interact via a network. The client-server relationship may be defined and controlled by computer programs running on the respective client and server computers. Systems, apparatus, and methods described herein may be implemented within a network-based cloud computing system. In such a network-based cloud computing system, a server or another processor that is connected to a network communicates with one or more client computers via a network. A client computer may communicate with the server via a network browser application residing and operating on the client computer, for example. A client computer may store data on the server and access the data via the network. A client computer may transmit requests for data, or requests for online services, to the server via the network. The server may perform requested services and provide data to the client computer(s). The server may also transmit data adapted to cause a client computer to perform a specified function, e.g., to perform a calculation, to display specified data on a screen, etc. For example, the server may transmit a request adapted to cause a client computer to perform one or more of the steps or functions of the methods and workflows described herein, including one or more of the steps or functions ofFIG.1or2. Certain steps or functions of the methods and workflows described herein, including one or more of the steps or functions ofFIG.1or2, may be performed by a server or by another processor in a network-based cloud-computing system. Certain steps or functions of the methods and workflows described herein, including one or more of the steps ofFIG.1or2, may be performed by a client computer in a network-based cloud computing system. The steps or functions of the methods and workflows described herein, including one or more of the steps ofFIG.1or2, may be performed by a server and/or by a client computer in a network-based cloud computing system, in any combination. Systems, apparatus, and methods described herein may be implemented using a computer program product tangibly embodied in an information carrier, e.g., in a non-transitory machine-readable storage device, for execution by a programmable processor; and the method and workflow steps described herein, including one or more of the steps or functions ofFIG.1or2, may be implemented using one or more computer programs that are executable by such a processor. A computer program is a set of computer program instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. 
A high-level block diagram of an example computer702that may be used to implement systems, apparatus, and methods described herein is depicted inFIG.7. Computer702includes a processor704operatively coupled to a data storage device712and a memory710. Processor704controls the overall operation of computer702by executing computer program instructions that define such operations. The computer program instructions may be stored in data storage device712, or other computer readable medium, and loaded into memory710when execution of the computer program instructions is desired. Thus, the method and workflow steps or functions ofFIG.1or2can be defined by the computer program instructions stored in memory710and/or data storage device712and controlled by processor704executing the computer program instructions. For example, the computer program instructions can be implemented as computer executable code programmed by one skilled in the art to perform the method and workflow steps or functions ofFIG.1or2. Accordingly, by executing the computer program instructions, the processor704executes the method and workflow steps or functions ofFIG.1or2. Computer702may also include one or more network interfaces706for communicating with other devices via a network. Computer702may also include one or more input/output devices708that enable user interaction with computer702(e.g., display, keyboard, mouse, speakers, buttons, etc.). Processor704may include both general and special purpose microprocessors, and may be the sole processor or one of multiple processors of computer702. Processor704may include one or more central processing units (CPUs), for example. Processor704, data storage device712, and/or memory710may include, be supplemented by, or incorporated in, one or more application-specific integrated circuits (ASICs) and/or one or more field programmable gate arrays (FPGAs). Data storage device712and memory710each include a tangible non-transitory computer readable storage medium. Data storage device712, and memory710, may each include high-speed random access memory, such as dynamic random access memory (DRAM), static random access memory (SRAM), double data rate synchronous dynamic random access memory (DDR RAM), or other random access solid state memory devices, and may include non-volatile memory, such as one or more magnetic disk storage devices such as internal hard disks and removable disks, magneto-optical disk storage devices, optical disk storage devices, flash memory devices, semiconductor memory devices, such as erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM), digital versatile disc read-only memory (DVD-ROM) disks, or other non-volatile solid state storage devices. Input/output devices708may include peripherals, such as a printer, scanner, display screen, etc. For example, input/output devices708may include a display device such as a cathode ray tube (CRT) or liquid crystal display (LCD) monitor for displaying information to the user, a keyboard, and a pointing device such as a mouse or a trackball by which the user can provide input to computer702. An image acquisition device714can be connected to the computer702to input image data (e.g., medical images) to the computer702. It is possible to implement the image acquisition device714and the computer702as one device. It is also possible that the image acquisition device714and the computer702communicate wirelessly through a network. 
In a possible embodiment, the computer702can be located remotely with respect to the image acquisition device714. Any or all of the systems and apparatus discussed herein may be implemented using one or more computers such as computer702. One skilled in the art will recognize that an implementation of an actual computer or computer system may have other structures and may contain other components as well, and thatFIG.7is a high level representation of some of the components of such a computer for illustrative purposes. The foregoing Detailed Description is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention. | 39,055 |
11861836 | DESCRIPTION OF EMBODIMENTS Embodiment 1 The following description will discuss an embodiment of the present invention in detail. Technical Idea of the Present Invention First, the technical idea of an image analysis method in accordance with an aspect of the present invention will be described below. When Gleason's grade is applied, prostate cancer can be graded into any one of five different grades from grade 1 representing the mildest disease condition to grade 5 representing the most severe disease condition. The degree of differentiation of the glands in a prostate (tissue) is the strongest determinant of Gleason's grade. The degree of differentiation of a cell included in a prostate may be a pathological indicator associated with the structure of a tumor and the degree of infiltration of the tumor. For example, even expert pathologists need to carefully examine needle biopsy images of a prostate (hereinafter, referred to as needle biopsy images) to correctly discriminate grades representing moderate disease conditions on the basis of the needle biopsy images. This makes the diagnosis time-consuming and may result in disagreement on the diagnosis among pathologists. The inventors of the present invention selected, as an analysis target, a needle biopsy image, which is an example of a histological image, and made a detailed comparison and study between the region where prostate cancer developed and the region where no prostate cancer developed in the needle biopsy image (histological image). The inventors of the present invention applied the concept of homology to describe the topological arrangement of the nuclei of the cells associated with the gland lumen and quantify the degree of gland differentiation in a needle biopsy slide of prostate cancer. More specifically, the inventors of the present invention extracted, from a needle biopsy image, an image patch (histological image) in which an analysis target region is captured, and generated, for the image patch, a plurality of binarized images associated with binarization reference values different from each other. The inventors of the present invention then calculated, for each binarized image, a one-dimensional Betti number b1(first characteristic numerical value), a zero-dimensional Betti number b0(second characteristic numerical value), and a ratio R between the one-dimensional Betti number b1and the zero-dimensional Betti number b0(third characteristic numerical value). The ratio R may be b1/b0or b0/b1. In this manner, the inventors of the present invention obtained (1) a one-dimensional Betti number b1group, (2) a zero-dimensional Betti number b0group, and (3) a ratio R group for a binarized image group generated from the needle biopsy image. Subsequently, the inventors of the present invention regarded, as discrete probability distributions, (1) the one-dimensional Betti number b1group, (2) the zero-dimensional Betti number b0group, and (3) the ratio R group, and calculated statistics for the respective distributions. The set of these statistics calculated for the same needle biopsy image is considered to be a characteristic vector representing the characteristics of topological arrangement of the nuclei of cells included in tissue captured in the needle biopsy image. 
The inventors of the present invention have found that the degree of differentiation of a gland captured in each needle biopsy image can be precisely determined on the basis of the statistics calculated in this manner, and have completed the invention of an image analysis method in accordance with an aspect of the present invention. For example, applying the image analysis method in accordance with an aspect of the present invention enables precise discrimination of grades 1 to 5 of Gleason's grade, which represents the disease conditions of prostate cancer, on the basis of a needle biopsy image. In some cases, it is not easy even for expert pathologists to discriminate between grades 3 and 4 of Gleason's grade, which represent moderate-grade disease conditions. Applying the image analysis method in accordance with an aspect of the present invention enables precise discrimination between the grades 3 and 4 of Gleason's grade in a short time. (Needle Biopsy Image and Gleason's Grade) Referring now toFIG.1, Gleason's grade, which is determined on the basis of a needle biopsy image, is described.FIG.1is a view illustrating an example histological image, in which (b) ofFIG.1is a view illustrating the entire needle biopsy image, and both (a) and (c) ofFIG.1are views illustrating extracted image patches of the prostate cancerous regions. The needle biopsy image illustrated in (b) ofFIG.1is an image of a sample at 40-fold magnification. The sample was prepared by paraffin-embedding tissue sections taken from multiple points in a prostate taken from a subject, slicing them into thin pieces, and staining the thin pieces with Hematoxylin-Eosin (HE). HE staining is one of the methods used to stain collected tissue sections, and uses staining with hematoxylin and staining with eosin together. Hematoxylin stains chromatin in a cell nucleus and ribosomes in cytoplasm purple-blue. Eosin stains the constituents of cytoplasm and an extracellular matrix red. To make the locations of the nuclei of individual cells constituting a gland lumen in a histological image and the arrangement of the nuclei clearer, the histological image may undergo color deconvolution processing. Color deconvolution processing is performed to normalize color intensity by hematoxylin staining and color intensity by eosin staining. For example, when a histological image is represented with three color phases of RGB, a normalization factor matrix expressed by, for example, Expression (1) or Expression (2) shown below can be used. The normalization factor matrix of Expression (1) is the original stain matrix by Ruifrok et al., and the normalization factor matrix of Expression (2) is the average stain matrix calculated using the adaptive estimation method of Macenko et al. In each normalization factor matrix, the first row (upper line) is the normalization factor relating to hematoxylin staining and the second row (lower line) is the normalization factor relating to eosin staining.

\[
\begin{pmatrix} 0.644211 & 0.716556 & 0.266844 \\ 0.092789 & 0.954111 & 0.283111 \end{pmatrix}
\begin{pmatrix} R \\ G \\ B \end{pmatrix}
\qquad \text{Expression (1)}
\]
\[
\begin{pmatrix} 0.5155 \pm 0.0307 & 0.7234 \pm 0.0169 & 0.4576 \pm 0.0205 \\ 0.1501 \pm 0.02 & 0.7723 \pm 0.0308 & 0.6149 \pm 0.0391 \end{pmatrix}
\begin{pmatrix} R \\ G \\ B \end{pmatrix}
\qquad \text{Expression (2)}
\]

Both (a) and (c) ofFIG.1are views illustrating image patches extracted from the needle biopsy image of (b) ofFIG.1and representing the regions where prostate cancer in the moderate-grade disease condition develops. By way of example, two image patches are illustrated in each of (a) and (c) ofFIG.1. 
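Returning to the color deconvolution step described above, the following is a minimal sketch of how a normalization factor matrix such as Expression (1) can be applied to an RGB histological image. The optical-density (Beer-Lambert) conversion and the pseudo-inverse unmixing are the usual Ruifrok-style procedure and are assumptions of this sketch; the description itself only specifies the normalization factor matrices, and the random patch stands in for real image data.

```python
import numpy as np

# Rows: normalization factors of hematoxylin and eosin in R, G, B
# (the Ruifrok et al. values of Expression (1); Expression (2) could be used instead).
STAIN_MATRIX = np.array([[0.644211, 0.716556, 0.266844],   # hematoxylin
                         [0.092789, 0.954111, 0.283111]])  # eosin

def separate_stains(rgb: np.ndarray) -> np.ndarray:
    """Split an 8-bit RGB image (H, W, 3) into hematoxylin/eosin concentration maps (H, W, 2)."""
    od = -np.log10((rgb.astype(float) + 1.0) / 256.0)        # optical density per channel
    conc = od.reshape(-1, 3) @ np.linalg.pinv(STAIN_MATRIX)  # least-squares stain unmixing
    return conc.reshape(rgb.shape[0], rgb.shape[1], 2)

patch = np.random.randint(0, 256, (160, 160, 3), dtype=np.uint8)  # stand-in image patch
hematoxylin_map = separate_stains(patch)[..., 0]   # emphasizes the cell nuclei
print(hematoxylin_map.shape)
```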
The image patches illustrated in (a) and (c) ofFIG.1are images of the respective regions in the needle biopsy image illustrated in (b) ofFIG.1at 40-fold magnification. The analysis of the degree of differentiation of a cell in tissue in an image does not require assessment of the nuclei of individual cells but requires assessment of characteristics relating to topology and connectivity among the nuclei of a plurality of cells. When such analysis targets an image patch, the patch size may be 160×160 pixels at 10-fold magnification. The prostate in a needle biopsy image may locally include prostate cancerous regions. The local regions in the needle biopsy image to be analyzed are determined, and grades of Gleason's grade may be assigned to the local regions. To conduct such analysis, a plurality of image patches that can be the analysis targets may be extracted from the needle biopsy image. To remove the background region that does not include tissue from the needle biopsy image, any known algorithm can be used. Examples of such an algorithm include the OTSU algorithm (Otsu, 1979, IEEE Trans. Syst. Man. Cybern. Vol. 9 (1), p 62-66, doi:10.1109/tsmc.1979.4310076). Further, to extract image patches, any trained neural network having an image recognition function can be used. For example, the Gleason's grade of the entire prostate in the needle biopsy image can be estimated on the basis of the analysis result for the plurality of image patches extracted from the needle biopsy image. The extraction window for extracting the plurality of image patches from the needle biopsy image may be moved across each region of the needle biopsy image row by row from the upper left to the lower right with a predetermined step size. When, for example, the extracted image patch is 160×160 pixels in size, the step size may be 80 pixels. In the image patches illustrated in (a) ofFIG.1, the cell nuclei are arranged in association with the gland lumen. On the other hand, in the image patches illustrated in (c) ofFIG.1, some cell nuclei are arranged to form small holes, and other cell nuclei are not arranged in association with the gland lumens. Expert pathologists determine that the regions in the image patches illustrated in (a) ofFIG.1fall under grade 3 of Gleason's grade, and determine that the regions in the image patches illustrated in (c) ofFIG.1fall under grade 4 of Gleason's grade. The histological image as used herein may be an image of a tissue section taken from the body of a subject. It should be noted that the image analysis method in accordance with an aspect of the present invention enables estimation of the degree of differentiation of a cell on the basis of a histological image in which a cell included in tissue taken from the body of a subject is captured, and the histological image is not limited to a needle biopsy image of a prostate. The image analysis method in accordance with an aspect of the present invention enables the analysis of a body part such as the alimentary canal, a liver, a pancreas, and a lymph node besides a prostate. (Mathematical Representation for Histological Image Analysis) The following is the description of mathematical representation used for histological image analysis in the image analysis method in accordance with an aspect of the present invention. To quantify and analyze a change that has occurred in tissue, the concept of homology, in particular the persistent homology, is applied to binarized images in the image analysis method in accordance with an aspect of the present invention. 
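Before moving to that mathematical representation, the patch extraction described above can be sketched as follows. This is a minimal illustration using the 160×160-pixel patch size and 80-pixel step size mentioned above; the stand-in array and the omission of background removal by the OTSU algorithm are simplifications of this sketch.

```python
import numpy as np

def extract_patches(image: np.ndarray, patch: int = 160, step: int = 80):
    """Slide a patch x patch extraction window over the image row by row, from the
    upper left to the lower right, with the given step size."""
    h, w = image.shape[:2]
    for top in range(0, h - patch + 1, step):
        for left in range(0, w - patch + 1, step):
            yield top, left, image[top:top + patch, left:left + patch]

biopsy = np.random.randint(0, 256, (800, 1200), dtype=np.uint8)  # stand-in grayscale slide
patches = list(extract_patches(biopsy))
print(len(patches))   # overlapping 160x160 image patches at 80-pixel steps
```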
Homology is a mathematical field which facilitates an analysis of, for example, connection between figures by substituting an algebraic expression for morphological characteristics of the figures. The concept of homology is the mathematical concept that represents connection and contact among constituents. A histological image is binarized by using an appropriately set reference value for binarization (also referred to as a binarization parameter). The binarized image is then used to calculate a zero-dimensional Betti number b0and a one-dimensional Betti number b1. Use of the calculated zero-dimensional Betti number b0and one-dimensional Betti number b1enables assessment of the degrees of connection and contact among the constituents of tissue. Betti numbers are topological numbers that have no relation to the shapes of figures (corresponding to, for example, constituents of tissue) but relate only to contact and separation among figures. When a q-dimensional singular homology group is finitely-generated, the q-dimensional singular homology group is the direct sum of a free Abelian group and a finite Abelian group. The rank of the free Abelian group is referred to as a Betti number. <Zero-Dimensional Betti Number b0> A zero-dimensional Betti number b0is mathematically defined as follows. In general, a zero-dimensional Betti number b0refers to the number of connected components of a figure K composed of a finite number of line segments connected together (K is also referred to as a one-dimensional complex). The expression "a figure composed of a finite number of points and a finite number of line segments connecting the points is connected" means that it is possible to reach any second vertex from any first vertex of the figure by following a side of the figure. For each of a plurality of binarized images generated using respective binarization reference values different from each other, the number of connected regions each composed of pixels that are connected together and have one of the pixel values obtained by the binarization (for example, a pixel value of 0 as a result of binarization) is the zero-dimensional Betti number b0. <One-Dimensional Betti Number b1> A one-dimensional Betti number b1is mathematically defined as follows. The one-dimensional Betti number b1of figure K is r when K satisfies conditions (1) and (2) below. For figure K composed of a finite number of line segments connected together (a connected one-dimensional complex), (1) when any given r one-dimensional simplices (for example, line segments) that are each open (each do not have both ends) are removed from figure K, the number of connected components of figure K does not increase; and (2) when any given (r+1) one-dimensional open simplices are removed from K, K is no longer connected (in other words, the number of connected components of K increases by 1). For each of the plurality of binarized images generated using respective binarization reference values different from each other, the number of hole-shaped regions (composed of pixels having, for example, a pixel value of 255 after the binarization) each surrounded by pixels having one of the pixel values obtained by the binarization (for example, a pixel value of 0 as a result of the binarization) is the one-dimensional Betti number b1. 
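As a concrete illustration of these definitions, the following sketch counts b0 and b1 for a binarized image represented as a boolean mask. SciPy and its connected-component labeling are assumed here; the description itself does not prescribe a particular implementation (an existing program such as CHomP is mentioned later).

```python
import numpy as np
from scipy import ndimage

def betti_numbers(black: np.ndarray) -> tuple:
    """black: mask of the pixels having one pixel value after binarization (e.g. 0).
    Returns (b0, b1): connected black regions and hole-shaped regions enclosed by them."""
    b0 = ndimage.label(black)[1]                 # number of connected black regions
    labels, n_white = ndimage.label(~black)      # label the white regions
    # White regions touching the image border are background, not hole-shaped regions.
    border = np.unique(np.concatenate([labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]]))
    b1 = n_white - np.count_nonzero(border)
    return b0, b1

# Toy figure: one ring (one hole) and one solid blob, so b0 = 2 and b1 = 1.
img = np.zeros((9, 12), dtype=bool)
img[1:6, 1:6] = True
img[2:5, 2:5] = False          # carve the hole of the ring
img[3:6, 8:11] = True          # solid blob without a hole
print(betti_numbers(img))      # (2, 1)
```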
<Zero-Dimensional Betti Number b0and One-Dimensional Betti Number b1of Example Figure> Now, the zero-dimensional Betti number b0and the one-dimensional Betti number b1in a binarized image are described using an example figure illustrated inFIG.2.FIG.2is a schematic view for explaining Betti numbers in the concept of homology. For afigure M1illustrated inFIG.2, the number of black regions is one. Accordingly, the zero-dimensional Betti number b0of thefigure M1is 1. In addition, the number of white regions in thefigure M1that are each surrounded by the black region is one. Accordingly, the one-dimensional Betti number b1of thefigure M1is 1. For afigure M2illustrated inFIG.2, the number of black regions is two. Accordingly, the zero-dimensional Betti number b0of thefigure M2is 2. In addition, the number of white regions of thefigure M2that are each surrounded by the black regions is three. Accordingly, the one-dimensional Betti number b1of thefigure M2is 3. For a two-dimensional image, the zero-dimensional Betti number b0is the number of cohesive groups each composed of components connected to each other, and the one-dimensional Betti number b1is the number of voids (which can be referred to as "hole-shaped regions" hereinafter) each bounded by the connected components. The number of hole-shaped regions is the total number of the "holes" present in the connected components. In the image analysis method in accordance with an aspect of the present invention, characteristics relating to topology and connectivity of the plurality of nuclei of cells are assessed for regions captured in histological images. With the image analysis method in accordance with an aspect of the present invention, this assessment enables estimation of the degree of cell differentiation based on the structural property and arrangement of the cells included in the tissue captured in the histological images. For example, applying the image analysis method in accordance with an aspect of the present invention to histological images such as needle biopsy images and image patches enables discrimination between the grades of Gleason's grade with a precision comparable to that achieved by expert pathologists. (Configuration of Estimating System100) The following description will discuss a configuration of the estimating system100with reference toFIG.3.FIG.3is a block diagram illustrating an example configuration of the estimating system100. The estimating system100includes an estimating device1that performs the image analysis method in accordance with an aspect of the present invention. The estimating system100includes the estimating device1, an external device8that sends a histological image31to the estimating device1, and a presenting device5that obtains an estimation result output from the estimating device1and presents the estimation result.FIG.3illustrates an example in which a medical institution H1has introduced the estimating system100. The external device8may be, for example, a microscopic device having an image capturing function, or a computer connected to a microscope so that the computer can obtain image data from the microscope. Alternatively, the external device8may be a server (not illustrated) in the medical institution H1that stores and manages various kinds of medical image data and pathological image data. The presenting device5may be a device, such as a display and a speaker, capable of presenting information output from the estimating device1. 
In an example, the presenting device5is a display included in the estimating device1or in the external device8. Alternatively, the presenting device5may be a device, such as a computer and a tablet terminal, used by pathologists, laboratory technicians, and researchers from the medical institution H1. A connection between the estimating device1and the external device8may be a wireless connection or a wired connection. A connection between the estimating device1and the presenting device5may be a wireless connection or a wired connection. (Configuration of Estimating Device1) The estimating device1includes an image obtaining section2, a storage section3, and a control section4. The histological image31and an estimating model33may be stored in the storage section3. The estimating model33will be described later. The image obtaining section2obtains the histological image31of tissue from the external device8. When the analysis target is prostate tissue, the histological image31may be an image, captured at a predetermined magnification, of a tissue section of a prostate taken from the body of a subject. The image obtaining section2may obtain, from the external device8, an image patch corresponding to the region extracted from the histological image31. The image obtaining section2may store the obtained histological image31in the storage section3as illustrated inFIG.3. AlthoughFIG.3illustrates the example in which the estimating device1obtains the histological image31from the external device8that is separate from the estimating device1, the present invention is not limited to such a configuration. For example, the estimating device1may be incorporated into the external device8. The storage section3stores, in addition to the histological image31, a control program, executed by the control section4, for controlling each section, an OS program, an application program, and the like. Further, the storage section3stores various data that is retrieved when the control section4executes the program. The storage section3is constituted by a non-volatile storage device such as a hard disk and a flash memory. Note that the estimating device1may include, in addition to the storage section3, a volatile storage device, such as a Random Access Memory (RAM), used as a working area for temporary storage of data in the process of the execution of the various kinds of programs described above. AlthoughFIG.3illustrates an example in which the estimating device1is connected to the presenting device5that is separate from the estimating device1, the present invention is not limited to such a configuration. For example, the estimating device1may include the presenting device5for the exclusive use. <Configuration of Control Section4> The control section4may be constituted by a control device such as a central processing unit (CPU) and a dedicated processor. The respective units of the control section4, which will be described later with reference toFIG.3, are implemented by causing the control device such as a CPU to retrieve a program stored in the storage section3embodied in the form of, for example, a read only memory (ROM) and store the program in, for example, a random access memory (RAM) for execution. The control section4analyzes the histological image31, which is the analysis target, to estimate the degree of differentiation of a cell included in the tissue captured in the histological image31and outputs the estimation result. 
The control section4includes a binarizing section41, a Betti number calculating section42(characteristic numerical value calculating section), a statistic calculating section43, an estimating section44, and an output control section45. [Binarizing Section41] The binarizing section41performs binarization processing on the histological image31to generate a plurality of binarized images associated with respective binarization reference values different from each other. The binarizing section41may have a known image recognition function and image processing function. This allows the binarizing section41to clip, from the histological image31, the region in which the analysis target tissue is captured, perform color deconvolution processing on the histological image31, and/or divide the histological image31to generate a plurality of image patches. For example, the binarizing section41may discriminate the region in which tissue is captured from the region surrounding the tissue region (for example, the region in which resin is captured) and clip the tissue region. In the binarization processing, the binarizing section41converts a pixel with a pixel value larger than a binarization reference value to a white pixel, and converts a pixel with a pixel value not larger than the binarization reference value to a black pixel. In doing so, the binarizing section41performs binarization processing every time the binarization reference value changes, to generate a plurality of binarized images. In other words, the binarizing section41generates a plurality of binarized images associated with respective binarization reference values different from each other for the histological image31. In an example, the binarizing section41sets the binarization reference value to a value of not less than 0 and not more than 255. For example, when the binarization reference value is set to a pixel value of 100, the pixel value of a pixel having a pixel value of not more than 100 is converted to 0 as a result of the binarization processing, and the pixel value of a pixel having a pixel value higher than 100 is converted to 255 as a result of the binarization processing. In an example, the binarizing section41may generate 255 binarized images for each histological image by changing the binarization reference value from 1 to 255 by an increment of 1. Alternatively, the binarizing section41may generate 253 binarized images for each histological image by changing the binarization reference value from 2 to 254 by an increment of 1. Instead, the binarizing section41may generate 50 binarized images for each histological image by changing the binarization reference value from 2 to 254 by an increment of 5. The present invention is not limited to such a binarized image generation manner, provided that the binarizing section41generates a plurality of binarized images for each histological image by changing the binarization reference value according to a desired rule. [Betti Number Calculating Section42] The Betti number calculating section42calculates, for each binarized image, (i) a one-dimensional Betti number b1representing the number of hole-shaped regions each surrounded by pixels having one of the pixel values given through binarization (hereinafter, referred to as a first pixel value) and each composed of pixels with the other pixel value given through the binarization (hereinafter, referred to as a second pixel value). 
Further, the Betti number calculating section42calculates (ii) a zero-dimensional Betti number b0representing the number of connected regions each composed of pixels having the first pixel value connected together and (iii) the ratio R between the one-dimensional Betti number b1and the zero-dimensional Betti number b0. Although the ratio R is b1/b0in the example discussed below, the ratio R may be b0/b1. Each of the above connected regions is a region composed of pixels that have a pixel value of, for example, 0 after binarization processing and are connected together. The connected regions are each surrounded by the pixels having a pixel value of 255 after the binarization processing, and are independent of each other. On the other hand, each of the above hole-shaped regions is a region composed of pixels that have a pixel value of 255 after the binarization processing and are connected to each other. The hole-shaped regions are each surrounded by the pixels having a pixel value of 0 after the binarization processing, and are independent of each other. When, for example, 255 binarized images are generated for each histological image by changing the binarization reference value from 1 to 255 by an increment of 1, the Betti number calculating section42calculates 255 one-dimensional Betti numbers b1, 255 zero-dimensional Betti numbers b0, and 255 ratios R. FIG.4is a view illustrating example binarized images generated from the histological image31, and an example zero-dimensional Betti number b0and an example one-dimensional Betti number b1calculated for each binarized image. In an example, for the binarized image inFIG.4that is generated using a binarization reference value of 40, the calculated zero-dimensional Betti number b0is 22 and the calculated one-dimensional Betti number b1is 2 (the ratio R is thus 11). Similarly, for the binarized image generated using a binarization reference value of 100, the calculated zero-dimensional Betti number b0is 16, and the calculated one-dimensional Betti number b1is 4 (the ratio R is thus 4). Further, for the binarized image generated using a binarization reference value of 150, the calculated zero-dimensional Betti number b0is 31, and the calculated one-dimensional Betti number b1is 2 (the ratio R is thus 15.5). Note that values of the one-dimensional Betti number b1and zero-dimensional Betti number b0calculated by the Betti number calculating section42depend on magnification and resolution set at the time of obtaining the histological image31and on the area of a region imaged in the histological image31. Accordingly, the Betti number calculating section42preferably calculates the one-dimensional Betti numbers b1and the zero-dimensional Betti numbers b0for respective histological images31that are at the same magnification and have the same resolution, and that have the same area of an imaged region. An existing program can be used for the Betti number calculating section42. Examples of such a program may include CHomP. CHomP is freeware compliant with the GNU General Public License. However, the program is not limited to CHomP, and programs other than CHomP may be used, provided that the programs can calculate the zero-dimensional Betti number b0and the one-dimensional Betti number b1relating to an image. [Statistic Calculating Section43] FIG.3is referred to again. The one-dimensional Betti number b1, the zero-dimensional Betti number b0, and the ratio R have been calculated for each of the plurality of binarized images. 
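A compact sketch of this per-threshold calculation (the work of the binarizing section41and the Betti number calculating section42taken together) is given below. It assumes SciPy, counts hole-shaped regions with simple connected-component labeling rather than CHomP, sweeps the binarization reference value from 1 to 255 by an increment of 1 as in the example above, and takes R as b1/b0 (b0/b1 may be used instead).

```python
import numpy as np
from scipy import ndimage

def betti_profile(gray: np.ndarray, thresholds=range(1, 256)):
    """Binarize the grayscale image at each binarization reference value and return
    arrays of the zero-dimensional Betti number b0, the one-dimensional Betti number b1
    and the ratio R = b1 / b0."""
    b0s, b1s, ratios = [], [], []
    for t in thresholds:
        black = gray <= t                            # pixels converted to 0 (black)
        b0 = ndimage.label(black)[1]                 # connected black regions
        labels, n_white = ndimage.label(~black)      # white regions
        border = set(labels[0, :]) | set(labels[-1, :]) | set(labels[:, 0]) | set(labels[:, -1])
        b1 = n_white - len(border - {0})             # holes: white regions not touching the border
        b0s.append(b0)
        b1s.append(b1)
        ratios.append(b1 / b0 if b0 else 0.0)
    return np.array(b0s), np.array(b1s), np.array(ratios)

patch = np.random.randint(0, 256, (160, 160), dtype=np.uint8)   # stand-in grayscale patch
b0s, b1s, ratios = betti_profile(patch)
print(len(b0s))   # 255 values, one per binarization reference value
```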
The statistic calculating section43then calculates a first statistic T1relating to the distribution of the one-dimensional Betti number b1, a second statistic T2relating to the distribution of the zero-dimensional Betti number b0, and a third statistic T3relating to the distribution of the ratio R. When, for example, the Betti number calculating section42calculates 255 one-dimensional Betti numbers b1, the statistic calculating section43deals with the 255 one-dimensional Betti numbers b1as one discrete probability distribution to calculate the first statistic T1relating to the discrete probability distribution. Similarly, the statistic calculating section43deals with the 255 zero-dimensional Betti numbers b0as one discrete probability distribution to calculate the second statistic T2relating to the discrete probability distribution, and deals with the 255 ratios R as one discrete probability distribution to calculate the third statistic T3relating to the discrete probability distribution. The statistic calculating section43may calculate at least one of an average value, a median value, a standard deviation, a distribution range, a variation coefficient, skewness, and kurtosis as each of the first statistic T1, the second statistic T2, and the third statistic T3. The distribution range is the difference between the maximum value and the minimum value. The variation coefficient is a value obtained by dividing the standard deviation by the average value. The skewness is the statistic representing the degree to which a distribution is skewed with respect to the normal distribution and thus is the indicator representing the degree and the magnitude of asymmetry between two halves of the distribution divided at the average value of the distribution. The kurtosis is the statistic representing the degree to which a distribution is sharp when compared to the normal distribution and thus is the indicator representing the degree of sharpness of the distribution. [Estimating Section44] The estimating section44feeds input data including the first statistic T1, the second statistic T2, and the third statistic T3to an estimating model33(described later), and outputs the degree of differentiation of a cell included in tissue. The degree of differentiation may be a value or an indicator, representing the degree of differentiation calculated on the basis of structural property, arrangement, and infiltration manner of a tumor cell developing in tissue. When the analysis target is prostate tissue, the degree of differentiation may be a value representing any of grades of Gleason's grade which is determined according to the degree of gland differentiation of the prostate. The estimating model33is a model that simulates a correspondence of the first statistic T1, the second statistic T2, and the third statistic T3with the degree of differentiation of a cell included in tissue. 
In other words, the estimating model33is created through machine learning using, as learning data, a combination of the following (1) and (2): (1) a training histological image32that is an image obtained by capturing an image of tissue and that has been given in advance differentiation information indicating the degree of differentiation of a cell included in tissue captured in the training histological image; and (2) the first statistic T1, the second statistic T2, and the third statistic T3which are calculated for each of a plurality of binarized images that are generated from the training histological image32and that are associated with respective binarization reference values different from each other. Machine learning processing using the training histological image32to create the estimating model33will be described later. [Output Control Section45] The output control section45causes the presenting device5to present information indicating the estimation result output from the estimating section44. Further, the output control section45may cause the presenting device5to present the histological image31, which is the analysis target, together with the information indicating the estimation result. Alternatively, the output control section45may be configured to control the presenting device5to present the estimation result for a region extracted from the histological image31at the position corresponding to the region in the histological image31. This allows the estimating device1to present, to pathologists, laboratory technicians, and researchers, the output estimation result for the histological image31and the position of a region associated with the estimation result. The estimation result may be presented to a user in a desired manner. For example, a determination result may be displayed on the presenting device5as illustrated inFIG.3, or may be output from a printer (not illustrated), a speaker (not illustrated), or the like. (Processing Performed by Estimating Device1) The following description will discuss the processing performed by the estimating device1with reference toFIG.5.FIG.5is a flowchart illustrating an example flow of the processing performed by the estimating device1. First, the image obtaining section2obtains the histological image31from the external device8(step S1). Next, the binarizing section41generates, from the histological image31, a plurality of binarized images associated with the respective binarization reference values different from each other (step S2). The Betti number calculating section42then calculates the one-dimensional Betti number b1, the zero-dimensional Betti number b0, and the ratio R for each of the plurality of binarized images (step S3: a characteristic numerical value calculation step). The ratio R is denoted as "b1/b0" inFIG.5. The statistic calculating section43calculates a statistic T1relating to the distribution of the one-dimensional Betti number b1calculated for the respective binarized images, a statistic T2relating to the distribution of the zero-dimensional Betti number b0calculated for the respective binarized images, and a statistic T3relating to the distribution of the ratio R calculated for the respective binarized images (step S4: statistic calculation step). 
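A minimal sketch of this statistic calculation step (step S4) is given below. It computes all seven statistics listed earlier (average value, median value, standard deviation, distribution range, variation coefficient, skewness, and kurtosis) for each of the three distributions and concatenates them into one characteristic vector; the description only requires at least one statistic per distribution, and SciPy is assumed.

```python
import numpy as np
from scipy import stats

def distribution_statistics(values: np.ndarray) -> np.ndarray:
    """Average, median, standard deviation, range, variation coefficient, skewness, kurtosis."""
    mean = values.mean()
    return np.array([
        mean,
        np.median(values),
        values.std(),
        values.max() - values.min(),            # distribution range
        values.std() / mean if mean else 0.0,   # variation coefficient
        stats.skew(values),
        stats.kurtosis(values),
    ])

def characteristic_vector(b1s: np.ndarray, b0s: np.ndarray, ratios: np.ndarray) -> np.ndarray:
    """Concatenate the statistics T1, T2 and T3 into a single feature vector."""
    return np.concatenate([distribution_statistics(b1s),      # T1: distribution of b1
                           distribution_statistics(b0s),      # T2: distribution of b0
                           distribution_statistics(ratios)])  # T3: distribution of R

rng = np.random.default_rng(0)
features = characteristic_vector(rng.integers(0, 10, 255).astype(float),
                                 rng.integers(5, 40, 255).astype(float),
                                 rng.random(255))
print(features.shape)   # (21,): 7 statistics x 3 distributions
```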
The estimating section44then feeds input data including the statistic T1, the statistic T2, and the statistic T3to the estimating model33(step S5: an estimation step), and outputs the degree of differentiation of a cell included in the tissue captured in the histological image31(step S6: an estimation step). With the above configuration, the estimating device1generates, for the histological image31, the plurality of binarized images associated with the respective binarization reference values different from each other, and calculates the one-dimensional Betti number b1, the zero-dimensional Betti number b0, and the ratio R for each binarized image. The estimating device1then calculates the first statistic T1relating to the distribution of the one-dimensional Betti number b1, the second statistic T2relating to the distribution of the zero-dimensional Betti number b0, and the third statistic T3relating to the distribution of the ratio R. Subsequently, the estimating device1feeds, as a data set, the first statistic T1, the second statistic T2, and the third statistic T3to the estimating model33, and outputs the degree of differentiation of a cell included in the tissue. This allows the estimating device1to precisely determine the degree of differentiation of a cell included in the tissue in the histological image31on the basis of the structural property of the cell included in the tissue. Cells in tissue vary in uniformity of shape and size according to the degree of differentiation. In the above method, the property of the histological image31of the tissue is mathematically analyzed using the concept of homology, and the degree of differentiation of a cell included in the tissue is estimated on the basis of the analysis result. The estimation result is output from the estimating model33created through machine learning using the training histological image32, which will be described later. Therefore, as with the pathological diagnosis results by pathologists, the estimation result is based on the property of the histological image31. Accordingly, the estimation result can be understood in the same manner as determination results by pathologists, and is highly reliable. (Training Histological Image32) The training histological image32may be used to create the estimating model33.FIG.6is a view illustrating an example data structure of the training histological image32. As illustrated inFIG.6, the training histological image32includes histological images of tissue taken from the bodies of subjects. The respective histological images are given training histological image IDs. Each histological image included in the training histological image32has been given in advance differentiation information that indicates the degree of differentiation of a cell included in the tissue. The differentiation information is a result of determination made by a pathologist who has examined the histological image included in the training histological image32, and is information indicating the degree of differentiation of a cell included in the tissue. (Configuration of Estimating Device1Having Function of Creating Estimating Model33) The following description will discuss the configuration of the estimating device1that is running a learning algorithm for creating the estimating model33with reference toFIG.7.FIG.7is a functional block diagram illustrating an example configuration of the main part of the estimating device1that creates the estimating model33. 
For convenience of description, members having functions identical to those of the respective members described inFIG.3are given respective identical reference numerals, and a description of those members is omitted. AlthoughFIG.7illustrates the example in which the training histological image32is stored in advance in the storage section3of the estimating device1, the present invention is not limited to such an arrangement. For example, the image obtaining section2illustrated inFIG.3may obtain the training histological image32from the external device8. Further, although the estimating device1has the function of creating the estimating model33in the example illustrated inFIG.7, the present invention is not limited to such an arrangement. For example, the estimating model33may be created by causing a computer other than the estimating device1to perform the processing as described above. In this case, the estimating model33created by the computer other than the estimating device1may be stored in the storage section3of the estimating device1, and the estimating section44may use the estimating model33. In the estimating device1, the control section4that is running a learning algorithm for creating the estimating model includes the binarizing section41, the Betti number calculating section42, the statistic calculating section43, the estimating section44, and an estimating model creating section46. [Estimating Model Creating Section46] The estimating model creating section46performs, on a candidate estimating model, an algorithm for machine learning using the training histological image32to create the estimating model33(trained). The estimating model33(trained) is stored in the storage section3. Any known algorithm for machine learning can be applied to the machine learning for creating the estimating model33. In an example, a k-nearest neighbor algorithm or a weighted k-nearest neighbor algorithm can be used to create the estimating model33. However, the present invention is not limited to such an arrangement. For example, an algorithm such as Support Vector Machine and Random Forest can be applied to the machine learning for creating the estimating model33. In a case where a k-nearest neighbor algorithm or a weighted k-nearest neighbor algorithm is applied to creation of the estimating model33, the first statistic T1, the second statistic T2, and the third statistic T3that are normalized on the basis of the average value and dispersion may be used. This enables adjustment of the extent to which the first statistic T1, the second statistic T2, and the third statistic T3each affect the estimating model33. (Processing to Create Estimating Model33) The following description will discuss, with reference toFIG.8, the processing to create the estimating model33.FIG.8is a flowchart illustrating an example flow of the processing performed by estimating device1to create the estimating model33. First, the estimating model creating section46retrieves the training histological image32from the storage section3(step S11), and selects, from the training histological image32, a training histological image that has not been previously selected (for example, the histological image with a training histological image ID, “P1” inFIG.6) (step S12). Then, the binarizing section41generates, from the histological image selected by the estimating model creating section46, a plurality of binarized images associated with the respective binarization reference values different from each other (step S13). 
Subsequently, the Betti number calculating section42calculates the one-dimensional Betti number b1, the zero-dimensional Betti number b0, and the ratio R for each binarized image, in the same manner as in step S3inFIG.5. Then, the statistic calculating section43calculates the statistic T1relating to the distribution of the one-dimensional Betti number b1calculated for the respective binarized images, the statistic T2relating to the distribution of the zero-dimensional Betti number b0calculated for the respective binarized images, and the statistic T3relating to the distribution of the ratio R calculated for the respective binarized images (step S14). Next, the estimating section44feeds input data including the statistic T1, the statistic T2, and the statistic T3to the candidate estimating model (step S15), and outputs the estimation result obtained by estimating the degree of differentiation of a cell included in tissue in the histological image (step S16). Subsequently, the estimating model creating section46compares the estimation result output from the estimating section44to the differentiation information associated with the histological image selected in step S12to calculate errors (step S17). In addition, the estimating model creating section46updates the candidate estimating model having outputted the estimation result in step S16, such that the calculated errors are minimized (step S18). For example, when a k-nearest neighbor algorithm is used for the machine learning algorithm, a hyper-parameter k may be updated. In a case where not all of the training histological images included in the training histological image32are selected in step S12(NO in step S19), the estimating model creating section46returns to step S12to select, from the training histological image32, a histological image that has not been selected. For example, in a case where the histological image with a training histological image ID "P1" is selected, the histological image with a training histological image ID "P2" will be selected next (seeFIG.6). On the other hand, in a case where all of the training histological images included in the training histological image32are selected in step S12(YES in step S19), the estimating model creating section46stores the current candidate estimating model in the storage section3as the estimating model33(trained) (step S20). The estimating model thus created can output a highly precise estimation result relating to the degree of differentiation of the cell included in the tissue in the histological image31in response to inputting the first statistic T1, the second statistic T2, and the third statistic T3calculated for a given histological image (for example, the histological image31inFIG.3). It should be noted that the estimating model33may be created by causing a computer other than the estimating device1to perform the processing illustrated inFIG.8. In this case, the trained estimating model33may be installed on the estimating device1. 
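The model creation described above can be sketched with scikit-learn as follows. This is a minimal illustration under assumed tooling (scikit-learn is not named in the description): the statistics are normalized on the basis of their average value and dispersion, a weighted k-nearest neighbor classifier plays the role of the candidate estimating model, and the update of the hyper-parameter k so that the errors are minimized is expressed as a cross-validated grid search; the training matrix X (one characteristic vector per training histological image) and the label vector y (the differentiation information, for example grade 3 or 4) are placeholders.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV

# Placeholder learning data: one row of statistics per training histological image,
# labels taken from the differentiation information given in advance.
rng = np.random.default_rng(0)
X = rng.random((200, 21))
y = rng.integers(3, 5, 200)

candidate = Pipeline([
    ("scale", StandardScaler()),                        # normalize by average value and dispersion
    ("knn", KNeighborsClassifier(weights="distance")),  # weighted k-nearest neighbor algorithm
])

# Update of the hyper-parameter k expressed as a cross-validated search over k = 3 .. 20.
search = GridSearchCV(candidate, {"knn__n_neighbors": list(range(3, 21))}, cv=5)
search.fit(X, y)
estimating_model = search.best_estimator_     # trained estimating model (estimating model 33)
print(search.best_params_, estimating_model.predict(X[:1]))
```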
(Assessment of Estimation Precision) The following description will discuss the result of assessment on the estimation precision obtained when the image analysis method in accordance with the present invention is applied to needle biopsy images, and Gleason's grades of prostate cancer are estimated from the respective needle biopsy images with reference toFIGS.9and10.FIGS.9and10are views illustrating the results of comparison between an estimation precision obtained by the image analysis method in accordance with an aspect of the present invention and estimation precisions obtained by other known image analysis methods. To assess the estimation precision, 18276 image patches are used, which are extracted from 43 needle biopsy images that have been given results of the determination made by an expert pathologist. As an example, the 43 needle biopsy images are images each including a prostate cancerous region which is classified into grade 3 or 4 of Gleason's grade. This is to assess the estimation precision obtained when grades 3 and 4 are discriminated. The discrimination between grades 3 and 4 is the most difficult in Gleason's grade. FIG.9is a view illustrating indicator values calculated on the basis of an estimation result based on the individual image patches, andFIG.10is a view illustrating indicator values calculated on the basis of an estimation result based on the individual needle biopsy images. InFIGS.9and10, "SRHP" is an abbreviation for "Statistic Representation of Homology Profile" and means the image analysis method in accordance with the present invention. "DLGg" means the approach used by Arvaniti et al. to estimate Gleason's grade for prostate cancer (Arvaniti et al., Sci. Rep. Vol. 8, 2018, doi:10.1038/s41598-018-30535-1). "DLGg" uses a neural network that has undergone supervised learning. "SSAE" means a neural network to which the unsupervised "Stacked Sparse Autoencoder" learning algorithm developed by Xu et al. (Xu et al., IEEE Trans. Med. Imaging Vol. 35(1), p 119-130, 2016) is applied. "MATF" is an abbreviation for "Morphological, Architectural and Textures Features" and means the approach developed by Ali et al. (Ali et al., Comput. Med. Imaging Graph. Vol. 41 p 3-13, 2015). FIG.9illustrates the following six assessment indicators: Area under curve (AUC), Accuracy, Recall, Precision, Specificity, and F1 score. The SRHP yields higher values than any other approach in terms of AUC (0.96), which represents discriminant ability, Accuracy (89.02%), Recall (0.94), and F1 score (0.89), and yields favorable values in terms of Precision (0.84) and Specificity (0.84). This suggests that SRHP successfully estimates the degree of differentiation of a cell included in tissue in the image patch extracted from the needle biopsy image with high precision. Further, as illustrated inFIG.10, SRHP successfully discriminates between grades 3 and 4 of Gleason's grade with high precision for all of the needle biopsy images. This suggests that the estimation by SRHP has robustness. [Influence of Hyper-Parameter k on Estimating Model Created Using k-Nearest Neighbor Algorithm] The following description will discuss the influence of the hyper-parameter k on the estimating model33created using a k-nearest neighbor algorithm with reference toFIG.11.FIG.11is a view illustrating various assessment indicators relating to precision of the estimation result output from the estimating model33as a function of a hyper-parameter k. 
As illustrated inFIG.11, each of the assessment indicators remains high even when the hyper-parameter k is set to any of the values ranging from 3 to 20. This suggests that the estimating model33in the image analysis method in accordance with the present invention is robust to the hyper-parameter k. Embodiment 2 The following description will discuss another embodiment of the present invention. For convenience of description, members having functions identical to those of the respective members described in Embodiment 1 are given respective identical reference numerals, and a description of those members is omitted. AlthoughFIG.3illustrates an example case in which the medical institution H1has introduced the estimating system100, the present invention is not limited to such an example case. For example, an estimating device1amay be connected to the external device8of a medical institution H2in the manner that allows the estimating device1ato communicate with the external device8over a communication network50. An estimating system100aemploying such a configuration will be described with reference toFIG.12.FIG.12is a functional block diagram illustrating an example configuration of the estimating system100ain accordance with an aspect of the present invention. As illustrated inFIG.12, the estimating device1aincludes a communicating section6serving as a communication interface with the external device8of the medical institution H2. This allows the image obtaining section2to obtain a histological image from the external device8of the medical institution H2over the communication network50. Further, the estimating device1asends the estimation result output from the estimating section44to the external device8over the communication network50. The estimating device1amay be connected to external devices8aof a plurality of medical institutions in the manner that allows the estimating device1ato communicate with the external devices8a. In this case, each of the histological images sent from the medical institutions H2to the estimating device1amay be given an image ID indicating the histological image and an identification number (for example, a patient ID) specific to a subject (patient) from which the tissue in the histological image is taken. In addition, each histological image may have been given a medical institution ID indicating the medical institution H2from which the histological image is sent. This configuration enables the estimating device1ato provide, to each of a plurality of medical institutions that sends histological image data, the estimation result obtained by analyzing the histological image having been received from each of the medical institutions. For example, a supervisor who supervises the estimating device1amay charge each medical institution a predetermined fee as a remuneration for the service offered to provide the estimation result of the estimation from the received histological image. The estimating device1amay deliver, to a computer (for example, the presenting device5provided in the medical institution H2) with which the estimating device1ais capable of communicating, a computer program (hereinafter referred to as an image analysis application) for the computer to calculate the first statistic, the second statistic, and the third statistic from the histological image. 
In this case, for example, the estimating device1amay send a notification for charging a remuneration for the image analysis application to the computer that has the delivered application installed thereon. In this manner, the supervisor supervising the estimating device1amay charge the medical institution H2a predetermined fee as a remuneration for service of providing the image analysis application. Embodiment 3 The following describes another embodiment of the present invention. For convenience of description, members having functions identical to those of the respective members described in Embodiments 1 and 2 are given respective identical reference numerals, and a description of those members is omitted. The estimating device1illustrated inFIG.3and the estimating device1aillustrated inFIG.12each have both an image analysis function performed on a histological image and an estimation function using the estimating model33. In other words, each of the estimating device1illustrated inFIG.3and the estimating device1aillustrated inFIG.12is a single device into which an image analysis device1A and an estimating device1B (described later) are unified. However, the present invention is not limited to such a configuration. For example, the functions of the estimating devices1and1amay be implemented by a combination of the image analysis device1A, which includes the binarizing section41, the Betti number calculating section42, and the statistic calculating section43, and the estimating device1B, which includes a control section4B. The control section4B includes the estimating section44. The following description will discuss an estimating system100bemploying such a configuration, with reference toFIG.13.FIG.13is a functional block diagram illustrating an example configuration of the estimating system100bin accordance with one aspect of the present invention. As illustrated inFIG.13, the image analysis device1A includes an external device8of the medical institution H2and a communicating section6A serving as a communication interface with the estimating device1B. This allows the image obtaining section2to obtain a histological image from the external device8of the medical institution H2over a communication network. Further, the image analysis device1A sends, to the estimating device1B over the communication network50, the first statistic T1, the second statistic T2, and the third statistic T3calculated by the statistic calculating section43. The estimating device1B includes a communicating section6B serving as a communication interface with the image analysis device1A and the external device8of the medical institution H2. This allows the estimating section44to obtain the first statistic T1, the second statistic T2, and the third statistic T3from the image analysis device1A over a communication network. Further, the estimating device1B sends an estimation result output from the estimating section44to the external device8over the communication network50. The estimating device1B may be connected to a plurality of image analysis devices1A in the manner that allows the estimating device1B to communicate with the image analysis devices1A. In this case, the statistics (including the first statistic T1, the second statistic T2, and the third statistic T3) that are sent from the image analysis devices1A to the estimating device1B may be given various IDs. 
The various IDs may include, for example, an identification number (patient ID) specific to a subject from which the tissue in a histological image as an analysis target is taken, a medical institution ID indicating the medical institution H2from which the corresponding histological image is sent, and a device ID specific to the image analysis device1A that has performed the image analysis. Such a configuration enables the image analysis device1A to analyze a histological image obtained from each of a plurality of medical institutions to calculate predetermined statistics, and send them to estimating device1B. The estimating device1B can output the estimation result using the statistics obtained from the image analysis device1A, and provide the estimation result to the medical institution from which the histological image data is sent. For example, a supervisor who supervises the estimating device1B may charge each medical institution a predetermined fee as a remuneration for service of providing the estimation result obtained by the estimation from the histological image obtained from the medical institution. [Modification] At least one of the image analysis device1A and the estimating device1B may deliver, to a computer (for example, the presenting device5provided in the medical institution H2) with which the image analysis devices1A and1B are capable of communicating, a computer program (hereinafter referred to as an image analysis application) for the computer to serve as the image analysis device1A. The computer on which the image analysis application is installed can serve as the image analysis device1A. In this case, for example, the image analysis device1A or the estimating device1B may send a notification for charging a remuneration for the image analysis application to the computer that has the delivered application installed thereon. In this manner, the supervisor who supervises the estimating device1B can receive a predetermined fee from the medical institutions H2as a remuneration for the service of providing the image analysis application. The statistics (including the first statistic T1, the second statistic T2, and the third statistic T3) that are sent to the estimating device1B from the computer provided in the medical institution H2and having the image analysis application installed thereon may be given various IDs. The various IDs may include, for example, an identification number (patient ID) specific to a subject from which the tissue in a histological image as an analysis target is taken, a medical institution ID indicating the medical institution H2from which the corresponding histological image is sent, and a device ID specific to the image analysis device1A that has performed the image analysis. This configuration eliminates the need for the medical institution H2to send any histological image to the outside of the medical institution H2(for example, to the image analysis device1A). The medical institution H2can, by using the image analysis application, analyze each histological image to calculate the first statistic T1, the second statistic T2, and the third statistic T3from the histological image, and send them to the estimating device1B. The histological image, which relates to the diagnostic information of a subject, needs to be sent to the outside of the medical institution H2in a manner which gives consideration for the protection of personal information. 
This configuration eliminates the need for sending the histological image to the outside of the medical institution H2. In addition, this configuration enables a lower communication load than a configuration in which the histological image itself is sent.

[Implementation by Software] The control blocks of the estimating devices1and1a, and the image analysis device1A (particularly, control section4), and the control block of the estimating device1B (control section4B) can be realized by a logic circuit (hardware) provided in an integrated circuit (IC chip) or the like, or can be alternatively realized by software. In the latter case, the estimating devices1and1a, the image analysis device1A, and the estimating device1B include a computer that executes instructions of a program that is software realizing the foregoing functions. The computer, for example, includes at least one processor and a computer readable storage medium storing the program. An object of the present invention can be achieved by the processor of the computer reading and executing the program stored in the storage medium. Examples of the processor encompass a central processing unit (CPU). Examples of the storage medium encompass “a non-transitory tangible medium” such as a read only memory (ROM), a tape, a disk, a card, a semiconductor memory, and a programmable logic circuit. The computer may further include a random access memory (RAM) in which the program is loaded. Further, the program may be supplied to or made available to the computer via any transmission medium (such as a communication network or a broadcast wave) which allows the program to be transmitted. Note that an aspect of the present invention can also be achieved in the form of a data signal in which the program is embodied via electronic transmission and which is embedded in a carrier wave. The present invention is not limited to the embodiments, but can be altered by a skilled person in the art within the scope of the claims. The present invention also encompasses, in its technical scope, any embodiment derived by combining technical means disclosed in differing embodiments.

REFERENCE SIGNS LIST
1A: image analysis device
1, 1a, 1B: estimating device
5: presenting device
8: external device
41: binarizing section
42: Betti number calculating section (characteristic numerical value calculating section)
43: statistic calculating section
44: estimating section
S3: characteristic numerical value calculation step
S4: statistic calculation step
S5, S6: estimation step
| 60,202 |
11861837 | In accordance with common practice the various features illustrated in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may not depict all of the components of a given system, method or device. Finally, like reference numerals may be used to denote like features throughout the specification and figures. DESCRIPTION Numerous details are described in order to provide a thorough understanding of the example implementations shown in the drawings. However, the drawings merely show some example aspects of the present disclosure and are therefore not to be considered limiting. Those of ordinary skill in the art will appreciate that other effective aspects or variants do not include all of the specific details described herein. Moreover, well-known systems, methods, components, devices and circuits have not been described in exhaustive detail so as not to obscure more pertinent aspects of the example implementations described herein. FIG.1illustrates a physical environment5including a device10with a display15. The device10displays content20(including content item30) to a user25. The content may include a video, a presentation, other time-varying content, or content presented as part of a computer-generated reality (CGR) environment. The device10is configured to obtain physiological data (e.g., pupillary data, electrocardiograph (ECG) data, etc.) from the user25via a sensor35directed in a direction40towards the user25. While the device10is illustrated as a hand-held device, other implementations involve devices with which a user interacts without holding and devices worn by a user. In some implementations, as illustrated inFIG.1, the device10is a handheld electronic device (e.g., a smartphone or a tablet). In some implementations the device10is a laptop computer or a desktop computer. In some implementations, the device10has a touchpad and, in some implementations, the device10has a touch-sensitive display (also known as a “touch screen” or “touch screen display”). In some implementations, the device10is a wearable head mounted display (HMD) for example, worn on a head27of the user25. Moreover, while this example and other examples discussed herein illustrate a single device10in a physical environment5, the techniques disclosed herein are applicable to multiple devices as well as to multiple real world environments. For example, the functions of device10may be performed by multiple devices. In some implementations, the device10includes an eye tracking system for detecting eye position and eye movements. For example, an eye tracking system may include one or more infrared (IR) light-emitting diodes (LEDs), an eye tracking camera (e.g., near-IR (NIR) camera), and an illumination source (e.g., an NIR light source) that emits light (e.g., NIR light) towards the eyes of the user25. Moreover, the illumination source of the device10may emit NIR light to illuminate the eyes of the user25and the NIR camera may capture images of the eyes of the user25. In some implementations, images captured by the eye tracking system may be analyzed to detect position and movements of the eyes of the user25, or to detect other information about the eyes such as pupil dilation or pupil diameter. Moreover, the point of gaze estimated from the eye tracking images may enable gaze-based interaction with content. 
In some implementations, the device10has a graphical user interface (GUI), one or more processors, memory and one or more modules, programs or sets of instructions stored in the memory for performing multiple functions. In some implementations, the user25interacts with the GUI through finger contacts and gestures on the touch-sensitive surface. In some implementations, the functions include image editing, drawing, presenting, word processing, website creating, disk authoring, spreadsheet making, game playing, telephoning, video conferencing, e-mailing, instant messaging, workout support, digital photographing, digital videoing, web browsing, digital music playing, and/or digital video playing. Executable instructions for performing these functions may be included in a computer readable storage medium or other computer program product configured for execution by one or more processors. In some implementations, the device10employs various physiological sensor, detection, or measurement systems. Detected physiological data may include, but is not limited to, electroencephalography (EEG), electrocardiography (ECG), electromyography (EMG), functional near infrared spectroscopy signal (fNIRS), blood pressure, skin conductance, or pupillary response. Moreover, the device10may simultaneously detect multiple forms of physiological data in order to benefit from synchronous acquisition of physiological data. Moreover, in some implementations, the physiological data represents involuntary data, e.g., responses that are not under conscious control. For example, a pupillary response may represent an involuntary movement. In some implementations, one or both eyes45of the user25, including one or both pupils50of the user25present physiological data in the form of a pupillary response. The pupillary response of the user25results in a varying of the size or diameter of the pupil50, via the optic and oculomotor cranial nerve. For example, the pupillary response may include a constriction response (miosis), e.g., a narrowing of the pupil, or a dilation response (mydriasis), e.g., a widening of the pupil. In some implementations, the device10may detect patterns of physiological data representing a time-varying pupil diameter. FIG.2illustrates a pupil50of the user25ofFIG.1in which the diameter of the pupil50varies with time. As shown inFIG.2, a present physiological state (e.g., present pupil diameter55) may vary in contrast to a past physiological state (e.g., past pupil diameter60). For example, the present physiological state may include a present pupil diameter and a past physiological state may include a past pupil diameter. The physiological data may represent a response pattern that dynamically varies over time. The device10may use the physiological data to implement the techniques disclosed herein. For example, a user's pupillary response to a luminance change event in the content may be compared with the user's prior responses to similar luminance change events in the same or other content. FIG.3, in accordance with some implementations, is a flowchart representation of a method300for identifying a state of a user based on how the user's pupil responds to a luminance change event. In some implementations, the method300is performed by one or more devices (e.g., device10). The method300can be performed at a mobile device, HMD, desktop, laptop, or server device. The method300can be performed on an HMD that has a screen for displaying 3D images or a screen for viewing stereoscopic images. 
In some implementations, the method300is performed by processing logic, including hardware, firmware, software, or a combination thereof. In some implementations, the method300is performed by a processor executing code stored in a non-transitory computer-readable medium (e.g., a memory). At block310, the method300displays content on a display of a device. The content may include a video, a presentation, other time-varying content, or content presented as part of a computer-generated reality (CGR) environment. A computer-generated reality (CGR) environment refers to a wholly or partially simulated environment that people sense and/or interact with via an electronic system. In CGR, a subset of a person's physical motions, or representations thereof, are tracked, and, in response, one or more characteristics of one or more virtual objects simulated in the CGR environment are adjusted in a manner that comports with at least one law of physics. For example, a CGR system may detect a person's head turning and, in response, adjust graphical content and an acoustic field presented to the person in a manner similar to how such views and sounds would change in a physical environment. In some situations (e.g., for accessibility reasons), adjustments to characteristic(s) of virtual object(s) in a CGR environment may be made in response to representations of physical motions (e.g., vocal commands). A person may sense and/or interact with a CGR object using any one of their senses, including sight, sound, touch, taste, and smell. For example, a person may sense and/or interact with audio objects that create 3D or spatial audio environment that provides the perception of point audio sources in 3D space. In another example, audio objects may enable audio transparency, which selectively incorporates ambient sounds from the physical environment with or without computer-generated audio. In some CGR environments, a person may sense and/or interact only with audio objects. Examples of CGR include virtual reality and mixed reality. A virtual reality (VR) environment refers to a simulated environment that is designed to be based entirely on computer-generated sensory inputs for one or more senses. A VR environment comprises virtual objects with which a person may sense and/or interact. For example, computer-generated imagery of trees, buildings, and avatars representing people are examples of virtual objects. A person may sense and/or interact with virtual objects in the VR environment through a simulation of the person's presence within the computer-generated environment, and/or through a simulation of a subset of the person's physical movements within the computer-generated environment. In contrast to a VR environment, which is designed to be based entirely on computer-generated sensory inputs, a mixed reality (MR) environment refers to a simulated environment that is designed to incorporate sensory inputs from the physical environment, or a representation thereof, in addition to including computer-generated sensory inputs (e.g., virtual objects). On a virtuality continuum, a mixed reality environment is anywhere between, but not including, a wholly physical environment at one end and virtual reality environment at the other end. In some MR environments, computer-generated sensory inputs may respond to changes in sensory inputs from the physical environment. 
Also, some electronic systems for presenting an MR environment may track location and/or orientation with respect to the physical environment to enable virtual objects to interact with real objects (that is, physical articles from the physical environment or representations thereof). For example, a system may account for movements so that a virtual tree appears stationary with respect to the physical ground. Examples of mixed realities include augmented reality and augmented virtuality. An augmented reality (AR) environment refers to a simulated environment in which one or more virtual objects are superimposed over a physical environment, or a representation thereof. For example, an electronic system for presenting an AR environment may have a transparent or translucent display through which a person may directly view the physical environment. The system may be configured to present virtual objects on the transparent or translucent display, so that a person, using the system, perceives the virtual objects superimposed over the physical environment. Alternatively, a system may have an opaque display and one or more imaging sensors that capture images or video of the physical environment, which are representations of the physical environment. The system composites the images or video with virtual objects, and presents the composition on the opaque display. A person, using the system, indirectly views the physical environment by way of the images or video of the physical environment, and perceives the virtual objects superimposed over the physical environment. As used herein, a video of the physical environment shown on an opaque display is called “pass-through video,” meaning a system uses one or more image sensor(s) to capture images of the physical environment, and uses those images in presenting the AR environment on the opaque display. Further alternatively, a system may have a projection system that projects virtual objects into the physical environment, for example, as a hologram or on a physical surface, so that a person, using the system, perceives the virtual objects superimposed over the physical environment. An augmented reality environment also refers to a simulated environment in which a representation of a physical environment is transformed by computer-generated sensory information. For example, in providing pass-through video, a system may transform one or more sensor images to impose a select perspective (e.g., viewpoint) different than the perspective captured by the imaging sensors. As another example, a representation of a physical environment may be transformed by graphically modifying (e.g., enlarging) portions thereof, such that the modified portion may be representative but not photorealistic versions of the originally captured images. As a further example, a representation of a physical environment may be transformed by graphically eliminating or obfuscating portions thereof. An augmented virtuality (AV) environment refers to a simulated environment in which a virtual or computer generated environment incorporates one or more sensory inputs from the physical environment. The sensory inputs may be representations of one or more characteristics of the physical environment. For example, an AV park may have virtual trees and virtual buildings, but people with faces photorealistically reproduced from images taken of physical people. As another example, a virtual object may adopt a shape or color of a physical article imaged by one or more imaging sensors.
As a further example, a virtual object may adopt shadows consistent with the position of the sun in the physical environment. There are many different types of electronic systems that enable a person to sense and/or interact with various CGR environments. Examples include head mounted systems, projection-based systems, heads-up displays (HUDs), vehicle windshields having integrated display capability, windows having integrated display capability, displays formed as lenses designed to be placed on a person's eyes (e.g., similar to contact lenses), headphones/earphones, speaker arrays, input systems (e.g., wearable or handheld controllers with or without haptic feedback), smartphones, tablets, and desktop/laptop computers. A head mounted system may have one or more speaker(s) and an integrated opaque display. Alternatively, a head mounted system may be configured to accept an external opaque display (e.g., a smartphone). The head mounted system may incorporate one or more imaging sensors to capture images or video of the physical environment, and/or one or more microphones to capture audio of the physical environment. Rather than an opaque display, a head mounted system may have a transparent or translucent display. The transparent or translucent display may have a medium through which light representative of images is directed to a person's eyes. The display may utilize digital light projection, OLEDs, LEDs, uLEDs, liquid crystal on silicon, laser scanning light source, or any combination of these technologies. The medium may be an optical waveguide, a hologram medium, an optical combiner, an optical reflector, or any combination thereof. In one embodiment, the transparent or translucent display may be configured to become opaque selectively. Projection-based systems may employ retinal projection technology that projects graphical images onto a person's retina. Projection systems also may be configured to project virtual objects into the physical environment, for example, as a hologram or on a physical surface. At block320, the method300identifies a luminance change event associated with the content. The luminance change may be a quick change to brighter or dimmer content that is detected by monitoring and tracking the luminance of the content over time. The luminance change may be a change in brightness (e.g., average brightness) exceeding a threshold value within a predetermined time period. In some implementations, pixel information is collected and the collected RGB values (or other pixel values) are converted into an estimated perceived luminance. The luminance may be determined using a linear formula (e.g., LUM = 0.21 Red(R) + 0.72 Green(G) + 0.07 Blue(B)) or non-linear formula (e.g., LUM = sqrt(0.299R² + 0.587G² + 0.115B²)). The luminance change event may occur naturally as the user scans the content, for example, as the user looks from a brightly lit object to a dimly lit object within the content. The luminance change event may be embedded intentionally in the content. In one example, artificial luminance probes (e.g., small abrupt luminance changes) are inserted into the content once every few seconds. In another example, artificial luminance probes are sequential (e.g., inserted with a specific frequency over a few seconds). In some implementations, the luminance change is foveal, e.g., involving changing luminance of a single or group of items upon which the user is focusing. The items upon which the user is focusing may be determined via eye tracking and scene segmentation algorithms.
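The luminance computation and change-event detection described for block320can be illustrated with a short sketch. The following Python fragment is only an illustrative assumption of one way to implement it: it applies the linear luminance formula quoted above to RGB frames (optionally restricted to a region of interest, such as an area around the gaze point for foveal events) and flags frames where the mean luminance shifts by more than a threshold within a short window. The function names, the threshold, and the window length are hypothetical and are not taken from the disclosure.

import numpy as np

# Illustrative sketch, not the patented implementation. Frames are assumed
# to be H x W x 3 arrays of 8-bit RGB values.
def frame_luminance(rgb_frame, roi=None):
    # Optionally crop to a (row_min, row_max, col_min, col_max) region,
    # e.g., a region around the user's gaze point for foveal events.
    if roi is not None:
        r0, r1, c0, c1 = roi
        rgb_frame = rgb_frame[r0:r1, c0:c1]
    r = rgb_frame[..., 0].astype(float)
    g = rgb_frame[..., 1].astype(float)
    b = rgb_frame[..., 2].astype(float)
    # Linear perceived-luminance approximation quoted in the text above.
    return float(np.mean(0.21 * r + 0.72 * g + 0.07 * b))

def detect_luminance_events(frames, threshold=20.0, window=3):
    # Flag indices where mean luminance changes by more than `threshold`
    # (brighter or dimmer) within `window` consecutive frames.
    lum = np.array([frame_luminance(f) for f in frames])
    events = []
    for i in range(window, len(lum)):
        delta = lum[i] - lum[i - window]
        if abs(delta) > threshold:
            events.append((i, delta))
    return events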
In some implementations, a gaze point is determined and objects within a circular region around the gaze point are considered for the luminance change event detection. The radius or other dimensions of the region of focus may be personalized based on the particular user's gaze characteristics. In some implementations, the luminance change event is extra-foveal, e.g., involving luminance change occurring outside of a region (e.g., a circular region) around the current focus. In some implementations, the luminance change event is feature-based, e.g., luminance changes based on the changing in intensity of certain colored objects or specific textures in a scene. In some implementations, the luminance change is achieved by altering inter-frame content. For example, head-mounted devices (HMDs) may use low persistent displays that briefly display black between rendered frames (either through hardware or software means). These inter-frame moments may be intentionally altered to result in luminance change events within the content over time. For example, some inter-frame may be altered to display white, grey, or otherwise brighter content than would otherwise be displayed. At block330, the method300obtains, using a sensor, a pupillary response of a user perceiving the luminance change event. In some implementations, the diameter of the pupil is measured over a period of time using an eye-tracking system. In some implementations, images captured by the eye tracking system may be analyzed to detect positions and movements of the eyes of the user and to detect other information about the eyes, such as pupil dilation or pupil diameter. In some implementations, an eye tracking system implemented in a head-mounted device (HMD) collects pupil radius data as a user engages with content. At block340, the method300assesses a state of the user based on the pupillary response. Examples of user states include, but are not limited to, attentive, inattentive, perceptive, focused, imperceptive, distracted, interested, disinterested, curious, inquisitive, doubtful, critical, certain, deciding, assessing, assessing logically, assessing emotionally, receptive, unreceptive, day-dreaming, thinking, observing, relaxed, processing information, relating to past experience, remembering, mindful, mind-wandering, tired, rested, alert, attracted, repelled, guilty, and indignant. Assessing the state of the user may involve determining a level or extent of a type of state, e.g., determining the extent of attentiveness or an extent of inattentiveness on a numerical scale. As a particular example, depending on the pupillary response magnitude the method300may determine that a user was attentive with a score of 0.6 on a normalized scale from 0 to 1 (0 being inattentive and 1 being fully focused). In some implementations, the method300may classify or otherwise determine more than two states depending on the dynamics of the pupillary response after a luminance event (for example, based on how long it takes for pupil radius to reach a minimum, how quickly the pupil radius returns to a normal level, or other dynamic behavior of the pupil). For example, the method300may be configured to determine that the user is inattentive and further configured to determine that a reason that the user is inattentive is because the user is tired, rested but day-dreaming, distracted etc. The method may use machine learning models for such sub-category classifications. 
In some implementations, the state is determined based on a magnitude of change of pupil diameter or radius. In some implementations, the state is determined based on a pattern or other dynamics of the pupillary response. In some implementations, the state is determined using a statistical or machine learning-based classification technique. In some implementations, pupil responses are aggregated and classified into different states using statistical or machine-learning based techniques. For example, a machine-learning model may be trained to classify a pupillary response to a luminance change event into one of a fixed number of classes. In one example, a pupillary response is classified as attentive, inattentive, or neutral. In another example, a pupillary response is classified as observing or processing information. In some implementations, the pupillary response is compared with the user's own prior responses to determine the user's current state. In some implementations, the pupillary response is assessed based on the pupillary responses of multiple users to various types of content, e.g., comparing the user's current pupillary response with a typical or average user response to a similar luminance change event. Observing repeated responses of the pupil to luminance changes over a duration of time (e.g., seconds to minutes) may provide insights about the underlying state of the user at different time scales. These luminance changes may occur naturally with the content, occur as the user scans different locations of the screen of the device, or can be embedded into the content, unnoticeable to the user, for example, using minor adaptations to the underlying RGB (or gray) scale of the pixel values. In some implementations, the method300provides feedback to the user based on the user's state. For example, the method300may detect an attention lapse and provide a notification for the user to take a break, re-watch a particular portion of the content, or change the content presentation speed. In another example, the method300may detect a heightened attention moment in the content and suggest similar moments of content for the user. In another example, a notification may be provided for a user to view content at a particular time or under particular conditions based on determining a state profile for the user. For example, such a profile may indicate that the user is more attentive in the mornings than in the afternoon or evening and a notification may be provided at 8:30 am suggesting to the user that now is a good time to study or work with the content. In some implementations, the method300provides feedback to a content creator to facilitate improvements to the content or future/related content. A notification may identify a portion of the content associated with a state for one or more users, e.g., identifying that during a particular portion of the content users were typically less attentive than during another portion of the content. In some implementations, state data for multiple users who have viewed the content is aggregated to provide feedback regarding the content. The content creator may revise the content based on such feedback to make that portion shorter or more attention grabbing. In some implementations, content is automatically adjusted or presented according to automatically-determined display parameters based on a user's state. For example, content may be broken up into smaller pieces with breaks in between based on determining that the user is in a particular state.
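As one concrete, purely illustrative reading of the classification described above, the sketch below summarizes a pupil-radius trace recorded for a few seconds after a luminance change event into simple features (signed peak change from baseline, time to the peak, and recovery time) and maps the response magnitude to a normalized attention-like score on the 0 (inattentive) to 1 (fully focused) scale mentioned earlier, following the example in which larger pupil responses are associated with a relatively inattentive state. The feature set, sampling rate, and normalization constant are assumptions rather than values taken from the disclosure; a trained statistical or machine-learning classifier could be substituted for the final scoring step.

import numpy as np

# Illustrative sketch only; not the disclosed algorithm.
def response_features(radius, sample_rate_hz=60.0, baseline_s=0.5):
    # radius: 1-D array of pupil-radius samples beginning at the event.
    radius = np.asarray(radius, dtype=float)
    baseline = radius[: int(baseline_s * sample_rate_hz)].mean()
    deviation = radius - baseline
    peak_idx = int(np.argmax(np.abs(deviation)))
    magnitude = float(deviation[peak_idx])        # signed peak change
    time_to_peak = peak_idx / sample_rate_hz      # response latency (s)
    recovered = np.abs(deviation[peak_idx:]) < 0.1 * abs(magnitude)
    recovery_time = (float(np.argmax(recovered)) / sample_rate_hz
                     if recovered.any() else float("inf"))
    return magnitude, time_to_peak, recovery_time

def attention_score(magnitude, max_expected_change=1.0):
    # Larger responses are treated here as less attentive, per the example
    # above; the score is normalized to 0 (inattentive) .. 1 (focused).
    ratio = min(abs(magnitude) / max_expected_change, 1.0)
    return 1.0 - ratio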
The method300provides a flexible and subtle way to determine the attention levels and other states of a user without requiring a behavioral response and interruption. The method300may be provided to users who want to learn a new skill, watch a lecture, or perform a task that requires a long attention span, amongst numerous other uses cases, and may provide such users with information to facilitate the users' purposes. For example, the method300may inform a user when his or her attention is low and suggest a break. Such feedback may allow more productive working and learning schedules. FIG.4is a chart400illustrating exemplary foveal luminance recordings410from a user watching an instructional cooking video. The mean luminance value420fluctuates over time as the user's eyes saccade and the content changes. Time instances when the luminance rapidly decreases are indicated with dots (e.g., dot430). In some implementations, physiological (e.g., pupillary) responses may be measured for a few seconds after each instance in which the luminance rapidly decreases. Similarly, physiological (e.g., pupillary) responses may be measured for a few seconds after each instance in which the luminance rapidly increases. FIG.5is a chart500illustrating average pupil radius responses to the luminance increases inFIG.4. In this example, the average pupil radius responses to sudden luminance increases during the first half510are different than the average pupil radius responses to sudden luminance increases during the second half520. For example, the average magnitude of change of the pupil radius was greater during the second half than the first half. In some implementations, larger pupil responses are associated with a relatively inattentive state. Thus, the differences in the average pupil radius responses may indicate that the user was in a more attentive state while viewing the first half of the content than while viewing the second half of the content. Note that shaded areas in the chart represent standard deviation. FIG.6is a chart600illustrating average pupil radius responses to the luminance decreases inFIG.4. In this example, the average pupil radius responses to sudden luminance decreases during the first half610are different than the average pupil radius responses to sudden luminance decreases during the second half620. For example, the average magnitude of change of the pupil radius was greater during the second half than the first half. In some implementations, larger pupil responses are associated with a relatively inattentive state. Thus, the differences in the average pupil radius responses may indicate that the user was in a more attentive state while viewing the first half of the content than while viewing the second half of the content. Note that shaded areas in the chart represent standard deviation. FIG.7is a block diagram of an example of a device10in accordance with some implementations. While certain specific features are illustrated, those skilled in the art will appreciate from the present disclosure that various other features have not been illustrated for the sake of brevity, and so as not to obscure more pertinent aspects of the implementations disclosed herein. 
To that end, as a non-limiting example, in some implementations the device10includes one or more processing units702(e.g., microprocessors, ASICs, FPGAs, GPUs, CPUs, processing cores, and/or the like), one or more input/output (I/O) devices and sensors706, one or more communication interfaces708(e.g., USB, FIREWIRE, THUNDERBOLT, IEEE 802.3x, IEEE 802.11x, IEEE 802.16x, GSM, CDMA, TDMA, GPS, IR, BLUETOOTH, ZIGBEE, SPI, I2C, and/or the like type interface), one or more programming (e.g., I/O) interfaces710, one or more displays712, one or more interior and/or exterior facing image sensor systems714, a memory720, and one or more communication buses704for interconnecting these and various other components. In some implementations, the one or more communication buses704include circuitry that interconnects and controls communications between system components. In some implementations, the one or more I/O devices and sensors706include at least one of an inertial measurement unit (IMU), an accelerometer, a magnetometer, a gyroscope, a thermometer, one or more physiological sensors (e.g., blood pressure monitor, heart rate monitor, blood oxygen sensor, blood glucose sensor, etc.), one or more microphones, one or more speakers, a haptics engine, one or more depth sensors (e.g., a structured light, a time-of-flight, or the like), and/or the like. In some implementations, the one or more displays712are configured to present a user experience to the user25. In some implementations, the one or more displays712correspond to holographic, digital light processing (DLP), liquid-crystal display (LCD), liquid-crystal on silicon (LCoS), organic light-emitting field-effect transistor (OLET), organic light-emitting diode (OLED), surface-conduction electron-emitter display (SED), field-emission display (FED), quantum-dot light-emitting diode (QD-LED), microelectromechanical system (MEMS), a retinal projection system, and/or the like display types. In some implementations, the one or more displays712correspond to diffractive, reflective, polarized, holographic, etc. waveguide displays. In one example, the device10includes a single display. In another example, the device10includes a display for each eye of the user25, e.g., an HMD. In some implementations, the one or more displays712are capable of presenting CGR content. In some implementations, the one or more image sensor systems714are configured to obtain image data that corresponds to at least a portion of the face of the user25that includes the eyes of the user25. For example, the one or more image sensor systems714include one or more RGB camera (e.g., with a complementary metal-oxide-semiconductor (CMOS) image sensor or a charge-coupled device (CCD) image sensor), monochrome camera, IR camera, event-based camera, and/or the like. In various implementations, the one or more image sensor systems714further include illumination sources that emit light upon the portion of the face of the user25, such as a flash or a glint source. The memory720includes high-speed random-access memory, such as DRAM, SRAM, DDR RAM, or other random-access solid-state memory devices. In some implementations, the memory720includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory720optionally includes one or more storage devices remotely located from the one or more processing units702.
The memory720comprises a non-transitory computer readable storage medium. In some implementations, the memory720or the non-transitory computer readable storage medium of the memory720stores the following programs, modules and data structures, or a subset thereof, including an optional operating system730and a user experience module740. The operating system730includes procedures for handling various basic system services and for performing hardware dependent tasks. In some implementations, the user experience module740is configured to display content on electronic devices and assess the states of users viewing such content. To that end, in various implementations, the user experience module740includes a content unit742, a physiological tracking (e.g., pupil) unit744, and a user state unit746. In some implementations, the content unit742is configured to provide and/or track content for display on a device. The content unit742may be configured to monitor and track the luminance of the content over time and/or to identify luminance change events that occur within the content. In some implementations, the content unit742may be configured to inject artificial luminance change events into content using one or more of the techniques discussed herein or as otherwise may be appropriate. To these ends, in various implementations, the unit includes instructions and/or logic therefor, and heuristics and metadata therefor. In some implementations, the physiological tracking (e.g., pupil) unit744is configured to track a user's pupil or other physiological attributes using one or more of the techniques discussed herein or as otherwise may be appropriate. To these ends, in various implementations, the unit includes instructions and/or logic therefor, and heuristics and metadata therefor. In some implementations, the user state unit746is configured to assess the state of a user based on a physiological response using one or more of the techniques discussed herein or as otherwise may be appropriate. To these ends, in various implementations, the unit includes instructions and/or logic therefor, and heuristics and metadata therefor. Although the units and modules ofFIG.7are shown as residing on a single device (e.g., the device10), it should be understood that in other implementations, any combination of these units may be located in separate computing devices. Moreover,FIG.7is intended more as a functional description of the various features which are present in a particular implementation as opposed to a structural schematic of the implementations described herein. As recognized by those of ordinary skill in the art, items shown separately could be combined and some items could be separated. For example, some functional modules shown separately inFIG.7could be implemented in a single module and the various functions of single functional blocks could be implemented by one or more functional blocks in various implementations. The actual number of modules and the division of particular functions and how features are allocated among them will vary from one implementation to another and, in some implementations, depends in part on the particular combination of hardware, software, and/or firmware chosen for a particular implementation. FIG.8illustrates a block diagram of an exemplary head-mounted device1000in accordance with some implementations. The head-mounted device1000includes a housing1001(or enclosure) that houses various components of the head-mounted device1000.
The housing1001includes (or is coupled to) an eye pad (not shown) disposed at a proximal (to the user25) end of the housing1001. In various implementations, the eye pad is a plastic or rubber piece that comfortably and snugly keeps the head-mounted device1000in the proper position on the face of the user25(e.g., surrounding the eye of the user25). The housing1001houses a display1010that displays an image, emitting light towards or onto the eye of a user25. In various implementations, the display1010emits the light through an eyepiece having one or more lenses1005that refracts the light emitted by the display1010, making the display appear to the user25to be at a virtual distance farther than the actual distance from the eye to the display1010. For the user25to be able to focus on the display1010, in various implementations, the virtual distance is at least greater than a minimum focal distance of the eye (e.g., 7 cm). Further, in order to provide a better user experience, in various implementations, the virtual distance is greater than 1 meter. The housing1001also houses a tracking system including one or more light sources1022, camera1024, and a controller1080. The one or more light sources1022emit light onto the eye of the user25that reflects as a light pattern (e.g., a circle of glints) that can be detected by the camera1024. Based on the light pattern, the controller1080can determine an eye tracking characteristic of the user25. For example, the controller1080can determine a gaze direction and/or a blinking state (eyes open or eyes closed) of the user25. As another example, the controller1080can determine a pupil center, a pupil size, or a point of regard. Thus, in various implementations, the light is emitted by the one or more light sources1022, reflects off the eye of the user25, and is detected by the camera1024. In various implementations, the light from the eye of the user25is reflected off a hot mirror or passed through an eyepiece before reaching the camera1024. The display1010emits light in a first wavelength range and the one or more light sources1022emit light in a second wavelength range. Similarly, the camera1024detects light in the second wavelength range. In various implementations, the first wavelength range is a visible wavelength range (e.g., a wavelength range within the visible spectrum of approximately 400-700 nm) and the second wavelength range is a near-infrared wavelength range (e.g., a wavelength range within the near-infrared spectrum of approximately 700-1400 nm). In various implementations, eye tracking (or, in particular, a determined gaze direction) is used to enable user interaction (e.g., the user25selects an option on the display1010by looking at it), provide foveated rendering (e.g., present a higher resolution in an area of the display1010the user25is looking at and a lower resolution elsewhere on the display1010), or correct distortions (e.g., for images to be provided on the display1010). In various implementations, the one or more light sources1022emit light towards the eye of the user25which reflects in the form of a plurality of glints. In various implementations, the camera1024is a frame/shutter-based camera that, at a particular point in time or multiple points in time at a frame rate, generates an image of the eye of the user25. Each image includes a matrix of pixel values corresponding to pixels of the image which correspond to locations of a matrix of light sensors of the camera. 
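For illustration only, the following sketch shows one conventional way such a frame image (a matrix of pixel values) might be reduced to a pupil-radius estimate: threshold the darkest region of a grayscale (e.g., NIR) eye image and convert its pixel area to the radius of an equal-area circle. This is an assumed, simplified stand-in for the device's eye tracking, and the threshold value and function names are hypothetical.

import numpy as np

# Simplified, assumed approach; not the tracking algorithm of the device.
def estimate_pupil_radius(eye_image, dark_threshold=40):
    # eye_image: 2-D uint8 array; the pupil is typically the darkest region.
    mask = eye_image < dark_threshold
    area = int(mask.sum())
    if area == 0:
        return 0.0
    return float(np.sqrt(area / np.pi))   # radius of an equal-area circle

def radius_timeseries(eye_frames, dark_threshold=40):
    # Track pupil radius across consecutive eye-camera frames.
    return [estimate_pupil_radius(f, dark_threshold) for f in eye_frames]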
In implementations, each image is used to measure or track pupil dilation by measuring a change of the pixel intensities associated with one or both of a user's pupils. In various implementations, the camera1024is an event camera comprising a plurality of light sensors (e.g., a matrix of light sensors) at a plurality of respective locations that, in response to a particular light sensor detecting a change in intensity of light, generates an event message indicating a particular location of the particular light sensor. It will be appreciated that the implementations described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope includes both combinations and sub combinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art. As described above, one aspect of the present technology is the gathering and use of physiological data to improve a user's experience of an electronic device with respect to using electronic content. The present disclosure contemplates that in some instances, this gathered data may include personal information data that uniquely identifies a specific person or can be used to identify interests, traits, or tendencies of a specific person. Such personal information data can include physiological data, demographic data, location-based data, telephone numbers, email addresses, home addresses, device characteristics of personal devices, or any other personal information. The present disclosure recognizes that the use of such personal information data, in the present technology, can be used to the benefit of users. For example, the personal information data can be used to improve the content viewing experience. Accordingly, use of such personal information data may enable calculated control of the electronic device. Further, other uses for personal information data that benefit the user are also contemplated by the present disclosure. The present disclosure further contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information and/or physiological data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure. For example, personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection should occur only after receiving the informed consent of the users. Additionally, such entities would take any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. Despite the foregoing, the present disclosure also contemplates implementations in which users selectively block the use of, or access to, personal information data. 
That is, the present disclosure contemplates that hardware or software elements can be provided to prevent or block access to such personal information data. For example, in the case of user-tailored content delivery services, the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services. In another example, users can select not to provide personal information data for targeted content delivery services. In yet another example, users can select to not provide personal information, but permit the transfer of anonymous information for the purpose of improving the functioning of the device. Therefore, although the present disclosure broadly covers use of personal information data to implement one or more various disclosed embodiments, the present disclosure also contemplates that the various embodiments can also be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, content can be selected and delivered to users by inferring preferences or settings based on non-personal information data or a bare minimum amount of personal information, such as the content being requested by the device associated with a user, other non-personal information available to the content delivery services, or publicly available information. In some embodiments, data is stored using a public/private key system that only allows the owner of the data to decrypt the stored data. In some other implementations, the data may be stored anonymously (e.g., without identifying and/or personal information about the user, such as a legal name, username, time and location data, or the like). In this way, other users, hackers, or third parties cannot determine the identity of the user associated with the stored data. In some implementations, a user may access their stored data from a user device that is different than the one used to upload the stored data. In these instances, the user may be required to provide login credentials to access their stored data. Numerous specific details are set forth herein to provide a thorough understanding of the claimed subject matter. However, those skilled in the art will understand that the claimed subject matter may be practiced without these specific details. In other instances, methods, apparatuses, or systems that would be known by one of ordinary skill have not been described in detail so as not to obscure claimed subject matter. Unless specifically stated otherwise, it is appreciated that throughout this specification discussions utilizing the terms such as “processing,” “computing,” “calculating,” “determining,” and “identifying” or the like refer to actions or processes of a computing device, such as one or more computers or a similar electronic computing device or devices, that manipulate or transform data represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the computing platform. The system or systems discussed herein are not limited to any particular hardware architecture or configuration. A computing device can include any suitable arrangement of components that provides a result conditioned on one or more inputs. 
Suitable computing devices include multipurpose microprocessor-based computer systems accessing stored software that programs or configures the computing system from a general purpose computing apparatus to a specialized computing apparatus implementing one or more implementations of the present subject matter. Any suitable programming, scripting, or other type of language or combinations of languages may be used to implement the teachings contained herein in software to be used in programming or configuring a computing device. Implementations of the methods disclosed herein may be performed in the operation of such computing devices. The order of the blocks presented in the examples above can be varied; for example, blocks can be re-ordered, combined, or broken into sub-blocks. Certain blocks or processes can be performed in parallel. The use of “adapted to” or “configured to” herein is meant as open and inclusive language that does not foreclose devices adapted to or configured to perform additional tasks or steps. Additionally, the use of “based on” is meant to be open and inclusive, in that a process, step, calculation, or other action “based on” one or more recited conditions or values may, in practice, be based on additional conditions or values beyond those recited. Headings, lists, and numbering included herein are for ease of explanation only and are not meant to be limiting. It will also be understood that, although the terms “first,” “second,” etc. may be used herein to describe various objects, these objects should not be limited by these terms. These terms are only used to distinguish one object from another. For example, a first node could be termed a second node, and, similarly, a second node could be termed a first node, without changing the meaning of the description, so long as all occurrences of the “first node” are renamed consistently and all occurrences of the “second node” are renamed consistently. The first node and the second node are both nodes, but they are not the same node. The terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the claims. As used in the description of the implementations and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, objects, or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, objects, components, or groups thereof. As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting,” that a stated condition precedent is true, depending on the context.
Similarly, the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context. The foregoing description and summary of the invention are to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined only from the detailed description of illustrative implementations but according to the full breadth permitted by patent laws. It is to be understood that the implementations shown and described herein are only illustrative of the principles of the present invention and that various modification may be implemented by those skilled in the art without departing from the scope and spirit of the invention. | 49,536 |
11861838 | The foregoing and other features of the present invention will become apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only several embodiments in accordance with the disclosure and are, therefore, not to be considered limiting of its scope, the disclosure will be described with additional specificity and detail through use of the accompanying drawings. DETAILED DESCRIPTION OF THE INVENTION The present invention is directed to an image processing system using artificial intelligence and machine learning to determine a likelihood of a presence of a cardiovascular anomaly, such as a congenital heart disease (CHD) and/or other cardiovascular related anomaly in a patient, such as a fetus during pregnancy. The image processing system may also optionally detect standard views, determine anatomy key-points, determine measurements, and/or perform segmentation. For example, medical imaging (e.g., still frames and/or video clips) may be generated using an imaging system such as an ultrasound system (e.g., an echocardiogram system) and may be processed by neural networks and/or models for determining a likelihood of a presence of one or more cardiovascular anomaly. The medical imaging may include a video clip and/or a consecutive series of still frame images. The imaging processing system may overcome any biases in the system trained to detect a presence of the one or more cardiovascular anomaly, for example, by generating a set of styled image data for each set of input image data. Additionally, or alternatively, the images used for training may be styled to eliminate or reduce any bias. The set of styled input data may incorporate style data from representative style images from multiple different imaging systems. For example, a single input image frame may be styled using image data from several (e.g., four, eight, twenty, etc.) imaging systems, resulting in several input images, each corresponding to a representative style of a certain imaging system. The styled input images may be processed by an anomaly model (e.g., a classification neural network) for detecting cardiovascular anomalies in the set of styled input data. The representative style images corresponding to each imagining system may be either determined using a classification and/or clustering model, may be derived from the image file itself (e.g., from metadata), or may be manually selected or determined. To accomplish style transfer for input video clips and/or series of consecutive images, various different approaches may be applied to maintain consistency between consecutive image frames. For example, the optical flow between frames may be determined and used to inform the style transfer such that certain regions of the consecutive frame may be constrained to maintain consistency between frames. In yet another example, a style transfer model may be a neural network including an encoder-decoder portion and a multi-instance normalization portion. The neural network may be a spatiotemporal neural network. It is understood, however, that any other suitable approach for performing style transfer for input image frames and/or video clips may be used. Referring now toFIG.1, image processing system100is illustrated. 
Image processing system100may be designed to receive medical images, process medical images using artificial intelligence and machine learning, and determine a likelihood of a presence of one or more cardiovascular anomaly (e.g., CHD and/or other cardiovascular anomaly). For example, image processing system100may receive image data showing anatomy of a fetus and may process the image data to automatically determine a likelihood of a presence of one or more cardiovascular anomalies in the fetus. Additionally, image processing system100may optionally detect standard views, anatomy key-points, determine measurements, and/or perform segmentation. Image processing system100may include one or more imaging system102that may each be in communication with a server104. For example, imaging system102may be any well-known medical imaging system that generates medical image data (e.g., still frames and/or video clips including RGB pixel information) such as an ultrasound system, echocardiogram system, x-ray systems, computed tomography (CT) systems, magnetic resonance imaging (MRI) systems, positron-emission tomography (PET) systems, and the like. As shown inFIG.1, imaging system102may be an ultrasound imaging system including sensor108and imaging device106. Sensor108may include a piezoelectric sensor device and/or transducer and/or may be any well-known medical imaging device. Imaging device106may be any well-known computing device including a processor and a display and may have a wired or wireless connection with sensor108. Sensor108may be used by a healthcare provider to obtain image data of the anatomy of a patient (e.g., patient110). Sensor108may generate two or three-dimensional images corresponding to the orientation of sensor108with respect to patient110. The image data generated by sensor108may be communicated to imaging device106. Imaging device106may send the image data to remote server104via any well-known wired or wireless system (e.g., Wi-Fi, cellular network, Bluetooth, Bluetooth Low Energy (BLE), near field communication protocol, etc.). Additionally, or alternatively, image data may be received and/or retrieved from one or more picture archiving and communication system (PACS). For example, the PACS system may use a Digital Imaging and Communications in Medicine (DICOM) format. Any results from the system may be shared with PACS. Image data may be a set of image data, video clips, images, still frames, a series of consecutive image frames, or the like. For example, image data may include image105, which may include a single frame of a two dimensional representation of a cross-section of a patient's cardiovascular anatomy (e.g., chambers of the heart). Image105may include certain information such as the patient's information, information about the number of scans, the video, the sensor device in use, the time, the data, the model of image device106and/or sensor108, the company's and/or manufacturer's logo, the technician's name, a doctor's name, a name of a medical facility, and/or any other information commonly found on medical images. Remote server104may be any computing device with one or more processors capable of performing operations described herein. In the example illustrated inFIG.1, remote server104may be one or more server, desktop or laptop computer, or the like and/or may be located in a different location than imaging system102. 
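As noted above, image data may arrive as DICOM objects from a PACS, and the image file's metadata can carry the vendor and model information that later hints at the imaging-system style. Below is a minimal Python sketch of reading such a file; it assumes the pydicom package, which is not named in the patent, and the field handling is illustrative only.

```python
import pydicom

def load_dicom(path):
    """Read one DICOM object and return its pixel data together with the
    vendor/model fields that can later hint at the imaging-system style."""
    ds = pydicom.dcmread(path)
    pixels = ds.pixel_array                          # numpy array of the image or frames
    meta = {
        "manufacturer": getattr(ds, "Manufacturer", ""),
        "model": getattr(ds, "ManufacturerModelName", ""),
        "modality": getattr(ds, "Modality", ""),
    }
    return pixels, meta
```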
Remote server104may run one or more local applications to facilitate communication between imaging system106, datastore112, and/or analyst device116. Datastore112may be one or more drives having memory dedicated to storing digital information such as information unique to a certain patient, professional, facility and/or device. For example, datastore112may include, but is not limited to, volatile (e.g. random-access memory (RAM)), non-volatile (e.g. read-only memory (ROM)), flash memory, or any combination thereof. Datastore112may be incorporated into server104or may be separate and distinct from server104. In one example, datastore112may be a picture archiving and communication system (PACS). Remote server104may communicate with datastore112and/or analyst device116via any well-known wired or wireless system (e.g., Wi-Fi, cellular network, Bluetooth, Bluetooth Low Energy (BLE), near field communication protocol, etc.). Datastore112may receive and store image data (e.g., image data118) received from remote server104. For example, imaging system102may generate image data (e.g., ultrasound image data) and may send such image data to remote server104, which may send the image data to datastore112for storage. It is understood that datastore112may be optional and/or more than one imaging system102, remote server104, datastore112and/or analyst device116may be used. Analyst device116may be any computing device having a processor and a display and capable of communicating with at least remote server104and performing operations described herein. Analyst device116may be any well-known computing device such as a desktop, laptop, smartphone, tablet, wearable, or the like. Analyst device116may run one or more local applications to facilitate communication between analyst device116and remote server104and/or any other computing devices or servers described herein. Remote server104may determine and/or receive representative images corresponding to unique styles for various types of imaging systems. WhileFIG.1illustrates server104in communication with imaging system106, server104may be in communication with multiple different imaging systems that may include different types of imaging systems (e.g., imaging systems with different sensors, different hardware, different software, imaging systems made by different companies and/or manufacturers, different model numbers, etc.). Alternatively, server104may receive images from datastore112. Remote server104may receive sets of input image data (e.g., video clips and/or image frames) from imaging system106and/or datastore112and may extend the input image data into multiple types of input image data, incorporating the styles of the various different types of imaging systems into the input image data (e.g., styled images107). Style may alternatively, or additionally, refer to a look or feel of the images generated by a probe that may be a result of a patient's anatomy. For example, style could include a patient's BMI and/or age which may result in a certain look in the medical imaging. Remote server104may process the styled input image data using one or more trained models such as neural networks (e.g., convolutional neural networks (CNNs)) trained to detect one or more cardiovascular anomalies. For example, a likelihood of a presence of one or more cardiovascular anomalies may be determined and may optionally be automatically processed by remote server104to determine a presence of one or more cardiovascular anomaly. 
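As a rough Python sketch of the flow just described, one input frame can be extended into several styled variants, each scored by the anomaly classifier, with the scores then combined. The helper names apply_style and anomaly_model are placeholders for the style transfer generator and the trained classification network; they are not identifiers from the patent.

```python
import numpy as np

def score_with_styles(frame, representative_styles, apply_style, anomaly_model):
    """Style one input frame with each representative style, score every
    styled copy with the anomaly classifier, and average the results.

    apply_style(content, style) -> styled H x W x 3 array (style transfer net).
    anomaly_model(image) -> 1-D array of per-anomaly likelihoods in [0, 1].
    """
    styled_copies = [apply_style(frame, s) for s in representative_styles]
    scores = np.stack([anomaly_model(img) for img in styled_copies])
    return scores.mean(axis=0)  # aggregated likelihood per anomaly type

# Stub callables, only to show the call pattern.
rng = np.random.default_rng(0)
frame = rng.random((224, 224, 3))
styles = [rng.random((224, 224, 3)) for _ in range(4)]
likelihoods = score_with_styles(frame, styles,
                                lambda content, style: content,   # stand-in styler
                                lambda img: rng.random(5))        # stand-in classifier
```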
In one example, remote server104and/or datastore112may facilitate storage, processing, and/or analysis in the cloud. Remote server104may share information regarding a likelihood of a presence of one or more cardiovascular anomalies with one or more computing device (e.g., user device, analyst device, medical device, practitioner device, etc.). Remote server104may cause analyst device116to display information about the likelihood of a presence of one or more cardiovascular anomalies. For example, analyst device may display a patient ID number and a likelihood percentage for one or more CHDs and/or other cardiovascular anomalies. Referring now toFIG.2, a schematic view of the data flow between an imaging system, analyst device, and back end of the image processing system is depicted. As shown inFIG.2, imaging system202, which may be the same as or similar to imaging system102ofFIG.1, may include image generator204which may generate image data206. Image data206may include video clips and/or still frames, which may include RGB and/or grey scale pixel information. Alternatively, or additionally, image data206may include Doppler image data. Imaging system202may be designed to generate grey scale image data and/or Doppler image data. For example, image data206may include two-dimensional representations of ultrasound scans of the patient's (e.g., a fetus' anatomy). Additionally, or alternatively, image data206may include Doppler image information (e.g., color Doppler, power Doppler, spectral Doppler, Duplex Doppler, and the like). It is understood that various types of image data206may be simultaneously processed by imaging system202. In one example, the Doppler image data may be generated at the same time as ultrasound image data. Imaging system202may send image data206, which may be input image data (e.g., set of input image data) to backend208, which may be the same as or similar to server104ofFIG.1. Image data206may optionally be processed by preprocessor210. Preprocessor210may remove noise, reduce file size, focus, crop, resize and/or otherwise remove unnecessary areas of image data206to generate preprocessed image data212. Preprocessor may additionally, or alternatively, generate a consecutive series of still frame images from video clips or otherwise may segment video clips. Image data206may be applied to style transfer generator213, which may be trained using representative images211. Applying the image data to the style transfer generator is understood throughout to mean either applying the image data to the style transfer generator or applying to the style transfer generator to the image data such that the image data is processed by, input into, and/or analyzed by the style transfer generator. Representative images211may be manually selected from image data with known styles. For example, style may be known as it may correspond to certain recording echographs and/or probes (e.g., of a certain model or manufacturer) and/or the style may be determined from metadata or other information in the image file. Alternatively, image classifier205and clusterer209may be used to determine representative images212from image data203. Image data203, which may be multiple sets of image data (e.g., video clips and/or image frames) from various different types of imaging devices, systems, models, units, etc., may be received by backend208. For example, image data203may include video clips representative of medical images of cardiovascular portions of various patients. 
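A small sketch of the kind of work preprocessor 210 might do on a clip, assuming OpenCV (not named in the patent) is available; the crop box, output size, and frame limit are arbitrary illustrative choices.

```python
import cv2
import numpy as np

def preprocess_clip(path, out_size=(224, 224), crop=None, max_frames=64):
    """Split a video clip into consecutive still frames, optionally crop away
    borders/annotations, resize, and scale intensities for downstream models."""
    cap = cv2.VideoCapture(path)
    frames = []
    while len(frames) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        if crop is not None:                  # crop = (y0, y1, x0, x1)
            y0, y1, x0, x1 = crop
            frame = frame[y0:y1, x0:x1]
        frame = cv2.resize(frame, out_size)
        frames.append(frame.astype(np.float32) / 255.0)
    cap.release()
    return np.stack(frames) if frames else np.empty((0, *out_size, 3), np.float32)
```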
It is understood that image data203may be a large volume of image data (e.g., hundreds, thousands, or more of video clips) and may be received by backend208at different times. In one example, image data203may optionally be preprocessed by a processor similar to preprocessor210. Where style is not known and/or representative images211are not manually selected, image data203may be processed by image classifier205. Image classifier205may be a neural network such as a classification neural network that may be trained for image processing, detection, and/or recognition using large sets of images. For example, images from daily life (e.g., cars, bikes, apples, etc.) may be used to train the classifier generally for image recognition. Additionally, or alternatively, the classifier may be trained or fine-tuned using specific datasets corresponding to cardiovascular anatomy including with and/or without CHD and/or other cardiovascular anomalies to ultimately recognize cardiovascular anomalies. Image classifier205may be used to determine style data207, which may be based on low-level feature maps (e.g., in the early layers of the neural network) of classifier205. For each image (e.g., still frame) of image data203processed by classifier205, Gram matrices for feature maps corresponding to that image may be computed. For example, each Gram matrix may be a representation of the feature maps of an image at a certain layer (e.g., using a correlation operation). In one example, the Gram matrix may contain dot products of the feature maps. The values of the Gram matrices may be concatenated into vectors (e.g., a single style vector). Style data may include the low-level feature maps or representations thereof, corresponding Gram matrices, and/or one or more vectors corresponding to the Gram matrices. Clusterer209may process the vectors representing Gram matrices and may determine a position (e.g., data point) within a multi-dimensional space based on the vector. Clusterer may be any suitable cluster model trained to perform clustering (e.g., clustering of the Gram matrices. In one example, clustering may be performed using a Gaussian Mixture Model (GMM) method. Once clustering has been performed on the vectors representative of the Gram matrices, groups of imaging system styles may be identified in the multi-dimensional space based on a proximity representations of such input vectors. For example, data points in relatively close proximity to one another may be determined to be in the same cluster, referred to herein as style group. Each image or image set corresponding to data points in the same cluster will thus be determined to have a similar style. Style may be the arrangement or presentation or images and/or data in the image (e.g., border style, text placement, size, or font, general image style, certain colors, or the like). For each style group, a representative set of images or image frames may be determined. For example, the image corresponding to the data point in the center-most position of the style group in the multi-dimensional space may be determined. In one example, a representative data-point for a given style group in the multi-dimensional space may be determined using Akaike Information Criterion (AIC) and/or Bayesian Information Criterion (BIC). However, it is understood that any other suitable approach may be used for determining a data-point in the multi-dimensional space that best represents a given style group. 
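A minimal NumPy sketch of the Gram-matrix style representation described above; in practice the feature maps would come from early layers of the trained classifier, and the normalization shown here is one common choice rather than something the patent specifies.

```python
import numpy as np

def gram_matrix(feature_map):
    """feature_map: (C, H, W) activations from one low-level layer.
    Returns the C x C matrix of channel-wise dot products (normalized)."""
    c, h, w = feature_map.shape
    flat = feature_map.reshape(c, h * w)
    return flat @ flat.T / (h * w)

def style_vector(feature_maps):
    """Concatenate the flattened Gram matrices of several low layers into the
    single style vector that is handed to the clustering model."""
    return np.concatenate([gram_matrix(f).ravel() for f in feature_maps])

# Two fake low-level layers of a classifier, just for shape illustration.
maps = [np.random.rand(16, 56, 56), np.random.rand(32, 28, 28)]
v = style_vector(maps)   # length 16*16 + 32*32 = 1280
```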
The image data used to determine the Gram matrix and vector input to the clusterer, ultimately resulting in the representative style data point, may be determined, referred to as representative image211. Representative image211may thus contain style data207that best represents a style group which may correspond to a given type of imaging system. The representative image for each style group in the multi-dimensional space may be determined in the same manner. Representative images211may be a set of image frames, a single image frame, and/or a video clip. Whether representative images211were manually selected or determined using image classifier205and clusterer209, both input image data206and representative images211may be input and processed by style transfer generator213to extend input image data206into multiple image frames, each incorporating a representative style for each style group identified by clusterer209. For example, for each representative image211, a styled input may be generated by style transfer generator213. Alternatively, instead of using image classifier205and cluster209to determine representative images211, representative images211may be manually determined and provided to back end208. Style transfer generator213may be a model (e.g., one or more neural networks) trained to combine the content of image data206with the style of one image of representative image212, resulting in an image frame having the content of image data206and the style of image data203. Style transfer generator213may be trained to transfer style for a given image frame and/or may transfer style for a given video clip. For example, style transfer generator213may be trained with images for which a style corresponding to such images is known. Optionally, other information corresponding to the training set of images may also be known such as a manufacturer of the imaging system, a model number, a probe type or style, a patient condition and/or other biometric information, and/or the like may be known. Based on this information the model may be trained to map a given input image to any known style. This type of training may be supervised training. In the example where the style transfer generator213transfers style for a given image frame, style transfer generator213may use the technique set forth in “A Neural Algorithm of Artistic Style,” by L. Gatys, et. al, arXiv:1508.06576v2, Sep. 2, 2015, incorporated herein by reference in its entirety. Specifically, the feature map for a representative image may be determined using a convolutional neural network. For example, lower levels of the convolutional neural network (CNN) may be used to determine, approximate, represent, and/or extract the style information. Additionally, the input image (e.g., image data206) may be processed by the CNN to determine content information. For example, in higher layers of the CNN, high level content information may be determined, approximate, represent, and/or extracted. The style information from the representative image (e.g., representative image211) and the content information from the input image (e.g., from image data206) may be synthesized by finding an image that simultaneously matches the content information of the input image and the style information of the representative image. Rather than performing style transformation for a single image frame in isolation, it may be desirable to perform style transformation for a video clip or multiple image frames in series. 
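Before turning to video, the single-frame synthesis described above can be summarized as minimizing a weighted sum of a content term and a style term over the pixels of the output image. The sketch below only writes out those two loss terms for pre-extracted feature maps; the alpha and beta weights and the optimization loop itself are left out and are implementation decisions, not values from the patent.

```python
import numpy as np

def _gram(feat):                       # feat: (C, H*W) flattened feature maps
    return feat @ feat.T / feat.shape[1]

def content_loss(gen_hi, content_hi):
    """Squared error between high-layer feature maps of the generated image
    and of the input (content) image."""
    return float(np.mean((gen_hi - content_hi) ** 2))

def style_loss(gen_lo, style_lo):
    """Sum over low layers of squared Gram-matrix differences between the
    generated image and the representative (style) image."""
    return sum(float(np.mean((_gram(g) - _gram(s)) ** 2))
               for g, s in zip(gen_lo, style_lo))

def total_loss(gen_hi, content_hi, gen_lo, style_lo, alpha=1.0, beta=1e3):
    # The synthesized image is the one that minimizes this weighted sum,
    # typically found by gradient descent on the image pixels.
    return alpha * content_loss(gen_hi, content_hi) + beta * style_loss(gen_lo, style_lo)

gen_lo, style_lo = [np.random.rand(8, 100)], [np.random.rand(8, 100)]
gen_hi, content_hi = np.random.rand(32, 25), np.random.rand(32, 25)
loss = total_loss(gen_hi, content_hi, gen_lo, style_lo)
```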
For example, style transformation may be performed using the style transfer generator213and the approach outlined in “Artistic style transfer for videos and spherical images” by Ruder, et. al, arXiv:1708.04538v3, Aug. 5, 2018, incorporated herein by reference in its entirety. Specifically, style transformation may be performed on a video clip (e.g. a series of consecutive image frames) in a manner that styles each image frame based in part by the image frame that came before it. For example, deviations between two consecutive image frames may be determined by determining an optical flow for the image frames. With known deviations, a multi-pass algorithm may be used to process the video clip in alternating temporal directions using both forwards and backwards flow. To maintain consistency in the video clip, a constrained region may be determined based on the optical flow for each image frame and each subsequent image frame may be styled based on, at least in part, the previously styled image frame in the series, taking the previously styled image frame as input, warped according to the optical flow. In another example, style transformation for a video clip may include using the style transfer generator213and the approach outlined in “Real-Time Neural Style Transfer for Videos,” H. Huang, et. al, Conference: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jul. 1, 2017, incorporated herein by reference in its entirety. For example, the style transfer model may be a feed-forward convolutional neural network including a styling network and a loss network. The stylizing network may process input image frames and output styled image frames. The loss network may include a classification network to determine, approximate, represent, and/or extract features of the styled image frames and output loss data indicative of spatial loss in each of the styled stylized image frames. In addition to spatial loss, which leads to style transfer for each frame, the style transfer model may further incorporate temporal loss to enforce the temporal consistency between adjacent frames. To determine temporal loss, two consecutive frames are fed into the network, simultaneously. The temporal loss is defined as the mean square error between the styled output at time t and the warped version of the styled output at time t−1. In yet another example, style transformation for a video clip using style transfer generator213based on the approach outlined in “Fast Video Multi-Style Transfer” by W. Gao, Conference: 2020 IEEE Conference Winter Conference on Applications of Computer Vision (WAVC), Mar. 1-5, 2020, incorporated herein by reference in its entirety. For example, style transfer generator213may include multiple modules such as an encoder-decoder, a multi-instance normalization block, and a convolutional long short term memory (ConvLSTM). To avoid retraining the network for each different style, the network may learn multiple styles using the instance normalization layer with multiple sets of parameters. For example, each style may be associated with a certain pair of parameters. Each parameter pair can be regarded as the embedding of a specific style in the instance normalization layer. The ConvLSTM may be two ConvLSTM modules that may be inserted into the encoder-decoder network. Using a recurrent network, for example, may combine all previous frame information and current frame information to infer the output. 
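A rough sketch of the temporal-consistency idea used by the video approaches above: the previously styled frame is warped by the optical flow and compared with the current styled frame. It assumes OpenCV for the remapping and a backward flow field (current frame to previous frame); both are illustrative choices rather than details from the patent.

```python
import numpy as np
import cv2

def warp_with_flow(prev_styled, flow):
    """Warp the previously styled frame towards the current frame.
    'flow' is assumed to be the backward flow (current -> previous), with
    flow[..., 0] = dx and flow[..., 1] = dy, e.g. from a Farneback estimator."""
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_styled.astype(np.float32), map_x, map_y, cv2.INTER_LINEAR)

def temporal_loss(cur_styled, prev_styled, flow, mask=None):
    """Mean squared error between the current styled frame (H x W x C) and the
    flow-warped previous styled frame; 'mask' can exclude occluded regions."""
    diff = (cur_styled.astype(np.float32) - warp_with_flow(prev_styled, flow)) ** 2
    if mask is not None:
        diff = diff[mask.astype(bool)]
    return float(diff.mean())
```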
Specifically, the ConvLSTM may compress the whole previous input sequence into a hidden state tensor and may forecast the current state based on the hidden state. Using style transfer generator213, which may use any of the style transfer approach described herein and/or any other suitable style transfer approach, styled image data214may be generated. Style image data may include multiple image frames, each corresponding to image data206but with a different style. For example, an image frame for each style identified (e.g., by clustered209or manually) may be generated for each image frame of image data206. For example, if eight styles are determined, then eight distinct image frames may be generated by style transfer generator213, each with a different style but with the content of image data206. In another example, image analyzer216may be trained using image data that was styled (e.g., using style transfer generator213) according a certain style (e.g., a standard style). Image data206may then be analyzed by image analyzer216. Alternatively, image data206may also be applied to style transfer generator213to be styled with the same style (e.g., the standard) that the training data was styled with. In this example, style transfer generator213may only output styled images for each input image (e.g., each image of image data206) based on the same style type (e.g., the standard style). Each image frame in styled image data214may be processed by image analyzer216. Image analyzer216may be one or more neural networks trained to process image data (e.g., image frames and/or video clips), such as medical image data, to detect cardiovascular anomalies (e.g., CHD) and optionally determine or detect standard views, anatomy key-points (e.g., identifying extremities of the valves), measurements (e.g., measure the size of the heart and/or area or volume of the heart), and/or perform segmentation (e.g., identify the contour of the heart). For example, image analyzer216may process styled image data214and may detect one or more cardiovascular anomaly in each image frame. In one example, image analyzer216may be a spatiotemporal CNN trained to determine cardiovascular anomalies. For example, image analyzer216may include a spatial stream and a temporal stream that may be fused together. Styled image data may be applied to a spatial model, which may be a spatial CNN such as a spatial CNN trained for image processing, to generate a spatial output. Additionally, optical flow data may be generated based on styled image data214, which may permit the networks to better consider the movement of the image data over time. The optical flow data may be applied to a temporal model, which may be a temporal CNN such as a temporal CNN trained for image processing and/or trained for processing optical flow data to generate a temporal output. The spatial output and temporal output may both be input into a fuser to generate a spatiotemporal output. For example, the fuser may combine architecture of the spatial model and the temporal model at several levels. In another example, as explained above, image analyzer216may be trained using image data that has been applied to style transfer generator213, resulting in a training data set having all the same style (e.g., a standard style). As a result, image analyzer216may avoid or lessen any bias for a certain style because all the input images used for training purposes may now have the same style. 
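As one possible reading of the two-stream image analyzer described earlier in this passage (a spatial CNN over styled frames fused with a temporal CNN over optical flow), here is a toy PyTorch sketch. The layer sizes, the number of stacked flow fields, and the late-fusion head are arbitrary; the patent does not prescribe a specific architecture.

```python
import torch
import torch.nn as nn

class TwoStreamAnalyzer(nn.Module):
    """Toy spatiotemporal analyzer: a spatial stream over a styled frame and a
    temporal stream over stacked optical-flow fields, fused before the head."""
    def __init__(self, n_anomalies=5, flow_channels=20):
        super().__init__()
        self.spatial = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.temporal = nn.Sequential(
            nn.Conv2d(flow_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32 + 32, n_anomalies)  # late fusion of both streams

    def forward(self, frame, flow_stack):
        fused = torch.cat([self.spatial(frame), self.temporal(flow_stack)], dim=1)
        return torch.sigmoid(self.head(fused))       # per-anomaly likelihoods

model = TwoStreamAnalyzer()
frame = torch.rand(1, 3, 224, 224)         # one styled frame
flow_stack = torch.rand(1, 20, 224, 224)   # 10 flow fields, (dx, dy) each
scores = model(frame, flow_stack)          # shape (1, 5)
```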
In this example, image data206, prior to being input into image analyzer216, may optionally be processed by style transfer213to transfer the same style used for the training data to image data206. Image analyzer216may output analyzed data218, which may be indicative of a likelihood of a presence of one or more anomalies in the styled image data, and optionally a presence of a standard view, anatomy key-point, measurement information, and/or segmentation information. For example, a value between 0 and 1 may be generated for each type of potential anomaly and may be indicative of the presence of that particular anomaly. Alternatively, any other suitable image processing model (e.g., a classification model) may be used for processing styled image data214and detecting a likelihood of a presence of one or more cardiovascular anomalies in styled image data214. Analyzed data218may be processed by output analyzer234which may generate analyzed output236which may indicate a presence of one or more cardiovascular anomalies in styled image data214. For example, output analyzer234may calculate weighted averages based on analyzed data218and/or may filter certain portions of analyzed data218. In one example, analyzed output236may indicate the risk of a likelihood of a presence of one or more morphological abnormalities or defects and/or may indicate the presence of one or more pathologies. For example, analyzed output236may indicate the presence of atrial septal defect, atrioventricular septal defect, coarctation of the aorta, double-outlet right ventricle, d-transposition of the great arteries, Ebstein anomaly, hypoplastic left heart syndrome, interrupted aortic arch, ventricular disproportion (e.g., the left or right ventricle larger than the other), abnormal heart size, ventricular septal defect, abnormal atrioventricular junction, increased or abnormal area behind the left atrium, abnormal left ventricle and/or aorta junction, abnormal right ventricle and/or pulmonary artery junction, great arterial size discrepancy (e.g., aorta larger or smaller than the pulmonary artery), right aortic arch abnormality, abnormal size of pulmonary artery, transverse aortic arch and/or superior vena cava, a visible additional vessel, and/or any other morphological abnormality, defect and/or pathology. It is understood that, in one example, analyzed output236may be indicative of one or more CHD or may be indicative of features associated with one or more CHD. Back end208may communicate analyzed output236and/or information based on analyzed data218to analyst device240, which may be the same as or similar to analyst device116. Analyst device240may be different than or the same as the device in imaging system202. Display module238may generate a user interface on analyst device240to generate and display a representation of analyzed output236and/or analyzed data218. For example, the display may show a representation of the image data (e.g., ultrasound image) with an overlay indicating the location of the detected risk or likelihood of CHDs and/or other cardiovascular anomalies. In one example, the overlay could be a box or any other visual indicator (e.g., arrow). User input module242may receive user input244and may communicate user input244to back end208. 
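A small sketch of the kind of post-processing output analyzer 234 might perform: a weighted average of the per-style scores followed by thresholding into named findings. The anomaly names are a subset of those listed above; the uniform weights and the 0.7 threshold (mentioned later in connection with FIG. 5) are illustrative only.

```python
import numpy as np

ANOMALY_NAMES = [
    "atrial septal defect",
    "atrioventricular septal defect",
    "coarctation of the aorta",
    "ventricular septal defect",
    "hypoplastic left heart syndrome",
]

def analyze_output(per_style_scores, style_weights=None, threshold=0.7):
    """per_style_scores: (n_styles, n_anomalies) array of model outputs in [0, 1].
    Returns the weighted-average likelihood per anomaly and the names of the
    anomalies whose likelihood crosses the threshold."""
    scores = np.asarray(per_style_scores, dtype=float)
    if style_weights is None:
        style_weights = np.ones(scores.shape[0]) / scores.shape[0]
    likelihoods = style_weights @ scores              # weighted average over styles
    flagged = [name for name, p in zip(ANOMALY_NAMES, likelihoods) if p >= threshold]
    return likelihoods, flagged

likelihoods, flagged = analyze_output(np.random.rand(4, 5))
```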
User input244may be instructions from a user to generate a report or other information such as instructions that the results generated by one or more of image classifier205, clusterer209, style transfer generator213, image analyzer216, and/or output analyzer234, are not accurate. For example, where user input244indicates an inaccuracy, user input244may be used to further train one or more of the foregoing models and/or networks. Where user input244indicates a request for a report, user input244may be communicated to report generator246, which may generate a report. For example, the report may include some or all of analyzed output236, analyzed data218and/or analysis, graphs, plots, tables regarding the same. Report248may then be communicated to analyst device240for display (e.g., by display module238) of report248, which may also be printed out by analyst device240. Referring now toFIG.3, a classification model for determining content data and style data is depicted. For example, the model302may be the same as or similar to image classifier205ofFIG.2. As shown inFIG.3, image frame304may be input into model302. Image frame304may be similar to image105ofFIG.1and may include a single frame of a two dimensional representation of a cross-section of a patient's cardiovascular anatomy (e.g., chambers of the heart). Image frame304may include content data306, which may show the representation of the patient's anatomy. Additionally, image frame304may include style data308including certain style information such as the patient's information, information about the number of scans, the video, the sensor device in use, the time, the data, the model of image device106and/or sensor108, the company's and/or manufacture's logo, the technician's name, a doctor's name, a name of a medical facility, and/or any other information commonly found on medical images. Style data308may further include the look and/or feel of the image such as colors, spatial arrangement, text style, text font, borders, icons for navigation and the like. Alternatively, or additionally, information about the imaging device (e.g., model and/or make) and/or other style information may be determined from image file information such as metadata and/or in other information in a Digital Imaging and Communications in Medicine (DICOM) files, for example. Feature maps from low levels310of a model302may be used to determine, approximate, represent, and/or extract style data312which may capture the texture and other style information of the image. Style data312may be representative of style data308, but without content data306. For example, style data312may include nuances such as contrast, tint, colors, clarity, edges, and the like. Additionally, in higher layers314of model302, content data316may be determined. Content data316may be representative of content data306and may include a representation of the patient's anatomy. In this manner, one or more neural networks may map one image to a similar image (e.g., with the same content) but with a certain style. Referring now toFIG.4, a clustering model for forming style groups and style data corresponding to each style group is illustrated. As shown inFIG.4, set of input data402, which may be the same or similar to image data206(e.g., input data and/or set of input data), may be processed by an image classier to determine style data. 
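A hedged PyTorch sketch of pulling low-layer (style) and higher-layer (content) feature maps out of a classifier, as in the FIG. 3 discussion above. The backbone here is a stand-in network and the chosen layer indices are arbitrary; it is not the patent's trained classifier.

```python
import torch
import torch.nn as nn

# Stand-in classifier backbone; in practice this would be the trained network.
backbone = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),    # layers 0-1: low -> style cues
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),   # layers 2-3
    nn.MaxPool2d(2),                             # layer 4
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # layers 5-6: higher -> content
)

STYLE_LAYERS = {1, 3}      # early activations capture texture/"style"
CONTENT_LAYERS = {6}       # deeper activations capture anatomy/"content"

def extract_features(image):
    """image: (1, 3, H, W) tensor. Returns dicts of style and content maps."""
    style, content = {}, {}
    x = image
    for idx, layer in enumerate(backbone):
        x = layer(x)
        if idx in STYLE_LAYERS:
            style[idx] = x.detach()
        if idx in CONTENT_LAYERS:
            content[idx] = x.detach()
    return style, content

style_maps, content_maps = extract_features(torch.rand(1, 3, 128, 128))
```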
The style data may be represented by vectors which may then be processed by a clusterer which may output spatial information representing the style data in multi-dimensional space404. While a two-dimensional plot is illustrated inFIG.4, it is understood more than two dimensions may be generated. Multi-dimensional space404may be used to indicate multiple distinct groups of style groups, which may be groups or clusters of data points representative of images having style data that are in close proximity in multi-dimensional space404. For example, style group406, style group408, and style group410may be determined from multi-dimensional space404. Each data point in a given style group may represent style data having similar style. For example, style group406may correspond to image data generated using the same ultrasound software resulting in a similarly arranged image. As shown inFIG.4, style data412, which may correspond to style group406, may be representative of an image with a light border, text and navigation icons on the top and arranged in portrait orientation. Style data414, which may correspond to style group408, may be presentative of an image with a dark border, text in the top right, a logo in the bottom right, and navigation icons in the bottom left. Style data416, which may correspond to style group410, may be representative of an image with no border, text data on the right as well as a logo or other image, and some navigation icons on the left. While each data point in each style group may not correspond to exactly the same type of style data, the close proximity in multi-direction space404indicates at least some similarities in the style arrangement or selection. Referring now toFIG.5, a process flow is depicted for indicating a likelihood of CHD and/or other cardiovascular anomaly agnostic of a type of imaging system (e.g., ultrasound), transducer and/or sensor, and/or other style inputs. Some or all of the blocks of the process flows in this disclosure may be performed in a distributed manner across any number of devices (e.g., a server such as server104ofFIG.1, computing devices, imaging or sensor devices, or the like). Some or all of the operations of the process flow may be optional and may be performed in a different order. At block502, computer-executable instructions stored on a memory of a device, such as a server, may be executed to determine sets of image data (e.g., still frames and/or video data) from one or more imaging system. For example, the sets of image data may be generated by different imaging systems made from different companies, manufacturers, and/or having different sensors and/or hardware. At optional block504, computer-executable instructions stored on a memory of a device, such as a server, may be executed to apply the sets of image data to a trained classification model (e.g., image classifier205ofFIG.2). At optional block506, computer-executable instructions stored on a memory of a device, such as a server, may be executed to determine style data from the sets of image data. For example, low-level feature maps from the trained classification model may be used to determine style data for the sets of image data. At optional block508, computer-executable instructions stored on a memory of a device, such as a server, may be executed to apply the style data and/or representations of the style data to a clustering model (e.g., clusterer209ofFIG.2) to determined style groups. 
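A sketch of clustering the style vectors into style groups and picking the center-most member of each group as its representative, assuming scikit-learn's GaussianMixture. The patent mentions GMM clustering and information criteria such as BIC, but the library, the diagonal covariance, and the exact selection rule here are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_styles(style_vectors, max_groups=8):
    """style_vectors: (n_images, d) array of flattened Gram-matrix vectors.
    Fits GMMs with 1..max_groups components, keeps the lowest-BIC model, and
    returns cluster labels plus the index of the vector closest to each mean."""
    X = np.asarray(style_vectors, dtype=float)
    best = min(
        (GaussianMixture(n_components=k, covariance_type="diag",
                         random_state=0).fit(X)
         for k in range(1, max_groups + 1)),
        key=lambda gmm: gmm.bic(X),
    )
    labels = best.predict(X)
    representatives = {}
    for k in range(best.n_components):
        members = np.where(labels == k)[0]
        if members.size == 0:
            continue
        dists = np.linalg.norm(X[members] - best.means_[k], axis=1)
        representatives[k] = int(members[np.argmin(dists)])  # "center-most" image
    return labels, representatives

labels, reps = cluster_styles(np.random.rand(60, 16))
```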
For example, Gram matrices corresponding to low layers of the classification model may be determined and vectors representative of such Gram matrices may be input into the clustering model. At block510, computer-executable instructions stored on a memory of a device, such as a server, may be executed to determine representative sets of image data and/or representative image frames for each style group. For example, a representative data point for a given style group in the multi-dimensional space may be determined using any suitable approach for determining a data point in the multi-dimensional space that best represents a given style group. Alternatively, a representative image frame for each style may be manually selected. At block512, computer-executable instructions stored on a memory of a device, such as a server, may be executed to determine input image data (e.g., sets of input image data, image frames, and/or a video clip). Input image data may be the same as or similar to image data206ofFIG.2. At block514, computer-executable instructions stored on a memory of a device, such as a server, may be executed to process the input image data using a style transfer generator (e.g., style transfer generator213). The representative image frames from block510may also be input into the style transfer generator. The style transfer generator may output a set of styled imaged data (e.g., styled image data214ofFIG.2) for each style determined (e.g., for each style group). For example, if five styles are determined, the style transfer generator may generate five distinct styled images for each input image data, each styled imaged corresponding to one of the five styles. At block516, computer-executable instructions stored on a memory of a device, such as a server, may be executed to apply the set of styled image data to an image analyzer (e.g., image analyzer216ofFIG.2) to generate analyzed data (e.g., analyzed data218ofFIG.2). It is understood that the image analyzer may optionally determine or detect standard views, anatomy key-points, measurements, and/or perform segmentation. In the example where multiple different styled images are output by the style transfer generator for a given input image, the analyzed data may be aggregated resulting in aggregated analyzed data. For example, where the analyzed data is a vector or matrix, each vector or matrix may be added or otherwise combined resulting in a single vector or matrix. As an alternative to blocks514-516, the image analyzer may be trained using styled training data. In this example, at block513, computer-executable instructions stored on a memory of a device, such as a server, may be executed to apply the set of image data (e.g., the training data) and optionally the input image data to the style transfer generator to generate one or more styled sets of image data and/or styled input image data based multiple different style types or alternatively, based on a single standard style. At block515, computer-executable instructions stored on a memory of a device, such as a server, may be executed to train the image analyzer using the styled set or sets of image data. In one example, the training data may be styled with multiple different types of styles such that for each image frame of training data, multiple styled image frames may be generated, depending on the number of styles. Alternatively, the training data may be styled with only one style, which may be the standard style. 
At block517, the input image data or the styled input image data may be applied to and/or processed by the image analyzer to generate analyzed data. After either block516or517, block518may be initiated, at which computer-executable instructions stored on a memory of a device, such as a server, may be executed to process the analyzed data and/or aggregated analyzed data to determine likelihood of a cardiovascular anomaly. For example, analyzed data and/or aggregated analyzed data may be a number between 0 and 1 and the analyzed data and/or aggregated analyzed data may be processed to determine if the anomaly satisfies a certain threshold value (e.g., 0.7), in which case it may be determined that a cardiovascular anomaly is likely present. At block520, computer-executable instructions stored on a memory of a device, such as a server, may be executed to cause a computing device (e.g., an analyst device or any other device) to present the analyzed data and/or likelihood of cardiovascular anomaly on a computing device (e.g., analyst device). Referring now toFIG.6, a schematic block diagram of server600is illustrated. Server600may be the same or similar to server104ofFIG.1or otherwise one or more of the servers ofFIGS.1-5B. It is understood that an imaging systems, analyst device and/or datastore may additionally or alternatively include one or more of the components illustrated inFIG.6and server600may alone or together with any of the foregoing perform one or more of the operations of server600described herein. Server600may be designed to communicate with one or more servers, imaging systems, analyst devices, data stores, other systems, or the like. Server600may be designed to communicate via one or more networks. Such network(s) may include, but are not limited to, any one or more different types of communications networks such as, for example, cable networks, public networks (e.g., the Internet), private networks (e.g., frame-relay networks), wireless networks, cellular networks, telephone networks (e.g., a public switched telephone network), or any other suitable private or public packet-switched or circuit-switched networks. In an illustrative configuration, server600may include one or more processors602, one or more memory devices604(also referred to herein as memory604), one or more input/output (I/O) interface(s)606, one or more network interface(s)608, one or more transceiver(s)610, one or more antenna(s)634, and data storage620. The server600may further include one or more bus(es)618that functionally couple various components of the server600. The bus(es)618may include at least one of a system bus, a memory bus, an address bus, or a message bus, and may permit exchange of information (e.g., data (including computer-executable code), signaling, etc.) between various components of the server600. The bus(es)618may include, without limitation, a memory bus or a memory controller, a peripheral bus, an accelerated graphics port, and so forth. The bus(es)618may be associated with any suitable bus architecture. The memory604may include volatile memory (memory that maintains its state when supplied with power) such as random access memory (RAM) and/or non-volatile memory (memory that maintains its state even when not supplied with power) such as read-only memory (ROM), flash memory, ferroelectric RAM (FRAM), and so forth. Persistent data storage, as that term is used herein, may include non-volatile memory. 
In various implementations, the memory604may include multiple different types of memory such as various types of static random access memory (SRAM), various types of dynamic random access memory (DRAM), various types of unalterable ROM, and/or writeable variants of ROM such as electrically erasable programmable read-only memory (EEPROM), flash memory, and so forth. The data storage620may include removable storage and/or non-removable storage including, but not limited to, magnetic storage, optical disk storage, and/or tape storage. The data storage620may provide non-volatile storage of computer-executable instructions and other data. The memory604and the data storage620, removable and/or non-removable, are examples of computer-readable storage media (CRSM) as that term is used herein. The data storage620may store computer-executable code, instructions, or the like that may be loadable into the memory604and executable by the processor(s)602to cause the processor(s)602to perform or initiate various operations. The data storage620may additionally store data that may be copied to memory604for use by the processor(s)602during the execution of the computer-executable instructions. Moreover, output data generated as a result of execution of the computer-executable instructions by the processor(s)602may be stored initially in memory604, and may ultimately be copied to data storage620for non-volatile storage. The data storage620may store one or more operating systems (O/S)622; one or more optional database management systems (DBMS)624; and one or more program module(s), applications, engines, computer-executable code, scripts, or the like such as, for example, one or more implementation modules626, style module627, communication module628, content module629, style transfer module630, and/or image analyzer module631. Some or all of these modules may be sub-modules. Any of the components depicted as being stored in data storage620may include any combination of software, firmware, and/or hardware. The software and/or firmware may include computer-executable code, instructions, or the like that may be loaded into the memory604for execution by one or more of the processor(s)602. Any of the components depicted as being stored in data storage620may support functionality described in reference to correspondingly named components earlier in this disclosure. Referring now to other illustrative components depicted as being stored in the data storage620, the O/S622may be loaded from the data storage620into the memory604and may provide an interface between other application software executing on the server600and hardware resources of the server600. More specifically, the O/S622may include a set of computer-executable instructions for managing hardware resources of the server600and for providing common services to other application programs (e.g., managing memory allocation among various application programs). In certain example embodiments, the O/S622may control execution of the other program module(s) for content rendering. The O/S622may include any operating system now known or which may be developed in the future including, but not limited to, any server operating system, any mainframe operating system, or any other proprietary or non-proprietary operating system. The optional DBMS624may be loaded into the memory604and may support functionality for accessing, retrieving, storing, and/or manipulating data stored in the memory604and/or data stored in the data storage620. 
The DBMS624may use any of a variety of database models (e.g., relational model, object model, etc.) and may support any of a variety of query languages. The DBMS624may access data represented in one or more data schemas and stored in any suitable data repository including, but not limited to, databases (e.g., relational, object-oriented, etc.), file systems, flat files, distributed datastores in which data is stored on more than one node of a computer network, peer-to-peer network datastores, or the like. The optional input/output (I/O) interface(s)606may facilitate the receipt of input information by the server600from one or more I/O devices as well as the output of information from the server600to the one or more I/O devices. The I/O devices may include any of a variety of components such as a display or display screen having a touch surface or touchscreen; an audio output device for producing sound, such as a speaker; an audio capture device, such as a microphone; an image and/or video capture device, such as a camera; and so forth. Any of these components may be integrated into the server600or may be separate. The server600may further include one or more network interface(s)608via which the server600may communicate with any of a variety of other systems, platforms, networks, devices, and so forth. The network interface(s)608may enable communication, for example, with one or more wireless routers, one or more host servers, one or more web servers, and the like via one or more of networks. The antenna(s)634may include any suitable type of antenna depending, for example, on the communications protocols used to transmit or receive signals via the antenna(s)634. Non-limiting examples of suitable antennas may include directional antennas, non-directional antennas, dipole antennas, folded dipole antennas, patch antennas, multiple-input multiple-output (MIMO) antennas, or the like. The antenna(s)634may be communicatively coupled to one or more transceivers610or radio components to which or from which signals may be transmitted or received. Antenna(s)634may include, without limitation, a cellular antenna for transmitting or receiving signals to/from a cellular network infrastructure, an antenna for transmitting or receiving Wi-Fi signals to/from an access point (AP), a Global Navigation Satellite System (GNSS) antenna for receiving GNSS signals from a GNSS satellite, a Bluetooth antenna for transmitting or receiving Bluetooth signals including BLE signals, a Near Field Communication (NFC) antenna for transmitting or receiving NFC signals, a 900 MHz antenna, and so forth. The transceiver(s)610may include any suitable radio component(s) for, in cooperation with the antenna(s)634, transmitting or receiving radio frequency (RF) signals in the bandwidth and/or channels corresponding to the communications protocols utilized by the server600to communicate with other devices. The transceiver(s)610may include hardware, software, and/or firmware for modulating, transmitting, or receiving—potentially in cooperation with any of antenna(s)634—communications signals according to any of the communications protocols discussed above including, but not limited to, one or more Wi-Fi and/or Wi-Fi direct protocols, as standardized by the IEEE 802.11 standards, one or more non-Wi-Fi protocols, or one or more cellular communications protocols or standards. The transceiver(s)610may further include hardware, firmware, or software for receiving GNSS signals. 
The transceiver(s)610may include any known receiver and baseband suitable for communicating via the communications protocols utilized by the server600. The transceiver(s)610may further include a low noise amplifier (LNA), additional signal amplifiers, an analog-to-digital (A/D) converter, one or more buffers, a digital baseband, or the like. Referring now to functionality supported by the various program module(s) depicted inFIG.6, the implementation module(s)626may include computer-executable instructions, code, or the like that responsive to execution by one or more of the processor(s)602may perform functions including, but not limited to, overseeing coordination and interaction between one or more modules and computer executable instructions in data storage620, determining user selected actions and tasks, determining actions associated with user interactions, determining actions associated with user input, initiating commands locally or at remote devices, and the like. The style module(s)627may include computer-executable instructions, code, or the like that responsive to execution by one or more of the processor(s)602may perform functions including, but not limited to, analyzing and processing image data (e.g., still frames and/or video clips) and determining from the image data one or more style groups and/or representative image frames corresponding to a certain style. The communication module(s)628may include computer-executable instructions, code, or the like that responsive to execution by one or more of the processor(s)602may perform functions including, but not limited to, communicating with one or more devices, for example, via wired or wireless communication, communicating with servers (e.g., remote servers), communicating with datastores and/or databases, communicating with imaging systems and/or analyst devices, sending or receiving notifications or commands/directives, communicating with cache memory data, communicating with computing devices, and the like. The content module(s)629may include computer-executable instructions, code, or the like that responsive to execution by one or more of the processor(s)602may perform functions including, but not limited to, determining input images, processing input images, segmenting input images, and/or determining content in input images. The style transfer module(s)630may include computer-executable instructions, code, or the like that responsive to execution by one or more of the processor(s)602may perform functions including, but not limited to processing input images and/or representative style images to generate styled input images, each incorporating a style from the representative style images. The image analyzer module(s)631may include computer-executable instructions, code, or the like that responsive to execution by one or more of the processor(s)602may perform functions including, but not limited to processing the styled input images and detecting cardiovascular anomalies based on the styled input images and/or optionally determining or detecting standard views, anatomy key-points, measurements, and/or performing segmentation based on the styled input images. Although specific embodiments of the disclosure have been described, one of ordinary skill in the art will recognize that numerous other modifications and alternative embodiments are within the scope of the disclosure. 
For example, any of the functionality and/or processing capabilities described with respect to a particular device or component may be performed by any other device or component. Further, while various illustrative implementations and architectures have been described in accordance with embodiments of the disclosure, one of ordinary skill in the art will appreciate that numerous other modifications to the illustrative implementations and architectures described herein are also within the scope of this disclosure. Certain aspects of the disclosure are described above with reference to block and flow diagrams of systems, methods, apparatuses, and/or computer program products according to example embodiments. It will be understood that one or more blocks of the block diagrams and flow diagrams, and combinations of blocks in the block diagrams and the flow diagrams, respectively, may be implemented by execution of computer-executable program instructions. Likewise, some blocks of the block diagrams and flow diagrams may not necessarily need to be performed in the order presented, or may not necessarily need to be performed at all, according to some embodiments. Further, additional components and/or operations beyond those depicted in blocks of the block and/or flow diagrams may be present in certain embodiments. Accordingly, blocks of the block diagrams and flow diagrams support combinations of means for performing the specified functions, combinations of elements or steps for performing the specified functions, and program instruction means for performing the specified functions. It will also be understood that each block of the block diagrams and flow diagrams, and combinations of blocks in the block diagrams and flow diagrams, may be implemented by special-purpose, hardware-based computer systems that perform the specified functions, elements or steps, or combinations of special-purpose hardware and computer instructions. Program module(s), applications, or the like disclosed herein may include one or more software components, including, for example, software objects, methods, data structures, or the like. Each such software component may include computer-executable instructions that, responsive to execution, cause at least a portion of the functionality described herein (e.g., one or more operations of the illustrative methods described herein) to be performed. A software component may be coded in any of a variety of programming languages. An illustrative programming language may be a lower-level programming language such as an assembly language associated with a particular hardware architecture and/or operating system platform. A software component including assembly language instructions may require conversion into executable machine code by an assembler prior to execution by the hardware architecture and/or platform. Another example programming language may be a higher-level programming language that may be portable across multiple architectures. A software component including higher-level programming language instructions may require conversion to an intermediate representation by an interpreter or a compiler prior to execution. Other examples of programming languages include, but are not limited to, a macro language, a shell or command language, a job control language, a script language, a database query or search language, or a report writing language. 
In one or more example embodiments, a software component including instructions in one of the foregoing examples of programming languages may be executed directly by an operating system or other software component without having to be first transformed into another form. A software component may be stored as a file or other data storage construct. Software components of a similar type or functionally related may be stored together such as, for example, in a particular directory, folder, or library. Software components may be static (e.g., pre-established or fixed) or dynamic (e.g., created or modified at the time of execution). Software components may invoke or be invoked by other software components through any of a wide variety of mechanisms. Invoked or invoking software components may include other custom-developed application software, operating system functionality (e.g., device drivers, data storage (e.g., file management) routines, other common routines, and services, etc.), or third-party software components (e.g., middleware, encryption, or other security software, database management software, file transfer or other network communication software, mathematical or statistical software, image processing software, and format translation software). Software components associated with a particular solution or system may reside and be executed on a single platform or may be distributed across multiple platforms. The multiple platforms may be associated with more than one hardware vendor, underlying chip technology, or operating system. Furthermore, software components associated with a particular solution or system may be initially written in one or more programming languages, but may invoke software components written in another programming language. Computer-executable program instructions may be loaded onto a special-purpose computer or other particular machine, a processor, or other programmable data processing apparatus to produce a particular machine, such that execution of the instructions on the computer, processor, or other programmable data processing apparatus causes one or more functions or operations specified in the flow diagrams to be performed. These computer program instructions may also be stored in a computer-readable storage medium (CRSM) that upon execution may direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable storage medium produce an article of manufacture including instruction means that implement one or more functions or operations specified in the flow diagrams. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational elements or steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process. Additional types of CRSM that may be present in any of the devices described herein may include, but are not limited to, programmable random access memory (PRAM), SRAM, DRAM, RAM, ROM, electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the information and which can be accessed. 
Combinations of any of the above are also included within the scope of CRSM. Alternatively, computer-readable communication media (CRCM) may include computer-readable instructions, program module(s), or other data transmitted within a data signal, such as a carrier wave, or other transmission. However, as used herein, CRSM does not include CRCM. Although embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that the disclosure is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as illustrative forms of implementing the embodiments. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments could include, while other embodiments do not include, certain features, elements, and/or steps. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements, and/or steps are included or are to be performed in any particular embodiment. It should be understood that any of the computer operations described herein above may be implemented at least in part as computer-readable instructions stored on a computer-readable memory. It will of course be understood that the embodiments described herein are illustrative, and components may be arranged, substituted, combined, and designed in a wide variety of different configurations, all of which are contemplated and fall within the scope of this disclosure. The foregoing description of illustrative embodiments has been presented for purposes of illustration and of description. It is not intended to be exhaustive or limiting with respect to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the disclosed embodiments. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents. | 64,513 |
11861839 | It should be noted that the figures are purely diagrammatic and not drawn to scale. In the figures, elements which correspond to elements already described may have the same reference numerals. LIST OF REFERENCE NUMBERS The following list of reference numbers is provided for facilitating the interpretation of the drawings and shall not be construed as limiting the claims.
020 database
022 data communication
030 image data
040 grid data
060 display
062 display data
080 user input device
082 user input data
100 system for preprocessing of medical image data
120 input interface for image data, grid data
122 internal data communication
140 processor
142 internal data communication
144 internal data communication
160 memory
180 user interface subsystem
182 display output interface
184 user input interface
200 image data
210 image data of anatomical structure
220 regular grid
222 horizontal grid line
224 vertical grid line
300 image data
310 image data of anatomical structure
320 grid applied to anatomical structure
322, 324 grid lines
400 segmentation model
500 method of preprocessing of medical image data
510 accessing image data of anatomical structure
520 segmenting the anatomical structure
530 assigning a grid to image data of anatomical structure
540 providing addressing to image data based on grid
600 computer-readable medium
610 non-transitory data
DETAILED DESCRIPTION OF EMBODIMENTS FIG. 1 shows a system 100 for preprocessing medical image data for machine learning. The system 100 comprises an input interface 120 configured to access the medical image data. Accordingly, the input interface 120 may be considered an embodiment of an image data interface. In the example of FIG. 1, the input interface 120 is shown to be connected to an external database 020 which comprises the medical image data, namely as image data 030 showing an anatomical structure. The database 020 may, for example be constituted by, or be part of, a Picture Archiving and Communication System (PACS) of a Hospital Information System (HIS) to which the system 100 may be connected or comprised in. Accordingly, the system 100 may obtain access to the image data 030 via external data communication 022. Alternatively, the image data 030 may be accessed from an internal data storage of the system 100 (not shown). In general, the input interface 120 may take various forms, such as a network interface to a local or wide area network, e.g., the Internet, a storage interface to an internal or external data storage, etc. The system 100 is further shown to comprise a processor 140 configured to internally communicate with the input interface 120 via data communication 122, and a memory 160 accessible by the processor 140 via data communication 142. The processor 140 is further shown to internally communicate with a user interface subsystem 180 via data communication 144. The processor 140 may be configured to, during operation of the system 100, segment the anatomical structure in the image data 030 to identify the anatomical structure as a delineated part of the image data, assign a grid to the delineated part of the image data, the grid representing a standardized partitioning of the type of anatomical structure, and provide a machine learning algorithm with an addressing to the image data in the delineated part on the basis of coordinates in the assigned grid. This operation of the system 100, and various optional aspects thereof, will be explained in more detail with reference to FIGS. 2-4. FIG. 1 also shows various optional system components.
For example, the system100may further comprise a grid data interface to a database which comprises grid data defining the grid, with the processor140accessing the grid data via the grid data interface, e.g., instead of accessing the grid data locally or generating the grid ‘on the fly’. In the example ofFIG.1, the system100is shown to access the grid data040in a same database020and via a same input interface120as used to access the image data030. Alternatively, a separate image data interface and grid data interface may be provided. In general, the image data030and the grid data040may be accessed from a same location, e.g., a same database020as in the case ofFIG.1, but also from different locations, e.g., from different databases. As another optional aspect, the system100may comprise a user interface subsystem180which may be configured to, during operation of the system100, enable a user to interact with the system100, for example using a graphical user interface. The user interface subsystem180is shown to comprise a user input interface184configured to receive user input data082from a user input device080operable by the user. The user input device080may take various forms, including but not limited to a computer mouse, touch screen, keyboard, microphone, etc.FIG.1shows the user input device to be a computer mouse080. In general, the user input interface184may be of a type which corresponds to the type of user input device080, i.e., it may be a thereto corresponding type of user device interface184. The user interface subsystem180is further shown to comprise a display output interface182configured to provide display data062to a display060to visualize output of the system100. In the example ofFIG.1, the display is an external display060. Alternatively, the display may be an internal display. In general, the system100may be embodied as, or in, a single device or apparatus, such as a workstation or imaging apparatus or mobile device. The device or apparatus may comprise one or more microprocessors which execute appropriate software. The software may have been downloaded and/or stored in a corresponding memory, e.g., a volatile memory such as RAM or a non-volatile memory such as Flash. Alternatively, the functional units of the system, e.g., the input interface, the optional user input interface, the optional display output interface and the processor, may be implemented in the device or apparatus in the form of programmable logic, e.g., as a Field-Programmable Gate Array (FPGA). In general, each functional unit of the system may be implemented in the form of a circuit. It is noted that the system100may also be implemented in a distributed manner, e.g., involving different devices or apparatuses. For example, the distribution may be in accordance with a client-server model, e.g., using a server and a thin-client. FIG.2shows a schematic representation of a medical image200which comprises an anatomical structure210in the form of the left ventricular myocardium. It is noted that for ease of interpretation ofFIG.2, the surroundings of the anatomical structure210are not shown but normally also contained in the medical image200. In the example of the left ventricular myocardium, these may be blood pools, etc. The image data constituting the medical image200may be comprised of an array of image elements such as pixels together representing a 2D image or voxels together representing a volumetric image. 
Such image elements may represent discrete samples, with the array of image elements representing a grid of samples and with the relative position of samples being defined by a sampling grid. FIG.2illustrates schematically such a sampling grid220of the medical image200, which is normally a regular Cartesian grid comprised of horizontal222and vertical grid lines224and which normally does not take the image contents into account. It is noted that for ease of interpretation ofFIG.2, the sampling grid220is shown coarsely, e.g., having relatively few grid lines. Normally, the sampling grid220corresponds to the image resolution, e.g., for an image of 2048×2048 pixels, the sample grid may be considered to be a regular grid of 2048×2048 grid lines. The addressing to the image data is based on grid coordinates, e.g., (0,0) for the image element at the top-left corner to (2047, 2047) at the bottom-right corner. In accordance with the invention as claimed, a normalized addressing to the image data of the anatomical structure is provided by way of a grid which is assigned to a segmentation of the anatomical structure in the medical image. This is illustrated inFIG.3, where a grid320is assigned to the image data of the anatomical structure310. The grid may be specific to the particular type of anatomical structure310, e.g., specific to a left ventricular myocardium, and may provide a standardized partitioning of the anatomical structure by means of grid lines322,324. It is noted that the grid lines322,324of the unassigned grid (not shown inFIG.3) typically do not define a regular grid but rather may follow, in terms of exterior outline, a standard shape of the anatomical structure, e.g., as represented by an anatomical atlas, while the interior of the grid may follow interior structures within the anatomical structure or provide another type of standardized partitioning of the anatomical structure, e.g., into cells of equal size in the unassigned grid. Effectively, the grid320may be optimized for being applied to a particular type of anatomical structure, and in some embodiments, also for a particular medical application. The machine learning algorithm may be provided access to the image data of the anatomical structure310on the basis of coordinates of the grid320. For example, if the machine learning algorithm accesses the image data sequentially, e.g., based on read-outs at coordinates from (0,0), (0,1), . . . , (0, n), the machine learning algorithm may for example access the image data of an outer layer of the left ventricular myocardium310, rather than in theFIG.2example the top-most row of the medical image200which does show the anatomical structure200at all. In general, a grid may be predefined for an anatomical structure, e.g., an organ of interest, and optionally also for a particular medical application. A grid may be generated in various ways. For example, the general shape of the grid may be learned from a cohort of patients whereas the number of grid points/lines and their relative positions within the grid may be manually determined or automatically based on certain cost functions. The predefined grid may be stored as grid data so that it may be accessed by the system when required. Multiple predefined grids may be provided, e.g., for different types of anatomical structures, and/or for a particular type of anatomical structure for different types of medical applications. 
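By way of illustration (and not as part of the described embodiments), the following C++ sketch shows how a machine learning component might read image intensities through an assigned grid rather than through raw pixel coordinates; the names AssignedGrid and readThroughGrid, and the nearest-neighbour read, are assumptions introduced only for this sketch.

// Illustrative sketch: anatomy-aligned addressing. An assigned grid maps
// grid coordinates (i, j) to pixel locations inside the delineated
// anatomical structure, so that reading (0,0), (0,1), ... traverses the
// structure itself rather than the top row of the raw image.
#include <vector>

struct PixelCoord { double u = 0.0, v = 0.0; };   // location in the image

struct Image2D {
    int width = 0, height = 0;
    std::vector<float> data;                       // row-major intensities
    float at(int u, int v) const { return data[v * width + u]; }
};

class AssignedGrid {
public:
    AssignedGrid(int rows, int cols) : rows_(rows), cols_(cols), points_(rows * cols) {}

    // Record where grid coordinate (i, j) lands in the image after the grid
    // has been fitted to the segmented anatomical structure.
    void setPoint(int i, int j, PixelCoord p) { points_[i * cols_ + j] = p; }

    // Addressing used by the learning algorithm: grid coordinates -> intensity.
    float sample(const Image2D& img, int i, int j) const {
        const PixelCoord& p = points_[i * cols_ + j];
        int u = static_cast<int>(p.u + 0.5);       // nearest-neighbour read;
        int v = static_cast<int>(p.v + 0.5);       // interpolation is also possible
        return img.at(u, v);
    }

    int rows() const { return rows_; }
    int cols() const { return cols_; }

private:
    int rows_, cols_;
    std::vector<PixelCoord> points_;
};

// Sequential access through the grid visits the anatomy layer by layer.
std::vector<float> readThroughGrid(const Image2D& img, const AssignedGrid& grid) {
    std::vector<float> values;
    for (int i = 0; i < grid.rows(); ++i)
        for (int j = 0; j < grid.cols(); ++j)
            values.push_back(grid.sample(img, i, j));
    return values;
}

In such a sketch, iterating sequentially over grid coordinates visits the anatomical structure itself, in line with the left ventricular myocardium example above, rather than the top-most image row.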
For example, a grid may be defined, and then later selected from the database, to be a high-resolution mesh with boundaries that correspond to the typical American Heart Association (AHA) segments. Alternatively, a grid may be chosen to be a high-resolution mesh with boundaries that correspond in more detail to the supply territories of the coronary arteries, for example, if the medical application requires more detail in these regions. As there are a few different variants of coronary artery anatomy, the grid may also be chosen in dependence of the anatomy of the actual coronary anatomy variant of the image or patient at hand. It is noted that although the above refers to the anatomical structure being a heart, similar considerations apply to other anatomical structures, such as the brain. Another example is that the grid resolution may be chosen in dependence of the image acquisition protocol, e.g., lower resolution for 3D US compared to CT. In the case of 2D acquisitions, the definition of the grid may depend on the actual view, e.g., 2-chamber view, 3-chamber view, 4-chamber view or axis view. In a specific example, a normalized grid may be generated in a manner as described in ‘Integrating Viability Information into a Cardiac Model for Interventional Guidance’ by Lehmann et al, FIMH 2009, pp. 312-320, 2009, for the construction of a volumetric mesh in the left ventricle, see section 3.3. This approach is not limited to the left ventricle and may also be used for other structures. To enable the assignment of the predefined grid to the image data of an anatomical structure, e.g., of a patient, the anatomical structure may be segmented in the medical image. For that purpose, known segmentation algorithms and techniques may be used, as are known per se from the field of medical image analysis. One example of a class of segmentation algorithms is model-based segmentation, in which prior knowledge may be used for segmentation, see, e.g., “Automatic Model-based Segmentation of the Heart in CT Images” by Ecabert et al., IEEE Transactions on Medical Imaging 2008, 27(9), pp. 1189-1201.FIG.4shows an example of such a segmentation model, being in this example a heart model. The predefined grid may then be assigned to the image data of the anatomical model and thereby effectively adapted to the particular position and pose of the anatomical structure. For example, anatomical landmarks may be used to guide the adaption of the grid. Such anatomical landmarks may be identified in the image data using the segmentation. In a specific example, the segmentation may be performed by a segmentation model which comprises anatomical landmarks. The patient's anatomical landmarks are now known from the applied segmentation model, which provides the processor with information on the position, size and pose of the anatomical structure in the medical image data. Parts of the grid may be linked to these anatomical landmarks, on which basis the grid may then be applied to the medical image and in particular the anatomical structure contained therein. In another specific example, the segmentation may be an atlas-based segmentation as described in ‘Atlas-based image segmentation: A Survey’ by Kalinic et al., 2009, and may thus be based on image registration of an atlas image to the medical image. 
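As one possible illustration of the landmark-guided adaptation mentioned above, the sketch below estimates a 2D affine transform in the least-squares sense from template landmarks to the corresponding patient landmarks obtained from the segmentation, and applies it to every point of the predefined grid. The use of Eigen and of a simple affine model are assumptions of this sketch; the embodiments do not prescribe a particular fitting method, and a non-rigid warp may equally be used.

// Illustrative sketch: adapting a predefined grid to a patient image by
// means of anatomical landmarks. A 2D affine transform is estimated in
// the least-squares sense from template landmarks to patient landmarks
// and then applied to every grid point.
#include <Eigen/Dense>
#include <vector>

using Point2 = Eigen::Vector2d;

// Fit y ~= A * x + t from paired landmarks (template -> patient).
void fitAffine2D(const std::vector<Point2>& tmpl,
                 const std::vector<Point2>& patient,
                 Eigen::Matrix2d& A, Eigen::Vector2d& t) {
    const int n = static_cast<int>(tmpl.size());
    Eigen::MatrixXd M(n, 3);
    Eigen::VectorXd bu(n), bv(n);
    for (int k = 0; k < n; ++k) {
        M(k, 0) = tmpl[k].x();  M(k, 1) = tmpl[k].y();  M(k, 2) = 1.0;
        bu(k) = patient[k].x();
        bv(k) = patient[k].y();
    }
    // QR-based least-squares solve, one output coordinate at a time.
    Eigen::VectorXd pu = M.colPivHouseholderQr().solve(bu);
    Eigen::VectorXd pv = M.colPivHouseholderQr().solve(bv);
    A << pu(0), pu(1),
         pv(0), pv(1);
    t << pu(2), pv(2);
}

// Apply the estimated transform to the points of the predefined grid.
std::vector<Point2> adaptGrid(const std::vector<Point2>& gridPoints,
                              const Eigen::Matrix2d& A, const Eigen::Vector2d& t) {
    std::vector<Point2> adapted;
    adapted.reserve(gridPoints.size());
    for (const Point2& p : gridPoints) adapted.push_back(A * p + t);
    return adapted;
}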
Next to the use of anatomical landmarks provided by segmentation, various other ways of fitting a grid to the image data of an anatomical structure on the basis of a segmentation of the anatomical structure are equally within reach of the skilled person. For example, there may exist correspondences between the segmentation model and the grid which may not necessarily represent anatomical landmarks. Another example is that the predefined grid may have a specific shape and that the grid may be adapted to match the segmentation of the anatomical structure while using a cost function which attempts to minimize certain deformations to the grid. Yet another example is that the segmentation may provide a geometric structure which may be converted into, or even used directly as the grid. For example, if the segmentation is performed using a segmentation model, the geometric primitives of the segmentation model may be processed, e.g., by tessellation which is constrained to provide a same mesh topology also for slightly different shapes, to generate the grid. In some embodiments, such a segmentation model may directly provide the grid, e.g., with its vertices defining grid points. Having assigned the grid to the image data of the anatomical structure, the machine learning algorithm may be executed, e.g., by the system itself or by another entity. Effectively, the grid may be used to provide an ‘on the fly’ addressing. Alternatively, the image data may be resampled in correspondence with the grid before the machine learning algorithm is executed. In this case, the assigned grid may effectively be used as a resampling grid specifying at which locations the original medical image is to be sampled. Such resampling is known per se, and may comprise converting the original discrete image samples into a continuous surface, e.g., by image reconstruction, and then resampling the continuous surface at the positioned indicated by the sampling grid. Such image reconstruction may be performed using interpolation, e.g., bicubic interpolation in case of 2D image data or tri-cubic interpolation in case of 3D image data. The resampled image data may then be used as input the machine learning algorithm instead of the original image data. It is noted that providing ‘on the fly’ addressing may also be considered a form of resampling, namely one in which the resampling is performed on the fly in response to the image data being requested at a particular grid coordinate. In general, such resampling may effectively ‘crop out’ non-relevant image data and avoid partial volume effects. In addition to passing the resampled image to a machine learning algorithm, the coordinates within this predefined grid may be used by the machine learning since they now have an anatomical meaning. Whether these coordinates are passed as additional channel or inferred from the layout of the sampled image intensities may depend on the software architecture. The former may be explained as follows with reference to the left ventricular myocardium, which as an anatomical structure may be described by a coordinate system indicating height h, angle phi and distance d from epicardial wall. These coordinates may be associated with the resampled grid and may be passed together with (on the fly) resampled intensity values I to the neural network. In other words, instead of using only intensities I, a vector (I, h, phi, d) may be used as input. 
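A minimal sketch of such resampling is given below, assuming bilinear interpolation at the image positions of the assigned grid and the (I, h, phi, d) input vector mentioned above for the left ventricular myocardium; the structure names, such as GridSample, are placeholders introduced only for this sketch.

// Illustrative sketch: resample the original image at the positions of
// the assigned grid and pass the anatomical grid coordinates along with
// the intensity, i.e. a vector (I, h, phi, d) per grid point.
#include <cmath>
#include <vector>

struct Image2D {
    int width = 0, height = 0;
    std::vector<float> data;                       // row-major intensities
    float at(int u, int v) const { return data[v * width + u]; }
};

struct PixelCoord { double u = 0.0, v = 0.0; };

// One resampled input element: intensity plus anatomical coordinates.
struct GridSample { float I; float h; float phi; float d; };

// Bilinear read at a non-integer image position (bounds checks omitted).
float bilinear(const Image2D& img, double u, double v) {
    int u0 = static_cast<int>(std::floor(u)), v0 = static_cast<int>(std::floor(v));
    double fu = u - u0, fv = v - v0;
    return static_cast<float>(
        (1 - fu) * (1 - fv) * img.at(u0,     v0    ) + fu * (1 - fv) * img.at(u0 + 1, v0    ) +
        (1 - fu) * fv       * img.at(u0,     v0 + 1) + fu * fv       * img.at(u0 + 1, v0 + 1));
}

// gridPos[k] is the image location of the k-th grid point; gridCoord[k]
// carries its anatomical coordinates (height h, angle phi, wall distance d).
std::vector<GridSample> resampleOnGrid(const Image2D& img,
                                       const std::vector<PixelCoord>& gridPos,
                                       const std::vector<GridSample>& gridCoord) {
    std::vector<GridSample> out(gridPos.size());
    for (std::size_t k = 0; k < gridPos.size(); ++k) {
        out[k] = gridCoord[k];                          // copies h, phi, d
        out[k].I = bilinear(img, gridPos[k].u, gridPos[k].v);
    }
    return out;
}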
With respect to the machine learning algorithm, it is noted that the claimed measures may be applied to any machine learning algorithm which uses medical image data of an anatomical structure as input. For example, depending on the application, different types of neural networks may be used to carry out an image or voxel-wise classification task. For example, the resampled image can be used as input to a foveal fully convolutional network, e.g., as described in T. Brosch, A. Saalbach, “Foveal fully convolutional nets for multi-organ segmentation”, SPIE 2018. In general, the grid may provide a standardized and normalized partitioning of a type of anatomical structure. Such a grid may be predefined and stored, e.g., in the form of grid data, for a number of different anatomical structures and/or for a number of different medical applications. The assigned grid may be visualized to a user, e.g., using the aforementioned display output interface182of the system100ofFIG.1, for example as an overlay over the medical image. FIG.5shows a computer-implemented method500of preprocessing medical image data for machine learning. It is noted that the method500may, but does not need to, correspond to an operation of the system100as described with reference toFIG.1and others. The method500comprises, in an operation titled “ACCESSING IMAGE DATA OF ANATOMICAL STRUCTURE”, accessing510image data comprising an anatomical structure. The method500further comprises, in an operation titled “SEGMENTING THE ANATOMICAL STRUCTURE”, segmenting520the anatomical structure in the image data to obtain a segmentation of the anatomical structure as a delineated part of the image data. The method500further comprises, in an operation titled “ASSIGNING A GRID TO IMAGE DATA OF ANATOMICAL STRUCTURE”, assigning530a grid to the delineated part of the image data, the grid representing a partitioning of an exterior and interior of the type of anatomical structure using grid lines, said assigning comprising adapting the grid to fit the segmentation of the anatomical structure in the image data. The method500further comprises, in an operation titled “PROVIDING ADDRESSING TO IMAGE DATA BASED ON GRID”, providing540a machine learning algorithm with an addressing to the image data in the delineated part on the basis of coordinates in the assigned grid. It will be appreciated that, in general, the above operations may be performed in any suitable order, e.g., consecutively, simultaneously, or a combination thereof, subject to, where applicable, a particular order being necessitated, e.g., by input/output relations. The method500may be implemented on a computer as a computer implemented method, as dedicated hardware, or as a combination of both. As also illustrated inFIG.6, instructions for the computer, e.g., executable code, may be stored on a computer readable medium600, e.g., in the form of a series610of machine readable physical marks and/or as a series of elements having different electrical, e.g., magnetic, or optical properties or values. The executable code may be stored in a transitory or non-transitory manner. Examples of computer readable mediums include memory devices, optical storage devices, integrated circuits, servers, online software, etc.FIG.6shows an optical disc600. In accordance with an abstract of the present application, a system and computer-implemented method may be provided for preprocessing medical image data for machine learning. Image data may be accessed which comprises an anatomical structure. 
The anatomical structure in the image data may be segmented to identify the anatomical structure as a delineated part of the image data. A grid may be assigned to the delineated part of the image data, the grid representing a standardized partitioning of the type of anatomical structure. A machine learning algorithm may then be provided with an addressing to the image data in the delineated part on the basis of coordinates in the assigned grid. In some embodiments, the image data of the anatomical structure may be resampled using the assigned grid. Advantageous, a standardized addressing to the image data of the anatomical structure may be provided, which may reduce the computational overhead of the machine learning, require fewer training data, etc. Examples, embodiments or optional features, whether indicated as non-limiting or not, are not to be understood as limiting the invention as claimed. It will be appreciated that the invention also applies to computer programs, particularly computer programs on or in a carrier, adapted to put the invention into practice. The program may be in the form of a source code, an object code, a code intermediate source and an object code such as in a partially compiled form, or in any other form suitable for use in the implementation of the method according to the invention. It will also be appreciated that such a program may have many different architectural designs. For example, a program code implementing the functionality of the method or system according to the invention may be sub-divided into one or more sub-routines. Many different ways of distributing the functionality among these sub-routines will be apparent to the skilled person. The sub-routines may be stored together in one executable file to form a self-contained program. Such an executable file may comprise computer-executable instructions, for example, processor instructions and/or interpreter instructions (e.g. Java instructions). Alternatively, one or more or all of the sub-routines may be stored in at least one external library file and linked with a main program either statically or dynamically, e.g. at run-time. The main program contains at least one call to at least one of the sub-routines. The sub-routines may also comprise function calls to each other. An embodiment relating to a computer program product comprises computer-executable instructions corresponding to each processing stage of at least one of the methods set forth herein. These instructions may be sub-divided into sub-routines and/or stored in one or more files that may be linked statically or dynamically. Another embodiment relating to a computer program product comprises computer-executable instructions corresponding to each means of at least one of the systems and/or products set forth herein. These instructions may be sub-divided into sub-routines and/or stored in one or more files that may be linked statically or dynamically. The carrier of a computer program may be any entity or device capable of carrying the program. For example, the carrier may include a data storage, such as a ROM, for example, a CD ROM or a semiconductor ROM, or a magnetic recording medium, for example, a hard disk. Furthermore, the carrier may be a transmissible carrier such as an electric or optical signal, which may be conveyed via electric or optical cable or by radio or other means. When the program is embodied in such a signal, the carrier may be constituted by such a cable or other device or means. 
Alternatively, the carrier may be an integrated circuit in which the program is embedded, the integrated circuit being adapted to perform, or used in the performance of, the relevant method. It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. Use of the verb “comprise” and its conjugations does not exclude the presence of elements or stages other than those stated in a claim. The article “a” or “an” preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. | 25,415 |
11861840 | DETAILED DESCRIPTIONS OF THE EMBODIMENTS Extracting planes in real-time remains challenging. On the one hand, one needs to maintain a minimum rate of miss-detection and over-detection. On the other, sufficient inliers have to be obtained in order to accurately estimate the planar parameters.FIGS.1A,1B,1C, and1Dshow a classic example of plane extraction from an image.FIG.1Ais an RGB image of a scene.FIG.1Bis the corresponding depth image.FIG.1Cis the human labeled planar regions where each color represents a physical plane.FIG.1Dis the plane detection result generated by our algorithm. A staircase is composed of multiple partially occluded planar surfaces and is particularly challenging to state-of-the-art solutions. Prior techniques tend to either miss small-scale planes or incorrectly splits a large plane into multiple, smaller ones. In this disclosure, systems and methods for extracting one or more planar surfaces from depth image are disclosed. In some embodiments, a real-time algorithm is used to extract multi-scale planar surfaces in real-time. The algorithm first dynamically divides the depth image into rectangle regions, where points in each region lie on a common plane. Then the algorithm generates planar primitives by clustering these regions into some distinct groups according to their plane parameters. Finally, the pixel-wise segmentation results are achieved by growing each distinct group. The advantages of the disclosed systems and methods comprise: (1) a reduction of the number of regions to be clustered and improved plane fitting accuracy, because of the dynamic region size adjusting algorithm; (2) is guaranteeing the worst-case time complexity to be log-linear, because of the region clustering algorithm; and (3) superior performance of the disclosed algorithm than the state-of-art method in quality and speed (the disclosed algorithm runs an order of magnitude faster). In this disclosure, the plane extraction problem can be formulated as: given a depth image D, the goal is to detect a set of planes {Gi}, i∈{1, 2, . . . k} where each pixel of D can be classified into {Gi}, and a set of non-planes B so that:(1) The size of set {Gi} is minimum;(2) The total number of pixels assigned to {Gi} is maximum; and(3) For every pixel assigned to Gi, the plane fitting error should be less than a predefined threshold. Existing techniques on depth images can be classified into three categories: direct clustering, RANSAC (Random Sample Consensus) and region growing. Direct clustering groups every input point in terms of their estimated surface normal values. Under severe occlusions, reliably estimating surface normals from point locations is particularly challenging: the size of the area (neighborhood) one picks for estimating the normal affects greatly the final results. The existing approach is also slower as it often requires using grid discretization in the normal space where high precision corresponds to fine discretization and hence slower speed. Though the classical RANSAC algorithm is able to conduct plane fitting, it is even more computationally costly, especially on a scene with multiple planes. The key idea of region grown-based algorithms is to expand the seed region until the fitting error exceeds certain thresholds. Various existing techniques have been proposed to handle different type of regions, e.g., 3D voxels vs. 2D rectangular regions. Such existing techniques include, for example, an efficient algorithm to detect planes in unorganized point clouds. 
It first partitions the entire 3D space into a large number of small voxels and then performs clustering by merging the seed with its nearest 26 voxels. The Agglomerative Hierarchical Clustering (AHC) is a 2D region growing method. It first uniformly divides the entire image as small grids and then builds a priority queue for tracking the best seed region yielding to the minimum plane fitting error. Once a seed region is successfully fetched, AHC then searches for its four neighbor grids to see if any region can be merged with seed. After merging, AHC re-inserts the merged grid into a priority queue and re-fetch the new best seed. The process repeats until the queue is empty. The AHC approach has two major limitations: it is difficult to select the proper grid size and it cannot deal with heavy occlusions. The former directly affects the ability to extract small or non-uniform planar surfaces such as stairs: one can set the grid size to be very small (e.g., 4×4) to improve robustness at the cost of much higher computational cost (see details in the experiment section). The latter is inherently due to the existence of multi-scale planar regions. In particular, it is unable to group disconnected regions. Conceptually, one can apply RANSAC to resolve the grouping problem but at a much higher computational cost. To at least mitigate the above-described disadvantage of existing technologies in plane extraction, in this disclosure, a novel region growing technique is disclosed for ultra-fast and reliable planar surface detections (seeFIGS.2A-2E). Compared with existing technologies, the advantages of the disclosed approach include:(1). The disclosed systems and methods reduce the number of regions to be clustered and improve the plane fitting accuracy through dynamic region size adjusting (see Algorithm 1 below for details). Comparing to the methods using uniform region size, the disclosed algorithm will not lose any useful information and will not harm the time complexity;(2). The disclosed systems and methods overcome the plane splitting problem (usually due to the occlusion) by clustering all extracted seed regions into some distinct groups (see Algorithm 2 below). By employing a designed auto-balanced search tree, the complexity of this clustering algorithm is log-linear at the worst case; and(3). The disclosed systems and methods conduct several comprehensive experiments, which show that the disclosed algorithm outperforms the existing method in quality and speed (the disclosed algorithm runs an order of magnitude faster). FIGS.2A,2B,2C,2D, and2Eare graphical illustrations of an exemplary method for extracting planar surface from depth image, according to various embodiments of the present disclosure.FIG.2Ais an input depth image.FIG.2Bis the Distance Change Indication (DCI) map generated based onFIG.2A, wherein the black pixels stand for the non-smooth region.FIG.2Cshows the planar region extraction results, wherein the size of each region is dynamically adjusted according to the DCI.FIG.2Dis the result after planar region clustering step.FIG.2Eis the pixel-wise segmentation result based onFIG.2D. The overall pipeline of the disclosed algorithm is shown inFIGS.2A-2Efor fast plane detection from a depth image. In some embodiments, a recursive region extraction algorithm is put forward to divide the depth image into planar regions (seeFIG.2C), then these regions are clustered into distinct groups by a designed auto-balanced search tree (seeFIG.2D). 
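A schematic C++ sketch of the first of these steps, the recursive planar region extraction (presented in detail as Algorithm 1 below), is given here. The depth-smoothness test and the plane fit, which in the described embodiments are O(1) operations on summed-area tables, are supplied by the caller in this sketch, so the names isSmooth and fitPlane are placeholders rather than part of the disclosure.

// Schematic sketch of the recursive (quadtree-like) region extraction:
// a rectangle is accepted when all of its pixels are depth-smooth and the
// plane fit satisfies the MSE and curvature thresholds; otherwise it is
// split into four quarters and the recursion continues.
#include <functional>
#include <vector>

struct Rect { int x, y, w, h; };
struct FitResult { double mse; double curvature; };

void extractPlanarRegions(const Rect& r, int minSize, double maxMse, double maxCurv,
                          const std::function<bool(const Rect&)>& isSmooth,
                          const std::function<FitResult(const Rect&)>& fitPlane,
                          std::vector<Rect>& out) {
    if (r.w < minSize || r.h < minSize) return;          // too small to accept
    if (isSmooth(r)) {
        FitResult f = fitPlane(r);
        if (f.mse <= maxMse && f.curvature <= maxCurv) {
            out.push_back(r);                            // keep the largest valid region
            return;
        }
    }
    int hw = r.w / 2, hh = r.h / 2;                      // otherwise split into quarters
    Rect q[4] = { { r.x,      r.y,      hw,       hh       },
                  { r.x + hw, r.y,      r.w - hw, hh       },
                  { r.x,      r.y + hh, hw,       r.h - hh },
                  { r.x + hw, r.y + hh, r.w - hw, r.h - hh } };
    for (const Rect& sub : q)
        extractPlanarRegions(sub, minSize, maxMse, maxCurv, isSmooth, fitPlane, out);
}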
Finally, the plane proposal of the whole image is verified by region growing to produce the pixel-wise segmentation (seeFIG.2E). Recursive Planar Region Extraction In some embodiments, the planar region in the disclosed algorithm is defined as a rectangle area in the depth image, in which each point lies on a common plane in 3D space. The disclosed algorithm extracts such regions to estimate plane normals and distances, which are the most intrinsic parameters to combine discrete planar fragments. An existing approach is to divide the entire image into many small non-overlapping grids, then fit a plane for each small grid. The trickiest part is to decide the size of the grids. With a small grid size, the plane parameters estimation will be inaccurate, and the running time will be significantly increased. However, the algorithm is hard to perceive the small planar such as stairs when setting a large gird size. To overcome such difficulties in existing technologies, a recursive planar region extraction algorithm (described in Algorithm 1 below) is disclosed. In some embodiments, a depth change indication map (DCI) is used to constrain the extracted region. A plane is fitted for an area in which depth changes are smooth. When MSE (mean square error) and curvature of plane fitting are small enough, this area is considered to be a planar region. The advantage of Algorithm 1 is that the recursive strategy can dynamically adapt to the DCI, i.e., the size of each region can be dynamically determined. The efficiency and accuracy can be improved since the algorithm always tries to preserve the large regions which can help to reduce the number of regions and increase the plane fitting accuracy. An intermediate result of this step is shown inFIG.2C. Algorithm 1: Dynamically generate planar regions1Function ExtractAllPlanarRegions(D, Tsmooth, TminSize, TMSE, Tcuv)|//verify the depth smoothness of each pixel by eq(1)2|DCI =ComputeDCI(D, Tsmooth);3|sDCI = ComputeSAT(DCI);//compute summed area table for DCI|//generate 9 SATs for fast covariance matrix computation [13]4|sC = CacheCovariance(D);5|Rinit= (0, 0, D.Width, D.Height);//initial region is the whole image6|PlanarRegions = Ø;7|PlanarRegionExtraction(PlanarRegions, sC, DCI, Rinst, TminSize, TMSE, Tcuv):9|return PlanarRegions10end//recursively extract planar regions from depth image11Function PlanarRegionExtraction(PlanarRegions, sC, sDCI, Rcur, TminSize, TMSE, Tcuv)|//TminSizemeans the minimum size of accepted planar regions12|if Rcur.Width < TminSizeor Rcur.Height < TminSizethen14||return;15|else||//get the number of smooth pixels in Rcur16||CntSmoothPixels =getSumOfRegion(sDCI, Rcur);||//verify the definition of smooth region see eg(2)17||if CntSmoothPixels = Rcur.Width × Rcur.Height then|||//we use the O(1) method proposed in [13]|||(MSE, Cuv) =PlaneFitting(xC, Rcur):19|||if MSE ≤ TMSEand Cuv ≤ Tcuvthen||||PlanarRegions = PlanarRegions + {Rcur};22||||return23|||end24||end||//split current region Rcurinto 4 parts and recursive on each of them25||Rlu=LeftUpQuarter(Rcur);26||Rlb=LeftBottomQuarter(Rcur):27||Rru=RightUpQuarter(Rcur);28||Rrb=RightBottomQuarter(Rcur);29||PlanarRegionExtraction(PlanarRegions, sC, sDCI, Rlu, TminSize, TMSE, Tcuv);30||PlanarRegionExtraction(PlanarRegions, sC, sDCI, Rlb, TminSize, TMSE, Tcur);31||PlanarRegionExtraction(PlanarRegions, sC, sDCI, Rru, TminSize, TMSE, Tcuv);32||PlanarRegionExtraction(PlanarRegions, sC, sDCI, Rrb, TminSize, TMSE, Tcuv);33|end34end In some embodiments, the DCI is defined as 
DCI(u,v)={1max(m,n)∈F❘"\[LeftBracketingBar]"D(u,v)-D(m,n)❘"\[RightBracketingBar]"≤f(d(u,v)),0otherwise,)(1) where F={(u−1, v), (u+1, v), (u, v−1), (u, v+1)}, ƒ(⋅) is the smoothness threshold function, and D(u, v) represents the depth value at pixel location (u, v). Common methods can be used to computing the DCI. In some embodiments, the following steps can be implemented to analyze the performance of the recursive planar region extraction. Per Algorithm 1, the main computational tasks in each recursive call are to verify if current region is smooth and estimate the plane parameters. Formally, a smooth region R is a rectangle in DCI which satisfied |R|=Σ(u,v)∈RDCI(u,v) (2) where |R| denotes the size of region R. Based on equation (2), region smoothness check can be performed in O(1) time by simply applying a summed-area table (SAT) on DCI. For plane parameter estimation, the SAT can also be used to accelerate the covariance matrix C computation. The plane normal n is the eigenvector which corresponds to the smallest eigenvalue of C. The region MSE and curvature is defined as MSE(R)=1❘"\[LeftBracketingBar]"R❘"\[RightBracketingBar]"∑p(u,ν)∈R(n·p(u,v)+d)2=λ0❘"\[LeftBracketingBar]"R❘"\[RightBracketingBar]"(3)Curvature(R)=λ0λ0+λ1+λ2(4) where λ0, λ1, λ2are eigenvalues of covariance matrix C (increasing order) and C·n=λ0·n. With T(n) denoting the worst-case running time on recursive extracting planar regions which contains n points, it can be obtained that T(n)≤4×T(n/4)+O(1). It is verified that T(n)=O(n). If the resolution of input depth map is W×H, then the overall worst-case time complexity of Algorithm 1 is O(W×H) since computing DCI and SAT are both cost O(W×H). Cluster the Extracted Regions by Auto-Balanced Search Tree In some embodiments, the planar regions generated by the previous step will be treated as plane proposals for the current depth image. However, examining every planar region will cause lots of unnecessary operations since many regions are corresponding to a common 3D plane. In order to reduce the number of plane proposals and improve the estimation accuracy, a cluster algorithm is indispensable. The distance function in parameter space is defined as χ(G1,G2)={1{n1-n2∞≤Tnorm.❘"\[LeftBracketingBar]"d1-d2❘"\[RightBracketingBar]"≤Tdist)0otherwise,)(5) where {n1, d1} and {n2, d2} are plane parameters corresponding to planar region groups G1and G2respectively. The aim of clustering algorithm is to classify all planar regions into several groups: {G1,G2, . . . ,Gk} s.t. ∀i,j∈{1,2, . . .k}, χ(Gi,Gj)=0 (6) For convenience, such a group set can be referred to as a distinct set. In some embodiments, a fast cluster algorithm based on a designed auto-balanced search tree (AST) is disclosed. The overall algorithm is shown as Algorithm 2. The disclosed cluster algorithm can produce a distinct set.FIG.2Dshows an exemplary output of this step. Algorithm 2: Clustering all planar regions1Function ClusterAllPlanarRegions(PlanarRegions, Tnorm., Tdist)2|AST = Ø:3|for each R ∈ PlanarRegions do4||G = {R};5||AddOneItem(AST, G, Tnorm., Tdist);6|end8|return AST9end In some embodiments, it is important for AST to maintain balance after insertion and deletion. Any planar region can be represented as n·p(u,v)(u,v) d=0. Where n is a three-dimensional unit vector specifying the plane normal, and d is a non-negative number specifying the distance from origin to the plane. 
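By way of illustration, the plane parameters and the quantities of equations (3) and (4) can be computed directly with the Eigen library, which the implementation section below reports using: the normal n is the eigenvector of the region's scatter matrix belonging to the smallest eigenvalue, MSE(R) = lambda0/|R| and Curvature(R) = lambda0/(lambda0 + lambda1 + lambda2). The sketch below omits the summed-area-table caching of Algorithm 1 and simply fits a plane to an explicit, non-empty list of 3D points.

// Sketch of the covariance-based plane fit used above.
#include <Eigen/Dense>
#include <vector>

struct PlaneFit {
    Eigen::Vector3d normal;   // unit normal n
    double d;                 // offset so that n . p + d = 0
    double mse;               // eq. (3)
    double curvature;         // eq. (4)
};

PlaneFit fitPlane(const std::vector<Eigen::Vector3d>& points) {
    const double n = static_cast<double>(points.size());
    Eigen::Vector3d mean = Eigen::Vector3d::Zero();
    for (const auto& p : points) mean += p;
    mean /= n;

    // Unnormalized scatter matrix of the region.
    Eigen::Matrix3d C = Eigen::Matrix3d::Zero();
    for (const auto& p : points) {
        Eigen::Vector3d q = p - mean;
        C += q * q.transpose();
    }

    // Eigenvalues of a self-adjoint matrix are returned in increasing order.
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(C);
    Eigen::Vector3d lambda = es.eigenvalues();

    PlaneFit fit;
    fit.normal    = es.eigenvectors().col(0);      // eigenvector of lambda0
    fit.d         = -fit.normal.dot(mean);
    fit.mse       = lambda(0) / n;
    fit.curvature = lambda(0) / (lambda(0) + lambda(1) + lambda(2));
    return fit;
}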
In order to efficiently search a region by its n and d, an AST is built by cascading four Red-black trees (three for normal n and one for distance d, seeFIG.3, in which each branch is constructed by a Red-Black tree). The pseudo-code for adding one item in AST is shown as algorithm 3. Due to the excellent property of the red-black tree, the AST can maintain balance after each operation and the worst time complexity of searching, inserting and deleting are O(log(|AST|)). The pseudo-codes of these operations are given in Algorithms 6, 7, and 4. Algorithm 3: Add one plane region group to AST1Function AddOneItem(AST, Gnew, Tnorm., Tdist)|//PASTstand for the location of Gnearestin AST2|(Gnearest, PAST) =FindNeares(AST, Gnew, Tnorm., Tdist);3|if Gnearest = Ø then4||Insert(PAST, Gnew)5|else6||Gmerge=MergeTwoGroups(Gnew, Gnearest):7||Delete(PAST):8||AddOneItem(AST, Gmerge, Tnorm., Tdist):9|end11|return AST12end13Function MergeTwoGroups(G1, G2)14|Gmerge.Regions = G1.Regions + G2.Regions:15|Gmerge.PlaneParams =PlaneFitting(Gmerge.Regions):|//The following step is to prevent inaccurate plane fitting|caused by strip-like region shape [4]16|if Gmerge.PlaneParams is not satisfy equation (7) then17||if |G1.Regions| < |G2.Regions| then18|||Gmerge.PlaneParams = G2.PlaneParams;19||else20|||Gmerge.PlaneParams = G1.PlaneParams;21||end22|end23end Algorithm 4: Delete one item from AST1Function Delete(PAST)2|if PAST.Dist.size( ) > 1 then3||DeleteInRBTree(PAST.Dist, Gnew);4|else5||EraseRBTree(PAST.Dist);6||if PAST.Nz.size( ) > 1 then7|||DeleteInRBTree(PAST.Nz, Gnew);8||else9|||EraseRBTree(PAST.Nz):10|||if PAST.Ny.size( ) > 1 then11||||DeleteInRBTree(PAST.Ny, Gnew);12|||else13||||EraseRBTree(PAST.Ny);14||||DeleteInRBTree(PAST.Nx, Gnew);15|||end16||end17|end18end Algorithm 5: Pixel-wise Segmentation1Function PixelWiseSegmentation (AST, Tinlier)2|LabelMap = Ø;|//sorting by the number of points|in each group3|= SortDistinctGroups (AST,′decrease’);4|for each G ∈do5||Q = Ø ;//Q is a queue6||for each R ∈ G do7|||for each pixel p ∈ R do8||||if LabelMap(p) = Ø then9|||||LabelMap(p) = G:10||||end||||//BRis the boundry of R11||||if p ∈ BRthen12|||||Q.push(p);13||||end14|||end15||end16||while Q ≠ Ø do17|||p = Q.pop( );|||//Fpis set of 4 neighbors of p18|||for each f ∈ Fpdo19||||if LabelMap(f) = Ø then20|||||if |G.n · f + G.d| < Tinlierthen21||||||LabelMap(f) = G;22||||||Q.push(f);23|||||end24||||end25|||end26||end27|end29|return LabelMap30end Algorithm 6: Find nearest neighbor in AST1Function FindNearest(AST, Gnew, Tnorm., Tdist)2|Gnearest= Ø;3|PAST.Nx = AST.Nx;4|PAST.Ny =FindInRBTree(PAST.Nx, Gnew, Tnorm.);5|if PAST.Ny ≠ NULL then6||PAST.Nz =FindInRBTree(PAST.Ny, Gnew, Tnorm.);7||if PAST.Nz ≠ NULL then8|||PAST.Dist =FindInRBTree(PAST.Nz, Gnew, Tnorm.);9|||if PAST.Dist ≠ NULL then10||||Gnearest=FindInRBTree(PAST.Dist, Gnew, Tdist);11|||end12||end13|end15|return Gnearest, PAST16end Algorithm 7: Insert one item to AST1Function Insert(PAST, Gnew)2|if PAST.Ny = NULL then3||Insert InNxBranch(PAST, Gnew);4||if PAST.Nz = NULL then5|||InsertInNyBranch(PAST, Gnew);6|||if PAST.Dist = NULL then7||||InsertInNzBranch(PAST, Gnew);8|||else9||||InsertDistBranch(PAST, Gnew);10|||end11||end12|end13end//The key idea is to maintain theAST structure shown in Fig. 
314Function InsertInNxBranch(PAST, Gnew)15|PAST.Dist = new DistTree;16|InsertInRBTree(PAST.Dist, Gnew);17|PAST.Nz = new NzTree;18|InsertInRBTree(PAST.Nz, PAST.Dist);19|PAST.Ny = new NyTree;20|InsertInRBTree(PAST.Ny, PAST.Nz);21|InsertInRBTree(PAST.Nx, PAST.Ny);22end In some embodiments, it is claimed that when Algorithm 2 is finished, ∇Gj, Gj∈AST, χ(Gi, Gj)=0. In order to show this claim is correct, the lemma that adding a new item by calling Algorithm 3 will preserve the property of a distinct AST is to be proved. The claim is true if the lemma is true. The correctness of this lemma can be demonstrated by induction. The base case is trivial, when AST=ø, it is correct obviously. The inductive hypothesis is that the property of a distinct AST with size k−1 will be kept after calling Algorithm 3. The inductive step is to verify if the distinct property is still maintained when inserting a new item into a distinct AST with size k. The inductive step is tested under two cases: the first one is that the new added item Gnewcannot merge with its nearest neighbor Gnearestin AST, the second one is the opposite. In the first case, Algorithm 3 will directly insert Gnewinto AST, and it is still distinct. In the second case, Algorithm 3 will first delete Gnearestfrom AST (deletion will not harm the property), and the size of AST will reduce to k−1. According to the inductive hypothesis, adding Gnewinto the current AST will still preserve its property. Based on these two cases, the inductive step is shown to be true, i.e., the lemma is true. In some embodiments, the worst-case running time of Algorithm 3 is O(log(|AST|)). There are two different situations when adding a new item into AST as above-mentioned. The first one takes one searching and inserting operation in O(log(|AST|)). The running time of the second one highly depends on the plane merge function. Per Algorithm 3, the parameters after merging must satisfy: {nmx∈[nminx,nmaxx]nmy∈[nminy,nmaxy]nmz∈[nminz,nmaxz]dm∈[dmin,dmax](7) According to equation (7), the merging operation will at most occur three times for each searching branch when adding Gnewinto AST, since the distances of nearest nodes in each branch are larger than a threshold.FIG.4shows a graphical illustration of an exemplary auto-balanced search tree (AST) algorithm of adding a new item into Nxbranch. InFIG.4, the solid circles represent the current items in AST according to the distinct property. The distance of each pair of solid circles is larger than Tnorm. The dotted line circles represent the items which are going to be added. When the distinct property is broken by adding a new item. Algorithm 3 will delete its nearest item and then recursively add the merged one until the distinct property is recovered. In conclusion, the recursive depth of Algorithm 3 is a constant number, i.e., its complexity is O(log(|AST|)). Therefore, the overall running time of Algorithm 2 is O(|R|log(|R|)). After the clustering step, the plane proposals based on the distinct region groups are obtained. In this step, these proposals can be verified by region growing. Once this procedure is finished, the pixel-wise segmentation and inlier points statistics are produced simultaneously. The algorithm finally outputs those proposals supported by a large amount of inliers. The detail of this step is summarized in Algorithm 5. Experiment Evaluation In some embodiments, the disclosed algorithm can be evaluated in two aspects: effectiveness and efficiency. 
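The region-growing verification of Algorithm 5 can be pictured as a breadth-first flood fill from the seed pixels of each plane proposal, in which a neighbouring pixel is accepted when its point-to-plane distance is below Tinlier. The following C++ sketch shows this step for a single proposal; the back-projection of depth pixels to 3D points, the sorting of proposals by size and the remaining label bookkeeping of the full algorithm are assumed to be handled by the caller.

// Sketch of the pixel-wise segmentation step for one plane proposal.
#include <cmath>
#include <queue>
#include <vector>

struct Point3 { double x, y, z; };

struct Proposal {
    double nx, ny, nz, d;                 // plane parameters
    std::vector<int> seedPixels;          // pixel indices of its seed regions
};

double planeDistance(const Proposal& g, const Point3& p) {
    return std::fabs(g.nx * p.x + g.ny * p.y + g.nz * p.z + g.d);
}

// points[i] is the back-projected 3D point of pixel i; labels[i] == -1
// means "not yet assigned"; width/height give the depth-image layout.
void growProposal(int label, const Proposal& g, const std::vector<Point3>& points,
                  int width, int height, double tInlier, std::vector<int>& labels) {
    std::queue<int> frontier;
    for (int pix : g.seedPixels) {
        if (labels[pix] == -1) labels[pix] = label;
        frontier.push(pix);
    }
    const int offsets[4] = { -1, +1, -width, +width };   // 4-neighbourhood
    while (!frontier.empty()) {
        int pix = frontier.front();
        frontier.pop();
        for (int off : offsets) {
            int nb = pix + off;
            if (nb < 0 || nb >= width * height) continue;
            if (off == -1 && pix % width == 0) continue;          // left image edge
            if (off == +1 && pix % width == width - 1) continue;  // right image edge
            if (labels[nb] != -1) continue;                       // already assigned
            if (planeDistance(g, points[nb]) < tInlier) {
                labels[nb] = label;
                frontier.push(nb);
            }
        }
    }
}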
For effectiveness evaluation, the data from the FARO laser scanner is used. The laser scanner can produce a 360-degree colorful point cloud for both indoor and outdoor circumstances. In order to compare with AHC, the original scan output is reordered to 512×512 depth images. For efficiency evaluation, the disclosed algorithm is implemented in C++ with open source libraries such as PCL and Eigen. All experiments may be performed on a desktop PC with an Intel Core i7-6950X CPU of 3.0 GHz and DDR4 RAM of 128 GB. Parallel technical such as OpenMP, OpenCL, and CUDA may not be used. The parameters used for all experiments are shown in TABLE 1. TABLE 1The table of parameters used in this application (the unit of depthimage is mm (millimeter))PRMValuePRMValuePRMValueTsmooth0.015 [13]Tcuv0.01Tnorm.sin5π180TminSize3 pixelsTσ1.6 × 10−8Tdist30TMSE(Tσ× z2+ Tε)2[4]Tε0.1Tinlier20 In some embodiments, the effectiveness is evaluated in terms of robustness and accuracy. To evaluate the robustness of the disclosed algorithm, a dataset with 1112 512×512 depth images is built under highly dynamic scenes such as staircase, corridor, room, and campus. The qualitative comparisons are shown inFIG.6. InFIG.6, the first row shows the RGB images. The second row shows the plane detection results generated by the algorithm of this disclosure. The third row shows the result from the AHC, in which the initial block size is 4×4. The last row shows the result from the AHC, in which the initial block size is 10×10. For accuracy evaluation, a sequence of depth images (226 frames) from the staircase (seeFIG.7) are labeled. Using the same terminology, detected planes are classified into 5 types: correct detection, over-segmentation, under-segmentation, missed and noise. The overall quantitative results are summarized in TABLE 2. TABLE 2The quantitative results on staircase sequence data set. The terminologieswe used are from [5], however the evaluate metrics are slightly modifiedby allowing disconnected regions. The overlapping threshold is 80%.GTCorrectOver-Under-MethodregionsdetectionsegmentedsegmentedMissedNoiseICRA′14(4 × 4) [4]19.603.40(17.29%)0.390.2816.042.20ICRA′14(10 × 10) [4]19.603.29(16.77%)0.030.5416.311.12Ours19.6012.17(62.08%)0.120.907.384.69 Based on the results inFIG.6,FIGS.2A-2E, and Table 2, the disclosed algorithm can effectively detect multiple scale planes. Comparing to current techniques, the quality of results obtained by the disclosed systems and methods are much better, as shown inFIG.7. InFIG.7, the first row shows the ground truth segmentation results from human labeling. The second row shows the results from the algorithm of this disclosure. The third row shows the result from the AHC, in which the initial block size is 4×4. The last row shows the result from the AHC, in which the initial block size is 10×10. In some embodiments, the running time of the disclosed method is compared with the fastest existing method on previously mentioned datasets. The initial block size of is tuned to as 4×4. For reference, its result is also given under 10×10. The detail results are shown inFIG.5. InFIG.5, the value of each bar represents the average processing time, and the minimum and maximum time are shown by black line segments. As shown inFIG.5, the disclosed method has a similar worst-case running time with AHC (block size: 10×10). 
However, considering the fact that the minimum region size used is 3×3, the disclosed algorithm runs an order of magnitude faster than AHC under similar detail level settings (block size 4×4). FIG.8is a flow diagram of an exemplary method800for extracting planar surface from a depth image, according to various embodiments of the present disclosure. The exemplary method800may be implemented by one or more components of a system for extracting planar surface from a depth image. The system may comprise a processor coupled to a non-transitory computer-readable storage medium (e.g., memory). The memory may store instructions that, when executed by the processor, cause the processor to perform various steps and methods (e.g., various algorithms) described herein. The exemplary method800may be implemented by multiple systems similar to the exemplary system800. The operations of method800presented below are intended to be illustrative. Depending on the implementation, the exemplary method800may include additional, fewer, or alternative steps performed in various orders or in parallel. At block801, a depth change indication map (DCI) is computed from a depth map in accordance with a smoothness threshold. This step may correspond toFIG.2Bdescribed above. At block802, a plurality of planar region is recursively extracted from the depth map, wherein the size of each planar region is dynamically adjusted according to the DCI. This step may correspond toFIG.2Cdescribed above. At block803, the extracted planar regions are clustered into a plurality of groups in accordance with a distance function. This step may correspond toFIG.2Ddescribed above. At block804, each group is grown to generate pixel-wise segmentation results and inlier points statistics simultaneously. This step may correspond toFIG.2Edescribed above. CONCLUSION As discussed, a new algorithm for fast and robust planar surfaces detection is disclosed to detect as many planes as possible in real-time. A novel recursive planar region extraction algorithm is first disclosed. By dynamically adjusting the region size, the algorithm can significantly reduce the number of planar regions extracted without losing any useful information and harming the time complexity. A novel clustering algorithm that can overcome the plane splitting problem caused by occlusion is also disclosed. In order to reduce the clustering complexity, an auto-balanced search tree that can speed up the clustering algorithm to log-linear time is designed. Finally, the pixel-wise segmentation results are achieved by growing each clustered planar region group. The disclosed algorithms are evaluated through theoretical analysis and comprehensive experiments. From a theoretical standpoint, a detail analysis of correctness and time complexity is performed for each disclosed algorithm. From a practical standpoint, comprehensive experiments are conducted on a highly dynamic scene dataset (1112 frames) and a sequence dataset of staircase scenes (226 frames) with human labeled ground truth. Experiments show that the disclosed algorithms can effectively detect multiple planes at 25 Hz for 512×512 depth images. The various modules, units, and components described above can be implemented as an Application Specific Integrated Circuit (ASIC); an electronic circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor (shared, dedicated, or group) that executes code; or other suitable hardware components that provide the described functionality. 
The processor can be a microprocessor provided by Intel, or a mainframe computer provided by IBM. Note that one or more of the functions described above can be performed by software or firmware stored in memory and executed by a processor, or stored in program storage and executed by a processor. The software or firmware can also be stored and/or transported within any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other systems that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. In the context of this document, a “computer-readable medium” can be any medium that can contain or store the program for use by or in connection with the instruction execution system, apparatus, or device. The computer-readable medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, a portable computer diskette (magnetic), a random access memory (RAM) (magnetic), a read-only memory (ROM) (magnetic), an erasable programmable read-only memory (EPROM) (magnetic), a portable optical disc such as a CD, CD-R, CD-RW, DVD, DVD-R, or DVD-RW, or flash memory such as compact flash cards, secured digital cards, USB memory devices, memory sticks, and the like. The various embodiments of the present disclosure are merely preferred embodiments and are not intended to limit the scope of the present disclosure, which includes any modification, equivalent, or improvement that does not depart from the spirit and principles of the present disclosure. | 29,466 |
11861841 | DETAILED DESCRIPTION Hereinafter, embodiments of the present invention will be explained with reference to the drawings. First Embodiment (Configuration) (1) System FIG.1is a schematic configuration diagram of an on-vehicle system including a lane estimation device according to an embodiment of the present invention. A vehicle6includes a lane estimation device1, a camera2, a global positioning system (GPS) sensor3, a vehicle sensor4, and an autonomous driving controller5. The camera2uses, for example, a solid-state imaging device such as a complementary metal oxide semiconductor (CMOS) sensor, and an installation location, a direction, and an angle are set so that at least a road region in a progression direction of the vehicle6is included in an imaging range. The camera2outputs image data obtained by imaging a range including the road region in the progression direction of the vehicle6to the lane estimation device1. Although the camera2may be provided exclusively for lane estimation, any camera can be used as long as it can obtain equivalent image data, such as a camera of a drive recorder or a camera mounted for other purposes. For example, when the vehicle is a two-wheeled vehicle or a bicycle, a camera provided on a helmet of a driver may be used, or a camera provided on a mobile terminal such as a smartphone carried by a fellow passenger of the vehicle may be used. The camera may be an infrared camera. The image data may be moving image data, or may be still-image data imaged at regular time intervals. The GPS sensor3calculates the latitude and longitude of the vehicle6by receiving GPS signals transmitted from a plurality of GPS satellites and performing a ranging operation, and outputs the calculated latitude and longitude to the lane estimation device1as position data of the vehicle6. Instead of the GPS sensor3, a ground (road) based positioning system (GBPS) or the like may be used as long as the same function as the GPS sensor is exhibited. The vehicle sensor4detects information indicating a state of movement of the vehicle6, such as a speed, an acceleration, and a rotation speed of an engine of the vehicle6, for example, in order to perform on-board diagnostics (OBD) of the vehicle6, and outputs the detection result to the lane estimation device1as vehicle sensor data. The vehicle sensor4may include a sensor for detecting a steering angle of a steering wheel or the like in addition to sensors for detecting a speed, an acceleration, and a rotation speed of an engine, and may further include a sensor used for a purpose other than OBD. The autonomous driving controller5performs control for causing the vehicle6to travel fully automatically or semi-automatically based on images imaged by a vehicle exterior camera and a driver camera or sensor data output from various on-vehicle sensors, and uses data indicating a lane estimation result output from the lane estimation device1as one type of the sensor data. (2) Lane Estimation Device (2-1) Hardware Configuration The lane estimation device1estimates a lane in which the vehicle6is traveling, and is configured by, for example, a personal computer.FIG.2is a block diagram showing an example of a hardware configuration of the lane estimation device1. The lane estimating device1includes a hardware processor10A such as a central processing unit (CPU), to which a program memory10B, a data memory20, and an input/output interface unit (hereinafter referred to as an input/output I/F)30are connected via a bus40. 
External devices such as a camera2, a GPS sensor3, a vehicle sensor4, and an autonomous driving controller5are connected to the input/output I/F30. The input/output I/F30receives data from the camera2, the GPS sensor3, and the vehicle sensor4, and outputs data indicating an estimation result generated by the lane estimation device1to the autonomous driving controller5. The input/output I/F30may include a wired or wireless communication interface. The program memory10B uses, as a storage medium, a combination of a nonvolatile memory such as a hard disk drive (HDD) or a solid state drive (SSD) that can be written to and read from at any time and a nonvolatile memory such as a ROM, and stores programs necessary for executing various control processes according to the embodiment. The data memory20uses, as a storage medium, a combination of a nonvolatile memory such as an HDD or an SSD that can be written to and read from at any time and a volatile memory such as a random access memory (RAM), and is used to store various types of data acquired and created in the course of performing various types of processing. (2-2) Software Configuration FIG.3is a block diagram showing a software configuration of the lane estimation device1according to the first embodiment of the present invention in association with the hardware configuration shown inFIG.2. The input/output I/F30receives each type of data output from the camera2, the GPS sensor3, and the vehicle sensor4, supplies the data to a control unit10, and outputs data indicating a lane estimation result output from the control unit10to the autonomous driving controller5. The input/output I/F30generates display data for displaying the lane estimation result, and outputs the display data to, for example, a display unit of a car navigation device of the vehicle6for display. The storage region of the data memory20includes an image data storage unit21, a lane estimation data storage unit22, a road information storage unit23, a vehicle sensor data storage unit24, and a threshold value storage unit25. The image data storage unit21is used to store image data obtained by the camera2. The lane estimation data storage unit22is used to store data indicating a lane estimation result obtained by the control unit10, which will be described later, in association with the estimation date and time, position data of the vehicle6, and the like. In the road information storage unit23, for example, information indicating a configuration of a road corresponding to a position is stored in advance in association with position data indicated by latitude and longitude. The information indicating the configuration of the road includes, for example, the number of lanes in each of the upward and downward directions, the presence or absence of sidewalks, road shoulders, side strips, and median strips, and information indicating the widths thereof. The vehicle sensor data storage unit24is used to store vehicle sensor data output from the vehicle sensor4in association with information indicating a data type and a detection time. The threshold value storage unit25is used to store threshold values related to various feature values set in advance for each lane.
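One possible organization of the per-lane threshold values held in the threshold value storage unit 25 is sketched below; the feature names, lane labels, and numeric ranges are illustrative assumptions only, not values taken from this description.

```python
# Hypothetical layout of the per-lane thresholds: one table per feature value,
# each mapping a lane to an accepted range (all names and numbers illustrative).
LANE_THRESHOLDS = {
    "left_edge_inclination": {
        "TL1_road_shoulder_side": (0.9, 3.0),
        "TL2_median_strip_side": (0.2, 0.9),
    },
}

def lane_for_feature(feature_name: str, value: float):
    """Return the lane whose stored range contains the measured value, if any."""
    for lane, (low, high) in LANE_THRESHOLDS[feature_name].items():
        if low <= value < high:
            return lane
    return None
```

A lookup of this kind is used later, when a feature value calculated from the road region is compared with the range set for each lane.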
The control unit10is configured by the hardware processor10A and the program memory10B, and includes, as a processing function unit executed by software, an image data acquisition unit11, an image processing unit12, a lane estimation processing unit13, a lane correction unit14, a past estimation data acquisition unit15, a road information acquisition unit16, a vehicle sensor data acquisition unit17, a vehicle action state estimation unit18, and an estimation data output control unit19. The functions of the processing units11to19are realized by causing the CPU (hardware processor)10A to execute a program stored in the program memory10B. The program for executing the processes of the processing units11to19may be stored in advance in the program memory10B in the lane estimation device1or may be stored in an application server or the like on a network and used. In this case, the lane estimation device1executes the functions of the processing units11to19by downloading a necessary program from an application server via a network when necessary. The image data acquisition unit11, as an image acquisition unit, sequentially captures image data output from the camera2via the input/output I/F30, and stores the image data in the image data storage unit21in association with information indicating the imaging timing or the reception timing. The image processing unit12reads image data from the image data storage unit21. In the case where the image data is moving image data, still-image data is cut out at a predetermined frame cycle. In addition, the image processing unit12performs, for example, noise removal and calibration processing for correcting an individual difference in the performance of the camera2, an inclination at the time of installation, and the like on the still-image data as preprocessing of lane estimation. The lane estimation processing unit13receives the image data after the preprocessing from the image processing unit12and performs processing of estimating a lane in which the vehicle6is traveling based on the image data. The lane estimation processing unit13includes, for example, a road region extraction unit131, a feature value calculation unit132, and a lane estimation unit133as functions thereof, as shown inFIG.4. The road region extraction unit131has the following processing functions.(1) Processing of extracting a range corresponding to a road region from the image data received from the image processing unit12.(2) Processing of excluding a region erroneously extracted as a road region by using information such as the size of the area of the region, and, further, smoothing the extracted road region, etc. to extract a shape indicating the road region.(3) In a case where an object other than a road such as another traveling vehicle is included in an image of a range corresponding to the extracted road region, processing of extracting a shape indicating a region including the object and a shape indicating a region excluding the object in the road region, respectively, and estimating a shape indicating the road region when it is assumed that the object does not exist based on the extracted shapes. The feature value calculation unit132performs processing of calculating a feature value of the shape based on the shape indicating the road region extracted by the road region extraction unit131. Details of the feature value will be described later.
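The division of labor among the road region extraction unit 131, the feature value calculation unit 132, and the lane estimation unit 133 can be pictured with the following structural sketch. It is an outline only, under assumptions of this rewrite (a boolean road mask as the intermediate representation and hypothetical method names); the actual feature values and threshold comparisons are those described in the sections that follow.

```python
import numpy as np

class LaneEstimationPipeline:
    """Structural sketch of the lane estimation processing unit 13
    and its three sub-units (131, 132, 133)."""

    def __init__(self, lane_thresholds: dict):
        # Per-lane ranges for the chosen feature value (threshold storage).
        self.lane_thresholds = lane_thresholds

    def extract_road_region(self, image: np.ndarray) -> np.ndarray:
        """Road region extraction unit 131: boolean road mask for the image.
        A pixel-wise segmentation model (e.g. SegNet) would be run here."""
        raise NotImplementedError("segmentation inference omitted in this sketch")

    def calculate_feature_value(self, road_mask: np.ndarray) -> float:
        """Feature value calculation unit 132: a scalar describing the shape
        of the road region (one of the feature values of section (2-2))."""
        raise NotImplementedError("feature computation described later")

    def estimate_lane(self, feature_value: float):
        """Lane estimation unit 133: pick the lane whose threshold range
        contains the calculated feature value."""
        for lane, (low, high) in self.lane_thresholds.items():
            if low <= feature_value < high:
                return lane
        return None

    def run(self, image: np.ndarray):
        road_mask = self.extract_road_region(image)
        feature_value = self.calculate_feature_value(road_mask)
        return self.estimate_lane(feature_value)
```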
The lane estimation unit133determines, for example, whether or not the feature value calculated by the feature value calculation unit132is included in a range of a threshold value set for each lane, thereby performing processing of estimating which lane the vehicle6is currently traveling in. As the threshold value for each lane, a general-purpose threshold value set according to the shape of a general road may be used, or a value set in advance by measurement or the like according to the shape of the road for each section of the road may be used. The past estimation data acquisition unit15reads out the past lane estimation data from the lane estimation data storage unit22, estimates the change history and the tendency of the lane in which the vehicle6traveled in the past based on the data, and gives the estimation information to the lane correction unit14as one piece of correction candidate information. Based on the position data of the vehicle6detected by the GPS sensor3, the road information acquisition unit acquires, from the road information storage unit23, information indicating the configuration of the road at the position where the vehicle6is currently traveling, and supplies the road information indicating the configuration of the road to the lane correction unit14as one piece of correction candidate information. The vehicle sensor data acquisition unit17receives vehicle sensor data, which is output from the vehicle sensor4and indicates the state of movement of the vehicle6, via the input/output I/F30, and stores the received vehicle sensor data in the vehicle sensor data storage unit in association with information indicating the measurement timing or the reception timing thereof. The vehicle action state estimation unit18reads out the vehicle sensor data from the vehicle sensor data storage unit24, estimates whether or not the vehicle6has made a lane change on the basis of the vehicle sensor data, and provides the estimation information to the lane correction unit14as one piece of correction candidate information. The lane correction unit14performs predetermined correction processing on the estimation result of the lane in which the vehicle6is traveling, which is obtained by the lane estimation unit133, and stores the corrected lane estimation data in the lane estimation data storage unit22in association with information indicating the current time. 
The following three types of processing may be considered for the correction processing of the lane estimation result.(1) Processing of determining the accuracy of the estimation result of the lane in which the vehicle6is traveling obtained by the lane estimation unit133based on the road information indicating the configuration of the road at the position in which the vehicle6is currently traveling, which is acquired by the road information acquisition unit16, and correcting the estimation result in a case where the estimation result can be regarded as erroneous.(2) Processing of determining the accuracy of the estimation result of the lane in which the vehicle6is traveling obtained by the lane estimation unit133based on the information indicating the past lane change history or tendency of the vehicle6, which is estimated by the past estimation data acquisition unit15, and correcting the estimation result in a case where the estimation result can be regarded as erroneous.(3) Processing of determining the accuracy of the estimation result of the lane in which the vehicle6is traveling obtained by the lane estimation unit133based on the information indicating the change in the movement of the vehicle6estimated by the vehicle action state estimation unit18and the information indicating the past lane change history of the vehicle6estimated by the past estimation data acquisition unit15, and correcting the estimation result in a case where the estimation result can be regarded as erroneous. The estimation data output control unit19reads out the latest lane estimation data from the lane estimation data storage unit22and outputs the latest lane estimation data from the input/output I/F30to the autonomous driving controller5. The estimation data output control unit19generates display data for displaying the latest lane estimation data on, for example, map data, and outputs the display data to, for example, a display unit of a car navigation device. In addition to outputting the latest lane estimation data, the estimation data output control unit19may read out and output lane estimation data corresponding to an arbitrary timing in the past. (Operation) A lane estimation operation by the lane estimation device1according to the first embodiment configured in the above manner will now be described. FIG.5is a flowchart showing the overall processing procedure of the lane estimation processing by the control unit10. (1) Image Data Acquisition and Image Processing While the vehicle6is traveling, the camera2images a scene including a road region in a progression direction, and the image data thereof is output from the camera2to the lane estimation device1. Under the control of the image data acquisition unit11, the control unit10of the lane estimation device1captures the image data output from the camera2in step S1via the input I/F30, and sequentially stores the image data in the image data storage unit21in a state of being associated with information indicating the imaging day and time. In parallel with the image data acquisition processing, under the control of the image processing unit12, the control unit10executes image processing that is necessary for lane estimation on the acquired image in step S2. FIG.6is a flowchart showing a processing procedure and processing contents performed by the image processing unit12. That is, the image processing unit12first reads image data from the image data storage unit21. 
It is then determined in step S21whether or not the image data is moving image data, and, in the case of it being moving image data, still-image data is cut out from the moving image data at a fixed frame cycle in step S22. In step S23, the image processing unit12then performs preprocessing for lane estimation on the still-image data. Here, for example, noise removal and calibration processing for correcting an individual difference in the performance of the camera2, an inclination at the time of installation, and the like are performed. The image processing may be performed by an image processing circuit including hardware. FIG.11shows a first example of still-image data VD after the image processing. (2) Estimation of Traveling Lane Under the control of the lane estimation processing unit13, the control unit10of the lane estimation device1executes processing of estimating the lane in which the vehicle6is traveling in step S3, in the following manner. FIG.7is a flowchart showing a processing procedure and processing contents of the lane estimation processing unit13. (2-1) Road Region Extraction The lane estimation processing unit13, first, causes the road region extraction unit131to extract a road region from the preprocessed image data (step S31). Here, when an example of a road with two lanes on each side is taken, as shown in, for example,FIG.10, traveling lanes TL1and TL2configuring a road are disposed in each of the upward direction and the downward direction divided by a median strip MS as a boundary. A sidewalk WL is disposed outside the traveling lane TL1with a road shoulder SR and a curb SB composed of a concrete block or with a planting SH therebetween. For such a road, the road region extraction unit131extracts a region including, for example, the traveling lanes TL1and TL2and the road shoulder SR as the road region. As the road region, only the traveling lanes TL1and TL2may be extracted. SegNet is used as an example of the road region extraction processing means. SegNet is a deep encoder/decoder architecture for realizing a labeling function on a pixel-by-pixel basis. For example, each portion included in an image is distinguished and labeled in a plurality of different display forms (for example, colors). In the present embodiment, labeling is performed using three types of regions, namely, a road region, a region of an object present on the road (for example, a vehicle region), and a region other than such regions. Subsequently, in consideration of a case where there is a region erroneously extracted as a road region, the road region extraction unit131performs processing of excluding the erroneously detected region by using information such as the size of the area of the region, and further performs processing of smoothing, etc. on the extracted road region to extract a shape indicating the road region. In a case where the contour line of the shape of the extracted road region has small irregularities, for example, the contour line may be linearly approximated. FIG.12shows an example of displaying a shape indicating a road region of a road having two lanes on each side extracted by the road region extraction processing in a superimposed manner on an original image. In this example, a shape (shaded portion in the drawing) indicating a region RE including traveling lanes TL1and TL2in the upward and downward directions and a road shoulder SR on the left side is extracted. In this case, the road shoulder SR may include a curb SB. 
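A minimal sketch of the post-processing described above (excluding regions erroneously extracted as the road by using their area, and smoothing the remaining contour into an approximately linear shape) is given below. It assumes the segmentation output is available as a binary road mask and uses OpenCV contour utilities; the area ratio and approximation tolerance are illustrative values, not values taken from this description.

```python
import cv2
import numpy as np

def road_region_shape(road_mask: np.ndarray,
                      min_area_ratio: float = 0.05,
                      approx_ratio: float = 0.01):
    """Return a smoothed polygonal contour of the road region, or None.

    road_mask: uint8 image, 255 where the segmentation labelled "road".
    min_area_ratio: regions smaller than this fraction of the image area
    are treated as erroneous extractions and discarded (illustrative value).
    approx_ratio: tolerance of the linear approximation of the contour,
    relative to the contour length (illustrative value).
    """
    height, width = road_mask.shape
    contours, _ = cv2.findContours(road_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Exclude regions erroneously extracted as road, using their area.
    contours = [c for c in contours
                if cv2.contourArea(c) >= min_area_ratio * height * width]
    if not contours:
        return None
    road = max(contours, key=cv2.contourArea)
    # Smooth small irregularities by approximating the contour with
    # straight segments, as described above.
    epsilon = approx_ratio * cv2.arcLength(road, True)
    return cv2.approxPolyDP(road, epsilon, True)
```

The returned polygon corresponds to the shape indicating the road region RE from which the feature values described below are calculated.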
However, in some cases, other vehicles traveling in the traveling lane TL1on the road shoulder side may be seen in the image data. In this case, the side end portion of the traveling lane TL1or the side end portion of the road shoulder SR is hidden by the other traveling vehicles, and the true side end portion of the road region may not be extracted. Therefore, the road region extraction unit131extracts the true road region in the following manner. FIG.9is a flowchart showing a processing procedure and processing contents of the road region extraction unit131in this case. That is, in step S51, the road region extraction unit131extracts a shape indicating the entire region including the road region and the outer shapes of the other traveling vehicles based on the image of the road region extracted using the SegNet. At the same time, in step S52, a shape indicating a region excluding the traveling vehicles in the road region is extracted. In the processing of extracting the shapes indicating the respective regions in steps S51and S52, processing of excluding an erroneously extracted region by using information such as the size of the area and processing of extracting a contour by smoothing the extracted road region, etc. are performed. In step S53, the road region extraction unit131estimates the true shape of the road region when it is assumed that there are no other traveling vehicles based on the shapes extracted in steps S51and S52. (2-2) Calculation of Feature Value and Lane Estimation Based on Feature Value In step S32, the lane estimation processing unit13then, under the control of the feature value calculation unit132, calculates the feature value from the shape indicating the extracted road region. Then, in step S33, under the control of the lane estimation unit133, the lane estimation processing unit13estimates whether the vehicle6is currently traveling in the traveling lane TL1on the road shoulder side or traveling in the traveling lane TL2on the median strip side based on the calculated feature value. A plurality of forms can be considered for the feature value used for the lane estimation. Hereinafter, an example of lane estimation processing in a case where each of the plurality of feature values is used will be explained. (2-2-1) A Case of Performing Lane Estimation Using an Inclination Angle of a Left End Side of a Road Region as the Feature Value For example, in a case where an image shown inFIG.13is obtained, the feature value calculation unit132draws an approximate line OL1with respect to the left end side of the shape indicating the road region RE extracted by the road region extraction unit131, that is, the left end portion of the traveling lane TL1or the left end portion of the road shoulder SR. When a lower left corner of a screen configured by the image data is defined as a reference coordinate (0,0) on an x-y coordinate plane, the approximate line OL1is expressed as: y=a1x+b1. Here, a1indicates an inclination, and b1indicates an intercept with respect to point P1. Similarly, for example, in a case where an image shown inFIG.14is obtained, the feature value calculation unit132draws an approximate line OL2with respect to the left end side of the shape indicating the road region RE extracted by the road region extraction unit131. 
Also in this case, when the lower left corner of the screen configured by the image data is defined as a reference coordinate (0,0) on an x-y coordinate plane, the approximate line OL2is expressed by y=a2x+b2 (where y is 0 when x is equal to or lower than P2). In this way, the feature value calculation unit132calculates, as the feature value, inclination angles a1and a2of the left side contour line of the shape indicating the road region RE, that is, the left end portion of the traveling lane TL1or the left end portion of the road shoulder SR. Subsequently, in step S33, the lane estimation unit133reads out the threshold value of the inclination angle set in advance for each lane from the threshold value storage unit25of the data memory20. Then, the inclination angles of the approximate lines OL1and OL2calculated by the feature value calculation unit132are compared with the respective threshold values set for the respective lanes, and it is determined whether the lane in which the vehicle6is currently traveling is the traveling lane TL1on the road shoulder side or the traveling lane TL2on the median strip side based on the comparison result. For example, when the currently calculated feature value is the inclination angle a1of the approximate line OL1, as exemplified inFIG.13, the inclination angle a1is included in the range of the threshold value corresponding to the traveling lane TL2, and thus it is determined that the lane in which the vehicle6is traveling is the traveling lane TL2on the median strip side. On the other hand, when the calculated feature value is the inclination angle a2of the approximate line OL2, as exemplified inFIG.14, the inclination angle a2is included in the range of the threshold value corresponding to the traveling lane TL1, and thus it is determined that the lane in which the vehicle6is traveling is the traveling lane TL1on the road shoulder side. (2-2-2) A Case of Performing Lane Estimation Using Center-of-Gravity of a Shape of a Road Region as the Feature Value The feature value calculation unit132cuts out a shape indicating a region corresponding to each lane on one side from the shape indicating the road region extracted by the road region extraction unit131, and defines a diagram for feature value calculation in an arbitrary part of the cut out shape. Then, coordinates of the center-of-gravity of the diagram are calculated as a feature value. For example, in the shape indicating the region RE corresponding to each lane on one side as shown inFIG.15, a portion surrounded by coordinate points P11to P14is defined as a diagram RE10for feature value calculation. Then, a coordinate W1indicating the center-of-gravity of the diagram RE10is calculated. Similarly, in the shape indicating the region RE corresponding to a lane on one side as shown inFIG.16, a portion surrounded by coordinate points P11to P14is defined as a diagram RE20for feature value calculation, and a coordinate W2indicating the center-of-gravity of the diagram RE20is calculated. Subsequently, the lane estimation unit133reads out coordinate values indicating a center line CL dividing the image data into left and right parts from the threshold value storage unit25of the data memory20. 
Then, by determining whether the coordinate indicating the center-of-gravity of the diagram RE20calculated by the feature value calculation unit132is located on the left side or the right side of the coordinate value of the center line CL in the x-axis direction in the drawing, it is determined whether the lane in which the vehicle6is traveling is the traveling lane TL1on the road shoulder side or the traveling lane TL2on the median strip side. For example, as shown inFIG.15, in a case where the feature value is the center-of-gravity coordinate W1of the diagram RE10, the center-of-gravity coordinate W1is located on the left side of the coordinate value of the center line CL in the x-axis direction in the drawing, and thus the lane estimation unit133determines that the lane in which the vehicle6is traveling is the traveling lane TL2on the median strip side. On the other hand, as shown inFIG.16, in a case where the feature value is the center-of-gravity coordinate W2of the diagram RE20, the center-of-gravity coordinate W2is located on the right side of the coordinate value of the center line CL in the x-axis direction in the drawing, and thus it is determined that the lane in which the vehicle6is traveling is the traveling lane TL1on the road shoulder side. (2-2-3) a Case of Approximating a Shape of a Road Region with a Triangle, One Side of which is in a y-Axis Direction of Image Data, and Performing Lane Estimation Using an Angle or Area of the Triangle as the Feature Value The feature value calculation unit132extracts a shape included in a region on the left side with respect to the center line CL in the x-axis direction of the screen constituted by the image data from among the shapes indicating the road region RE extracted by the road region extraction unit131. The extracted shape is then approximated by a right-angled triangle having the center line CL as one side, and an area of the right-angled triangle or an angle at one vertex is calculated as a feature value. For example, in the example shown inFIG.17, a shape (contour REL) included in a left half region VDLof the screen constituted by the image data is extracted from the shape indicating the road region RE, and this shape is approximated by a right-angled triangle TA1having the center line CL as one side, and the area of the right-angled triangle TA1or an angle θ5, which is an internal angle at a vertex P3, is calculated as the feature value. Similarly, in the example shown inFIG.18, the shape included in the left half region VDLof the screen constituted by the image data is extracted, the shape is approximated by a right-angled triangle TA2having the center line CL as one side, and the area of the right-angled triangle TA2or an angle θ6at a vertex P4is calculated as the feature value. Instead of calculating the angle of the internal angle, an angle of an external angle of a vertex or an angle obtained by the internal angle+90° may be calculated. Subsequently, the lane estimation unit133reads out the threshold value of the area or the threshold value of the angle of the right-angled triangle set in advance for each lane from the threshold value storage unit25of the data memory20. The area or the angle of the right-angled triangle calculated by the feature value calculation unit132is compared with a preset threshold value of the area or the angle, respectively. 
Based on the comparison result, it is determined whether the lane in which the vehicle6is currently traveling is the traveling lane TL1on the road shoulder side or the traveling lane TL2on the median strip side. For example, in the example shown inFIG.17, since the angle θ5of the vertex P3of the right-angled triangle TA1is larger than the angle threshold value, it is determined that the lane in which the vehicle6is traveling is the traveling lane TL2on the median strip side. On the other hand, in the example shown inFIG.18, since the angle θ6of the vertex P4of the right-angled triangle TA2is equal to or smaller than the angle threshold value, it is determined that the lane in which the vehicle6is traveling is the traveling lane TL1on the road shoulder side. Instead of the angles θ5and θ6of the vertexes of the right-angled triangles, it is also possible to determine the lanes TL1and TL2by comparing the areas of the right-angled triangles TA1and TA2with a threshold value. The areas of the right-angled triangles TA1and TA2can be obtained, for example, by counting the number of pixels in regions surrounded by the contours of the right-angled triangles TA1and TA2. (2-2-4) A Case of Estimating a Lane by Using Angles θ Between a Lower Side Center Point of a Screen and Intersection Points of Two Parallel Horizontal Lines Drawn on the Screen Constituted by Image Data and Contour Lines of a Road Region as the Feature Values The feature value calculation unit132calculates intersections between both left and right edges of the shape indicating the road region extracted by the road region extraction unit131and two parallel horizontal lines set on the screen constituted by the image data. Then, an angle with respect to the lower side of the image data when a lower side center point Pc of the image data and each of the intersections are connected by a straight line is calculated, and the calculated angle is used as a feature value. For example, in the example shown inFIG.19, intersections P51, P52, P53, and P54of both left and right edges of the shape indicating the road region RE and two parallel horizontal lines H1and H2set on the screen constituted by the image data are detected. Then, angles θ1, θ2, θ3, and θ4of each straight line connecting the lower side center point Pc of the screen constituted by the image data and the respective intersections P51, P52, P53, and P54are calculated with respect to the lower side of the screen, and a difference between the calculated angles θ1and θ2and a difference between the calculated angles θ3and θ4are used as feature quantities. Also, in the example shown inFIG.20, similarly, a difference between angles θ1and θ2and a difference between angles θ3and θ4of each straight line connecting the lower side center point Pc of the screen constituted by the image data and each intersection P51, P52, P53, and P54are calculated as the feature values. Subsequently, the lane estimation unit133reads out angle difference threshold values for the left side and for the right side set in advance for each lane from the threshold value storage unit25of the data memory20. Then, the difference between the angles θ1and θ2and the difference between the angles θ3and θ4calculated by the feature value calculation unit132are compared with the angle difference threshold values for the left side and the right side set for each lane. 
Based on the comparison result, it is determined whether the lane in which the vehicle6is currently traveling is the traveling lane TL1on the road shoulder side or the traveling lane TL2on the median strip side. For example, in the example shown inFIG.19, if the difference between the angles θ1and θ2is larger than the threshold value for the left side and the difference between the angles θ3and θ4is equal to or smaller than the threshold value for the right side, it is determined that the lane in which the vehicle6is traveling is the traveling lane TL2on the median strip side. Similarly, in the example shown inFIG.20, if the difference between the angles θ1and θ2is equal to or smaller than the threshold value for the left side and the difference between the angles θ3and θ4is larger than the threshold value for the right side, it is determined that the lane in which the vehicle6is traveling is the traveling lane TL1on the road shoulder side. (2-2-5) A Case of Estimating a Traveling Lane Based on a Shape Indicating a Road Region Extracted when Another Traveling Vehicle is Present in the Traveling Lane As described above, the road region extraction unit131extracts the shape indicating the entire region including the road region and the other traveling vehicles in step S51, and extracts the shape indicating the region excluding the other traveling vehicles in the road region in step S52. In step S53, based on each of the extracted shapes, a shape indicating a road region when it is assumed that there are no other traveling vehicles is estimated. Based on the shape indicating the road region estimated in step S53, the feature value calculation unit132draws an approximate line on the left end side thereof and calculates an inclination angle of the approximate line as a feature value. For example, in the example shown inFIG.21, based on the shape indicating the entire region including an image MB of another traveling vehicle extracted in step S51, the feature value calculation unit132draws an approximate line OL2on the left end side thereof. The approximate line OL2is expressed by y=a2x+b2. At the same time, based on the shape indicating the region excluding the traveling vehicles in the road region extracted in step S52, the feature value calculation unit132draws an approximate line OL1on the left end side thereof. The approximate line OL1is expressed by y=a1x+b1. Then, based on each of the approximate lines OL1and OL2, the feature value calculation unit132calculates a third approximate line OL3that is between the approximate lines OL1and OL2, and sets the third approximate line OL3as a contour line of the left end side of the road region when it is assumed that the image MB of another traveling vehicle does not exist. In this case, the third approximate line OL3is expressed by y={(a1+a2)/A}x+(b1+b2)/B. Here, A and B are coefficients, and are determined based on parameters such as how far the other traveling vehicle is traveling in the left-right direction from the center of the lane and how many meters of height the other traveling vehicle has. By appropriately setting these coefficients A and B, the position of the approximate line OL3can be brought close to the position of the left end side of the actual road region. The lane estimation unit133compares the calculated inclination angle {(a1+a2)/A} of the approximate line OL3with a threshold value set in advance for each lane. 
Based on the comparison result, it is determined whether the lane in which the vehicle6is currently traveling is the traveling lane TL1on the road shoulder side or the traveling lane TL2on the median strip side. For example, in the example shown inFIG.21, since the inclination angle {(a1+a2)/A} is included in the range of the threshold value corresponding to the traveling lane TL2, it is determined that the lane in which the vehicle6is traveling is the traveling lane TL2on the median strip side. On the other hand, when the inclination angle {(a1+a2)/A} is included in the range of the threshold value corresponding to the traveling lane TL1, it is determined that the lane in which the vehicle6is traveling is the traveling lane TL1on the road shoulder side. (3) Correction of Lane Estimation Result In step S4shown inFIG.5, the control unit10of the lane estimation device1determines the accuracy (validity) of the lane estimated by the lane estimation processing unit13under the control of the lane correction unit14, and executes processing of correcting the estimation result of the lane when it is determined that the lane is not valid. FIG.8is a flowchart showing a processing procedure and processing contents of the lane correction unit14. (3-1) Correction Based on Information Indicating Road Configuration First, in step S41, the lane correction unit14corrects the lane estimation result based on information indicating the configuration of the road corresponding to the traveling position of the vehicle6. For example, in the road information acquisition unit16, based on the current position data of the vehicle6measured by the GPS sensor3, information indicating the configuration of the road corresponding to the position where the vehicle6is currently traveling is read out from the road information storage unit23. The lane correction unit14collates the lane estimation result obtained by the lane estimation processing unit13with the read out information indicating the road configuration, and determines whether the lane estimation result is correct. For example, when the lane estimation result is the “traveling lane TL2” on the median strip side, and the road on which the vehicle6is traveling is a road with one lane on each side, the lane estimation result is determined to be erroneous, and the lane estimation result is corrected to the “traveling lane TL1”. (3-2) Correction Based on Past Lane Change History When a new lane estimation result is obtained, the past estimation data acquisition unit15reads out the past lane estimation data from the lane estimation data storage unit22and, based on the data, estimates the change history and tendency of the lane in which the vehicle6has traveled in a certain period in the past. The estimation processing may, for example, calculate the number of times or the frequency at which each of the traveling lanes TL1and TL2was used, or, for example, create trained data indicating the traveling tendency of a driver based on a traveling time zone, a traveling route, and a traveling position for each driver in advance and estimate a lane in which the driver is currently traveling based on the trained data. In step S42, the lane correction unit14compares the latest lane estimation result obtained by the lane estimation processing unit13with the information indicating the lane change history or tendency of the vehicle6estimated by the past estimation data acquisition unit15to evaluate the validity of the latest lane estimation result. 
If, for example, the latest lane estimation result is the “traveling lane TL2” on the median strip side even though the driver travels only in the traveling lane TL1on a daily basis, the estimation result is determined to be erroneous, and the lane estimation result is corrected to the “traveling lane TL1” on the road shoulder side. (3-3) Correction Based on Movement State of Vehicle6and the Past Lane Change History The vehicle action state estimation unit18estimates whether or not the vehicle6has made a lane change based on the sensor data indicating the movement of the vehicle6such as the speed and acceleration of the vehicle6and the steering wheel operation angle, etc. acquired by the vehicle sensor data acquisition unit17. In step S43, the lane correction unit14compares the lane estimation result obtained by the lane estimation processing unit13and the estimation result of the lane change obtained by the vehicle action state estimation unit18with each other in terms of time. If the lane estimation result obtained by the lane estimation processing unit13does not correspond to the estimation result of the lane change obtained by the vehicle action state estimation unit18, the lane estimation result obtained by the lane estimation processing unit13is corrected based on the estimation result of the lane change obtained by the vehicle action state estimation unit18. Finally, the lane correction unit14stores the data of the latest lane estimation result corrected by each correction processing described above in the lane estimation data storage unit22in association with the information indicating the current time. In the lane correction processing described above, a case of executing all of the three types of correction processing (3-1), (3-2), and (3-3) has been described as an example. However, only one or two types arbitrarily selected from the three types of correction processing may be executed, and the correction processing may be omitted when correction is not necessary. The order of executing the three types of correction processing may also be changed as appropriate. (4) Output of Lane Estimation Data Under the control of the estimation data output control unit19, in step S5, the control unit10executes control for outputting the lane estimation result in the following manner. That is, each time the latest lane estimation data is stored in the lane estimation data storage unit22, the estimation data output control unit19reads out the lane estimation data from the lane estimation data storage unit22. Then, the lane estimation data is output from the input/output I/F30to the autonomous driving controller5. As a result, in the autonomous driving controller5, for example, control for maintaining or changing the traveling position of the vehicle is performed by using the lane estimation data as one of the pieces of data indicating the current traveling state of the vehicle6. The estimation data output control unit19generates, for example, display data for displaying the position of the lane in which the vehicle6is traveling on map data in a superimposed manner based on the latest lane estimation data, and outputs the display data from the input/output I/F30to, for example, a car navigation device. As a result, in the car navigation device, processing to display the display data on a display unit is performed, whereby, on the display unit of the car navigation device, the position of the lane in which the vehicle6itself is currently traveling is displayed on the map.
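To make the estimation and correction flow described above concrete, the following sketch computes the left-edge inclination feature of section (2-2-1) from a binary road mask and then applies a road-configuration check of the kind described in (3-1). The mask representation, the lane labels, and the logic of the correction rule are assumptions made for illustration; none of the names or values come from this description.

```python
import numpy as np

def left_edge_inclination(road_mask: np.ndarray) -> float:
    """Inclination of an approximate line drawn along the left end side of
    the road region (the feature value of section (2-2-1)), computed here
    in image coordinates from a boolean H x W road mask."""
    rows, cols = [], []
    for row in range(road_mask.shape[0]):
        road_pixels = np.flatnonzero(road_mask[row])
        if road_pixels.size:
            rows.append(row)
            cols.append(road_pixels[0])   # leftmost road pixel in this row
    if len(cols) < 2:
        raise ValueError("road region too small to fit an approximate line")
    slope, _intercept = np.polyfit(cols, rows, deg=1)   # y = slope * x + b
    return float(slope)

def correct_by_road_configuration(estimated_lane: str, lanes_on_this_side: int) -> str:
    """Correction (3-1), sketched: a median-strip-side estimate on a road with
    a single lane on each side is treated as erroneous and replaced."""
    if estimated_lane == "TL2_median_strip_side" and lanes_on_this_side < 2:
        return "TL1_road_shoulder_side"
    return estimated_lane

# Illustrative use, with a per-lane threshold table like the one sketched earlier:
# slope = left_edge_inclination(road_mask)
# lane = lane_for_feature("left_edge_inclination", slope)
# lane = correct_by_road_configuration(lane, lanes_on_this_side=2)
```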
(Effect) As described above in detail, in the first embodiment, the shape indicating the road region is extracted from the image data obtained by imaging the progression direction of the vehicle6, and, based on the information indicating the shape, the inclination angle for one contour line, the center-of-gravity coordinate for a diagram indicating the shape of the road region, the angle between two consecutive contour lines across one vertex of the diagram indicating the shape, and the area of the diagram indicating the shape are calculated as the feature values for the shape of the road region. By determining whether or not the calculated feature value is included in a range of a threshold value set in advance for each lane, the lane in which the vehicle6is traveling is estimated. Therefore, according to the first embodiment, it is possible to estimate the lane by focusing on the feature of the shape indicating the road region when the traveling direction is viewed from the vehicle6. Therefore, it is possible to estimate the lane in which the vehicle is traveling without depending on the lane marker that divides the lane on the road. Accordingly, it is possible to estimate the lane even in a case where, for example, a repair mark of the lane marker remains due to construction or the like or fades or disappears due to deterioration over time. In the first embodiment, in a case where an object such as another traveling vehicle is present in a road region, a shape indicating the entire road region including the object and a shape indicating a region of the road region excluding the object are extracted from image data, and a contour line of the road region when it is assumed that the object is not present is estimated based on the extracted shapes. The lane in which the vehicle6is traveling is estimated based on the estimated inclination angle of the contour line. Therefore, for example, even in a case where the road shoulder or the left end portion of the traveling lane is hidden by another vehicle traveling in the lane on the road shoulder side, it is possible to estimate the contour line of the road shoulder or the left end portion of the traveling lane, to estimate the lane based on the estimation result. Furthermore, in the first embodiment, the validity of the lane estimation result obtained by the lane estimation processing unit13is evaluated based on the information indicating the configuration of the road corresponding to the traveling position of the vehicle6, the past lane change history, and the information indicating the presence or absence of the lane change of the vehicle6estimated based on the sensor data indicating the movement of the vehicle6, and the lane estimation result is corrected in the case where it is determined to be invalid. Therefore, for example, even in a case where clear image data cannot be obtained or a road region cannot be accurately recognized from the image data due to the influence of weather, illuminance, or the like, it is possible to correct the estimation result of the lane in which the vehicle is currently moving, and thereby to obtain an accurate lane estimation result. Second Embodiment A lane estimation device, method, and program according to a second embodiment of the present invention use pixel value data obtained by labeling each pixel in a road region based on a shape indicating the road region as a feature value of the road region. 
The lane estimation device, method, and program according to the second embodiment estimate the lane in which a vehicle6is moving by determining which of a plurality of patterns preset for each road or each lane the pixel value data is similar to. The lane estimation device according to the second embodiment of the present invention can adopt the same configuration as the lane estimation device1explained in relation to the first embodiment. Therefore, in the following, the second embodiment will be explained by using the same reference numerals for the same configurations as those of the first embodiment, and detailed explanations overlapping with the first embodiment will be omitted. (Configuration) An on-vehicle system including a lane estimation device1according to the second embodiment of the present invention can adopt the same configuration as that described with reference toFIG.1. The lane estimation device1according to the second embodiment can adopt the same hardware configuration as that described with reference toFIG.2. FIG.22is a block diagram showing a software configuration of the lane estimating device1according to the second embodiment of the present invention in association with the hardware configuration shown inFIG.2. Similarly to the first embodiment, a storage region of a data memory20includes an image data storage unit21, a lane estimation data storage unit22, a road information storage unit23, a vehicle sensor data storage unit24, and a threshold value storage unit25. In the lane estimation device1according to the second embodiment, the storage region of the data memory20further includes a pattern storage unit26. The pattern storage unit26is used to store a pattern (hereinafter, referred to as a “region pattern”, and various region patterns are collectively referred to as a “region pattern PT”) corresponding to a shape of a road region which is set in advance for each road or each lane and is shown in an image. The region pattern PT indicates an ideal shape of a road region in which, if the vehicle6is traveling in the center of each lane, the road will appear in the image captured by the camera installed in the vehicle6. The region pattern PT is created or set based on image data collected in advance from a large number of vehicles by, for example, a road management server or the like that provides a traffic congestion prediction service. The lane estimation device1can acquire a set of patterns including a plurality of region patterns PT corresponding to the type (vehicle type, vehicle height, and the like) of the vehicle6from the server through the network via, for example, a communication unit (not shown), and store the set of patterns in the pattern storage unit26. Here, depending on where the camera is installed in the vehicle6, the shape of the road shown in the image captured by the camera greatly differs. Therefore, the lane estimation device1may appropriately correct the acquired region pattern PT according to the installation position of the camera, such as whether the camera is located at the center of the vehicle6or is deviated to the left or right, the distance of the camera to the center line of the vehicle6, and the installation height of the camera based on the road, or the appearance of the hood of the vehicle itself shown in the image, then, store the corrected region pattern PT in the pattern storage unit26. 
Alternatively, the lane estimation device1may transmit an image captured in advance by an on-vehicle camera to the server and receive a set of region patterns PT corrected by the server based on the image. Alternatively, the lane estimation device1itself may generate a set of region patterns PT corresponding to the installation position of the camera. The pattern storage unit26stores, as a set of patterns corresponding to the types of the vehicles6, a large number of region patterns PT that differ depending on the types of roads (for example, a national expressway, national highways, a prefectural road, a municipal road, and the like), the number of lanes (which lane of the road with what number of lanes the vehicles6are traveling in), and the like. For example, the pattern storage unit26stores each region pattern PT in association with position information so that a necessary region pattern PT can be retrieved based on position data of the vehicle6detected by a GPS sensor3. As in the first embodiment, a control unit10includes an image data acquisition unit11, an image processing unit12, a lane correction unit14, a past estimation data acquisition unit15, a road information acquisition unit16, a vehicle sensor data acquisition unit17, a vehicle action state estimation unit18, and an estimation data output control unit19. The control unit10according to the second embodiment includes a lane estimation processing unit130instead of the lane estimation processing unit13. Similarly to the lane estimation processing unit13described in the first embodiment, the lane estimation processing unit130receives pre-processed image data from the image processing unit12and performs processing of estimating the lane in which the vehicle6is traveling on the basis of the image data. However, the feature value and the detailed function to be used in the estimation processing is different from the lane estimation processing unit13. FIG.23shows an example of functions of the lane estimation processing unit130. The lane estimation processing unit130includes a road region extraction unit131, a pattern acquisition unit1301, a similarity determination unit1302, and a lane estimation unit1303. The road region extraction unit131performs the following processing.(1) Processing of extracting a range corresponding to a road region and a region of an object present on the road (a vehicle region in this embodiment) from the image data received from the image processing unit12.(2) Processing of excluding a region erroneously extracted as a road region or a vehicle region by using information such as the size of the area of the region, and, furthermore, performing smoothing, etc. on the extracted road region and vehicle region to extract a shape indicating the road region and vehicle region. The pattern acquisition unit1301performs processing of reading out the region pattern PT stored in the pattern storage unit26and passing the region pattern PT to the similarity determination unit1302. The similarity determination unit1302serves as a feature value calculation unit and, based on the shape indicating the road region extracted by the road region extraction unit131, performs processing of acquiring pixel value data including a pixel value obtained by labeling each pixel in the road region as a feature value of the road region. 
The similarity determination unit1302further determines the similarity to the region pattern PT acquired by the pattern acquisition unit1301based on the acquired pixel value data, and passes the determination result to the lane estimation unit1303. The lane estimation unit1303serves as an estimation processing unit and performs processing of estimating which lane the vehicle6is currently traveling in based on the determination result by the similarity determination unit1302. (Operation) A lane estimation operation by the lane estimation device1according to the second embodiment configured in the above manner will be described. The lane estimation operation can follow the same flowchart as the overall processing procedure of the lane estimation processing by the control unit10described with reference toFIG.5in relation to the first embodiment. (1) Image Data Acquisition and Image Processing In step S1, the control unit10of the lane estimation device1executes image data acquisition processing under the control of the image data acquisition unit11, as in the first embodiment. In step S2, the control unit10of the lane estimation device1executes image processing necessary for lane estimation with respect to the acquired image data under the control of the image processing unit12, as in the first embodiment. The processing procedure and processing contents of the image processing unit12may be the same as those described in relation toFIG.6. FIG.25Ashows a second example of still-image data VD after the image processing. In this example, an image is imaged by a camera2mounted on a vehicle6traveling on a road having two lanes on each side as shown in, for example,FIG.10. In the still-image data VD ofFIG.25A, a hood portion of the vehicle6itself, a median strip MS, a guardrail RL on the median strip MS, traveling lanes TL1and TL2, a road shoulder SR and curb SB, and another vehicle MB traveling ahead are shown. (2) Estimation of Traveling Lane In step S3, the control unit10of the lane estimation device1according to the second embodiment executes processing of estimating a lane in which the vehicle6is traveling in the following manner under the control of the lane estimation processing unit130. FIG.24is a flowchart showing a processing procedure and processing contents of the lane estimation processing unit130. (2-1) Road Region Extraction First, in step S301, the lane estimation processing unit130causes the road region extraction unit131to perform processing of extracting a road region from the pre-processed image data VD. For the road having two lanes on each side as described above, the road region extraction unit131extracts a region including, for example, the traveling lanes TL1and TL2and the road shoulder SR as the road region. As in the first embodiment, the road region extraction unit131may extract only the traveling lanes TL1and TL2as the road region. As in the first embodiment, SegNet is used as an example of the road region extraction processing means. Optionally, similarly to the first embodiment, the road region extraction unit131may perform processing of excluding an erroneously detected region using information such as the size of the area of the region in consideration of a case where there is a region erroneously extracted as a road region, and may further perform processing such as smoothing on the extracted road region to extract a shape indicating the road region.
FIG.25Bshows an example of an output result obtained by performing the SegNet extraction processing on the image data VD shown inFIG.25A. InFIG.25B, in processed image data TVD, a shape (a shaded portion in the drawing) indicating a road region RE including traveling lanes TL1and TL2in the same traveling direction as the vehicle6itself and a left-side road shoulder SR and a shape (a dot-hatched portion in the drawing) indicating a region MBR including another vehicle MB traveling ahead are extracted. Each region is given a color corresponding to labeling on a pixel basis. In the following description, as an example, the road region RE is labeled with green ([R, G, B]=[0, 255, 0]), the vehicle region MBR is labeled with red ([R, G, B]=[255, 0, 0]), and the other region (colorless in the drawing) is labeled with black ([R, G, B]=[0, 0, 0]). In the case where another vehicle appears in the image data and the road region cannot be extracted due to the other vehicle, the road region extraction unit131may perform the road region extraction processing shown inFIG.9as described in the first embodiment. (2-2) Reading Out Region Pattern Under the control of the pattern acquisition unit1301, in step302, the lane estimation processing unit130performs processing of reading out the region pattern PT set in advance for each type of road, each road, or each lane from the pattern storage unit26and passing the region pattern PT to the similarity determination unit1302. For example, based on the position information of the vehicle6detected by the GPS sensor3, the pattern acquisition unit1301reads out one or more region patterns PT corresponding to the position information from the pattern storage unit26and passes the region pattern PT to the similarity determination unit1302. FIG.26Ashows a region pattern PT1set for each road, as an example of the region pattern PT read out from the pattern storage unit26. The region pattern PT1shown inFIG.26Ais set particularly for a national highway having two lanes on each side, and indicates a pattern relating to a road region shown in the image data acquired by a vehicle6traveling in a lane on the road shoulder side (left side). The region pattern PT1includes a road portion RD (shaded portion) and a portion BK (colorless portion) other than the road portion RD. In the following description, as an example, a green pixel value ([R, G, B]=[0, 255, 0]) is assigned to the road portion RD, and a black pixel value ([R, G, B]=[0, 0, 0]) is assigned to the other portion BK. FIG.26Bis a diagram in which a virtual line VL for distinguishing the traveling lanes is drawn with respect to the region pattern PT1shown inFIG.26Afor the purpose of explanation. The road portion RD includes a region of a traveling lane TL1in which the vehicle6is traveling and a region of a traveling lane TL2on the right side thereof. FIG.26Cshows a region pattern PT2set for each road as another example of the region pattern PT read out from the pattern storage unit26. The region pattern PT2shown inFIG.26Cis set for a national highway having two lanes on each side, and indicates a pattern of a road region shown in the image data acquired by the vehicle6traveling in the lane on the median strip side (right side). The region pattern PT2includes a road portion RD (shaded portion) and a portion BK (colorless portion) other than the road portion RD. 
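As an illustration of how a region pattern such as PT1 could be prepared offline and stored in the pattern storage unit 26, the following sketch rasterizes a road portion RD (green) onto an otherwise black image. The 640×360 size matches the image size used later in this description, while the polygon vertices below are placeholder values and not coordinates taken from the embodiment.

```python
import numpy as np
import cv2  # opencv-python

# Sketch of encoding a region pattern PT as an RGB label image: the road
# portion RD is green and the remaining portion BK is black. The polygon
# geometry is a made-up placeholder, not taken from the embodiment.

ROAD_RGB = (0, 255, 0)

def make_region_pattern(road_polygon, size=(360, 640)):
    pattern = np.zeros((*size, 3), dtype=np.uint8)             # portion BK
    pts = np.array(road_polygon, dtype=np.int32).reshape(-1, 1, 2)
    cv2.fillPoly(pattern, [pts], ROAD_RGB)                      # road portion RD
    return pattern

# e.g. a rough trapezoid for a shoulder-side lane view (placeholder geometry);
# the result would be registered in the pattern storage unit 26, keyed for
# example by road type, lane count, and position information.
pt1 = make_region_pattern([(0, 360), (640, 360), (420, 180), (200, 180)])
```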
FIG.26Dis a diagram in which a virtual line VL for distinguishing the traveling lanes is drawn with respect to the region pattern PT2shown inFIG.26Cfor the purpose of explanation. The road portion RD includes a region of a traveling lane TL2in which the vehicle6is traveling and a region of a traveling lane TL1on the left side thereof. As shown inFIG.26AtoFIG.26D, the region pattern PT used in the estimation processing is set in advance for each road or each lane so as to reflect that the shape of the road region shown in the image imaged by the camera2mounted on the vehicle6is different depending on which lane the vehicle6is traveling in. The lane estimation processing unit130of the lane estimation device1according to the second embodiment performs estimation of a lane in which the vehicle is moving by comparing the road portion RD of the region pattern PT and the road region shown in the image data at a pixel level. The pattern acquisition unit1301is configured to read out one or a plurality of region patterns PT necessary for lane estimation from the pattern storage unit26on the basis of the position data of the vehicle6detected by the GPS sensor3. As an example, the pattern acquisition unit1301is configured to acquire, from the road information storage unit23, information indicating a configuration of a road at a position where the vehicle6is currently traveling, based on position data of the vehicle6detected by the GPS sensor3, and to read out, from the pattern storage unit26, one or more necessary region patterns PT, based on the acquired information. For example, based on the information that the road on which the vehicle is currently traveling is a national highway having two lanes on each side, the pattern acquisition unit1301is configured to read out the region pattern PT1and the region pattern PT2associated with “national highway having two lanes on each side”. In a case where the road on which the vehicle is currently traveling is a national expressway having three lanes on each side, the pattern acquisition unit1301can read out three region patterns PT corresponding to a case where the vehicle is traveling in a road-shoulder-side lane, a case where the vehicle is traveling in a central lane, and a case where the vehicle is traveling in a median-strip-side lane, which are associated with the national expressway having three lanes on each side. These are merely examples, and the type and the number of the region patterns PT the pattern acquisition unit1301reads out from the pattern storage unit26may be arbitrarily set. Hereinafter, a description of the type of road (national highway, expressway, etc.) will be omitted. Note that “each side” is used merely for convenience of explanation, and even in the case of a road region including an oncoming lane or a vehicle region, it is possible to estimate a lane by reading out N region patterns PT corresponding to each lane of an N-lane road. The pattern acquisition unit1301may acquire the region pattern PT directly from the road management server or the like through a communication unit (not shown). The region pattern PT may be any pattern as long as it can be compared with the road region shown in the image data acquired by the vehicle6. As described above, the region pattern PT2shown inFIG.26Crelates to the image data acquired by the vehicle6traveling in the traveling lane TL2on the median strip side of the road having two lanes on each side, and includes both the traveling lane TL1and the traveling lane TL2. 
However, the region pattern PT does not need to include all the traveling lanes included in the road. FIG.27AtoFIG.27Dshow examples of such region patterns PT set for each lane. FIG.27Ashows a region pattern PT3for comparison with the road region RE shown in the image data in the case where there is a lane to the left of the lane in which the vehicle6itself is traveling. The region pattern PT3includes a road portion RD obtained by cutting out only a region related to the traveling lane TL1from the region pattern PT2shown inFIG.26C, and a portion BK other than the road portion RD. For example, in a case where the vehicle6is traveling on a road having two lanes on each side, the lane estimation processing unit130determines whether or not the road region RE in the processed image data TVD includes a region similar to the region pattern PT3shown inFIG.27A. In a case where it includes a similar region, the vehicle6can be estimated to be traveling in the lane TL2on the median strip side, and, in a case where it does not include a similar region, the vehicle6can be estimated to be traveling in the lane TL1on the road shoulder side. FIG.27Bshows a region pattern PT4for comparison with the road region RE shown in the image data in a case where a lane exists on the right side of the lane in which the vehicle6itself is traveling. The region pattern PT4includes a road portion RD obtained by cutting out only a region related to the traveling lane TL2from the region pattern PT1shown inFIG.26A, and a portion BK other than the road portion RD. FIG.27Cshows a region pattern PT5for comparison with the road region RE shown in the image data in a case where another lane exists to the left of the left side lane of the lane in which the vehicle6itself is traveling, such as a road having three lanes on each side. The region pattern PT5includes a road portion RD relating to a traveling lane that is not included in the region pattern PT2shown inFIG.26C. FIG.27Dshows a region pattern PT6for comparison with the road region RE shown in the image data in a case where another lane exists to the right of the right side lane of the lane in which the vehicle6itself is traveling, such as a road having three lanes on each side. The region pattern PT6also includes a road portion RD relating to a traveling lane that is not included in the region pattern PT1shown inFIG.26A. By using the region patterns PT shown inFIG.27AtoFIG.27D, the area to be compared becomes smaller, and the lane estimation device1can determine the degree of similarity with fewer processes. The region patterns PT shown inFIG.27AtoFIG.27Dare merely examples. The region patterns PT may be variously changed depending on the presence or absence of a road shoulder, the width of each lane, the presence or absence of a median strip, the radius of curvature of a road, and the like. Whether the pattern acquisition unit1301should read out the region pattern PT of the entire road as shown inFIG.26AtoFIG.26Dor should read out the region pattern PT for each lane as shown inFIG.27AtoFIG.27Dmay be arbitrarily set by a user or the like of the lane estimation device1. The pattern acquisition unit1301may acquire the region pattern PT directly from the road management server or the like through a communication unit (not shown). 
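Since a per-lane pattern such as PT3 is described as being obtained by cutting out only the region of one traveling lane from a full-road pattern, such a pattern could be derived by masking, as in the following sketch. The `lane_mask` array marking which pixels of the full pattern belong to the lane of interest is an assumption; how that mask is authored is outside the scope of this sketch.

```python
import numpy as np

# Sketch of deriving a per-lane pattern (e.g. PT3 from PT2): keep only the
# pixels of the lane of interest and paint everything else black (portion BK).

def cut_out_lane(full_pattern, lane_mask):
    partial = np.zeros_like(full_pattern)        # portion BK everywhere
    partial[lane_mask] = full_pattern[lane_mask]
    return partial
```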
(2-3) Determination of Degree of Similarity Under the control of the similarity determination unit1302, in step S303, the lane estimation processing unit130then compares the road region RE extracted from the image data VD with the road portion RD of the region pattern PT read out by the pattern acquisition unit1301at a pixel level. The lane estimation processing unit130is assumed to perform preprocessing such as size adjustment and inclination adjustment in advance on the processed image data TVD and the region pattern PT so that they can be compared with each other. The lane estimation processing unit130is also assumed to perform necessary calibration in advance in accordance with the vehicle height of the vehicle6, the appearance of the hood in the image data VD, and the like in addition to the individual difference in performance, the inclination at the time of installation, and the like of the camera2. First, the similarity determination unit1302acquires pixel value data indicating a pixel value at each pixel position for each of the processed image data TVD and the region pattern PT. As described above, in the processed image data TVD, each pixel position is labeled with a different color (pixel value) by the region extraction processing by the road region extraction unit131. Similarly, in the region pattern PT, different RGB values are assigned to the respective pixel positions. The similarity determination unit1302reads out the RGB values at each pixel position stored in the form of, for example, a two-dimensional array from each pixel value data, compares the RGB values at each pixel position, and determines whether the RGB values are the same. The similarity determination unit1302may perform comparison for all pixel positions or may perform comparison only for pixel positions corresponding to the road portion RD in the region pattern PT. The comparison processing will be further described later. In step S304, under the control of the similarity determination unit1302, the lane estimation processing unit130then determines the overall degree of similarity based on the comparison result for each pixel. As an example, the similarity determination unit1302determines the degree of similarity by calculating a ratio of the number of pixels determined to have the same RGB value to the total number of compared pixels. In step S305, the lane estimation processing unit130determines whether or not the similarity determination processing by the similarity determination unit1302has been completed for all the region patterns PT read out from the pattern storage unit26by the pattern acquisition unit1301. In the case where there is an uncompared region pattern PT (branch of NO), steps S303to304are repeated for the uncompared region pattern PT. In the case where the similarity determination processing has been completed for all of the region patterns PT (branch of YES), the processing proceeds to step S306. In step S306, under the control of the similarity determination unit1302, the lane estimation processing unit130passes the similarity determination result to the lane estimation unit1303. In one example, the similarity determination unit1302selects the region pattern PT having the highest degree of similarity among the plurality of region patterns PT for which the degree of similarity has been determined, and passes the selected region pattern PT to the lane estimation unit1303together with the determined degree of similarity. 
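The pixel-level comparison of steps S303 and S304, and the repetition over all read-out patterns in steps S305 and S306, could be written as in the following sketch. It assumes the labeling convention described above (road region in green); whether all pixel positions or only those of the road portion RD of the pattern are compared is selectable, as in the description.

```python
import numpy as np

ROAD_RGB = np.array([0, 255, 0], dtype=np.uint8)

def pixel_similarity(tvd, pattern, road_portion_only=False):
    # steps S303-S304: compare RGB values per pixel position and take the ratio
    # of matching pixels to the number of compared pixels as the similarity
    assert tvd.shape == pattern.shape
    if road_portion_only:
        compare = np.all(pattern == ROAD_RGB, axis=-1)     # road portion RD only
    else:
        compare = np.ones(pattern.shape[:2], dtype=bool)   # all pixel positions
    same = np.all(tvd == pattern, axis=-1) & compare
    return same.sum() / max(compare.sum(), 1)

def best_pattern(tvd, candidates):
    # steps S305-S306: repeat for every read-out pattern, keep the most similar
    scores = {name: pixel_similarity(tvd, pt) for name, pt in candidates.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]
```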
The number of region patterns PT selected by the similarity determination unit1302is not limited to one, and a plurality of region patterns PT satisfying a certain criterion may be selected. For example, the similarity determination unit1302may be configured to pass all the region patterns PT for which it is determined that the degree of similarity with the image data TVD exceeds a predetermined threshold value to the lane estimation unit1303. Alternatively, in a case where only one region pattern PT is read out by the pattern acquisition unit1301, the similarity determination unit1302may be configured to determine whether the degree of similarity exceeds a predetermined threshold value and pass the region pattern PT to the lane estimation unit1303together with the determination result. (2-4) Lane Estimation In step S307, under the control of the lane estimation unit1303, the lane estimation processing unit130performs processing of estimating in which lane the vehicle6is traveling based on the similarity determination result received from the similarity determination unit1302. For example, in a case where it is determined by GPS information that the vehicle6is traveling on a road having two lanes on each side, the region pattern PT1and the region pattern PT2are read out by the pattern acquisition unit1301, and the region pattern PT1is determined to have a higher degree of similarity by the similarity determination unit1302, the lane estimation unit1303can estimate that the lane in which the vehicle6is traveling is the lane TL1on the road shoulder side of the road having two lanes on each side. Alternatively, for example, in a case where the pattern acquisition unit1301is set to read out only the region pattern PT1when the vehicle is traveling on a road having two lanes on each side, the lane estimation unit1303can estimate the lane based on only the degree of similarity between the image data TVD and the region pattern PT1. In this case, if the degree of similarity to the region pattern PT1received from the similarity determination unit1302exceeds a predetermined threshold value, the lane estimation unit1303can estimate that the lane in which the vehicle6is traveling is the lane TL1on the road shoulder side of the road having two lanes on each side, and, if the degree of similarity to the region pattern PT1is equal to or less than the predetermined threshold value, the lane estimation unit1303can estimate that the lane in which the vehicle6is traveling is the lane TL2on the median strip side of the road having two lanes on each side. As another example, in the case where the vehicle is determined to be traveling on a road having two lanes on each side by the GPS information, the pattern acquisition unit1301may be set to read out the region patterns PT3and PT4shown inFIG.27AandFIG.27B. In such a case, when the similarity determination unit1302determines that the degree of similarity of the region pattern PT3is higher than that of the region pattern PT4, the lane estimation unit1303determines that a traveling lane exists on the left side of the lane in which the vehicle6itself is traveling, and can estimate that the vehicle6itself is traveling in the lane TL2on the median strip side of the road having two lanes on each side. 
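For a road having two lanes on each side, the lane estimation of step S307 described above (comparison of the degrees of similarity of PT1 and PT2, or a threshold test on PT1 alone) could look like the following sketch; the threshold value and the returned labels are illustrative assumptions.

```python
# Sketch of step S307 for a two-lane road, following the two variants above.

def estimate_lane_two_lanes(scores, threshold=0.8):
    if "PT1" in scores and "PT2" in scores:
        # both full-road patterns were read out: the more similar one wins
        return ("TL1 (road shoulder side)" if scores["PT1"] >= scores["PT2"]
                else "TL2 (median strip side)")
    if "PT1" in scores:
        # only PT1 was read out: decide against a threshold
        return ("TL1 (road shoulder side)" if scores["PT1"] > threshold
                else "TL2 (median strip side)")
    raise ValueError("no usable similarity result for a two-lane road")
```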
As yet another example, in the case where it is determined that the vehicle is traveling on a road having three lanes on each side by the GPS information, the pattern acquisition unit1301may be set to read out the region patterns PT3to PT6shown inFIG.27AtoFIG.27D, and the similarity determination unit1302may be set to select a region pattern PT having a degree of similarity exceeding a predetermined threshold value. For example, in a case where the region pattern PT3and the region pattern PT4are selected by the similarity determination unit1302, the lane estimation unit1303can determine that the vehicle6is traveling in the center lane of a road having three lanes on each side. Alternatively, in a case where the region pattern PT3and the region pattern PT5are selected by the similarity determination unit1302, the lane estimation unit1303can determine that the vehicle6is traveling in a lane on the median strip side of a road having three lanes on each side. The lane estimation processing unit130may determine the tolerance of the degree of similarity based on a preset threshold value stored in the threshold value storage unit25. For example, in a case where the region pattern PT having the highest degree of similarity is received from the similarity determination unit1302, if the degree of similarity is below a preset threshold value, the processing may be suspended, and an error message indicating that estimation is impossible may be output. The threshold value may be a constant value regardless of the lane, or may be a value set for each lane or each region pattern PT. Alternatively, the lane estimation processing unit130may suspend the processing and output an error message also in a case where, for example, there is not a sufficient number of region patterns PT having similarities exceeding the predetermined threshold value received from the similarity determination unit1302, and the lane estimation unit1303cannot estimate the traveling lane. In this case, new image data VD may be acquired to redo the processing. (3) Correction of Lane Estimation Result In the same manner as in the first embodiment, the control unit10of the lane estimation device1then determines the accuracy (validity) of the lane estimated by the lane estimation processing unit130in step S4shown inFIG.5under the control of the lane correction unit14, and executes processing of correcting the estimation result of the lane in the case where it is determined that the lane is not valid. The processing procedure and processing contents of the lane correction unit14may be the same as those described with reference toFIG.8. (4) Output of Lane Estimation Data The control unit10executes control for outputting the lane estimation result in step S5under the control of the estimation data output control unit19. This processing can be executed in the same manner as in the first embodiment. (5) Other Embodiments FIG.28Ashows another example of the still-image data VD in the case where there is another traveling vehicle MB. The still-image data VD is, for example, imaged by the camera2mounted on the vehicle6traveling in the lane TL1on the road shoulder side of a road having two lanes on each side as shown inFIG.10, and is subjected to image processing by the image data acquisition unit11. 
In the still-image data VD ofFIG.28A, similarly to the image data VD shown inFIG.25A, the hood portion of the vehicle6itself, the median strip MS, the traveling lanes TL1and TL2, the road shoulder SR, and the curb SB, and, furthermore, another vehicle MB traveling in the traveling lane TL2are displayed. FIG.28Bshows an example of the output result of the extraction processing performed on the image data VD shown inFIG.28Ausing SegNet by the road region extraction unit131. InFIG.28B, a road region RE (shaded portion) and a region MBR (dot-hatched portion) including the other vehicle MB are extracted from the processed image data TVD. Again, each region is given a color corresponding to labeling on a pixel basis. Here, the road region RE is labeled with green ([R, G, B]=[0, 255, 0]), the vehicle region MBR is labeled with red ([R, G, B]=[255, 0, 0]), and the other regions are labeled with black ([R, G, B]=[0, 0, 0]). In this embodiment, it is known from the GPS information that the vehicle6itself is traveling on a road having two lanes on each side, therefore, the pattern acquisition unit1301reads out the region pattern PT1and the region pattern PT2from the pattern storage unit26.FIG.28Bshows an example in which the accuracy of region extraction is slightly low, and approximation processing is not performed. The road region RE and the vehicle region MBR include uneven contour line portions caused by erroneous detection. FIG.29shows an example of processing of determining the degree of similarity between the region pattern PT1and the processed image data TVD by the similarity determination unit1302. In this example, the similarity determination unit1302compares pixel values at each coordinate point P with the upper left of each image as the origin, the horizontal direction of the image as the x-axis, the vertical direction of the image as the y-axis, and each pixel position as the coordinate point P (x, y). The comparison of the pixel values may be performed over the entire image or may be performed only on the coordinates corresponding to the road portion RD in the region pattern PT1. In this example, the coordinates of the lower right point Q of each image are (640,360). InFIG.29, a point P1is located in a green region recognized as a road in both the pattern PT1and the image data TVD, and the RGB values match. Therefore, the similarity determination unit1302can perform processing of, for example, setting a flag indicating that the pixel values match for the point P1. On the other hand, the point P2located in the road portion RD in the pattern PT1, however, is located in the region MBR identified as the vehicle in the image data TVD. Therefore, the similarity determination unit1302determines that the pixel values of the point P2do not match. Here, in one embodiment, the similarity determination unit1302may be configured to determine that the pixel values of the coordinate points included in the road portion RD of the region pattern PT match as long as the coordinate points are included in either the road portion RD (shaded portion) or the vehicle region MBR (dot-hatched portion) in the image data TVD. In other words, the similarity determination unit1302according to this embodiment is configured to determine that only a black region (a colorless region in the drawing) in which nothing is extracted in the image data TVD does not match the coordinate points included in the road portion RD of the region pattern PT. 
That is, the similarity determination unit1302uses the entire shape RE+MBR including the region MBR of the object present on the road region in the road region RE for comparison with the region pattern PT. As described above, even when an object such as another traveling vehicle exists on the road region and a part of the road region cannot be extracted on the image data, the information can be easily complemented by regarding the coordinate point corresponding to the road part RD of the region pattern PT as the road region. The similarity determination may be performed on the assumption that the vehicle region MBR is different from the road region RE as labeled. As yet another example for explaining the similarity determination,FIG.30shows an image in which a road portion PT3-RD (the contour line thereof shown in alternate long and short dashed lines) of the pattern PT3shown inFIG.27Aand the road region RE in the processed image data TVD shown inFIG.28Bare superimposed. As shown inFIG.30, the similarity determination unit1302may determine the degree of similarity for only the road portion PT3-RD. In this case, the lane estimation device1compares the pixel value of each coordinate point in the road portion PT3-RD with the pixel value of the corresponding coordinate point in the image data TVD. If there are more than a certain number of coordinate points in the road portion PT3-RD whose pixel values match the pixel values of the image data TVD, the lane estimation device1can estimate that there is a lane in this region, that is, the vehicle6itself is traveling in the traveling lane TL2on the median strip side. On the other hand, if the number of coordinate points in the road portion PT3-RD whose pixel values matching the pixel values of the image data TVD is equal to or less than a certain number, the lane estimation device1can estimate that there is no lane in this region, that is, the vehicle6itself is traveling in the traveling lane TL1on the road shoulder side. According to the example shown inFIG.30, by using a region pattern PT corresponding to a part of the lane such as the pattern PT3, the lane estimation device1can perform lane estimation without any problem even in a case where another vehicle is traveling in the same lane ahead of the vehicle6itself. In addition, the lane estimation device1can set a smaller area as a comparison target, and can perform lane estimation by a small number of comparison processes. The region pattern PT to be used for comparison may be arbitrarily set by the user of the lane estimation device1or the like in accordance with the state of the road, the speed of processing, the accuracy of estimation, and the like. The lane estimation device1may also be configured to cut out the upper portion and the lower portion of the image and each region pattern PT (for example, cut out an upper portion of 640×100 and a lower portion of 640×60 of the image of 640×360) and compare the pixel values for only the remaining portion. This makes it possible to reduce the calculation cost for lane estimation. Furthermore, as shown inFIG.25AandFIG.28A, in a case where the hood portion of the vehicle6itself is shown in the image data VD, the image corresponding to the hood portion6and the lower portion of each region pattern PT may be cut out, and the comparison of pixel values may be performed only in the remaining portion. 
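The two refinements just described, counting a pixel of the vehicle region MBR as a match when it falls inside the road portion RD of the pattern, and cutting away the upper and lower bands of the image before comparison, could be combined as in the following sketch. The band heights follow the 640×360 example given above; the labeling colors are those used in this embodiment.

```python
import numpy as np

ROAD_RGB    = np.array([0, 255, 0], dtype=np.uint8)
VEHICLE_RGB = np.array([255, 0, 0], dtype=np.uint8)

def crop_bands(img, top=100, bottom=60):
    # cut out the upper 640x100 and lower 640x60 bands of a 640x360 image
    return img[top:img.shape[0] - bottom]

def occlusion_tolerant_similarity(tvd, pattern):
    tvd, pattern = crop_bands(tvd), crop_bands(pattern)
    rd = np.all(pattern == ROAD_RGB, axis=-1)              # road portion of PT
    # a pixel counts as matching if it is road OR an occluding vehicle
    road_or_vehicle = (np.all(tvd == ROAD_RGB, axis=-1) |
                       np.all(tvd == VEHICLE_RGB, axis=-1))
    matched = rd & road_or_vehicle
    return matched.sum() / max(rd.sum(), 1)
```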
Since this will reduce the determination error of the degree of similarity caused by differences in the hood shape, a common region pattern PT will be able to be used among different vehicle types. (Effect) As described above in detail, the lane estimation device1according to the second embodiment extracts the shape indicating the road region from the image data obtained by imaging the traveling direction of the vehicle6, and, based on the shape indicating the road region, acquires the pixel value data obtained by labeling each pixel in the road region as the feature value. The lane in which the vehicle6is moving is estimated by determining which preset pattern for each road or each lane the pixel value data is similar to. As described above, according to the second embodiment, it is possible to estimate the lane in which the vehicle6is moving by focusing on the feature of the shape indicating the road region in the progression direction viewed from the moving vehicle6being different depending on the lane in which the vehicle6is moving, and comparing the shape with the preset pattern based on the pixel value obtained by labeling each pixel. Therefore, it is possible to estimate the lane in which the vehicle is traveling without depending on the lane marker that divides the lanes on the road, and thus it is possible to accurately estimate the lane even in a case where, for example, a repair mark of the lane marker remains due to construction or the like or the lane marker fades or disappears due to deterioration over time. In one embodiment, the region pattern PT corresponding to all the lanes on the road is used for comparison, thereby the accuracy of estimation is expected to improve. In another embodiment, the region pattern PT corresponding to one of the lanes on the road is used for comparison, thereby the processing speed is expected to increase and the influence of other vehicles is expected to decrease. Furthermore, in one embodiment, even when an object such as another traveling vehicle exists on the road region, the degree of similarity for the coordinate point in the road portion RD in the region pattern is determined by regarding the object as a part of the road. Therefore, even when the information obtained from the road region shown in the image data is insufficient due to the presence of an object such as another traveling vehicle, the lane can be estimated by complementing the information based on the shape of the object. Other Embodiments (1) In each of the above embodiments, a case where the lane estimation device1is mounted on the vehicle has been described as an example. However, the present invention is not limited thereto, and the lane estimation device1may be installed on a cloud computer or an edge router, and the vehicle6may transmit the image data obtained by the camera2, the position data obtained by the GPS sensor3, and the vehicle sensor data obtained by the vehicle sensor4from the on-vehicle communication device to the lane estimation device on the cloud or the edge router so that the lane estimation device can receive each type of data above to execute lane estimation processing. In this case, each processing unit included in the lane estimation device may be distributed to an on-vehicle device, a cloud computer, an edge router, and the like so that these devices cooperate with each other to obtain the lane estimation data. The various functional units described in each of the embodiments may be realized by using a circuit. 
The circuit may be a dedicated circuit that realizes a specific function, or may be a general-purpose circuit such as a processor. At least a part of the processing of each of the above-described embodiments can also be realized by using, for example, a processor mounted on a general-purpose computer as basic hardware. The program for realizing the above-described processing may be provided by being stored in a computer-readable recording medium. The program is stored in a recording medium as a file in an installable format or a file in an executable format. Examples of the recording medium include a magnetic disk, an optical disk (such as a CD-ROM, a CD-R, or a DVD), a magneto-optical disk (such as an MO), and a semiconductor memory. The recording medium may be any medium as long as it can store a program and can be read by a computer. Furthermore, the program for realizing the above-described processing may be stored in a computer (server) connected to a network such as the Internet and downloaded to a computer (client) via the network.(2) In each of the above-described embodiments, a case where the lane estimation data obtained by the lane estimation device1is output to the autonomous driving controller5, and the autonomous driving controller5uses the lane estimation data to cause the vehicle6to travel in a lane or to control lane changes has been described as an example. However, the present invention is not limited thereto, and the lane estimation data may be output to the drive recorder, and the drive recorder may record the lane estimation data as one piece of travel history information of the vehicle6. Furthermore, the lane estimation data may be transmitted to, for example, a road management server, and the road management server may use the lane estimation data as data for monitoring a traffic volume, predicting a traffic jam, or the like for each lane of a road. In this case, the lane change instruction information may be presented to the vehicle based on the prediction result of the traffic jam or the like. Furthermore, by inputting the lane estimation data to a navigation device mounted on the vehicle, for example, change instruction information of the traveling lane of the vehicle may be presented to the driver according to the destination.(3) In each of the above-described embodiments, the vehicle6such as an automobile is taken as an example of the moving object, and an example of a case in which the vehicle6travels on a road having two lanes on each side is described. However, the moving object is not limited thereto, and may be, for example, a motorcycle, a bicycle, a personal mobility vehicle, a vehicle towed by livestock such as a carriage, or an agricultural vehicle such as a tiller, or even a pedestrian. In the case of a motorcycle, it is determined whether or not the lane in which the motorcycle is currently traveling is a lane that can be traveled in based on, for example, an estimation result of the traveling lane and information indicating the displacement of the motorcycle registered in advance. If the motorcycle is traveling in a lane in which a motorcycle should not travel, a warning message by synthesized voice or ringing may be output. Similarly, in the case of a bicycle, it is determined whether the bicycle is traveling in a dedicated bicycle lane set on the road in a predetermined direction based on, for example, an estimation result of the traveling lane and the detection result of the traveling direction. 
If the bicycle is traveling in a lane other than a dedicated bicycle lane or if the bicycle is traveling in a reverse direction in the dedicated bicycle lane, a warning message by synthesized voice or ringing may be output to the driver. In addition, the type of the road to be estimated may be an expressway, a toll road, a cycling road, a sidewalk, or an agricultural road other than a general road having two lanes on each side. The configuration of the lane estimation device, the processing procedure and processing content of the lane estimation method, the configuration of the road to be estimated, and the like can be variously modified without departing from the gist of the present invention.(4) In each of the above-described embodiments, the image data used for lane estimation is described as being obtained by the camera2imaging a range including a road region in the traveling direction of the vehicle6. However, the image data used for lane estimation is not limited to this, and may be image data obtained by the camera2imaging a range including a road region in another direction, such as at the rear of the vehicle6.(5) In the second embodiment, an example of determining the degree of similarity by comparing the road region RE of the image data TVD with the road portion RD of the region pattern PT has been described. However, it is also possible to determine the degree of similarity by comparing a region other than the road region (non-road region) with the portion BK other than the road portion RD of the region pattern PT. The present invention is not limited exactly to the above-described embodiments, and can be embodied by modifying its structural elements at the implementation stage without departing from the gist thereof. In addition, various inventions can be made by suitably combining the structural elements disclosed in connection with the above embodiments. For example, some of the structural elements may be deleted from all of the structural elements described in the embodiments. Furthermore, the structural elements of different embodiments may be appropriately combined. SUPPLEMENTARY NOTE Some or all of the above-described embodiments can be described as shown in the following supplementary notes in addition to the claims, but are not limited thereto. [C1] A lane estimation device comprising:an image acquisition unit configured to acquire image data obtained by imaging a range including a road region in which a moving object is moving;a feature value calculation unit configured to recognize a shape indicating the road region from the acquired image data, and calculate a feature value of the road region based on the recognized shape; andan estimation processing unit configured to estimate a lane in which the moving object is moving based on the calculated feature value. [C2] The lane estimation device according to C1 above, whereinthe feature value calculation unit calculates, based on the recognized shape indicating the road region, an inclination angle of a contour line thereof as the feature value of the road region, andthe estimation processing unit estimates the lane in which the moving object is moving by determining which threshold value range preset for each lane the calculated inclination angle of the contour line is included in. 
[C3] The lane estimation device according to C1 above, whereinthe feature value calculation unit calculates, based on the recognized shape indicating the road region, at least one of center-of-gravity coordinates for a diagram indicating the shape, an angle of one vertex of the diagram indicating the shape or an angle of one vertex of a virtual diagram derived from the diagram, and an area of the diagram indicating the shape; andthe estimation processing unit estimates the lane in which the moving object is moving by determining which threshold value range preset for each lane the calculated center-of-gravity coordinates for the diagram, vertex angle, and diagram area are included in. [C4] The lane estimation device according to cl above, whereinthe feature value calculation unit calculates, based on the recognized shape indicating the road region, at least one of an angle between two sides of a diagram obtained by converting the shape into a triangle having one side in a vertical direction of a screen indicated by the image data, or an area of the converted diagram, as the feature value; andthe estimation processing unit estimates the lane in which the moving object is moving by determining which threshold value range preset for each lane the calculated angle between contour lines or area of the diagram is included in. [C5] The lane estimation device according to cl above, whereinthe feature value calculation unit recognizes, from the acquired image data, a first shape indicating the road region including an object present on the road region and a second shape indicating a region excluding the object from the road region, respectively, and estimates a contour line of the road region when the object is not present on the basis of the recognized first shape and second shape to calculate an inclination angle of the contour line; andthe estimation processing unit estimates the lane in which the moving object is moving by determining which threshold value range preset for each lane the calculated inclination angle of the contour line is included in. [C6] The lane estimation device according to C1 above, whereinthe feature value calculation unit acquires, based on the recognized shape indicating the road region, pixel value data labeled for each pixel in the road region as the feature value of the road region; andthe estimation processing unit estimates the lane in which the moving object is moving by determining which of a plurality of patterns preset for the road region the acquired pixel value data is similar to. [C7] The lane estimation device according to cl above, whereinthe feature value calculation unit acquires, based on the recognized shape indicating the road region, pixel value data labeled for each pixel in the road region as the feature value of the road region; andthe estimation processing unit estimates the lane in which the moving object is moving by determining which pattern preset for each lane included in the road region the acquired pixel value data is similar to. 
[C8] The lane estimation device according to cl above, whereinthe feature value calculation unit recognizes, from the acquired image data, a first shape indicating the road region including an object present on the road region, and acquire pixel value data labeled for each pixel in the first shape; andthe estimation processing unit estimates the lane in which the moving object is moving by determining which of a plurality of patterns preset for the road region or patterns preset for each lane included in the road region the acquired pixel value data is similar to. [C9] The lane estimation device according to any one of the above C1 to C8, further comprising a correction unit configured to correct, based on at least one of information indicating a lane change history of the moving object estimated from a lane estimation result obtained in the past by the estimation processing unit, information relating to a structure of the road region at a moving position of the moving object, or information indicating a lane change in the road region estimated from a state of movement of the moving object, a currently obtained lane estimation result by the estimation processing unit. [C10] A lane estimation method, wherein an information processing device estimates a lane in which a moving object is moving, the lane estimation method comprising:acquiring image data obtained by imaging a range including a road region in which the moving object is moving;recognizing a shape indicating the road region from the acquired image data and calculating a feature value of the road region based on the recognized shape; andestimating a lane in which the moving object is moving based on the calculated feature value. [C11] A program for causing a processor included in the lane estimation device to execute processing of each unit included in the lane estimation device according to any one of C1 to C9. [C12] A lane estimation method for estimating a lane in which a moving object (6) is moving executed by a computer, the method comprising:acquiring image data obtained by imaging a range including a road region in which the moving object (6) is moving;recognizing a shape indicating the road region from the acquired image data and calculating a feature value of the road region based on the recognized shape; and estimating a lane in which the moving object (6) is moving based on the calculated feature value. [C13] The method according to C12 above, whereinthe calculating the feature value includes calculating, based on the recognized shape indicating the road region, an inclination angle of a contour line thereof as the feature value of the road region, andthe estimating includes estimating the lane in which the moving object (6) is moving by determining which threshold value range preset for each lane the calculated inclination angle of the contour line is included in. 
[C14] The method according to C12 above, whereinthe calculating the feature value includes calculating, based on the recognized shape indicating the road region, at least one of center-of-gravity coordinates for a diagram indicating the shape, an angle of one vertex of the diagram indicating the shape or an angle of one vertex of a virtual diagram derived from the diagram, and an area of the diagram indicating the shape; andthe estimating includes estimating the lane in which the moving object (6) is moving by determining which threshold value range preset for each lane the calculated center-of-gravity coordinates for the diagram, vertex angle, and diagram area are included in. [C15] The method according to c12 above, whereinthe calculating the feature value includes calculating, based on the recognized shape indicating the road region, at least one of an angle between two sides of a diagram obtained by converting the shape into a triangle having one side in a vertical direction of a screen indicated by the image data, or an area of the converted diagram, as the feature value; andthe estimating includes estimating the lane in which the moving object (6) is moving by determining which threshold value range preset for each lane the calculated angle between contour lines or area of the diagram is included in. [C16] The method according to c12 above, whereinthe calculating the feature value includes recognizing, from the acquired image data, a first shape indicating the road region including an object present on the road region and a second shape indicating a region excluding the object from the road region, respectively, and estimating a contour line of the road region when the object is not present on the basis of the recognized first shape and second shape to calculate an inclination angle of the contour line; andthe estimating includes estimating the lane in which the moving object (6) is moving by determining which threshold value range preset for each lane the calculated inclination angle of the contour line is included in. [C17] The method according to C12 above, whereinthe calculating the feature value including acquiring, based on the recognized shape indicating the road region, pixel value data labeled for each pixel in the road region as the feature value of the road region; andthe estimating includes estimating the lane in which the moving object (6) is moving by determining which of a plurality of patterns preset for the road region the acquired pixel value data is similar to. [C18] The method according to c12 above, whereinthe calculating the feature value including acquiring, based on the recognized shape indicating the road region, pixel value data labeled for each pixel in the road region as the feature value of the road region; andthe estimating includes estimating the lane in which the moving object (6) is moving by determining which pattern preset for each lane included in the road region the acquired pixel value data is similar to. 
[C19] The method according to c12 above, whereinthe calculating the feature value including recognizing, from the acquired image data, a first shape indicating the road region including an object present on the road region, and acquiring pixel value data labeled for each pixel in the first shape; andthe estimating includes estimating the lane in which the moving object (6) is moving by determining which of a plurality of patterns preset for the road region or patterns preset for each lane included in the road region the acquired pixel value data is similar to. [C20] The method according to any one of C12 to C19 above, further comprising correcting, based on at least one of information indicating a lane change history of the moving object (6) estimated from a lane estimation result obtained in the past, information relating to a structure of the road region at a moving position of the moving object (6), or information indicating a lane change in the road region estimated from a state of movement of the moving object (6), a currently obtained lane estimation result. [C21] A lane estimation device (1) comprising means for performing the methods of any one of C12 to C19 above. [C22] A program, when executed by a computer, comprising instructions for causing the computer to execute the methods of any one of C12 to C19 above. [C23] A computer-readable storage medium, when executed by a computer, comprising instructions for causing the computer to execute the methods of any one of C12 to C19 above. REFERENCE SIGNS LIST 1. . . Lane estimation device2. . . Camera3. . . GPS sensor4. . . Vehicle sensor5. . . Autonomous driving controller6. . . Vehicle10. . . Control unit10A . . . Hardware processor10B . . . Program memory11. . . Image data acquisition unit12. . . Image processing unit13. . . Lane estimation processing unit14. . . Lane correction unit15. . . Past estimation data acquisition unit16. . . Road information acquisition unit17. . . Vehicle sensor data acquisition unit18. . . Vehicle action state estimation unit19. . . Estimation data output control unit20. . . Data memory21. . . Image data storage unit22. . . Lane estimation data storage unit23. . . Road information storage unit24. . . Vehicle sensor data storage unit25. . . Threshold value storage unit26. . . Pattern storage unit30. . . Input/output I/F40. . . Bus130. . . Lane estimation processing unit1301. . . Pattern acquisition unit1302. . . Similarity determination unit1303. . . Lane estimation unitVD . . . Still-image dataTVD . . . Processed image dataTL1, TL2. . . Traveling laneWL . . . SidewalkMS . . . Median stripSR . . . Road shoulderSB . . . CurbSH . . . PlantingRE . . . Contour of road regionPT . . . Region patternRD . . . Contour of road portionMBR . . . Vehicle regionGR . . . Guardrail | 106,440 |
11861842 | DESCRIPTION OF EMBODIMENTS Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the appended drawings. Note that, in this specification and the appended drawings, structural elements that have substantially the same function and structure are denoted with the same reference numerals, and repeated explanation of these structural elements is omitted. Note that description will be provided in the following order.1. First Embodiment1.1. Overview1.2. Functional Configuration Example of Manual Cropping Device101.3. Functional Configuration Example of Automatic Cropping Device201.4. Functional Configuration Example of Detection Device301.5. Process Flow2. Second Embodiment3. Third Embodiment4. Hardware Configuration Example5. Conclusion 1. FIRST EMBODIMENT <<1.1. Overview>> First, the overview of the first embodiment of the present disclosure will be described. As described above, in recent years, devices that perform an operation on the basis of the detection result of an object have become widespread. As an example of the above-mentioned device, for example, an in-vehicle device that detects an object such as a pedestrian or a traveling vehicle from a captured image, outputs an alert, and assists driving can be mentioned. Here, in order to realize a high-performance object detection function in the in-vehicle device, it is required to secure a large number of cropped images of the object to be used as a learning image at the stage of generating a detector. However, when the above-mentioned cropped images are manually created, costs such as labor cost and work time increase. For this reason, as in PTL 1 and PTL 2 described above, a method of automatically generating a learning image has also been proposed. However, for example, as described in PTL 2, when a composite image generated by CG is used as a learning image, it may be difficult to sufficiently improve the performance of the detector due to the difference from the reality. For example, in the generation of a cropped image using a relatively simple non-pattern recognition system or pattern recognition system algorithm disclosed in PTL 1, it is difficult to secure sufficient cropping accuracy. Therefore it is not practical to apply it to devices that require high accuracy, such as in-vehicle devices that detect a pedestrian, for example. Further, PTL 1 discloses a technique of repeating a single machine learning method using the cropped image generated as described above, but such a method causes a bias in learning and it is difficult to generate a detector with high generalization performance. The technical idea according to the present disclosure is conceived by paying attention to the above points, and enables a large number of highly accurate learning images to be generated without bias at reduced costs. For this purpose, an information processing method according to an embodiment of the present disclosure causes a processor to execute: automatically cropping a region including an object from a material image to generate an automatically cropped image; and performing learning related to detection of the object on the basis of the automatically cropped image. Further, the generating of the automatically cropped image further includes generating the automatically cropped image using an automatic cropper generated by learning based on manually cropped images obtained by manually cropping a region including the object from the material image. 
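The overall data flow of the information processing method stated above can be pictured as a three-stage pipeline, sketched below. The three helper callables are assumptions that stand in for the learning and cropping steps detailed in the embodiments that follow.

```python
# High-level sketch: (1) learn an automatic cropper from manually cropped
# images, (2) use it to generate automatically cropped images from material
# images, (3) learn a detector from those images. The helpers are assumptions.

def information_processing_method(material_images, manually_cropped_images,
                                  train_cropper, auto_crop, train_detector):
    cropper = train_cropper(manually_cropped_images)     # parameters of the cropper
    auto_cropped = []
    for image in material_images:
        auto_cropped.extend(auto_crop(image, cropper))   # automatically cropped images
    return train_detector(auto_cropped)                  # parameters of the detector
```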
FIG.1is a diagram illustrating an overview of an information processing system according to the first embodiment of the present disclosure. As illustrated inFIG.1, the information processing system according to the present embodiment may include a manual cropping device10, an automatic cropping device20, and a detection device30. Further,FIG.1illustrates an example in which the detection device30detects a pedestrian as an object. (Manual Cropping Device10) The manual cropping device10according to the present embodiment is an information processing device that performs machine learning to realize the automatic cropping function provided in the automatic cropping device20. The manual cropping device10according to the present embodiment may be a PC (Personal Computer) or the like, for example. The manual cropping unit120of the manual cropping device10according to the present embodiment generates a manually cropped image MTI obtained by cropping a region including a pedestrian from a material image MI including a pedestrian which is an object on the basis of an operation of an operator. That is, the operator can operate the manual cropping device10to trim the region where the pedestrian is captured from the material image MI and generate a cropped image related to the pedestrian. For example, approximately 16,000 manually cropped images MTI may be generated. The learning unit140of the manual cropping device10according to the present embodiment performs machine learning related to the features of the pedestrian on the basis of the manually cropped image MTI cropped manually as described above. The learning unit140according to the present embodiment may perform high-performance machine learning using a DNN (Deep Neural Network). According to the learning unit140according to the present embodiment, it is possible to learn the features of a pedestrian with high accuracy on the basis of an operator, that is, a subject that a person actually recognizes as a pedestrian. A learned parameter P1obtained as the result of learning by the learning unit140is applied to an automatic cropper provided in the automatic cropping unit220of the automatic cropping device20. According to the parameter application, the automatic cropping unit220of the automatic cropping device20can crop an object with high accuracy on the basis of the result of high-performance machine learning. The parameter according to the present embodiment broadly includes various parameters generally used in machine learning, such as weights and biases. (Automatic Cropping Device20) The automatic cropping device20according to the present embodiment is an information processing device having a function of automatically cropping a region including a pedestrian from the material image MI using a DNN to which the learned parameter P1obtained as the result of machine learning by the manual cropping device10is applied. One of the features of the automatic cropping unit220of the automatic cropping device20in the present embodiment is that the automatically cropped image ATI is generated using the DNN to which the learned parameter P1is applied as the automatic cropper. According to the automatic cropping unit220according to the present embodiment, a large number of highly accurate automatically cropped images ATI can be generated at high speed using the learned parameter P1learned on the basis of the manually cropped images MTI generated manually. 
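The embodiment only specifies that a DNN carrying the learned parameter P1 serves as the automatic cropper; one simple way such a cropper could operate is to scan the material image MI with a sliding window and keep high-scoring windows as automatically cropped images ATI, as in the following sketch. The sliding-window strategy, window size, stride, threshold, and the `pedestrian_score` callable wrapping the DNN are all assumptions made for illustration.

```python
# Sketch of one possible automatic cropper for the automatic cropping unit 220.
# material_image: (H, W, 3) array; pedestrian_score: assumed callable returning
# the DNN's confidence (0..1) that a patch contains a pedestrian.

def auto_crop(material_image, pedestrian_score,
              window=(128, 64), stride=32, threshold=0.9):
    h, w = material_image.shape[:2]
    win_h, win_w = window
    crops = []
    for y in range(0, h - win_h + 1, stride):
        for x in range(0, w - win_w + 1, stride):
            patch = material_image[y:y + win_h, x:x + win_w]
            if pedestrian_score(patch) >= threshold:
                crops.append(patch.copy())   # one automatically cropped image ATI
    return crops
```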
The automatic cropping unit220may generate approximately 1 million automatically cropped images ATI, for example. The learning unit240of the automatic cropping device20according to the present embodiment performs medium-performance machine learning related to pedestrian detection using the automatically cropped image ATI automatically generated by the automatic cropping unit220as an input. A learned parameter P2obtained as the result of learning by the learning unit240is applied to a detector provided in the detection unit320of the detection device30. According to the automatic cropping device20according to the present embodiment, it is possible to generate a detector with high generalization performance at low costs by performing machine learning using a large number of automatically cropped images ATI with high cropping accuracy. The automatic cropping device20according to the present embodiment may be a PC, for example. (Detection Device30) The detection device30according to the present embodiment is an information processing device that detects a pedestrian using a medium-performance detector to which the parameter P2obtained as the result of machine learning by the automatic cropping device20is applied. The detection device30according to the present embodiment may be a camera module mounted in a vehicle, for example. The detector according to the present embodiment detects an object using HOG (Histograms of Oriented Gradients) features and an SVM (Support Vector Machine). As described above, in the present embodiment, it is possible to realize unbiased learning by automatically generating learning data of medium-performance machine learning using SVM by high-performance machine learning using DNN. The detection unit320of the detection device30according to the present embodiment can detect a pedestrian in real time from an input image II captured by a photographing unit310described later using the above-mentioned detector. Further, the detection unit320according to the present embodiment may output the detection result as an output image OI. The output image OM may be displayed on a display device mounted in a vehicle, for example. The overview of the information processing system according to the present embodiment has been described above. As described above, according to the information processing method according to the present embodiment, a large number of highly accurate cropped images can be automatically generated at low costs, and as a result, a detector having high generalization performance can be generated. In the present disclosure, a case where the detection device30is a camera module mounted in a vehicle and the object to be detected is a pedestrian will be described as a main example, but the object according to an embodiment of the present disclosure and the detection device30are not limited to the above examples. The object according to an embodiment of the present disclosure may broadly include obstacles during movement of various moving objects including vehicles. Examples of obstacles include other moving objects, animals, and installations on a movement route as well as persons including pedestrians. Further, as an example of the above-mentioned moving object, a ship, an aircraft including a drone, various autonomous mobile polymorphic robots, and the like can be mentioned, for example. Furthermore, the detection device30according to an embodiment of the present disclosure may be a surveillance camera or the like, for example. 
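As a hedged illustration of the kind of HOG-and-SVM detector used by the detection unit 320, the following sketch uses OpenCV's HOGDescriptor. OpenCV's built-in default people-detector coefficients are used here purely as a stand-in for the learned parameter P2 produced by the automatic cropping device 20; in the actual system, P2 would be loaded in place of that argument.

```python
import cv2

# Sketch of HOG + linear SVM pedestrian detection on an input image II,
# producing rectangles that can be drawn into an output image OI.

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())  # stand-in for P2

def detect_pedestrians(frame):
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    out = frame.copy()
    for (x, y, w, h) in rects:                 # each rect is (x, y, w, h)
        cv2.rectangle(out, (x, y), (x + w, y + h), (0, 0, 255), 2)
    return rects, out
```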
The technical idea of the present disclosure can be widely applied to generation of various detectors for detecting an object. <<1.2. Functional Configuration Example of Manual Cropping Device10>> Next, a functional configuration example of the manual cropping device10according to the present embodiment will be described in detail.FIG.2is a block diagram illustrating a functional configuration example of the manual cropping device10according to the present embodiment. As illustrated inFIG.2, the manual cropping device10according to the present embodiment includes an image supply unit110, a manual cropping unit120, an image storage unit130, and a learning unit140. (Image Supply Unit110) The image supply unit110according to the present embodiment has a function of supplying a material image with an object as a subject to the manual cropping unit120. The image supply unit110may supply a material image registered manually in advance by the manual cropping unit120, for example. Further, the image supply unit110can also supply a material image automatically supplied from the Internet to the manual cropping unit120, for example. (Manual Cropping Unit120) The manual cropping unit120according to the present embodiment has a function of cropping a region including an object from a material image and generating a manually cropped image on the basis of an operation of an operator. For this purpose, the manual cropping unit120according to the present embodiment may provide the operator with an image editing interface with which a trimming operation, for example, can be performed. (Image Storage Unit130) The image storage unit130according to the present embodiment stores the manually cropped image generated by the manual cropping unit120. (Learning Unit140) The learning unit140according to the present embodiment performs machine learning related to the features of the object using the manually cropped image generated by the manual cropping unit120and stored by the image storage unit130as an input. As described above, the learning unit140according to the present embodiment may perform machine learning using DNN or the like. The learned parameters obtained as the result of learning by the learning unit140are applied to the automatic cropper provided in the automatic cropping device20. The functional configuration example of the manual cropping device10according to the present embodiment has been described above. The configuration described with reference toFIG.2is merely an example, and the functional configuration of the manual cropping device10according to the present embodiment is not limited to such an example. The functional configuration of the manual cropping device10according to the present embodiment may be flexibly modified according to specifications and operations. <<1.3. Functional Configuration Example of Automatic Cropping Device20>> Next, a functional configuration example of the automatic cropping device20according to the present embodiment will be described in detail.FIG.3is a block diagram illustrating a functional configuration example of the automatic cropping device20according to the present embodiment. As illustrated inFIG.3, the automatic cropping device20according to the present embodiment includes an image supply unit210, an automatic cropping unit220, an image storage unit230, and a learning unit240. 
(Image Supply Unit210) The image supply unit210according to the present embodiment has a function of supplying a material image with an object as a subject to the automatic cropping unit220. The image supply unit210may supply the material image registered in advance manually or the material image automatically supplied from the Internet to the automatic cropping unit220. (Automatic Cropping Unit220) The automatic cropping unit220according to the present embodiment has a function of automatically cropping a region including an object from the material image and generating an automatically cropped image. As described above, one of the features of the automatic cropping unit220according to the present embodiment is that an automatically cropped image is generated using an automatic cropper to which the learned parameters obtained as the result of machine learning by the manual cropping device10are applied. The details of the function of the automatic cropping unit220according to the present embodiment will be described later. (Image Storage Unit230) The image storage unit230according to the present embodiment stores the automatically cropped image generated by the automatic cropping unit220. (Learning Unit240) The learning unit240according to the present embodiment performs machine learning related to the detection of an object using the automatically cropped image generated by the automatic cropping unit220and stored by the image storage unit230as an input. The learning unit240according to the present embodiment may perform machine learning using HOG features and SVM or the like. The learned parameters obtained as the result of learning by the learning unit240are applied to the detector provided in the detection device30. The functional configuration example of the automatic cropping device20according to the present embodiment has been described above. The above configuration described with reference toFIG.3is merely an example, and the functional configuration of the automatic cropping device20according to the present embodiment is not limited to such an example. The functional configuration of the automatic cropping device20according to the present embodiment may be flexibly modified according to specifications and operations. <<1.4. Functional Configuration Example of Detection Device30>> Next, a functional configuration example of the detection device30according to the present embodiment will be described in detail.FIG.4is a block diagram illustrating a functional configuration example of the detection device30according to the present embodiment. As illustrated inFIG.4, the detection device30according to the present embodiment includes a photographing unit310, a detection unit320, an operation control unit330, and an operating unit340. (Photographing Unit310) The photographing unit310according to the present embodiment has a function of photographing an image (RGB image) around the vehicle. The image includes a moving image and a still image. (Detection Unit320) The detection unit320according to the present embodiment has a function of detecting an object in real time from the image captured by the photographing unit310using a detector to which the learned parameters obtained as the result of learning by the automatic cropping device20are applied. The detection unit320according to the present embodiment outputs the detection result of the object to the operation control unit330. The detection unit320according to the present embodiment is realized by a microcontroller, for example. 
(Operation Control Unit330) The operation control unit330according to the present embodiment has a function of controlling the operation of the operating unit340on the basis of the detection result of the object by the detection unit320. For example, the operation control unit330according to the present embodiment may cause the operating unit340to output an alert or cause the operating unit340to operate the brake on the basis of the detection unit320detecting an object in front of the vehicle. Further, when the detection unit320outputs the detection result of the object as an output image OI as illustrated inFIG.1, the operation control unit330may display the output image OI on a display device included in the operating unit340. (Operating Unit340) The operating unit340according to the present embodiment executes various operations on the basis of the control of the operation control unit330. The operating unit340according to the present embodiment may include an accelerator, a brake, a steering wheel, a display device, a speaker, and the like, for example. The functional configuration example of the detection device30according to the present embodiment has been described above. The above configuration described with reference toFIG.4is merely an example, and the functional configuration of the detection device30according to the present embodiment is not limited to such an example. The functional configuration of the detection device30according to the present embodiment may be flexibly modified according to specifications and operations. <<1.5. Process Flow>> Next, the flow of processing of the manual cropping device10, the automatic cropping device20, and the detection device30according to the present embodiment will be described in detail. First, the flow of processing of the manual cropping device10according to the present embodiment will be described.FIG.5is a flowchart illustrating the flow of processing of the manual cropping device10according to the present embodiment. Referring toFIG.5, first, the image supply unit110supplies the material image to the manual cropping unit120(S1101). Subsequently, the manual cropping unit120crops a region including an object from the material image supplied in step S1101on the basis of the user operation, and generates a manually cropped image (S1102). Subsequently, the image storage unit130stores the manually cropped image generated in step S1102(S1103). Subsequently, the learning unit140executes learning related to the features of the object using the manually cropped image stored in step S1103as an input and generates parameters (S1104). Subsequently, the learning unit140applies the learned parameters obtained in step S1104to the automatic cropper provided in the automatic cropping device20(S1105). The flow of processing of the manual cropping device10according to the present embodiment has been described in detail above. Next, the flow of processing of the automatic cropping device20according to the present embodiment will be described in detail.FIG.6is a flowchart illustrating the flow of processing of the automatic cropping device20according to the present embodiment. Referring toFIG.6, first, the image supply unit210supplies the material image to the automatic cropping unit220(S1201). 
Subsequently, the automatic cropping unit220automatically crops the region including the object from the material image supplied in step S1201using the automatic cropper to which the learned parameters are applied in step S1105ofFIG.5, and generates an automatically cropped image (S1202). Subsequently, the image storage unit230stores the automatically cropped image generated in step S1202(S1203). Subsequently, the learning unit240executes learning related to the detection of the object using the automatically cropped image stored in step S1203as an input and generates parameters (S1204). Subsequently, the learning unit240applies the learned parameters obtained in step S1204to the detector provided in the detection device30(S1205). The flow of processing of the automatic cropping device20according to the present embodiment has been described above. Next, the generation of the automatically cropped image by the automatic cropping unit220according to the present embodiment will be described in more detail. FIG.7is a diagram illustrating the flow of generating an automatically cropped image according to the present embodiment. The generation of the automatically cropped image by the automatic cropping unit220according to the present embodiment is realized roughly by two processes. Specifically, first, the automatic cropping unit220according to the present embodiment identifies an interim region Ri which is an approximate region including an object from the material image MI supplied by the image supply unit210by a high-speed object detection method using deep learning. Examples of the above-mentioned high-speed object detection method include SSD (Single Shot multibox Detector). Using a high-speed object detection method such as SSD, the automatic cropping unit220according to the present embodiment can identify an approximate interim region Ri including an object in the material image MI at a relatively high speed (for example, in 100 msec if the automatic cropping device20is equipped with the latest GPU (Graphics Processing Unit) at the time of filing this application). The automatic cropping unit220may identify a region slightly wider than the region derived by SSD or the like as the interim region Ri. In identification of the interim region Ri using the SSD as described above, the parameters learned from the contest image or the like, which can be acquired from the Internet may be used, for example. Using the learned parameters as described above, the time required for learning and the cost of generating the learning image can be significantly reduced. The high-speed object detection method using deep learning according to the present embodiment is not limited to SSD. The automatic cropping unit220according to the present embodiment may identify the interim region Ri using Faster RCNN (Regions with Convolutional Neural Networks) or YOLO (You Only Look Once), for example. Subsequently, the automatic cropping unit220according to the present embodiment automatically crops a detailed region Rd including the object from the interim region Ri identified by the SSD using the above-mentioned automatic cropper, and generates an automatically cropped image ATI. Here, the detailed region Rd according to the present embodiment may be a rectangular region that is smaller than the interim region Ri and excludes a region that does not include the object as much as possible. 
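The two-step generation just described can be sketched as follows, assuming an off-the-shelf SSD from torchvision as the high-speed object detector and a placeholder refine_to_detailed_region() call standing in for the learned automatic cropper; the widening margin and the score threshold are illustrative assumptions.

```python
import torch
from torchvision.models.detection import ssd300_vgg16

ssd = ssd300_vgg16(weights="DEFAULT").eval()   # pretrained=True on older torchvision

def identify_interim_regions(image_tensor, margin=0.1, score_thresh=0.5):
    """image_tensor: float tensor (3, H, W) scaled to [0, 1]."""
    with torch.no_grad():
        pred = ssd([image_tensor])[0]          # dict with "boxes", "scores", "labels"
    _, H, W = image_tensor.shape
    regions = []
    for box, score in zip(pred["boxes"], pred["scores"]):
        if score < score_thresh:
            continue
        x1, y1, x2, y2 = box.tolist()
        dx, dy = margin * (x2 - x1), margin * (y2 - y1)   # widen slightly beyond the SSD box
        regions.append((max(0.0, x1 - dx), max(0.0, y1 - dy),
                        min(float(W), x2 + dx), min(float(H), y2 + dy)))
    return regions

def generate_ati(image_tensor, refine_to_detailed_region):
    """refine_to_detailed_region() stands in for the learned automatic cropper."""
    crops = []
    for x1, y1, x2, y2 in identify_interim_regions(image_tensor):
        interim = image_tensor[:, int(y1):int(y2), int(x1):int(x2)]   # interim region Ri
        crops.append(refine_to_detailed_region(interim))              # detailed region Rd
    return crops
```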
For example, when the object is a pedestrian, the upper end of the detailed region Rd may substantially coincide with the upper end of the pedestrian's head, and the lower end of the detailed region Rd may substantially coincide with the lower end of the pedestrian's foot. As described above, one of the features of the automatic cropping unit220according to the present embodiment is that the process of identifying the interim region Ri and the process of automatically cropping the detailed region Rd are executed using two different neural networks. According to the above-mentioned features of the automatic cropping unit220according to the present embodiment, since the interim region Ri which is an approximate region including the object is identified as a preliminary step of the automatic cropping, the detailed region Rd can be cropped without scanning the material image MI completely and the time required for generating the automatically cropped image ATI can be significantly reduced. The flow of generating the automatically cropped image ATI according to the present embodiment has been described in detail above. As illustrated in the lower part ofFIG.7, the automatically cropped image ATI generated by the automatic cropping unit220as described above may be stored in the image storage unit230after the operator confirms whether a correct region is cropped. A case where the automatic cropping unit220according to the present embodiment identifies the interim region from the material image using a high-speed object detection method such as SSD has been described above, but when the material images are continuous frames of a moving image, the automatic cropping unit220may identify an approximate region in which the object is photographed by acquiring information such as a motion vector from the continuous frames. Next, the network structure of the automatic cropper according to the present embodiment will be described in detail.FIG.8is a diagram illustrating an example of a network structure of the automatic cropper according to the present embodiment. The network structure illustrated inFIG.8is merely an example, and the network structure of the automatic cropper according to the present embodiment may be flexibly modified. InFIG.8, "Conv" indicates a "Convolution" layer, "Lrn" indicates "Local response normalization", "Pool" indicates "Pooling", and "FC" indicates "Fully Connected" layer. Further, "S" in each layer indicates a stride, and "D" indicates the number of filters. For example, ReLU may be adopted as an activation function, and Max Pooling may be adopted as Pooling. Further, an output layer may be an identity function, and the sum of squares error may be used for a loss function. An example of the network configuration of the automatic cropper according to the present embodiment has been described above. In the present embodiment, in the above-mentioned network, learning is performed in such a way that a cropped image in which the upper end of the head and the lower end of the foot coincide with the upper and lower ends of the image, respectively, is regarded as a positive image and an image in which the upper end of the head and the lower end of the foot are misaligned with respect to the upper and lower ends of the image is regarded as a negative image. 
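A hedged sketch of a network in the spirit ofFIG.8is given below; the layer widths, strides, kernel sizes, and the adaptive pooling stage are assumptions for illustration and are not the figure's actual values, but the building blocks (Convolution, Local response normalization, Max Pooling, Fully Connected layers, an identity output, and a sum-of-squares loss) follow the description above.

```python
import torch.nn as nn

class AutomaticCropperNet(nn.Module):
    """Illustrative network in the spirit of FIG. 8 (not its actual layout)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),   # Conv (S=2, D=32)
            nn.LocalResponseNorm(size=5),                           # Lrn
            nn.MaxPool2d(kernel_size=3, stride=2),                  # Pool (Max Pooling)
            nn.Conv2d(32, 64, kernel_size=3, stride=1), nn.ReLU(),  # Conv (S=1, D=64)
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.AdaptiveAvgPool2d((4, 4)),                           # fixes the FC input size
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 256), nn.ReLU(),                  # FC
            nn.Linear(256, 1),                                      # identity output layer
        )

    def forward(self, x):
        return self.fc(self.features(x))

criterion = nn.MSELoss(reduction="sum")   # sum of squares error as the loss function
```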
At this time, for example, 256.0 may be given to the positive image and 0.0 may be given to the negative image as teaching data, and it may be determined whether the image is accurately cropped with the output value of 128.0 as a threshold value. The accuracy of automatic cropping can be secured by giving approximately 16,000 positive images that have been normalized in the image and approximately 204,000 negative images that have been normalized in the image as learning data. Further, in order to generate an automatic cropper that is tolerant to changes in color, brightness, and contrast, an image subjected to data expansion that randomly changes color, brightness, and contrast may be used for learning. Further, an image subjected to data expansion that mirrors the image in the left-right direction may be used. Since the data expansion as described above does not affect the positions of the head and the foot, it is possible to secure a number of pieces of learning data without deteriorating the accuracy of automatic cropping. As an example of other data expansion, an increase in variations using segmentation by FCN (Fully Convolutional Networks) may be used, for example. According to FCN, for example, a clothing region of a pedestrian in an image may be identified to generate an image in which the clothing color is changed, or a skin region may be identified to change the skin color to generate an image which expresses racial differences. Further, according to FCN, it is possible to change the building or road on the background in the image to a different building or the like. Furthermore, it is possible to diversify the variation of the material image by identifying the position of the hand of a person in the image using the object detection method and overwriting the bag or the like thereon. The flow of processing of the automatic cropping device20according to the present embodiment has been described in detail above. Next, the flow of processing of the detection device30according to the present embodiment will be described.FIG.9is a flowchart illustrating the flow of processing of the detection device30according to the present embodiment. Referring toFIG.9, first, the photographing unit310captures an RGB image around the vehicle (S1301). Subsequently, the detection unit320detects an object from the RGB image captured in step S1301using a detector to which the learned parameters obtained as the result of learning by the automatic cropping device20are applied (S1302). Subsequently, the operation control unit330causes the operating unit340to execute various operations on the basis of the detection result in step S1302(S1303). For example, the operation control unit330may cause the operating unit340to display an image indicating the region of the detected object or cause the operating unit340to operate the brake or the like on the basis of the fact that the object has been detected. 2. SECOND EMBODIMENT Next, a second embodiment of the present disclosure will be described. In the first embodiment described above, a case where only the RGB image is supplied to the automatic cropping unit220of the automatic cropping device20has been described. On the other hand, in the second embodiment of the present disclosure, in addition to the RGB image, a distance image photographed at the same time may be supplied to the automatic cropping unit220together. Examples of the distance image include a ToF (Time of Flight) image. 
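The teaching values and the appearance-only data expansion described above can be sketched as follows; the jitter strengths are illustrative assumptions, and the mirroring is safe precisely because it does not move the head or foot positions in the crop.

```python
import torchvision.transforms as T

POSITIVE_TARGET, NEGATIVE_TARGET, THRESHOLD = 256.0, 0.0, 128.0

# Appearance-only expansion: color/brightness/contrast jitter and left-right mirroring.
augment = T.Compose([
    T.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3, hue=0.05),  # assumed strengths
    T.RandomHorizontalFlip(p=0.5),
    T.ToTensor(),
])

def teaching_value(is_positive: bool) -> float:
    return POSITIVE_TARGET if is_positive else NEGATIVE_TARGET

def is_accurately_cropped(network_output: float) -> bool:
    return network_output >= THRESHOLD     # 128.0 used as the decision threshold
```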
The RGB image and the distance image may be simultaneously captured by a photographing device such as an RGB-D camera, for example, or may be captured by two different photographing devices installed in parallel. At this time, the automatic cropping unit220according to the present embodiment may make a determination regarding the adoption of the automatically cropped image on the basis of the distance image photographed at the same time as the RGB image. That is, the learning unit240according to the present embodiment can perform learning on the basis of the automatically cropped image adopted on the basis of the distance between the object and the photographing device at the time of photographing the material image. Hereinafter, adoption of an automatically cropped image using the distance image according to the present embodiment will be described in detail. In the following, the differences from the first embodiment will be mainly described, and redundant description will be omitted for common functions and effects. FIG.10is a diagram illustrating the flow of processing of the automatic cropping device20according to the second embodiment of the present disclosure. Referring toFIG.10, first, the image supply unit210supplies the RGB image including the object and the distance image photographed at the same time as the RGB image to the automatic cropping unit220(S2101). Subsequently, the automatic cropping unit220identifies an interim region in the RGB image using a high-speed object detection method such as SSD and determines whether the distance between the object and the photographing device is within a predetermined range on the basis of the distance image (S2102). Here, when the distance between the object and the photographing device is not within a predetermined range (S2102: No), the automatic cropping unit220may end the process without performing automatic cropping on the supplied RGB image. According to this, as will be described later, it is possible to improve the efficiency of processing without unnecessarily generating an automatically cropped image that is not suitable as learning data. On the other hand, when the distance between the object and the photographing device is within the predetermined range (S2102: Yes), the automatic cropping unit220generates an automatically cropped image and the image storage unit230stores the automatically cropped image (S2103). Subsequently, the learning unit240executes learning using the automatically cropped image stored in step S2103as an input and generates parameters (S2104). Subsequently, the learning unit240applies the learned parameters obtained in step S2104to the detector provided in the detection device30(S2105). An example of the flow of processing of the automatic cropping device20according to the present embodiment has been illustrated. As described above, in the present embodiment, only the automatically cropped image in which the distance to the object at the time of photographing is within a predetermined range is input to the learning unit240. The above-mentioned predetermined range is set to a value such that the distance between the object and the photographing device is neither too close nor too far. In an image where the distance between the object and the photographing device is too close, it is expected that the object will appear distorted in many cases, and it is expected that the use of the distorted image for learning may reduce the performance of the detector. 
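A minimal sketch of the distance check of step S2102is given below, assuming the distance image is a per-pixel depth map aligned with the RGB image; the range limits and the use of the median over the interim region are illustrative assumptions.

```python
import numpy as np

MIN_DISTANCE_M, MAX_DISTANCE_M = 2.0, 30.0     # assumed "not too close / not too far" range

def within_distance_range(distance_image, interim_region):
    """distance_image: (H, W) depth in meters; interim_region: (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = [int(v) for v in interim_region]
    region_depth = distance_image[y1:y2, x1:x2]
    valid = region_depth[region_depth > 0]     # ignore invalid / missing depth pixels
    if valid.size == 0:
        return False
    d = float(np.median(valid))
    return MIN_DISTANCE_M <= d <= MAX_DISTANCE_M

# S2102: if within_distance_range(...) is False, end the process without cropping;
# otherwise generate and store the automatically cropped image (S2103).
```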
Therefore, it is possible to secure the performance of the generated detector by excluding an image in which the distance between the object and the photographing device is too close on the basis of the distance image. In an image in which the distance between the object and the photographing device is too far, a large amount of noise will be added to the region including the object. A noisy image can be a factor that degrades the performance of the detector. Therefore, it is possible to secure the performance of the generated detector by excluding the image in which the distance between the object and the photographing device is too far on the basis of the distance image. The second embodiment of the present disclosure has been described above. According to the automatic cropping device20according to the present embodiment, it is possible to generate a detector having higher generalization performance. In the above description, a case where the automatic cropping unit220according to the present embodiment automatically crops a region from the material image only when the distance between the object and the photographing device is within a predetermined range has been described as an example, but the determination timing of adoption is not limited to this example. For example, the automatic cropping unit220according to the present embodiment may determine the adoption of the automatically cropped image on the basis of the distance image after generating the automatically cropped image. For example, first, the automatic cropping unit220may generate an automatically cropped image regardless of the distance and store the automatically cropped image in the image storage unit230only when the distance between the object and the photographing device is within a predetermined range. Further, in the stage before supplying the material image to the automatic cropping unit220, the operator can roughly visually select the image, for example. 3. THIRD EMBODIMENT Next, a third embodiment of the present disclosure will be described. In the first and second embodiments described above, a case where the detection of the object using the detector and the learning for generating the detector are performed by separate devices (that is, the automatic cropping device20and the detection device30) has been described. On the other hand, in the third embodiment of the present disclosure, a configuration in which both functions are realized by a single device will be described. That is, the detection device30according to the present embodiment may be able to perform self-learning on the basis of the captured image and automatically update the parameters of the detector. Hereinafter, the function of the detection device30according to the third embodiment of the present disclosure will be described in detail. In the following, the differences from the first and second embodiments will be mainly described, and redundant description will be omitted for the functions and effects common to the first and second embodiments. FIG.11is a block diagram illustrating a functional configuration of a detection device40according to the third embodiment of the present disclosure. As illustrated inFIG.11, the detection device40according to the present embodiment includes a photographing unit410, an image supply unit420, an automatic cropping unit430, an image storage unit440, a learning unit450, a detection unit460, an operation control unit470, and an operating unit480. 
That is, it can be said that the detection device40according to the present embodiment further has an automatic cropping function and a learning function in addition to the configuration of the detection device30according to the first and second embodiments. According to the configuration, using an image photographed as the vehicle travels as a material image, learning related to detection can be continuously performed and parameters can be continuously updated. The learning unit450according to the present embodiment can determine whether the performance of the newly generated parameters exceeds the performance of the current parameters using an evaluation image set. At this time, the learning unit450according to the present embodiment may automatically update the parameters of the detector only when the performance of the newly generated parameters exceeds the performance of the current parameters. According to the function of the learning unit450according to the present embodiment, it is possible to improve the generalization performance of the detector as the vehicle travels. FIG.12is a flowchart illustrating the flow of a detector parameter update process by the detection device40according to the present embodiment. Referring toFIG.12, first, the photographing unit410inputs the captured image to the image supply unit420(S3101). Subsequently, the image supply unit420supplies the image input in step S3101as a material image to the automatic cropping unit430(S3102). Subsequently, the automatic cropping unit430generates an automatically cropped image including an object from the material image supplied in step S3102(S3103). At this time, as described in the second embodiment, the automatic cropping unit430may make determination related to the adoption of the automatically cropped image on the basis of the distance to the object. Subsequently, the image storage unit440stores the automatically cropped image generated in step S3103(S3104). Subsequently, the learning unit450executes learning related to the detection of the object using the automatically cropped image stored in step S3104as an input and generates parameters (S3105). Subsequently, the learning unit450determines whether the performance of the detector is improved by the parameters newly generated in step S3105on the basis of the evaluation image set (S3106). Here, when it is determined that the performance of the detector is not improved by the newly generated parameters (S3106: No), the learning unit450does not update the parameters of the detector, and the detection device40returns to step S3101. On the other hand, when it is determined that the performance of the detector is improved by the newly generated parameters (S3106: Yes), the learning unit450updates the parameter of the detector, and the detection device40returns to step S3101. The flow of the detector parameter update process according to the present embodiment has been described above. According to the processing of the detection device40according to the present embodiment, it is possible to efficiently improve the object detection performance by continuously performing the learning related to detection and updating the parameters. The detection device40according to the present embodiment may transmit the learned parameters to an external server or the like via a network. 
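The update check of steps S3105and S3106can be sketched as follows, where evaluate() stands in for whatever detection metric is computed on the evaluation image set; the detector interface is assumed for illustration.

```python
def maybe_update_detector(detector, new_params, current_params, evaluation_set, evaluate):
    """Adopt new_params only if they outperform current_params (S3106)."""
    new_score = evaluate(detector, new_params, evaluation_set)
    current_score = evaluate(detector, current_params, evaluation_set)
    if new_score > current_score:          # S3106: Yes -> update the detector parameters
        detector.set_parameters(new_params)
        return True
    return False                           # S3106: No -> keep the current parameters
```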
In this case, the server can identify the parameter that realizes the highest detection performance among the parameters collected from a plurality of detection devices40and distribute the parameter to each detection device40. According to such a configuration, parameters having high performance can be widely shared among the plurality of detection devices40, and the detection device40having high detection performance can be efficiently mass-produced. On the other hand, the sharing of the parameters may be limited to the detection devices40in which the mounted vehicles or the photographing units410have the same specifications and an individual difference is small. When the individual difference between the mounted vehicles or the photographing units410is large, the object detection performance can be effectively improved using only the parameters self-learned by each detection device40. When the detection device40communicates with an external server or the like via a network, the automatically cropped image generation function of the automatic cropping unit430may not necessarily be realized as a function of the detection device40. For example, the detection device40may perform learning using the automatically cropped image by transmitting a material image to the server via the network and receiving the automatically cropped image generated by the server. In this case, the power consumption of the detection device40can be reduced, and the size of the housing can be reduced. 4. HARDWARE CONFIGURATION EXAMPLE Next, a hardware configuration example of the automatic cropping device20according to an embodiment of the present disclosure will be described.FIG.13is a block diagram illustrating a hardware configuration example of the automatic cropping device20according to the embodiment of the present disclosure. Referring toFIG.13, the automatic cropping device20includes a processor871, a ROM872, a RAM873, a host bus874, a bridge875, an external bus876, an interface877, an input device878, an output device879, a storage880, a drive881, a connection port882, and a communication device883, for example. The hardware configuration illustrated here is an example, and some of the components may be omitted. In addition, components other than the components illustrated here may be further included. (Processor871) The processor871functions as an arithmetic processing unit or a control device, for example, and controls all or a part of the operation of each component on the basis of various programs recorded in the ROM872, the RAM873, the storage880, or a removable recording medium901. The processor871includes a GPU and a CPU, for example. (ROM872, RAM873) The ROM872is a means for storing a program read into the processor871and data used for calculation. The RAM873temporarily or permanently stores a program read into the processor871and various parameters that change as appropriate when the program is executed, for example. (Host Bus874, Bridge875, External Bus876, Interface877) The processors871, the ROM872, and the RAM873are connected to each other via a host bus874capable of transmitting data at a high speed, for example. On the other hand, the host bus874is connected to the external bus876, which has a relatively low data transmission speed, via the bridge875, for example. In addition, the external bus876is connected to various components via the interface877. (Input Device878) For the input device878, for example, a mouse, a keyboard, a touch panel, buttons, switches, levers, and the like are used. 
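The server-side sharing described here can be sketched as follows; scoring on a common evaluation set and the specification match test are illustrative assumptions about how "highest detection performance" and "same specifications" are decided.

```python
def select_and_distribute(uploaded, devices, evaluation_set, evaluate):
    """uploaded: list of {"params": ..., "spec": ...} collected from detection devices 40."""
    best = max(uploaded, key=lambda u: evaluate(u["params"], evaluation_set))
    for device in devices:
        if device.spec == best["spec"]:    # share only among same-specification devices
            device.receive_parameters(best["params"])
    return best
```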
Further, as the input device878, a remote controller (hereinafter, remote controller) capable of transmitting a control signal using infrared rays or other radio waves may be used. Further, the input device878includes a voice input device such as a microphone. (Output Device879) The output device879is a device that can visually or audibly provide the user with acquired information, such as a display device such as a CRT (Cathode Ray Tube), LCD, or organic EL, an audio output device such as a speaker or headphones, a printer, a mobile phone, or a facsimile. Further, the output device879according to the present disclosure includes various vibration devices capable of outputting a tactile stimulus. (Storage880) The storage880is a device for storing various types of data. As the storage880, for example, a magnetic storage device such as a hard disk drive (HDD), a semiconductor storage device, an optical storage device, an optical magnetic storage device, or the like is used. (Drive881) The drive881is a device that reads information recorded on a removable recording medium901such as, for example, a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, or writes information on the removable recording medium901. (Removable Recording Medium901) The removable recording medium901is a DVD medium, a Blu-ray (registered trademark) medium, an HD DVD medium, various semiconductor storage media, and the like, for example. Naturally, the removable recording medium901may be an IC card equipped with a non-contact type IC chip, an electronic device, or the like, for example. (Connection Port882) The connection port882is a port for connecting an externally connected device902, such as, for example, a USB (Universal Serial Bus) port, an IEEE1394 port, a SCSI (Small Computer System Interface), an RS-232C port, or an optical audio terminal. (Externally Connected Device902) The externally connected device902is a printer, a portable music player, a digital camera, a digital video camera, an IC recorder, or the like, for example. (Communication Device883) The communication device883is a communication device for connecting to a network, and, for example, is a wired or wireless LAN, a Bluetooth (registered trademark), or a communication card for WUSB (Wireless USB), a router for optical communication, and a router for ADSL (Asymmetric Digital Subscriber Line), a modem for various communications, or the like. 5. CONCLUSION As described above, according to the automatic cropping device20according to an embodiment of the present disclosure, an information processing method is realized, the method causing a processor to execute: automatically cropping a region including an object from a material image to generate an automatically cropped image; and performing learning related to detection of the object on the basis of the automatically cropped image, wherein the generating of the automatically cropped image further includes generating the automatically cropped image using an automatic cropper generated by learning based on manually cropped images obtained by manually cropping a region including the object from the material image. According to the information processing method according to an embodiment of the present disclosure, it is possible to generate a large number of highly accurate learning images without bias at reduced costs. 
While the preferred embodiments of the present disclosure have been described above with reference to the accompanying drawings, the technical scope of the present disclosure is not limited to the above examples. A person skilled in the art may find various alterations and modifications within the scope of the technical idea described in the claims, and it should be understood that they will naturally come under the technical scope of the present disclosure. Further, the effects described in this specification are merely illustrative or mentioned effects, and are not limitative. That is, with or in the place of the above effects, the technology according to the present disclosure may achieve other effects that are clear to those skilled in the art on the basis of the description of this specification. In addition, a program for causing the hardware such as a processor, a ROM, and a RAM built in the computer to perform the same functions as those of the automatic cropping device20, the detection device30, and the detection device40may be created. Moreover, a computer-readable non-transient recording medium on which the program is recorded may also be provided. For example, the steps related to the processing of each device in the present specification may not necessarily be executed chronologically in the order described in the flowcharts. For example, the steps related to the processing of the automatic cropping device20, the detection device30, and the detection device40may be processed in an order different from the order described in the flowcharts, or may also be processed in parallel. Note that, the following configurations also fall within the technical scope of the present disclosure. (1) An information processing method for causing a processor to execute:automatically cropping a region including an object from a material image to generate an automatically cropped image; andperforming learning related to detection of the object on the basis of the automatically cropped image, whereinthe generating of the automatically cropped image further includes generating the automatically cropped image using an automatic cropper generated by learning based on manually cropped images obtained by manually cropping a region including the object from the material image. (2) The information processing method according to (1), whereinthe automatic cropper is generated on the basis of learned parameters obtained as a result of learning by a neural network to which the manually cropped images are input. (3) The information processing method according to (2), whereinthe generating of the automatically cropped image further includes: identifying an interim region including the object in the material image; and automatically cropping a detailed region including the object from the interim region. (4) The information processing method according to (3), whereinthe identifying of the interim region and the automatically cropping of the detailed region are executed using different neural networks. (5) The information processing method according to (3) or (4), whereinthe identifying of the interim region involves identifying the interim region by a high-speed object detection method using deep learning. (6) The information processing method according to (4) or (5), whereinthe automatically cropping of the detailed region involves executing automatic cropping of the detailed region using the automatic cropper to generate the automatically cropped image. 
(7) The information processing method according to any one of (1) to (6), whereinthe learning involves performing learning on the basis of the automatically cropped image adopted on the basis of a distance between the object and a photographing device during photographing of the material image. (8) The information processing method according to (7), whereinthe generating of the automatically cropped image involves adopting the automatically cropped image related to the material image as learning data only when the distance between the object and the photographing device is within a predetermined range. (9) The information processing method according to (7), whereinthe generating of the automatically cropped image involves automatically cropping a region including the object from the material image to generate the automatically cropped image only when the distance between the object and the photographing device is within a predetermined range. (10) The information processing method according to any one of (7) to (9), whereinthe material image includes an RGB image, andthe generating of the automatically cropped image involves making determination related to adoption of the automatically cropped image on the basis of a distance image photographed simultaneously with the RGB image. (11) The information processing method according to any one of (1) to (10), whereinthe object includes an obstacle during movement of a moving object. (12) A program for causing a computer to function as an information processing device comprising:an automatic cropping unit that automatically crops a region including an object from a material image to generate an automatically cropped image; and a learning unit that performs learning related to detection of the object on the basis of the automatically cropped image, whereinthe automatic cropping unit generates the automatically cropped image using an automatic cropper generated by learning based on manually cropped images obtained by manually cropping a region including the object from the material image. (13) An information processing system comprising:a photographing unit that photographs a material image including an object; an automatic cropping unit that automatically crops a region including the object from the material image to generate an automatically cropped image;a learning unit that performs learning related to detection of the object on the basis of the automatically cropped image; anda detection unit that detects the object on the basis of the result of learning by the learning unit, whereinthe automatic cropping unit generates the automatically cropped image using an automatic cropper generated by learning based on manually cropped images obtained by manually cropping a region including the object from the material image. REFERENCE SIGNS LIST 10Manual cropping device20Automatic cropping device210Image supply unit220Automatic cropping unit230Image storage unit240Learning unit30Detection device310Photographing unit320Detection unit330Operation control unit340Operating unit | 54,951 |
11861843 | DETAILED DESCRIPTION The following description of embodiments of the invention is not intended to limit the invention to these embodiments, but rather to enable any person skilled in the art to make and use this invention. 1. Overview As shown inFIG.1, variants of the method for property identification analysis can include: determining object information S100, determining a set of object representations S200, determining relationships between representations S300, and generating an analysis based on the relationships S400. The method can function to disambiguate between different structural versions of a building and provide a universal identifier for each building version. The method can further provide a timeseries of changes between different building versions. 2. Examples In an illustrative example, the method can include: determining a timeseries of measurements of a geographic region (e.g., S100); and determining a timeseries of structure versions based on the timeseries of measurements (e.g., S200and S300), wherein different structure versions can be identified by different structure identifiers (e.g., building identifiers). Examples of the illustrative example are shown in at leastFIG.6A,FIG.6B,FIG.12, andFIG.13. The measurements can be aerial images, depth measurements, and/or other measurements. The geographic region can be a property parcel, a prior structure version's geofence or mask, and/or other region. The timeseries of structure versions can be determined by: extracting a structure representation for a physical structure (e.g., a roof vector, a roof segment, a roof geometry, a building vector, a building segment, etc.) from each measurement of the timeseries; and associating structure representations representing the same structure version with a common structure version (e.g., the same structure identifier). Structure representations representing the same structure version can have the same representation values, have the same structural segments (e.g., physical segments, image segments, geometric segments, etc.), share structural segments, and/or otherwise represent the same structure version. Relationships between different structure versions can also be determined, based on the respective structure representations (e.g., using rules, a classifier, etc.). For example, a modified structure version can have an overlapping set of structural segments with a prior structure version; a replaced structure version can have a disjoint set of structural segments with a prior structure version; an added structure version can have a new set of structural segments when a prior structure version had none; and a removed structure version can have no structural segments when a prior structure version had structure segments. Auxiliary information for each structure version—such as measurements, permit data, descriptions, attributes (e.g., determined from the measurements), and/or other information—can also be associated with the respective structure version (e.g., with the structure identifier) using a set of spatiotemporal rules, or be otherwise associated. In variants, the timeseries of building versions and associated relationships can be represented by a graph (e.g., spatiotemporal graph), where the graph nodes can represent building versions and the edges can represent the relationships. In use, information associated with a given structure version can be returned responsive to a request identifying the structure version. 
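The segment-based relationships in the example above can be sketched as a simple set comparison; segment identifiers are assumed to be hashable tokens (for example, matched roof facets), and the rule ordering is an illustrative assumption rather than the method's only possible implementation.

```python
def classify_relationship(prior_segments: set, later_segments: set) -> str:
    """Compare the structural segment sets of a prior and a later structure version."""
    if not prior_segments and not later_segments:
        return "no structure"
    if not prior_segments:
        return "added"                     # new segments where there were none
    if not later_segments:
        return "removed"                   # prior segments no longer present
    if prior_segments == later_segments:
        return "same version"              # identical segment sets
    if prior_segments & later_segments:
        return "modified"                  # overlapping but not identical segment sets
    return "replaced"                      # disjoint segment sets

# Example: a remodel that keeps two roof segments and adds one is "modified".
# classify_relationship({"seg_a", "seg_b"}, {"seg_a", "seg_b", "seg_c"})
```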
Information can include: a set of changes (e.g., relationships) between the identified structure version and another structure version (e.g., a current structure version), information from the structure version's timeframe (e.g., auxiliary information, information extracted from measurements of the geographic region sampled during the timeframe, structure segments, etc.), and/or other information. However, the method can be otherwise performed. 3. Technical Advantages Variants of the technology for property identification analysis can confer several benefits over conventional systems and benefits. First, variants of the technology can associate information for a physical object variant (e.g., object instance) with a singular, universal object identifier. This can enable the system to track and return object information for different variants of the “same” object, and can enable the system to identify when the physical object has changed. For example, the system can automatically distinguish between and treat the original and remodeled version of a property as two different property variants, and associate information for each property version with the respective property version. Second, variants of the technology can enable information from different measurement modalities, different measurement vendors, and/or different measurement times to be merged with an object identifier for a single (physical) object instance. Third, variants of the technology can extract object representations (e.g., structure representations) that are agnostic to vendor differences, common changes (e.g., occlusions; shadows; roof appearance (e.g., additional/removal of solar panels, roof covering replacement; etc.), measurement registration errors or changes (e.g., due to seismic shift), and other inter-measurement differences, which humans can have difficulty distinguishing. For example, the technology can use a neural network model that is trained using self-supervised learning to generate the same representation for the same physical property instance, despite slight changes in the visual appearance of the property represented in the measurements (e.g., the model is agnostic to appearance-based changes, but is sensitive to geometric changes and/or structural changes). Fourth, variants of the technology can use the object representations to determine the type of change. For example, the technology can use a neural network model that is trained to identify roof facets from the measurement. A first and second object structure detected in a first and second measurement, respectively, can be considered the same building when the first and second measurement share roof facets. Fifth, variants of the technology can minimize the number of measurement pairs to analyze, thereby increasing the computational efficiency of the method, by limiting the object representation comparisons to those located within and/or overlapping a common geographic region (e.g., property parcel). For example, objects from different measurements of the same property parcel or geographic region can be assumed to have a high probability of being the same, so representation comparisons can be limited to those of objects extracted from measurements of the same property parcel (or geographic region). However, the technology can confer any other suitable benefits. 4. 
Method The method can include: determining object information S100, determining a set of object representations S200, determining relationships between object representations S300, and optionally generating an analysis based on the relationships S400. However, the method can additionally and/or alternatively include any other suitable elements. One or more variations of the system and/or method can omit one or more of the above elements and/or include a plurality of one or more of the above elements in any suitable order or arrangement. One or more instances of the method can be repeated for different properties, different timeframes, and/or otherwise repeated. The method functions to associate information (e.g., measurements, representations, extracted features, auxiliary information, etc.) for the same physical object version with a common object identifier. The method can optionally function to segregate information for different object versions. In variants, the method can also determine a timeseries of object versions for the objects within a given geographic region. The objects are preferably physical objects, but can be any other object. The objects can be: structures (e.g., built structures, such as buildings, etc.), a portion of a structure (e.g., a building component, a roof, a wall, etc.), vegetation, manmade artifacts (e.g., pavement, driveways, roads, lakes, pools, etc.), and/or be any other suitable physical object. The objects are preferably static (e.g., immobile relative to the ground or other mounting surface), but can alternatively be partially or entirely mobile. Object versions40(e.g., examples shown inFIG.4andFIG.12) are preferably distinguished by different physical geometries (e.g., internal geometries, external geometries, etc.), but can alternatively be distinguished by different appearances and/or otherwise distinguished. For example, different building versions can have different sets of building segments (e.g., different roof segments), different building material (e.g., tile vs shingle roofing), and/or be otherwise distinguished. Conversely, the same building version can have the same set of building segments, building material, and/or other parameters. Each object version40can be identified by a singular, universal object identifier. Alternatively, each object version can be identified by multiple object identifiers. Each object identifier preferably identifies a single object version, but can alternatively identify multiple object versions (e.g., identify an object geographic region, wherein different object versions can be distinguished by different timeframes). The object identifier can be a sequence of characters, a vector, a token, a mask, and/or any other suitable object identifier. The object identifier can include or be derived from: the geographic region (e.g., an address, lot number, parcel number, geolocation, geofence, etc.), a time stamp (e.g., of the first known instance of the object version), a timeframe (e.g., of the object version), the object representation associated with the object version (e.g., a hash of the object representation, the object representation itself, etc.), an index (e.g., version number for the geographic region), a random set of alphanumeric characters (e.g., a set of random words, etc.), an object version within a series of related object versions, and/or any other suitable information. 
For example, the object identifier can include or be determined from a parcel number or address and a timestamp of the first measurement depicting the respective object version. The method can be performed: in response to a request (e.g., an API request) from an endpoint, before receipt of a request, when new measurements are received, at a predetermined frequency, and/or any other suitable time. The method can be performed once, iteratively, responsive to occurrence of a predetermined event, and/or at any other time. In a first variant, an instance of the method is performed using all available object information (e.g., measurements) for a given geographic region10, wherein object representations are extracted from each piece of available object information and compared against each other to distinguish object versions. In a second variant, an instance of the method is performed using a specific set of object information (e.g., new object information, object information from a certain timeframe, etc.), wherein the object representation is extracted from the set of object information and compared against other object representations for the geographic region10that were determined using prior instances of the method to determine whether the object representation represents a known object version or a new object version. However, the method can be performed at any time, using any other suitable set of object information. The method can be performed for all objects within an object set (e.g., all properties appearing in a map, within a geographic region, within a large-scale measurement, etc.), a single object (e.g., a requested building, a requested parcel), a geographic region, a timeframe, and/or any other suitable set of objects. The method can be performed for one or more geographic regions10(e.g., examples shown inFIG.3,FIG.6A, andFIG.6B), wherein object representations for objects within and/or encompassing the geographic region10are extracted and analyzed from object information depicting or associated with the geographic region to determine whether the object has changed over time. The geographic region10can be: a real estate parcel, the geographic footprint of an object version detected in a different piece of object information (e.g., measurement), a region surrounding the geographic footprint, a municipal region (e.g., neighborhood, zip code, city, state, etc.), a geographic region depicted within a measurement, and/or any other suitable geographic region. In a first variant, the geographic region includes the geofence for a detected or known object. In a second variant, the geographic region includes the geolocations within the geofence. In a third variant, the geographic region includes a property parcel. In this variant, the analyzed object representations can be limited to those for an object of interest (e.g., having a predetermined object class). In a fourth variant, the geographic region includes a large region (e.g., property parcel, neighborhood, municipal district, etc.). In this variant, object representations for a plurality of objects can be extracted from object information from different times, wherein temporally distinct object representations that are associated with overlapping geographic subregions are analyzed (e.g., compared). However, any other suitable geographic region can be used. 
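As one concrete composition of the parcel-number-and-timestamp identifier mentioned above, the following hedged sketch combines a parcel identifier, a first-observation timestamp, and a hash of the object representation; the exact format and the use of SHA-256 are illustrative assumptions.

```python
import hashlib

def object_identifier(parcel_number: str, first_seen: str, representation: bytes) -> str:
    """Compose an identifier from a parcel number, a first-observation timestamp,
    and a hash of the object representation (all illustrative choices)."""
    digest = hashlib.sha256(representation).hexdigest()[:12]
    return f"{parcel_number}:{first_seen}:{digest}"

# e.g. object_identifier("parcel-0042", "2019-06-01", roof_vector.tobytes())
# -> "parcel-0042:2019-06-01:3fa4c1d2e5b6"  (hypothetical values)
```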
The method is preferably performed by a remote system (e.g., platform, cloud platform, etc.), but can additionally and/or alternatively be performed by a local system or be performed by any other suitable system. The remote system can include a set of processing systems (e.g., configured to execute all or portions of the method, the models, etc.), storage (e.g., configured to store the object representations, object versions, data associated with the object versions, etc.), and/or any other suitable component. The method can be performed using one or more models. The model(s) can include: a neural network (e.g., CNN, DNN, encoder, etc.), a visual transformer, a combination thereof, an object detector (e.g., classical methods; CNN-based algorithms, such as Region-CNN, fast RCNN, faster R-CNN, YOLO, SSD-Single Shot MultiBox Detector, R-FCN, etc.; feed-forward networks, transformer networks, generative algorithms, diffusion models, GANs, etc.), a segmentation model (e.g., semantic segmentation model, instance-based semantic segmentation model, etc.), regression, classification, rules, heuristics, equations (e.g., weighted equations), instance-based methods (e.g., nearest neighbor), decision trees, support vectors, geometric inpainting, Bayesian methods (e.g., Naïve Bayes, Markov, etc.), kernel methods, statistical methods (e.g., probability), deterministic methods, clustering, and/or include any other suitable model or algorithm. Each model can determine (e.g., predict, infer, calculate, etc.) an output based on: one or more measurements depicting the object, tabular data (e.g., attribute values, auxiliary data), other object information, and/or other information. The model(s) can be specific to an object class (e.g., roof, tree, pool), a sensing modality (e.g., the model is specific to accepting an RGB satellite image as the input), and/or be otherwise specified. One model can be used to extract the different representations of the set, but additionally and/or alternatively multiple different models can be used to extract the different representations of the set. Examples of models that can be used include one or more: object representation models, object detectors, object segmentation models, relationship models, and/or other models. The object representation model functions to extract a representation of the object depicted or described within the object information. The object representation model preferably extracts the object representation from a single piece of object information (e.g., a single image, etc.), but can alternatively extract the object representation from multiple pieces of object information. The object representation model can include a single model (e.g., a CNN, an object detector, an object segmentation model, etc.), a set of submodels (e.g., each configured to extract a different attribute value from the object information), and/or be otherwise configured. In a first variant, the object representation model can be an object segmentation model that extracts a segment (e.g., semantic segment, instance-based segment, etc.) of an object of interest (e.g., object class of interest) from a measurement (e.g., an image, a DSM, etc.). In a second variant, the object representation model can include a set of attribute models, each configured to extract a value for the respective attribute from the object information (e.g., contemporaneous object information, same piece of object information, etc.). Examples of attribute models can include those described in U.S. application Ser.
No. 17/475,523 filed 15 Sep. 2021, U.S. application Ser. No. 17/526,769 filed 15 Nov. 2021, U.S. application Ser. No. 17/749,385 filed 20 May 2022, U.S. application Ser. No. 17/870,279 filed 21 Jul. 2022, U.S. application Ser. No. 17/858,422 filed 6 Jul. 2022, U.S. application Ser. No. 17/981,903 filed 7 Nov. 2022, U.S. application Ser. No. 17/968,662 filed 18 Oct. 2022, U.S. application Ser. No. 17/841,981 filed 16 Jun. 2022, and/or U.S. application Ser. No. 18/074,295 filed 2 Dec. 2022, each of which is incorporated herein in its entirety by this reference. However, other attribute models can be used. In a third variant, the object representation model can be a model (e.g., CNN, encoder, autoencoder, etc.) trained to output the same object representation (e.g., feature vector) for the same object version despite common appearance changes in the object measurement (e.g., be agnostic to appearance-based changes but sensitive to geometric changes). Common appearance changes can include: shadows, occlusions (e.g., vegetation cover, snow cover, cloud cover, aerial object cover, such as birds or planes, etc.), registration errors, and/or other changes. This object representation model preferably extracts the object representation from an appearance measurement (e.g., an image), but can alternatively extract the object representation from a geometric measurement (e.g., DSM, point cloud, mesh, etc.), geometric representation (e.g., model), and/or from any other suitable object information. In this variant, the object representation model can be trained to extract the same object representation (e.g., feature vector) for all instances of an object within a geographic region (e.g., the same parcel, the same geofence, the object segment's geographic footprint, etc.) from object information from different timeframes (e.g., images from different years, times of the year, etc.) (e.g., example shown inFIG.5). Even though the training data set might include objects that have changed geometric configurations (e.g., different object versions) and therefore should have resulted in different object representations, these changes are relatively rare, and in variants, do not negatively impact the accuracy of the model given a large enough training set. Alternatively, this object representation model can be trained to predict the same object representation (e.g., feature vector) using only object information that is known to describe the same object version. In a fourth variant, the object representation model can be a feature extractor. The feature extractor can be: a set of upstream layers from a classifier (e.g., trained to classify and/or detect the object class from the object information), an encoder, a classical method, and/or any other suitable feature extractor. Examples of feature extractors and/or methodologies that can be used include: corner detectors, SIFT, SURF, FAST, BRIEF, ORB, deep learning feature extraction techniques, and/or other feature extractors. However, the object representation model can be otherwise configured. The object detectors can detect the objects of interest (e.g., object classes of interest) within the object information (e.g., object measurement, DSM, point cloud, image, wide-area image, etc.). Each object detector is preferably specific to a given object class (e.g., roof, solar panel, driveway, chimney, HVAC system, etc.), but can alternatively detect multiple object classes. 
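As an illustration of the feature-extractor variant above, the following Python sketch uses OpenCV's ORB detector on an image chip of the object to produce a compact appearance descriptor; pooling the per-keypoint descriptors into a single vector is a simplification assumed here for comparability across timeframes, not a required step of the method.

# Sketch: classical feature extraction (ORB) over an object image chip.
import cv2
import numpy as np

def orb_representation(chip_bgr: np.ndarray, n_features: int = 500) -> np.ndarray:
    """Return a fixed-length appearance descriptor for an object image chip
    (BGR array, e.g., as loaded by cv2.imread)."""
    gray = cv2.cvtColor(chip_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=n_features)
    _, descriptors = orb.detectAndCompute(gray, None)
    if descriptors is None:  # featureless chip (e.g., bare ground)
        return np.zeros(32, dtype=np.float32)
    # Mean-pool the 32-byte keypoint descriptors into one 32-dimensional vector.
    return descriptors.astype(np.float32).mean(axis=0)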
Example object detectors that can be used include: Viola-Jones, Scale-invariant feature transform (SIFT), Histogram of oriented gradients (HOG), region proposals (e.g., R-CNN), Single Shot MultiBox Detector (SSD), You Only Look Once (YOLO), Single-Shot Refinement Neural Network for Object Detection (RefineDet), Retina-Net, deformable convolutional networks, zero-shot object detectors, transformer-based networks (e.g., Detection Transformer (DETR)), and/or any other suitable object detector. An object detector can: detect the presence of an object of interest (e.g., an object class) within the measurement, determine a centroid of the object of interest, determine a count of the object of interest within the measurement, determine a bounding box surrounding the object (e.g., encompassing or touching the furthest extents of the object, aligned with or parallel to the object axes, not aligned with the object axes, be a rotated bounding box, be a minimum rotated rectangle, etc.), determine an object segment (e.g., also function as a segmentation model), determine object parameters, determine a class for the object, and/or provide any other suitable output, given a measurement of the object or geographic region (e.g., an image). Examples of object parameters can include: the location of the object (e.g., location of the object within the image, the object's geolocation, etc.; from the bounding box location); the geographic extent of the object (e.g., extent of the bounding box); the orientation of the object (e.g., within the image, within a global reference frame, relative to cardinal directions, relative to another object, etc.; from the bounding box orientation); the dimensions of the object (e.g., from the dimensions of the bounding box); and/or other parameters. The object detector can additionally or alternatively be trained to output object geometries, time of day (e.g., based on shadow cues), and/or any other suitable information. The object detector can be trained based on manual labels (e.g., manually drawn bounding boxes), bounding boxes drawn around object segments (e.g., determined using a segmentation model from measurements of the same resolution, higher resolution, or lower resolution), bounding boxes drawn around object geofences or masks (e.g., determined from the locations of the object segments), and/or other training targets. Alternatively, the object detector (e.g., a zero-shot object detector) can be generated using a zero-shot classification architecture adapted for object detection (e.g., by embedding both images and class labels into a common vector space; by using multimodal semantic embeddings and fully supervised object detection; etc.), and/or otherwise generated.
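The Python sketch below illustrates how the object parameters described above (class, centroid geolocation, dimensions) might be derived from an axis-aligned detection box; the field names and the simple north-up affine conversion from pixels to geographic coordinates are assumptions for illustration only.

# Sketch: derive object parameters from a detector's axis-aligned bounding box.
from dataclasses import dataclass

@dataclass
class Detection:
    x_min: float  # pixel coordinates of the bounding box
    y_min: float
    x_max: float
    y_max: float
    object_class: str

def object_parameters(det: Detection, origin_lon: float, origin_lat: float,
                      deg_per_px_x: float, deg_per_px_y: float) -> dict:
    """Convert a detection into location and dimension parameters, assuming a
    north-up image whose top-left pixel corresponds to (origin_lon, origin_lat)."""
    cx = (det.x_min + det.x_max) / 2.0
    cy = (det.y_min + det.y_max) / 2.0
    return {
        "object_class": det.object_class,
        "centroid_lonlat": (origin_lon + cx * deg_per_px_x,
                            origin_lat - cy * deg_per_px_y),  # image y grows downward
        "width_px": det.x_max - det.x_min,
        "height_px": det.y_max - det.y_min,
    }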
The detected object, parameters thereof (e.g., location, orientation, bounding box size, object class, etc.), and/or derivatory entities (e.g., object segments) can be used to: retrieve higher-resolution measurements of the region (e.g., higher resolution than that used to detect the object; example shown inFIG.15); retrieve a different measurement modality for the region (e.g., depth measurements for the region); be used to determine the pose of the object; be used to determine attribute values for the object; be used to determine a geometric parameter of the object (e.g., the detected building shadow's length can be used to determine a building's height, given the building's geolocation and the measurement's sampling time); be used for multi-category multi-instance localization; determine counts and/or other population analyses; for time-series analyses (e.g., based on objects detected from a timeseries of measurements); and/or be otherwise used. The higher-resolution measurements can encompass the detected object's location, encompass the bounding box's geographic region, encompass a smaller region than the wide-area measurement, and/or be otherwise related to the detected object. In variants, the higher-resolution measurement can be used to extract a higher-accuracy object segment using a segmentation model (e.g., a more accurate segment than that extractable from the original measurement; example shown inFIG.15) and/or be otherwise used. In these variants, the bounding box can optionally be used to crop or mask the higher-resolution image before object segmentation or not be used for segmentation. The object segment and/or object detection (e.g., bounding box; measurement portion within the bounding box, etc.) can then be used to determine object attributes, feature vectors, and/or otherwise used. However, the object detector can be otherwise configured, and the outputs otherwise used. The object segmentation models can determine the pixels corresponding to an object of interest (e.g., object class of interest). The object segmentation models can be preferably specific to a given object class (e.g., roof, solar panel, driveway, etc.), but can alternatively segment multiple object classes. The object segmentation model can be a semantic segmentation model (e.g., label each pixel with an object class label or no object label), instance-based segmentation model (e.g., distinguish between different instances of the same object class within the measurement), and/or be any other suitable model. The object segmentation model is preferably trained to determine object representations (e.g., masks, pixel labels, blob labels, etc.) from object measurements, but can alternatively determine the object representations from other object information. In variants, the object measurements used by the object segmentation model can be higher resolution and/or depict a smaller geographic region than those used by the object detector; alternatively, they can be the same, lower resolution, and/or depict a larger geographic region. The relationship model functions to determine the type of relationship between two object versions and/or object representations. In a first variant, the relationship model can be a classifier (e.g., change classifier) trained to determine (e.g., predict, infer) a relationship (e.g., change) between two object variants and/or object representations, given the object representations. In a second variant, the relationship model can be a ruleset or set of heuristics. 
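A minimal Python sketch of the crop-then-segment flow described above follows: the bounding box from a lower-resolution detection selects the portion of a higher-resolution image that is passed to a segmentation model. The segment_roof callable is a placeholder for whichever segmentation model is actually used.

# Sketch: run segmentation only inside a detection's bounding box on a
# higher-resolution image, returning a full-size mask aligned with that image.
import numpy as np
from typing import Callable, Tuple

def segment_within_bbox(high_res: np.ndarray,
                        bbox: Tuple[int, int, int, int],
                        segment_roof: Callable[[np.ndarray], np.ndarray]) -> np.ndarray:
    x_min, y_min, x_max, y_max = bbox
    crop = high_res[y_min:y_max, x_min:x_max]
    crop_mask = segment_roof(crop).astype(bool)
    full_mask = np.zeros(high_res.shape[:2], dtype=bool)
    full_mask[y_min:y_max, x_min:x_max] = crop_mask
    return full_mask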
In an example, the relationship model can associate object representations with object versions based on a comparison of the geometric segments represented by the object representations. In a second example, the relationship model can associate information (e.g., auxiliary information) with an object version when that information (e.g., auxiliary information) is also associated with the same geographic region and the same timeframe as the object version. In a third variant, the relationship model can be a similarity model configured to determine a distance between two object representations. However, the relationship model can be otherwise configured. However, any other suitable set of models can be used. 4.1 Determining Object Information S100. Determining object information S100functions to determine information that is representative of an object version. The object information20(e.g., examples shown inFIG.2,FIG.4,FIG.12, andFIG.13) is preferably for a geographic region10associated with an object of interest, but can additionally or alternatively be for an image segment, geometric segment, and/or any other suitable entity. The geographic region associated with the object of interest can be a parcel, lot, geofence (e.g., of a prior version of the object), neighborhood, geocode (e.g., associated with an address), or other geographic region. In an illustrative example, S100can retrieve imagery (e.g., aerial imagery) depicting the geographic region associated with an address (e.g., depicting the respective parcel, encompassing a geocode associated with the address, etc.). However, S100can retrieve any other suitable object information. The determined object information can include one or more pieces of object information. For example, the object information can include: a single measurement (e.g., image), multiple measurements, measurements and attribute values, attribute values and descriptions, all available object information (e.g., for the geographic region, for the timeframe), and/or any other suitable set of object information. The determined object information can be for one or more time windows (e.g., timeframes). The determined object information can include: all information associated with the geographic region, only new information available since the last analysis of the object version and/or the geographic region, the most recent information (e.g., for the geographic region), only information within a predetermined time window (e.g., within a request-specified time window or timeframe, from the last analysis until a current time, etc.), and/or any other suitable object information. In variants, limiting the object information by time window can enable object representations to be determined for each time window, which enables the resultant object representations to be compared (and tracked) across different time windows. Alternatively or additionally, the determined object information can be unlimited by time. The object information20can be determined: responsive to receipt of a request (e.g., identifying the object identifier, identifying the geographic region, etc.), when new information is available, periodically, at a predetermined time, and/or at any other suitable time. 
In a first example, object information for a geographic region can be retrieved in response to receipt of a request including an object identifier associated with the geographic region, wherein the retrieved object information can be for the object version associated with the object identifier and/or for other object versions associated with the geographic region. In a second example, all object information for a geographic region can be periodically retrieved. However, the object information can be determined at any other time. The object information20can be from the same information provider (e.g., vendor), but can additionally and/or alternatively be from different information providers; example shown inFIG.2. The information determined for different object versions preferably has the same information modality (e.g., RGB, LIDAR, stereovision, radar, sonar, text, audio, etc.), but can additionally and/or alternatively have different information modalities. The information can be associated with the same information time (e.g., time the image was captured of the object), but can additionally and/or alternatively be associated with different information times. The information can be associated with the same geographic region (e.g., parcel, lot, address, geofence, etc.), or be associated with different geographic regions. Each piece of object information20can be associated with geographic information (e.g., a geolocation, an address, a property identifier, etc.), a timeframe (e.g., a timestamp, a year, a month, a date, etc.), information provider, pose relative to the object or geographic region, and/or other metadata. The object information20can include: measurements, models (e.g., geometric models), descriptions (e.g., text descriptions, audio descriptions, etc.), attribute values (e.g., determined from the measurements or descriptions, etc.), auxiliary data, and/or other information. The same type of object information is preferably determined for each object version and/or method instance; alternatively, different object information types can be determined. Each measurement is preferably an image (e.g., RGB, NDVI, etc.), but can additionally and/or alternatively be depth information (e.g., digital elevation model (DEM), digital surface model (DSM), digital terrain model (DTM), etc.), polygons, point clouds (e.g., from LIDAR, stereoscopic analyses, correlated images sampled from different poses, etc.), radar, sonar, virtual models, audio, video, and/or any other suitable measurement. The image can include tiles (e.g., of geographic regions), chips (e.g., depicting a parcel, depicting the portion of a geographic region associated with a built structure), parcel segments (e.g., of a property parcel), and/or any other suitable image segments. Each measurement can be geometrically corrected (e.g., orthophoto, orthophotograph, orthoimage, etc.) such that the scale is uniform (e.g., image follows a map projection), but can alternatively not be geometrically corrected. Each measurement preferably depicts a top-down view of the object, but can additionally and/or alternatively depict an oblique view of the object, an orthographic view of the object, and/or any other suitable view of the object. Each measurement is preferably remotely sensed (e.g., satellite imagery, aerial imagery, drone imagery, radar, sonar, LIDAR, seismography, etc.), but can additionally and/or alternatively be locally sensed or otherwise sensed. 
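The following Python sketch illustrates selecting object information for one geographic region and time window from a metadata catalog, as described above; the record fields and helper names are assumptions about how the per-measurement metadata might be stored.

# Sketch: select a region's measurements within a requested time window.
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class MeasurementRecord:
    parcel_id: str
    captured_on: date
    provider: str
    uri: str  # pointer to the stored image / DSM / point cloud

def select_object_information(catalog: List[MeasurementRecord],
                              parcel_id: str,
                              start: date,
                              end: date) -> List[MeasurementRecord]:
    """Return the region's measurements within [start, end], oldest first."""
    hits = [m for m in catalog
            if m.parcel_id == parcel_id and start <= m.captured_on <= end]
    return sorted(hits, key=lambda m: m.captured_on)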
The measurements can be of the object exterior, interior, and/or any other suitable view of the property. For example, measurements can be: wide-area images, parcel segments (e.g., image segment depicting a parcel), object segments (e.g., image segment depicting a structure of interest; example shown in FIG. 3), and/or any other suitable measurement. The image segments can be cropped to only include the entity of interest (e.g., parcel, structure, etc.), or include surrounding pixels. The measurements of the set are preferably measurements of the same geographic region (e.g., property, parcel, etc.), but can alternatively be representative of different geographic regions. The property can be: a property parcel, an object, a point of interest, a geographic region, and/or any other suitable entity. Additionally or alternatively, the measurements of the set can be associated with (e.g., depict) the same physical object instance (e.g., the tracked object instance) or different object instances. The object can include: a built structure (e.g., building, deck, pool, etc.), a segment of a built structure (e.g., roof, solar panel, door, etc.), a feature of a built structure (e.g., shadow), vegetation, a landmark, and/or any other physical object. The measurements and/or portions thereof can optionally be pre- or post-processed. In a first example, a neural network model can be trained to identify an object of interest (e.g., an object segment) within a measurement (e.g., image, geometric measurement, etc.), wherein occluded segments of the object of interest can be subsequently infilled by the same or a different model. The occluded segments can be infilled using the method disclosed in U.S. application Ser. No. 17/870,279 filed 21 Jul. 2022, incorporated herein in its entirety by this reference, and/or using any other suitable method. In a second example, errant depth information can be removed from a depth measurement. In an illustrative example, this can include removing depth information below a ground height and/or removing depth information above a known object height, wherein the known object height can be obtained from building records, prior measurements, or other sources. However, any other suitable object measurement can be determined. The attribute values (e.g., property attribute values, object attribute values, etc.) can represent an attribute of the object version. Examples of object attributes can include: appearance descriptors (e.g., color, size, shape, texture, etc.), geometric descriptors (e.g., dimensions, slope, shape, complexity, surface area, etc.), condition descriptors (e.g., yard condition, roof condition, etc.), construction descriptors (e.g., roofing material, flooring material, siding material, etc.), record attributes (e.g., construction year, number of beds/baths, structure classification, etc.), classification attributes (e.g., object class, object type), and/or other descriptors. The object attributes can be quantitative, qualitative, and/or otherwise valued. The object attributes can be determined from object measurements, descriptions, auxiliary information, and/or other information. The object attributes can be determined: manually, using a trained model (e.g., trained to predict or infer the attribute value based on the object information), from a third-party database, and/or otherwise determined. In examples, property attributes and/or values thereof can be defined and/or determined as disclosed in U.S. application Ser. No. 17/529,836 filed on 18 Nov. 2021, U.S.
application Ser. No. 17/475,523 filed 15 Sep. 2021, U.S. application Ser. No. 17/749,385 filed 20 May 2022, U.S. application Ser. No. 17/870,279 filed 21 Jul. 2022, and/or U.S. application Ser. No. 17/858,422 filed 6 Jul. 2022, each of which is incorporated in its entirety by this reference (e.g., wherein features and/or feature values disclosed in the references can correspond to attributes and/or attribute values). The attribute values can be determined asynchronously with method execution, or be determined in real- or near-real time with respect to the method. Example attributes can include: structural attributes (e.g., for a primary structure, accessory structure, neighboring structure, etc.), record attributes (e.g., number of bed/bath, construction year, square footage, legal class, legal subclass, geographic location, etc.), condition attributes (e.g., yard condition, yard debris, roof condition, pool condition, paved surface condition, etc.), semantic attributes (e.g., semantic descriptors), location (e.g., parcel centroid, structure centroid, roof centroid, etc.), property type (e.g., single family, lease, vacant land, multifamily, duplex, etc.), property component parameters (e.g., area, enclosure, presence, structure type, count, material, construction type, area condition, spacing, relative and/or global location, distance to another component or other reference point, density, geometric parameters, condition, complexity, etc.; for pools, porches, decks, patios, fencing, etc.), storage (e.g., presence of a garage, carport, etc.), permanent or semi-permanent improvements (e.g., solar panel presence, count, type, arrangement, and/or other solar panel parameters; HVAC presence, count, footprint, type, location, and/or other parameters; etc.), temporary improvement parameters (e.g., presence, area, location, etc. of trampolines, playsets, etc.), pavement parameters (e.g., paved area, percent illuminated, paved surface condition, etc.), foundation elevation, terrain parameters (e.g., parcel slope, surrounding terrain information, etc.), legal class (e.g., residential, mixed-use, commercial), legal subclass (e.g., single-family vs. multi-family, apartment vs. condominium), geographic location (e.g., neighborhood, zip, etc.), population class (e.g., suburban, urban, rural, etc.), school district, orientation (e.g., side of street, cardinal direction, etc.), subjective attributes (e.g., curb appeal, viewshed, etc.), built structure values (e.g., roof slope, roof rating, roof condition, roof material, roof footprint, roof surface area, covering material, number of roof facets, roof type, roof geometry, roof segments, etc.), auxiliary structures (e.g., a pool, a statue, ADU, etc.), risk scores (e.g., score indicating risk of flooding, hail, fire, wind, wildfire, etc.), neighboring property values (e.g., distance to neighbor, structure density, structure count, etc.), context (e.g., hazard context, geographic context, weather context, terrain context, etc.), vegetation parameters (e.g., vegetation type, vegetation density, etc.), historical construction information, historical transaction information (e.g., list price, sale price, spread, transaction frequency, transaction trends, etc.), semantic information, and/or any other attribute. Auxiliary data can include property descriptions, permit data, insurance loss data, inspection data, appraisal data, broker price opinion data, property valuations, property attribute and/or component data (e.g., values), and/or any other suitable data. 
The object information can be: retrieved, extracted, and/or otherwise determined. In a first variant, the set of measurements can be determined by retrieving the set of measurements from a database. In a second variant, the set of measurements can be manually determined. In a third variant, the set of measurements can be determined by segmenting the set of measurements from a wide-area measurement (e.g., representative of multiple properties). The set of measurements are preferably determined based on parcel data, but can alternatively be determined independent of parcel data. In a first example of the third variant, the set of measurements can be segmented from a wide-area measurement by determining a parcel segment within the wide-area image based on a parcel boundary. In this example, object segments (e.g., representative of objects, roof, pool, etc.) can further be determined from the parcel segment using a segmentation model (e.g., semantic segmentation model, instance-based segmentation model, etc.). In a second example of the third variant, the set of measurements can be segmented from a wide-area measurement by detecting objects within a wide-area image (e.g., using an object detector). The object detections can optionally be assigned to parcels based on the object's geolocation(s). In other examples of the third variant, the set of measurements can be segmented from a wide-area measurement as discussed in U.S. application Ser. No. 17/336,134 filed 1 Jun. 2021, which is incorporated in its entirety by this reference. However, the set of measurements can be otherwise determined. 4.2 Determining a Set of Object Representations S200. Determining a set of object representations S200functions to determine object representations for an object of interest or a geographic region. The set of object representations30is preferably determined (e.g., extracted) from the object information determined in S100, but can alternatively be determined from other information. The object information can be of the same or different type (e.g., modality), from the same or different vendor, from the same or different time window, from the same or different geographic region (e.g., wherein the different geographic regions overlap or are disjoint), and/or be otherwise related. The set of object representations are preferably determined from a set of measurements, but can additionally and/or alternatively be determined from a description, a set of attribute values, and/or any other suitable object information. One object representation30(e.g., a geometric representation) is preferably determined from each piece of object information (e.g., measurement) of the set, but alternatively multiple representations can be determined from each piece of object information (e.g., different representation types for the same object, such as a geometric representation and a visual representation for the same object; different representations for different objects depicted within the same measurement; etc.), one representation can be determined from multiple pieces of information (e.g., measurements) of the set, and/or any other suitable set of representations can be determined from any suitable number of object information pieces (e.g., measurements). The object representation30(e.g., representation) can represent: an object class of interest within a geographic region, an object instance of interest (e.g., an object version), and/or any other suitable object. 
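As an illustration of the third variant's first example, the Python sketch below crops a parcel segment out of a north-up wide-area image using the parcel's geographic bounding box and a simple affine geotransform; reducing the parcel boundary to its bounding box is a simplifying assumption, and a real pipeline would rasterize the full parcel polygon instead.

# Sketch: crop a parcel segment from a north-up wide-area image.
import numpy as np
from typing import Tuple

def crop_parcel(wide_area: np.ndarray,
                parcel_bounds: Tuple[float, float, float, float],  # lon_min, lat_min, lon_max, lat_max
                origin_lon: float, origin_lat: float,               # top-left corner of the image
                deg_per_px: float) -> np.ndarray:
    lon_min, lat_min, lon_max, lat_max = parcel_bounds
    col_min = int((lon_min - origin_lon) / deg_per_px)
    col_max = int(np.ceil((lon_max - origin_lon) / deg_per_px))
    row_min = int((origin_lat - lat_max) / deg_per_px)   # latitude decreases down the image
    row_max = int(np.ceil((origin_lat - lat_min) / deg_per_px))
    h, w = wide_area.shape[:2]
    return wide_area[max(row_min, 0):min(row_max, h), max(col_min, 0):min(col_max, w)]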
The geographic region can be the same or different from that used in S100to determine the object information. In a first example, S200can include determining object representations for all buildings or roofs depicted within a measurement (e.g., image or an image segment) depicting a given geographic region (e.g., a real estate parcel, a region surrounding the geographic footprint of an object detected in a prior measurement, etc.). In a second example, S200can include determining the object representation for a specific building or roof depicted within the measurement. However, any other suitable set of object representations can be determined for a given object instance or geographic region. Examples of object representations30that can be determined can include: an appearance representation (e.g., representative of an object's visual appearance), a geometric representation (e.g., representative of an object's geometry), an attribute representation (e.g., representative of the object's attribute values), a combination thereof, and/or any other suitable representation. Examples of appearance representations can include: vectors (e.g., encoding values for different visual features or appearance-based features), matrices, object segments (e.g., measurement segments depicting substantially only the object, image segments, masks, etc.), bounding boxes encompassing the object, and/or other appearance-based representations. Examples of geometric representations can include: a geometric object segment (e.g., segment of a DSM, segment of a point cloud, masks, etc.), a bounding box (e.g., bounding box, bounding volume, etc.) encompassing the object, a mesh, vectors (e.g., encoding values for different geometric features), a set of geometric feature values (e.g., a set of planes, facets, points, edges, corners, peaks, valleys, etc.) and/or parameters thereof (e.g., count, position, orientation, dimensions, etc.), an array, a matrix, a map, polygons or measurement segments (e.g., example shown inFIG.3), a model, and/or other geometric representations. Examples of attribute representations can include a vector of values for one or more attributes, and/or be otherwise represented. The attribute values for a given attribute representation are preferably determined from the same pieces of object information (e.g., the same measurements), but can additionally or alternatively be determined from contemporaneous object information and/or other information. The type of object representations that are determined in S200can be: predetermined, determined based on the type of measurement (e.g., geometric representations are extracted from Lidar, RGB images, and DSM data, while appearance representations are extracted from RGB images), and/or otherwise determined. The same type of representation is preferably determined for each time window for a given geographic region, such that the representations can be compared against each other; alternatively, different representation types can be determined for each time window for a given geographic region. The representations30of the set are preferably representative of the same object, but can additionally and/or alternatively be representative of different objects. The representations can be for the same or different object versions. 
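One possible way to bundle an object representation with the metadata it is later compared and tracked with is sketched below in Python; the container and its field names are illustrative assumptions, not a required data model.

# Sketch: a container pairing a representation with its metadata.
from dataclasses import dataclass, field
from datetime import date
from typing import Any, Optional
import numpy as np

@dataclass
class ObjectRepresentation:
    kind: str                        # "appearance" | "geometric" | "attribute"
    payload: Any                     # feature vector, mask, facet set, attribute dict, ...
    captured_on: date
    parcel_id: str
    object_id: Optional[str] = None  # assigned later, when versions are resolved (S300)
    metadata: dict = field(default_factory=dict)

rep = ObjectRepresentation(
    kind="appearance",
    payload=np.zeros(128, dtype=np.float32),
    captured_on=date(2021, 8, 3),
    parcel_id="APN-123-456",
)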
The representations are preferably unassociated with an object identifier when the representations are initially determined (e.g., wherein the representation is associated with the object identifier in S300), but can alternatively be associated with an object identifier by default (e.g., associated with the last object identifier for the property by default, wherein S300evaluates whether the object identifier is the correct object identifier). Object representations30can additionally and/or alternatively be associated with the metadata for the object information, such as the timestamp, vendor, geolocation, and/or other metadata. In variants where object representations are determined from a timeseries of object information (e.g., measurements), S200can generate a timeseries of object representations. Object representations30can additionally and/or alternatively be associated with: parcel information (e.g., parcel ID, parcel boundaries, parcel geolocation(s), associated addresses, etc.), attribute values extracted from or associated with the object information (e.g., attribute values extracted from the object information, etc.), auxiliary information (e.g., associated with the representation via the parcel, tax information, permit data, address information, square footage, number of beds and baths, etc.), and/or any other suitable auxiliary data. The auxiliary data can be associated with a representation: spatially (e.g., wherein the auxiliary data is associated with a geographic location within the geographic region, with an address shared with the representation, etc.), temporally (e.g., wherein the auxiliary data and the representation are associated with the same time window, etc.), spatiotemporally, and/or otherwise associated with a representation. The object representation30is preferably determined by an object representation model, but can alternatively be determined by any other suitable model, be determined by a user, be retrieved from a database, or be otherwise determined. The object representation can be: extracted, calculated, aggregated, and/or otherwise determined from the object information. In a first variant, a feature vector (e.g., appearance feature vector, geometric feature vector, etc.) can be extracted from the object information using a neural network model (e.g., example shown inFIG.5). The object information is preferably a measurement, more preferably an image or a geometric measurement (e.g., DSM), but can alternatively be any other suitable type of object information. In a specific example, the model can be trained to be invariant to common appearance changes (e.g., occlusions, shadows, car presence, registration errors, etc.), wherein the structure appearance of the property can be considered unchanged when the pair of vectors substantially match (e.g., do not differ beyond a threshold difference). In a second embodiment, the model can be trained to not be invariant to common appearance changes. In a second variant, the object representation is a measurement segment depicting the object. The object segment can be an image segment (e.g., determined from an image), a geometric segment (e.g., determined from a geometric measurement), an object component (e.g., a roof facet, etc.), and/or be otherwise configured. The object segment can be georegistered and/or unregistered. In a first example, an image segment can be determined using an object segmentation model trained to segment the object class of interest. 
In an illustrative example, a roof segment can be determined within an image (e.g., of a parcel, a wide-area image, etc.) using a roof segmentation model. In a second example, a geometric segment can be determined by masking a geometric measurement using a mask registered to the geometric measurement. The mask can be: an object segment or mask determined from an image (e.g., contemporaneously or concurrently sampled with the geometric measurement), an object segment for a known object version, and/or be any other suitable mask. In a third example, the geometric measurement segment can be determined using an object segmentation model trained to detect segments of an object class of interest within a geometric measurement. In an example, a neural network model (e.g., segmentation model) can be trained to determine object instances within an image, such as by using the methods discussed in U.S. application Ser. No. 17/336,134 filed 1 Jun. 2021, which is incorporated in its entirety by this reference. However, the measurement segment can be otherwise determined. In a third variant, the object representation includes a set of dimensions for the object, and can additionally or alternatively include a location and/or orientation for the object. For example, the geometric representation can include a length and width of the object, and can optionally include the height of the object. In this embodiment, the geometric representation can be determined using an object detector (e.g., wherein the dimensions and other parameters, such as location or orientation, are determined from the dimensions and parameters of the bounding box), an object segmentation model (e.g., wherein the dimensions and other parameters are determined from the object segment), and/or any other suitable object representation model. In a fourth variant, the object representation includes a set of components of the object, and can optionally include the parameters for each component and/or characteristics of the set. Examples of components can include: segments, planes, facets, edges (e.g., peaks, valleys, etc.), corners, and/or other object components. For example, the object representation can include a set of roof segments or roof facets for a building, and can optionally include the pose (e.g., location, orientation, surface normal orientation, etc.), dimensions, slope, and/or other parameter of each component, and/or include the count, distribution (e.g., size distribution, slope distribution, etc.) and/or other characteristic of the set. In a first embodiment, the components are determined from a geometric segment (e.g., determined using the first variant), wherein a set of planes (e.g., the components) is fitted to the geometric segment. The model or a second model can optionally distinguish between adjacent components. For example, a model can optionally correspond the fitted planes to depth information and identify disjoint planes with different normal vectors (e.g., determined based on the depth information, etc.) as different components (e.g., different roof segments). However, different components can be otherwise distinguished. In a second embodiment, the components are determined using a model trained to determine component instances (e.g., roof segments) and/or extract the component parameters and/or characteristics from the object information (e.g., object measurement). However, the component set, parameters thereof, and/or characteristics thereof can be otherwise determined.
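The Python sketch below roughly approximates the plane-fitting idea described above: per-pixel slope and gradient direction are estimated from a roof DSM segment with finite differences, and pixels are then grouped by quantized (slope, direction) bins so that pixels sharing a facet orientation fall into the same group. This is a simplification of true plane fitting, assumed here purely for illustration; the bin widths are placeholders.

# Sketch: group DSM pixels into facet-like labels by quantized slope/direction.
import numpy as np

def facet_labels(dsm: np.ndarray, px_size_m: float = 0.5,
                 slope_bin_deg: float = 10.0, aspect_bin_deg: float = 45.0) -> np.ndarray:
    """Return an integer label per pixel; pixels with the same label share a
    quantized facet orientation."""
    dz_dy, dz_dx = np.gradient(dsm, px_size_m)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))        # 0 = flat
    aspect = np.degrees(np.arctan2(dz_dy, dz_dx)) % 360.0        # direction of steepest ascent
    slope_q = np.floor(slope / slope_bin_deg)
    aspect_q = np.floor(aspect / aspect_bin_deg)
    aspect_q[slope_q == 0] = 0          # direction is meaningless on (near-)flat pixels
    n_aspect_bins = int(360.0 / aspect_bin_deg)
    return (slope_q * n_aspect_bins + aspect_q).astype(int)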
In a fifth variant, the object representations include an attribute set (e.g., attribute vector) for the object. The attributes within the attribute set are preferably the same attributes for all objects (and object versions), but can alternatively include different attributes. The attribute values within the attribute set are preferably determined from the same piece of object information, but can additionally or alternatively be determined from contemporaneous object information (e.g., object information captured, generated, or otherwise associated with a common time frame, such as within 1 day, 1 week, 1 month, or 1 year of each other), from object information known to be associated with the same object version, and/or from any other suitable set of object information. For example, the attribute vector can include values for each of a set of attributes that were extracted from the same measurement (e.g., image) or from measurements sampled the same day (e.g., by the same vendor, by different vendors, etc.). The attribute values can be determined by one or more attribute models, and/or otherwise determined. In a sixth variant, the object representations can include a combination of the above variants. However, the set of representations can be otherwise determined. 4.3 Determining Relationships Between Representations S300. Determining relationships between object representations S300functions to determine whether the object representations within the set are associated with different physical object versions and/or how the physical object versions are related with each other. Different object versions40are preferably different versions of the same object (e.g., an original building and a remodeled building are different versions of the same building), but can additionally or alternatively be different objects altogether (e.g., an original building and a new building post demolition). Different object versions40preferably have different physical geometries (e.g., different footprints, different boundaries, different heights, different topography, etc.), but can additionally or alternatively have different appearances (e.g., different paint, different roofing or siding material, etc.), and/or be otherwise differentiated. Different object versions40can be associated with different object identifiers. Each object identifier preferably identifies a single object version (e.g., example shown inFIG.6B), but can alternatively identify multiple object versions (e.g., identify all related buildings located on the same or overlapping geographic region; example shown inFIG.6A). The identifiers are preferably globally unique (e.g., are universally unique identifiers (UUID) or globally unique identifiers (GUID)), but can additionally and/or alternatively be locally unique (e.g., to the geographic region), not be unique, and/or otherwise related. The object identifier can be: a hash of the object's characteristics (e.g., geometric characteristics, visual characteristics, etc.), generated from the object representations associated with the object version, a randomly generated identifier, an index, a nonce, be generated based on a geographic region identifier, be generated based on the object version parameters (e.g., timeframe, etc.), be generated based on a prior object version's identifier, a unique combination of semantic words or alphanumeric characters, or be any other suitable identifier. 
The geographic region identifier can be an address, a hash of an address, an index, randomly generated, and/or otherwise determined. Examples of object identifiers (e.g., building identifier, structure identifier, etc.) include: an address and timestamp combination (e.g., wherein the timestamp is associated with the object version), a hash of the object representation associated with the object version, an object representation associated with the object version, and/or any other identifier. The relationships50(e.g., examples shown inFIG.7,FIG.12,FIG.13, etc.) are preferably determined by comparing the object representations30, but can alternatively be manually determined, predicted (e.g., using a model, from the object representations30or the underlying object information20), and/or be otherwise determined. The compared object representations are preferably associated with the same geographic region (e.g., same parcel, the same geofence, the same geolocation, the same address, etc.), but can additionally and/or alternatively be associated with different geographic regions (e.g., disjoint geographic regions), related geographic regions (e.g., wherein one geographic region overlaps with the other), be randomly selected (e.g., from a set of representations extracted for a plurality of object instances), and/or be otherwise determined. The compared object representations are preferably temporally adjacent or sequential, but can alternatively be temporally disparate (e.g., separated by one or more intervening object representations). In a first example, all representations associated with the same geographic region are compared with each other. In a second example, temporally adjacent representations are compared with each other. However, any other set of representations can be compared. S300can include determining whether the object representations30represent the same object version40. Object representations indicative of the same physical object geometries (and/or appearances) are preferably associated with the same object version, while object representations indicative of different physical object geometries (and/or appearances) are preferably associated with different object versions. Alternatively, object representations indicative of different physical object geometries and/or appearances can be associated with the same object version, while object representations indicative of the same physical object geometries and/or appearances can be associated with different object versions. Object representations that represent the same object version can substantially match (e.g., are an exact match or match within a predetermined margin of error, such as 0.1%, 1%, 5%, etc.), have a similarity greater than a similarity threshold (e.g., Jaccard Index, Sorensen-Dice coefficient, Tanimoto coefficient, etc.), or have a distance less than a threshold distance (e.g., the cosine distance, Euclidean distance, Mahalanobis distance, or other similarity distance is less than a threshold distance); alternatively, the object representations representative of the same object version can be otherwise determined. Object representations representative of the same object version are preferably sequential, serial, or temporally adjacent (e.g., there is no intervening object representation or object version), but can alternatively not be sequential. For example, an object version can be determined for each set of temporally adjacent object representations that are substantially similar. 
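A minimal Python sketch of the vector comparison described above follows: temporally adjacent feature vectors whose cosine distance falls below a threshold are treated as representing the same object version. The threshold value is an arbitrary placeholder.

# Sketch: cosine-distance test for "same object version".
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 1.0  # treat degenerate (all-zero) vectors as maximally distant
    return float(1.0 - np.dot(a, b) / denom)

def same_object_version(a: np.ndarray, b: np.ndarray, threshold: float = 0.1) -> bool:
    return cosine_distance(a, b) < threshold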
Object representations representative of different object versions are preferably substantially differentiated (e.g., the differences exceed a margin of error) and/or have less than a threshold similarity (e.g., the cosine distance, Euclidean distance, Mahalanobis distance, or other similarity distance is higher than a threshold distance); alternatively, the object representations representative of different object versions can be otherwise determined. In a first variant, determining whether the object representations represent the same object version can include comparing the object segments. In a first embodiment, comparing the object segments includes determining the object segment overlap (e.g., geographic overlap), wherein the object representations represent the same object version when the overlap exceeds a threshold and/or the non-overlapping portions fall below a threshold. In a second embodiment, comparing the object segments includes comparing the segment projections (e.g., footprints) and/or geometries, wherein the object representations represent the same object version when the projections and/or geometries are similar above a threshold. This embodiment can mitigate the effect of registration errors between the underlying object measurements. However, the segments can be otherwise compared. In a second variant, determining whether the object representations represent the same object version can include comparing feature vectors (e.g., extracted using the appearance change-agnostic model) and/or attribute vectors. In this variant, the object representations can be considered to represent the same object version when the vectors match (e.g., exactly or above a threshold similarity). However, the feature vectors can be otherwise compared. In a third variant, determining whether the object representations represent the same object version can include comparing component sets and/or parameters or characteristics thereof. In a first embodiment, the object representations represent the same object version when the components within the sets match, or when one set is a subset of the other. In a second embodiment, the object representations represent the same object version when the component sets have substantially the same component parameter or characteristic distributions. For example, the object representations represent the same object version when the number of roof facets is substantially the same, when the slope distribution is substantially the same, when the roof peak or valley locations are substantially the same, and/or when other component parameters or characteristics are substantially the same. However, whether the object representations represent the same or different object versions can be otherwise determined. S300 can additionally or alternatively include determining whether the object versions or object representations are related, and/or how the object versions or object representations are related. Object versions can be considered related when they are the same object version, are derivatory object versions (e.g., one object version is a modified version of the other object version), or are otherwise related. Object representations can be considered related when they represent the same object version, represent derivatory object versions (e.g., one object version is a modified version of the other object version), or are otherwise related.
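As an illustration of the first variant's overlap check, the Python sketch below computes intersection-over-union between two georegistered object masks and treats the representations as the same version above a threshold; the threshold value is a placeholder assumption.

# Sketch: segment-overlap (IoU) test for "same object version".
import numpy as np

def mask_iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0
    return float(np.logical_and(a, b).sum() / union)

def segments_match(mask_a: np.ndarray, mask_b: np.ndarray, iou_threshold: float = 0.9) -> bool:
    return mask_iou(mask_a, mask_b) >= iou_threshold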
Relationships50can be characterized by: a relationship class (e.g., relationship type), a similarity metric (e.g., cosine distance, affinity, etc.), an amount of change, the changed segments (e.g., wing added/removed, highlight the portions of a building that have been modified, etc.), and/or otherwise characterized. Examples of relationship classes can include: same, unchanged, modified, added, removed, replaced, or other relationship types. Relationships50can be between object representations30, object versions40, and/or any other suitable entity. In a first illustrative example, two object versions are considered the “same” when the geometries (e.g., represented by object segments, object component sets, object feature sets, attribute vectors, etc.) substantially match; “modified” when a prior object version shares geometric segments with the latter object version (e.g., the geometric segments intersect; the component sets intersect or include shared components); “replaced” when a prior object version does not share geometric segments with the latter object version (e.g., the geometric segments are disjoint; the component sets do not share components); “added” when there was no prior object version (e.g., in the geographic region) and there is a latter object version; and “removed” when there was a prior object version (e.g., in the geographic region) and there is no latter object version. In a second illustrative example, two object representations are considered to be the “same” when the respective object versions have the same geometries; “modified” when the object version represented by the prior object representation (“prior object version”) shares geometric segments with the object version represented by the latter object representation (“latter object version”); “replaced” when the prior object version does not share geometric segments with the latter object version (e.g., the geometric segments are disjoint; the component sets do not share components); “added” when the prior object representation represents no object (e.g., is indicative of an empty space) and the latter object representation represents an object; and “removed” when the prior object representation represents an object and the latter object representation represents no object (e.g., is indicative of an empty space). However, the relationships can be otherwise defined. A single relationship is preferably determined between each pair of object versions and/or object representations; alternatively, multiple relationships can be determined between each pair of object versions and/or object representations. Collectively, the method (e.g., one or more instances of S300) can determine a set of relationships tracking a history of object changes over time (e.g., a building change history). The set of relationships can be determined for: a geographic region (e.g., a parcel), an object, or for any other entity. In a first variant, the set of relationships can be specific to an object (e.g., a building, a tree, a pool), wherein the representations connected by the set of relationships are associated with the same object, and describe the relationship between the object versions (e.g., object states; how the object has changed over time). 
In a second variant, the set of relationships can be specific to a geographic region (e.g., a parcel), wherein the representations connected by the set of relationships cooperatively describe how different objects within the geographic region are related (e.g., the relationship between buildings on the property over time). However, the set of relationships can be otherwise specified. This set of relationships50between the object versions40and/or object representations30can cooperatively form a graph (examples shown inFIG.6AandFIG.6B), but can be represented as a matrix, a table, a set of clusters, and/or otherwise represented. The graph can be a partially connected graph, a fully connected graph, and/or any other suitable type of graph. In a first variant, nodes of the graph can represent object versions, and edges of the graph (e.g., connecting pairs of nodes) represent relationships between the object versions. The nodes (e.g., object versions) can additionally or alternatively be associated with object representations, auxiliary data (e.g., measurement identifiers, timestamps, etc.), object information, and/or any other suitable data. In a second variant, the nodes of the graph can be object representations. In this variant, the object representations associated with the nodes can include object representations extracted from all available data for a given property or object, only the latest representation (e.g., to detect changes; be a 2-node graph), and/or represent any other set of data or timeframes. In this variant, object representations connected by a “same” relationship can be assigned the same object version identifier, while object representations connected by other relationships can be assigned different object version identifiers. The nodes are preferably ordered by time, but can alternatively be ordered by the relationship type, by statistical distance, or otherwise ordered. Connected nodes are preferably associated with the same geographic region, but can alternatively be associated with the same object and/or be otherwise associated. The edges are preferably associated with a relationship type (example shown inFIG.6AandFIG.6B), but can additionally and/or alternatively not be associated with a relationship type. In a first variant, the edges can be defined between representations of the same building (e.g., a relationship between the representations is classified as “unchanged” or “modified”); example shown inFIG.6AandFIG.6B. In a second variant, the edges can be defined between representations of different buildings (e.g., different buildings are associated with the same property). However, the edges can be otherwise defined. However, the set of relationships between object versions and/or object representations can be otherwise represented. The relationship50between two object versions40is preferably determined based on the respective object representations30(e.g., the object representations used to distinguish between the object versions, different object representations, etc.) (e.g., example shown inFIG.12), but can additionally or alternatively be determined based on the object information associated with the object versions (e.g., object information generated during each object version's timeframe, object information used to determine the object representations, etc.), auxiliary data associated with each object version, and/or any other suitable information. 
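The following Python sketch shows one way the version-history graph described above might be held in memory: nodes are object versions for a geographic region, and each edge between temporally adjacent versions carries a relationship type. Plain dataclasses are used instead of a graph library, and the field names are assumptions for illustration.

# Sketch: a simple time-ordered version-history graph for one geographic region.
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class VersionNode:
    object_id: str
    first_seen: date

@dataclass
class RelationshipEdge:
    earlier: str        # object_id of the earlier version
    later: str          # object_id of the later version
    relationship: str   # "unchanged" | "modified" | "replaced" | "added" | "removed"

@dataclass
class ObjectHistory:
    parcel_id: str
    nodes: List[VersionNode] = field(default_factory=list)
    edges: List[RelationshipEdge] = field(default_factory=list)

    def add_version(self, node: VersionNode, relationship: Optional[str] = None) -> None:
        """Append a version; link it to the previous version when a relationship is given."""
        if self.nodes and relationship is not None:
            self.edges.append(RelationshipEdge(self.nodes[-1].object_id,
                                               node.object_id, relationship))
        self.nodes.append(node)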
Object version relationships can be determined using a trained model, be determined using a ruleset, be calculated, be manually assigned, and/or be otherwise determined (e.g., examples shown inFIG.4). In a first variant, S300includes classifying the relationship between object representations (and/or respective object versions); example shown inFIG.7. The classifier can ingest a set of object representations (e.g., a representation pair) and output a relationship classification (e.g., “unchanged”, “modified”, “replaced”, “added”, “removed”, etc.). The representations can be feature vectors (e.g., determined using the appearance change-agnostic model), measurement segments, attribute vectors, and/or other representations. The representations are preferably the same representation type (e.g., both are segments, both are feature vectors, both are attribute vectors, etc.), but can alternatively be different representation types. The classifier can be trained using ground-truth data including object representation sets (e.g., vector pairs, extracted from different measurements) labeled with the respective relationship class (e.g., change type labels), and/or using any other training data. The ground-truth relationship class can be determined manually, labeled using contemporaneous permit or tax data (e.g., labeled as “modified” when a permit exists for the timeframe, labeled as “removed” when the tax records show no improvements/land only, labeled as “replaced” when the taxable value changes but no sale has occurred, etc.), but can alternatively be labeled using any other suitable method. For example, a set of appearance vectors extracted from different measurements by the same model can be fed to the classifier, wherein the classifier predicts a relationship label for the appearance vector set. In a second variant, S300includes determining the relationship between the object representations (and/or respective object versions) based on a comparison metric between the representations. In this variant, the representations can be a vector of attribute values (e.g., such as roof pitch, roof slope, etc.; example shown inFIG.8), be a feature vector, and/or be any other representation. The attribute within the vector can be selected to be indicative of object instance similarity (e.g., using SHAP values, lift, etc.), be unselected (e.g., include all attributes), or be otherwise selected. The comparison metric can be a distance metric (e.g., Euclidean distance, Chebyshev distance, etc.), a similarity metric (e.g., Jaccard similarity, cosine similarity, etc.), a dissimilarity metric, and/or any other suitable comparison metric. In a first embodiment, the distance metric values can be binned or thresholded, wherein each bin or threshold is associated with a different relationship class (e.g., small distances are binned to “unmodified” or “modified”, wherein large distances are binned to “added”, “removed”, or “heavily modified”, etc.). However, the distances can be otherwise interpreted. In a third variant, relationships between the representations (and/or respective object versions) can be determined based on rules and heuristics; example shown inFIG.9. In this variant, the representations are preferably geometric representations, but can be other representations. In a first example, each representation can include a set of roof facets. 
The relationship between a pair of representations (and/or respective object versions) can be considered: unchanged when the roof facets match; modified when the geometric representations share at least one roof facet; added when the prior geometric representation had no roof facets and the subsequent geometric representation has some roof facets; removed when the prior geometric representation had some roof facets and the subsequent geometric representation has no roof facets; and replaced when the pair of geometric representations both have roof facets but none match; example shown inFIGS.10A-10E. In this example, a structure can be considered the same structure when the subsequent and prior structures share a common roof facet or segment (e.g., when the structure is unchanged or is modified). Otherwise, the structure can be considered a different structure (e.g., a different object variant). In a second example, each representation can include a set of measurement segments (e.g., image segments or geometric segments). The relationship between a pair of representations (and/or respective object versions) can be considered: unchanged when the segments match; modified when the segments overlap; added when there was no prior measurement segment and there is a subsequent measurement segment; and removed when there was a prior measurement segment and there is not a subsequent measurement segment. In a second example, permits (e.g., remodeling permits) relevant to the object (e.g., property) from the same timeframe as the pair of representations can indicate that an object was modified. In a third example, tax information relevant to the property from the same timeframe as the representations can signal that an object was modified. This auxiliary data (e.g., permit data, tax data, etc.) can be used to validate, disambiguate, specify, or otherwise influence a relationship between representations. However, rules, criteria, and/or heuristics can otherwise determine relationships between the representations. In a fourth variant, relationships between the representations can be determined based on a cascade of analysis modules. In a first example, a first analysis module can use geometric analysis to determine coarse changes (e.g., added, removed, same footprint, different footprint, etc.) and a second analysis module can use appearance-based methods and/or classifiers to distinguish between finer changes after the coarse changes have been determined (e.g., detect occurrence of a remodel if same footprint, determine magnitude of change if different footprint, etc.). In a second example, a first analysis module can use appearance-based methods to determine whether the structure has changed, a second analysis module can use geometric representation (e.g., extracted from the same measurement or a contemporaneous measurement) to determine whether the structure remains the same, and a third analysis module can use appearance representation to determine (e.g., classify) the type of structure change. In a third example, the second variant can be used to determine the object representations that represent the same object versions (e.g., wherein vectors that are substantially similar represent the same object versions), then the first variant or the third variant can be used to classify the relationships between the object versions. However, the cascade of analysis modules can otherwise determine relationships between the representations. However, the relationships can be otherwise determined. 
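By way of non-limiting example, the rule-based variant described above can be sketched as follows, where each geometric representation is reduced to a set of segment identifiers (e.g., roof facets or measurement segments); the function name and the use of plain Python sets are assumptions for illustration rather than a prescribed implementation.

# Minimal sketch: classify the relationship between a prior and a latter
# geometric representation, each given as a set of segment identifiers.
def classify_relationship(prior_segments: set, latter_segments: set) -> str:
    if not prior_segments and latter_segments:
        return "added"        # no prior object version, latter version exists
    if prior_segments and not latter_segments:
        return "removed"      # prior version exists, no latter version
    if prior_segments == latter_segments:
        return "unchanged"    # geometries substantially match
    if prior_segments & latter_segments:
        return "modified"     # at least one shared segment / roof facet
    return "replaced"         # both exist but share no segments

# Example: a building that kept one roof facet but gained another
print(classify_relationship({"facet_a", "facet_b"}, {"facet_a", "facet_c"}))  # "modified"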
S300can additionally include identifying object versions40based on the relationships. In a first variant, an object version can be associated with or identified as a set of related object relationships (e.g., set of connected nodes); example shown inFIG.6A. In a second variant, an object version can be associated with or identified as representations related by a predetermined subset of relationship types (e.g., only unchanged or “same”, example shown inFIG.6B; only unchanged and modified, etc.). In a first example, object representations representing the same object version are identified, then relationships between the object versions are determined (e.g., example shown inFIG.12). In a second example, relationships between the object representations can be determined, wherein object versions can be determined based on the relationships (e.g., using a set of rules), and the relationships between different object versions can be determined from (e.g., inherited from) the relationships between the respective object representations (e.g., example shown inFIG.13). In a third variant, an object version can be associated with or identified as a set of substantially similar object representations. However, the object version can be otherwise determined. Each object version40can be assigned one or more object identifiers, as discussed above. Each determined object version40can be associated with: a timeframe, the geographic region (e.g., from the object representations and/or object information), the related object representations, the object information used to determine the related object representations, auxiliary data spatially and/or temporally related to the object version, and/or other information. The timeframe for an object version can encompass all of the related object representations' timestamps (e.g., examples shown inFIG.12andFIG.13) and/or exclude one or more related object representations' timestamps. The beginning of the timeframe for an object version can be: the timestamp of the first related object representation, the timestamp of the last object representation associated with a prior object version, a manually-specified time, a time determined from change-related object information (e.g., permit information, construction information, etc.), and/or otherwise determined. The end of the timeframe for an object version can be: the timestamp of the last object representation related to the object version, the timestamp of the first object representation associated with a subsequent object version, a manually-specified time, a time determined from completion-related object information (e.g., final certification, insurance documentation, etc.), and/or otherwise determined. However, the object version timeframe can be otherwise defined. The auxiliary data associated with an object version can be associated with the object version: spatially (e.g., both are associated with the same geographic region), temporally (e.g., the auxiliary data has a timestamp falling within the object version timeframe), spatiotemporally, by object information (e.g., both the object representation associated with the object version and the auxiliary data were determined from the same object information), associated based on parameters (e.g., spatial parameter, temporal parameter, etc.) shared with the object version and/or the measurements associated with the object version, and/or be otherwise associated with the object version. 
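By way of non-limiting example, the following Python sketch groups object representations connected by "same" relationships into object versions and derives each version's timeframe from the earliest and latest related timestamps; the data shapes and names are hypothetical.

from collections import defaultdict

# Minimal sketch: representations connected by a "same" relationship share an
# object version, and the version timeframe spans its members' timestamps.
def group_into_versions(representation_ids, same_pairs, timestamps):
    parent = {r: r for r in representation_ids}

    def find(r):
        while parent[r] != r:
            parent[r] = parent[parent[r]]   # path compression
            r = parent[r]
        return r

    for a, b in same_pairs:                 # union representations labeled "same"
        parent[find(a)] = find(b)

    groups = defaultdict(list)
    for r in representation_ids:
        groups[find(r)].append(r)

    # one object version per group, with a timeframe covering its members
    return [{"members": members,
             "start": min(timestamps[r] for r in members),
             "end": max(timestamps[r] for r in members)}
            for members in groups.values()]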
Auxiliary data can include: object information from other vendors, object representations determined using object information from other vendors, object descriptions (e.g., property descriptions), permit data, insurance data (e.g., loss data, assessment data, etc.), inspection data, appraisal data, broker price opinion data, property valuations, attribute and/or component data (e.g., values), and/or other data. However, the object versions can be associated with any other suitable information. 4.4 Generating an Analysis Based on the Relationships S400. Generating an analysis based on the relationships S400functions to describe an object version, the timeseries of object versions, and/or other relationships between object versions. The analysis is preferably generated in response to a query (e.g., the query including an object identifier, geographic coordinates, an address, a time, etc.), but can additionally and/or alternatively be generated in response to a request from an endpoint, be generated before the request is received (e.g., when a specific relationship is detected, when new property data is received, etc.), be generated for each change (e.g., calculated when a changed relationship is determined), and/or any other suitable time. The analysis can be generated by traversing through a graph or a relationship set, by retrieving data associated with an object version, and/or be otherwise generated. In an example, S400can include: receiving a request identifying an object version; identifying the relationship set (e.g., graph or subgraph) associated with the object version; and returning information extracted from measurements associated with (e.g., used to generate) the object representations within the relationship set (e.g., example shown inFIG.14). In a first illustrative example, S400can include: receiving a request with an address and a timestamp (e.g., the time at which the property was insured); identifying the relationship set associated with the address (e.g., the geographic region's relationship set); identifying the object version associated with the timestamp (e.g., the object version with the timeframe encompassing the timestamp); and generating a response based on the measurements associated with the object version (e.g., used to generate the object representations associated with the object version). In a second example, S400can include: receiving the request identifying the object version and a set of requested data types, and retrieving object data associated with the object version that satisfies the request. However, the analysis can be otherwise initiated and/or used. 
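By way of non-limiting example, the first illustrative query flow above could be sketched as follows, where relationship sets are keyed by address and each object version carries a timeframe; the storage layout, keys, and function names are hypothetical.

# Minimal sketch of the query flow: given an address and a timestamp, locate the
# object version whose timeframe encompasses the timestamp and return data
# associated with it.
def answer_query(relationship_sets, address, timestamp):
    versions = relationship_sets.get(address, [])   # relationship set for the region
    for version in versions:
        if version["start"] <= timestamp <= version["end"]:
            return {
                "version_id": version.get("version_id"),
                "measurements": version.get("measurements", []),
                "relationships": version.get("relationships", []),
            }
    return None   # no object version covers the requested time

# Example call (hypothetical data layout):
# answer_query(relationship_sets, "123 Main St", insured_at_timestamp)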
Examples of analyses that can be determined include: the relationship between a first and second object version (e.g., the next object version, a prior object version, the current object version etc.); a timeseries of changes between a first and second object version (e.g., change type, change time, building development history, parcel development history, etc.); whether the requested object version is the most recent object version for the geographic region; the most recent object version for a requested geographic region; the timeframe or duration for a requested object version; data associated with a requested object version (e.g., measurements, object representations, feature vectors, attribute values, etc.); an analysis of the timeseries that the requested object version is a part of (e.g., number of changes, types of changes, average object version duration, etc.); an analysis of the object versions associated with a requested timeframe; an analysis of the auxiliary data associated with the object version (e.g., statistical analysis, lookup, prediction, etc.); anomaly detection (e.g., a temporary structure); and/or other analyses (e.g., examples shown inFIG.4). In a first example, S400can include identifying measurements for an object version and generating the analysis from the set of identified measurements. The identified measurements are preferably associated with the object version's timeframe (e.g., sampled during object version's timeframe; etc.) and/or be otherwise associated with the object version. In a first embodiment, the analysis can include selecting the best measurement (e.g., image) of the object version based on a set of criteria (e.g., measurement with minimal shadows, measurement with maximal resolution, most up-to-date measurement, and least occluded, etc.). In a second embodiment, the analysis can include generating a synthetic measurement from said measurements (e.g., averaged measurement across a sliding window of measurements, etc.). In a third embodiment, attribute values (e.g., roof slope, etc.) can be extracted from the identified measurements (and/or best measurement) and be used as the attribute values for the object version. In a fourth embodiment, object polygons for the object version can be extracted from the identified measurements. In a second example, S400can include limiting returned data to data for the specified object variant and/or modifications thereof. For example, if the structure of the specified object (e.g., identified by a timestamp, an object identifier, etc.) was wholly replaced by a replacement structure, information for the original structure can be returned, and information for the replacement structure would not be returned. In a third example, S400can include generating a timeseries of changes for a parcel. This can include compiling the time series of relationships between serial representations, serial object versions, or be otherwise performed. In a fourth example, S400can include determining whether an object version has changed since a reference time or reference object variant. In this variant, the relationship set can be analyzed to determine whether any change-associated relationships appear after the reference time or reference object variant. In a fifth example, S400can include generating an analysis based on relationships between object variants within geospatial extent (e.g., a set of geographic coordinates of a neighborhood) and/or a set of object identifiers (e.g., a set of addresses). 
In a first example, the analysis can include neighborhood-level statistics on object development (e.g., percentage of objects undergoing development within a neighborhood, average percentage of objects undergoing development across a set of neighborhoods). In a second example, the analysis can include statistics for a list of addresses (e.g., average frequency of change occurrence between the addresses within a time interval). In a sixth example, S400includes determining whether an object has changed based on the latest representation and a new representation determined from new property data. When the object has changed (e.g., the relationship between the latest and new representations is indicative of change), the new representation can be stored as the latest representation for the object (and/or a new object version can be created), and the prior representation can optionally be discarded; alternatively, the representations can be otherwise handled. Object information extracted from the new property data can optionally be stored in association with the new object version, and the old object information can optionally be discarded or associated with the prior object version. When the object has not changed (e.g., the relationship between the latest and new representations is not indicative of change), the object information extracted from the new property data can be: discarded, merged with the prior object information, and/or otherwise used. However, any other analysis can be performed. S400can additionally include providing the analysis to an endpoint (e.g., an endpoint on a network, customer endpoint, user endpoint, automated valuation model system, etc.) through an interface, report, or other medium. The interface can be a mobile application, web application, desktop application, an API, and/or any other suitable interface executing on a user device, gateway, and/or any other computing system. However, S400can be otherwise performed. 5. Use Cases The relationship set can be used to track changes of an object over time (e.g., structure modified, structure replaced, structure added, structure removed, auxiliary structure added, auxiliary structured removed, actively under construction, etc.), be used to associate measurements of the same object instance from different modalities or vendors together (e.g., example shown inFIG.11), and/or otherwise used. 
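By way of non-limiting example, the following Python sketch illustrates the update flow of the sixth example of S400 above, in which a newly extracted representation is compared against the stored latest representation to decide whether a new object version should be created; the storage layout, names, and the injected change test are hypothetical.

# Minimal sketch: when new property data arrives, either open a new object
# version (change detected or first observation) or merge the new object
# information into the current version.
def update_latest(store, object_id, new_representation, new_info, is_change):
    record = store.setdefault(object_id, {"latest": None, "versions": []})
    if record["latest"] is None or is_change(record["latest"], new_representation):
        # change detected: store the new representation as the latest and
        # create a new object version for it
        record["versions"].append({"representation": new_representation,
                                   "info": [new_info]})
        record["latest"] = new_representation
    else:
        # no change: keep the current version and merge the new object information
        record["versions"][-1]["info"].append(new_info)
    return record

# is_change could, for instance, wrap the relationship classifier sketched
# earlier: lambda a, b: classify_relationship(a, b) not in ("unchanged", "same")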
The relationship set can be used for: insurance underwriting (e.g., verify that the originally-insured property still exists or is unchanged; determine pricing of insurance depending on the change; optimize inspection to identify where to send inspectors; determine when to reach out to adjust insurance policy when remodeling is detected; identify which properties to initiate claims for; create opportunities to proactively address property change issues before they result in insurance claims; reconcile object versions with those in their records or those that were insured; etc.), real estate property investing (e.g., identify underpriced properties that can increase in value through renovation and/or repairs; incorporate the change into a valuation model to establish the offer price; determine when construction, remodeling, and/or repairing has occurred; identify properties in portfolio that have suffered damage; etc.), real estate management (e.g., identify areas that can be renovated, repaired, added, and/or removed, etc.), real estate valuations (e.g., use change as an input to an automated valuation model; use change to detect error in property evaluation models; use change as a supplement to a property-level valuation report; etc.), real estate and loan trading (e.g., detect illegal builds; identify deterioration since prior due diligence was completed; incorporate the change into collateral valuation in mortgage origination and in secondary mortgage market; etc.), and/or otherwise used. In an illustrative example, an insurer can use the relationship set to determine whether a multi-structure campus under development, such as an apartment complex, is under-insured due to recent construction or due to other discrepancies. However, the relationship set can be otherwise used. Different processes and/or elements discussed above can be performed and controlled by the same or different entities. In the latter variants, different subsystems can communicate via: APIs (e.g., using API requests and responses, API keys, etc.), requests, and/or other communication channels. Communications between systems can be encrypted (e.g., using symmetric or asymmetric keys), signed, and/or otherwise authenticated or authorized. All references cited herein are incorporated by reference in their entirety, except to the extent that the incorporated material is inconsistent with the express disclosure herein, in which case the language in this disclosure controls. Alternative embodiments implement the above methods and/or processing modules in non-transitory computer-readable media, storing computer-readable instructions that, when executed by a processing system, cause the processing system to perform the method(s) discussed herein. The instructions can be executed by computer-executable components integrated with the computer-readable medium and/or processing system. The computer-readable medium may include any suitable computer readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (CD or DVD), hard drives, floppy drives, non-transitory computer readable media, or any suitable device. The computer-executable component can include a computing system and/or processing system (e.g., including one or more collocated or distributed, remote or local processors) connected to the non-transitory computer-readable medium, such as CPUs, GPUs, TPUS, microprocessors, or ASICs, but the instructions can alternatively or additionally be executed by any suitable dedicated hardware device. 
Embodiments of the system and/or method can include every combination and permutation of the various system components and the various method processes, wherein one or more instances of the method and/or processes described herein can be performed asynchronously (e.g., sequentially), contemporaneously (e.g., concurrently, in parallel, etc.), or in any other suitable order by and/or using one or more instances of the systems, elements, and/or entities described herein. Components and/or processes of the following system and/or method can be used with, in addition to, in lieu of, or otherwise integrated with all or a portion of the systems and/or methods disclosed in the applications mentioned above, each of which are incorporated in their entirety by this reference. As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the preferred embodiments of the invention without departing from the scope of this invention defined in the following claims. | 95,406 |
11861844 | DETAILED DESCRIPTION The exemplary embodiments of the present disclosure will be described in further detail below with reference to the drawings. Although the drawings illustrate the exemplary embodiments of the present disclosure, it should be understood that the present disclosure may be implemented in various forms, which should not be limited by the embodiments illustrated herein. In contrast, the purpose of providing those embodiments is to more clearly understand the present disclosure, and to completely convey the scope of the present disclosure to a person skilled in the art. It should be noted that, provided that no conflict arises, the embodiments and the features of the embodiments of the present disclosure can be combined. The present disclosure will be described in detail below with reference to the drawings and the embodiments. The First Embodiment Referring toFIG.1,FIG.1is a flow chart of the method for acquiring a three-value-processed image according to an embodiment of the present disclosure, which may particularly comprise the following steps: Step101: in a process of travelling of a vehicle, acquiring image data of an object to be identified. In an embodiment of the present disclosure, the object to be identified during the travelling of the vehicle is captured by using a vision system of the vehicle. For example, the acquired object to be identified is shown inFIG.2.FIG.2may be a global image acquired from a camera in the vision system, wherein the image area marked by the white block is the main object currently to be identified by the vehicle, which is the travelling road. In order to improve the efficiency of the image processing in the later stage, as shown inFIG.3, the image processing in the later stage mainly focuses on the image of the area in the white block. Step102: acquiring first differential-image data of the image data. In an embodiment of the present disclosure, the image may be processed by using a Sobel filter, to acquire the first differential-image data. As shown inFIG.3, the image to be processed is resampled and smoothing-processed, and subsequently the image is processed by using the Sobel filter to obtain the differential-image data. Particularly, the Sobel operator shown in the formula (1) is used:

FFF = [ -1  0  1
        -2  0  2
        -1  0  1 ]    (1)

As shown inFIG.4, a Raster Scan scans each of the pixel points in the image in the horizontal direction, and transforms the original image XXX(m,n) into the differential image ddd(m,n), which may particularly refer to the following code:

for m = 2:mm-1
    for n = 2:nn-1
        W = XXX(m-1:m+1, n-1:n+1);
        WF = W .* FFF;
        ddd(m,n) = sum(WF(:));
    end
end
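By way of non-limiting example, the raster-scan convolution above can equivalently be expressed in Python/NumPy as follows; the function name and the use of NumPy are assumptions for illustration and mirror the loop given above.

import numpy as np

# Illustrative sketch of Step 102: convolve a grayscale image with the Sobel
# kernel FFF of formula (1) to obtain the differential image ddd.
FFF = np.array([[-1, 0, 1],
                [-2, 0, 2],
                [-1, 0, 1]], dtype=float)

def sobel_differential(XXX):
    mm, nn = XXX.shape
    ddd = np.zeros_like(XXX, dtype=float)
    for m in range(1, mm - 1):              # raster scan over the interior pixels
        for n in range(1, nn - 1):
            W = XXX[m - 1:m + 2, n - 1:n + 2]
            ddd[m, n] = np.sum(W * FFF)     # element-wise product, then sum
    return ddd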
Step103: by performing predetermined processing to an image in an image database that corresponds to the object to be identified, obtaining a three-value-processing coefficient. In an embodiment of the present disclosure, particularly, the differential image is three-value-processed by using the formula (2):

ttt(m,n) =  1,  if ddd(m,n) > ratio*max(|ddd(:)|)
           -1,  if ddd(m,n) < -ratio*max(|ddd(:)|)
            0,  otherwise    (2)

In an embodiment of the present disclosure, particularly, the process comprises, firstly, by performing three-value processing with different thresholds on the image data of the object to be identified, obtaining empirical knowledge of the optimum identification results, and, according to that empirical knowledge, obtaining the range of the cumulative distribution probability of the positive-direction boundary pixels or the negative-direction boundary pixels of the three-value-processed image data; and then, according to the formula (2), acquiring the diagram of the differential-value distribution of the positive-direction boundary pixels of the differential-image data, and the cumulative distribution probability of the positive-direction boundary pixels or the negative-direction boundary pixels, and deducing in reverse the three-value-processing coefficient that corresponds to the above-described empirical range of the cumulative distribution probability. For example,FIG.5shows a plurality of three-value-processed images that are obtained from the negative-direction boundary pixels in the plurality of differential diagrams. It can be determined by testing and observation by the researcher that all of the four diagrams satisfy the preset condition; in other words, they present the characteristics of little noise and clear boundaries. Furthermore, the range of the cumulative distribution probability of the positive-direction boundary pixels that corresponds to the four diagrams is 97%-99%, and the three-value-processing coefficient is determined from this range of the probability distribution. Therefore, as shown inFIG.6, the differential-value-distribution diagram of the positive-direction boundary pixels that corresponds to the differential diagram shown inFIG.3(the upper half ofFIG.6) and the cumulative distribution probability of the positive-direction boundary pixels and the negative-direction boundary pixels (the lower half ofFIG.6) are acquired. As the predetermined cumulative distribution probability previously acquired is 97%-99%, in the differential-value-distribution diagram shown in the upper half ofFIG.6, it can be acquired that the three-value-processing coefficient (ratio) that corresponds to the cumulative distribution probability of 97%-99% is 0.1-0.4. Step104: according to a product between the three-value-processing coefficient and a maximum pixel value in the first differential-image data, obtaining a three-value-processing threshold. In an embodiment of the present disclosure, by multiplying the acquired three-value-processing coefficient and the maximum pixel value of the acquired first differential-image data, the three-value-processing threshold is obtained. As shown inFIG.6, if the three-value-processing coefficient is 0.3, then, according to the distribution of the pixel values relative to the maximum pixel value shown inFIG.6(a), it can be obtained that the corresponding cumulative distribution probability is approximately 99%, and the three-value-processing threshold is therefore 0.3 times the maximum pixel value of the first differential-image data.
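By way of non-limiting example, the reverse deduction of the three-value-processing coefficient from a target cumulative distribution probability, and the formation of the threshold in Step 104, can be sketched in Python/NumPy as follows; taking the distribution over pixel magnitudes (rather than over the positive-direction or negative-direction boundary pixels separately) is a simplification, and the function and variable names are hypothetical.

import numpy as np

# Illustrative sketch of Steps 103-104: deduce ratio from an empirically chosen
# cumulative distribution probability, then form Threshold = ratio * max(|ddd(:)|).
def ratio_from_cumulative_probability(ddd, target_probability=0.98):
    magnitudes = np.abs(ddd).ravel()
    # pixel magnitude below which `target_probability` of all pixels fall
    cutoff = np.quantile(magnitudes, target_probability)
    return cutoff / magnitudes.max()        # three-value-processing coefficient

def three_value_threshold(ddd, ratio):
    return ratio * np.abs(ddd).max()        # Step 104: product with the maximum pixel value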
Step105: by application of the three-value-processing threshold in the differential-image data, obtaining the three-value-processed image. In an embodiment of the present disclosure, the differential-image data obtained above is further three-value-processed by using the acquired three-value-processing threshold, to obtain the three-value-processed image of the differential-image data. Particularly, the following code may be referred to, to acquire the three-value-processed image ttt(m,n):

for m = 1:mm
    for n = 1:nn
        ttt(m,n) = 0;
        if ddd(m,n) > Threshold
            ttt(m,n) = 1;
        elseif ddd(m,n) < -Threshold
            ttt(m,n) = -1;
        end
    end
end

By the above three-value-image processing, the values of each of the pixel points of the acquired three-value-processed image ttt(m,n) are selected from the set {-1, 0, 1}. Accordingly, the pixel points in the differential image ddd(m,n) that are greater than a preset threshold Threshold are assigned to be 1, wherein the value of the Threshold is ratio*max(|ddd(:)|); the pixel points in the differential image ddd(m,n) that are less than the negative of the preset threshold (i.e., less than -Threshold) are assigned to be -1; and the remaining pixel points are assigned to be 0. Accordingly, the positive-direction boundary pixels with the value of 1 and the negative-direction boundary pixels with the value of -1 can be distinguished, and all of the pixels other than the boundary pixels are assigned to be 0. For example, if the three-value-processing coefficient (ratio) obtained according to the step103is 0.1-0.4, then, by taking the ratio values of 0.1, 0.2, 0.3 and 0.4, three-value-processed images having a good effect are obtained. The embodiments of the present disclosure, by, in a process of travelling of a vehicle, acquiring image data of an object to be identified; acquiring first differential-image data of the image data; by performing predetermined processing to an image in an image database that corresponds to the object to be identified, obtaining a three-value-processing coefficient; according to a product between the three-value-processing coefficient and a maximum pixel value in the first differential-image data, obtaining a three-value-processing threshold; and by application of the three-value-processing threshold in the differential-image data, obtaining the three-value-processed image, realize the purpose of effectively and systematically acquiring a three-value-processed image with a good effect of the original image. The Second Embodiment Referring toFIG.7,FIG.7is a flow chart of the method for acquiring a three-value-processed image according to an embodiment of the present disclosure, which may particularly comprise the following steps: Step201: in a process of travelling of a vehicle, acquiring image data of an object to be identified. This step is the same as the step101, and is not discussed here in detail. Step202: performing grayscale processing to the image data, to acquire a grayscale map of the image data of the object to be identified. In an embodiment of the present disclosure, as shown inFIG.3, the original image is smoothed and grayscale-processed. Particularly, an image is formed by pixels of different grayscale values, and the distribution of the grayscales in the image is an important feature of the image. Moreover, the grayscale histogram expresses the distribution of the grayscales in the image, and can very intuitively exhibit the proportions that the grayscale levels in the image account for. Step203: processing the grayscale map by using a Sobel algorithm, to acquire the first differential-image data of the image data.
In an embodiment of the present disclosure, as described in the step102, the grayscale map obtained in the step202is processed by using the Sobel algorithm, to acquire the differential-image data of the grayscale map, wherein the differential-image data are a matrix that is obtained from the convolution between a matrix corresponding to the grayscale map and a Sobel matrix. Step204: performing difference processing to the image in the image database that corresponds to the object to be identified, to obtain second differential-image data. The image database contains historical image data acquired by the object to be identified within a predetermined time period. In an embodiment of the present disclosure, the three-value-processed image is acquired by photographing a similar scene or the object to be identified within a predetermined time period, and the original images that are acquired multiple times are stored in an image database, to facilitate to acquire the second differential image at any time. Step205: by normalizing the second differential-image data, acquiring a differential-value distribution and a cumulative differential-value distribution of the second differential-image data. In an embodiment of the present disclosure, difference-processing test is performed to the grayscale map obtained above multiple times, to obtain a plurality of second differential images. If 10 differential image tests are performed, and three-value processing tests of different thresholds are performed, as shown inFIG.5, by the visual inspection of the technician, four diagrams of a satisfactory effect are obtained. It can be seen that the corresponding edges in the four diagrams are clear and visible, so the four diagrams are selected to be the three-value-processed image that satisfies the standard predetermined by the technician. Therefore, as shown inFIG.6, the corresponding second differential-image data in the four diagrams are selected and normalization-processed, to acquire the differential-value-distribution diagram6(a) and the cumulative differential-value-distribution diagram6(b) of the second differential image. Step206: according to the differential-value distribution and the cumulative differential-value distribution, acquiring a ratio of a separated noise to a signal in the second differential-image data, and determining the ratio to be the three-value-processing coefficient. In an embodiment of the present disclosure, as, inFIG.6, the Sobel operator processes the positive-direction boundary pixels, the total pixels in the image NN=55825, the maximum pixel value max=975, the curve ofFIG.6(a)represents the proportional relation between the pixel values and the maximum pixel value max when the three-value-processing coefficient ratio is 0-1, and the cumulative distribution of the pixel values in the differential image shown inFIG.6(b), when mapped inFIG.6(a), is shown as a plurality of vertical lines, the correspondence relation between different pixel cumulative distributions and pixel differential-value distributions is exhibited. In practical applications, the original images are processed in the same scene, the time quantity of the three-value processing of the images depends on the practical condition, and the three-value-processed image that satisfies the predetermined standard is selected by the detection of the naked eyes of the technician. 
The predetermined standard is determined by the technician according to the particular demands, for example, little noises, clear edges and so on. Subsequently, the histogram of the image selected above is acquired, i.e., the cumulative differential-value distribution probability of the pixel data shown inFIG.6(b), and thus it can be known that the cumulative pixel distribution probability of the four diagrams described inFIG.5is 97%-99%. According to the cumulative pixel distribution probability that has been determined, it is determined that a corresponding three-value-processing coefficient of approximately 0.3 is appropriate. In the subsequent three-value-image processing, for the same photographing scene or the same object to be identified, the three-value-processing coefficient may be directly used for the three-value-processed image processing, to obtain the most suitable three-value-processed image. Preferably, the step further comprises: Step A1: performing grayscale processing to the image in the image database that corresponds to the object to be identified, to obtain a grayscale map of the image data. This step is similar to the step202, and is not discussed here in detail. Step A2: performing three-value processing to the grayscale map, to obtain a plurality of test three-value-processed images containing pixel data of the object to be identified. In an embodiment of the present disclosure, assuming that the three-value-processed image is formed by a plurality of lines in the horizontal direction, then on each of the lines are distributed a plurality of pixel values of the three-value-processed image. As shown inFIG.8, it is a 145×385 three-value-processed image, so the three-value-processed image has 145 horizontal lines. As shown inFIG.9, assuming that the value of the positive-direction boundary pixels distributed in one of the horizontal lines is 6-12, then, regarding one road, each of the road boundaries has 3-6 pixel points, which is the horizontal pixel quantity. Step A3: according to a preset pixel quantity and a preset pixel-occurrence probability, determining pixel quantities of the plurality of test three-value-processed images. In an embodiment of the present disclosure, further assuming that the probability of the existence of the road boundaries is 50%, then, the total pixel quantity is 12*0.5*145=870, and the cumulative distribution probability of the pixels is 1-870/(145*385)=98.4%; in other words, 98.4% is the preset distribution probability. Step A4: according to the pixel quantities of the test three-value-processed images, acquiring a cumulative differential-value distribution probability of the pixel data. In an embodiment of the present disclosure, from the differential-image-data matrix obtained above, the corresponding positive-direction boundary pixels (the pixels that are positive values) or negative-direction boundary pixels (the pixels that are negative values) are extracted, to obtain their histogram distribution that is similar to that ofFIG.6. The histogram of the differential-image data is analyzed, to obtain the differential-value distribution and the cumulative differential-value distribution probability of the pixel data of the differential-image data. Step A5: according to the cumulative differential-value distribution probability, acquiring the three-value-processing coefficient. 
In an embodiment of the present disclosure, because it has previously been determined that 98.4% is the preset distribution probability, the ratio that corresponds to the preset distribution probability can be found in the differential-value-distribution diagram of the pixel data, and this ratio is the three-value-processing coefficient. Step207: by application of the three-value-processing threshold in the differential-image data, obtaining the three-value-processed image. This step is the same as the step105, and is not discussed here in detail. In practical applications, the method provided above is not limited to road identification, but may also be applied to object identification. As shown inFIG.10, if the traffic cone in a road is to be identified, firstly, the photographed original image is smoothed, grayscale-processed and difference-processed to obtain the differential image, and then, according to the differential image, the corresponding histogram is obtained, i.e., the differential-value distribution and the cumulative differential-value distribution probability of the pixel data of the differential-image data. It is known empirically that the cumulative distribution probability is 96%-99%, and thus the corresponding three-value-processing coefficient (ratio) is 0.1-0.4. Accordingly, the grayscale map is three-value-processed according to the three-value-processing coefficient, to obtain four three-value-processed images, from which the technician may, according to the practical demands, select a corresponding three-value-processed image for further identification and processing of the traffic cone. The embodiments of the present disclosure, by, in a process of travelling of a vehicle, acquiring image data of an object to be identified; performing grayscale processing to the image data, to acquire a grayscale map of the image data of the object to be identified; processing the grayscale map by using a Sobel algorithm, to acquire the first differential-image data of the image data; performing difference processing to the image in the image database that corresponds to the object to be identified, to obtain second differential-image data; by normalizing the second differential-image data, acquiring a differential-value distribution and a cumulative differential-value distribution of the second differential-image data; and according to the differential-value distribution and the cumulative differential-value distribution, acquiring a ratio of a separated noise to a signal in the second differential-image data, and determining the ratio to be the three-value-processing coefficient, realize the purpose of effectively and systematically acquiring a three-value-processed image with a good effect of the original image. The Third Embodiment Referring toFIG.11,FIG.11is a structural block diagram of the device for acquiring a three-value-processed image according to an embodiment of the present disclosure.
The device comprises: an object-to-be-identified-image acquiring module301configured for, in a process of travelling of a vehicle, acquiring image data of an object to be identified; a differential-image-data acquiring module302configured for acquiring first differential-image data of the image data; a three-value-processing-coefficient acquiring module303configured for, by performing predetermined processing to an image in an image database that corresponds to the object to be identified, obtaining a three-value-processing coefficient; a three-value-processing-threshold acquiring module304configured for, according to a product between the three-value-processing coefficient and a maximum pixel value in the first differential-image data, obtaining a three-value-processing threshold; and a three-value-processed-image acquiring module305configured for, by application of the three-value-processing threshold in the differential-image data, obtaining the three-value-processed image. Referring toFIG.12,FIG.12is a structural block diagram of the device for acquiring a three-value-processed image according to an embodiment of the present disclosure. The device comprises: an object-to-be-identified-image acquiring module301configured for, in a process of travelling of a vehicle, acquiring image data of an object to be identified; and a differential-image-data acquiring module302configured for acquiring first differential-image data of the image data. Preferably, the differential-image-data acquiring module302comprises: a grayscale-map acquiring submodule3021configured for performing grayscale processing to the image data, to acquire a grayscale map of the image data of the object to be identified; a differential-image-data acquiring submodule3022configured for processing the grayscale map by using a Sobel algorithm, to acquire the first differential-image data of the image data; and a three-value-processing-coefficient acquiring module303configured for, by performing predetermined processing to an image in an image database that corresponds to the object to be identified, obtaining a three-value-processing coefficient. Preferably, the three-value-processing-coefficient acquiring module comprises: a second-differential-image-data acquiring submodule3031configured for performing difference processing to the image in the image database that corresponds to the object to be identified, to obtain second differential-image data; a differential-distribution acquiring submodule3032configured for, by normalizing the second differential-image data, acquiring a differential-value distribution and a cumulative differential-value distribution of the second differential-image data; and a three-value-processing-coefficient acquiring submodule3033configured for, according to the differential-value distribution and the cumulative differential-value distribution, acquiring a ratio of a separated noise to a signal in the second differential-image data, and determining the ratio to be the three-value-processing coefficient. 
Preferably, the three-value-processing-coefficient acquiring module303comprises: a grayscale-map acquiring submodule configured for performing grayscale processing to the image in the image database that corresponds to the object to be identified, to obtain a grayscale map of the image data; a testing submodule configured for performing three-value processing to the grayscale map, to obtain a plurality of test three-value-processed images containing pixel data of the object to be identified; a pixel-quantity acquiring submodule configured for, according to a preset pixel quantity and a preset pixel-occurrence probability, determining pixel quantities of the plurality of test three-value-processed images; a distribution-value acquiring submodule configured for, according to the pixel quantities of the test three-value-processed images, acquiring a cumulative differential-value distribution probability of the pixel data; and a three-value-processing-coefficient acquiring submodule configured for, according to the cumulative differential-value distribution probability, acquiring the three-value-processing coefficient. a three-value-processing-threshold acquiring module304configured for, according to a product between the three-value-processing coefficient and a maximum pixel value in the first differential-image data, obtaining a three-value-processing threshold; and a three-value-processed-image acquiring module305configured for, by application of the three-value-processing threshold in the differential-image data, obtaining the three-value-processed image. Preferably, the image database contains historical image data acquired by the object to be identified within a predetermined time period. The embodiments of the present disclosure, by using an object-to-be-identified-image acquiring module configured for, in a process of travelling of a vehicle, acquiring image data of an object to be identified; a differential-image-data acquiring module configured for acquiring first differential-image data of the image data; a three-value-processing-coefficient acquiring module configured for, by performing predetermined processing to an image in an image database that corresponds to the object to be identified, obtaining a three-value-processing coefficient; a three-value-processing-threshold acquiring module configured for, according to a product between the three-value-processing coefficient and a maximum pixel value in the first differential-image data, obtaining a three-value-processing threshold; and a three-value-processed-image acquiring module configured for, by application of the three-value-processing threshold in the differential-image data, obtaining the three-value-processed image, realize the purpose of effectively and systematically acquiring a three-value-processed image with a good effect of the original image. The embodiments of the present disclosure further include a vehicle, wherein the vehicle comprises the method or device for acquiring a three-value-processed image of any one of the first embodiment to the third embodiment. The above-described device embodiments are merely illustrative, wherein the units that are described as separate components may or may not be physically separate, and the components that are displayed as units may or may not be physical units; in other words, they may be located at the same one location, and may also be distributed to a plurality of network units. 
Part or all of the modules may be selected according to the actual demands to realize the purposes of the solutions of the embodiments. A person skilled in the art can understand and implement the technical solutions without paying creative work. Each component embodiment of the present disclosure may be implemented by hardware, or by software modules that are operated on one or more processors, or by a combination thereof. A person skilled in the art should understand that some or all of the functions of some or all of the components of the calculating and processing device according to the embodiments of the present disclosure may be implemented by using a microprocessor or a digital signal processor (DSP) in practice. The present disclosure may also be implemented as apparatus or device programs (for example, computer programs and computer program products) for implementing part of or the whole of the method described herein. Such programs for implementing the present disclosure may be stored in a computer-readable medium, or may be in the form of one or more signals. Such signals may be downloaded from an Internet website, or provided on a carrier signal, or provided in any other forms. For example,FIG.13shows a calculating and processing device that can implement the method according to the present disclosure. The calculating and processing device traditionally comprises a processor1010and a computer program product or computer-readable medium in the form of a memory1020. The memory1020may be electronic memories such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk or ROM. The memory1020has the storage space1030of the program code1031for implementing any steps of the above method. For example, the storage space1031for program code may contain program codes1031for individually implementing each of the steps of the above method. Those program codes may be read from one or more computer program products or be written into the one or more computer program products. Those computer program products include program code carriers such as hard disk, compact disk (CD), memory card or floppy disk as shown inFIG.14. Such computer program products are usually portable or fixed storage units. The storage unit may have storage segments or storage spaces with similar arrangement to the memory1020of the calculating and processing device inFIG.13. The program codes may for example be compressed in a suitable form. Generally, the storage unit contains a computer-readable code1031′, which can be read by a processor like1010. When those codes are executed by the calculating and processing device, the codes cause the calculating and processing device to implement each of the steps of the method described above. The “one embodiment”, “an embodiment” or “one or more embodiments” as used herein means that particular features, structures or characteristics described with reference to an embodiment are included in at least one embodiment of the present disclosure. Moreover, it should be noted that here an example using the wording “in an embodiment” does not necessarily refer to the same one embodiment. The description provided herein describes many concrete details. However, it can be understood that the embodiments of the present disclosure may be implemented without those concrete details. In some of the embodiments, well-known processes, structures and techniques are not described in detail, so as not to affect the understanding of the description. 
In the claims, any reference signs between parentheses should not be construed as limiting the claims. The word “comprise” does not exclude elements or steps that are not listed in the claims. The word “a” or “an” preceding an element does not exclude the existing of a plurality of such elements. The present disclosure may be implemented by means of hardware comprising several different elements and by means of a properly programmed computer. In unit claims that list several devices, some of those devices may be embodied by the same item of hardware. The words first, second, third and so on do not denote any order. Those words may be interpreted as names. Finally, it should be noted that the above embodiments are merely intended to explain the technical solutions of the present disclosure, and not to limit them. Although the present disclosure is explained in detail by referring to the above embodiments, a person skilled in the art should understand that he can still modify the technical solutions set forth by the above embodiments, or make equivalent substitutions to part of the technical features of them. However, those modifications or substitutions do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure. | 31,071 |
11861845 | DETAILED DESCRIPTION Radiation therapy quality assurance is a field that includes, among other things, determining whether a radiation delivery system is functioning properly and providing the prescribed radiation dose to a patient as detailed in a radiation therapy treatment plan. While many radiation delivery systems include their own functionality for displaying output and diagnostic metrics, radiation therapy quality assurance products can obtain independent measurements of what the radiation delivery system is providing. As used herein, the term “radiation delivery system” can include various components needed to generate, direct and control a radiation therapy beam. For example, a radiation therapy system can include a radiation source (e.g. a linear accelerator, particle beam source, etc.), a gantry (fixed or rotating), a collimator (to shape the radiation reaching the patient), imaging equipment (to image prior to or during therapy), and the like. As part of quality assurance, the operation of various components of the radiation delivery system can be independently assessed. Examples of such operations can include, for example, verifying the output of the radiation source, the position of a rotating gantry, the configuration of a multileaf collimator (e.g., determining its leaf positions), etc. The present disclosure describes, among other things, systems, software, and methods for determining collimator configurations based on the analysis of radiation patterns that emerge after a radiation beam passes through the collimator. FIGS.1A and1Bdepict an exemplary radiation delivery system100. This exemplary system is an open (or “C-arm”) type system that includes a linear accelerator (e.g., element110inFIG.1B) working with an RF source150, a multileaf collimator120, and a rotatable gantry130. In this exemplary system, the linear accelerator and multileaf collimator are mounted within the rotatable gantry to allow radiation beam160to be delivered along beam axis170to a patient10at multiple angles.FIG.1Aalso depicts an accessory tray140that can permit the mounting or positioning of hardware or devices between the radiation source and the patient. As described further herein, the technologies of the present disclosure can be used with radiation delivery systems such as the exemplary system depicted inFIGS.1A and1B, as well as with other types of radiotherapy systems. When performing radiation therapy quality assurance, one element of the radiation therapy device that can be assessed is the multileaf collimator (e.g., through verifying the collimator's leaf positions). One method for MLC configuration verification may involve examining the shape of the radiation field delivered to the patient by a radiation delivery system100including a radiation source (e.g., linear accelerator110) configured to deliver a radiation beam160. As shown in the simplified example ofFIG.2, a radiation field210can be shaped by blocking some portions with the leaves220of a multileaf collimator120to form an aperture230. The portion of the radiation field that passes through the aperture will then proceed to the patient to deliver a dose of radiation in the desired shape. As used herein, the term “radiation field” can refer to radiation before or after being shaped by a collimator. Scintillating materials may be used to determine the shape of a radiation field emerging from a multileaf collimator. 
“Determining the shape” can include determining the overall shape, determining particular MLC leaf positions (which provides information regarding the shape), etc. “Scintillators,” as discussed herein, are understood to include any material that, when hit by radiation, emits radiation (e.g., a particle or photon) that can be detected (for example, by a camera). Scintillators include materials that absorb incoming radiation and then re-emit a portion of the absorbed energy in the form of light. It should be noted that when the term “light” is used herein, it is intended to include radiation within, or not within, the visible spectrum (for example, scintillators that emit infrared or other types of radiation are contemplated). Examples of scintillators can include plastic scintillators (such as Li6 plastic scintillators or polyethylene naphthalate), luminophores, crystal scintillators, phosphorescent materials, etc. As used herein, a “camera” can be any device that can detect radiation (e.g., light) coming from a scintillator. Examples of cameras can include CCD cameras, photodiodes, photomultiplier tubes, etc. The top portion ofFIG.3illustrates a simplified example of a scintillator310receiving a radiation beam160from radiation delivery system110, after it passes through a multileaf collimator120. The lower portion ofFIG.3illustrates a simplified example of a system for detecting a radiation pattern320from scintillator310using a camera330. The radiation pattern320is related to the shape of the aperture formed by the multileaf collimator (as used herein, “radiation pattern” refers to the pattern present at the scintillator as it emits radiation/light after exposure to a radiation field). Analysis of the images or signals acquired by the camera from the scintillator's radiation patterns can provide estimates of leaf positions of the multileaf collimator, independent of leaf position information that may be provided by the radiation delivery system itself. With reference toFIG.3a, the disclosure herein relating to obtaining and utilizing a shape of a radiation pattern at a scintillator also applies to obtaining images by, for example, using a camera330or screen capturing software to obtain images from a computer monitor340displaying information from a radiation detector180(e.g., an EPID). Camera330can have a field of view332, which may be different than the extents of the computer monitor. As used herein, and depicted inFIG.3a, the captured images (e.g., a video file generated by the camera or screen capture software) are referred to as images334and can be processed by image processing module336. During treatment or quality assurance, personnel may observe at computer monitor340the radiation pattern at the radiation detector180in terms of an accumulated amount of radiation. Such monitors can display the shape of the radiation pattern at any resolution (i.e., possibly, but not necessarily, the same resolution as the active surface of the radiation detector). For example, the radiation detector could have 1000×2000 detector elements but be displayed at the computer monitor with 2000×4000 pixels, thus corresponding to four pixels at the computer monitor per detector element at the radiation detector. In some implementations, to provide a reference as to the extent of the displayed computer monitor image, the computer monitor may display a bounding box360of known size, scale, or other indication of the actual size of the radiation pattern at the radiation detector. 
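As a quick illustration of the display-scaling arithmetic just described, the following sketch (Python; all values, including the 0.4 mm detector-element pitch, are invented for illustration and are not taken from the disclosure) converts a length measured in computer-monitor pixels back into detector elements and into an approximate physical size at the radiation detector.

```python
# Illustrative only: the detector size, monitor size, and element pitch below are
# assumed example values, not values specified by the disclosure.
detector_elements = (1000, 2000)   # detector rows, columns
monitor_pixels = (2000, 4000)      # monitor rows, columns used to display the same image
element_pitch_mm = 0.4             # hypothetical physical pitch of one detector element

# 2 monitor pixels per detector element along each axis, i.e. four monitor pixels
# (2 x 2) per detector element in area, matching the example in the text.
px_per_element = monitor_pixels[1] / detector_elements[1]

length_on_monitor_px = 300         # e.g., a field edge measured on the monitor display
length_in_elements = length_on_monitor_px / px_per_element
length_at_detector_mm = length_in_elements * element_pitch_mm

print(f"{px_per_element:.0f} monitor pixels per element per axis "
      f"({px_per_element ** 2:.0f} per element in area)")
print(f"{length_on_monitor_px} monitor px -> {length_in_elements:.0f} elements "
      f"-> {length_at_detector_mm:.1f} mm at the detector")
```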
The shape can also be displayed in a way that communicates the intensity of radiation. For example, the shape can have pixels with different colors reflecting different intensities or accumulations of radiation. As described further herein, both tilted scintillators and computer monitors can be utilized to provide information about the radiation reaching the radiation detector, with such information usable to derive collimator positions, dose calculation, fluence maps, etc. The present disclosure describes, among other things, technologies utilizing scintillators to verify collimator leaf positions. However, contrary to what is common in the art, the present disclosure describes certain embodiments that utilize a scintillator that is tilted, so as to not be perpendicular to the axis of the radiation beam. Also contrary to the art, certain embodiments disclose a scintillator system with a very shallow angle between the camera and the scintillator. For example, disclosed systems may be configured such that the angle between a planar scintillator and the camera's line of sight is less than 10 degrees. One implementation of the disclosed technology for determining at least a portion of a shape of a radiation field is depicted inFIG.4. The figure illustrates a simplified representation of a radiation source402and a series of collimators (an X-axis jaw410, a Y-axis jaw420, and a multileaf collimator120). Also illustrated is a patient10lying on a couch430such that the patient is hit by radiation beam160after it passes through the collimators. An accessory tray140is also depicted between the collimators and the patient couch. The exemplary system ofFIG.4can also include a scintillator440and a camera450configured to acquire images of light emitted by the scintillator during delivery of the radiation beam. As shown inFIG.4, the scintillator and camera system can be configured to be located between the radiation source and a patient couch, thus functioning as an entrance detector. It is contemplated that scintillator440may include a planar sheet of scintillating material or may include a curved sheet of scintillating material that may, for example, be oriented to have its convex surface facing toward the camera. In some embodiments, scintillator440may be sized to be large enough to cover the largest radiation field the radiation delivery system can deliver (at the location of the scintillator). In other embodiments, scintillator440may be more compact and may be smaller than the largest radiation field the system can deliver at the location of the scintillator, yet still sufficient for performing some measure of quality assurance. As illustrated inFIG.5A, scintillator440and camera450can be fixed in a support structure510. The support structure510may then be configured to be mounted to a radiation delivery system so that the scintillator will be struck by the radiation treatment beam. The scintillator and camera are preferably fixed to the support structure in a manner that sets a specific desired geometric relationship between the scintillator and camera. The exemplary embodiment depicted inFIG.5Aillustrates a simplified framework where the scintillator is located within the support structure and the camera is located and configured to view one side of the scintillator (e.g., its bottom surface). It is contemplated, however, that the camera may be placed in a position to view the top surface of the scintillator or that more than one camera may be used. 
In some embodiments, the scintillator and camera can be fixed to the support structure so that when the support structure is mounted to the radiation delivery system, the scintillator is not perpendicular to an axis of the radiation beam.FIG.4illustrates such an example of a scintillator440not being perpendicular to the axis170of radiation beam160. When referring herein to a scintillator being oriented so that it is not perpendicular to an axis of the radiation beam, such refers to an orientation that is purposefully not perpendicular (i.e., as opposed to an orientation that may slightly deviate from an intended perpendicular positioning). Some embodiments may have the scintillator fixed to the support structure so that the scintillator will be at an angle of between 80 to 89 degrees or between 91 and 110 degrees relative to the axis of the radiation beam when the support structure is mounted to the radiation delivery system. In other embodiments, the scintillator can be at an angle of between 84 and 88 degrees or between 92 and 96 degrees relative to the axis of the radiation beam when the support structure is mounted to the radiation delivery system. While the exemplary embodiment depicted inFIG.4shows a tilt in the Y direction, it is contemplated that a similar tilt could be implemented in the X direction or in both the X and Y directions. In some embodiments, such as the one depicted inFIG.5A, camera450and scintillator440can be fixed to support structure locations that maximize the angle between the camera and the scintillator. In such examples, the camera may be at one end of the support structure, as depicted, or may be in a corner of the support structure, or may even be located outside the perimeter of the support structure. Such embodiments can be beneficial in that the resolution of the radiation pattern imaged by the camera can be increased as compared to smaller angles where the radiation pattern is viewed more edge-on. The design of the support structure can be substantially open above and/or below the scintillator to reduce or eliminate material that may attenuate the radiation therapy beam. Alternatively, the scintillator and the camera can be substantially enclosed by the support structure (for example, to prevent dust from accumulating on the scintillator or to protect it from damage or scratching). In such embodiments, the top portion and/or bottom portion of the support structure may be designed to provide only minimal attenuation of a radiation beam. For example, the top and/or bottom portion may be a layer of thin plastic or glass that causes only slight attenuation of the radiation beam. The example depicted inFIG.5Acan represent a support structure510that substantially encloses the scintillator (i.e., is closed on all sides), butFIG.5Ais also intended to depict an example where the support structure510is merely a frame for mounting the scintillator and camera—with all the sides and the top/bottom being open. FIG.5BIllustrates another exemplary embodiment in which the top portion and bottom portion of support structure520are open to allow light or other radiation unobstructed access to/through the scintillator, but where the support structure520also includes closed structural portions on its sides. The present disclosure contemplates support structures that are constructed to include any combination of open or closed or transparent/translucent top, bottom or side materials. 
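As a rough, purely illustrative geometric sketch (not taken from the disclosure), the snippet below shows why a larger angle between the camera's line of sight and the scintillator plane samples the radiation pattern with more apparent extent than a near edge-on view: the apparent size scales approximately with the sine of that angle.

```python
# Illustrative geometry only: apparent (projected) extent of a 100 mm pattern on a
# planar scintillator as seen from different line-of-sight angles. Values are examples.
import math

pattern_length_mm = 100.0
for angle_deg in (2, 6, 10, 30, 60):
    apparent_mm = pattern_length_mm * math.sin(math.radians(angle_deg))
    print(f"{angle_deg:>2} deg between line of sight and scintillator -> "
          f"~{apparent_mm:5.1f} mm apparent extent")
```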
Some embodiments of the present disclosure can enable the use of a visible light source (e.g., a tungsten or any sort of atomic lamp, or a white light source) to check the shape of a collimator aperture while the scintillator/support structure is in place. In such embodiments, the support structure can include translucent or transparent portions, which may be, for example, the top portion, bottom portion, or any portion(s) that form the sides of the support structure. It is contemplated that any combination of portions of the support structure may be translucent or transparent. Similarly, the scintillator may also be translucent or transparent. Such translucent or transparent support structure portions and/or scintillators can allow formation of a pattern at a target location (e.g., at the isoplane) corresponding to the shape of a collimator aperture when a light source shines light through the collimator aperture onto the scintillator and/or supporting structure portion. The present disclosure contemplates that any embodiments herein (not just planar scintillator embodiments) can incorporate transparent or translucent support structure portions and/or scintillators. As used herein, the term “transparent” means that light corresponding to the shape of the collimator aperture is able to pass through without significant distortion, resulting in a pattern that can be accurately related to the shape of the collimator aperture. Similarly, as used herein, the term “translucent” means that light is able to pass, but there may be some distortion or dimming of the light and the resulting pattern corresponding to the shape of the collimator aperture. In embodiments where a translucent material is used, it is contemplated that the degree of distortion will not be prohibitive of providing a pattern that can be utilized in radiation therapy quality assurance for determining the shape of a collimator aperture. Also, it is contemplated that the transparent or translucent material described herein can have any degree of attenuation of light. For example, a transparent scintillator may attenuate 50% of light but still allow a sharp (though dimmer) pattern to be formed at the target location. According to the type of application desired, translucent or transparent scintillators can have a polyvinyltoluene base, optionally including some fraction of lead (e.g., approximately 2%—appropriate for x-ray dosimetry), etc. In some embodiments, the support structure can be configured to be mounted to the radiation delivery system at an accessory tray disposed between the radiation source and a patient couch. For example, a linear accelerator may have an accessory tray or slot into which the support structure may be mounted. It is thus contemplated herein that when reference is made to a support structure being “configured to be mounted,” this can include, for example, being configured in a way to be removably mounted (e.g., structurally designed to slide into an accessory tray slot or specifically sized to fit within the tray). Support structures herein are also contemplated to be configured to be mounted by virtue of more permanent structures such as the provision of screw holes or other fastening accessories to aid in mounting to a particular portion of a radiation delivery system. In certain embodiments, the scintillator(s) and camera(s) can be fixed to the support structure in a manner so that the whole assembly fits entirely within an accessory tray. 
The support structure mounting, in conjunction with specific fixation therein of the scintillator and camera, can result in a tilted scintillator orientation with regard to the axis of the radiation beam. For example, mounting the support structure into a linac accessory tray that is perpendicular to the axis of the radiation beam, when the scintillator is fixed at an angle within the supporting structure, results in the scintillator being tilted with regard to the axis of the radiation beam. In contrast to the scintillator/camera systems described above for a C-arm type radiation delivery system, the present disclosure contemplates alternative embodiments for implementation with radiation delivery systems having a bore, for example, a radiation delivery system combined with an imaging system such as an MRI.FIG.6shows a simplified example of such a system600that includes first and second magnet housings602and604separated by gap region612. A radiation source606can be mounted to a gantry adjacent to or in a gap region612. A patient can be positioned on couch614inside bore616of the magnet so that the gantry can cause rotation of radiation source606around the patient. FIG.7depicts a cross-sectional view of the exemplary system shown inFIG.6. Such a system can be used to image a target region702(e.g., a volume and/or area within a patient), while radiation source606emits radiation704for patient treatment. Also shown is a multileaf collimator710for shaping the radiation beam directed toward the patient. The system illustrated inFIGS.6and7is only one example of a radiotherapy system having a bore that is compatible with embodiments of the present disclosure. Implementations of the technologies herein may also be used with other types of radiotherapy systems that include a patient bore. Scintillators that are shaped or configured for radiotherapy systems having a bore may be utilized in certain implementations. For example, as shown inFIG.8, scintillator810can be mounted to radiation delivery system600such that scintillator810substantially follows the contour of a bore616of a radiation delivery system600. As noted above with regard toFIGS.6and7, the bore may be part of an MRI-guided radiotherapy system. “Substantially following the contour of a bore” is understood to include, for example, following the internal surface of the bore or having generally the same contour as the bore but a slightly larger or smaller diameter, etc. The scintillator may be a continuous sheet of scintillating material or may be comprised of multiple sheets. The example ofFIG.8illustrates an exemplary scintillator having multiple curved sheets of scintillating material, specifically, eight curved sheets, each covering 45 degrees of the bore. In other implementations, the scintillator may be made up of a number of planar sheets of scintillating material. For example, the configuration shown inFIG.8could instead be comprised of eight planar sheets, meeting at their edges. It is contemplated that any number of curved or planar sheets can be implemented, in any feasible combination, to cover a desired portion of the bore and they can be located at any radial distance from the axis of the bore. While the scintillator can extend around the entire circumference of the bore, it is not essential that it do so. For example, the scintillator can cover any degree or angular measure of the bore (e.g., 270 degrees, 180 degrees, 90 degrees, 45 degrees, etc.) 
and may be constituted of any number of sheets (e.g., ten sheets covering 27 degrees each, 18 sheets covering 10 degrees each, etc.). As described in further detail herein, mounting the scintillator to the radiation delivery system may include, for example: mounting the scintillator directly to a portion of the radiation delivery system such as the gantry, the linac, the MLC, etc.; mounting the scintillator to the bore of an imaging system associated with the radiation delivery system (e.g., an MRI for an MRI-guided radiation therapy system); and, mounting the scintillator indirectly, for example, mounting the scintillator to a supporting structure that can in turn be mounted to portions of the overall system (e.g., RT device, MRI, etc.). In some embodiments, the scintillator can be mounted so it is at an angle to the radiation beam or the scintillator can be mounted so that at least one portion of the scintillator remains perpendicular to axis830of the radiation beam when the radiation source606is controlled to move around the bore. For example, in instances where the radiation beam axis830is radial and the scintillator is curved to be concentric with the bore, at least one portion of the scintillator (e.g., where axis830intersects the scintillator) would be perpendicular to axis830. One or more cameras configured to acquire images of light emitted by the scintillator during delivery of the radiation beam can be utilized. Similar to the previously-described embodiments, these may be mounted so as to have a shallow angle between the scintillator and the camera. For example, the cameras can be configured to be mounted at an angle of greater than 0 and less than 10 degrees relative to the scintillator. In other embodiments, the cameras may be mounted to result in angles of 10-20, 20-30, 30-40, 40-50 or 50-60 degrees between the scintillator and the line of sight of the camera. Cameras can be placed on the bore at various locations so they are able to view at least a portion of the scintillator. The cameras can be small so as to provide minimal intrusion into the inner volume of the bore where the patient is located. As shown in the example ofFIG.8, each of the scintillator sections or sheets may have a corresponding camera with a field of view covering it (shown approximately by the dotted lines). The number and disposition of the cameras and their fields of view inFIG.8are examples only and other configurations are contemplated. For example, the fields of view can cover only a portion of a section rather than an entire section, the cameras can be located at any axial location along the bore, and they can be on either side of the scintillator. In some embodiments, a support structure can be configured to be mounted to the bore, and the one or more cameras can be fixed to the support structure. In the present example, a support structure configured to be mounted to the bore may include a cylindrical framework that generally conforms to the shape of the bore, such that the supporting structure having the cameras can be inserted or installed in the bore, without the need to mount individual cameras to the bore structure itself. In such embodiments, and other embodiments, such as those described above with reference toFIG.8, the cameras can be oriented to view respective portions of the scintillator adjacent to the cameras. Here, the scintillator “adjacent the camera” means the scintillator can be located at least partially at the same angular location as the camera. 
In addition to mounting cameras on such a support structure, other embodiments can include having the scintillator also mounted to the support structure. Such embodiments can allow for the entire assembly of scintillators and cameras to be inserted or mounted to the bore as a unit, rather than as individual components. In other embodiments, the camera can be mounted to view a portion (i.e., some or all) of a radiation pattern displayed at a computer monitor showing radiation that was delivered to the radiation detector. The camera can be located at any position, for example, attached to the computer monitor via a mounting arm, mounted to a table or wall near the computer monitor, etc. As such, the camera can have any viewing angle relative to the computer monitor. Accordingly, the disclosure of the present, and parent, applications contemplates, among other things, the general concepts of acquiring images during delivery of a radiation beam, the images capturing at least a portion of a shape representative of a radiation pattern generated by a radiation delivery system that includes a radiation source configured to deliver the radiation beam. Thus, in addition to utilizing a scintillator to obtain images, the images can be of a computer monitor of a radiation detector, the operations further comprising determining one or more dimensions of the radiation pattern based on determining a conversion between the images and computer monitor images of the radiation pattern. As explained further below, the present and parent disclosures thus also contemplate the utilization of the captured images and the calibration techniques described herein during treatment or as part of quality assurance, to perform, for example, dose calculation, collimator position determination (e.g., MLC leaf position), fluence determinations, etc. The captured images can be acquired from the camera aimed at a computer monitor displaying the shape that is representative of the radiation pattern. The camera may be mounted in a fixed relationship to the computer monitor by mounting to the computer monitor itself or another location nearby. To allow for a user to be in front of the monitor, it is contemplated that in some implementations the images can be acquired at an angle not perpendicular to the computer monitor. In some implementations, the camera can be fixed to the computer monitor so that the camera will be at an angle of between 1 and 10 degrees relative to a screen of the computer monitor (with 90 degrees being perpendicular to the screen). In other implementations, the camera can be fixed to the computer monitor so that the camera will be at an angle of between 4 and 8 degrees relative to the screen of the computer monitor. In some implementations, to have a more direct view of the computer monitor, the camera can be fixed at a location that maximizes the angle between the camera and the screen. For example, the location can be a wall generally opposite the monitor. FIG.8Aillustrates an example of how the size of a radiation pattern may vary when captured by a camera viewing a computer monitor (or by screen capture at the computer monitor). The dimensions provided in the example are only for illustrative purposes and are not to scale. For example, a collimator can be controlled to form an aperture of 8 cm×8 cm. Due to beam divergence and based on distances along the beam axis, this aperture can result in a radiation pattern of 9 cm×9 cm at the isocenter and 10 cm×10 cm further along at the radiation detector. 
The radiation pattern, as displayed at the computer monitor (i.e., the “computer monitor image”), could be 800 pixels×800 pixels. If a screen capturing technique is used, the captured images of the radiation pattern could be, for example, 400 pixels×400 pixels. Similarly, if using a camera, due to factors including distance, offset, rotation, viewing angle, etc., the captured images from the camera could be rotated, skewed, magnified, offset, etc. When the images are corrected in software, they can then have the same shape as the radiation pattern (e.g., square), but with their final size in pixels depending on the final transformations used. However, with corrections/conversions, the images can be used to derive dimensions of radiation patterns and/or collimator positions, etc. The present disclosure provides several methods for determining dimensions of a radiation pattern, positions of a collimator used to shape the radiation field, etc. As described herein, computer monitor images may be captured by a camera or with screen capture software. As previously illustrated inFIG.8A, a complicating factor can be that the camera (and screen capture software) may generate images having a different resolution. Thus, the present disclosure provides implementations of methods and software algorithms to establish a conversion for the camera or screen capture images to provide a measure of the actual dimensions of the radiation pattern at the radiation detector (or other useful locations). Thus, in general, the software that performs this conversion (also referred to as “image processing module”) can determine a conversion between the captured images and the radiation pattern. In some implementations, the image processing module can receive conversion information entered by a user after measuring the geometric relationship between the camera and the computer monitor. In other implementations, conversion information can be determined based on utilizing imaging of markers placed at known locations. In yet other implementations, conversion information can be determined that establishes a relationship between image intensity and delivered dose. In other embodiments, the relationship between the pixel size of the displayed image and the radiation detector can be established by the original equipment manufacturer (OEM) a priori. In such circumstances, this calibration process can become a quality assurance process to confirm this relationship is as stated by the OEM. Information for conversion of camera images received by the image processing module can include: camera angle (which can introduce a different conversion of the horizontal (X) and vertical (Y) pixels in the camera image), distance between the camera and the computer monitor, magnification of the images, offset between the center of the camera's FOV and the center of the viewing field at the computer monitor (i.e., the center of the camera image of the computer monitor not coinciding with the location at the radiation detector of the axis of the radiation beam), the angle of the camera, etc. Other factors that can be considered are the refresh rate of the computer monitor and the frame rate of the camera, either (or both) of which can result in image blurring or missing data. In this way, the operations for determining the conversion can include applying one or more of a scaling, rotation, or skew correction to the images. Described below are exemplary methods for use with a camera imaging the radiation detector's computer monitor. 
Then, other exemplary methods are described for directly capturing the output of the computer monitor without the use of a camera. First, a predefined radiation field can be created that has known dimensions. For example, a collimator can be controlled to have an aperture 8 cm×8 cm. The camera can then acquire images of the resultant radiation pattern at a scintillator or from the computer monitor. In software, the conversion between pixels in the camera image and the known size of the aperture can be established as a calibration for the camera images. Thus, when imaging a radiation field of unknown or varying dimensions, this calibration can be applied to convert the camera images into actual dimensions of the aperture. Similarly, with a known beam divergence and distance from the collimator to the radiation detector, isocenter, or any other location, the images can then be converted or used to measure the radiation field at those locations as well. In another implementation, a graticule or other structure having markers representing known distance(s) and/or having known thicknesses can be placed at any location (e.g., on the radiation detector, at the isocenter, on the scintillator, etc.) and imaged. The markers can attenuate the beam and aid in determining image calibrations, as described further below. In some implementations, acquiring of images can be performed by screen capture of the computer monitor displaying the shape representative of the radiation pattern. Such implementations have advantages in that additional hardware (e.g., a camera) is not required, which eliminates error that could be introduced by uncertainty in a camera angle or position. Described below are some factors that can be implemented in determining conversions utilizing images acquired through screen capture. In one implementation, the conversion can be based on a ratio of pixels in the computer monitor images to the pixels in the acquired images. In another implementation, calibration methods similar to those discussed above can be performed where a radiation pattern of known dimension is projected onto the radiation detector. With known dimensions of the radiation pattern in pixels in the captured images, a conversion factor can be established. Thus, determining the conversion can include applying a scaling (likely) or rotation (if applicable). The scaling or rotation can utilize information entered by a user. Based on the present disclosure relating to use of cameras or screen capture software to obtain images of and calculate dimensions associated with a radiation pattern, the following example of use is provided. One such method can include the following steps (though not limited to the order shown below):Step 1—placing a graticule with markers that have known thicknesses and/or known dimensions between the markers. The markers can be made of one material or of different materials of similar or varying radiation attenuation. In this way, the markers will be visible in images due to being opaque to delivered radiation.Step 2—initiating delivery of a radiation beam.Step 3—imaging the graticule with the radiation detector (e.g., an EPID) during delivery of the radiation beam.Step 4—acquiring images during the delivery of the radiation beam, the images capturing at least a portion of the graticule (e.g., from a computer monitor).Step 5—determining a conversion factor based on at least the known dimensions of the graticule and the acquired images. 
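To make the conversion idea concrete, here is a minimal sketch of the kind of calibration described above. It is not the disclosure's implementation; the pixel counts, the 80/90/100 cm source-to-plane distances (chosen only so that an 8 cm aperture reproduces the 9 cm and 10 cm figures of the earlier example), and the unknown-field measurement are all assumed values.

```python
# Minimal calibration sketch (assumed numbers throughout).
# A known 8 cm x 8 cm aperture projects, with the assumed geometry below, to a
# 10 cm pattern at the radiation detector; that pattern measures 400 px in the
# captured image, which fixes the conversion factor for later measurements.

d_collimator_cm = 80.0     # source-to-collimator distance (assumed)
d_isocenter_cm = 90.0      # source-to-isocenter distance (assumed)
d_detector_cm = 100.0      # source-to-detector distance (assumed)

known_aperture_cm = 8.0
known_pattern_at_detector_cm = known_aperture_cm * d_detector_cm / d_collimator_cm  # 10 cm
known_pattern_px = 400.0   # the same pattern measured in the captured image

cm_per_px = known_pattern_at_detector_cm / known_pattern_px   # 0.025 cm per captured pixel

# Apply the calibration to a field of unknown size.
unknown_pattern_px = 320.0
at_detector_cm = unknown_pattern_px * cm_per_px
at_isocenter_cm = at_detector_cm * d_isocenter_cm / d_detector_cm
at_collimator_cm = at_detector_cm * d_collimator_cm / d_detector_cm

print(f"calibration: {cm_per_px:.4f} cm per captured pixel at the detector plane")
print(f"unknown field: {at_detector_cm:.2f} cm at the detector, "
      f"{at_isocenter_cm:.2f} cm at isocenter, {at_collimator_cm:.2f} cm at the collimator")
```

The same distance-ratio scaling is what lets a graticule of known marker spacing, imaged at any plane along the beam axis, serve as the calibration object instead of a known aperture.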
In some implementations, the method can also include obtaining the images of a computer monitor with a camera aimed at the computer monitor or with screen capture of the computer monitor. Methods and software that enable the determination of MLC leaf positions are disclosed herein. Leaf positions can be reflected in scintillator radiation patterns imaged by one or more cameras, as described above. In one embodiment, leaf position determination can be facilitated by analyzing the edges of radiation patterns. As used herein, “radiation pattern” means the image of (or data representing the image of) scintillator light emitted due to interaction between the scintillator and a radiation beam. As illustrated by the example inFIG.9, computer software can perform operations to determine edges of a radiation pattern. For example, at910, using the image data acquired from the camera(s), an edge detection algorithm (e.g., a Canny edge detection algorithm) can be applied to a radiation pattern present in the images. The edge detection algorithm can determine at least one edge of the radiation pattern corresponding to a leaf of a multi-leaf collimator. From this, at1020, a leaf position can be determined based at least on a location of the determined edge. Following this determination, it is possible to compare leaf positions during delivery of the radiation beam with planned leaf positions (e.g., as dictated by a radiation therapy plan and/or detailed in system log files). The comparison can thus be utilized in radiation therapy quality assurance. Determining leaf position at isocenter (FIG.10) The process of determining leaf positions can then include, for example, compensating for image distortion caused by (or inherently present in) a given camera or camera system. For example, lens aberrations and camera placement with respect to the scintillator can be accounted for. One method of accounting for optical effects from the camera system can include performing a calibration procedure with a well-known pattern that allows for a mapping of points in the image acquired by the camera to real positions in the object plane (e.g., the plane of a planar scintillator sheet). This correction/mapping can be performed for any number of opposing leaves of the MLC. As part of radiation therapy quality assurance, it can be desired to determine leaf positions at a plane through the isocenter. One method for doing so can include determining the effective size of an opening between collimator leaves at a plane parallel to the isocenter plane. Then, the effective MLC leaf positions at the isocenter plane can be determined based on the effective size, when geometrically extended to the isocenter plane. An exemplary arrangement of a simplified system used for the above determination is illustrated inFIG.10. Here, an example of a tilted scintillator1010is shown with a camera1020imaging the scintillator1010. The radiation from radiation source1005passing through the MLC1025results in a radiation pattern at the scintillator. The radiation pattern has a length (lengthscreen)1030that corresponds to the size of the opening between MLC leaves (which can be directly related to the position of the MLC leaves). One exemplary formula for determining the length (lengthpp)1040on a plane parallel1050to the isocenter plane1092can be expressed as shown in Eq. 1, below. lengthpp = √(lengthscreen² − height²) + (X · height)/(d1 + height). (1) In Eq. 
1, the height is the height1060of the radiation pattern as measured in the vertical direction (or parallel to the beam axis). X is the X-coordinate1070of the right edge of the radiation pattern. d1 (element1080) is the height from the radiation source to the plane parallel to the isocenter plane. The expanded view on the right side of FIG.10illustrates that lengthpp1040can be found by accounting for the vertical projection1042of the radiation pattern to the plane parallel1050. Once lengthpp is found, the length (lengthiso)1090at the isocenter plane1092can be determined according to the following relation: lengthiso = (d2/d1) · lengthpp. (2) In Eq. 2, d2(element1095) is the distance from the radiation source to the isocenter plane1092. The above description and solution of the simplified geometrical arrangement of the scintillator and camera system should not be considered limiting or exclusive of other solutions that may be implemented for embodiments described herein. Furthermore, it is readily apparent that the above disclosure for a flat scintillator can apply to any flat surface or its equivalent, for example, the above-described computer monitor or image files obtained via screen capture of the computer monitor. The methods and operations described herein can further enable the determination of fluence maps at the isocenter plane, which can be useful for performing radiation therapy quality assurance. For example, the present disclosure contemplates software that can perform operations that include calculating a fluence map based at least on the leaf positions determined using the scintillator and also on beam output data obtained from the radiation therapy system. Furthermore, operations such as calculating a dose at a target location based at least on the fluence map and a patient image obtained from an imaging system may also be performed. Fluence maps, dose calculations, collimator shapes/MLC leaf positions, and other quantities that can be derived with the benefit of the disclosure herein are also described in commonly owned patent applications: U.S. patent application Ser. No. 14/694,865 (now U.S. Pat. No. 10,617,891) “Radiation detector Calibration” and U.S. patent application Ser. No. 15/395,852 “Determination Of Radiation Collimator Component Position,” the disclosures of which are incorporated by reference in their entirety. Below is one example of a method of calibrating the disclosed radiation detector monitoring system to allow accurate determination of delivered dose at the radiation detector. This example allows the user to establish a relationship between the dose delivered by the RT system and the intensity of the pixels as seen on the monitor and the image acquired by the radiation detector. The exemplary method can include any or all of the following steps, not all of which need be performed in the order shown.Step 1—The user can deliver a series of fixed (e.g., square or rectangular) radiation patterns, each pattern at a different dose level.Step 2—For each radiation pattern, the user can enable recording of the monitor screen by the camera or screen capturing via the software interface to the radiation detector system.Step 3—The user can turn the beam off and stop the camera or screen capturing software.Step 4—The camera or screen capturing software can write out a file containing a video of the screen during image acquisition.Step 5—The user can open the video file and measure the pixel intensity via software. 
This can be a statistical measure such as the average or median intensity in an area of the radiation pattern.Step 6—The user can enter the dose level for that radiation pattern, thus establishing a relationship between the pixel intensity and the delivered dose from the radiation therapy system.Step 7—The user can repeat this process for all fields referred to in Step 1.Step 8—The dose calibration can then be saved for use during treatment or later quality assurance. In some implementations, the method may be used in conjunction with the processes described in U.S. patent application Ser. No. 14/694,865 (now U.S. Pat. No. 10,617,891) and U.S. patent application Ser. No. 15/395,852. A further example of use is provided, describing a clinical treatment workflow. The exemplary method can include any or all of the following steps, not all of which need be performed in the order shown.Step 1—The user can set the patient up for treatment and ready the radiotherapy delivery system for treatment.Step 2—The user can manually start video acquisition of the monitor by the camera or screen capturing software. Acquisition may occur continuously, and an image processing module (i.e., a collection of software operations and processors utilized for image processing) automatically processes the captured video files into segments in which the beam was being delivered at various points in the treatment.Step 3—The user can initiate beam delivery and the RT delivery system can deliver all treatment beams to the patient.Step 4—Treatment ends and the user can stop video acquisition of the monitor by the camera or screen capturing software.Step 5—The camera or screen capturing software can write out a file containing a video of the monitor during image acquisition.Step 6—The image processing system can automatically process the video file. This can be achieved by a software routine that monitors the folder in which the video files are saved.Step 6a—Alternatively, the user can manually transfer the video file to a predefined location or open the video file directly in the image processing system.Step 7—The image processing system inputs the video file into a software module (e.g., an image processing module) containing the algorithm as defined herein or by either of U.S. patent application Ser. No. 14/694,865 (now U.S. Pat. No. 10,617,891) and U.S. patent application Ser. No. 15/395,852. The software module can compute the leaf positions as a function of time during delivery. This information can be used to generate a DICOM RT Plan object that can be used for dose computation. 
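As a minimal, self-contained sketch of the kind of per-frame analysis just outlined, the following Python code synthesizes one captured frame, locates the field edges along a single leaf row with a simple half-maximum threshold (standing in for the Canny-style edge detection mentioned earlier), and rescales the measured width to the isocenter plane with a distance ratio in the spirit of Eq. 2. The calibration factor and distances are assumed values; none of this is the actual algorithm of the referenced applications.

```python
import numpy as np

# Assumed values throughout; a synthetic frame stands in for one frame of the
# captured video of the monitor or scintillator.
frame = np.zeros((200, 200), dtype=float)
frame[60:140, 50:150] = 1000.0                                     # bright rectangular field
frame += np.random.default_rng(0).normal(0.0, 20.0, frame.shape)   # camera/detector noise

row = frame[100, :]                 # intensity profile across one leaf pair
threshold = 0.5 * row.max()         # half-maximum defines the field edges
inside = np.where(row > threshold)[0]
left_px, right_px = inside[0], inside[-1]

cm_per_px_at_detector = 0.05        # assumed, e.g. from a prior calibration step
d_isocenter_cm = 90.0               # assumed source-to-isocenter distance
d_detector_cm = 100.0               # assumed source-to-detector distance

width_at_detector_cm = (right_px - left_px) * cm_per_px_at_detector
width_at_isocenter_cm = width_at_detector_cm * d_isocenter_cm / d_detector_cm

print(f"field edges at columns {left_px} and {right_px}")
print(f"width: {width_at_detector_cm:.2f} cm at the detector, "
      f"{width_at_isocenter_cm:.2f} cm at the isocenter plane")
```

A per-leaf version of the same idea would repeat the edge search row by row over the frames of the video and compare the recovered positions, as a function of time, against the planned leaf positions.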
In the following, further features, characteristics, and exemplary technical solutions of the present disclosure will be described in terms of items that may be optionally claimed in any combination:Item 1: A computer program product comprising a non-transitory, machine-readable medium storing instructions which, when executed by at least one programmable processor, cause the at least one programmable processor to perform operations comprising: acquiring images during delivery of a radiation beam, the images capturing at least a portion of a shape representative of a radiation pattern generated by a radiation delivery system that includes a radiation source configured to deliver the radiation beam.Item 2: The computer program product of item 1, wherein the images are acquired from a camera aimed at a computer monitor displaying the shape representative of the radiation pattern.Item 3: The computer program product of any one of the preceding items, wherein the camera is mounted in a fixed relationship to the computer monitor by mounting to the computer monitor itself or to another location nearby.Item 4: The computer program product of any one of the preceding items, wherein the camera is fixed to the computer monitor so that the camera will be at an angle of between 1 and 10 degrees relative to a screen of the computer monitor.Item 5: The computer program product of any one of the preceding items, wherein the camera is fixed to the computer monitor so that the camera will be at an angle of between 4 and 8 degrees relative to the screen of the computer monitor.Item 6: The computer program product of any one of the preceding items, wherein the camera is fixed at a location that maximizes the angle between the camera and the screen.Item 7: The computer program product of any one of the preceding items, wherein the location is a wall generally opposite the monitor.Item 8: The computer program product of any one of the preceding items, wherein the images are acquired at an angle not perpendicular to the computer monitor.Item 9: The computer program product of any one of the preceding items, the operations further comprising receiving conversion information entered by a user after measuring a geometric relationship between the camera and the computer monitor.Item 10: The computer program product of any one of the preceding items, the operations further comprising determining conversion information based on utilizing imaging of markers placed at known locations.Item 11: The computer program product of any one of the preceding items, the operations further comprising determining conversion information that establishes a relationship between image intensity and delivered dose.Item 12: The computer program product of any one of the preceding items, the operations further comprising: applying an edge detection algorithm to a radiation pattern present in the images, the edge detection algorithm determining at least one edge of the radiation pattern corresponding to a leaf of a multi-leaf collimator; and determining a leaf position based at least on a location of the determined edge.Item 13: The computer program product of any one of the preceding items, the operations further comprising comparing the leaf position during delivery of the radiation beam with a planned leaf position, the comparing utilized in radiation therapy quality assurance.Item 14: The computer program product of any one of the preceding items, the operations further comprising calculating a fluence map based at least on the leaf 
position and beam output data obtained from the radiation therapy system.Item 15: The computer program product of any one of the preceding items, the operations further comprising calculating a dose at a target location based at least on the fluence map and a patient image obtained from an imaging system.Item 16: The computer program product of any one of the preceding items, wherein the dose is a three-dimensional dose delivered at the target location.Item 17: The computer program product of any one of the preceding items, wherein the acquiring is performed by screen capture of a computer monitor displaying the shape representative of the radiation pattern.Item 18: A method comprising: placing a graticule with markers that have known dimensions between the markers; initiating delivery of a radiation beam; imaging the graticule with the radiation detector during delivery of the radiation beam; acquiring images, the images capturing at least a portion of the graticule; and determining a conversion factor based on at least the known dimensions of the graticule and the acquired images.Item 19: The method of Item 18, wherein the acquiring of images comprises obtaining images of a computer monitor with a camera aimed at the computer monitor or with screen capture of the computer monitor.Item 20: A system comprising: at least one programmable processor; and a non-transitory machine-readable medium storing instructions which, when executed by the at least one programmable processor, cause the at least one programmable processor to perform operations comprising those of any of items 1-17. The present disclosure contemplates that the calculations disclosed in the embodiments herein may be performed in a number of ways, applying the same concepts taught herein, and that such calculations are equivalent to the embodiments disclosed. One or more aspects or features of the subject matter described herein can be realized in digital electronic circuitry, integrated circuitry, specially designed application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), computer hardware, firmware, software, and/or combinations thereof. These various aspects or features can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which can be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device. The programmable system or computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. These computer programs, which can also be referred to as programs, software, software applications, applications, components, or code, include machine instructions for a programmable processor, and can be implemented in a high-level procedural language, an object-oriented programming language, a functional programming language, a logical programming language, and/or in assembly/machine language. 
As used herein, the term “machine-readable medium” (or “computer readable medium”) refers to any computer program product, apparatus and/or device, such as for example magnetic discs, optical disks, memory, and Programmable Logic Devices (PLDs), used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” (or “computer readable signal”) refers to any signal used to provide machine instructions and/or data to a programmable processor. The machine-readable medium can store such machine instructions non-transitorily, such as for example as would a non-transient solid-state memory or a magnetic hard drive or any equivalent storage medium. The machine-readable medium can alternatively or additionally store such machine instructions in a transient manner, such as for example as would a processor cache or other random access memory associated with one or more physical processor cores. To provide for interaction with a user, one or more aspects or features of the subject matter described herein can be implemented on a computer having a display device, such as for example a cathode ray tube (CRT) or a liquid crystal display (LCD) or a light emitting diode (LED) monitor for displaying information to the user and a keyboard and a pointing device, such as for example a mouse or a trackball, by which the user may provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well. For example, feedback provided to the user can be any form of sensory feedback, such as for example visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including, but not limited to, acoustic, speech, or tactile input. Other possible input devices include, but are not limited to, touch screens or other touch-sensitive devices such as single or multi-point resistive or capacitive trackpads, voice recognition hardware and software, optical scanners, optical pointers, digital image capture devices and associated interpretation software, and the like. In the descriptions above and in the claims, phrases such as “at least one of” or “one or more of” may occur followed by a conjunctive list of elements or features. The term “and/or” may also occur in a list of two or more elements or features. Unless otherwise implicitly or explicitly contradicted by the context in which it is used, such a phrase is intended to mean any of the listed elements or features individually or any of the recited elements or features in combination with any of the other recited elements or features. For example, the phrases “at least one of A and B;” “one or more of A and B;” and “A and/or B” are each intended to mean “A alone, B alone, or A and B together.” A similar interpretation is also intended for lists including three or more items. For example, the phrases “at least one of A, B, and C;” “one or more of A, B, and C;” and “A, B, and/or C” are each intended to mean “A alone, B alone, C alone, A and B together, A and C together, B and C together, or A and B and C together.” Use of the term “based on,” above and in the claims is intended to mean, “based at least in part on,” such that an unrecited feature or element is also permissible. The subject matter described herein can be embodied in systems, apparatus, methods, computer programs and/or articles depending on the desired configuration. 
Any methods or the logic flows depicted in the accompanying figures and/or described herein do not necessarily require the particular order shown, or sequential order, to achieve desirable results. The implementations set forth in the foregoing description do not represent all implementations consistent with the subject matter described herein. Instead, they are merely some examples consistent with aspects related to the described subject matter. Although a few variations have been described in detail above, other modifications or additions are possible. In particular, further features and/or variations can be provided in addition to those set forth herein. The implementations described above can be directed to various combinations and subcombinations of the disclosed features and/or combinations and subcombinations of further features noted above. Furthermore, above described advantages are not intended to limit the application of any issued claims to processes and structures accomplishing any or all of the advantages. Additionally, section headings shall not limit or characterize the invention(s) set out in any claims that may issue from this disclosure. Further, the description of a technology in the “Background” is not to be construed as an admission that technology is prior art to any invention(s) in this disclosure. Neither is the “Summary” to be considered as a characterization of the invention(s) set forth in issued claims. Furthermore, any reference to this disclosure in general or use of the word “invention” in the singular is not intended to imply any limitation on the scope of the claims set forth below. Multiple inventions may be set forth according to the limitations of the multiple claims issuing from this disclosure, and such claims accordingly define the invention(s), and their equivalents, that are protected thereby. | 56,943 |
11861846 | DESCRIPTION OF EMBODIMENTS FIG.1illustrates the structure of a neuron as part of a (convolutional) neural network, in which input is assigned certain weights for processing by an activation function which generates the output of the neuron. FIG.2describes the basic flow of the method according to the first aspect, which starts in step S21with acquiring the initial segmented patient image data, continues to step S22which encompasses acquisition of the segmentation correction data, and then proceeds to acquiring the atlas data in step S23. On that basis, step S24calculates the correction transformation data, which is followed by determination of the transformed correction data in step S25. Finally, the combined correction data is determined in step S26. FIG.3illustrates the basic steps of the method according to the second aspect, in which step S31encompasses execution of the method according to the first aspect and step S32relates to acquisition of the segmented individual patient image data. Step S33comprises determination of the correction re-transformation data. Then, step S34follows which determines the re-transformed correction data and step35then determines the corrected segmented patient image data. FIG.4illustrates the basic steps of the method according to the third aspect, in which step S41encompasses execution of the method according to the first or second aspect and step S42relates to acquisition of the training patient image data. Subsequent step43acquires the label data, followed by determination of the transformed statistics data in step S44. Then, step S45determines the statistics assignment data, and finally step S46determines the label relation data. FIG.5illustrates the basic steps of the method according to the fourth aspect, in which step S51encompasses acquisition of the individual patient image data and subsequent step S52determines the segmented individual patient image data. FIG.6gives an overview of the application of the method according to the first to third aspect. In a first (correction) stage, the following steps are executed: Patient image data1is segmented initially by an algorithm2to generate segmented patient image data3, e.g. to generate a label indicating the outline of an organ in the patient image data and then the segmented patient image data or the label are corrected4, e.g. by a user. The corrections are stored in the form of spatial correction maps. The spatial correction maps are registered into a generic model (atlas)5and aggregated (combined), e.g. in a central repository storing the combined correction data6. In a second (learning) stage, the following steps are executed: The combined correction data6are then input as weights7, e.g. into a cost or loss function that compares the segmented patient image data3which serves as a prediction for the cost or loss function and the label8which is part of a segmentation learning algorithm2, which, e.g. is identical to the segmentation algorithm used in the first stage. Additionally, patient data1and labels8, e.g. manually generated outlines of an organ are input. FIG.7is a schematic illustration of the medical system61according to the fifth or sixth aspect. The system is in its entirety identified by reference sign61and comprises a computer62, an electronic data storage device (such as a hard disc)63for storing the data stored by the system according to the fifth or sixth, respectively, aspect. 
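The weighting idea sketched for FIG.6 can be illustrated with a small, self-contained example. This is not the disclosure's implementation: the array shapes, the weight values, and the choice of a weighted cross-entropy as the cost function are all assumptions made only to illustrate how aggregated correction data could enter a segmentation loss as per-voxel weights.

```python
import numpy as np

# Assumed shapes and values throughout; the weight map stands in for combined
# correction data registered from the atlas back to the patient image.
rng = np.random.default_rng(1)
pred = rng.uniform(0.01, 0.99, size=(64, 64))                 # predicted foreground probabilities
label = (rng.uniform(size=pred.shape) > 0.5).astype(float)    # reference label (illustrative)

weights = np.ones_like(pred)
weights[20:40, 20:40] = 3.0                                   # region that users corrected frequently

# Per-pixel binary cross-entropy between prediction and label, weighted and averaged.
bce = -(label * np.log(pred) + (1.0 - label) * np.log(1.0 - pred))
weighted_loss = np.sum(weights * bce) / np.sum(weights)
unweighted_loss = bce.mean()

print(f"unweighted loss: {unweighted_loss:.4f}, weighted loss: {weighted_loss:.4f}")
```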
The components of the medical system1have the functionalities and properties explained above with regard to the fifth and sixth, respectively, aspects of this disclosure. | 3,620 |
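The use of the aggregated correction maps as weights in a cost or loss function, as described for FIG. 6, can be illustrated with a short, self-contained sketch. This is not code from the disclosure: the function name, the use of NumPy, the binary cross-entropy form, and the toy arrays are assumptions made purely for illustration, and the actual learning algorithm and the way the combined correction data enter the loss may differ.

import numpy as np

def correction_weighted_bce(pred, label, correction_weights, eps=1e-7):
    # Per-pixel binary cross-entropy, up-weighted where the aggregated correction
    # maps indicate that users frequently had to fix the automatic outline.
    pred = np.clip(pred, eps, 1.0 - eps)
    bce = -(label * np.log(pred) + (1.0 - label) * np.log(1.0 - pred))
    return float(np.mean(correction_weights * bce))

# Toy example: a 4x4 "organ" label, an imperfect prediction, and a weight map in
# which one pixel was corrected often once registered into the atlas space.
label = np.zeros((4, 4)); label[1:3, 1:3] = 1.0
pred = np.full((4, 4), 0.2); pred[1:3, 1:3] = 0.7
weights = np.ones((4, 4)); weights[1, 1] = 3.0
print(correction_weighted_bce(pred, label, weights))

The only point illustrated is that regions which users corrected frequently (after registration into the atlas) contribute more strongly to the training signal than regions that were left untouched.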
11861847 | DESCRIPTION OF EMBODIMENTS First Embodiment Embodiments of the present invention will be described below with reference to the drawings. FIG.1illustrates a general configuration of a thermal image generation system including an image processing device of a first embodiment of the present invention. The infrared generation system illustrated inFIG.1includes a thermal image sensor1, an image processing device2, a storage device3, and a display terminal4. The thermal image sensor1detects infrared light emitted from an object, and generates a thermal image representing a temperature distribution of the object. The infrared light mentioned here is electromagnetic waves having wavelengths of, for example, 8 to 12 μm. The thermal image sensor1includes multiple infrared detection elements arranged one-dimensionally or two-dimensionally. A signal output from each infrared detection element indicates a value (pixel value) of a pixel of the thermal image. As the infrared detection elements, for example, pyroelectric elements may be used. Alternatively, it is possible to use thermopile-type infrared detection elements obtained by connecting thermocouples exhibiting the Seebeck effect, bolometer-type infrared detection elements that use change in resistance with increase in temperature, or the like. The infrared detection elements are not limited to these, and may be of any type as long as they can detect infrared light. FIG.2is a functional block diagram of the image processing device2of the first embodiment. The illustrated image processing device2includes a background image generator21and an image corrector22. The background image generator21generates a background image on the basis of thermal images, which are multiple frames, output from the thermal image sensor1. The thermal images, which are multiple frames, used for generation of the background image are obtained by the thermal image sensor1repeating imaging in the same field of view. The background image generator21ranks pixels at the same positions of the thermal images, which are multiple frames, in order of magnitude of the pixel values, and generates sorted images, which are multiple frames, each formed by a set of pixels having the same rank. The background image generator21further determines, as a middle image, a sorted image formed by a set of the pixels located at a middle, i.e., the pixels having a middle rank, when the pixels are ranked in order of magnitude of the pixel values. The middle image determined in this manner is formed by a set of the pixels having the middle rank, and thus is a sorted image located at a middle when the sorted images Dc, which are multiple frames, are ranked in order of brightness. The background image generator21further calculates a feature quantity for each of the sorted images, which are multiple frames, and generates an average image by averaging, in a frame direction, sorted images satisfying the condition that the difference in the feature quantity between the sorted image and the middle image is smaller than a predetermined threshold (difference threshold) FDt. After sharpening the average image, the background image generator21generates a skeleton (or outline) image Dg by extracting a skeleton (or outline) component, and stores the generated skeleton image Dg as the background image in the storage device3. 
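As a rough illustration of the background-image generation just outlined (per-pixel temperature sort, middle image, and frame-direction averaging restricted by the difference threshold FDt), consider the following NumPy sketch. The function and parameter names are hypothetical, the frame mean stands in for the feature quantity, and the selection is simplified to "within FDt of the middle image," which corresponds to the boundary-frame logic described below only in the straightforward case; the sharpening and skeleton extraction steps are omitted here.

import numpy as np

def build_background_average(frames, fd_threshold):
    # Temperature sort: rank the pixels at each position across the N frames; the
    # n-th slice of the result is then the n-th sorted image Dc.
    sorted_images = np.sort(frames, axis=0)
    # Middle image Dd: the sorted image with the middle rank.
    middle = sorted_images[frames.shape[0] // 2]
    # Feature quantity per sorted image (the mean pixel value is used here).
    qf = sorted_images.mean(axis=(1, 2))
    # Keep sorted images whose feature quantity is within FDt of the middle image,
    # which excludes ranks dominated by transient hot or cold objects.
    keep = np.abs(qf - middle.mean()) < fd_threshold
    return sorted_images[keep].mean(axis=0)   # average image De

rng = np.random.default_rng(0)
frames = rng.normal(25.0, 0.5, size=(5, 8, 8))   # five first thermal images Din1
frames[4] += 15.0                                 # a transient heat source in one frame
de = build_background_average(frames, fd_threshold=2.0)
print(de.shape, round(float(de.mean()), 2))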
Each of the thermal images, which are multiple frames, used in the generation of the background image will be referred to as a first thermal image, and denoted by reference character Din1. The image corrector22superimposes the skeleton image Dg stored in the storage device3on a second thermal image Din2that is obtained by imaging in the same field of view as the first thermal images Din1and output from the thermal image sensor1, thereby generating a corrected thermal image Dout. The corrected thermal image Dout is sharpened and improved in S/N ratio relative to the second thermal image Din2. The background image generator21includes a temperature sorter211, a feature quantity calculator212, an analyzer213, an average image generator214, a sharpening unit215, and a skeleton component extractor216. The temperature sorter211compares the pixels at the same positions of the first thermal images Din1, which are multiple frames, e.g., N frames (N being an integer of 2 or more), and ranks the pixels in order of magnitude of the pixel values. In ranking them, it is possible to rank them from the highest to the lowest (in descending order) or from the lowest to the highest (in ascending order) of the pixel values. The temperature sorter211further generates the sorted images Dc, which are multiple frames, each formed by a set of the pixels having the same rank. Thus, the n-th sorted image Dc is formed by a set of the pixels whose rank is n (n being one of 1 to N). The temperature sorter211further determines, as the middle image Dd, a sorted image Dc formed by a set of the pixels located at the middle, i.e., the pixels having the middle rank, when the pixels are ranked in order of magnitude of the pixel values. The temperature sorter211outputs the generated multiple sorted images Dc together with information Sdc indicating the respective ranks. The temperature sorter211further outputs information IDd identifying the middle image Dd. The feature quantity calculator212calculates a feature quantity Qf serving as an indicator of brightness, for each of the sorted images Dc, which are multiple frames. As the feature quantity Qf, an average (or mean) value, a middle (or intermediate or median) value, a highest value, or a lowest value of pixel values of each sorted image, which is each frame, is calculated. The analyzer213receives the feature quantity Qf of each sorted image from the feature quantity calculator212, receives the information IDd identifying the middle image Dd from the temperature sorter211, and determines a high-temperature boundary frame Fu and a low-temperature boundary frame Fl. The analyzer213determines, as the high-temperature boundary frame Fu, an image having the largest feature quantity of the sorted images satisfying the condition that the feature quantity of the sorted image is larger than that of the middle image Dd and the difference (absolute value) in the feature quantity between the sorted image and the middle image Dd is smaller than the difference threshold FDt. When there is no sorted image satisfying the condition that the feature quantity of the sorted image is larger than that of the middle image Dd and the difference (absolute value) in the feature quantity is not smaller than the difference threshold FDt, an image having the largest feature quantity of the sorted images is determined as the high-temperature boundary frame Fu. 
The analyzer213further determines, as the low-temperature boundary frame Fl, an image having the smallest feature quantity of the sorted images satisfying the condition that the feature quantity of the sorted image is smaller than that of the middle image Dd and the difference (absolute value) in the feature quantity between the sorted image and the middle image Dd is smaller than the difference threshold FDt. When there is no sorted image satisfying the condition that the feature quantity of the sorted image is smaller than that of the middle image Dd and the difference (absolute value) in the feature quantity is not smaller than the difference threshold FDt, an image having the smallest feature quantity of the sorted images is determined as the low-temperature boundary frame Fl. The analyzer213outputs information IFu identifying the high-temperature boundary frame Fu and information IFl identifying the low-temperature boundary frame Fl. The difference threshold FDt may be stored in the storage device3or a parameter memory (not illustrated). The average image generator214receives, from the temperature sorter211, the sorted images Dc and the information Sdc indicating the ranks of the respective sorted images, receives, from the analyzer213, the information IFu identifying the high-temperature boundary frame Fu and the information IFl identifying the low-temperature boundary frame Fl, and generates the average image De. The average image generator214generates the average image De by averaging, in the frame direction, the pixel values of the images, which are frames, from the high-temperature boundary frame Fu to the low-temperature boundary frame Fl (including the high-temperature boundary frame Fu and low-temperature boundary frame Fl) of the sorted images Dc, which are multiple frames. “Averaging in the frame direction” refers to averaging the pixel values of the pixels at the same positions of images, which are multiple frames. In generating the average image De, by excluding the frames having feature quantities larger than the feature quantity of the high-temperature boundary frame Fu and the frames having feature quantities smaller than the feature quantity of the low-temperature boundary frame Fl, it is possible to prevent the average image from being affected by objects, in particular heat sources (high-temperature objects) or low-temperature objects, that appear temporarily. The objects that appear temporarily mentioned here include persons. The sharpening unit215sharpens the average image De to generate a sharpened image Df. Examples of the method of sharpening in the sharpening unit215include histogram equalization and a retinex method. FIG.3Aillustrates a configuration example of the sharpening unit215that performs the sharpening by histogram equalization. The sharpening unit215illustrated inFIG.3Ais formed by a histogram equalizer2151. The histogram equalizer2151performs histogram equalization on the average image De. Histogram equalization is a process of calculating a pixel value distribution of the entire image and changing the pixel values so that the pixel value distribution has a desired shape. The histogram equalization may be contrast limited adaptive histogram equalization. FIG.3Billustrates a configuration example of the sharpening unit215that performs the sharpening by a retinex method. The sharpening unit215illustrated inFIG.3Bincludes a filter separator2152, adjusters2153and2154, and a combiner2155. 
The filter separator 2152 separates the input average image De into a low-frequency component Del and a high-frequency component Deh. The adjuster 2153 multiplies the low-frequency component Del by a first gain to adjust the magnitudes of the pixel values. The adjuster 2154 multiplies the high-frequency component Deh by a second gain to adjust the magnitudes of the pixel values. The second gain is larger than the first gain. The combiner 2155 combines the outputs of the adjusters 2153 and 2154. The image resulting from the combination has an enhanced high-frequency component. The skeleton component extractor 216 extracts a skeleton component from the sharpened image Df output from the sharpening unit 215, and generates the skeleton image Dg formed by the extracted skeleton component. The skeleton component is a component representing the general structure of the image, and includes an edge component and a flat component (a slowly varying component) of the image. For the extraction of the skeleton component, a total variation norm minimization method can be used, for example. The background image generator 21 transmits the skeleton image Dg to the storage device 3, where it is stored as the background image. The image corrector 22 corrects the second thermal image Din2 output from the thermal image sensor 1 by using the skeleton image Dg stored in the storage device 3, and generates and outputs the corrected thermal image Dout. As described above, the second thermal image Din2 is obtained by imaging in the same field of view as the first thermal images Din1. The second thermal image Din2 may be obtained by imaging at a time different from those of the first thermal images Din1, or one frame of the first thermal images Din1 may be used as the second thermal image. In the example of FIG. 2, the image corrector 22 includes a superimposer 221. The superimposer 221 generates the corrected thermal image Dout by superimposing the skeleton image Dg on the second thermal image Din2. The superimposition is performed by, for example, weighted addition. To make the component of the skeleton image Dg sharper, it is possible to multiply the skeleton image Dg by a gain when adding it. Such weighted addition is represented by the following Equation (1):

PDout = PDin2 + PDg × g.   Equation (1)

In Equation (1), PDin2 is a pixel value of the second thermal image Din2, PDg is a pixel value of the skeleton image Dg, g is a gain for the skeleton image Dg, and PDout is a pixel value of the corrected thermal image Dout. The above is a description of the operation of the image processing device according to the first embodiment. The image processing device of the first embodiment generates the skeleton image, which has less noise and higher contrast, from the first thermal images (multiple frames), stores it as the background image, and combines the stored skeleton image with the second thermal image. Thus, the image processing device can generate a thermal image with a high S/N ratio and a high temporal resolution. Also, instead of simply using the average image as the background image, by extracting only the skeleton component of the average image, storing it as the background image, and adding it to the second thermal image, it is possible to add background structure information while preserving the temperature information of the second thermal image.
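The retinex-style sharpening of FIG. 3B and the Equation (1) superimposition might look roughly like the sketch below. This is illustrative only: the Gaussian blur from scipy.ndimage stands in for the filter separator 2152, the gains and sigma are arbitrary assumed values, and the total-variation extraction of the skeleton component is omitted, so a zero-mean sharpened image is used in place of the skeleton image Dg.

import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_sharpen(de, g_low=1.0, g_high=2.5, sigma=3.0):
    # FIG. 3B path: separate De into low/high-frequency parts, boost the high part.
    low = gaussian_filter(de, sigma=sigma)   # low-frequency component Del
    high = de - low                          # high-frequency component Deh
    return g_low * low + g_high * high       # sharpened image Df

def superimpose(din2, dg, gain=0.5):
    # Equation (1): PDout = PDin2 + PDg x g.
    return din2 + gain * dg

rng = np.random.default_rng(1)
de = rng.normal(25.0, 1.0, size=(16, 16))   # average image De (toy data)
df = retinex_sharpen(de)
dg = df - df.mean()                         # zero-mean stand-in for the skeleton image Dg
dout = superimpose(rng.normal(25.0, 1.0, size=(16, 16)), dg)
print(dout.shape)

Keeping the second gain larger than the first is what enhances the high-frequency structure relative to the slowly varying background, which is the design intent described for the combiner 2155.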
Also, in generating the background image, by excluding the thermal images, which are frames, satisfying the condition that the difference in the feature quantity between the thermal image and the middle image Dd is not smaller than the difference threshold FDt, it is possible to prevent the background image from being affected by objects, in particular heat sources (high-temperature objects) or low-temperature objects, that appear temporarily. Second Embodiment FIG.4is a functional block diagram of an image processing device2bof a second embodiment of the present invention. The image processing device2billustrated inFIG.4is generally the same as the image processing device2ofFIG.2, but includes a background image generator21band an image corrector22binstead of the background image generator21and image corrector22. The background image generator21bis generally the same as the background image generator21, but includes a threshold generator217. The image corrector22bis generally the same as the image corrector22, but includes a weight determiner222. The threshold generator217generates a threshold Th for weight determination, and transmits and stores the threshold Th to and in the storage device3. For example, the threshold generator217obtains an average (or mean) value or a middle (or intermediate or median) value of pixel values of the average image De output from the average image generator214, and determines the threshold Th on the basis of the calculated average value or middle value. The average value or middle value of pixel values of the average image De refers to an average value or middle value of the pixel values of the pixels of the entire average image De or the pixel values of the pixels located in a main portion of the average image De. The relationship of the threshold Th with the above average value or middle value is determined on the basis of experience or experiment (simulation). The threshold Th may be set to a value higher than the above average value or middle value. In this case, a value obtained by adding the difference threshold FDt to the above average value or middle value may be determined as the threshold Th. In such a case, the difference threshold FDt is also provided to the threshold generator217. The threshold generator217transmits and stores the generated threshold Th to and in the storage device3. The weight determiner222creates a weight table on the basis of the threshold Th stored in the storage device3, and generates a combination weight w on the basis of a pixel value of the second thermal image Din2with reference to the created weight table. FIGS.5A and5Billustrate examples of the weight table created by the weight determiner222on the basis of the threshold Th. In the examples illustrated inFIGS.5A and5B, in the range in which the pixel value PDin2of the second thermal image Din2is from 0 to the threshold Th, the combination weight w is kept at 1, and in the range in which the pixel value PDin2is larger than the threshold Th, the combination weight w gradually decreases as the pixel value PDin2increases. By using the weight table illustrated inFIG.5A or5B, it is possible to reduce a rate of the weighted addition of the skeleton image Dg only when the pixel value of the second thermal image Din2is higher than the threshold Th. In the above example, the weight determiner222creates the weight table by using the threshold Th. 
However, the weight table may be created without using the threshold Th.FIGS.5C and5Dillustrate examples of the weight table created without using the threshold Th. In the examples illustrated inFIGS.5C and5D, when the pixel value PDin2of the second thermal image Din2is 0, the combination weight w is 1, and the combination weight w gradually decreases as the pixel value PDin2increases. Even with such a weight table, it is possible to reduce the addition rate of the skeleton image Dg in a range in which the pixel value PDin2is large. In short, the weight table should be such that the combination weight w decreases as the pixel value of the second thermal image Din2increases. When the weight table illustrated inFIG.5C or5Dis created, the background image generator21bneed not include the threshold generator217(and thus may be the same as the background image generator21ofFIG.2), and the weight determiner222need not read the threshold Th from the storage device3. Third Embodiment FIG.6is a functional block diagram of an image processing device2cof a third embodiment of the present invention. The image processing device2cillustrated inFIG.6is generally the same as the image processing device2ofFIG.2, but includes a background image generator21cand an image corrector22cinstead of the background image generator21and image corrector22. The background image generator21cis generally the same as the background image generator21ofFIG.2, but does not include the skeleton component extractor216ofFIG.2and stores the sharpened image Df output from the sharpening unit215as the background image in the storage device3. The image corrector22creads the sharpened image Df stored in the storage device3, extracts a skeleton component to generate a skeleton image Dg, and corrects a second thermal image Din2by using the skeleton image Dg. Thus, in the image processing device2cillustrated inFIG.6, the extraction of the skeleton component is performed in the image corrector, not in the background image generator. Specifically, the image corrector22cincludes a skeleton component extractor223and a superimposer221. The skeleton component extractor223reads the sharpened image Df stored in the storage device3, and extracts the skeleton component to generate the skeleton image Dg. The superimposer221corrects the second thermal image Din2by superimposing the skeleton image Dg on the second thermal image Din2, and generates a corrected thermal image Dout. In the case of the configuration illustrated inFIG.6, by reading the sharpened image Df from the storage device3and displaying it, it is possible to easily determine whether the sharpened image Df includes a heat source. Fourth Embodiment FIG.7is a functional block diagram of an image processing device2dof a fourth embodiment of the present invention. The image processing device2dillustrated inFIG.7is generally the same as the image processing device2bofFIG.4, but includes a background image generator21dand an image corrector22dinstead of the background image generator21band image corrector22b. The background image generator21dis generally the same as the background image generator21b, but includes a threshold generator217dinstead of the threshold generator217. 
The threshold generator217dobtains an average (or mean) value or a middle (or intermediate or median) value of pixel values of the average image De output from the average image generator214, generates, in addition to the threshold Th for weight determination, a high-temperature threshold Tu and a low-temperature threshold Tl for image division, on the basis of the calculated average value or middle value, and transmits and stores the generated thresholds Th, Tu, and Tl to and in the storage device3. The high-temperature threshold Tu and low-temperature threshold Tl are used for image division. The high-temperature threshold Tu is obtained by adding the difference threshold FDt to the average value or middle value of the pixel values of the average image De. The low-temperature threshold Tl is obtained by subtracting the difference threshold FDt from the average value or middle value of the pixel values of the average image De. When the high-temperature threshold Tu is generated as described above, the threshold Th for weight determination may be the same as the high-temperature threshold Tu. The image corrector22ddivides a second thermal image Din2into a high-temperature region, an intermediate-temperature region, and a low-temperature region by using the high-temperature threshold Tu and low-temperature threshold Tl read from the storage device3, generates a color image Dh by coloring each region and combining them, and generates and outputs a corrected thermal image Dout by combining the color image Dh and the skeleton image Dg taken from the storage device3. The corrected thermal image Dout in this case is a color image colored according to the temperature of each part. The image corrector22dincludes a weight determiner222, a coloring unit224, and a superimposer221d. The weight determiner222creates a weight table and determines a weight as described regarding the configuration ofFIG.4. When creating the weight table illustrated inFIG.5A or5B, it is necessary to use the threshold Th. As described above, the threshold Th may be the same as the high-temperature threshold Tu. In this case, the high-temperature threshold Tu stored in the storage device3can be read and used as the threshold Th for creating the weight table. When the weight table illustrated inFIG.5A or5Bis created, it is possible to reduce a rate of the weighted addition of the skeleton image Dg only when the pixel value of the second thermal image Din2is higher than the threshold Th. When the threshold Th is the same as the high-temperature threshold Tu, it is possible to reduce the addition rate of the skeleton image Dg only when the pixel value PDin2of the second thermal image Din2belongs to the high-temperature region. As described regarding the configuration ofFIG.4, the weight table may be as illustrated inFIG.5C or5D. In short, the weight table should be such that the combination weight w decreases as the pixel value of the second thermal image Din2increases. As illustrated inFIGS.8and9, the coloring unit224divides the second thermal image Din2into the high-temperature region, intermediate-temperature region, and low-temperature region by using the high-temperature threshold Tu and low-temperature threshold Tl, colors each region, and combines the colored images, thereby generating the color image Dh. The color image Dh is represented by, for example, red (R), green (G), and blue (B) signals. 
Specifically, each pixel of the second thermal image Din2 is determined to belong to the high-temperature region when the pixel value is higher than the high-temperature threshold Tu, to the intermediate-temperature region when the pixel value is not higher than the high-temperature threshold Tu and not lower than the low-temperature threshold Tl, and to the low-temperature region when the pixel value is lower than the low-temperature threshold Tl. In the example illustrated in FIG. 8, pixels constituting a light emitting portion 101 of a street light, an automobile 103, and a person 105 are determined to belong to the high-temperature region, pixels constituting a road marking 107 on a road are determined to belong to the intermediate-temperature region, and pixels constituting a support post 109 of the street light are determined to belong to the low-temperature region. The coloring unit 224 assigns colors in different ranges, i.e., first, second, and third ranges, to the high-temperature region, intermediate-temperature region, and low-temperature region, and in each region assigns, to each pixel, a color corresponding to the pixel value from the range assigned to that region. At this time, it is preferable to perform the assignment of colors to the high-temperature region, intermediate-temperature region, and low-temperature region and the assignment of colors corresponding to the pixel values so that the color changes continuously in the boundary portions between the high-temperature region, intermediate-temperature region, and low-temperature region. For example, as illustrated in FIG. 9, a hue range centered around red (e.g., from a center (center in the hue direction) of magenta to a center of yellow) is assigned to the high-temperature region, a hue range centered around green (from a center of yellow to a center of cyan) is assigned to the intermediate-temperature region, and a hue range centered around blue (from a center of cyan to a center of magenta) is assigned to the low-temperature region. In each region, a color in the assigned hue range is assigned to each pixel value. The superimposer 221d weights and adds the color image Dh and the skeleton image Dg read from the storage device 3 by using the combination weights w. The color image Dh is represented by R, G, and B signals, which are signals of three channels, whereas the skeleton image Dg is represented by a single-channel gray signal. The skeleton image Dg is added to a luminance component Dhy of the color image Dh. In an example of the process, the values of the R, G, and B components of the corrected thermal image are obtained by transforming the color image Dh into the luminance component Dhy and a chrominance component, e.g., color difference components Dhcb and Dhcr, adding the skeleton image Dg to the luminance component Dhy, and inversely transforming the luminance component Djy after the addition and the chrominance component, e.g., the color difference components Dhcb and Dhcr, into R, G, and B. The addition of the skeleton image Dg is represented by the following equation:

PDjy = PDhy + PDg × g × w.   Equation (2)

In Equation (2), PDhy is a value of the luminance component Dhy of the color image Dh, PDg is a pixel value of the skeleton image Dg, g is the gain for the skeleton image Dg, w is the combination weight, and PDjy is a value of the luminance component Djy resulting from the addition.
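A loose, self-contained stand-in for the coloring unit 224, the weight table, and the Equation (2) addition to the luminance component is sketched below. The flat red/green/blue coloring, the linear weight roll-off, and the BT.601 luma with a simple per-pixel rescaling in place of a full Y/Cb/Cr inverse transform are all simplifications and assumptions, not the disclosed processing.

import numpy as np

def combination_weight(pdin2, th, rolloff=20.0, w_min=0.1):
    # FIG. 5A-style table: w = 1 up to Th, then decreasing as the pixel value rises.
    return np.clip(1.0 - (pdin2 - th) / rolloff, w_min, 1.0)

def colorize(din2, tu, tl):
    # Split Din2 into high / intermediate / low-temperature regions and paint them
    # with flat colors (the patent assigns a continuous hue range per region).
    rgb = np.zeros(din2.shape + (3,))
    rgb[din2 > tu] = (1.0, 0.0, 0.0)
    rgb[(din2 <= tu) & (din2 >= tl)] = (0.0, 1.0, 0.0)
    rgb[din2 < tl] = (0.0, 0.0, 1.0)
    return rgb

def add_skeleton_to_luminance(rgb, dg, w, gain=0.3):
    # Equation (2): PDjy = PDhy + PDg x g x w, applied to a BT.601 luma stand-in.
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    y_out = y + gain * w * dg
    scale = y_out / np.maximum(y, 1e-6)   # crude stand-in for the inverse transform
    return np.clip(rgb * scale[..., None], 0.0, 1.0)

din2 = np.array([[20.0, 30.0], [40.0, 50.0]])
dg = np.array([[0.1, -0.1], [0.2, 0.0]])
w = combination_weight(din2, th=45.0)
out = add_skeleton_to_luminance(colorize(din2, tu=45.0, tl=25.0), dg, w)
print(out.shape)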
In another example of the process, when the color image Dh is constituted by signals of three channels of R, G, and B, the skeleton image Dg is added to each channel. The addition in this case is represented by the following Equations (3a) to (3c):

PRout = PRin + PDg × g × w,   Equation (3a)
PGout = PGin + PDg × g × w,   Equation (3b)
PBout = PBin + PDg × g × w.   Equation (3c)

In Equations (3a) to (3c), PRin is a value of the R channel signal Rin of the color image Dh (a value of the red component), PGin is a value of the G channel signal Gin of the color image Dh (a value of the green component), PBin is a value of the B channel signal Bin of the color image Dh (a value of the blue component), PDg is a pixel value of the skeleton image Dg, g is the gain for the skeleton image Dg, w is the combination weight, PRout is a value of the R channel signal Rout (a value of the red component) resulting from the addition, PGout is a value of the G channel signal Gout (a value of the green component) resulting from the addition, and PBout is a value of the B channel signal Bout (a value of the blue component) resulting from the addition. The above is a description of the operation of the image processing device according to the fourth embodiment. The fourth embodiment provides the same advantages as the first embodiment. In addition, in the fourth embodiment, the luminance component of the color image generated by coloring the second thermal image is combined with the skeleton image Dg. Thus, it is possible to visually separate the information indicating heat sources from the skeleton image, and to improve the visibility of heat sources. Specifically, when the second thermal image Din2 and the skeleton image Dg are combined without coloring, information indicating heat sources included in the second thermal image Din2 may be buried in the skeleton image Dg. Such a situation can be avoided by coloring the second thermal image. Also, using the weight table illustrated in FIG. 5A or 5B provides the advantage that, in correcting the image, when the pixel value of the second thermal image Din2 is not higher than the threshold Th, the combination weight is made large, so that the second thermal image Din2 is sufficiently corrected with the skeleton image Dg, and when the pixel value of the second thermal image Din2 is higher than the threshold Th, the combination weight is made small, which prevents the skeleton image Dg from being added at a great rate to regions of the second thermal image Din2 in which heat sources are present, and improves the visibility. Also, instead of fixing the assignment of colors to the pixel values regardless of the overall brightness (e.g., the average value of the pixel values) of the second thermal image Din2, by dividing the second thermal image Din2 into the high-temperature region, intermediate-temperature region, and low-temperature region, assigning different colors to the regions, and performing the coloring, it is possible to always represent the portion of the image having a relatively high temperature with the high-temperature color (the color assigned to the high temperature) and the portion having a relatively low temperature with the low-temperature color (the color assigned to the low temperature). For example, when the thermal image has a temperature offset, if the assignment of colors to the pixel values is fixed, a low-temperature region may be colored with a color representing an intermediate temperature. Such a situation is advantageously prevented.
Specifically, in the case of displaying a thermal image in color, for example, displaying high-temperature objects in red, low-temperature objects in blue, and intermediate-temperature objects in green is one of the typical coloring methods. However, when the thermal image has a temperature offset, the range from the low temperature to the intermediate temperature may be displayed in green, for example. Such a situation can be prevented by dividing the thermal image into the high-temperature region, intermediate-temperature region, and low-temperature region, and then coloring each region. In the fourth embodiment, as with the first embodiment, the sharpening and the extraction of the skeleton component are performed in the background image generator21d, the skeleton image is stored in the storage device3, and in the image corrector22d, the skeleton image stored in the storage device3is read and used to correct the second thermal image. Also in the fourth embodiment, as described in the third embodiment, it is possible that in the background image generator21d, the sharpened image Df obtained by the sharpening is stored in the storage device3, and in the image corrector22d, the sharpened image Df stored in the storage device3is read, the skeleton image is generated by extracting the skeleton component, and the generated skeleton image is used to correct the second thermal image. Also, in the fourth embodiment, it is possible to omit the weight determiner222and perform the weighted addition using a combination weight of a constant value. Fifth Embodiment FIG.10is a functional block diagram of an image processing device2eof a fifth embodiment. The image processing device2eillustrated inFIG.10is generally the same as the image processing device2ofFIG.2, but includes a background image generator21einstead of the background image generator21. The background image generator21eis generally the same as the background image generator21, but includes a temperature sorter211e, a feature quantity calculator212e, an analyzer213e, and an average image generator214einstead of the temperature sorter211, feature quantity calculator212, analyzer213, and average image generator214. The feature quantity calculator212ecalculates a feature quantity Qe serving as an indicator of brightness, for each of the first thermal images Din1, which are multiple frames, i.e., for each first thermal image, which is each frame. As the feature quantity Qe, an average (or mean) value, a middle (or intermediate or median) value, a highest value, or a lowest value of pixel values of each frame is calculated. The temperature sorter211ereceives the feature quantities Qe calculated by the feature quantity calculator212e, and ranks the first thermal images Din1, which are multiple frames, in order of magnitude of the feature quantities Qe. In ranking them, it is possible to rank them from the highest to the lowest (in descending order) or from the lowest to the highest (in ascending order) of the feature quantities. The temperature sorter211efurther determines, as a middle image Dd, a first thermal image Din1located at a middle, i.e., having a middle rank, when the first thermal images Din1are ranked in order of magnitude of the feature quantities Qe. The temperature sorter211eoutputs information Sdin indicating the respective ranks of the first thermal images Din1, which are multiple frames. The temperature sorter211ealso outputs information IDd identifying the middle image Dd. 
The analyzer213ereceives the feature quantities Qe of the respective first thermal images from the feature quantity calculator212e, receives the information IDd identifying the middle image Dd from the temperature sorter211e, and determines a high-temperature boundary frame Fu and a low-temperature boundary frame Fl. The analyzer213edetermines, as the high-temperature boundary frame Fu, an image having the largest feature quantity of the first thermal images satisfying the condition that the feature quantity of the first thermal image is larger than that of the middle image Dd and the difference (absolute value) in the feature quantity between the first thermal image and the middle image Dd is smaller than the difference threshold FDt. When there is no first thermal image satisfying the condition that the feature quantity of the first thermal image is larger than that of the middle image Dd and the difference (absolute value) in the feature quantity is not smaller than the difference threshold FDt, one of the first thermal images having the largest feature quantity is determined as the high-temperature boundary frame Fu. The analyzer213efurther determines, as the low-temperature boundary frame Fl, an image having the smallest feature quantity of the first thermal images satisfying the condition that the feature quantity of the first thermal image is smaller than that of the middle image Dd and the difference (absolute value) in the feature quantity between the first thermal image and the middle image Dd is smaller than the difference threshold FDt. When there is no first thermal image satisfying the condition that the feature quantity of the first thermal image is smaller than that of the middle image Dd and the difference (absolute value) in the feature quantity is not smaller than the difference threshold FDt, one of the first thermal images having the smallest feature quantity is determined as the low-temperature boundary frame Fl. The analyzer213eoutputs information IFu identifying the high-temperature boundary frame Fu and information IFl identifying the low-temperature boundary frame Fl. As described in the first embodiment, the difference threshold FDt may be stored in the storage device3or a parameter memory (not illustrated). The average image generator214ereceives the input first thermal images Din1, receives the information Sdin indicating the ranks of the respective first thermal images, which are frames, from the temperature sorter211e, receives the information IFu identifying the high-temperature boundary frame Fu and the information IFl identifying the low-temperature boundary frame Fl from the analyzer213e, and generates an average image De. The average image generator214egenerates the average image De by averaging, in the frame direction, the pixel values of the images, which are frames, from the high-temperature boundary frame Fu to the low-temperature boundary frame Fl (including the high-temperature boundary frame Fu and low-temperature boundary frame Fl) of the first thermal images Din1, which are multiple frames. In generating the average image De, by excluding the frames having feature quantities larger than the feature quantity of the high-temperature boundary frame Fu and the frames having feature quantities smaller than the feature quantity of the low-temperature boundary frame Fl, it is possible to prevent the average image from being affected by objects, in particular heat sources (high-temperature objects) or low-temperature objects, that appear temporarily. 
The objects that appear temporarily mentioned here include persons. The processes in the sharpening unit215and skeleton component extractor216are the same as those described in the first embodiment. As described above, the background image generator21ecalculates the feature quantity Qe for each of the first thermal images Din1, which are multiple frames, generates the average image De by averaging, in the frame direction, the thermal images satisfying the condition that the difference in the feature quantity between the thermal image and the thermal image located at the middle when the thermal images, which are multiple frames, are ranked in order of magnitude of the feature quantities Qe is smaller than the predetermined threshold (difference threshold) FDt, generates the skeleton image Dg by sharpening the average image and then extracting the skeleton component, and stores the generated skeleton image Dg as the background image in the storage device3. The image corrector22is the same and operates in the same manner as the image corrector22of the first embodiment. In the fifth embodiment, since temperature sort is performed according to the feature quantity of each frame, the process is relatively simple. In addition to the above-described modifications, various modifications can be made to the image processing device of each embodiment described above. Also, it is possible to combine features of each embodiment with features of other embodiments. For example, although the second embodiment has been described as a modification to the first embodiment, the same modification can be applied to the third embodiment. Also, although the fifth embodiment has been described as a modification to the first embodiment, the same modification can be applied to the second to fourth embodiments. The image processing device2,2b,2c,2d, or2edescribed in the first to fifth embodiments may be partially or wholly formed by processing circuitry. For example, the functions of the respective portions of the image processing device may be implemented by respective separate processing circuits, or the functions of the portions may be implemented by a single processing circuit. The processing circuitry may be implemented by dedicated hardware, or by software or a programmed computer. It is possible that a part of the functions of the respective portions of the image processing device is implemented by dedicated hardware and another part is implemented by software. FIG.11illustrates an example of a configuration in the case of implementing all the functions of the image processing device2,2b,2c,2d, or2eof the above embodiments with a computer300including a single processor, together with the thermal image sensor1, storage device3, and display terminal4. In the illustrated example, the computer300includes a processor310and a memory320. A program for implementing the functions of the respective portions of the image processing device is stored in the memory320or storage device3. The processor310uses, for example, a central processing unit (CPU), a graphics processing unit (GPU), a microprocessor, a microcontroller, a digital signal processor (DSP), or the like. The memory320uses, for example, a semiconductor memory, such as a random access memory (RAM), a read only memory (ROM), a flash memory, an erasable programmable read only memory (EPROM), or an electrically erasable programmable read only memory (EEPROM), a magnetic disk, an optical disk, a magnetic optical disk, or the like. 
The processor310implements the function of the image processing device by executing the program stored in the memory320or storage device3. When the program is stored in the storage device3, it may be executed after being loaded into the memory320once. The function of the image processing device includes control of display on the display terminal4, writing of information to the storage device3, and reading of information from the storage device3, as described above. The above processing circuitry may be attached to the thermal image sensor1. Thus, the image processing device2,2b,2c,2d, or2emay be implemented by processing circuitry attached to the thermal image sensor. Alternatively, the image processing device2,2b,2c,2d, or2emay be implemented on a cloud server connectable to the thermal image sensor1via a communication network. Also, the storage device3may be a storage area on a server on a cloud. At least one of the image processing device and storage device may be implemented in a communication mobile terminal, such as a smartphone or a remote controller. The thermal image generation system including the image processing device may be applied to a home appliance, and in this case, at least one of the image processing device and storage device may be implemented in a home energy management system (HEMS) controller. The display terminal may also be implemented in a communication terminal, such as a smartphone or a home energy management system (HEMS) controller. Image processing devices and thermal image generation systems including image processing devices of the present invention have been described above. The image processing methods implemented by the above image processing devices also form part of the present invention. Programs for causing computers to execute processes of the above image processing devices or image processing methods and computer-readable recording media storing the programs also form part of the present invention. Although embodiments of the present invention have been described, the present invention is not limited to these embodiments. REFERENCE SIGNS LIST 1thermal image sensor,2,2b,2c,2d,2eimage processing device,3storage device,4display terminal,21,21b,21c,21d,21ebackground image generator,22,22b,22c,22dimage corrector,211,211etemperature sorter,212,212efeature calculator,213,213eanalyzer,214,214eaverage image generator,215sharpening unit,216skeleton component extractor,217,217dthreshold generator,221,221dsuperimposer,222weight determiner,223skeleton component extractor,224coloring unit. | 43,930 |
11861848 | To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized on other embodiments without specific recitation. DETAILED DESCRIPTION Player tracking data has been an invaluable resource for leagues and teams to evaluate not only the team itself, but the players on the team. Conventional approaches to harvesting or generating player tracking data are limited, however, relied on installing fixed cameras in a venue in which a sporting event would take place. In other words, conventional approaches for a team to harvest or generate player tracking data required that team to equip each venue with a fixed camera system. As those skilled in the art recognize, this constraint has severely limited the scalability of player tracking systems. Further, this constraint also limits player tracking data to matches played after installation of the fixed camera system, as historical player tracking data would simply be unavailable. The one or more techniques describe herein provide a significant improvement over conventional systems by eliminating the need for a fixed camera system. Instead, the one or more techniques described herein are directed to leveraging the broadcast video feed of a sporting event to generate player tracking data. By utilizing the broadcast video feed of the sporting event, not only is the need for a dedicated fixed camera system in each arena eliminated, but generating historical player tracking data from historical sporting events would now be possible. Leveraging the broadcast video feed of the sporting event is not, however, a trivial task. For example, included in a broadcast video feed may a variety of different camera angles, close-ups of players, close-ups of the crowd, close-ups of the coach, video of the commentators, commercials, halftime shows, and the like. Accordingly, to address these issues, one or more techniques described herein are directed to clipping or partitioning a broadcast video feed into its constituent parts, e.g., different scenes in a movie or commercials from a basketball game. By clipping or partitioning the broadcast video feed into its constituent parts, the overall system may better understand the information presented to it so that the system can more effectively extract data from the underlying broadcast video feed. FIG.1is a block diagram illustrating a computing environment100, according to example embodiments. Computing environment100may include camera system102, organization computing system104, and one or more client devices108communicating via network105. Network105may be of any suitable type, including individual connections via the Internet, such as cellular or Wi-Fi networks. In some embodiments, network105may connect terminals, services, and mobile devices using direct connections, such as radio frequency identification (RFID), near-field communication (NFC), Bluetooth™, low-energy Bluetooth™ (BLE), Wi-Fi™ ZigBee™, ambient backscatter communication (ABC) protocols, USB, WAN, or LAN. Because the information transmitted may be personal or confidential, security concerns may dictate one or more of these types of connection be encrypted or otherwise secured. In some embodiments, however, the information being transmitted may be less personal, and therefore, the network connections may be selected for convenience over security. 
Network105may include any type of computer networking arrangement used to exchange data or information. For example, network105may be the Internet, a private data network, virtual private network using a public network and/or other suitable connection(s) that enables components in computing environment100to send and receive information between the components of environment100. Camera system102may be positioned in a venue106. For example, venue106may be configured to host a sporting event that includes one or more agents112. Camera system102may be configured to capture the motions of all agents (i.e., players) on the playing surface, as well as one or more other objects of relevance (e.g., ball, referees, etc.). In some embodiments, camera system102may be an optically-based system using, for example, a plurality of fixed cameras. For example, a system of six stationary, calibrated cameras, which project the three-dimensional locations of players and the ball onto a two-dimensional overhead view of the court may be used. In another example, a mix of stationary and non-stationary cameras may be used to capture motions of all agents on the playing surface as well as one or more objects of relevance. As those skilled in the art recognize, utilization of such camera system (e.g., camera system102) may result in many different camera views of the court (e.g., high sideline view, free-throw line view, huddle view, face-off view, end zone view, etc.). Generally, camera system102may be utilized for the broadcast feed of a given match. Each frame of the broadcast feed may be stored in a game file110. Camera system102may be configured to communicate with organization computing system104via network105. Organization computing system104may be configured to manage and analyze the broadcast feed captured by camera system102. Organization computing system104may include at least a web client application server114, a data store118, an auto-clipping agent120, a data set generator122, a camera calibrator124, a player tracking agent126, and an interface agent128. Each of auto-clipping agent120, data set generator122, camera calibrator124, player tracking agent126, and interface agent128may be comprised of one or more software modules. The one or more software modules may be collections of code or instructions stored on a media (e.g., memory of organization computing system104) that represent a series of machine instructions (e.g., program code) that implements one or more algorithmic steps. Such machine instructions may be the actual computer code the processor of organization computing system104interprets to implement the instructions or, alternatively, may be a higher level of coding of the instructions that is interpreted to obtain the actual computer code. The one or more software modules may also include one or more hardware components. One or more aspects of an example algorithm may be performed by the hardware components (e.g., circuitry) itself, rather as a result of the instructions. Data store118may be configured to store one or more game files124. Each game file124may include the broadcast data of a given match. For example, the broadcast data may a plurality of video frames captured by camera system102. Auto-clipping agent120may be configured parse the broadcast feed of a given match to identify a unified view of the match. In other words, auto-clipping agent120may be configured to parse the broadcast feed to identify all frames of information that are captured from the same view. 
In one example, such as in the sport of basketball, the unified view may be a high sideline view. Auto-clipping agent120may clip or segment the broadcast feed (e.g., video) into its constituent parts (e.g., difference scenes in a movie, commercials from a match, etc.). To generate a unified view, auto-clipping agent120may identify those parts that capture the same view (e.g., high sideline view). Accordingly, auto-clipping agent120may remove all (or a portion) of untrackable parts of the broadcast feed (e.g., player close-ups, commercials, half-time shows, etc.). The unified view may be stored as a set of trackable frames in a database. Data set generator122may be configured to generate a plurality of data sets from the trackable frames. In some embodiments, data set generator122may be configured to identify body pose information. For example, data set generator122may utilize body pose information to detect players in the trackable frames. In some embodiments, data set generator122may be configured to further track the movement of a ball or puck in the trackable frames. In some embodiments, data set generator122may be configured to segment the playing surface in which the event is taking place to identify one or more markings of the playing surface. For example, data set generator122may be configured to identify court (e.g., basketball, tennis, etc.) markings, field (e.g., baseball, football, soccer, rugby, etc.) markings, ice (e.g., hockey) markings, and the like. The plurality of data sets generated by data set generator122may be subsequently used by camera calibrator124for calibrating the cameras of each camera system102. Camera calibrator124may be configured to calibrate the cameras of camera system102. For example, camera calibrator124may be configured to project players detected in the trackable frames to real world coordinates for further analysis. Because cameras in camera systems102are constantly moving in order to focus on the ball or key plays, such cameras are unable to be pre-calibrated. Camera calibrator124may be configured to improve or optimize player projection parameters using a homography matrix. Player tracking agent126may be configured to generate tracks for each player on the playing surface. For example, player tracking agent126may leverage player pose detections, camera calibration, and broadcast frames to generate such tracks. In some embodiments, player tracking agent126may further be configured to generate tracks for each player, even if, for example, the player is currently out of a trackable frame. For example, player tracking agent126may utilize body pose information to link players that have left the frame of view. Interface agent128may be configured to generate one or more graphical representations corresponding to the tracks for each player generated by player tracking agent126. For example, interface agent128may be configured to generate one or more graphical user interfaces (GUIs) that include graphical representations of player tracking each prediction generated by player tracking agent126. Client device108may be in communication with organization computing system104via network105. Client device108may be operated by a user. For example, client device108may be a mobile device, a tablet, a desktop computer, or any computing system having the capabilities described herein. 
Users may include, but are not limited to, individuals such as, for example, subscribers, clients, prospective clients, or customers of an entity associated with organization computing system104, such as individuals who have obtained, will obtain, or may obtain a product, service, or consultation from an entity associated with organization computing system104. Client device108may include at least application132. Application132may be representative of a web browser that allows access to a website or a stand-alone application. Client device108may access application132to access one or more functionalities of organization computing system104. Client device108may communicate over network105to request a webpage, for example, from web client application server114of organization computing system104. For example, client device108may be configured to execute application132to access content managed by web client application server114. The content that is displayed to client device108may be transmitted from web client application server114to client device108, and subsequently processed by application132for display through a graphical user interface (GUI) of client device108. FIG.2is a block diagram illustrating a computing environment200, according to example embodiments. As illustrated, computing environment200includes auto-clipping agent120, data set generator122, camera calibrator124, and player tracking agent126communicating via network205. Network205may be of any suitable type, including individual connections via the Internet, such as cellular or Wi-Fi networks. In some embodiments, network205may connect terminals, services, and mobile devices using direct connections, such as radio frequency identification (RFID), near-field communication (NFC), Bluetooth™, low-energy Bluetooth™ (BLE), Wi-Fi™ ZigBee™, ambient backscatter communication (ABC) protocols, USB, WAN, or LAN. Because the information transmitted may be personal or confidential, security concerns may dictate one or more of these types of connection be encrypted or otherwise secured. In some embodiments, however, the information being transmitted may be less personal, and therefore, the network connections may be selected for convenience over security. Network205may include any type of computer networking arrangement used to exchange data or information. For example, network205may be the Internet, a private data network, virtual private network using a public network and/or other suitable connection(s) that enables components in computing environment200to send and receive information between the components of environment200. Auto-clipping agent120may include principal component analysis (PCA) agent202, clustering model204, and neural network206. As recited above, when trying to understand and extract data from a broadcast feed, auto-clipping agent120may be used to clip or segment the video into its constituent parts. In some embodiments, auto-clipping agent120may focus on separating a predefined, unified view (e.g., a high sideline view) from all other parts of the broadcast stream. PCA agent202may be configured to utilize a PCA analysis to perform per frame feature extraction from the broadcast feed. For example, given a pre-recorded video, PCA agent202may extract a frame every X-seconds (e.g., 10 seconds) to build a PCA model of the video. 
In some embodiments, PCA agent202may generate the PCA model using incremental PCA, through which PCA agent202may select a top subset of components (e.g., top 120 components) to generate the PCA model. PCA agent202may be further configured to extract one frame every X seconds (e.g., one second) from the broadcast stream and compress the frames using PCA model. In some embodiments, PCA agent202may utilize PCA model to compress the frames into 120-dimensional form. For example, PCA agent202may solve for the principal components in a per video manner and keep the top 100 components per frame to ensure accurate clipping. Clustering model204may be configured to cluster the top subset of components into clusters. For example, clustering model204may be configured to center, normalize, and cluster the top 120 components into a plurality of clusters. In some embodiments, for clustering of compressed frames, clustering model204may implement k-means clustering. In some embodiments, clustering model204may set k=9 clusters. K-means clustering attempts to take some data $x=\{x_1, x_2, \ldots, x_n\}$ and divide it into $k$ subsets, $S=\{S_1, S_2, \ldots, S_k\}$, by optimizing: $\arg\min_{S}\sum_{j}^{k}\sum_{x\in S_j}\left\|x-\mu_j\right\|^2$, where $\mu_j$ is the mean of the data in the set $S_j$. In other words, clustering model204attempts to find clusters with the smallest intra-cluster variance using k-means clustering techniques. Clustering model204may label each frame with its respective cluster number (e.g., cluster 1, cluster 2, . . . , cluster k). Neural network206may be configured to classify each frame as trackable or untrackable. A trackable frame may be representative of a frame that captures the unified view (e.g., high sideline view). An untrackable frame may be representative of a frame that does not capture the unified view. To train neural network206, an input data set that includes thousands of frames, pre-labeled as trackable or untrackable and run through the PCA model, may be used. Each compressed frame and label pair (i.e., cluster number and trackable/untrackable) may be provided to neural network206for training. In some embodiments, neural network206may include four layers. The four layers may include an input layer, two hidden layers, and an output layer. In some embodiments, input layer may include 120 units. In some embodiments, each hidden layer may include 240 units. In some embodiments, output layer may include two units. The input layer and each hidden layer may use sigmoid activation functions. The output layer may use a SoftMax activation function. To train neural network206, auto-clipping agent120may reduce (e.g., minimize) the binary cross-entropy loss between the predicted label $\hat{y}_j$ for sample $j$ and the true label $y_j$, given by: $H=-\frac{1}{N}\sum_{j}^{N}\left[y_j\log\hat{y}_j+(1-y_j)\log(1-\hat{y}_j)\right]$. Accordingly, once trained, neural network206may be configured to classify each frame as untrackable or trackable. As such, each frame may have two labels: a cluster number and trackable/untrackable classification. Auto-clipping agent120may utilize the two labels to determine if a given cluster is deemed trackable or untrackable. For example, if auto-clipping agent120determines that a threshold number of frames in a cluster are considered trackable (e.g., 80%), auto-clipping agent120may conclude that all frames in the cluster are trackable. Further, if auto-clipping agent120determines that less than a threshold number of frames in a cluster are considered trackable (e.g., 30% and below), auto-clipping agent120may conclude that all frames in the cluster are untrackable.
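A minimal sketch of a frame classifier with the layer sizes, activations, and binary cross-entropy loss described above is given below; PyTorch is an assumed implementation choice, not necessarily the one used by neural network206.

```python
# Hedged sketch of the trackable/untrackable frame classifier described above.
# PyTorch is an assumed implementation choice; layer sizes follow the text.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(120, 240), nn.Sigmoid(),   # 120-unit input -> hidden layer 1
            nn.Linear(240, 240), nn.Sigmoid(),   # hidden layer 2
            nn.Linear(240, 2),                   # two-unit output (softmax below)
        )

    def forward(self, x):
        return torch.softmax(self.net(x), dim=-1)

def train_step(model, optimizer, compressed_frames, labels):
    # compressed_frames: (batch, 120) PCA-compressed frames.
    # labels: float tensor, 1.0 for trackable, 0.0 for untrackable.
    probs = model(compressed_frames)[:, 1]
    loss = nn.functional.binary_cross_entropy(probs, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```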
Still further, if auto-clipping agent120determines that a certain number of frames in a cluster are considered trackable (e.g., between 30% and 80%), auto-clipping agent120may request that an administrator further analyze the cluster. Once each frame is classified, auto-clipping agent120may clip or segment the trackable frames. Auto-clipping agent120may store the segments of trackable frames in database205associated therewith. Data set generator122may be configured to generate a plurality of data sets from auto-clipping agent120. As illustrated, data set generator122may include pose detector212, ball detector214, and playing surface segmenter216. Pose detector212may be configured to detect players within the broadcast feed. Data set generator122may provide, as input, to pose detector212both the trackable frames stored in database205as well as the broadcast video feed. In some embodiments, pose detector212may implement OpenPose to generate body pose data to detect players in the broadcast feed and the trackable frames. In some embodiments, pose detector212may implement sensors positioned on players to capture body pose information. Generally, pose detector212may use any means to obtain body pose information from the broadcast video feed and the trackable frame. The output from pose detector212may be pose data stored in database215associated with data set generator122. Ball detector214may be configured to detect and track the ball (or puck) within the broadcast feed. Data set generator122may provide, as input, to ball detector214both the trackable frames stored in database205and the broadcast video feed. In some embodiments, ball detector214may utilize a Faster Region-based Convolutional Neural Network (Faster R-CNN) to detect and track the ball in the trackable frames and broadcast video feed. Faster R-CNN is a region proposal based network. Faster R-CNN uses a convolutional neural network to propose a region of interest, and then classifies the object in each region of interest. Because it is a single unified network, the regions of interest and the classification steps may improve each other, thus allowing the classification to handle objects of various sizes. The output from ball detector214may be ball detection data stored in database215associated with data set generator122. Playing surface segmenter216may be configured to identify playing surface markings in the broadcast feed. Data set generator122may provide, as input, to playing surface segmenter216both trackable frames stored in database205and the broadcast video feed. In some embodiments, playing surface segmenter216may be configured to utilize a neural network to identify playing surface markings. The output from playing surface segmenter216may be playing surface markings stored in database215associated with data set generator122. Camera calibrator124may be configured to address the issue of moving camera calibration in sports. Camera calibrator124may include spatial transformer network (STN)224and optical flow module226. Camera calibrator124may receive, as input, segmented playing surface information generated by playing surface segmenter216, the trackable clip information, and pose information. Given such inputs, camera calibrator124may be configured to project coordinates in the image frame to real-world coordinates for tracking analysis. Keyframe matching module224may receive, as input, output from playing surface segmenter216and a set of templates.
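Before keyframe matching is elaborated in the next paragraph, the following hedged sketch illustrates the kind of off-the-shelf Faster R-CNN detector that ball detector214is described as utilizing; torchvision's pretrained COCO model is an assumption for illustration only, and a production ball detector would be trained for the specific ball or puck class.

```python
# Hedged sketch: candidate detection with an off-the-shelf Faster R-CNN.
# torchvision's pretrained COCO model is an assumption for illustration; the
# described ball detector 214 would be trained for its own ball/puck class.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

def detect_candidates(frame_tensor, score_thresh=0.5):
    # frame_tensor: float tensor of shape (3, H, W), values in [0, 1].
    with torch.no_grad():
        out = model([frame_tensor])[0]
    keep = out["scores"] > score_thresh
    return out["boxes"][keep], out["labels"][keep], out["scores"][keep]
```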
For each frame, keyframe matching module224may match the output from playing surface segmenter216to a template. Those frames that are able to match to a given template are considered keyframes. In some embodiments, keyframe matching module224may implement a neural network to match the one or more frames. In some embodiments, keyframe matching module224may implement cross-correlation to match the one or more frames. Spatial transformer network (STN)224may be configured to receive, as input, the identified keyframes from keyframe matching module224. STN224may implement a neural network to fit a playing surface model to segmentation information of the playing surface. By fitting the playing surface model to such output, STN224may generate homography matrices for each keyframe. Optical flow module226may be configured to identify the pattern of motion of objects from one trackable frame to another. In some embodiments, optical flow module226may receive, as input, trackable frame information and body pose information for players in each trackable frame. Optical flow module226may use body pose information to remove players from the trackable frame information. Once removed, optical flow module226may determine the motion between frames to identify the motion of a camera between successive frames. In other words, optical flow module226may identify the flow field from one frame to the next. Optical flow module226and STN224may work in conjunction to generate a homography matrix. For example, optical flow module226and STN224may generate a homography matrix for each trackable frame, such that a camera may be calibrated for each frame. The homography matrix may be used to project the track or position of players into real-world coordinates. For example, the homography matrix may indicate a 2-dimensional to 2-dimensional transform, which may be used to project the players' locations from image coordinates to the real world coordinates on the playing surface. Player tracking agent126may be configured to generate a track for each player in a match. Player tracking agent126may include neural network232and re-identification agent234. Player tracking agent126may receive, as input, trackable frames, pose data, calibration data, and broadcast video frames. In a first phase, player tracking agent126may match pairs of player patches, which may be derived from pose information, based on appearance and distance. For example, let $H_j^t$ be the player patch of the $j$-th player at time $t$, and let $I_j^t=\{x_j^t, y_j^t, w_j^t, h_j^t\}$ be the image coordinates $x_j^t, y_j^t$, the width $w_j^t$, and the height $h_j^t$ of the $j$-th player at time $t$. Using this, player tracking agent126may associate any pair of detections using the appearance cross correlation $C_{ij}^t=H_i^t * H_j^{t+1}$ and $L_{ij}^t=\|I_i^t-I_j^{t+1}\|_2^2$ by finding: $\arg\max_{ij}\left(C_{ij}^t+L_{ij}^t\right)$, where $I$ is the bounding box positions (x, y), width w, and height h; $C$ is the cross correlation between the image patches (e.g., image cutout using a bounding box) and measures similarity between two image patches; and $L$ is a measure of the difference (e.g., distance) between two bounding boxes $I$. Performing this for every pair may generate a large set of short tracklets. The end points of these tracklets may then be associated with each other based on motion consistency and color histogram similarity. For example, let $v_i$ be the extrapolated velocity from the end of the $i$-th tracklet and $v_j$ be the velocity extrapolated from the beginning of the $j$-th tracklet. Then $c_{ij}=v_i\cdot v_j$ may represent the motion consistency score.
Furthermore, let $p_i(h)$ represent the likelihood of a color $h$ being present in an image patch $i$. Player tracking agent126may measure the color histogram similarity using the Bhattacharyya distance: $D_B(p_i,p_j)=-\ln\left(BC(p_i,p_j)\right)$, with $BC(p_i,p_j)=\sum_h\sqrt{p_i(h)\,p_j(h)}$. Recall, player tracking agent126finds the matching pair of tracklets by finding: $\arg\max_{ij}\left(c_{ij}+D_B(p_i,p_j)\right)$. Solving for every pair of broken tracklets may result in a set of clean tracklets, while leaving some tracklets with large, i.e., many frames, gaps. To connect the large gaps, player tracking agent126may augment affinity measures to include a motion field estimation, which may account for the change of player direction that occurs over many frames. The motion field may be a vector field that represents the velocity magnitude and direction as a vector on each location on the playing surface. Given the known velocity of a number of players on the playing surface, the full motion field may be generated using cubic spline interpolation. For example, let $X_i=\{x_i^t\}_{t\in(0,T)}$ be the court position of a player $i$ at every time $t$. Then, there may exist a pair of points that have a displacement $d_i^{\lambda}(x_i^t)=x_i^t-x_i^{t+\lambda}$ if $\lambda<T$. Accordingly, the motion field may then be: $V(x,\lambda)=G(x,5)*\sum_i d_i^{\lambda}(x_i^t)$, where $G(x,5)$ may be a Gaussian kernel with standard deviation equal to about five feet. In other words, the motion field may be a Gaussian blur of all displacements. Neural network232may be used to predict player trajectories given ground truth player trajectories. Given a set of ground truth player trajectories, $X_i$, the velocity of each player at each frame may be calculated, which may provide the ground truth motion field for neural network232to learn. For example, given a set of ground truth player trajectories $X_i$, player tracking agent126may be configured to generate the set $\hat{V}(x,\lambda)$, where $\hat{V}(x,\lambda)$ may be the predicted motion field. Neural network232may be trained, for example, to minimize $\|V(x,\lambda)-\hat{V}(x,\lambda)\|_2^2$. Player tracking agent126may then generate the affinity score for any tracking gap of size $\lambda$ by: $K_{ij}=V(x,\lambda)\cdot d_{ij}$, where $d_{ij}=x_i^t-x_j^{t+\lambda}$ is the displacement vector between all broken tracks with a gap size of $\lambda$. Re-identification agent234may be configured to link players that have left the frame of view. Re-identification agent234may include track generator236, conditional autoencoder240, and Siamese network242. Track generator236may be configured to generate a gallery of tracks. Track generator236may receive a plurality of tracks from database205. Each track X may include a player identity label y, and for each player patch I, pose information p may be provided by the pose detection stage. Given a set of player tracks, track generator236may build a gallery for each track where the jersey number of a player (or some other static feature) is always visible. The body pose information generated by data set generator122allows track generator236to determine a player's orientation. For example, track generator236may utilize a heuristic method, which may use the normalized shoulder width to determine the orientation: $S_{orient}=\frac{\left\|l_{Lshoulder}-l_{Rshoulder}\right\|_2}{\left\|l_{Neck}-l_{Hip}\right\|_2}$, where $l$ may represent the location of one body part. The shoulder width may be normalized by the length of the torso so that the effect of scale may be eliminated.
As the two shoulders should be apart when a player faces towards or away from the camera, track generator236may use those patches whose $S_{orient}$ is larger than a threshold to build the gallery. After this stage, each track $X_n$ may include a gallery: $G_n=\{I_i \mid S_{orient,i}>\text{thresh}\}\;\forall I_i\in X_n$. Conditional autoencoder240may be configured to identify one or more features in each track. For example, unlike conventional approaches to re-identification issues, players in team sports may have very similar appearance features, such as clothing style, clothing color, and skin color. One of the more intuitive differences may be the jersey number that may be shown at the front and/or back side of each jersey. In order to capture those specific features, conditional autoencoder240may be trained to identify such features. In some embodiments, conditional autoencoder240may be a three-layer convolutional autoencoder, where the kernel sizes may be 3×3 for all three layers, in which there are 64, 128, 128 channels respectively. Those hyper-parameters may be tuned to ensure that jersey number may be recognized from the reconstructed images so that the desired features may be learned in the autoencoder. In some embodiments, $f(I_i)$ may be used to denote the features that are learned from image $I_i$. Use of conditional autoencoder240improves upon conventional processes for a variety of reasons. First, there is typically not enough training data for every player because some players only play a very short time in each game. Second, different teams can have the same jersey colors and jersey numbers, so classifying those players may be difficult. Siamese network242may be used to measure the similarity between two image patches. For example, Siamese network242may be trained to measure the similarity between two image patches based on their feature representations $f(I)$. Given two image patches, their feature representations $f(I_i)$ and $f(I_j)$ may be flattened, connected, and input into a perception network. In some embodiments, the $L_2$ norm may be used to connect the two sub-networks of $f(I_i)$ and $f(I_j)$. In some embodiments, perception network may include three layers, which may include 1024, 512, and 216 hidden units, respectively. Such network may be used to measure the similarity $s(I_i, I_j)$ between every pair of image patches of the two tracks that have no time overlapping. In order to increase the robustness of the prediction, the final similarity score of the two tracks may be the average of all pairwise scores in their respective galleries: $S(X_n,X_m)=\frac{1}{|G_n||G_m|}\sum_{i\in G_n,\,j\in G_m}s(I_i,I_j)$. This similarity score may be computed for every two tracks that do not have time overlapping. If the score is higher than some threshold, those two tracks may be associated. FIG.3is a block diagram300illustrating aspects of operations discussed above and below in conjunction withFIG.2andFIGS.4-10, according to example embodiments. Block diagram300may illustrate the overall workflow of organization computing system104in generating player tracking information. Block diagram300may include set of operations302-308. Set of operations302may be directed to generating trackable frames (e.g., Method500inFIG.5). Set of operations304may be directed to generating one or more data sets from trackable frames (e.g., operations performed by data set generator122). Set of operations306may be directed to camera calibration operations (e.g., Method700inFIG.7).
Set of operations308may be directed to generating and predicting player tracks (e.g., Method900inFIG.9and Method1000inFIG.10). FIG.4is a flow diagram illustrating a method400of generating player tracks, according to example embodiments. Method400may begin at step402. At step402, organization computing system104may receive (or retrieve) a broadcast feed for an event. In some embodiments, the broadcast feed may be a live feed received in real-time (or near real-time) from camera system102. In some embodiments, the broadcast feed may be a broadcast feed of a game that has concluded. Generally, the broadcast feed may include a plurality of frames of video data. Each frame may capture a different camera perspective. At step404, organization computing system104may segment the broadcast feed into a unified view. For example, auto-clipping agent120may be configured to parse the plurality of frames of data in the broadcast feed to segment the trackable frames from the untrackable frames. Generally, trackable frames may include those frames that are directed to a unified view. For example, the unified view may be considered a high sideline view. In other examples, the unified view may be an endzone view. In other examples, the unified view may be a top camera view. At step406, organization computing system104may generate a plurality of data sets from the trackable frames (i.e., the unified view). For example, data set generator122may be configured to generate a plurality of data sets based on trackable clips received from auto-clipping agent120. In some embodiments, pose detector212may be configured to detect players within the broadcast feed. Data set generator122may provide, as input, to pose detector212both the trackable frames stored in database205as well as the broadcast video feed. The output from pose detector212may be pose data stored in database215associated with data set generator122. Ball detector214may be configured to detect and track the ball (or puck) within the broadcast feed. Data set generator122may provide, as input, to ball detector214both the trackable frames stored in database205and the broadcast video feed. In some embodiments, ball detector214may utilize a Faster R-CNN to detect and track the ball in the trackable frames and broadcast video feed. The output from ball detector214may be ball detection data stored in database215associated with data set generator122. Playing surface segmenter216may be configured to identify playing surface markings in the broadcast feed. Data set generator122may provide, as input, to playing surface segmenter216both trackable frames stored in database205and the broadcast video feed. In some embodiments, playing surface segmenter216may be configured to utilize a neural network to identify playing surface markings. The output from playing surface segmenter216may be playing surface markings stored in database215associated with data set generator122. Accordingly, data set generator122may generate information directed to player location, ball location, and portions of the court in all trackable frames for further analysis. At step408, organization computing system104may calibrate the camera in each trackable frame based on the data sets generated in step406. For example, camera calibrator124may be configured to calibrate the camera in each trackable frame by generating a homography matrix, using the trackable frames and body pose information.
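As a concrete, hedged illustration of the projection elaborated in the next paragraph, a fitted 3×3 homography matrix H may be applied to player image coordinates (e.g., the bottom-center of a bounding box) to obtain playing-surface coordinates; OpenCV is an assumed implementation choice.

```python
# Hedged sketch: projecting player image coordinates to playing-surface
# coordinates with a per-frame homography matrix H (3x3), as described in the
# surrounding text. OpenCV is an assumed implementation choice.
import cv2
import numpy as np

def project_to_surface(image_points, H):
    # image_points: array of shape (N, 2) in pixel coordinates, e.g. the
    # bottom-center of each player's bounding box or a pose-derived foot point.
    pts = np.asarray(image_points, dtype=np.float32).reshape(-1, 1, 2)
    surface_pts = cv2.perspectiveTransform(pts, H.astype(np.float32))
    return surface_pts.reshape(-1, 2)
```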
The homography matrix allows camera calibrator124to take those trajectories of each player in a given frame and project those trajectories into real-world coordinates. By projecting player positions and trajectories into real-world coordinates for each frame, camera calibrator124may ensure that the camera is calibrated for each frame. At step410, organization computing system104may be configured to generate or predict a track for each player. For example, player tracking agent126may be configured to generate or predict a track for each player in a match. Player tracking agent126may receive, as input, trackable frames, pose data, calibration data, and broadcast video frames. Using such inputs, player tracking agent126may be configured to construct player motion throughout a given match. Further, player tracking agent126may be configured to predict player trajectories given previous motion of each player. FIG.5is a flow diagram illustrating a method500of generating trackable frames, according to example embodiments. Method500may correspond to operation404discussed above in conjunction withFIG.4. Method500may begin at step502. At step502, organization computing system104may receive (or retrieve) a broadcast feed for an event. In some embodiments, the broadcast feed may be a live feed received in real-time (or near real-time) from camera system102. In some embodiments, the broadcast feed may be a broadcast feed of a game that has concluded. Generally, the broadcast feed may include a plurality of frames of video data. Each frame may capture a different camera perspective. At step504, organization computing system104may generate a set of frames for image classification. For example, auto-clipping agent120may utilize a PCA analysis to perform per frame feature extraction from the broadcast feed. Given, for example, a pre-recorded video, auto-clipping agent120may extract a frame every X-seconds (e.g., 10 seconds) to build a PCA model of the video. In some embodiments, auto-clipping agent120may generate the PCA model using incremental PCA, through which auto-clipping agent120may select a top subset of components (e.g., top 120 components) to generate the PCA model. Auto-clipping agent120may be further configured to extract one frame every X seconds (e.g., one second) from the broadcast stream and compress the frames using PCA model. In some embodiments, auto-clipping agent120may utilize PCA model to compress the frames into 120-dimensional form. For example, auto-clipping agent120may solve for the principal components in a per video manner and keep the top 100 components per frame to ensure accurate clipping. Such subset of compressed frames may be considered the set of frames for image classification. In other words, PCA model may be used to compress each frame to a small vector, so that clustering can be conducted on the frames more efficiently. The compression may be conducted by selecting the top N components from PCA model to represent the frame. In some examples, N may be 100. At step506, organization computing system104may assign each frame in the set of frames to a given cluster. For example, auto-clipping agent120may be configured to center, normalize, and cluster the top 120 components into a plurality of clusters. In some embodiments, for clustering of compressed frames, auto-clipping agent120may implement k-means clustering. In some embodiments, auto-clipping agent120may set k=9 clusters. K-means clustering attempts to take some data $x=\{x_1, x_2, \ldots, x_n\}$ and divide it into $k$ subsets, $S=\{S_1, S_2, \ldots, S_k\}$, by optimizing: $\arg\min_{S}\sum_{j}^{k}\sum_{x\in S_j}\left\|x-\mu_j\right\|^2$, where $\mu_j$ is the mean of the data in the set $S_j$. In other words, clustering model204attempts to find clusters with the smallest intra-cluster variance using k-means clustering techniques. Clustering model204may label each frame with its respective cluster number (e.g., cluster 1, cluster 2, . . . , cluster k). At step508, organization computing system104may classify each frame as trackable or untrackable. For example, auto-clipping agent120may utilize a neural network to classify each frame as trackable or untrackable. A trackable frame may be representative of a frame that captures the unified view (e.g., high sideline view). An untrackable frame may be representative of a frame that does not capture the unified view. To train the neural network (e.g., neural network206), an input data set that includes thousands of frames, pre-labeled as trackable or untrackable and run through the PCA model, may be used. Each compressed frame and label pair (i.e., cluster number and trackable/untrackable) may be provided to the neural network for training. Accordingly, once trained, auto-clipping agent120may classify each frame as untrackable or trackable. As such, each frame may have two labels: a cluster number and trackable/untrackable classification. At step510, organization computing system104may compare each cluster to a threshold. For example, auto-clipping agent120may utilize the two labels to determine if a given cluster is deemed trackable or untrackable. In some embodiments, if auto-clipping agent120determines that a threshold number of frames in a cluster are considered trackable (e.g., 80%), auto-clipping agent120may conclude that all frames in the cluster are trackable. Further, if auto-clipping agent120determines that less than a threshold number of frames in a cluster are considered trackable (e.g., 30% and below), auto-clipping agent120may conclude that all frames in the cluster are untrackable. Still further, if auto-clipping agent120determines that a certain number of frames in a cluster are considered trackable (e.g., between 30% and 80%), auto-clipping agent120may request that an administrator further analyze the cluster. If at step510organization computing system104determines that greater than a threshold number of frames in the cluster are trackable, then at step512auto-clipping agent120may classify the cluster as trackable. If, however, at step510organization computing system104determines that less than a threshold number of frames in the cluster are trackable, then at step514, auto-clipping agent120may classify the cluster as untrackable. FIG.6is a block diagram600illustrating aspects of operations discussed above in conjunction with method500, according to example embodiments. As shown, block diagram600may include a plurality of sets of operations602-608. At set of operations602, video data (e.g., broadcast video) may be provided to auto-clipping agent120. Auto-clipping agent120may extract frames from the video. In some embodiments, auto-clipping agent120may extract frames from the video at a low frame rate. An incremental PCA algorithm may be used by auto-clipping agent120to select the top 120 components (e.g., frames) from the set of frames extracted by auto-clipping agent120. Such operations may generate a video specific PCA model. At set of operations604, video data (e.g., broadcast video) may be provided to auto-clipping agent120.
Auto-clipping agent120may extract frames from the video. In some embodiments, auto-clipping agent120may extract frames from the video at a medium frame rate. The video specific PCA model may be used by auto-clipping agent120to compress the frames extracted by auto-clipping agent120. At set of operations606, the compressed frames and a pre-selected number of desired clusters may be provided to auto-clipping agent120. Auto-clipping agent120may utilize k-means clustering techniques to group the frames into one or more clusters, as set forth by the pre-selected number of desired clusters. Auto-clipping agent120may assign a cluster label to each compressed frame. Auto-clipping agent120may further be configured to classify each frame as trackable or untrackable. Auto-clipping agent120may label each respective frame as such. At set of operations608, auto-clipping agent120may analyze each cluster to determine if the cluster includes at least a threshold number of trackable frames. For example, as illustrated, if 80% of the frames of a cluster are classified as trackable, then auto-clipping agent120may consider the entire cluster as trackable. If, however, less than 80% of a cluster is classified as trackable, auto-clipping agent120may determine if at least a second threshold number of frames in a cluster are trackable. For example, as illustrated, if 70% of the frames of a cluster are classified as untrackable, auto-clipping agent120may consider the entire cluster untrackable. If, however, less than 70% of the frames of the cluster are classified as untrackable, i.e., between 30% and 70% trackable, then human annotation may be requested. FIG.7is a flow diagram illustrating a method700of calibrating a camera for each trackable frame, according to example embodiments. Method700may correspond to operation408discussed above in conjunction withFIG.4. Method700may begin at step702. At step702, organization computing system104may retrieve video data and pose data for analysis. For example, camera calibrator124may retrieve from database205the trackable frames for a given match and pose data for players in each trackable frame. Following step702, camera calibrator124may execute two parallel processes to generate a homography matrix for each frame. Accordingly, the following operations are not meant to be discussed as being performed sequentially, but may instead be performed in parallel or sequentially. At step704, organization computing system104may remove players from each trackable frame. For example, camera calibrator124may parse each trackable frame retrieved from database205to identify one or more players contained therein. Camera calibrator124may remove the players from each trackable frame using the pose data retrieved from database205. For example, camera calibrator124may identify those pixels corresponding to pose data and remove the identified pixels from a given trackable frame. At step706, organization computing system104may identify the motion of objects (e.g., surfaces, edges, etc.) between successive trackable frames. For example, camera calibrator124may analyze successive trackable frames, with players removed therefrom, to determine the motion of objects from one frame to the next. In other words, optical flow module226may identify the flow field between successive trackable frames. At step708, organization computing system104may match an output from playing surface segmenter216to a set of templates.
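One hedged way the template matching of step708(elaborated in the next paragraph) might be implemented is normalized cross-correlation against a set of playing surface templates; OpenCV and the match threshold below are assumptions.

```python
# Hedged sketch of keyframe matching by cross-correlation against playing
# surface templates (step 708). OpenCV's normalized cross-correlation is an
# assumed implementation choice; the match threshold is illustrative.
import cv2

def match_keyframe(segmented_frame, templates, thresh=0.8):
    # segmented_frame: single-channel image of detected surface markings.
    # templates: dict mapping template name -> single-channel template image,
    # each representing a different camera perspective of the playing surface.
    best_name, best_score = None, -1.0
    for name, tmpl in templates.items():
        res = cv2.matchTemplate(segmented_frame, tmpl, cv2.TM_CCOEFF_NORMED)
        score = float(res.max())
        if score > best_score:
            best_name, best_score = name, score
    # Frames matching a template above the threshold are treated as keyframes.
    return (best_name, best_score) if best_score >= thresh else (None, best_score)
```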
For example, camera calibrator124may match one or more frames in which the image of the playing surface is clear to one or more templates. Camera calibrator124may parse the set of trackable clips to identify those clips that provide a clear picture of the playing surface and the markings therein. Based on the selected clips, camera calibrator124may compare such images to playing surface templates. Each template may represent a different camera perspective of the playing surface. Those frames that are able to match to a given template are considered keyframes. In some embodiments, camera calibrator124may implement a neural network to match the one or more frames. In some embodiments, camera calibrator124may implement cross-correlation to match the one or more frames. At step710, organization computing system104may fit a playing surface model to each keyframe. For example, camera calibrator124may be configured to receive, as input, the identified keyframes. Camera calibrator124may implement a neural network to fit a playing surface model to segmentation information of the playing surface. By fitting the playing surface model to such output, camera calibrator124may generate homography matrices for each keyframe. At step712, organization computing system104may generate a homography matrix for each trackable frame. For example, camera calibrator124may utilize the flow fields identified in step706and the homography matrices for each key frame to generate a homography matrix for each frame. The homography matrix may be used to project the track or position of players into real-world coordinates. For example, given the geometric transform represented by the homography matrix, camera calibrator124may use this transform to project the location of players on the image to real-world coordinates on the playing surface. At step714, organization computing system104may calibrate each camera based on the homography matrix. FIG.8is a block diagram800illustrating aspects of operations discussed above in conjunction with method700, according to example embodiments. As shown, block diagram800may include inputs802, a first set of operations804, and a second set of operations806. First set of operations804and second set of operations806may be performed in parallel. Inputs802may include video clips808and pose detection810. In some embodiments, video clips808may correspond to trackable frames generated by auto-clipping agent120. In some embodiments, pose detection810may correspond to pose data generated by pose detector212. As illustrated, only video clips808may be provided as input to first set of operations804; both video clips808and pose detection810may be provided as input to second set of operations806. First set of operations804may include semantic segmentation812, keyframe matching814, and STN fitting816. At semantic segmentation812, playing surface segmenter216may be configured to identify playing surface markings in a broadcast feed. In some embodiments, playing surface segmenter216may be configured to utilize a neural network to identify playing surface markings. Such segmentation may be performed in advance and the segmentation information provided to camera calibrator124from database215. At keyframe matching814, keyframe matching module224may be configured to match one or more frames in which the image of the playing surface is clear to one or more templates. At STN fitting816, STN224may implement a neural network to fit a playing surface model to segmentation information of the playing surface.
By fitting the playing surface model to such output, STN224may generate homography matrices for each keyframe. Second set of operations806may include camera motion estimation818. At camera motion estimation818, optical flow module226may be configured to identify the pattern of motion of objects from one trackable frame to another. For example, optical flow module226may use body pose information to remove players from the trackable frame information. Once removed, optical flow module226may determine the motion between frames to identify the motion of a camera between successive frames. First set of operations804and second set of operations806may lead to homography interpolation816. Optical flow module226and STN224may work in conjunction to generate a homography matrix for each trackable frame, such that a camera may be calibrated for each frame. The homography matrix may be used to project the track or position of players into real-world coordinates. FIG.9is a flow diagram illustrating a method900of tracking players, according to example embodiments. Method900may correspond to operation410discussed above in conjunction withFIG.4. Method900may begin at step902. At step902, organization computing system104may retrieve a plurality of trackable frames for a match. Each of the plurality of trackable frames may include one or more sets of metadata associated therewith. Such metadata may include, for example, body pose information and camera calibration data. In some embodiments, player tracking agent126may further retrieve broadcast video data. At step904, organization computing system104may generate a set of short tracklets. For example, player tracking agent126may match pairs of player patches, which may be derived from pose information, based on appearance and distance to generate a set of short tracklets. For example, let $H_j^t$ be the player patch of the $j$-th player at time $t$, and let $I_j^t=\{x_j^t, y_j^t, w_j^t, h_j^t\}$ be the image coordinates $x_j^t, y_j^t$, the width $w_j^t$, and the height $h_j^t$ of the $j$-th player at time $t$. Using this, player tracking agent126may associate any pair of detections using the appearance cross correlation $C_{ij}^t=H_i^t * H_j^{t+1}$ and $L_{ij}^t=\|I_i^t-I_j^{t+1}\|_2^2$ by finding: $\arg\max_{ij}\left(C_{ij}^t+L_{ij}^t\right)$. Performing this for every pair may generate a set of short tracklets. The end points of these tracklets may then be associated with each other based on motion consistency and color histogram similarity. For example, let $v_i$ be the extrapolated velocity from the end of the $i$-th tracklet and $v_j$ be the velocity extrapolated from the beginning of the $j$-th tracklet. Then $c_{ij}=v_i\cdot v_j$ may represent the motion consistency score. Furthermore, let $p_i(h)$ represent the likelihood of a color $h$ being present in an image patch $i$. Player tracking agent126may measure the color histogram similarity using the Bhattacharyya distance: $D_B(p_i,p_j)=-\ln\left(BC(p_i,p_j)\right)$, with $BC(p_i,p_j)=\sum_h\sqrt{p_i(h)\,p_j(h)}$. At step906, organization computing system104may connect gaps between each set of short tracklets. For example, recall that player tracking agent126finds the matching pair of tracklets by finding: $\arg\max_{ij}\left(c_{ij}+D_B(p_i,p_j)\right)$. Solving for every pair of broken tracklets may result in a set of clean tracklets, while leaving some tracklets with large, i.e., many frames, gaps. To connect the large gaps, player tracking agent126may augment affinity measures to include a motion field estimation, which may account for the change of player direction that occurs over many frames.
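A small hedged sketch of the tracklet end-point affinities just described (motion consistency and Bhattacharyya-based color histogram similarity) is given below, before the motion field refinement discussed next; the histogram binning and normalization details are assumptions.

```python
# Hedged sketch of the tracklet end-point affinities described above:
# Bhattacharyya-based color histogram similarity and a motion consistency
# score. Histogram binning and normalization details are assumptions.
import numpy as np

def color_histogram(patch, bins=32):
    # patch: HxWx3 uint8 image patch; returns a normalized color histogram p(h).
    hist, _ = np.histogramdd(patch.reshape(-1, 3), bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / max(hist.sum(), 1e-12)

def bhattacharyya_distance(p_i, p_j):
    bc = np.sum(np.sqrt(p_i * p_j))          # Bhattacharyya coefficient BC(p_i, p_j)
    return -np.log(max(bc, 1e-12))           # D_B(p_i, p_j) = -ln BC(p_i, p_j)

def motion_consistency(v_i, v_j):
    # v_i: velocity extrapolated from the end of tracklet i,
    # v_j: velocity extrapolated from the start of tracklet j.
    return float(np.dot(v_i, v_j))           # c_ij = v_i . v_j
```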
The motion field may be a vector field which measures in what direction a player at a point $x$ on the playing surface would move after some time $\lambda$. For example, let $X_i=\{x_i^t\}_{t\in(0,T)}$ be the court position of a player $i$ at every time $t$. Then, there may exist a pair of points that have a displacement $d_i^{\lambda}(x_i^t)=x_i^t-x_i^{t+\lambda}$ if $\lambda<T$. Accordingly, the motion field may then be: $V(x,\lambda)=G(x,5)*\sum_i d_i^{\lambda}(x_i^t)$, where $G(x,5)$ may be a Gaussian kernel with standard deviation equal to about five feet. In other words, the motion field may be a Gaussian blur of all displacements. At step908, organization computing system104may predict a motion of an agent based on the motion field. For example, player tracking agent126may use a neural network (e.g., neural network232) to predict player trajectories given ground truth player trajectories. Given a set of ground truth player trajectories $X_i$, player tracking agent126may be configured to generate the set $\hat{V}(x,\lambda)$, where $\hat{V}(x,\lambda)$ may be the predicted motion field. Player tracking agent126may train neural network232to reduce (e.g., minimize) $\|V(x,\lambda)-\hat{V}(x,\lambda)\|_2^2$. Player tracking agent126may then generate the affinity score for any tracking gap of size $\lambda$ by: $K_{ij}=V(x,\lambda)\cdot d_{ij}$, where $d_{ij}=x_i^t-x_j^{t+\lambda}$ is the displacement vector between all broken tracks with a gap size of $\lambda$. Accordingly, player tracking agent126may solve for the matching pairs as recited above. For example, given the affinity score, player tracking agent126may assign every pair of broken tracks using a Hungarian algorithm. The Hungarian algorithm (e.g., Kuhn-Munkres) may optimize the best set of matches under a constraint that all pairs are to be matched. At step910, organization computing system104may output a graphical representation of the prediction. For example, interface agent128may be configured to generate one or more graphical representations corresponding to the tracks for each player generated by player tracking agent126. For example, interface agent128may be configured to generate one or more graphical user interfaces (GUIs) that include graphical representations of player tracking, such as each prediction generated by player tracking agent126. In some situations, during the course of a match, players or agents have the tendency to wander outside of the point-of-view of the camera. Such issue may present itself during an injury, lack of hustle by a player, quick turnover, quick transition from offense to defense, and the like. Accordingly, a player in a first trackable frame may no longer be in a successive second or third trackable frame. Player tracking agent126may address this issue via re-identification agent234. FIG.10is a flow diagram illustrating a method1000of tracking players, according to example embodiments. Method1000may correspond to operation410discussed above in conjunction withFIG.4. Method1000may begin at step1002. At step1002, organization computing system104may retrieve a plurality of trackable frames for a match. Each of the plurality of trackable frames may include one or more sets of metadata associated therewith. Such metadata may include, for example, body pose information and camera calibration data. In some embodiments, player tracking agent126may further retrieve broadcast video data. At step1004, organization computing system104may identify a subset of short tracks in which a player has left the camera's line of vision. Each track may include a plurality of image patches associated with at least one player.
An image patch may refer to a subset of a corresponding frame of a plurality of trackable frames. In some embodiments, each track X may include a player identity label y. In some embodiments, each player patch I in a given track X may include pose information generated by data set generator122. For example, given an input video, pose detection, and trackable frames, re-identification agent234may generate a track collection that includes many short, broken tracks of players. At step1006, organization computing system104may generate a gallery for each track. For example, given those small tracks, re-identification agent234may build a gallery for each track. Re-identification agent234may build a gallery for each track where the jersey number of a player (or some other static feature) is always visible. The body pose information generated by data set generator122allows re-identification agent234to determine each player's orientation. For example, re-identification agent234may utilize a heuristic method, which may use the normalized shoulder width to determine the orientation: $S_{orient}=\frac{\left\|l_{Lshoulder}-l_{Rshoulder}\right\|_2}{\left\|l_{Neck}-l_{Hip}\right\|_2}$, where $l$ may represent the location of one body part. The shoulder width may be normalized by the length of the torso so that the effect of scale may be eliminated. As the two shoulders should be apart when a player faces towards or away from the camera, re-identification agent234may use those patches whose $S_{orient}$ is larger than a threshold to build the gallery. Accordingly, each track $X_n$ may include a gallery: $G_n=\{I_i \mid S_{orient,i}>\text{thresh}\}\;\forall I_i\in X_n$. At step1008, organization computing system104may match tracks using a convolutional autoencoder. For example, re-identification agent234may use a conditional autoencoder (e.g., conditional autoencoder240) to identify one or more features in each track. For example, unlike conventional approaches to re-identification issues, players in team sports may have very similar appearance features, such as clothing style, clothing color, and skin color. One of the more intuitive differences may be the jersey number that may be shown at the front and/or back side of each jersey. In order to capture those specific features, re-identification agent234may train the conditional autoencoder to identify such features. In some embodiments, the conditional autoencoder may be a three-layer convolutional autoencoder, where the kernel sizes may be 3×3 for all three layers, in which there are 64, 128, 128 channels respectively. Those hyper-parameters may be tuned to ensure that jersey number may be recognized from the reconstructed images so that the desired features may be learned in the autoencoder. In some embodiments, $f(I_i)$ may be used to denote the features that are learned from image $I_i$. Using a specific example, re-identification agent234may identify a first track that corresponds to a first player. Using conditional autoencoder240, re-identification agent234may learn a first set of jersey features associated with the first track, based on, for example, a first set of image patches included or associated with the first track. Re-identification agent234may further identify a second track that may initially correspond to a second player. Using conditional autoencoder240, re-identification agent234may learn a second set of jersey features associated with the second track, based on, for example, a second set of image patches included or associated with the second track.
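A hedged PyTorch sketch of a three-layer convolutional encoder with the 3×3 kernels and 64, 128, and 128 channels described above, paired with a mirror decoder for reconstruction, is given below; the strides, input normalization, and decoder details are assumptions.

```python
# Hedged sketch of a three-layer convolutional autoencoder with 3x3 kernels and
# 64/128/128 channels, as described above. PyTorch, the strides, and the mirror
# decoder are assumed implementation choices.
import torch
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 128, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def features(self, patch):
        # f(I): encoded representation used downstream by the Siamese network.
        return self.encoder(patch)

    def forward(self, patch):
        # patch: (batch, 3, H, W) player patch with values in [0, 1].
        return self.decoder(self.encoder(patch))

# Training would minimize a reconstruction loss, e.g.
# loss = nn.functional.mse_loss(model(patch), patch)
```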
At step1010, organization computing system104may measure a similarity between matched tracks using a Siamese network. For example, re-identification agent234may train a Siamese network (e.g., Siamese network242) to measure the similarity between two image patches based on their feature representations $f(I)$. Given two image patches, their feature representations $f(I_i)$ and $f(I_j)$ may be flattened, connected, and fed into a perception network. In some embodiments, the $L_2$ norm may be used to connect the two sub-networks of $f(I_i)$ and $f(I_j)$. In some embodiments, perception network may include three layers, which include 1024, 512, and 216 hidden units, respectively. Such network may be used to measure the similarity $s(I_i, I_j)$ between every pair of image patches of the two tracks that have no time overlapping. In order to increase the robustness of the prediction, the final similarity score of the two tracks may be the average of all pairwise scores in their respective galleries: $S(X_n,X_m)=\frac{1}{|G_n||G_m|}\sum_{i\in G_n,\,j\in G_m}s(I_i,I_j)$. Continuing with the aforementioned example, re-identification agent234may utilize Siamese network242to compute a similarity score between the first set of learned jersey features and the second set of learned jersey features. At step1012, organization computing system104may associate the tracks, if their similarity score is higher than a predetermined threshold. For example, re-identification agent234may compute a similarity score for every two tracks that do not have time overlapping. If the score is higher than some threshold, re-identification agent234may associate those two tracks. Continuing with the above example, re-identification agent234may associate the first track and the second track if, for example, the similarity score generated by Siamese network242is at least higher than a threshold value. Assuming the similarity score is higher than the threshold value, re-identification agent234may determine that the first player in the first track and the second player in the second track are indeed one and the same. FIG.11is a block diagram1100illustrating aspects of operations discussed above in conjunction with method1000, according to example embodiments. As shown, block diagram1100may include input video1102, pose detection1104, player tracking1106, track collection1108, gallery building and pairwise matching1110, and track connection1112. Block diagram1100illustrates a general pipeline of method1000provided above. Given input video1102, pose detection information1104(e.g., generated by pose detector212), and player tracking information1106(e.g., generated by one or more of player tracking agent126, auto-clipping agent120, and camera calibrator124), re-identification agent234may generate track collection1108. Each track collection1108may include a plurality of short broken tracks (e.g., track1114) of players. Each track1114may include one or more image patches1116contained therein. Given the tracks1114, re-identification agent234may generate a gallery1110for each track. For example, gallery1110may include those image patches1118in a given track that include an image of a player in which their orientation satisfies a threshold value. In other words, re-identification agent234may generate gallery1110for each track1114that includes image patches1118of each player, such that the player's number may be visible in each frame. Image patches1118may be a subset of image patches1116.
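A hedged sketch of how the pairwise Siamese similarity scores of step1010might be aggregated over two track galleries and thresholded into the association decision of step1012is given below; the per-pair similarity function is treated as a given, and the threshold value is illustrative.

```python
# Hedged sketch of gallery-level track association (steps 1010-1012): average
# the pairwise Siamese similarity scores over the two galleries and associate
# the tracks if the average exceeds a threshold. The per-pair similarity
# function s(I_i, I_j) is treated as a given (e.g., the Siamese network above).
import itertools

def track_similarity(gallery_n, gallery_m, pair_similarity):
    scores = [pair_similarity(i_patch, j_patch)
              for i_patch, j_patch in itertools.product(gallery_n, gallery_m)]
    return sum(scores) / max(len(scores), 1)   # S(X_n, X_m)

def associate_tracks(gallery_n, gallery_m, pair_similarity, thresh=0.5):
    # Only applied to tracks with no time overlap, per the surrounding text.
    return track_similarity(gallery_n, gallery_m, pair_similarity) >= thresh
```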
Re-identification agent234may then pairwise match each frame to compute a similarity score via Siamese network. For example, as illustrated, re-identification agent234may match a first frame from track2with a second frame from track1and feed the frames into Siamese network. Re-identification agent234may then connect tracks1112based on the similarity scores. For example, if the similarity score of two frames exceeds some threshold, re-identification agent234may connect or associate those tracks. FIG.12is a block diagram illustrating architecture1200of Siamese network242of re-identification agent234, according to example embodiments. As illustrated, Siamese network242may include two sub-networks1202,1204, and a perception network1205. Each of the two sub-networks1202,1204may be configured similarly. For example, sub-network1202may include a first convolutional layer1206, a second convolutional layer1208, and a third convolutional layer1210. First sub-network1202may receive, as input, a player patch I1and output a set of features learned from player patch I1(denoted f(I1)). Sub-network1204may include a first convolutional layer1216, a second convolutional layer1218, and a third convolutional layer1220. Second sub-network1204may receive, as input, a player patch I2and may output a set of features learned from player patch I2(denoted f(I2)). The output from sub-network1202and sub-network1204may be an encoded representation of the respective player patches I1, I2. In some embodiments, the output from sub-network1202and sub-network1204may be followed by a flatten operation, which may generate respective feature vectors f(I1) and f(I2). In some embodiments, each feature vector f(I1) and f(I2) may include 10240 units. In some embodiments, the L2 norm of f(I1) and f(I2) may be computed and used as input to perception network1205. Perception network1205may include three layers1222-1226. In some embodiments, layer1222may include 1024 hidden units. In some embodiments, layer1224may include 512 hidden units. In some embodiments, layer1226may include 256 hidden units. Perception network1205may output a similarity score between image patches I1and I2. FIG.13Aillustrates a system bus computing system architecture1300, according to example embodiments. System1300may be representative of at least a portion of organization computing system104. One or more components of system1300may be in electrical communication with each other using a bus1305. System1300may include a processing unit (CPU or processor)1310and a system bus1305that couples various system components including the system memory1315, such as read only memory (ROM)1320and random access memory (RAM)1325, to processor1310. System1300may include a cache of high-speed memory connected directly with, in close proximity to, or integrated as part of processor1310. System1300may copy data from memory1315and/or storage device1330to cache1312for quick access by processor1310. In this way, cache1312may provide a performance boost that avoids processor1310delays while waiting for data. These and other modules may control or be configured to control processor1310to perform various actions. Other system memory1315may be available for use as well. Memory1315may include multiple different types of memory with different performance characteristics.
Processor1310may include any general purpose processor and a hardware module or software module, such as service11332, service21334, and service31336stored in storage device1330, configured to control processor1310as well as a special-purpose processor where software instructions are incorporated into the actual processor design. Processor1310may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric. To enable user interaction with the computing device1300, an input device1345may represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. An output device1335may also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems may enable a user to provide multiple types of input to communicate with computing device1300. Communications interface1340may generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed. Storage device1330may be a non-volatile memory and may be a hard disk or other types of computer readable media which may store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs)1325, read only memory (ROM)1320, and hybrids thereof. Storage device1330may include services1332,1334, and1336for controlling the processor1310. Other hardware or software modules are contemplated. Storage device1330may be connected to system bus1305. In one aspect, a hardware module that performs a particular function may include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as processor1310, bus1305, display1335, and so forth, to carry out the function. FIG.13Billustrates a computer system1350having a chipset architecture that may represent at least a portion of organization computing system104. Computer system1350may be an example of computer hardware, software, and firmware that may be used to implement the disclosed technology. System1350may include a processor1355, representative of any number of physically and/or logically distinct resources capable of executing software, firmware, and hardware configured to perform identified computations. Processor1355may communicate with a chipset1360that may control input to and output from processor1355. In this example, chipset1360outputs information to output1365, such as a display, and may read and write information to storage device1370, which may include magnetic media, and solid state media, for example. Chipset1360may also read data from and write data to RAM1375. A bridge1380for interfacing with a variety of user interface components1385may be provided for interfacing with chipset1360. Such user interface components1385may include a keyboard, a microphone, touch detection and processing circuitry, a pointing device, such as a mouse, and so on. In general, inputs to system1350may come from any of a variety of sources, machine generated and/or human generated. 
Chipset1360may also interface with one or more communication interfaces1390that may have different physical interfaces. Such communication interfaces may include interfaces for wired and wireless local area networks, for broadband wireless networks, as well as personal area networks. Some applications of the methods for generating, displaying, and using the GUI disclosed herein may include receiving ordered datasets over the physical interface or be generated by the machine itself by processor1355analyzing data stored in storage1370or1375. Further, the machine may receive inputs from a user through user interface components1385and execute appropriate functions, such as browsing functions by interpreting these inputs using processor1355. It may be appreciated that example systems1300and1350may have more than one processor1310or be part of a group or cluster of computing devices networked together to provide greater processing capability. While the foregoing is directed to embodiments described herein, other and further embodiments may be devised without departing from the basic scope thereof. For example, aspects of the present disclosure may be implemented in hardware or software or a combination of hardware and software. One embodiment described herein may be implemented as a program product for use with a computer system. The program(s) of the program product define functions of the embodiments (including the methods described herein) and can be contained on a variety of computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory (ROM) devices within a computer, such as CD-ROM disks readable by a CD-ROM drive, flash memory, ROM chips, or any type of solid-state non-volatile memory) on which information is permanently stored; and (ii) writable storage media (e.g., floppy disks within a diskette drive or hard-disk drive or any type of solid state random-access memory) on which alterable information is stored. Such computer-readable storage media, when carrying computer-readable instructions that direct the functions of the disclosed embodiments, are embodiments of the present disclosure. It will be appreciated by those skilled in the art that the preceding examples are exemplary and not limiting. It is intended that all permutations, enhancements, equivalents, and improvements thereto that are apparent to those skilled in the art upon a reading of the specification and a study of the drawings are included within the true spirit and scope of the present disclosure. It is therefore intended that the following appended claims include all such modifications, permutations, and equivalents as fall within the true spirit and scope of these teachings.
11861849 | DETAILED DESCRIPTION OF THE EMBODIMENTS The disclosed technique overcomes the disadvantages of the prior art by providing a method and apparatus for increasing the intrinsic resolution of an infrared (IR) imaging detector without increasing the total size or the pixel density of the detector array. Instead, the effective spatial resolution of the IR detector is enlarged by reducing the active region within the individual pixels of the detector array (i.e., reducing the “fill factor”). Multiple imaging samples of the same image scene are acquired, in which only a portion of each pixel of the image scene is imaged onto the corresponding pixel of the detector array. The image scene is successively shifted relative to the detector array to provide imaging of different configurations of sub-pixel regions in each of the imaging samples. A higher resolution image frame is then reconstructed from the individual imaging samples. The disclosed technique also provides systems and methods for using microscanned images to extract both spatial and temporal information for enhancing motion detection, object tracking, situational awareness and super-resolution video. Whereas prior art systems use microscanned images to directly improve image resolution, no consideration is given to temporal information contained within microscanned images. According to the disclosed technique, since microscanned images contain slightly different information about an imaged scene, a temporal analysis of consecutive microscanned images can be used to detect moving objects in an imaged scene thereby enhancing object tracking. As microscanned images can be acquired at a higher rate than the rate required for constructing a super-resolution image, temporal analysis of microscanned images can be used to detect even very fast moving objects in an imaged scene. A temporal and spatial analysis of microscanned images can also be used to improve the accuracy of velocity estimates of detected moving targets and objects and can also generally improve existing detection algorithms such as track, detect, learn (herein abbreviated TDL), image differencing, background subtraction algorithms and background modeling and foreground detection algorithms including mixture of Gaussians (herein abbreviated MOG), MOG2, kernel density estimation (herein abbreviated KDE), global minimum with a guarantee (herein abbreviated GMG), running average, temporal median, principal component analysis (herein abbreviated PCA) and Bayesian background modeling, optical flow estimation algorithms including the Lucas-Kanade method and the Horn-Schunck method, combinations of optical flow and image differencing algorithms, motion detection based on object detection and recognition algorithms including machine learning approaches, support vector machines (herein abbreviated SVMs), deep learning algorithms and convolutional neural networks (herein abbreviated CNNs), image registration algorithms for background modeling and image subtraction including correlation based registration algorithms, feature based registration algorithms including SIFT (Scale-Invariant Feature Transform), SURF (Speeded Up Robust Features) and BRIEF (Binary Robust Independent Elementary Features) and pyramidal image registration. Also according to the disclosed technique, microscanned images can be used to generate super-resolution images. 
By combining a detection system for generating super-resolution images with a pan-and-tilt system, a panoramic super-resolution image can be generated, thus increasing the situational awareness capabilities of a detection system. Further according to the disclosed technique, microscanned images can be combined to form a super-resolution video track. By only updating portions of a microscanned image wherein object motion is detected, microscanned images can be combined into a super-resolution video track in real-time. The disclosed technique comes to improve both moving target detection systems and methods as well as display systems and methods for displaying moving targets. The disclosed technique simultaneously analyzes acquired microscanned images for an indication of a moving target while also constructing a super-resolution image and analyzing the super-resolution image for an indication of a moving target. The indications of a moving target from both the analysis of acquired microscanned images as well as the analysis of a super-resolution image are combined to improve the overall performance of movement detection methods and systems. The simultaneous analyses of acquired microscanned images as well as of a super-resolution image constructed from the acquired microscanned images enable various types of moving objects to be detected that would otherwise not be detected if only one analysis were used. For example, analyzing acquired microscanned images for a movement indication enables the detection of very fast moving objects, as defined below, which might otherwise be undetected as the same individual moving object by an analysis of the super-resolution image constructed from the acquired microscanned images due to the rapid movement of such objects and the generally lower resolution of acquired microscanned images. Analyzing a super-resolution image for a movement indication enables the detection of moving objects exhibiting small movements, as defined below, which might otherwise be undetected by an analysis of the acquired microscanned images due to the small amount of movement of such objects and the generally lower resolution of the microscanned images. Beyond that, the simultaneous analysis of movement indications improves the probability of detection (herein abbreviated PD) for moving objects and targets while also lowering the false alarm rate (herein abbreviated FAR). As explained below in greater detail, the simultaneous analysis of movement indications allows for an initial determination of the position, velocity and acceleration of a moving object in a single image frame (such as a single super-resolution image frame), which can be used to improve the performance of target tracking algorithms (such as TDL), thus lowering the delay in warning for an identified moving target or threat. In addition, the disclosed technique can be used to improve display systems and methods for displaying image data acquired from microscanned images and displayed as super-resolution images. The disclosed technique enables image data acquired from microscanned images to be displayed as hybrid super-resolution video, combining low-resolution, high frame rate video with high-resolution, low frame rate video. In the case of very fast moving objects, the disclosed technique enables super-resolution images to be constructed from acquired microscanned images and displayed without blurring and shown as low frame rate video. 
According to the disclosed technique, the position of a detected fast moving object in a plurality of acquired microscanned images can be adjusted such that each acquired microscanned image positions the fast moving object at a specified position for all microscanned images used to construct a single super-resolution image. The result of such an adjustment enables a super-resolution image to be constructed without blurring of the fast moving object and also displayed as low frame rate video. Such an adjustment can be achieved using an image processing algorithm involving pixel interpolation as described below. According to another embodiment of the disclosed technique, a super-resolution image can be constructed from acquired microscanned images of a fast moving object wherein within a bounded area in the super-resolution image, the acquired microscanned images are played consecutively as high frame rate video (albeit at the lower image resolution of microscanned images) thus enabling a hybrid super-resolution image to be constructed with increased resolution in the periphery while also showing the movement of a fast moving object at a high frame rate. Reference is now made toFIG.1, which is a perspective view schematic illustration of an apparatus, generally referenced100, for increasing the resolution of an infrared imaging detector, constructed and operative in accordance with an embodiment of the disclosed technique. Apparatus100includes an IR detector array110and a fill factor reduction means120made up of a masking filter130and an optical element140. Masking filter130and optical element140are disposed in between detector array110and the focal plane150of a scene to be imaged by the detector. Detector array110is made up of a lattice or matrix pattern of photosensitive pixels arranged in rows and columns (e.g., a 320×240 array, which includes 320 pixels along the array width and 240 pixels along the array height). The pixels in array110may be any suitable size or area, where the individual pixel size is generally substantially consistent across all pixels of the array. Fill factor reduction means120is operative to selectively reduce the active (i.e., photosensitive) region of the pixels of detector array110, by masking or blocking a portion of the photosensitive region of the pixels from receiving radiation from the image scene, such that only a portion of the image scene pixel is imaged onto the corresponding detector array pixel. The active region and masked region of the pixels are then progressively shifted during subsequent imaging samples of the scene. In particular, optical element140projects an image region152of image scene150onto masking filter130, which in turn blocks out a portion of image region152from reaching the corresponding pixel111of detector array110while allowing only the remaining portion154of image region152to reach pixel111. Consequently, pixel111includes an imaged region114that is less than the total area (i.e., potential photosensitive area) of array pixel111. Masking filter130includes a masking region132and a non-masking region134, such that radiation incident onto masking region132(via optical element140) is prevented from passing through (toward detector array110), while radiation incident onto non-masking region134is passed through. 
For example, masking region132may be embodied by a substantially opaque or non-transmissive portion of filter130or a non-transmissive coating disposed at the required portion, whereas non-masking region134may be embodied by a substantially transmissive portion of filter130, such as a window or opening thereat. It is noted that any of the components of fill factor reduction means120may be fully or partially integrated with the IR imaging detector in accordance with the disclosed technique, or may be separate therefrom. For example, masking filter130may be situated within the housing enclosing the IR detector, while optical element140may be situated outside of the housing, provided optical element140and masking filter130function to implement the aforementioned masking operation of image scene150onto detector array110. Reference is now made toFIGS.2A,2B,2C and2D.FIG.2Ais a perspective view schematic illustration of an initial set of image sub-frames acquired over successive imaging samples with the apparatus ofFIG.1.FIG.2Bis a perspective view schematic illustration of a subsequent set of image sub-frames acquired over successive imaging samples with the apparatus ofFIG.1.FIG.2Cis a perspective view schematic illustration of another subsequent set of image sub-frames acquired over successive imaging samples with the apparatus ofFIG.1.FIG.2Dis a perspective view schematic illustration of a final set of image sub-frames acquired over successive imaging samples with the apparatus ofFIG.1. Detector array110is depicted with nine (9) pixels arranged in a three-by-three (3×3) matrix. Masking filter130is disposed directly on array110and includes nine windows (i.e., non-masking regions134) situated on a sub-region of each of the detector pixels (DPx,y) of detector array110, while the remaining area of masking filter130is made up of masking regions132. The image scene150is similarly divided into nine image pixels (IPx,y) arranged in a three-by-three (3×3) matrix (i.e., each image pixel representing the region of image scene150that would ordinarily be projected onto a corresponding detector pixel during regular image acquisition). A first set of sub-frames of image scene150is acquired inFIG.2A. The light (IR radiation) emitted from image scene150is directed toward detector array110through masking filter130via optical element140(not shown), such that only the radiation passing through the windows134of masking filter130reaches detector array110. In particular, each detector pixel of detector array110captures a portion of a corresponding image pixel of image scene150. For example, referring to the first imaging sample (“sub-frame1”) inFIG.2A, radiation corresponding to an upper-left corner image pixel (IP1,1) is directed toward a detector pixel (DP1,1) situated at the upper-left corner of detector array110. A portion of the radiation (154) passes through the masking filter window and is incident onto a sub-region114of detector pixel DP1,1. The rest of the radiation (152) from image pixel IP1,1is blocked by masking region132such that it does not reach detector pixel DP1,1. Consequently, detector pixel DP1,1includes an imaged region114and a non-imaged region112. Similarly, the next image pixel (IP1,2) in the top row of image scene150reaches detector pixel DP1,2after passing through the masking filter window such that only a portion of image pixel IP1,2is incident onto a sub-region of detector pixel DP1,2. 
Fill factor reduction means120is shown implementing an exemplary fill factor reduction of 25% (i.e., “25% FF”), denoting that each imaged region114occupies approximately one-quarter of the area of the respective pixel, while each non-imaged region112occupies an area of approximately three-quarters of the respective pixel. The remaining pixels (DPx,y) of detector array110are imaged in an analogous manner during the first imaging sample (sub-frame), resulting in each detector pixel acquiring an imaged region114at its upper-left quadrant, while the remainder of the detector pixel is not imaged. Following acquisition of the first imaging sample, the portion of each image pixel imaged onto detector array110is shifted for the subsequent imaging samples. The shifting increment between each imaging sample is selected in accordance with the fill factor reduction amount and is generally equal to a fraction of the pixel width (defined as the distance between the midpoints of adjacent pixels of the detector array). In this example, the fill factor reduction amount is 25% (25% FF), and so the shifting increment is also selected to be 25%, or approximately one quarter of the pixel width of the detector pixels. The shifting may be implemented by adjusting the line-of-sight (herein abbreviated LOS) of fill factor reduction means120relative to detector array110(e.g., by suitable adjustment of masking filter130and/or of optical element140). Referring to the second imaging sample (“sub-frame2”) inFIG.2A, each detector pixel DPx,yreceives incident radiation from another portion of image pixel IPx,y, such that the imaged region114corresponds to an upper-middle quadrant of the corresponding image pixel IPx,y(e.g., the imaged quadrant of “sub-frame1” being shifted to the right by a quarter of the pixel width, such that the second image quadrant partially overlaps the first image quadrant). Referring to the third imaging sample (“sub-frame3”), the line-of-sight is shifted again such that the imaged region114of each detector pixel DPx,ycorresponds to an upper-right quadrant of the corresponding image pixel IPx,y(e.g., the imaged quadrant of “sub-frame2” being shifted to the right by a quarter of the pixel width). Additional imaging samples are acquired in an analogous manner, covering remaining overlapping portions (e.g., quadrants) of each image pixel, by successively adjusting the line-of-sight systematically over the same shifting increment (e.g., a quarter pixel width) along both the vertical axis and the horizontal axis of image scene150, applying a technique called “microscanning”, known in the art. For example, referring to the fifth imaging sample (“sub-frame5”) inFIG.2B, the line-of-sight is shifted downwards by the shifting increment with respect to the first imaging sample, such that imaged region114of each detector pixel DPx,ycorresponds to a middle-left quadrant of the corresponding image pixel IPx,y(e.g., the imaged quadrant of “sub-frame1” being shifted downwards by a quarter of the pixel width). The remaining imaging samples (“sub-frame6” through “sub-frame16”) result in additional image pixel portions being acquired (i.e., imaging a respective portion that was not acquired in a previous sub-frame). The line-of-sight alignment of fill factor reduction means120relative to detector array110may be successively shifted using any suitable mechanism (i.e., a shift mechanism) or technique, in order to obtain the desired imaged sub-region on the detector pixels for each imaging sample. 
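The sub-frame acquisition sequence described above (quarter-pixel line-of-sight shifts with a 25% fill factor mask on a 3×3 detector) can be illustrated with the following short sketch. The sketch is not part of the original disclosure: the randomly generated stand-in for image scene 150, the array sizes and all function names are assumptions made purely for illustration.

```python
# Illustrative sketch only: simulate acquisition of 16 sub-frames from a 3x3
# detector with a 25% fill factor mask and 1/4-pixel line-of-sight shifts,
# mirroring the geometry of FIGS. 2A-2D.
import numpy as np

PIX = 4    # quarter-pixel shift grid: 4 LOS positions per detector pixel per axis
WIN = 2    # active window width in sub-pixel units (half the pixel width -> 25% of the area)
DET = 3    # 3x3 detector array, as in FIGS. 2A-2D

# Stand-in for image scene 150, padded by the window size so that shifted edge
# windows stay inside the array (a modelling convenience, not part of the patent).
scene = np.random.rand(DET * PIX + WIN, DET * PIX + WIN)

def acquire_subframe(scene, dy, dx):
    """Return one 3x3 sub-frame for a line-of-sight shift of (dy, dx) quarter-pixels."""
    frame = np.zeros((DET, DET))
    for i in range(DET):
        for j in range(DET):
            y0, x0 = i * PIX + dy, j * PIX + dx
            # only radiation passing the masking window reaches detector pixel DP(i, j)
            frame[i, j] = scene[y0:y0 + WIN, x0:x0 + WIN].mean()
    return frame

# sixteen imaging samples: the imaged sub-region steps by 1/4 pixel along both axes
subframes = {(dy, dx): acquire_subframe(scene, dy, dx)
             for dy in range(PIX) for dx in range(PIX)}
print(len(subframes))   # -> 16 sub-frames, corresponding to sub-frame 1 through sub-frame 16
```

Running the sketch yields sixteen 3×3 sub-frames, one per line-of-sight position, analogous to sub-frame 1 through sub-frame 16 of FIGS. 2A-2D.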
For example, masking filter130and detector array110may remain in a fixed position, and thus the positions of masking regions132and non-masking regions134remain stationary, while optical element140is successively repositioned for each imaging sample to shift the directional angle at which optical element140directs the radiation from image scene150. Alternatively, masking filter130and detector array110are jointly repositioned relative to optical element140(i.e., where masking filter130remains fixed with respect to detector array110), for adjusting the optical path of image scene150for each imaging sample. The structure of such a mechanism for shifting the line-of-sight may include, for example, a group of optical elements, such as at least one lens, prism and/or mirror, and at least one actuator for shifting at least one of the group of optical elements. At least one of optical element140, masking filter130and detector array110may be moved or displaced by an actuator for shifting the line-of-sight. The structure of such a shift mechanism may also be a pixel shift unit or a micro-optical mechanical device. The pixel shift unit or micro-scanning mechanism can be implemented using a motorized mirror having a 45° configuration (i.e., a folding mirror). In general, in such a setup two mirrors are used, each providing a shift in a separate single axis. Other examples of such an implementation can include a dual-axis microelectromechanical system (herein abbreviated MEMS) mirror in which a single controller is used to provide a shift in two axes. Another example is a motorized lens, specifically designed in the optical design of the system to have a transmission ratio (i.e., linear motion to line-of-sight) that is suitable for mechanical motion, while maintaining good optical performance of the overall system. The mechanical movement of the pixel shift can be done using an electrical mechanism such as a DC motor, a stepper motor, a piezoelectric motor and the like. Further examples of a pixel shift unit or micro-optical mechanical device can include a moving mirror, a fast steering mirror, a motorized steering mirror and the like, which can be used to shift the LOS for adjusting the optical path of image scene150to the detector array for each imaging sample. Another option would be to use a lens or an optical element (i.e., an optical system including at least one lens and at least one other optical element but without mirrors) in which either the lens or the other optical element is moved slightly, thereby changing the LOS of the optical path. Specific optical configurations can include prisms and wedges, for example Risley prisms. The option of a lens or other optical element generally requires an opto-mechanical design in which the at least one optical element or lens is moved to change the optical path. The optical element or lens can be moved using for example an electric motor, a DC motor, a stepper motor, a piezoelectric motor and the like. It is noted that the actual size of the imaged regions on the detector pixels may be varied, such as by adjusting the characteristics of masking filter130(e.g., size, amount, and/or relative positions of non-masking regions134) and/or adjusting the optical characteristics of optical element140. In some detectors, such as vacuum-sealed detectors and/or cryogenically-cooled detectors, it is very difficult to reposition a masking filter relative to the detector array, since the two components should be arranged as close as possible to one another. 
Consequently, a mechanism for repositioning a movable masking filter would need to be situated within the cryogenic storage Dewar (vacuum flask) along with the masking filter and detector array. This requires such a mechanism to be exceptionally miniature and fast moving, while being capable of operating at cryogenic temperatures. Furthermore, the cryogenic storage Dewar would require significant enlargement, as well as an enhanced cooling mechanism to support the additional heating load. Thus, even if the implementation of a movable masking filter is feasible, the aforementioned issues would yield a detector with minimal practical applications due to the resultant high cost, higher power consumption, greater volume and lower reliability. Therefore, according to an embodiment of the disclosed technique, a stationary masking filter is maintained at a fixed position and orientation relative to the detector array, while the optical path of the image scene is successively adjusted relative to the stationary masking filter between imaging samples. In this embodiment, the shift mechanism for adjusting the optical path may be positioned outside the cryogenic storage Dewar, thereby not requiring any changes (e.g., size, cost, power consumption, volume and the like) to the Dewar itself. The different sub-regions of the image pixels IPx,yof image scene150may be imaged in any order or permutation. For example, a bottom row of image pixel portions may be imaged first (i.e., the four sub-frames depicted inFIG.2D), followed by a higher row, and so forth; or alternatively, a first column of image pixel portions may be imaged in a first group of sub-frames, followed by an adjacent column, and so forth. Furthermore, the imaged sub-regions may be nonconsecutive within a given sub-frame (e.g., an upper-left quadrant and lower-right quadrant of the image pixel may be simultaneously acquired in one sub-frame, while an upper-right quadrant and lower-left quadrant of the image pixel are simultaneously acquired in a subsequent sub-frame). After all the sub-frames are acquired over successive imaging samples, where each individual sub-frame corresponds to a different imaged sub-region of each image pixel of image scene150, a final image frame is constructed from all of the acquired sub-frames. Namely, all of the imaged sub-regions for each image pixel are processed and combined, in accordance with a suitable image processing scheme. Reference is now made toFIG.3, which is a schematic illustration of a reconstructed image frame, referenced164, formed from the image sub-frames ofFIGS.2A,2B,2C and2D, compared with a regular image frame, referenced162, of the imaging detector. Regular image frame162includes a total of 9 pixels (3×3), whereas reconstructed image frame164includes a total of 144 sub-pixels (12×12), providing a 16-fold increase in resolution (i.e., increasing the number of pixels by a factor of four along each of the horizontal and vertical axes). In particular, each individual pixel in reconstructed image frame164(corresponding to a pixel of image frame162) is made up of 16 pixels arranged in a 4×4 matrix (each of which can be referred to as a sub-pixel in comparison to the pixels in image frame162). Each sub-pixel of reconstructed image frame164is formed from a combination of the respective sub-frames in which that sub-pixel was imaged. 
For example, sub-pixel168of image frame164is formed based on sub-frames1and2(FIG.2A) and sub-frames5and6(FIG.2B), in which that particular sub-pixel portion of image scene150was acquired (in different configurations). Reconstructed image frame164represents a 16-fold increase in resolution with respect to image frame162, which is an image frame that would result from regular imaging with detector array110(i.e., without application of the disclosed technique). The intrinsic resolution of detector array110is represented by a 3×3 pixel array (i.e., 3 rows by 3 columns of pixels=9 total pixels), as depicted in image frame162, whereas reconstructed image frame164includes 12×12 sub-pixels within the same fixed area of array110. As a result, the final image frame contains greater image detail (i.e., by a factor of sixteen) as compared to a standard image frame, as each pixel of the reconstructed image frame is made up of sixteen individual sub-pixels which provide four times the detail or information along each axis as would be contained in the corresponding pixel of the standard image frame. It is appreciated that alternative resolution increase factors (i.e., the amount by which the image resolution is increased) may be obtained by varying the shifting increment between sub-frames, as well as the fill factor reduction amount (i.e., the amount by which the active region of the detector pixels is reduced). For example, to increase the image resolution based on the surface area of the detector array by a factor of 9 (i.e., by a factor of three along each of the horizontal and vertical axes), the shifting increment would be set to be approximately one-third (⅓) of the detector pixel width, while each imaging sample would image a sub-region occupying an area of approximately one-ninth ( 1/9) of the image pixels (i.e., corresponding to a fill factor reduction factor of 1/9 or approximately 11%). As an example, a masking filter having windows or non-masking regions134that are one-third (⅓) the size of the detector pixels, may be used to provide imaging of the desired image pixel sub-region size, instead of masking filter130shown inFIGS.2A-2Dwhich includes windows that are one-quarter (¼) the detector pixel size. A total of 9 sub-frames would be acquired via microscanning (following a shifting increment of one-third (⅓) the detector pixel width between sub-frames), from which an alternative final higher-resolution image frame can be reconstructed. It is noted that the fill factor reduction of the detector pixels serves to reduce the overall detector sensitivity, as only a fraction of the entire radiation from the image scene reaches the detector array. According to one embodiment of the disclosed technique, to compensate for this effect, the f-number (also known as the “focal ratio”, defined as the ratio between the entrance pupil diameter and the lens focal length) of the detector optics is decreased by a factor corresponding to the fill factor reduction amount (or to the shifting increment between imaging samples). Consequently, more radiation is received from the image scene, which offsets the reduction in received radiation resulting from the fill factor reduction. The f-number decrease also provides an improved optical Modulation Transfer Function (MTF), generally representing the ability of the detector to distinguish between details in the acquired image, thereby allowing the detector to support the enhanced spatial resolution of the reconstructed image frame. 
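Returning to the reconstruction of image frame 164 from the sixteen sub-frames, the following sketch continues the acquisition sketch above (reusing PIX, WIN, DET and subframes defined there) and performs a naive shift-and-add combination. The text above deliberately leaves the exact image processing scheme open, so this is only one simplistic, assumed choice offered for illustration.

```python
# Continues the acquisition sketch above. Naive shift-and-add reconstruction:
# each detector reading is spread back over the 2x2 sub-pixel window it sampled,
# and every sub-pixel is averaged over all sub-frames that covered it.
import numpy as np

accum = np.zeros((DET * PIX + WIN, DET * PIX + WIN))
counts = np.zeros_like(accum)

for (dy, dx), frame in subframes.items():
    for i in range(DET):
        for j in range(DET):
            y0, x0 = i * PIX + dy, j * PIX + dx
            accum[y0:y0 + WIN, x0:x0 + WIN] += frame[i, j]
            counts[y0:y0 + WIN, x0:x0 + WIN] += 1

reconstructed = accum[:DET * PIX, :DET * PIX] / counts[:DET * PIX, :DET * PIX]
print(reconstructed.shape)   # -> (12, 12): 144 sub-pixels from a 3x3 detector
```

In this scheme every sub-pixel value ends up as an average over the handful of sub-frames whose shifted windows covered it, which is consistent with the description of sub-pixel 168 being formed from the sub-frames in which it was imaged.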
Thus, the disclosed technique enhances the performance of the IR imaging detector by essentially reducing the detector sensitivity (by reducing the fill factor) and compensating for this reduction by providing suitable detector optics that will provide an adequate level of overall sensitivity together with a substantially higher image spatial resolution. According to the disclosed technique, the f-number of the detector optics may also be reduced in order to lessen any effects of diffraction of an incoming light beam onto the detector array. In general, as the detector pixel size (i.e., the size of the active pixel area) decreases, the f-number of the detector optics should also be decreased correspondingly. According to other embodiments of the disclosed technique other methods can be used to compensate for the reduced amount of radiation reaching the detector array. For example, an increased integration time could be used or a larger detector array with larger pixel dimensions could be used. Furthermore, depending on the wavelength range of radiation being detected by the detector array, for example in the range of 8-12 μm, the amount of radiation might be sufficiently strong such that no compensation is needed and sufficient radiation is received by the detector array. In the range of 8-12 μm, if the masking filter covers a sufficient area of the detector array such that the active pixel size is small, the f-number of the detector optics should be reduced, regardless of whether the overall detector sensitivity is reduced or not. Reference is now made toFIG.4, which is a schematic illustration of a graph, generally referenced170, showing Modulation Transfer Function (MTF) as a function of spatial frequency for different fill factor reduction amounts in accordance with the disclosed technique. Graph170depicts the detector MTF as a function of spatial frequency or “detector sampling frequency” (corresponding to a normalized representation of the spatial resolution). When implementing “regular microscanning” to increase the resolution of the detector (i.e., without reducing the fill factor of the detector pixels), then the resolution increase (additional information that can be derived from the image) is limited by the detector MTF, which approaches zero at a frequency of 1/(active pixel size). For example, when imaging without any fill factor reduction (i.e., “100% FF”), shifting between each microscan at increments of less than half the detector pixel pitch would not increase the overall image resolution beyond a factor of ×2 (since the MTF reaches zero beyond the “2× Microscan” frequency point on the graph and is thus essentially unusable for imaging purposes). In contrast, when implementing microscanning in conjunction with fill factor reduction, then the spatial resolution of the detector image can be increased by a larger factor (i.e., not limited by the detector MTF) while still deriving additional information from the image. For example, if the fill factor is decreased to 25% of the total active pixel area (“25% FF”), it is possible to microscan at shifting increments of up to ¼ of the detector pixel pitch (thereby increasing image resolution by ×4 along each axis, i.e., ×16 total), while still being able to distinguish between the additional information (since the MTF is still above zero). 
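The zero crossing of the detector MTF quoted above follows from the textbook aperture MTF model for a pixel with active width w. The model below is an assumed, standard formulation offered only to clarify the behavior shown in graph 170; it is not an equation taken from the text.

```latex
% Textbook detector-aperture MTF for an active (photosensitive) width w -- an
% assumed model consistent with graph 170, not an equation from the source text.
\[
  \mathrm{MTF}_{\mathrm{det}}(f)
    = \left|\,\mathrm{sinc}(w f)\,\right|
    = \left|\frac{\sin(\pi w f)}{\pi w f}\right| ,
  \qquad
  \mathrm{MTF}_{\mathrm{det}}\!\left(\frac{1}{w}\right) = 0 .
\]
% With 100% FF, w equals the pixel pitch p and the first zero sits at 1/p;
% reducing the fill factor to 25% of the pixel area halves w, pushing the first
% zero out to 2/p and leaving non-zero contrast over the frequencies exploited
% by a x4-per-axis microscan.
```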
It is noted that in one embodiment the shifting increment can be determined by the following equation: Microscan_Shift=√(FF)/2 (1) and that in the case of a fill factor reduction of 25%, a shifting increment of ¼ is possible however for other fill factor reduction amounts, the relationship between the shifting increment and the fill factor reduction amount is not a 1:1 relationship. Equation (1) shows the maximum fill factor reduction amount for a given shifting increment. Thus a fill factor reduction of 25% or less (such as 20% or 15%) can be used with a shifting increment of ¼ and in general, a lower fill factor reduction (i.e., lower than the maximum for a given shifting increment) may lead to better results and is a matter of optimization. It is further noted however that in general there is no direct relationship between the shifting increment and the fill factor and thus Equation (1) should be taken as merely an example. Whereas the shifting increment and the fill factor are related, the relationship is only indirect as applied to the spatial resolution of the detector image. Thus the masking filter and the shifting increment might be any given size compared to the size of individual pixels of the detector array. In general, the size of the masking filter (i.e., the fill factor reduction amount) is a continuous parameter whereas the size of the shifting increment is a discrete parameter (e.g., such as fractions of a pixel as listed below). As an example, given a pixel size of 15 μm, to achieve a specific spatial resolution, a shift increment of 3.75 μm can be used (thus 25% of a pixel) however the fill factor reduction might be 40%, thus resulting in an active area of 6 μm being illuminated for each pixel based on the masking filter. However in another example, a similar spatial resolution can be achieved using a shift increment of 3 μm (thus 20% of a pixel) with a fill factor reduction of 50%. In general, the shift increment is a rational number expressed as a fraction of a pixel, such as ½ pixel, ⅓ of a pixel, ¼ pixel, ⅕ pixel and the like. As another example, four microscans per axis per pixel may be executed, as per the example given above in Equation (1) however with a shifting increment of ⅕ instead of ¼ (i.e., less than the maximum fill factor reduction amount for a given shifting increment). By reducing the fill factor even further, it is possible to microscan at higher frequencies/smaller shifting increments to provide an even larger resolution increase. It is noted that the potential fill factor reduction amount (and thus the potential resolution increase) that can actually be implemented for a given imaging detector is generally limited by opto-mechanical design constraints. Such limitations in the ability to design and manufacture the suitable high resolution optics may vary according to a particular system design and requirements. Referring back toFIG.1, fill factor reduction means120may be implemented using any suitable device, mechanism or technique operative for reducing the fill factor of the detector pixels by the desired amount. For example, fill factor reduction means120may alternatively be implemented by only a masking filter, which is successively repositioned and/or reoriented to obtain different imaging samples, or by only an optical element, which adjusts the optical path of the radiation from the image scene150over successive increments for each imaging sample. 
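Returning to Equation (1), the following short worked check is offered for clarity. It assumes FF is expressed as a fraction of the pixel area; both this interpretation and the numbers below are illustrative assumptions rather than part of the original disclosure.

```latex
% Worked check of equation (1), with FF taken as a fraction of the pixel area
% (this interpretation and the numbers below are illustrative assumptions).
\[
  \mathrm{Microscan\_Shift} = \frac{\sqrt{FF}}{2}
  \quad\Longleftrightarrow\quad
  FF_{\max} = \left(2\cdot\mathrm{Microscan\_Shift}\right)^{2}.
\]
% Example: FF = 0.25 gives a shift of sqrt(0.25)/2 = 1/4 of the pixel width;
% conversely, a 1/4-pixel shift admits a maximum fill factor of (2*1/4)^2 = 25%,
% matching the statement that a fill factor reduction of 25% or less can be
% used with a shifting increment of 1/4.
```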
Further alternatively, fill factor reduction means120may be implemented by configuring detector array110such that the active (photosensitive) region of the pixels is less than the potentially maximum active region. For example, the pixels may be electronically configured such that only a selected sub-pixel region is active during each imaging sample. The disclosed technique is applicable to all types of IR detectors, operative anywhere within the wavelength range of approximately 1-15 μm, encompassing LWIR, MWIR and SWIR wavelengths. The disclosed technique is particularly applicable to thermal imaging cameras, and particularly vacuum-sealed and cryogenically-cooled thermal imagers, where the term “cryogenically-cooled” as used herein encompasses different types of low-temperature detectors, including those operating at temperatures above what may be considered cryogenic temperatures under some definitions (for example, including temperatures between approximately −150° C. (123K) and approximately −120° C. (153K)). In accordance with the disclosed technique, there is provided a method for increasing the resolution of an IR imaging detector comprising a two-dimensional detector array of photosensitive pixels arranged in a matrix. The method includes the procedure of successively exposing the detector array to an image scene, to acquire multiple imaging samples of the image scene, where for each imaging sample, the region of the pixels collecting incident radiation from the image scene is reduced such that only a portion of the pixel area of the imaged scene is imaged onto the corresponding pixel of the detector array. The method further includes the procedure of successively shifting the image scene relative to the detector array by a shifting increment equal to a fraction of the pixel width of the array pixels, to provide imaging of successive sub-pixel regions in each of the imaging samples. The method further includes the procedure of reconstructing an image frame having a resolution greater, by a factor defined by the shifting increment, than the intrinsic resolution of the detector, from the acquired imaging samples. The disclosed technique also comes to address a number of problems in the field of object tracking and motion detection, also referred to as video motion detection (herein abbreviated VMD) or target tracking, using acquired microscanned images. Object tracking and motion detection relates to the analysis of a plurality of images in order to identify objects of interest in the images and to track their movement without human intervention. The objects of interest may be moving vehicles, persons, animals or other objects that change position over the course of a plurality of images. The objects of interest are contextually sensitive, thus different types of objects in an image will be considered objects of interest in a given implementation context of the disclosed technique. For example, in the field of security and surveillance, objects of interest may include vehicles, people as well as objects that people may carry which can be used as a weapon. Such objects may be referred to as potential threats. Object tracking and motion detection in this respect thus relates to the identification of potential threats, tracking their movement and generating a warning or alarm if such potential threats become real threats. 
In general, object tracking and motion detection comprises two main procedures, a) detection of a moving object or target in an image frame and b) correlating the movement of the moving object or target over consecutive image frames. Procedure or step a) involves determining differences between consecutive image frames to determine which pixels in the image frames represent possible moving objects. Procedure or step b) involves correlating the determined differences between consecutive image frames to establish a path of movement of a possible moving object after which it can be ascertained that the identified possible moving object in step a) was indeed a moving object. Whereas step a) is computationally simple, step b) is significantly more complex as differences in each image frame must be correlated to determine a possible path of a moving object. In real-world applications an image frame may include a plurality of moving objects, thus the complexity of step b) is expressed as determining the differences between consecutive image frames for each moving object and correlating the differences correctly for each moving object, thus obtaining a reasonable and logical path of movement for each moving object. Algorithms, referred to as path management, path movement, detection or tracking algorithms, are known in the art for executing step b). Reference is now made toFIG.5A, which is a schematic illustration of a method for object tracking and motion detection, generally referenced200, as is known in the art. In a procedure202, images of a scene of observation are acquired. The images may be visible light images or infrared images and are usually taken from a camera of sorts such that there is a plurality of consecutive images of a scene of observation in which there may be objects to track. In a procedure204, the acquired images are analyzed for movement indications. This procedure is similar to step a) as mentioned above and generally involves comparing subsequent images for differences in the location and intensity of pixels between subsequent images. Not all movement indications necessarily correlate to a moving object or target but represent a difference between at least two subsequent images which can potentially indicate a moving object. In a procedure206, based on the movement indications in procedure204, a moving target indication is generated. This procedure is similar to step b) as mentioned above and generally involves using a detection or tracking (i.e., path movement) algorithm for correlating the movement indications of subsequent or consecutive images to determine a path of a moving object. If a path for a potential moving object can be determined then a moving target indication can be generated for the moving object. The method shown inFIG.5Ais known in the art and many algorithms exist for implementing such a method. Reference is now made toFIG.5B, which is a block diagram illustration of a method for enhancing motion detection and the tracking of objects using microscanned images, generally referenced220, operative in accordance with another embodiment of the disclosed technique. Block diagram220shows the general steps of how microscanned images can be used to enhance motion detection as well as object tracking. As shown, a plurality of microscanned images2221-222Nis acquired. Plurality of microscanned images2221-222Ncan represent M sets of microscanned images where each set is sufficient for constructing a super-resolution image. 
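The two-step structure of method 200 described above (movement indications in procedure 204 followed by path correlation and a moving target indication in procedure 206) can be illustrated with the minimal sketch below. It is an illustration only: the thresholded frame differencing, the single-blob centroid, the bounded-step path check, and all names and thresholds are assumptions and do not correspond to any particular algorithm cited above.

```python
# Minimal sketch of steps a) and b): thresholded frame differencing yields
# movement indications, and a naive bounded-step check stands in for the path
# management (detection or tracking) algorithms mentioned in the text.
import numpy as np

def movement_indications(prev, curr, thresh=25):
    """Step a): centroid of pixels whose intensity changed by more than `thresh`."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    ys, xs = np.nonzero(diff > thresh)
    if len(ys) == 0:
        return None
    return np.array([ys.mean(), xs.mean()])   # crude single-blob movement indication

def correlate_paths(indications, max_jump=20.0):
    """Step b): generate a moving target indication only if successive movement
    indications form a plausible, bounded-step path."""
    pts = [p for p in indications if p is not None]
    if len(pts) < 3:
        return False
    steps = [np.linalg.norm(b - a) for a, b in zip(pts, pts[1:])]
    return all(s < max_jump for s in steps)

frames = [np.random.randint(0, 256, (240, 320), dtype=np.uint8) for _ in range(5)]
inds = [movement_indications(a, b) for a, b in zip(frames, frames[1:])]
print("moving target indication:", correlate_paths(inds))
```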
For example, each set may include at minimum two microscanned images or as many as N microscanned images for forming a super-resolution image. As described in greater detail below, microscanned images2221-222Ncan be acquired in a number of different ways and do not necessarily need to come from a microscanner. Furthermore, microscanned images2221-222Nmay be captured from a visible light detector, an infrared light detector, an ultraviolet light detector and the like. In one embodiment, the detector (i.e., visible light, IR, UV and the like) may be a high frequency detector capable of capturing images at a capture rate of 180 Hz or even higher. Microscanned images2221-222Nare not the same as the acquired images referred to inFIG.5A, as they are not representative of the captured light rays from a scene of observation on an entire detector array (not shown). As is known in the art, microscanned images can be combined into a super-resolution image to form an image of a scene of observation; however, each microscan image by itself is not considered a full image of the scene of observation. According to the disclosed technique, microscanned images2221-222Nare used for two simultaneous sets of procedures shown by a first plurality of arrows221A and a second plurality of arrows221B. It is noted that both sets of procedures occur in parallel, even though the time required for each set of procedures may be different. In the set of procedures following first plurality of arrows221A, microscanned images2221-222Nare combined into a super-resolution image224. It is noted that in one embodiment, only a portion of microscanned images2221-222Nare combined into super-resolution image224. For example, if a super-resolution image with increased resolution in only the horizontal axis is desired, then only a portion of microscanned images2221-222Nneed to be combined to form super-resolution image224. The portion of microscanned images2221-222Nmay be microscanned images from a shift increment over a single axis; for example, in the case of 4×4 microscans per super-resolution image, only a single row/axis (i.e., four microscans) may be used to form the super-resolution image. As mentioned above, combining microscanned images into a super-resolution image is known in the art. Super-resolution image224schematically represents the super-resolution images formed from the M sets of microscanned images schematically shown as microscanned images2221-222N. Thus super-resolution image224represents at least two super-resolution images. Once combined into super-resolution images, known object motion detection and tracking algorithms (such as TDL) can be used on the super-resolution images to determine spatial and temporal information as shown by a block226. The spatial and temporal information derived from the super-resolution images can include an analysis of movement indications between consecutive super-resolution images as well as moving target indications, as described above as steps a) and b) in standard object tracking. As described below in further detail, performing object tracking and motion detection on super-resolution images is a slower process than standard object tracking and motion detection on regular images since the refresh rate for super-resolution images is slower than the refresh rate of regular images. 
Regular image frames may have a refresh rate of 30 Hz whereas super-resolution images can have refresh rates between 1-10 Hz depending on how many microscanned images are used to construct the super-resolution image. In the case of a pan and tilt system for generating super-resolution images, as described below inFIGS.9-10B, the refresh rates can be even lower as such a system cycles through a number of different non-overlapping or slightly overlapping stare positions before returning to its original position. As used herein, a pan and tilt system or mechanism as used with the disclosed technique can include any imaging system in which the imaging detector of the system can change its LOS via a mechanical means or an optical means. A mechanical means may include any setup or mechanism capable of mechanically changing the LOS of the imaging detector. An optical means may include any setup or mechanism for shifting the optical path of the imaging detector. The change in LOS can be a change in one axis of the imaging detector or in two axes of the imaging detector. Simultaneously, or in parallel, as shown by second plurality of arrows221B, microscanned images2221-222Nare also individually analyzed for spatial and temporal information, as shown by a block228. In this set of procedures, microscanned images2221-222Nare not formed into a super-resolution image but are individually analyzed as separate images for spatial and temporal information using object motion detection and tracking algorithms. This includes determining potential moving objects between consecutive microscanned images and also determining movement paths of potential moving objects to determine that a potential moving object is indeed a moving target. As described below in further detail, performing object motion detection and tracking on microscanned images is a faster process than standard object motion detection and tracking on regular images since the refresh rate for a single microscanned image is higher than the refresh rate of a single regular image frame. Thus performing object motion detection and tracking on microscanned images according to the disclosed technique enables the detection of rapidly moving and/or short-lived targets. Individual microscanned images can be refreshed or scanned at a rate of, for example, 180 Hz. Also, as described below in further detail, the spatial and temporal information derived from at least two microscan images can include information about the velocity and with at least three microscan images, the acceleration of the identified moving target(s) in block228. The result of block226is a first at least one moving target indication as derived from a movement analysis of the super-resolution images of block224. The result of block228is a second at least one moving target indication derived from a movement analysis of the microscanned images of blocks2221-222N. The second moving target indication(s) can also include an estimate of the velocity and acceleration of identified moving targets even after the analysis of only two or three individual microscanned images. As shown inFIG.5B, the spatial and temporal information derived in blocks226and228can be combined, as shown by a block230, for generating enhanced motion detection and object tracking. Block230thus represents a combination of the moving target indications of both block226and block228. 
In one embodiment of the disclosed technique, the results of blocks226and228can be combined using a weighted calculation, such as a Kalman filter and the like. Various criteria and parameters of a motion detection and object tracking system (not shown) can be used to determine what weight each moving target indication from blocks226and228receives in the combined moving target indication of block230. Examples of possible weighting factors could be the detected size of the target, the signal-to-noise ratio resulting from the set of procedures shown by plurality of arrows221A and221B, the type of determined movement/path of the target in each of blocks226and228, the presence of obstacles and/or obscuration in the determination of a moving target indication in each of blocks226and228(for example, if block226always determines a moving target indication but block228sometimes does and sometimes does not, for whatever reason) and the like. The weights can also include the outputted certainty level of each motion detection algorithm (i.e., the resulting moving target indication from blocks226and228) based on several parameters such as the signal-to-noise ratio, the target size, the target contour, the continuity of detections, the number of past detections, the estimated velocity and acceleration of the target and the like. The two simultaneous, parallel procedures shown by first plurality of arrows221A and second plurality of arrows221B according to the disclosed technique enable enhanced motion detection and object tracking for a few reasons. For the purposes of explanation, the procedure shown by first plurality of arrows221A can be referred to as motion detection on super-resolution images (as described above, and below inFIG.6) and the procedure shown by second plurality of arrows221B can be referred to as motion detection on microscanned images (as described above, and below inFIG.6). One understanding of the enhanced nature of the motion detection and object tracking of the disclosed technique is that each of the two described procedures can detect moving objects which might not be detectable by use of the other procedure. Each of the motion detection procedures shown inFIG.5Bprovides spatial and temporal information at different spatial and temporal resolutions. Therefore a combination of the two improves the overall ability of a motion detection system to detect moving targets by enlarging the possible spatial and temporal resolutions at which targets can be detected and then tracked. Due to the nature of how super-resolution images are constructed, motion detection on super-resolution images allows for the detection of small and slow moving targets which may be only detectable in a high resolution image. Due to the nature of how microscanned images are acquired, motion detection on microscanned images allows for the detection of large moving targets, fast or rapid moving targets or short-lived targets which can be detectable in lower resolutions but might only be detectable in images (such as microscanned images) that can be processed very quickly. Thus a combination of the motion detection in both super-resolution images and acquired microscan images according to the disclosed technique allows for target detection and tracking wherein targets are fast moving, slow moving, requiring a high resolution for detection and detectable even at low resolution, thereby enhancing motion detection and object tracking. 
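A minimal sketch of one possible weighted combination for block 230 is given below. The text above mentions weighted calculations such as a Kalman filter; the confidence-weighted averaging used here, the data structure and all names are illustrative assumptions rather than the disclosed scheme.

```python
# Illustrative sketch of a weighted combination for block 230: each detection
# branch reports a confidence and an estimated state, and the fused indication
# is a confidence-weighted average of the two branches.
from dataclasses import dataclass

@dataclass
class TargetIndication:
    present: bool        # did this branch declare a moving target?
    confidence: float    # e.g. derived from SNR, target size, detection continuity
    position: tuple      # (x, y) estimate in image coordinates
    velocity: tuple      # angular velocity estimate per axis (rad/s)

def fuse(micro: TargetIndication, super_res: TargetIndication, declare_at=0.5):
    """Combine the microscan-level (block 228) and super-resolution-level
    (block 226) indications into a single enhanced indication (block 230)."""
    w_m = micro.confidence * micro.present
    w_s = super_res.confidence * super_res.present
    total = w_m + w_s
    if total == 0:
        return None
    blend = lambda a, b: tuple((w_m * ai + w_s * bi) / total for ai, bi in zip(a, b))
    return TargetIndication(
        present=total >= declare_at,
        confidence=min(1.0, total),
        position=blend(micro.position, super_res.position),
        velocity=blend(micro.velocity, super_res.velocity),
    )

fast = TargetIndication(True, 0.8, (120.0, 40.0), (0.6, 0.1))   # seen at the microscan rate
slow = TargetIndication(True, 0.4, (121.0, 41.0), (0.5, 0.1))   # seen in the super-resolution image
print(fuse(fast, slow))
```

The design choice here is simply that a branch which reports no target contributes zero weight, so either branch alone can still produce a (lower-confidence) fused indication, in the spirit of the redundancy described above.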
Another benefit of the combination of the motion detection in both super-resolution images and acquired microscan images according to the disclosed technique is that it allows for lowering the number of false positive detections of targets (i.e., reducing the FAR). An example of a fast moving target only detectable by motion detection of acquired microscanned images would be a moving object that has less than 50% overlap in pixel location between consecutive whole image frames. Such a moving object does not need to be physically moving fast (let's say at 100 km/h) but within the imaged scene of observation needs to move fast enough that between consecutive whole image frames, it is difficult for a detection or tracking algorithm to establish a correlation between movements of the object per image frame and to determine that the moving object in two consecutive image frames is indeed the same moving object or different moving objects. This may be the case if less than 50% of the pixels representing the moving object overlap between consecutive image frames, thus making it difficult to correlate a path of movement of the object. Phrased in another way, a fast moving object may be any object wherein the movement of the object between consecutive frames is significantly larger than the size of the object in each frame or significantly larger than the size of the frame itself. In either case a detection or tracking algorithm will have difficulty establishing the detection of the moving object in the consecutive image frames as the same object. Since microscanned images are acquired at a very fast rate, such as 180 Hz, unless the moving object is moving at extremely high speeds (e.g., 1000 km/h or 50 rad/sec), there will be sufficient overlap between consecutive microscanned images to easily correlate the movements of the moving object between consecutive microscanned images. Another consideration in this respect relates to the perceived distance of a moving target in an acquired image. A moving object which appears close in a series of acquired images might not have its movements correlated whereas a moving object which appears far in the same series of acquired images, which might even be moving faster than the close moving object, might have its movements easily correlated to form a movement path. This may be based on the number of pixels of the moving object which overlap between consecutive acquired images. A fast moving target may also be a short-lived target which may only appear in a small number of image frames (for example less than 50 image frames). Since microscanned images are acquired at a very fast rate, even short-lived targets can be detected and an accurate estimate of their motion can be performed according to the disclosed technique. Analyzing the movement of such fast moving targets may yield no detection of a moving object if a movement analysis of consecutive super-resolution images is used as there may be no overlap whatsoever of the moving object between consecutive super-resolution images due to the longer time it takes to generate a super-resolution image. Such short-lived objects might also not appear as objects in a super-resolution image if the number of image frames in which the short-lived object appears is less than the number of image frames combined to form the super-resolution image. 
Another reason such short-lived objects might also not appear as objects in a super-resolution image is that due to their high velocity, such objects may appear distorted in the super-resolution image when the microscan images are combined, thus making it difficult to ascertain if the distorted (and most probably blurred) element seen in the super-resolution image is indeed an object or not. It is noted though that due to the lower resolution of acquired microscan images, small targets or targets requiring a high resolution to even identify as potential moving objects may be missed by a movement analysis of acquired microscanned images. An example of a slow moving target only detectable by motion detection of super-resolution images would be a small moving object that has significant overlap in pixel location between consecutive microscanned images. Such a moving object does not need to be physically small but within the imaged scene of observation needs to be small as compared to the field-of-view (herein FOV) of the scene, i.e. the object is represented in the imaged scene of observation by a few pixels, thus making it difficult for a motion detection algorithm to even determine that an object has moved between consecutive image frames. An example of such an object may be found in a scene of observation, such as an airport terminal being imaged by a surveillance camera, wherein there may be hundreds of people in the scene of observation. The hand of a single person in such a scene of observation, moving to reach for a weapon such as a gun, might not be resolvable at all in an acquired microscanned image (due to its low resolution) and might only be resolvable with a super-resolution image (due to its high resolution). It is noted that a large object which moves very slowly (as mentioned above, wherein there is significant overlap of pixels between consecutive images) might also not be determined as a moving object if only analyzed using microscanned images. Analyzing the movement of such a slow and/or small moving target may yield no detection of a moving object if a movement analysis of consecutive acquired microscan images is used as there may be so much overlap of the moving object between consecutive microscan images due to the short amount of time it takes to acquire microscan images that no movement will be detected at all. In the case of a small moving object, an analysis of consecutive microscan images might also miss such objects because very few pixels actually make up such an object in the acquired microscan images. Such an object however may be detectable as a moving object via the high resolution and slower refresh rates of consecutive super-resolution images where movement of the object may not be negligible between consecutive super-resolution images. This may be even more the case when the disclosed technique is used with a pan-and-tilt system where the refresh rate of two consecutive super-resolution images at a given field and angle of observation is even slower. Another understanding of the enhanced nature of the motion detection and object tracking of the disclosed technique is that each of the two described procedures can be used to detect moving objects which can go from being detectable by only one method to being detectable by only the other method. For example, consider a moving object which is initially fast moving in the scene of observation but then becomes slow moving and possibly small in size. 
According to the disclosed technique, continuity of the detection and tracking of the object can be maintained as movement and size characteristics of the target change over time. A further understanding of the enhanced nature of the motion detection and object tracking of the disclosed technique is that a target may be detectable using both the procedures shown, however unlike the prior art, the ability to determine a velocity and acceleration estimate from two or three acquired microscan images enhances the motion tracking available using the procedure of motion detection for super-resolution images since a motion detection algorithm can already be provided with a velocity and acceleration estimate of the moving object before a single super-resolution image has been formed. The added redundancy of detecting a moving object using two procedures simultaneously improves the PD of moving objects overall while also reducing the FAR as the detection of a moving object using one of the motion detection procedures can be used to validate the detection of the same moving object using the other motion detection procedure. This is particularly relevant in the case of a scene of observation containing a plurality of moving objects wherein moving objects might move in front of or behind each other over consecutive whole image frames. The added redundancy of detecting a moving object using the aforementioned two motion detection procedures according to the disclosed technique increases the probability that a motion detection algorithm can maintain the tracking of a moving object in the scene of observation over time even if the spatial and temporal characteristics of the moving objects change over time and even if the moving object seems to appear and disappear over consecutive image frames due to the moving object passing behind other objects and moving targets in the scene of observation. Thus in general, the spatial and temporal information extracted from a set of microscanned images, from a generated super-resolution image or from both can be used to improve the overall tracking of a moving object. Motion detection systems and microscanning systems do not work off of a single image frame or a single acquired microscan image. Thus part of the enhancement in motion detection as shown in block230does not only come from the ability, according to the disclosed technique, to determine a velocity and acceleration estimate after two or three microscanned images but from the use of such velocity and acceleration estimates in detection and tracking algorithms. Having velocity and acceleration estimates of a moving target within two or three acquired microscan images generally improves the overall success rate of detection and tracking algorithms being able to correlate a movement path for moving objects between consecutive images (whether microscan images, regular images or super-resolution images). Better correlation of the movement of moving objects between consecutive images lowers the FAR and also allows for high probabilities of detection of a moving target to occur in less time. Thus for these reasons as well, the disclosed technique enables enhanced motion detection and object tracking. 
For example, the analysis of both a plurality of microscanned images as well as the super-resolution image formed from the plurality of microscanned images can be used to determine the presence and movement of a very fast moving object by increasing the ability of motion detection algorithms to correlate the movements of such an object even if it has little overlap between consecutive microscanned images. It is noted that the instantaneous determination of velocity and acceleration estimates from two or three microscanned images or two or three super-resolution images in the disclosed technique does not represent the linear velocity and linear acceleration but rather the angular velocity and angular acceleration of moving objects. Angular velocity and acceleration can be estimated from successive microscan images or successive super-resolution images without any knowledge of the distance of the moving object from the image detector. In order to estimate the instantaneous linear velocity and linear acceleration of a moving object in a microscan image or a super-resolution image, the distance of the moving object from the image detector capturing the microscan images must be determined and together with the angular velocity and angular acceleration can be used to respectively determine the linear velocity and linear acceleration. According to the disclosed technique, the image detector used to acquire microscanned images can be coupled with an external source for determining the instantaneous distance between the image detector and a moving object in the scene of observation. The external source may be a geo-referencing device, a rangefinder and the like. The distance can also be determined using geo-referencing methods and techniques, using a parallax calculation and the like. The distance can further be determined using any known aerial photography or aerial surveillance system as is known in the art. In some embodiments of the disclosed technique, the instantaneous linear velocity and linear acceleration of the moving object are determined as well, using the methods listed above, for use with the disclosed technique in order to provide additional information about the moving object, such as being able to classify the moving object based on its instantaneous linear velocity and acceleration. As explained in more detail below, for example inFIG.8B, microscanned images2221-222Nalone can be analyzed to determine the presence of object motion in the microscanned images. Slight differences between consecutive microscanned images can be used to determine that an object in the imaged scene has moved. At least two microscanned images can then be used to determine an instantaneous angular velocity estimate of a detected moving object in the microscanned images and at least three microscanned images can be used to determine an instantaneous angular acceleration estimate of a detected moving object in the microscanned images. Very slow moving objects (e.g., moving less than 0.01 rad/sec), may not be detected by only analyzing microscanned images, thus according to the disclosed technique, as described above, a super-resolution image generated from a plurality of microscanned images is also analyzed for the detection of moving objects. The prior art merely relates to the analysis of microscanned images for spatial information and for combining microscanned images into a super-resolution image. 
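The relation between pixel-domain measurements, angular rates and linear rates described above can be sketched as follows. This is an illustrative example only, assuming the angular subtense of a single pixel (IFOV_RAD), a 180 Hz microscan interval and a range value obtained from an external source such as a rangefinder; none of these figures or helper names come from the patent.

```python
# Illustrative sketch only: converting pixel displacements between microscan
# frames into angular rates, and into linear rates once the range to the moving
# object is known (e.g., from a rangefinder or geo-referencing). IFOV_RAD
# (radians per pixel), FRAME_DT and the sample values are assumptions.
IFOV_RAD = 50e-6          # assumed instantaneous field of view per pixel [rad]
FRAME_DT = 1.0 / 180.0    # assumed microscan interval for 180 Hz acquisition [s]

def angular_rates(px_positions, dt=FRAME_DT, ifov=IFOV_RAD):
    """Angular velocity [rad/s] from 2 positions, acceleration [rad/s^2] from 3."""
    angles = [p * ifov for p in px_positions]
    omega = (angles[1] - angles[0]) / dt
    alpha = None
    if len(angles) >= 3:
        omega2 = (angles[2] - angles[1]) / dt
        alpha = (omega2 - omega) / dt
    return omega, alpha

def linear_rates(omega, alpha, range_m):
    """Linear rates of the LOS-perpendicular movement component at a known range."""
    return omega * range_m, (alpha * range_m if alpha is not None else None)

omega, alpha = angular_rates([100.0, 103.0, 107.0])   # object centroid, in pixels
v, a = linear_rates(omega, alpha, range_m=1500.0)     # range from an external source
```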
According to the disclosed technique, microscanned images as well as super-resolution images are analyzed for temporal information as well in order to enable motion detection and object tracking. Thus according to the disclosed technique, additional information and data are extracted from microscanned images and super-resolution images. As mentioned above, microscanned images2221-222Ncan be acquired in a number of different ways. One example was shown above in the apparatus ofFIG.1. According to the disclosed technique, any method or system used to generate microscanned images requires knowledge of the relative LOS shift of an image detector as well as the generation time of each microscan image within an image subset. In general, the main LOS of an imaging detector system is measured by an external device, such as inertial sensors, magnetic sensors and the like, positioned on a goniometer, for example. According to the disclosed technique, the shift in the LOS of the imaging detector during microscanning relative to the main LOS of the imaging detector is to be determined internally by a shift mechanism (as described above, such as a mirror or optical element which is moved by an actuator). Thus any method or system in which each image is captured at a slightly different LOS, and in which the relative shift in the LOS between images can be determined, can be used to generate microscanned images according to the disclosed technique. Knowledge of the relative LOS shift can be determined either by direct measurement or by image processing methods. For example, an encoder, an inertial measurement device and the like, either coupled with or integrated into an image detector, can be used to determine the LOS of the image detector as well as shifts in the LOS of the image detector. As another example, image correlation can be used as an image processing method for determining the relative shifts in LOS of each image subset (i.e., each microscan). In addition, microscan images require that an image detector shift slightly between each microscan image, as shown above inFIGS.2A-2D. The LOS shift of the image detector can be embodied either as a controlled LOS shift or an uncontrolled LOS shift. For example, a controlled LOS shift may be achieved by a LOS shifting mechanism, such as a motor, solenoid or actuator for slightly shifting the LOS of the image detector. As described above, this can be achieved by shifting the image detector itself, shifting an optical element which focuses light onto the image detector, shifting a mask to control where light beams impinge upon the surface of an image detector or any combination of the above. The shifting can also be achieved by using a mirror or a prism. An uncontrolled LOS shift can be achieved when the image detector interface is moved or vibrated with respect to the scene of observation, thereby creating a slight shift in the LOS from which the image is captured. In an uncontrolled LOS shift, a separate microscanning element is not required as the vibrations or movements of the image detector themselves generate arbitrary and substantially constant shifts in the LOS of the image detector. As an example, microscanned images with an uncontrolled LOS shift can be embodied by an image detector placed on a platform. Such a platform will slightly vibrate and shake due to natural forces and certainly in the case where the platform is moving (as the motor causing the platform to move will cause vibrations). 
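For the image-processing route of recovering the relative LOS shift mentioned above, a minimal sketch using phase correlation (a standard image-correlation method available, for example, in OpenCV) might look as follows; the synthetic frames and the helper name relative_los_shift are assumptions made only for this example.

```python
# A minimal sketch of recovering the relative LOS shift between two microscan
# frames by image correlation. cv2.phaseCorrelate is a standard OpenCV call;
# the frames here are synthetic stand-ins.
import cv2
import numpy as np

def relative_los_shift(frame_a, frame_b):
    """Estimate the (dx, dy) sub-pixel shift of frame_b relative to frame_a."""
    a = np.float32(frame_a)
    b = np.float32(frame_b)
    (dx, dy), response = cv2.phaseCorrelate(a, b)
    return dx, dy, response   # response gives a rough confidence measure

rng = np.random.default_rng(0)
base = rng.random((64, 64)).astype(np.float32)
shifted = np.roll(base, shift=(0, 3), axis=(0, 1))   # simulate a 3-pixel LOS shift
print(relative_los_shift(base, shifted))
```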
The platform may be for example a car, a truck, a tank or any known land or sea vehicle. The platform may also be for example a fixed-wing aircraft or a rotary-wing aircraft and can also include unmanned aerial vehicles, such as remotely-piloted vehicles and drones. The platform does not necessarily need to be mobile and can be a stationary platform such as an observation tower which nevertheless may slightly vibrate and shake due to natural forces (such as winds). The images detected by such an image detector will substantially be offset. In this embodiment of the disclosed technique, the image detector's electro-optic design must be designed accordingly to determine when consecutive detected images are detected at slightly different lines-of-sight. Thus each detected image from an uncontrolled LOS shift in an image detector is equivalent to a microscanned image detected through a controlled LOS shift in an image detector which may include a microscanning element. As described herein, microscanned images and the methods and systems of the disclosed technique which relate to microscanned images refer to images having slightly different lines-of-sight whether the LOS shift was either controlled or uncontrolled, and thus also refer to images received from image detectors which may not include a microscanner or LOS shifting mechanism but nevertheless can detect images having slightly different lines-of-sight. According to the disclosed technique, offset images can also be used to extract spatial and temporal information about moving objects captured in the offset images. Reference is now made toFIG.6, which is a schematic illustration of a method for enhancing motion detection and the tracking of objects using microscanned images, operative in accordance with a further embodiment of the disclosed technique. The method ofFIG.6is based on the block diagrams shown above inFIG.5B.FIG.6shows a method for enhanced motion detection using microscanned images wherein the microscanned images are used to construct a super-resolution image and are also used individually. In a procedure250a plurality of microscanned images of at least one moving object is acquired. A first subset of the microscanned images forms a first data set and a second subset of the microscanned images forms a second data set. Each data set represents the raw image data captured on an image detector and can be used in various ways. In general, the raw image data of a data set represents the raw image data captured on the entire detector array of the image detector for a given image frame (for example, all the sub-frames shown above inFIGS.2A-2D). The data sets can be analyzed spatially and temporally, as described below. In addition, the data sets can be used to construct a super-resolution image. The data sets can also be used to display microscanned images to a user. However each of these is merely an option and the data sets can remain as data sets without graphic representation to a user. In general, the data sets can be referred to as images not necessarily designated for graphic representation to a user. For example, in the field of computer vision, the generic term “images” may refer to data sets upon which manipulations and processing are performed even though those images are not displayed to a user until processing is complete. 
Thus in the disclosed technique, the term “images” or “microscanned images” may be used to refer to images or data sets upon which an analysis is executed or performed even though those images or data sets may not be graphically represented to a user. The plurality of microscanned images of the moving object can be divided into a plurality of data sets, however according to the disclosed technique, at minimum the plurality of microscanned images should be divided into two different data sets. In general, a data set represents a sufficient number of microscanned images for forming a super-resolution image frame. Thus in a typical scenario, the number of data sets is equal to the number of super-resolution image frames that can be constructed (as described below in procedure252). In an imaging system where a detector does not pan and/or tilt and image a panoramic scene of observation, each data set represents the same scene of observation as acquired microscanned images. According to the disclosed technique, at minimum each data set should include at least two microscanned images from which temporal and spatial information can be extracted and which can be used to form a super-resolution image, as described below. With reference toFIG.5B, plurality of microscanned images2221-222Nare acquired. With reference toFIG.8B(as described below), of the microscanned images which are acquired, various subsets of microscan images can be formed. For example, a first set of microscan images362A (FIG.8B) includes a first subset of microscanned images of a moving object whereas a second set of microscan images362B (FIG.8B) includes a second subset of microscanned images of the moving object. As can be seen, the first subset and the second subset are different. Regardless, each subset is imaging the same scene of observation and thus the same stationary objects in each subset are imaged in both subsets. In a procedure252, a respective super-resolution image is formed from each data set. As mentioned above, each data set should include at least two microscanned images. With reference toFIG.5B, plurality of microscanned images2221-222Nare formed into super-resolution image224. With reference toFIG.8B, a first set of microscan images362A can be constructed into a first super-resolution image, a second set of microscan images362B can be constructed into a second super-resolution image and a third set of microscan images362C can be constructed into a third super-resolution image. In a procedure254, each super-resolution image is analyzed for spatial and temporal information, including an analysis of the movement of any objects between consecutive super-resolution images. Spatial information can include the position of any moving object detected in the super-resolution image. Procedure254is similar to step a) mentioned above regarding object motion detection and tracking, wherein procedure254executes step a) on at least two consecutive super-resolution images. Thus in procedure254, potential moving objects between at least two consecutive super-resolution images are identified, including their respective positions in each super-resolution image. As explained above, such moving objects may be for example objects which move very slowly between consecutive image frames and/or require increased resolution and detail to ascertain that movement is actually occurring. 
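A highly simplified sketch of forming a super-resolution frame from one data set (procedure252) is given below, assuming a k×k microscan pattern with exact integer sub-pixel offsets; a practical implementation would use the measured LOS shifts and interpolation rather than the direct interleaving shown here.

```python
# Minimal sketch of forming one super-resolution frame from a data set of k*k
# microscan frames with known sub-pixel offsets. The exact-offset interleaving
# is an assumption made for clarity of the example.
import numpy as np

def build_super_resolution(data_set, offsets, k):
    """data_set: list of (H, W) arrays; offsets: list of (row, col) sub-pixel steps."""
    h, w = data_set[0].shape
    sr = np.zeros((h * k, w * k), dtype=np.float32)
    for frame, (dr, dc) in zip(data_set, offsets):
        sr[dr::k, dc::k] = frame          # interleave each microscan on the fine grid
    return sr

frames = [np.full((4, 4), i, dtype=np.float32) for i in range(4)]
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]               # a 2x2 microscan pattern
sr_image = build_super_resolution(frames, offsets, k=2)  # 8x8 super-resolution frame
```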
In a procedure256, a respective first movement indication of the moving object is determined according to the spatial and temporal information derived in procedure254. Procedure256substantially represents step b) as mentioned above regarding object motion detection and tracking, wherein at least one tracking or track manager algorithm is used to establish a path of movement of the potentially moving objects identified in procedure254over consecutive super-resolution image frames. Procedure256is performed for each identified potentially moving object in procedure254, however the first movement indication is only determined if a path of movement for the potential moving object between consecutive super-resolution images is determined by the at least one tracking or track manager algorithm according to the criteria and parameters of the used tracking or track manager algorithm. With reference toFIG.5B, the spatial and temporal information represented by block226represents the analysis of the super-resolution images and the determination of a respective first movement indication in procedures254and256. In a procedure258, at least a portion of a respective subset of the microscanned images is analyzed for spatial and temporal information. Procedure258is performed for each acquired data set in procedure250. Thus in the case shown in procedure250, procedure258is performed once for the first data set and once for the second data set. The spatial and temporal information derived in procedure258represents step a) as mentioned above regarding object detection and motion tracking in general and can be executed using known object tracking algorithms. In procedure258the data sets are representative of the acquired microscanned images. Procedure258is used to determine the presence of potential moving objects in the acquired microscan images and in general, each data set includes a plurality of microscanned images, thus an initial estimate of both velocity and acceleration can be determined for potentially moving objects. It is noted that the estimate of both the velocity and acceleration is an estimate of the angular velocity and angular acceleration of potentially moving objects. In general, in display systems of moving objects, changes in position of moving objects on the pixels of a display system represent the angular velocity of the movement component of the object which is perpendicular to the LOS of the image detector. Thus movement of an object as seen in consecutive images (whether they be microscanned images, regular images or super-resolution images) is correlated to the perpendicular component of the angular velocity of the moving object to the LOS of the image detector as well as the distance of the moving object from the moving detector. This explains why fast moving objects which are distant from an observation point appear to move slower than even slower moving objects which are closer to the observation point. Spatial information which is analyzed may be the position of the moving object in each microscanned image of each data set. Temporal information of the moving object may include the instantaneous velocity and acceleration of the moving object. 
As mentioned above, since procedure258is executed on data sets from microscanned images, very fast moving objects may be identified which would otherwise not be identified if procedure258was performed on a regular image frame or a super-resolution image as the acquisition (or refresh) rate of microscanned images is much higher than for regular images or super-resolution images. Thus procedure258is able to identify moving objects which might not be identified in procedure254. With reference toFIG.8B, a first set of microscan images362A is analyzed for temporal and spatial information. As shown, an object364is detected and is analyzed over subsequent microscanned images in the first set to determine that the subsequent movements of moving objects364′,364″ and364″′ are indeed the same object which is moving. As seen, an initial estimate is made of the position of moving object364in each microscanned image. In addition, an initial estimate of the instantaneous velocity and acceleration of the moving object can be determined as shown by a reference number372. It is noted that moving objects364,364′,364″ and364″′ are shown with an offset in the vertical direction (i.e., Y-axis) to avoid clutter and increase clarity in the figure. In a procedure260, the spatial and temporal information of procedure258is used to determine a respective second movement indication of the moving object. Procedure260substantially represents step b) mentioned above regarding object motion detection and tracking methods, wherein at least one tracking or track manager algorithm is used to establish a path of movement of the potential moving objects identified in procedure258according to the criteria and parameters of the tracking or track manager algorithm. Procedure260is performed for each acquired data set. Since each acquired data set includes a plurality of microscanned images (at minimum two or three, but practically speaking many more), consecutive microscanned images in each data set can be used to determine the instantaneous velocity and acceleration of moving objects for which a second movement indication is determined. Thus unlike procedure256, in procedure258the movement indication of a moving object in the acquired and analyzed data sets can include an estimate of the angular velocity and angular acceleration of the moving object. With reference toFIG.5B, the spatial and temporal information derived, as shown in block228, represents the analysis of the movement of potential moving objects and the determination of a second movement indication as shown in procedures258and260. With reference toFIG.8B, the movement of moving object364is analyzed spatially and temporally to determine the instantaneous angular velocity and angular acceleration of the moving object, as shown by a reference number372, as well as a position estimate of moving object364. Each subset of microscanned images can be analyzed spatially and temporally for determining an updated instantaneous velocity and acceleration, as shown by reference numbers376and384(both inFIG.8B). As can be seen, procedures252-256and procedures258-260are performed simultaneously or in parallel and both derive from the acquisition of a plurality of microscanned images forming at least two different data sets. As described above, procedures252-256are used to identify slow moving objects in a scene of observation where a high resolution is required to identify and determine that motion of the object is actually occurring. 
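The microscan-level analysis of procedures258and260could be sketched, under simplifying assumptions, as frame differencing followed by finite differences of the resulting positions; the threshold value and the centroid-based localization used here are illustrative choices rather than the method mandated by the disclosed technique.

```python
# Hypothetical sketch of a microscan-level analysis: difference consecutive
# frames, threshold, take the centroid of changed pixels as a crude position
# estimate, and difference positions for velocity and acceleration estimates.
import numpy as np

def centroid_of_change(prev, curr, thresh=20.0):
    diff = np.abs(curr.astype(np.float32) - prev.astype(np.float32))
    ys, xs = np.nonzero(diff > thresh)
    if len(xs) == 0:
        return None                       # no potential mover in this pair
    return float(xs.mean()), float(ys.mean())

def track_data_set(frames, dt):
    positions = []
    for prev, curr in zip(frames, frames[1:]):
        c = centroid_of_change(prev, curr)
        if c is not None:
            positions.append(c)
    vels = [((x2 - x1) / dt, (y2 - y1) / dt)
            for (x1, y1), (x2, y2) in zip(positions, positions[1:])]
    accs = [((vx2 - vx1) / dt, (vy2 - vy1) / dt)
            for (vx1, vy1), (vx2, vy2) in zip(vels, vels[1:])]
    return positions, vels, accs          # angular rates once scaled by the IFOV

rng = np.random.default_rng(1)
frames = [rng.integers(0, 255, (48, 64)).astype(np.float32) for _ in range(4)]
print(track_data_set(frames, dt=1.0 / 180.0))
```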
Procedures258-260are used to identify fast moving objects in a scene of observation, where a low resolution may be sufficient to determine that the observed differences in position between images belong to the same object which is moving fast. In addition, procedures252-256and258-260may produce movement indications of the same moving object. According to the disclosed technique, procedures252-256and258-260can be executed using known object motion detection and tracking algorithms, such as TDL, image differencing, background subtraction algorithms and background modeling and foreground detection algorithms including MOG, MOG2, KDE, GMG, running average, temporal median, PCA and Bayesian background modeling, optical flow estimation algorithms including the Lucas-Kanade method and the Horn-Schunck method, combinations of optical flow and image differencing algorithms, motion detection based on object detection and recognition algorithms including machine learning approaches, SVMs, deep learning algorithms and CNNs, image registration algorithms for background modeling and image subtraction including correlation based registration algorithms, feature based registration algorithms including SIFT, SURF and BRIEF and pyramidal image registration. In a procedure262, the respective first movement indication from procedure256and the respective second movement indication from procedure260are combined for enhancing motion detection of the moving object. The two movement indications can be combined using known methods, for example weighted calculation methods and systems, such as a Kalman filter. In this procedure, the extracted spatial and temporal information and the first and second movement indications are used to positively identify a moving object in the acquired microscan images and also to predict where the moving object will appear in a subsequent microscan image or a subsequent image frame or super-resolution image frame. This procedure thus enhances motion detection of a moving object by increasing the PD since moving objects which would only be identified by a movement analysis of super-resolution images (procedures252-256) or which would only be identified by a movement analysis of microscanned images (procedures258-260) can now be identified simultaneously. The combination of the two movement indications of both super-resolution images and microscanned images also enhances motion detection by lowering the FAR as the two movement indications can further be used to corroborate that an identified moving object in the super-resolution image in procedure256is indeed a moving object as determined in the data set (i.e., representative of a set of microscanned images) in procedure260and vice-versa. Furthermore, as mentioned above, procedures258and260can be used to determine an instantaneous angular velocity and angular acceleration of a moving object from two or three consecutive microscanned images. Practically speaking, motion detection methods and systems, including the disclosed technique, do not function using only a single microscan image. However the determination of the position, instantaneous angular velocity and instantaneous angular acceleration of a moving object can improve the performance of algorithms used in procedures254-256and procedures258-260, especially in correlating a path of movement of a potential moving object over consecutive frames (be they microscanned image frames as data sets or super-resolution image frames). 
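Since procedure262notes that the two movement indications can be combined using a weighted method such as a Kalman filter, a compressed, hypothetical illustration is given below; the one-dimensional constant-velocity model, the noise values and the measurement sequence are assumptions made only for this sketch and are not figures from the patent.

```python
# A hypothetical illustration of fusing two movement indications with a scalar
# Kalman filter. State is 1-D position/velocity; noise values and rates are
# assumptions. Microscan fixes are treated as noisier than super-resolution fixes.
import numpy as np

def kf_predict(x, P, F, Q):
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, H, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 1.0 / 180.0                               # assumed microscan rate
F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity motion model
Q = 1e-4 * np.eye(2)
H = np.array([[1.0, 0.0]])
R_micro = np.array([[4.0]])                    # coarser, low-resolution indication
R_super = np.array([[0.25]])                   # finer, high-resolution indication

x, P = np.zeros(2), np.eye(2)
for z in [10.2, 10.9, 11.4, 12.1]:             # microscan position fixes [pixels]
    x, P = kf_predict(x, P, F, Q)
    x, P = kf_update(x, P, np.array([z]), H, R_micro)
# when a super-resolution indication arrives, apply the same update with R_super
x, P = kf_update(x, P, np.array([11.9]), H, R_super)
```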
The determination of the position, instantaneous velocity and instantaneous acceleration of a moving object can also be used in predicting where the moving object will be in a subsequent frame, thus further reducing the FAR and reducing the time required to issue a warning about a moving object that might be a threat (depending on the circumstance and context where the disclosed technique is used). The determination of the instantaneous velocity and instantaneous acceleration of the moving object from two and three consecutive microscan images thus also contributes to the enhancement of motion detection as described in procedure262. With reference toFIG.8B, the instantaneous velocity and instantaneous acceleration determined for each set of microscanned images can be used to predict the position of the moving object in a subsequent set of microscanned images. For example, v1and a1determined for a first set of microscan images362A (FIG.8B) can be used to predict the position of the moving object in a second set of microscan images362B (FIG.8B), shown as a predicted position of moving object380. It is noted that the predicted position of moving object380is further along the image scene than the position of moving object364″′ in first set of microscan images362A. The predicted position of the moving object in the second set based on the analysis of the microscanned images in the first set enhances the motion detection as well as the object tracking of the moving object according to the disclosed technique. In accordance with another embodiment of the disclosed technique (not shown), only an analysis of the microscanned images is performed (i.e., only procedures250,258and260). In this embodiment, at least one data set including at least two microscanned images is analyzed for spatial and temporal information in order to extract an instantaneous angular velocity estimate of a moving object. Such an embodiment might be used when the disclosed technique is used in a photogrammetry system for generating a single large or panoramic image of a scene of observation comprising a plurality of images (such as described below inFIGS.9-10B). Such an embodiment can also be used, as described below, for generating a single large or panoramic image of a scene of observation wherein moving objects have been removed and only non-moving elements in the microscanned images are shown in the panoramic image. Information about the background can be determined from different microscanned images such that interpolation algorithms can be used to generate a static image of an aerial photograph. The single large or panoramic image is substantially a still image comprised of a plurality of images. According to the disclosed technique as described above inFIG.6, a position estimate of a moving object as well as movement information (such as a movement indication) and an instantaneous angular velocity estimate of the moving object can be determined from a plurality of microscanned images, even from a single data set of microscanned images, even when there is technically only a single panoramic image frame. In accordance with a further embodiment of the disclosed technique, at least one data set including at least two microscanned images can be used in a photogrammetry system for generating a single large or panoramic image without any moving objects in the generated image, even if there were moving objects identified in the microscanned images. 
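The prediction step described above, in which v1and a1from one set of microscans predict where the object should appear in the next set, reduces in its simplest form to a constant-acceleration extrapolation; all of the numbers below are illustrative only.

```python
# Minimal sketch of the prediction step: the position, velocity and acceleration
# estimated from one set of microscans predict where the object should appear in
# the next set. All values are illustrative assumptions.
def predict_position(pos, vel, acc, dt):
    """Constant-acceleration prediction over the interval between two sets."""
    return pos + vel * dt + 0.5 * acc * dt * dt

x1, v1, a1 = 240.0, 90.0, 12.0        # pixels, pixels/s, pixels/s^2 from the first set
dt_sets = 1.0 / 20.0                  # assumed time between consecutive sets [s]
x_predicted = predict_position(x1, v1, a1, dt_sets)
# compare x_predicted with the measured position in the next set to validate the track
```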
In this embodiment, the static areas of the acquired images are used to construct a still image without any identified moving objects in the constructed image. Due to the slightly different perspectives on the image scene from the acquired microscanned images, image information about the background of a moving object can be derived from different microscanned images. Thus, an identified moving object can be removed from the image to get a clean static image of an aerial photograph without any moving objects. An example of how background information can be derived from microscanned images is given below inFIG.14. Reference is now made toFIG.7, which is a graphic illustration of a first example using a plurality of microscanned images for enhancing motion detection, generally referenced280, constructed and operative in accordance with a further embodiment of the disclosed technique.FIG.7graphically shows how procedures258and260can be used to determine a position estimate and an instantaneous estimate of velocity and acceleration. It is noted that in procedures258and260, images of the microscanned images may be formed however they are not images which have been processed and prepared for displaying to a user, as graphically shown inFIG.7. These images may be images upon which processing and manipulations can be performed by the disclosed technique but are not meant for displaying. In procedures258and260, a super-resolution image is not formed, however for the sake of explaining the disclosed technique, a super-resolution image is graphically shown inFIG.7. The graphics shown inFIG.7are merely brought to better explain procedures258and260and should not be construed as limiting. As an example, three microscanned images of a scene are shown, a first microscan image2821(shown as microscan 1), a second microscan image2822(shown as microscan 2) and a third microscan image2823(shown as microscan 3). Each microscan image represents an image which is detected on a portion of a pixel for each pixel of an image detector (not shown). The LOS of the image detector to the scene is slightly changed or modified between microscan images, thereby generating slightly different images of the scene. Each one of microscan images2821-2823shows a moving object which is captured in three slightly different positions. As an example, the moving object is shown as a vehicle and only one moving object is shown in the microscan images however the moving object may be any type or sort of moving object and a microscan image can include a plurality of moving objects. A reference line286which spans each of the shown microscan images is drawn to show the relative movement of the moving object between microscan images. An image detector may capture a plurality of microscanned images for forming a super-resolution image, wherein the plurality may be two microscan images, ten microscan images or even one hundred microscan images. In microscan image2821a moving object2881is shown. In microscan image2822the moving object is now shown in an updated position2901with the previous position of moving object2881shown as a silhouetted position2882. As can be seen, the moving object has advanced between microscan 1 and microscan 2. The movement and position of the moving object in each of microscan 1 and microscan 2 can be determined and the difference in position including the elapsed time between the capture of microscan 1 and microscan 2 can be used to determine the instantaneous angular velocity of the moving object. 
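One way such a mover-free background could be assembled, offered here only as a hedged sketch and not as the patent's prescribed method, is a per-pixel temporal median over co-registered microscan frames, since transient (moving) content appears in only a few of the frames; the registration step is assumed to have been performed already.

```python
# Hypothetical sketch of deriving a mover-free background: a per-pixel temporal
# median over registered microscan frames suppresses transient (moving) content.
import numpy as np

def static_background(registered_frames):
    """registered_frames: (N, H, W) stack of co-registered microscan frames."""
    return np.median(registered_frames, axis=0)

stack = np.stack([np.random.default_rng(i).random((32, 32)) for i in range(9)])
clean = static_background(stack)   # moving objects appear in few frames and drop out
```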
This is shown schematically via a double arrow294, listing the letter ‘v’. In microscan image2823the moving object is again shown in an updated position2921with the previous positions of moving object2881shown as silhouetted positions2902(showing the position of the moving object in microscan 2) and2883(showing the position of the moving object in microscan 1). As can be seen, the moving object has advanced between microscan 2 and microscan 3. Using microscan images2821-2823, the movement, position and velocity of the moving object between each of microscans 1-3 can be determined, and the differences in position and velocity, together with the elapsed time between the capture of microscans 1-3, can be used to determine the instantaneous angular acceleration of the moving object. This is shown schematically via a double arrow296, listing the letter ‘a’. Thus according to the disclosed technique, a minimum of two microscan images can be used not only to detect movement of objects in the microscan images but also to determine the velocity of the moving object. Furthermore according to the disclosed technique, a minimum of three microscan images can be used to determine the acceleration of the moving object. Updated position2921can also be used to generate an updated determination of the velocity of the moving object. According to the disclosed technique, a super-resolution image may be formed minimally of two or three microscan images covering an entire image scene and in general, all the microscan images covering an entire image scene should be used when forming a super-resolution image. Thus in an imaging system where 9 microscan images cover an entire image scene, all 9 microscan images should be used in the construction of the super-resolution image. In an embodiment of the disclosed technique wherein the imaging system is used to construct a super-resolution image and also to enable a spatial and temporal analysis of the microscanned images, the imaging system should be designed such that at least two or at least three microscanned images cover the entire image scene. In the case of determining the velocity of a moving object, minimally at least two microscan images are required and in the case of determining the acceleration of a moving object, minimally at least three microscan images are required. In general, all the microscanned images of a given set of microscanned images are used to form a super-resolution image, however the aforementioned relates to the minimal number of microscanned images from which a super-resolution image can be formed and from which spatial and temporal information can also be derived. Due to the frame rate at which the microscan images are captured, the moving object overlaps between consecutive microscan images, thus enabling a temporal analysis to determine that the differences in the moving object between consecutive microscan images represent a single moving object and not different objects. It is noted that between two regular image frames or two super-resolution image frames constructed from microscan images, there may not be sufficient overlap of the moving object between consecutive images to enable a spatial and temporal analysis of the moving object to properly establish a path of movement of the moving object. As described above inFIG.6, acquired microscan images are analyzed for movement while also being used to construct a super-resolution image. 
Thus as shown, as the temporal information (position, velocity and acceleration) about the movement of the object in microscan images2821-2823is extracted, the microscan images can be combined together to form a super-resolution image284, which shows a higher resolution (i.e., super-resolution) image of a moving object300. The super-resolution image is based on a combination of the captured moving object in the microscanned images (i.e., moving object2881and its two updated positions2901and2921). As described below inFIGS.12-14, based on the extracted temporal information from microscan images2821-2823, moving object300can be displayed with updated information regarding its angular velocity and acceleration, as shown by an arrow298. Thus according to the disclosed technique, during the time it takes to construct a super-resolution image, an instantaneous estimate of the angular velocities and accelerations of moving objects, as well as an estimate of the positions of the moving objects, can be determined and once the super-resolution image is displayed, it can be displayed with the added information of a velocity and acceleration estimate for each detected and determined moving object. Reference is now made toFIG.8A, which is a graphic illustration of the prior art, generally referenced320.FIG.8Ashows how velocity estimates of moving objects are performed in known object tracking systems. For the purposes of illustration only three captured images are shown, a first image322A (image 1), a second image322B (image 2) and a third image322C (image 3). Each image is captured by an image detector (not shown) of a tracking system. First image322A is captured at time t=0, second image322B is captured at time t=1 and third image322C is captured at time t=2. In first image322A a moving object324is detected and identified. Dotted lines330,332A,332B and332C which overlap the shown images are used to show the relative movement of moving object324between images. In first image322A, dotted lines330and332A delineate a length of moving object324, shown as a vehicle, for example the vehicle's length. In second image322B, the moving object has moved half the distance of its length. As shown, a moving object324′ shows the original position of the moving object in image 1 and a moving object326shows its current position in image 2. The distance moved by moving object324in image 1 and moving object326in image 2 is shown by an arrow334A. The distance between dotted lines332B and332A is half the distance between dotted lines332A and330in this example. The distance moved by the moving object between image 1 and image 2 and the difference in time between image 1 and image 2 is used to estimate the angular velocity of the moving object. Based on the estimated velocity, the tracking system can make a prediction of where the moving object should be in image 3. As shown, a moving object326′ represents the estimated position of the moving object based on the velocity calculation performed using images 1 and 2. The distance moving object326′ has moved is shown by an arrow334B representing a distance between dotted lines332C and332B, the same distance between dotted lines332B and332A. However as shown in image 3, the moving object has apparently accelerated (shown as a moving object328) and its actual position is half a vehicle length longer than predicted by the tracking system as shown by moving object326′. 
At this point, images 1-3 can be used to predict both the velocity and the acceleration of the moving object in a fourth image; however, the lack of an estimate of the acceleration of the moving object in image 2 has led to an inaccurate prediction of the position of the moving object in image 3. Reference is now made toFIG.8B, which is a graphic illustration of velocity and acceleration estimates for enhancing motion detection and the tracking of objects using microscanned images, generally referenced360, constructed and operative in accordance with a further embodiment of the disclosed technique.FIG.8Bshows how velocity and acceleration estimates are performed by a tracking system of the disclosed technique as compared to known tracking systems as shown inFIG.8A.FIG.8Bshows three sets of microscan images, a first set362A, a second set362B and a third set362C. Each set of microscan images can be combined together to form a super-resolution image at a given time. Each set of microscan images shows a plurality of microscan images captured of a moving object. For example, first set of microscan images362A shows four microscan images of a moving object364, second set of microscan images362B shows four microscan images of the moving object in an updated position, shown as a moving object374and third set of microscan images362C shows four microscan images of the moving object in a further updated position, shown as a moving object382. Each set of microscan images shows four microscan images of the same moving object superposed on top of one another, with each microscan image being positioned in a different vertically shifted position to clearly show the relative horizontal movement of the moving object between consecutive microscan images. As shown, first set362A shows moving object364and its subsequent movement as a moving object364′, a moving object364″ and a moving object364″′. An actual superposing of the four microscan images would show moving objects364,364′,364″ and364″′ overlapping each other, giving the impression of a blurred moving object. As mentioned, the overlapping of the moving object has been shifted vertically to show the different positions of the moving object within a given set of microscanned images. Second set362B shows moving object374and its subsequent movement as a moving object374′, a moving object374″ and a moving object374″′. Third set362C shows moving object382and its subsequent movement as a moving object382′, a moving object382″ and a moving object382″′. Note that the different sets of microscanned images are drawn one below the other for convenience. Moving object364″′ in the first set of microscanned images continues its movement in the second set of microscanned images and is thus drawn further to the right as moving object374. The same is true for moving object374″′ and the third set of microscanned images showing moving object382. In first set of microscan images362A, a first dotted line366is shown delineating a starting position of moving object364. A set of subsequent dotted lines368,368′,368″ and368″′ delineate a respective end position of each of moving objects364,364′,364″ and364″′. A distance between first dotted line366and dotted line368is shown by an arrow370. A distance between each of dotted lines368,368′,368″ and368″′, graphically showing the acceleration of the moving object in the first set of microscans, is shown via a respective plurality of arrows371A,371B and371C. 
Even though first set of microscan images362A are shown being captured at a time t=0, each microscan image is captured at a slightly different time. The difference in position between moving object364′ and moving object364can be used to determine an instantaneous velocity of the moving object, as can the difference in position between moving object364″ and moving object364′ and between moving object364″′ and moving object364″. Furthermore, the difference in position between moving object364″ and moving object364can be used to determine an instantaneous acceleration of the moving object as can the difference between moving object364″′ and moving object364′. These differences can also be used to determine updated velocity estimates of the moving object per microscan image. As shown, the lengths of plurality of arrows371A-371C are all different and the relative difference between their respective lengths is also different, implying that moving object364accelerates within first set362A. Using the method as described above inFIG.6(in particular, procedures258and260), the various velocity and acceleration calculations between the microscan images in first set362A can be used to determine a current velocity and acceleration of moving object364, shown as v1and a1by reference number372. As noted above, even though the moving object is graphically shown having linear movement, its change in position is a result of the perpendicular component of its angular velocity in relation to the LOS of the image detector which generated each of the microscanned images. Using the estimated velocity and acceleration calculations as shown in first set362A, a prediction of where the moving object will be in second set362B can be made. A second dotted line378shows the start position of a predicted position of a moving object380whereas a subsequent dotted line378′ shows the actual start position of moving object374. As shown in first set362A, similar calculations and determinations of the instantaneous velocity and acceleration can be made in second set362B using the various positions of moving objects374,374′,374″ and374″′. These calculations are not shown in second set362B as they were shown in first set362A to keep second set of microscan images362B less cluttered inFIG.8B. Second set362B shows a plurality of microscan images which together can be combined into a super-resolution image taken at time t=1. An updated current velocity and acceleration calculation v2and a2shown by a reference number376is listed taking into account all the instantaneous velocity and acceleration calculations which are possible in second set362B as described above regarding first set362A. Just between first set362A and second set362B at least two differences between the disclosed technique and the prior art as shown inFIG.8Acan be seen. A first difference is that within a single image frame (for example, first set of microscan images362A), an estimate of the angular velocity and angular acceleration of a moving object can be performed such that already in a second image frame (for example, second set of microscan images362B) a prediction of the position, angular velocity and angular acceleration of a moving object can be made. 
In the prior art, at least two image frames are required to determine a velocity such that a prediction of the position and velocity can be made in a third image frame, whereas according to the disclosed technique, the microscan images forming a first image frame (such as a super-resolution image) can be used to predict a position, velocity and acceleration in a second image frame. A second difference is that the disclosed technique provides a significant increase in accuracy of the prediction of position, velocity and acceleration over the prior art. As can be seen in second set362B, the difference in position between moving object374and the predicted position of moving object380is significantly smaller as compared with the position of moving object328(FIG.8A) and the estimated position of moving object326′ (FIG.8A). Even though inFIG.8Athere is still overlap between moving objects326(FIG.8A) and328and a prior art system may identify those moving objects as the same moving object, in a slower frame rate system (and/or with a faster moving object) this overlap might not exist thus a prior art system might not identify these two objects as actually being the same moving object. Using the estimated velocity and acceleration calculations as shown in second set362B, an updated prediction of where the moving object will be in third set362C can be made. A third dotted line386shows the start position of a predicted position of a moving object388whereas a subsequent dotted line386′ shows the actual start position of moving object382. As shown in first set362A, similar calculations and determinations of the instantaneous velocity and acceleration can be made in third set362C using the various positions of moving objects382,382′,382″ and382″′. Similar to second set362B, these calculations are not shown in third set362C to keep third set of microscan images362C less cluttered inFIG.8B. Third set362C shows a plurality of microscan images which together can be combined into a super-resolution image taken at time t=2. An updated current velocity and acceleration calculation v3and a3shown by a reference number384is listed taking into account all the instantaneous velocity and acceleration calculations which are possible in third set362C as described above regarding first set362A. As can be seen, by using an updated calculation of the velocity and acceleration of the moving object in second set362B, the difference between the predicted position of moving object388compared to the actual position of moving object382is minimal as compared to the prior art calculation as shown inFIG.8A. This minimal difference improves the ability to recognize that images of moving objects belong to the same physical moving object, and thus improves the tracking of the moving object by improving the execution of a detection or tracking algorithm for correlating a logical path movement of a moving object over consecutive microscan images. As mentioned above,FIG.8Bgraphically illustrates how velocity and acceleration estimates can be derived from acquired microscan images, however actual microscan images for display do not need to be formed to make such determinations. InFIG.6procedures258and260, estimating the velocity and acceleration of potential moving objects, as part of analyzing each data set for spatial and temporal information, is performed on data sets or images which represent the microscan images however such images are not meant for displaying to a user. 
Reference is now made toFIG.9, which is a schematic illustration of a system for increased situational awareness using super-resolution images, generally referenced460, constructed and operative in accordance with a further embodiment of the disclosed technique. System460enables wide area situational awareness to be implemented using a single image detector that can capture microscan images and form super-resolution images. Situational awareness systems are used for surveillance and generally require wide area coverage (i.e., wider than is possible with a single image detector at a fixed position) while also generating high resolution images. Prior art situational awareness systems can achieve high resolution images with wide area coverage either by employing a cluster of high resolution image detectors or by using a sophisticated pan-and-tilt mechanism with a single high resolution image detector. The first system of the prior art involves an increase in costs since high resolution image detectors are expensive (such as high resolution mid-wavelength infrared (MWIR) sensors) and this system requires many of them to achieve wide area coverage. The second system of the prior art also involves an increase in costs since the pan-and-tilt mechanism may be expensive to ensure that a desired refresh rate of the surveyed scene is achieved. Such an approach might also reduce the system's overall reliability. The disclosed technique as shown in system460obviates the need for expensive high resolution image detectors and expensive pan-and-tilt mechanisms, thereby providing a cost effective, simple scanning mechanism combined with a super-resolution image generated by a rapid acquisition sensor, as described below. System460includes a pan-and-tilt mechanism462, a microscanner464and a processor466. Microscanner464is coupled with both pan-and-tilt mechanism462and with processor466. Pan-and-tilt mechanism462is also coupled with processor466. Pan-and-tilt mechanism462can be an off-the-shelf product and does not need to be of especially high quality, as explained below. Microscanner464can be any image detector capable of performing microscanning, such as apparatus100(FIG.1), a system capable of implementing the various steps shown in block diagram220(FIG.5B) or any other known microscanning image detector. Microscanner464may be a high frequency microscanner, a high frequency detector or even a combination of the two, where high frequency means an image capture frequency of 100 Hz or higher. Microscanner464can be any known combination of an image detector and at least one optical element, together with any other elements needed, if necessary, to acquire microscan images (such as a LOS shifting mechanism). It is noted as well that in one embodiment, microscanner464does not need to be part of an image detector, may be implemented using a moving mask and is not limited to microscanners which optically shift the line-of-sight between the image detector and the scene of observation. Microscanner464is used to capture a plurality of microscanned images of a portion of a wide area scene to be surveyed. The plurality of microscanned images are provided to processor466which can derive a plurality of super-resolution images based on the microscanned images. Once microscanner464has captured sufficient microscanned images (i.e., at least two) of the wide area scene to be surveyed to form a super-resolution image, pan-and-tilt mechanism462moves microscanner464to another portion of the wide area scene. 
Microscanner464then captures another plurality of microscanned images and provides them to processor466for forming another super-resolution image. Once pan-and-tilt mechanism462has moved microscanner464to capture microscanned images of the entire wide area scene and processor466has generated super-resolution images of different portions of the wide area scene, processor466stitches together all the super-resolution images generated to generate a panoramic super-resolution image of the wide area scene to be surveyed. Pan-and-tilt mechanism462then moves microscanner464back to its initial position for generating an updated plurality of microscanned images and super-resolution images of the wide area scene. It is noted that in an alternative (not shown) to system460, the microscanner and the pan-and-tilt mechanism can be replaced by a pan-and-tilt mechanism (not shown) without a dedicated microscanner. In such a setup, a regular image detector can be used having a smaller FOV in which the pan-and-tilt mechanism uses its scanning mechanism to perform step-and-stare scanning at the same resolution and FOV of system460. For example, if system460performs a 3×3 microscanning pattern wherein each detector pixel receives 9 different sub-pixel images, the alternative system described above may be designed such that the image detector has a FOV three times smaller (in each dimension) than the FOV of system460and the pan-and-tilt mechanism would then perform a 3×3 scanning pattern of the whole FOV for each pan-and-tilt position. Such a scanning pattern would thus be equivalent to the scanning pattern of a microscanner thereby enabling an image to be formed over the same FOV without a microscanner. In this embodiment, a regular and basic image detector can be used (thus obviating the need for an expensive high resolution image detector) however an expensive pan-and-tilt mechanism will be required in order to properly execute the scanning pattern of the image detector such that it is equivalent to the scanning pattern of a microscanner. It is noted that the disclosed technique as compared to this embodiment provides for a significantly higher capture rate of images because with step-and-stare scanning only several captured images are possible per second. The disclosed technique also enables the capabilities of enhanced VMD described above inFIGS.5B and6as compared to VMD on a super-resolution image formed using the above mentioned embodiment. By using a microscanner to capture images of the wide area scene, the use of a plurality of expensive image detectors (IR or visible) can be obviated since the microscanner can produce high resolution images via image processing in processor466without requiring special, complex and/or expensive detectors and/or lenses. Furthermore, since a microscanner is used to capture images of the wide area scene, each formed image includes many more pixels than in a standard pan-and-tilt system, thus fewer capture positions are required to cover the entire wide area scene. This can be translated into a simpler pan-and-tilt system as compared to the prior art. For example, known in the art are wide area motion imaging systems which use an image detector having a pixel array of 640×480 pixels. Such a known system may include gimbals to pan and tilt the image detector to cover a wide area scene and may enable the imaging system to pan and tilt between 81 different stare positions to generate a panoramic image having an effective resolution of around 2.7 megapixels. 
In order to enable 81 different stare positions with a reasonable refresh rate of a few hertz, such an imaging system requires high-end and complex gimbals to accurately and rapidly move the image detector over the various stare positions. In this respect, the refresh rate of the imaging system is the rate at which the imaging system can pan and tilt between its different stare positions and return to an initial stare position. Thus in the example given above, the refresh rate is determined by the time it takes the imaging system to cycle through all 81 stare positions before returning to its initial stare position. In contrast, using the disclosed technique with a similar image detector having a pixel array of 640×480 pixels, wherein the image detector captures microscanned images in a 3×3 pattern (thus 9 microscan images per stare position), each stare position can produce a super-resolution image having an effective resolution of around 2.7 megapixels. Thus with around 9 different stare positions, the imaging system of the disclosed technique can cover the same wide area scene as the prior art, either at a substantially higher refresh rate (since fewer stare positions have to be cycled through for the imaging system to return to its initial stare position), or with simpler, less expensive and smaller gimbals, or both. In addition, by using a microscanner which captures microscanned images of the wide area, thus shortening the integration time (as well as the f-number) as compared to a regular image detector, more flexibility can be afforded regarding the stability requirements of such a system. Based on the above explanations, an off-the-shelf pan-and-tilt mechanism can be used to change the capture position of the microscanner, thus obviating the need for an expensive pan-and-tilt mechanism. All that is required of the microscanner in this embodiment of the disclosed technique is that it be of sufficiently high quality to accurately construct a high quality image of the wide area scene. Thus according to the disclosed technique, situational awareness of a wide area scene can be implemented using microscanned images, resulting in a cost effective detection system with increased situational awareness capabilities. It is noted that system460can be used with the method described above inFIG.6. However in such an embodiment, the analysis of consecutive super-resolution images for a respective first movement indication (procedures254and256inFIG.6) must be executed for consecutive super-resolution images which are constructed when the microscanner is in the same position and field-of-view vis-à-vis the wide area scene. Thus, if microscanner464is moved by pan-and-tilt mechanism462between five different positions, only consecutive super-resolution images in the same position are analyzed to determine a respective first movement indication. Thus after a first super-resolution image is constructed, four more super-resolution images will be constructed before a second super-resolution image is constructed in the same position as the initial super-resolution image. Such an operation increases the time required to determine a first movement indication according to the scanning rate of pan-and-tilt mechanism462. 
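The stare-position arithmetic underlying the comparison above can be restated in a few lines. The following Python sketch is a back-of-the-envelope illustration only: the detector size, microscanning pattern and stare-position counts are the example values quoted in the text, while the dwell time per stare position is a hypothetical figure introduced solely to show the ratio between the two revisit periods.

    # Back-of-the-envelope restatement of the stare-position arithmetic above.
    # Detector size, microscan pattern and position counts are the example values
    # from the text; the dwell time per position is a hypothetical figure.
    native_px = 640 * 480                        # native detector resolution
    microscans_per_position = 3 * 3              # 3x3 microscanning pattern
    sr_px = native_px * microscans_per_position  # pixels in one super-resolution image
    print(f"super-resolution image per stare position: ~{sr_px / 1e6:.2f} megapixels")

    prior_positions, disclosed_positions = 81, 9
    dwell_s = 0.1                                # hypothetical dwell time per position
    print(f"prior art revisit period:       {prior_positions * dwell_s:.1f} s")
    print(f"disclosed technique revisit:    {disclosed_positions * dwell_s:.1f} s")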
Reference is now made toFIGS.10A and10B, which are block diagram illustrations of a method for increased situational awareness capabilities using super-resolution images, generally referenced480and500respectively, constructed and operative in accordance with another embodiment of the disclosed technique. Both of block diagrams480and500graphically show the method described above inFIG.9, implemented by system460(FIG.9). With reference toFIG.10A, a block482shows that a microscanner is positioned in a first position to capture images of a wide area scene. This position is denoted as position X. In a block484, N microscanned images are acquired of the wide area scene in position X, where N is at least 2. In a block486, the microscanner is moved to position X+1 where again N microscanned images are acquired of the wide area scene. This process is repeated M−1 times, as shown by an arrow488, until the entire wide area scene has been imaged. In general, M is also at least 2. Thus as shown, N microscanned images are captured at each position M such that N×M microscanned images are captured in total of the wide area scene. As shown in greater detail below inFIG.10B, at each position M, the N microscan images which are captured are generated into a respective super-resolution image, thus forming M super-resolution images of the wide area scene. The M super-resolution images can then be stitched together to form a panoramic super-resolution image of the entire wide area scene to be surveyed. By using super-resolution images to implement situational awareness, fewer images are required to be stitched together to cover an entire wide area scene as compared with prior art methods. The requirement of fewer images can be translated into a simpler scanning mechanism as compared with the prior art thereby increasing the situational awareness capabilities of a detection system. For example, the microscanner in a given position might perform a 3×3 microscanning pattern and the pan-and-tilt mechanism might change the FOV of the entire system over a 3×3 area (meaning three pan positions and three tilt positions). Such a system would be equivalent to a prior art situational awareness system without a microscanner performing 81 scans of a scene of observation. A pan-and-tilt mechanism capable of scanning 81 different positions in a short amount of time to continuously update a panoramic image of a scene of observation is complex and expensive. The disclosed technique only requires the pan-and-tilt mechanism to scan between 9 different positions, thus resulting in a simpler scanning mechanism as compared to the prior art. With reference toFIG.10B, a plurality of N acquired microscan images at M positions is shown. For example, block5021shows N microscan images at a first position, block5022shows N microscan images at a second position and block502Mshows N microscan images at an Mthposition. A plurality of ellipses504is shown delineating that the plurality of microscanned images can be acquired at a plurality of positions. The N microscan images at each position are respectively combined into a super-resolution image at the given position at which the plurality of microscanned images was captured. This is graphically shown by plurality of arrows5061,5062and506M. Block5021of N microscan images is formed into a first super-resolution image5081, block5022of N microscan images is formed into a second super-resolution image5082and block502Mof N microscan images is formed into an Mthsuper-resolution image508M. 
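The overall flow ofFIGS.10A and10B(N microscanned images at each of M positions, M super-resolution images, one stitched panorama) can be summarized in a short sketch. The code below is schematic only; the callables move_to, capture_microscan, build_super_resolution and stitch_panorama are hypothetical placeholders supplied by the caller and stand in for the hardware control and image processing steps described in the text.

    from typing import Callable, List, Sequence

    def survey_wide_area(move_to: Callable[[int], None],
                         capture_microscan: Callable[[], object],
                         build_super_resolution: Callable[[List[object]], object],
                         stitch_panorama: Callable[[List[object]], object],
                         positions: Sequence[int],
                         n_microscans: int) -> object:
        """Schematic rendering of blocks 482-510: capture N microscans at each of
        the M positions, form M super-resolution images and stitch them into a
        single panoramic super-resolution image.  All callables are supplied by
        the caller and stand in for hardware control and image processing."""
        sr_images = []
        for position in positions:                        # M stare positions
            move_to(position)                             # step the line of sight
            microscans = [capture_microscan()             # N microscans, N >= 2
                          for _ in range(n_microscans)]
            sr_images.append(build_super_resolution(microscans))
        return stitch_panorama(sr_images)                 # panoramic image (510)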
As shown, the M super-resolution images can be combined into a panoramic super-resolution image510of the wide area scene to be surveyed. The system and methods described inFIGS.9-10Brelate to step-and-stare surveillance systems and methods. It is noted that step-and-stare surveillance systems and methods cycle through a finite number of stare positions or FOVs, even though in theory such systems could have endless different stare positions. The various stare positions are usually pre-defined and may slightly overlap, and the imaging system in such a setup "steps" (i.e., moves) cyclically, or periodically, from stare position (i.e., FOV) to stare position in a pre-defined order. The refresh rate of such an imaging system is thus the periodic rate at which the imaging system cyclically steps through each of its stare positions and then returns to an initial stare position before cycling through all the stare positions again. In the case of the disclosed technique, each step of the pan-and-tilt mechanism moves the LOS of the entire imaging system to a different portion of the wide area scene to be imaged, wherein in each position, microscanned images are captured in order to form a super-resolution image of that portion. The stare time at each step is the amount of time required by the microscanner to acquire sufficient microscanned images to form a super-resolution image. The entire imaging system is moved over M positions to cover the entire wide area to be surveyed and is then moved over the same M positions to repeatedly update the panoramic super-resolution image of the wide area to be surveyed, thus implementing increased situational awareness capabilities through the use of microscanned images. The disclosed technique described above inFIGS.9-10Brelates to all kinds of wide area scenes to be surveyed, and the resulting image can be a panoramic image as well as an aerial photogrammetry image. Reference is now made toFIGS.11A-11C, which are graphic illustrations showing issues in generating super-resolution video from a plurality of microscanned images, constructed and operative in accordance with a further embodiment of the disclosed technique. With reference toFIG.11A, generally referenced530, shown is a scene of observation532which includes a plurality of moving objects.FIG.11Ashows a road536, a first sidewalk540and a second sidewalk546. A vehicle534is shown moving on road536along the path shown by an arrow538. A tree542, which is blowing in the wind, is shown on first sidewalk540. The swaying motion of tree542is shown via a plurality of arrows544. An animal548is shown on second sidewalk546. Animal548is shown moving along the path shown by an arrow550. Using an image detector (not shown) capable of microscanning, three sets of captured microscanned images are shown via a plurality of arrows554, a first set of microscanned images552A, a second set of microscanned images552B and a third set of microscanned images552C. Each set of captured microscanned images shows three microscanned images, each of which has been superposed and positioned in slightly different vertical positions to avoid overlap of the microscanned images and to show the slight differences in movement of each one of the plurality of moving objects. 
In first set of microscanned images552A, vehicle534is shown advancing in the direction of arrow538as denoted by an arrow556, tree542is shown swaying in a rightward direction of one of plurality of arrows544as denoted by an arrow558and animal548is shown advancing in the direction of arrow550as denoted by an arrow560. In second set of microscanned images552B, vehicle534is shown advancing further in the direction of arrow538as denoted by an arrow562, tree542is shown now swaying in a leftward direction of the other one of plurality of arrows544as denoted by an arrow564and animal548is shown further advancing in the direction of arrow550as denoted by an arrow566. This is shown by arrow566in second set of microscanned images552B where each microscan image shows animal548further advancing forward. In third set of microscanned images552C, vehicle534is again shown advancing further in the direction of arrow538and has almost exited the captured image frames as denoted by an arrow568. Tree542is shown now swaying again in a rightward direction as denoted by an arrow570and animal548is shown further advancing in the direction of arrow550as denoted by an arrow572. As can be seen in third set of microscanned images552C, animal548has now also almost exited the captured image frames. As described above, the data sets forming the three sets of microscanned images shown can be used to generate super-resolution images and can also be used to enhance motion detection of vehicle534, tree542and animal548as well as object tracking once those moving objects have been detected. According to the disclosed technique, since each set of microscanned images comprises a plurality of captured images showing slightly different movement of moving objects, it should be possible to play the microscanned images as video and thereby use an image detector with microscanning capabilities to generate video of moving objects. However simply playing microscanned images as video presents two different tradeoffs, neither of which is ideal. Reference is now made toFIG.11B, generally referenced600, which shows a first tradeoff in playing microscanned images as video. In this tradeoff, microscanned images in each of the microscanned images are combined to generate super-resolution images which are then played as video.FIG.11Bshows three image frames which are to be played as video, a first image frame602A, a second image frame602B and a third image frame602C. First image frame602A represents a super-resolution image formed from first set of microscanned images552A (FIG.11A), second image frame602B represents a super-resolution image formed from second set of microscanned images552B (FIG.11A) and third image frame602C represents a super-resolution image formed from third set of microscanned images552C (FIG.11A). First, second and third image frames602A-602C are played in succession as shown by a plurality of arrows604. The first tradeoff or challenge with playing super-resolution images as video is that the processing time for generating a super-resolution image from microscanned images may be too high to generate a fast enough video rate for a human viewer to properly perceive a continuously moving object. In general, a video rate of about 30 frames per second (herein abbreviated FPS) is required in order for human viewers to perceive consecutively shown images of a moving object as video. In the example shown inFIG.11B, it takes about a third of a second to generate a super-resolution image from each set of microscanned images. 
Therefore, first image frame602A is shown at a time of 0.33 seconds, second image frame602B is shown at a time of 0.67 seconds and third image frame602C is shown at a time of 1.00 seconds. This results in a video rate of 3 FPS which will not be perceived by a human viewer as continuous video, as shown by an arrow606. Playing the three super-resolution images as video will show high quality images of the movement of objects moving in the scene of observation but due to the low video rate, the moving objects in the captured microscan images as video will appear to jump across the image frames. Another challenge or issue in playing super-resolution images as video is that the process of generating super-resolution images in which there are moving objects within a single data set of microscanned images may cause blurring of the moving objects in the generated super-resolution image. This issue can also be referred to as image artifacts. Even though combining microscanned images together into a single image may increase the sampling frequency, such an image will only be artifact free if none of the objects in the combined image move over the time period the microscanned images are captured which form the combined image. In object detection and tracking systems, such an assumption of the movement of objects is unreasonable as objects may move from microscan image to microscan image. As shown in first image frame602A, the vehicle is shown as a blurred vehicle608A, the tree is shown as a blurred tree610A and the animal is shown as a blurred animal612A. Likewise for second image frame602B, the vehicle is shown as a blurred vehicle608B, the tree is shown as a blurred tree610B and the animal is shown as a blurred animal612B and for third image frame602C, the vehicle is shown as a blurred vehicle608C, the tree is shown as a blurred tree610C and the animal is shown as a blurred animal612C. Thus playing super-resolution images as video which is derived from sets of microscanned images may lead to video which includes image artifacts and which is played at too slow a video rate for video to be perceived and viewed by human viewers. Reference is now made toFIG.11C, generally referenced630, which shows a second tradeoff in playing microscanned images as video. In this tradeoff, the microscanned images in each set of microscanned images are played continuously as video.FIG.11Cshows three image frames which are to be played as video, a first image frame632A, a second image frame632B and a third image frame632C. First, second and third image frame632A-632C represent the three microscanned images shown in first set of microscanned images552A (FIG.11A) and are played in succession as shown by a plurality of arrows634. In this tradeoff or challenge, image frames may be shown at a fast enough video rate for a human viewer to properly perceive a continuously moving object, for example at 30 FPS as shown by an arrow636. As shown first image frame632A is shown at a time of 0.03 seconds, second image frame632B is shown at a time of 0.06 seconds and third image frame632C is shown at a time of 0.09 seconds. However, since microscanned images are low resolution images as compared to high resolution images, the resolution of the shown video may be compromised and of low quality when increased in size to fill the viewing area of a screen. 
In the example shown inFIG.11C, each microscanned image is played for a thirtieth of a second (i.e., resulting in a video rate of 30 FPS), however each microscanned image is of lower resolution as compared to high resolution images, thus resulting in lower image quality. As shown, in first image frame632A, the vehicle is shown as a low quality image of vehicle638A, the tree is shown as a low quality image of tree640A and the animal is shown as a low quality image of animal642A, and compared to the original images shown inFIG.11A, the low quality images shown inFIG.11Care missing some of the image data. Likewise for second image frame632B, the vehicle is shown as a low quality image of vehicle638B, the tree is shown as a low quality image of tree640B and the animal is shown as a low quality image of animal642B and for third image frame632C, the vehicle is shown as a low quality image of vehicle638C, the tree is shown as a low quality image of tree640C and the animal is shown as a low quality image of animal642C. On the one hand, playing microscanned images individually as video enables video to be played at a proper video rate for human viewers without any image artifacts; however, the played video will be of lower quality and may not provide sufficient detail for object detection and motion tracking systems to be of substantial use, for example for security systems. According to the disclosed technique, a system and method are disclosed wherein microscan images can be rapidly updated and presented to a viewer without image artifacts and with sufficient resolution quality, thus providing the viewer with the semblance of contiguous movement and continuous imagery as expected in video. The disclosed technique relies on the viewer's innate ability to interpolate rapid imagery both temporally and spatially. According to the disclosed technique, consecutive microscan images are displayed to a user, thereby improving the perceived quality of moving objects in the viewed video, however without image artifacts and with sufficient resolution quality. In addition, the disclosed technique enables super-resolution images to be displayed without any blurring of moving objects in the constructed super-resolution image. Reference is now made toFIG.12, which is a block diagram illustration of a method for presenting enhanced motion detection and the tracking of objects using microscanned images, generally referenced660, operative in accordance with another embodiment of the disclosed technique. The general method shown inFIG.12shows two separate methods for overcoming the issues of generating and displaying super-resolution images and video as presented above inFIGS.11A-11C. In one method, hybrid super-resolution video is constructed displaying a super-resolution background with a portion of the video displaying high frame rate targets (e.g., fast moving objects) in the imaged scene of observation at low resolution. In this hybrid presentation, high frame rate video of identified moving targets is displayed at low resolution in a portion of the displayed images whereas the background (which is assumed to be static) is displayed as a super-resolution image. This is described inFIG.12as a first method670A and in further detail below inFIGS.13A and13B. In another method, a hybrid super-resolution image is constructed with reduced image artifacts taking into account the instantaneous velocity estimates of any moving objects. 
In this hybrid presentation, low frame rate video is displayed at high resolution over the entire displayed image. This is described inFIG.12as well as a second method670B and in further detail below inFIG.14. Block diagram660ofFIG.12is somewhat similar to block diagram220(FIG.5B) and shows the general steps of how microscanned images, used to form a super-resolution image can be displayed without image artifacts and also can be used to show video of moving objects within a super-resolution image. As shown, a plurality of microscanned images6621-662Nis acquired. Plurality of microscanned images6621-662Ncan represent M sets of microscanned images where each set is sufficient for constructing a super-resolution image. For example, each set may include at minimum two microscanned images or as many as N microscanned images for forming a super-resolution image. Regardless of what N is equal to, each set of microscanned images should cover the entire scene of observation to be imaged. As described above, microscanned images6621-662Ncan be acquired in a number of different ways and do not necessarily need to come from a microscanner. According to the disclosed technique, microscanned images6621-662Nare used for two simultaneous procedures shown by a first plurality of arrows664A and a second plurality of arrows664B and can then be used based on those two simultaneous procedures for two separate methods for displaying a super-resolution image and video as per the disclosed technique. It is noted that the procedures of the two methods can occur in parallel, meaning that the amount of time required to execute each method may be different however regardless the methods can be executed simultaneously. The first method, shown by an arrow670A and marked as method 1., enables the display of improved super-resolution video at a high video frame rate (albeit at low image resolution) wherein the background portion of the video is displayed at super-resolution while also displaying high frame rate targets in the imaged scene of observation at low resolution. The second method, shown by an arrow670B and marked as method 2., enables the display of improved super-resolution video at a low video frame rate (albeit at high image resolution). In the procedure following first plurality of arrows664A, microscanned images6621-662Nare individually analyzed for spatial and temporal information, as shown by a block666. In this procedure, microscanned images6621-662Nare not formed into a super-resolution image but are individually analyzed as separate images for spatial and temporal information using object motion detection and tracking algorithms. This includes determining potential moving objects between consecutive microscanned images and also determining movement paths of potential moving objects to determine that a potential moving object is indeed a moving target. As described above, performing object motion detection and tracking on microscanned images is a faster process than standard object motion detection and tracking on regular images since the refresh rate for a single microscanned image is higher than the refresh rate of a single image frame. Thus performing object motion detection and tracking on microscanned images according to the disclosed technique enables the detection of rapid and short-lived targets as well as fast moving targets. 
As described above (seeFIGS.5B and6), the spatial and temporal information derived from at least two microscan images can include information about the instantaneous angular velocity and (with at least three microscan images) the instantaneous angular acceleration of any identified moving target in block666as well as a movement indication of each potential moving object in the microscanned images. Simultaneously or in parallel, in the set of procedures following second plurality of arrows664B, microscanned images6621-662Nare combined into a super-resolution image668. As mentioned above, combining microscanned images into a super-resolution image is known in the art. Super-resolution image668schematically represents the super-resolution image(s) formed from the microscanned images schematically shown as plurality of microscanned images6621-662N. Thus super-resolution image668represents at least one super-resolution image. In the block method shown, super-resolution image668is not a displayed image yet and merely represents the formed image from the microscanned images which if presented as is may include image artifacts and blurred moving objects. The result of block668is a super-resolution image whereas the result of block666is at least one moving target indication derived from a movement analysis of the microscanned images of blocks6621-662Nwhich might include an estimate of the instantaneous angular velocity and instantaneous angular acceleration of identified moving targets even after the analysis of only two or three individual microscanned images. As shown inFIG.12, the spatial and temporal information of block666and the super-resolution image of block668can be used for a first method, shown by arrow670A, for displaying low resolution high frame rate super-resolution video or for a second method, shown by arrow670B, for displaying high resolution low frame rate super-resolution video. The methods shown by arrows670A and670B do not need to be executed simultaneously and can be executed separately on different processor and/or displayed on different display units or generated by different display generators (not shown). In the first method, as shown by arrow670A, the moving target indications from block666are used to identify regions in super-resolution image668where a moving target may be present. Those regions are then bounded. When the super-resolution image is to be displayed to a user, the unbounded regions, which represent areas in the super-resolution image where there are no moving targets based on block666, are displayed to the user based on the pixel information in super-resolution image668for the duration of time it takes to generate the next super-resolution image (for example, 0.33 seconds), thus showing those unbounded regions at a higher resolution compared to the native resolution of the image detector which initially captured microscanned images6621-662N. The unbounded regions are displayed at a standard video frame rate however they are only updated once the next super-resolution image has been generated. In the bounded regions of the super-resolution image, the low resolution microscanned images of the moving objects and targets are displayed at a standard video frame rate (for example at 0.03 seconds per microscan image), thus displaying the microscanned images as video within the bounded regions of the super-resolution image. This is shown in a block672. 
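The two parallel branches ofFIG.12described above, per-microscan motion analysis (block666) and super-resolution formation (block668), can be pictured with the following minimal sketch. It assumes hypothetical, caller-supplied functions analyze_motion and build_super_resolution for the object tracking and image combination algorithms referred to in the text; the thread pool merely reflects the statement that the two procedures may be executed simultaneously.

    from concurrent.futures import ThreadPoolExecutor

    def process_microscan_set(microscans, analyze_motion, build_super_resolution):
        """Run the motion-analysis branch (block 666) and the super-resolution
        branch (block 668) on the same set of microscanned images.  Both
        caller-supplied callables are hypothetical stand-ins for the object
        tracking and image combination algorithms referred to in the text."""
        with ThreadPoolExecutor(max_workers=2) as pool:
            motion = pool.submit(analyze_motion, microscans)          # block 666
            sr = pool.submit(build_super_resolution, microscans)      # block 668
            movement_indications = motion.result()   # e.g. regions plus velocity
            sr_image = sr.result()                   # formed but not yet displayed
        return movement_indications, sr_image

The two outputs returned by this sketch are what the hybrid display of block672described above, and the corrected display of block674described below, draw upon.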
The result of this type of displaying to the user is a hybrid improved super-resolution video, as shown by a block676having low image resolution but a high video frame rate for high frame rate targets. The super-resolution video is a hybrid since the unbounded regions (in general background portions of the video) are repeatedly displayed as still high resolution images based on the super-resolution image whereas the bounded regions are displayed as video based on the microscanned images. According to the disclosed technique, by displaying areas of the scene of observation in which there are no moving objects as derived from the super-resolution image, a high resolution video can be played without image artifacts and without the perception that the video rate is not at a sufficiently high video rate for smooth human viewing. By displaying areas of the scene of observation in which there are moving objects as derived from microscan images, moving objects can be shown at a high video rate and also at a relatively high resolution (compared to the prior art) since the microscan images do not need to be increased to fill the size of the viewing screen and can be shown at their native size when captured on a portion of the image detector array (not shown). Even though the resulting video of the moving object may not be completely smooth and may show the moving object with some jumps between super-resolution images, within the time it takes to generate a subsequent super-resolution image, the moving objects can be shown as smooth video. According to the disclosed technique, the number of microscan images captured which are used to generate a super-resolution image can either be stretched or squeezed to fit within the amount of time required to generate a next super-resolution image from the microscan images. Using the examples shown above inFIGS.11B and11C, if a super-resolution image can be shown as video at a video rate of 3 FPS and microscanned images can be shown as video at a video rate of 30 FPS, that means that for every super-resolution image shown, 10 microscan images of moving objects can be shown per super-resolution image. If the set of microscan images forming each super-resolution is indeed 10 microscan images, then the same unbounded areas of the super-resolution image are each repeatedly shown 10 times at a video rate of 30 FPS until the next super-resolution is generated whereas the bounded areas of the super-resolution image show each microscan image (showing the moving object) once, also at a video rate of 30 FPS. In the case of more than 10 microscanned images per super-resolution image, the microscanned images can be played at a video rate higher than 30 FPS (thereby squeezing them all into the time the super-resolution image is shown) such that within the time the super-resolution image is shown (for example 0.33 seconds) all the microscanned images making up the respective super-resolution image are shown. In the case of fewer than 10 microscanned images per super-resolution image, the microscanned images can be played at a video rate lower than 30 FPS, for example by showing the same microscan image twice or multiple times (and thus stretching them into the time the super-resolution image is shown), such that within the time the super-resolution image is shown (for example 0.33 seconds) all the microscanned images making up the respective super-resolution image are shown. 
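The stretch-or-squeeze rule described above reduces to a simple timing calculation: however many microscanned images make up one super-resolution image, their playback is spread over the time it takes to generate the next super-resolution image. A minimal sketch, assuming the example figures used in the text (0.33 seconds per super-resolution image and a nominal 30 FPS display), is given below.

    def microscan_playback_schedule(n_microscans, sr_period_s=0.33, nominal_fps=30.0):
        """Return (display time per microscan, repeats per microscan) so that all
        the microscans belonging to one super-resolution image fill exactly the
        time until the next super-resolution image is ready.  Example values only."""
        nominal_frame_s = 1.0 / nominal_fps
        per_microscan_s = sr_period_s / n_microscans
        if per_microscan_s >= nominal_frame_s:
            # Fewer microscans than nominal frames: stretch by repeating each one.
            repeats = round(per_microscan_s / nominal_frame_s)
            return nominal_frame_s, max(repeats, 1)
        # More microscans than nominal frames: squeeze by playing faster than 30 FPS.
        return per_microscan_s, 1

    # With 10 microscans per super-resolution image each is shown once for ~1/30 s;
    # with 5 microscans each is shown twice; with 20 each is shown once at ~60 FPS.
    print(microscan_playback_schedule(10))
    print(microscan_playback_schedule(5))
    print(microscan_playback_schedule(20))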
Blocks672and676as shown relate in general to a case where microscanned images6621-662Nare acquired from a single LOS position, meaning where microscanned images are not acquired in a pan and tilt system. In the case of microscanned images6621-662Nbeing acquired in a pan and tilt system (not shown), blocks666and668are performed sequentially for each set of microscanned images captured as each LOS position of the pan and tilt system. As described above pan and tilt systems can be used to generate a wide-area image or a panoramic image of a scene of observation wherein microscanned images are captured from either non-overlapping or slightly overlapping LOS positions. As an example, a pan and tilt system may have 3 pan positions and 3 tilt positions, thus resulting in a 3×3 square matrix of 9 different LOS positions from which microscanned images are acquired according to the disclosed technique, with LOS positions changing such that LOS position X is refreshed every second (i.e., each LOS position captured microscanned images for 1/9thof a second). In addition, there may not be a direct correlation between the number of LOS positions of the pan and tilt system with the number of microscanned images acquired at each position. For example, a pan and tilt system having 9 different positions might acquire 16 microscanned images at each LOS position. When blocks672and676are performed for microscanned images at a first LOS position of the aforementioned pan and tilt system, the improved super-resolution video at the first LOS position might not get updated for another second until the pan and tilt system returns to the first LOS position. However there might not be sufficient data acquired from the microscanned images to display a moving object continuously from the first LOS position until the pan and tilt system returns to the first LOS position and provides refreshed information from newly acquired microscanned images. According to the disclosed technique, as shown in blocks680and682, two different methods can be used in the case of a pan and tilt system to give a user the perception of a moving object still moving within the super-resolution video shown in a given LOS position up until updated super-resolution video can be generated in the same LOS position. Blocks680and682are shown using dotted lines to indicate that these blocks are optional and might only be necessary when the method ofFIG.12is used with a pan and tilt system (also known as a step-and-stare system). In block680, the super-resolution video shown in the bounded moving object area is stretched out over the time a pan and tilt system cycles through its different LOS positions before returning to its initial position. Such a technique might result in the moving object appearing to jump between refreshed images and video at a given LOS position. Using the example mentioned above, assuming the pan-and-tilt system cycles through 9 different positions, if the microscanned images in blocks6621-662Nat each LOS position are acquired at a rate of 300 Hz (with 16 microscans per LOS position) and the pan and tilt system changes LOS position at a rate of 9 Hz (thus each second the system returns to its original LOS position since the system cycles through 9 positions each second), the 16 microscans acquired at a given LOS position can initially show video of a moving object for 1/9thof a second. 
However the video shown at the given LOS position will appear as a still image for 8/9thof a second until the pan and tilt system returns to the given LOS position. In block680, within the bounded moving object area, the acquired microscanned images at the given LOS position are stretched out timewise to cover the entire second (i.e., 9 Hz) until the pan and tilt system returns to the given LOS position and acquires updated microscanned images. As mentioned above, even with the stretching out of the playing of the acquired microscanned images as video within the bounded moving object area, there might nonetheless be jumps in the position of the moving object when the updated microscanned images are used to display the moving object in an updated bounded moving object area. In block682, based on the spatial and temporal information of block666, the position of the object shown in the bounded moving object area is extrapolated to show smooth and substantially continuous video from the time a given LOS position acquires microscanned images until the pan and tilt system returns to the given LOS position and provides updated acquired microscanned images. A filter, such as a Kalman filter, can be used regarding the extrapolation to ensure that the displayed extrapolated position of the moving object in the bounded area remains close to the actual updated position of the moving object each time the given LOS position is updated and refreshed with newly acquired microscanned images. In this block, the determined estimates of the angular velocity and angular acceleration of the moving object are used to predict and simulate the dynamic movement of the object even though there is no direct information captured regarding the position of the moving object at a given LOS position when the pan and tilt system is at the other LOS positions. Using the example above, the estimated velocity and acceleration determined for a moving object which is displayed as video in the bounded moving object area for 1/9thof a second is used to extrapolate the position of the moving object over the next 8/9thof a second and show continuous video of the moving object moving until updated microscan images can be displayed and until an updated estimate of the velocity and acceleration can be determined based on the updated acquired microscanned images. As mentioned above, in this approach, care needs to be taken when predicting the position of the moving object during the time the pan and tilt system is at other LOS positions, since over-extrapolation may lead to the moving object as displayed being far off its actual position in the next updated acquired microscanned images. This might also result in the moving object appearing to jump each time the pan and tilt system cycles through its LOS positions and returns to its initial position. As stated above, a Kalman filter or a similar type of numerical estimation algorithm can be used to determine how much of the extrapolation should be used when displaying the video of the moving object in the bounded moving object area so that the updated position of the moving object is not too far off the updated position of the moving object determined from the updated microscanned images when the pan and tilt system returns to the first LOS position. In the second method, as shown by arrow670B, the moving target indications from block666are used to identify regions in super-resolution image668where a moving target may be present. Those regions are then bounded. 
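Before turning to the second method, the extrapolation of block682described above can be illustrated with a simple constant-velocity, constant-acceleration prediction that is pulled back toward the measured position each time the LOS position is revisited. The sketch below is a simplified stand-in for the Kalman-type filtering named in the text; the fixed blending gain plays the role of a Kalman gain and all numerical values are hypothetical.

    import numpy as np

    def extrapolate_track(pos, vel, acc, dt):
        """Predict the object position dt seconds ahead using the instantaneous
        angular velocity and acceleration estimated from the microscans (block 666)."""
        return pos + vel * dt + 0.5 * acc * dt ** 2

    def revisit_update(predicted_pos, measured_pos, gain=0.7):
        """When the pan and tilt system returns to this LOS position, pull the
        displayed position back toward the newly measured one.  A fixed gain is
        used here as a crude stand-in for a Kalman gain."""
        return predicted_pos + gain * (measured_pos - predicted_pos)

    # Example (hypothetical numbers): the object is tracked for 1/9 s, then its
    # position is extrapolated over the remaining 8/9 s until the LOS position is
    # revisited and refreshed microscanned images become available.
    pos = np.array([100.0, 50.0])        # pixels
    vel = np.array([45.0, 0.0])          # pixels per second, from consecutive microscans
    acc = np.array([0.0, 0.0])
    shown = [extrapolate_track(pos, vel, acc, t) for t in np.linspace(0.0, 8 / 9, 8)]
    measured = np.array([141.0, 50.0])   # position found in the refreshed microscans
    corrected = revisit_update(shown[-1], measured)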
Based on the instantaneous velocity estimates of potential moving targets from the microscanned images, a decision can be made as to where in the super-resolution image each moving target should be placed. In general, the moving target is placed in the super-resolution image in the position it was captured in from the microscanned images, with one of the microscanned images acting as an anchoring point of where in the super-resolution image the moving target should be placed. Data and information on the moving target from the other microscanned images can then be used to generate the super-resolution image of the moving target at the position of the anchoring point. Pixel information about a moving target from each set of microscanned images can be combined together to form a corrected image of the moving target. In the bounded regions of the super-resolution image, the moving target is displayed in a single position for the duration of time it takes to construct the super-resolution image, where the moving target image is derived from a corrected combination of the microscanned images and the single position is based upon the instantaneous velocity estimates of the moving target in the microscanned images. This enables the moving target to be displayed in the super-resolution image without any image blur or image artifacts. This is shown in a block674. In the unbounded regions, the super-resolution image can be displayed based on the microscanned images, where pixel information from the microscan images can be used to fill in regions where a moving target was identified in a given microscan image but where in the super-resolution image itself, the moving target is not displayed. This is shown more clearly below inFIG.14. This displaying results in an improved super-resolution image having no image artifacts and no image blurring since the spatial and temporal information of each microscan image is used to position a moving target in a single position in the super-resolution image, thus correcting for image artifacts and any blurring of a moving target in the super-resolution image. Thus as shown in a block678, corrected and improved super-resolution images as described in block674can be shown sequentially as video resulting in high image resolution yet low video frame rate super-resolution video. An example of blocks674and678is the surveillance approach known as WAMI (wide-area motion imagery) which is considered a type of video having a low video frame rate (usually having a refresh rate of around 1 Hz or higher). It is noted however that WAMI is merely an example of blocks674and678and that other approaches can be used to implement blocks674and678. Reference is now made toFIG.13A, which is a block diagram illustration of a method for generating low image resolution high video frame rate super-resolution video from a plurality of microscanned images based onFIG.12, generally referenced700, constructed and operative in accordance with a further embodiment of the disclosed technique. The method shown inFIG.13Aprovides a solution to the tradeoffs and challenges of playing video using microscanned images described above inFIGS.11B and11Cand enables super-resolution quality video to be played at a high (and proper for smooth human viewing) video frame rate such as 30 FPS. As shown inFIG.13Aare a plurality of microscanned images shown as blocks702A,702B and702N. The captured microscanned images are formed into a super-resolution image704. 
Using the disclosed technique as described above inFIG.5Bor using other image processing techniques for identifying moving objects in image frames, moving objects within the microscanned images and within the super-resolution image are identified. This is shown in a block706. Identified moving objects in the super-resolution image are bounded by a shape, such as a box. Within the bounded shape, the microscanned images forming the super-resolution image are played consecutively as video; however, the area outside the bounded shape, which does not include moving objects, is shown as the super-resolution image. This is shown schematically in block708. The super-resolution image is shown at the video rate at which super-resolution images can be generated; however, within the bounded area of each super-resolution image, the microscanned images are played consecutively at a much higher video rate. This is shown as a first updated microscan image710A, a second updated microscan image710B and so on until updated microscan image710N. Thus within block708, identified moving objects in microscanned images are shown at a high video rate albeit at a low resolution, however in the remainder of block708, where no moving objects were identified, the displayed image is displayed at a low video rate albeit at a high resolution (i.e., as a super-resolution image). The embodiment shown above inFIG.13Amay have a higher overall latency due to the construction and displaying of a super-resolution image; however, the overall high resolution of a super-resolution image will be maintained as the video sequences of the microscan images will be displayed at their native resolution within a bounded area in the super-resolution image. It is noted that in the embodiment ofFIG.13A, moving objects within the bounded area may appear to jump between their position in the last microscanned image displayed in a video sequence for a given super-resolution image and their position in the first microscanned image displayed in the next video sequence for a subsequent super-resolution image. Reference is now made toFIG.13B, which is a schematic illustration showing some example image frames used in the method ofFIG.13A, generally referenced730, constructed and operative in accordance with another embodiment of the disclosed technique.FIG.13Bshows a super-resolution image732which is comprised of a plurality of microscan images. In the example shown, four microscan images are used to generate super-resolution image732, a first microscan image734A (shown as MS1), a second microscan image734B (shown as MS2), a third microscan image734C (shown as MS3) and a fourth microscan image734D (shown as MS4). Super-resolution image732is schematically shown as being made up of 16 different pixel regions as an example, with every four staggered pixel regions being filled by a different microscan image. In order to avoid clutter inFIG.13B, not every pixel region is numbered with a reference number. As shown, super-resolution image732is shown at a time of T0. The time shown is the time at which the super-resolution image is generated and not the time at which microscan images are captured (which is at a faster rate). Super-resolution image732is derived as shown in blocks702A-702N and704(FIG.13A). Also shown in super-resolution image732is a bounded area736in which a moving object (not shown) has been identified, which was shown as block706(FIG.13A). 
Once bounded area736has been identified, super-resolution video of the captured images (microscan images and super-resolution images) can be displayed according to the disclosed technique. This is shown via an arrow738which shows three example video frames, a first video frame7401, a second video frame7402and a third video frame7403. The succession of video frames is shown via a plurality of arrows744. As can be seen in this example, each video frame comprises two sections which correspond to the sections of super-resolution image732. One section is the bounded area as designated in super-resolution image732and shown respectively as bounded areas7421,7422and7423, each of which corresponds to a respective one of video frames7401-7403. The other section is the unbounded area (not specifically referenced with a reference number). For each video frame, the unbounded area remains static and the same microscan image is shown in each pixel region for the length of time it takes to generate another super-resolution image. Thus as can be seen and as indicated, the upper left hand corner region of each video frame shows first microscan image734A at T0and the lower right hand corner region of each video frame shows fourth microscan image734D at T0. Since this part of the super-resolution image does not include a moving object, these parts can be shown in each video frame at a video rate of 30 FPS. However in the bounded areas, each video frame shows a subsequent microscan image, thereby presenting the movement of the moving object (not shown) as video at a video rate of 30 FPS. In video frame7401, bounded area7421shows first microscan image734A (MS1), in video frame7402, bounded area7422shows second microscan image734B (MS2) and in video frame7403, bounded area7423shows third microscan image734C (MS3). The microscan images will be continuously shown in the bounded area until the next set of captured microscan images has been processed into an updated super-resolution image which will then replace the unbounded areas and will be shown at a time of T1. As shown in bounded area7421, MS1 at T0covers four times the area of the original microscan image captured in super-resolution image732. MS1 thus needs to be interpolated to cover a larger area in video frame7401. The same is true regarding MS2 and MS3. According to the disclosed technique, microscanned images such as first microscan image734A, second microscan image734B and third microscan image734C can be increased in size to fill the respective areas of bounded areas7421-7423using image interpolation techniques, digital zoom techniques and the like. As mentioned above, the time shown is not the time at which microscanned images are captured since the time required to generate a super-resolution image might not be equivalent to the time it takes to capture the microscanned images (for example in a pan-and-tilt system). Within a given set of microscan images played within a bounded area in a super-resolution image, smooth video of the moving object as captured by microscan images can be displayed. The spatial and temporal updates of the microscan images in the bounded area are interpolated by a user viewing the images as continuous movement (i.e., as video) while the high image resolution of a super-resolution image is maintained in the unbounded regions where there are no moving objects. Depending on how fast the moving object moves, when the super-resolution image is updated, a jump in the position of the moving object may be perceived. 
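The composition of each hybrid video frame ofFIG.13Bamounts to a paste operation: the static super-resolution image supplies the unbounded area, and the current microscanned image, enlarged to the super-resolution pixel grid, fills the bounded area. The sketch below uses nearest-neighbour repetition as a stand-in for whatever interpolation or digital zoom technique is actually employed; the array sizes, scale factor and bounding box are hypothetical example values.

    import numpy as np

    def compose_hybrid_frame(sr_image, microscan, bbox, scale):
        """Build one hybrid video frame: the super-resolution image everywhere
        except inside bbox, where the (upsampled) current microscan is shown.
        bbox = (row0, col0, row1, col1) in super-resolution pixel coordinates."""
        frame = sr_image.copy()
        r0, c0, r1, c1 = bbox
        # Nearest-neighbour upsampling of the low resolution microscan to the
        # super-resolution grid; a real system may use better interpolation.
        upsampled = np.kron(microscan, np.ones((scale, scale), dtype=microscan.dtype))
        frame[r0:r1, c0:c1] = upsampled[r0:r1, c0:c1]
        return frame

    # Example with hypothetical sizes: a 640x480 detector with 3x3 microscanning,
    # so the super-resolution grid is 1920x1440 and each microscan is enlarged by 3.
    sr = np.zeros((1440, 1920), dtype=np.uint8)
    microscan = np.zeros((480, 640), dtype=np.uint8)
    frame = compose_hybrid_frame(sr, microscan, bbox=(300, 400, 600, 700), scale=3)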
It is noted that in the bounded areas, the presentation of the movement of the moving object (not shown) at a video rate of 30 FPS represents a high video frame rate; however, the image resolution of the moving object will be low as compared to the static images shown in the unbounded area. Reference is now made toFIG.14, which is a schematic illustration showing the presentation of high image resolution low video frame rate super-resolution video using the method ofFIG.12, generally referenced760, constructed and operative in accordance with a further embodiment of the disclosed technique.FIG.14represents the method for displaying super-resolution images of the disclosed technique when there is a moving object detected in the microscan images such that the displayed super-resolution image will be shown without any image artifacts or image blurring. Updated displayed super-resolution images can be shown in sequence, thereby generating super-resolution video. The main difference between the video displayed according to this method and the video displayed in the method shown inFIGS.13A and13Bis that in this method, each video frame shows a super-resolution image at a high image resolution with substantially no image artifacts and no image blurring. However, each video frame will be shown at a low video frame rate, for example at 3 FPS or even lower. This is different from the method shown inFIGS.13A and13Bwhere moving objects are shown with a high video frame rate; however, each video frame shows a low image resolution microscanned image. Shown are three microscan images, a first microscan image7621(microscan 1), a second microscan image7622(microscan 2) and a third microscan image7623(microscan 3). Each microscan image captures a moving object764and as can be seen, moving object764changes position within each microscan image as shown by a dotted reference line772. As mentioned above inFIG.11B, simply constructing a super-resolution image from microscan images7621-7623will result in a blurred image since moving object764changes position from microscan image to microscan image. In each microscan image, moving object764is identified and a bounded area is formed around each moving object. In first microscan image7621, a bounded area7661surrounds moving object764, in second microscan image7622, a bounded area7662surrounds moving object764and in third microscan image7623, a bounded area7663surrounds moving object764. A plurality of arrows774shows that the three shown microscan images are combined into a single super-resolution image776to be displayed to a user. A bounded area786is identified in the super-resolution image where the moving object is to be displayed. Moving object764is displayed as a super-resolution moving object778. The image formed in bounded area786is derived from each of the images of moving object764in microscan images7621-7623and can be achieved using image processing algorithms that utilize pixel interpolation. Thus the images of moving object764are corrected and combined into the image of moving object778which is displayed in bounded area786. As shown in second microscan image7622, an instantaneous angular velocity estimate7701(v1-2) of moving object764can be made between the first two microscan images and as shown in third microscan image7623, an instantaneous angular velocity estimate7702(v2-3) of moving object764can be made between the second and third microscan images. 
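The instantaneous velocity estimates referred to above are, in essence, finite differences of the object's position between consecutive microscanned images divided by the time between captures, and with three microscanned images an acceleration estimate follows in the same way. A minimal sketch, using hypothetical object positions and capture intervals, is given below.

    import numpy as np

    def instantaneous_velocity(pos_a, pos_b, dt):
        """Estimate the object's instantaneous (angular) velocity from its position
        in two consecutive microscan images captured dt seconds apart."""
        return (np.asarray(pos_b) - np.asarray(pos_a)) / dt

    # Hypothetical object centroids (pixels) in microscans 1-3, captured 10 ms apart.
    p1, p2, p3 = np.array([20.0, 64.0]), np.array([23.0, 64.0]), np.array([26.5, 64.0])
    v_12 = instantaneous_velocity(p1, p2, dt=0.01)   # corresponds to estimate 7701
    v_23 = instantaneous_velocity(p2, p3, dt=0.01)   # corresponds to estimate 7702
    # With three microscans an acceleration estimate is also available:
    a_13 = (v_23 - v_12) / 0.01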
Both velocity estimates7701and7702can be used to determine where in super-resolution image776moving object778should be displayed. By selecting a particular position for moving object778to be displayed in super-resolution image776, moving object778is actually displayed as a still image derived from the different images of moving object764captured in microscan images7621-7623along with a weighted estimate of the angular velocity of moving object778for the super-resolution image, shown by an arrow784. Super-resolution image776shows two bounded areas E and F, referenced as bounded areas780and782. Bounded area780was covered by moving object764in microscan 1 whereas bounded area782was covered by moving object764in microscan 3. Bounded areas780and782can be filled in super-resolution image776by interpolating pixels from the microscan images where the moving object is not located. In the example shown, microscan 1 has a bounded area768A, microscan 2 has two bounded areas768B and768C and microscan 3 has a bounded area768D. Each of these bounded areas768A-768D represents regions of the microscan images which include pixel information possibly not present in other microscan images due the movement of moving object764. Once the position of moving object778is selected as bounded area786in super-resolution image776, bounded area780can be filled using pixel information from bounded areas768B and768D whereas bounded area782can be filled using pixel information from bounded areas768A and768C. Bounded areas780and782are thus filled in by interpolating pixel information from microscan images7621-7623. In this respect, a super-resolution image can be constructed and displayed to a user without image artifacts and without a moving object shown as a blurred image. Such constructed super-resolution images having a high image resolution can be consecutively displayed to a user at a low video frame rate. Reference is now made toFIGS.15A and15B, which are a schematic illustration of a method for presenting and displaying enhanced motion detection and the tracking of objects using microscanned images to a user, operative in accordance with another embodiment of the disclosed technique.FIGS.15A-15Bshow a method based on the block diagrams and illustrations inFIGS.12-14. In general, the right hand column of procedures (such as procedures812,814and816) are for displaying high image resolution in a low video frame rate super-resolution still video, thus displaying at least one moving object with no image artifacts at a selected location in super-resolution video and is substantially similar to what was shown in blocks674and678inFIG.12. The left hand column of procedures (such as procedures808,810,818and820) are for displaying low image resolution high video frame rate super-resolution video and is substantially similar to what was shown in blocks672,676,680and682inFIG.12. In a procedure800, a plurality of microscanned images of at least one moving object is acquired. Procedure800is similar to procedure250(FIG.6). A first subset of the microscanned images forms a first data set and a second subset of the microscanned images forms a second data set. The data sets are without graphic representation to a user but are nonetheless considered images. The plurality of microscanned images of the moving object can be divided into a plurality of data sets, however according to the disclosed technique, at minimum the plurality of microscanned images should be divided into two different data sets. 
In general, a data set represents a sufficient number of microscanned images for forming a super-resolution image frame. Thus in a typical scenario, the number of data sets is equal to the number of super-resolution image frames that can be constructed (as described below in procedure806). In an imaging system where a detector does not pan and image a panoramic scene of observation, each data set represents the same scene of observation as acquired microscanned images. According to the disclosed technique, at minimum each data set should include at least two microscanned images from which temporal and spatial information can be extracted and which can be used to form a super-resolution image, as described below. With reference toFIG.13A, plurality of microscanned images702A-702N are acquired. In a procedure802, at least a portion of a respective subset of the microscanned images is analyzed for spatial and temporal information. Procedure802is similar to procedure258(FIG.6). Procedure802is performed for each acquired data set in procedure800. Thus in the case shown in procedure800, procedure802is performed once for the first data set and once for the second data set. The spatial and temporal information derived in procedure802represents step a) as mentioned above regarding object detection and motion tracking in general and can be executed using known object tracking algorithms. Procedure802is executed on a data set representative of the acquired microscanned images. Procedure802is used to determine the presence of potential moving objects in the acquired microscan images and in general, each data set includes a plurality of microscanned images, thus an initial estimate of both angular velocity and angular acceleration can be determined for potentially moving objects. Spatial information which is analyzed may be the position of the moving object in each microscanned image of each data set. Temporal information of the moving object may include the instantaneous angular velocity and instantaneous angular acceleration of the moving object. With reference toFIG.12, microscanned images6621-662Nare individually analyzed for spatial and temporal information, as shown by block666. In this procedure, microscanned images6621-662Nare not formed into a super-resolution image but are individually analyzed as separate images for spatial and temporal information using object motion detection and tracking algorithms. In a procedure804, the spatial and temporal information of procedure802is used to determine a movement indication of the moving object. Procedure804is similar to procedure260(FIG.6). Procedure804substantially represents step b) mentioned above regarding object motion detection and tracking methods, wherein an attempt is made to establish and correlate a path of movement of the potential moving objects identified in procedure802. Procedure804is performed for each acquired data set. Since each acquired data set includes a plurality of microscanned images (at minimum two or three, but practically speaking many more, such as four, nine, sixteen and the like), consecutive microscanned images in each data set can be used to determine the instantaneous velocity and instantaneous acceleration of moving objects for which a movement indication is determined. In procedure804the movement indication of a moving object in the acquired and analyzed data sets can include an estimate of the angular velocity and angular acceleration of the moving object. 
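Procedures802and804can be pictured as frame differencing across the microscanned images of one data set followed by a persistence check. The sketch below is only illustrative; practical object motion detection and tracking algorithms, as noted above, are considerably more elaborate, and the threshold values and synthetic data used here are hypothetical.

    import numpy as np

    def movement_indication(microscans, diff_threshold=25, min_consistent_frames=2):
        """Very small stand-in for procedures 802/804: flag pixels that change
        between consecutive microscans (spatial/temporal analysis), then report a
        movement indication if changes persist across enough consecutive frames."""
        change_counts = np.zeros(microscans[0].shape, dtype=int)
        for prev, curr in zip(microscans[:-1], microscans[1:]):
            diff = np.abs(curr.astype(int) - prev.astype(int))
            change_counts += (diff > diff_threshold)
        moving_mask = change_counts >= min_consistent_frames
        return moving_mask.any(), moving_mask   # (movement indication, where it occurred)

    # Example with synthetic data: a bright 2x2 patch moving one pixel per microscan.
    frames = [np.zeros((32, 32), dtype=np.uint8) for _ in range(4)]
    for i, f in enumerate(frames):
        f[10:12, 5 + i:7 + i] = 200
    indicated, mask = movement_indication(frames)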
With reference toFIG.13A, moving objects within the microscanned images and within the super-resolution image are identified, as shown in block706. In a procedure806, a respective super-resolution image is formed from each data set. Procedure806is similar to procedure252(FIG.6). It is noted that in this procedure, the super-resolution image is not yet displayed to a user or viewer. As mentioned above, each data set should include at least two microscanned images. With reference toFIG.12, microscanned images6621-662Nare combined into a super-resolution image668. With reference toFIG.13A, plurality of microscanned images702A-702N are formed into super-resolution image704. As shown inFIG.15A, after procedure800, procedures802and804can occur simultaneously or in parallel as procedure806is executed. In the method ofFIGS.15A-15B, after procedures804and806have been executed, the method can either progress via procedures808,810,818and820(as described below regarding the displaying of a hybrid super-resolution video) or via procedures812,814and816(also as described below regarding the displaying of a super-resolution image which can be displayed as low video frame rate super-resolution video). Regardless, each of procedures808and812requires both of procedures804and806to have been executed before either procedure can be executed. The different paths via which the method ofFIGS.15A-15Bprogresses after procedures804and806are shown by different dotted lines inFIG.15A. Likewise, the method shown can progress with procedures808,810,818and820simultaneously or in parallel as the method progresses with procedures812,814and816. This might be the case when the displaying of super-resolution video according to the disclosed technique is displayed on at least two different display surfaces or on at least two different portions of a common display surface, wherein the displayed super-resolution video of procedures808and810(and optionally of procedures818and/or820) is displayed on one display surface (or one portion of a single display surface) and the displayed super-resolution video of procedures812,814and816is displayed on another display surface (or another portion of a single display surface). In a procedure808, for each respective super-resolution image formed in procedure806, a respective bounded area is designated surrounding each at least one moving object based on the respective movement indication(s) of procedure804. The bounded area may have any suitable shape for demarcating where a moving object may be in the super-resolution image. With reference toFIG.12, the moving target indications from block666are used to identify regions in super-resolution image668where a moving target may be present. Those regions are then bounded. In a procedure810, each respective super-resolution image is displayed to a user as follows. For areas in each respective super-resolution image outside the respective bounded areas or regions determined in procedure808, the super-resolution image is repeatedly displayed a plurality of times at a video frame rate. For example, if a super-resolution image can be constructed within 3 FPS (frames per second) and a video frame rate of 30 FPS is desired, then areas outside the bounded areas and regions of each super-resolution image repeatedly display the same image ten times for each super-resolution image until a subsequent super-resolution image can be constructed.
In procedure810, within the bounded areas or regions, a plurality of consecutive microscanned images of the at least one moving object is displayed at the video frame rate. Using the same example above, within each bounded area of a given super-resolution image, ten microscanned images of the moving object are displayed as video within the time it takes to construct another super-resolution image. In procedure810, a hybrid super-resolution image with microscanned images played as video in the bounded areas is displayed to a user, thereby providing a super-resolution image without image artifacts while also presenting microscanned images of a moving object as video at the native pixel resolution at which the microscanned images were acquired. Thus a super-resolution background is displayed with a portion of the video showing high frame rate targets in the imaged scene of observation at low resolution. With reference toFIG.12, when the super-resolution image is to be displayed to a user, the unbounded regions, which represent areas in the super-resolution image where there are no moving targets based on block666, are displayed to the user based on the pixel information in super-resolution image668for the duration of time it takes to generate the super-resolution image (for example, 0.33 seconds), thus showing those unbounded regions at a higher resolution compared to the native resolution of the image detector which initially captured microscanned images6621-662N. In the bounded regions of the super-resolution image, the low resolution microscanned images of the moving objects and targets are displayed at a fast frame rate (for example at 0.03 seconds per microscan image), thus displaying the microscanned images as video within the bounded regions of the super-resolution image. This is shown in a block672. The result of this type of displaying to the user is a hybrid improved super-resolution video, as shown by a block676. The super-resolution video is a hybrid since the unbounded regions are displayed as low frame rate video (like still images between the frames of the low frame rate video) based on the super-resolution image whereas the bounded regions are displayed as high frame rate video based on the microscanned images. Regardless the images are displayed at a standard video frame rate except that the unbounded regions display the same super-resolution background until an updated super-resolution image is formed whereas the bounded regions display the microscanned images showing the movement of moving objects in the imaged scene. With reference toFIG.13A, within the bounded shape, the microscanned images forming the super-resolution image are played consecutively as video however the area outside the bounded shape, which does not include moving objects, is shown like a super-resolution image. This is shown schematically in block708. The super-resolution image is shown at the video rate at which super-resolution images can be generated however within the bounded area of each super-resolution image, the microscanned images are played consecutively at a much higher video rate. As shown inFIG.15A, after procedure810, the method continues to procedures818and820which are shown inFIG.15B. It is noted that procedures818and820are optional procedures and are relevant in the case that the method ofFIGS.15A and15Bis used in a pan and tilt system for acquiring microscanned images to form a wide-area video image and/or a panoramic video image. 
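Before turning to procedures818and820, the per-frame compositing of procedure810described above can be sketched roughly as follows. The microscan images are assumed to be upsampled and registered to the super-resolution grid, and the boolean mask marks the bounded (moving-object) regions; all names are illustrative.

```python
import numpy as np

def hybrid_frames(super_res, microscans, mask):
    """Yield one display frame per microscan: the static super-resolution
    background outside the bounded regions, the current microscan inside them."""
    for scan in microscans:
        frame = super_res.copy()
        frame[mask] = scan[mask]   # bounded regions play as high-frame-rate video
        yield frame

# Toy example: a 12x12 super-resolution frame, ten registered microscans,
# and a bounded region covering a 4x4 patch.
sr = np.zeros((12, 12))
scans = [np.full((12, 12), float(i)) for i in range(10)]
mask = np.zeros((12, 12), dtype=bool)
mask[4:8, 4:8] = True
frames = list(hybrid_frames(sr, scans, mask))
```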
It is noted that procedures818and820represent different procedures for showing continuous video of a determined moving object in procedure810while a pan and tilt system cycles through its different LOS positions. Likewise, procedures818and820can also be executed simultaneously or in parallel and displayed on different display units or generated by different display generators (or different portions of the same display unit). In a procedure818, the plurality of consecutive microscanned images of the at least one moving object within the bounded area is stretched out over the refresh rate of a pan and tilt system. As mentioned above, the refresh rate of a pan and tilt system is the amount of time required for the system to pan and tilt through all its different stare positions before returning to an initial stare position. The microscanned images of the moving object within the bounded area are thus played consecutively at a video frame rate such that they cover sufficient time in the bounded area until the pan and tilt system returns to the same LOS position at which the previous microscanned images showing the moving object were acquired. With reference toFIG.12, in block680, within the bounded moving object area, the acquired microscanned images at the given LOS position are stretched out timewise to cover the entire second (i.e., 9 Hz) until the pan and tilt system returns to the given LOS position and acquires updated microscanned images. In a procedure820, a position of the at least one moving object within the bounded area is extrapolated according to the spatial and temporal information as determined in procedure804. Within the bounded area, the movement of the moving object is extrapolated for the duration of a pan and tilt refresh rate based on the determined velocity and acceleration estimates of the moving object, thereby displaying the moving object having continuous smooth movement each time updated microscanned images are acquired at a given LOS position of the pan and tilt system. As mentioned above, a Kalman filter and the like can be used to determine how much extrapolation should be used to minimize differences in the moving object's actual position as determined from updated acquired microscanned images and as predicted according to the extrapolation of the moving object's movement based on the velocity and acceleration estimates. With reference toFIG.12, in block682, based on the spatial and temporal information of block666, the position of the object shown in the bounded moving object area is extrapolated to show smooth and substantially continuous video from the time a given LOS position acquires microscanned images until the pan and tilt system returns to the given LOS position and provides updated acquired microscanned images. A filter, such as a Kalman filter, can be used regarding the extrapolation to ensure that the displayed extrapolated position of the moving object in the bounded area remains close to the actual updated position of the moving object each time the given LOS position is updated and refreshed with newly acquired microscanned images. Returning back toFIG.15A, in a procedure812, a bounded area is designated surrounding each at least one moving object based on the movement indication of procedure804for each microscanned image. The designation can also include bounded regions adjacent to the bounded areas surrounding each moving object.
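As a brief illustration of the extrapolation in procedure820before continuing with procedure812: a constant-velocity Kalman filter (one spatial axis shown) can repeatedly predict the object's position between LOS refreshes and then be corrected when updated microscans arrive. The matrices and noise levels below are assumptions made for the sketch, not values from the disclosed technique.

```python
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity state transition
H = np.array([[1.0, 0.0]])               # only position is observed
Q = np.eye(2) * 1e-3                     # process noise (assumed value)
R = np.array([[1e-1]])                   # measurement noise (assumed value)

def predict(x, P):
    """Extrapolate the state one step; called repeatedly between LOS refreshes."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the state with a position measured from newly acquired microscans."""
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(2) - K @ H) @ P

x, P = np.array([0.0, 1.0]), np.eye(2)   # initial position and velocity estimate
for _ in range(9):                        # e.g., until the pan/tilt cycle returns
    x, P = predict(x, P)
x, P = update(x, P, z=np.array([9.3]))
```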
With reference toFIG.14, in each microscan image, moving object764is identified and a bounded area is formed around each moving object. In first microscan image7621, a bounded area7661surrounds moving object764, in second microscan image7622, a bounded area7662surrounds moving object764and in third microscan image7623, a bounded area7663surrounds moving object764. Microscan 1 has a bounded area768A, microscan 2 has two bounded areas768B and768C and microscan 3 has a bounded area768D. Each of these bounded areas768A-768D represents regions of the microscan images which include pixel information possibly not present in other microscan images due to the movement of moving object764. In a procedure814, a corrected position of each at least one moving object in each super-resolution image is determined according to the spatial and temporal information analyzed in procedure802. In this procedure, the respective super-resolution image of procedure806is still not displayed to the user or viewer. Using the spatial and temporal information about each moving object, for example based on the instantaneous velocity estimates of the moving object between each two consecutive microscanned images, a corrected position of the moving object is determined for each super-resolution image. Therefore instead of averaging the position of the moving object based on its different positions in the microscanned images and data sets forming a respective super-resolution image, or combining all the microscanned images together which would form a blurry image of the moving object, as shown above inFIG.11B, a corrected position of the moving object is determined where it will be displayed as a still object in the super-resolution image. With reference toFIG.14, bounded area786is identified in the super-resolution image where the moving object is to be displayed. Moving object764is displayed as a super-resolution moving object778. The image formed in bounded area786is derived from each of the images of moving object764in microscan images7621-7623. Thus the images of moving object764are corrected and combined into the image of moving object778which is displayed in bounded area786. Both velocity estimates7701and7702can be used to determine where in super-resolution image776moving object778should be displayed. By selecting a particular position for moving object778to be displayed in super-resolution image776, moving object778is actually displayed as an image derived from the different images of moving object764captured in microscan images7621-7623along with a weighted estimate of the velocity of moving object778for the super-resolution image, shown by an arrow784. In a procedure816, each at least one moving object is displayed in each respective super-resolution image using the spatial and temporal information of the corrected position determined in procedure814. In this procedure, the at least one moving object in each of the microscanned images are combined and used to display the moving object as a still object in the super-resolution image at the determined corrected position. The moving object displayed as a still object is not only displayed without image blur or image smear but also displayed at a higher resolution due to the combination of the microscanned images.
According to the disclosed technique, a correction algorithm is used in procedure816to resample and interpolate each identified moving object in the microscanned images into the respective determined correction position for each identified moving object in the super-resolution image. The identified moving object is thus displayed in the super-resolution image using high-resolution sampling of the microscanned images thereby cancelling the blur and smear effect caused by sampling the identified moving object at different times and different positions for each microscan image. The correction algorithm of the disclosed technique includes the following procedures. As described above in procedure812regarding designating a bounded area around each moving object, in a first procedure (not shown) once a moving object has been identified in a microscan image, at least two consecutive microscan images are used to determine an estimate of the instantaneous angular velocity of the moving object and at least a third consecutive microscan image is used to determine an estimate of the instantaneous angular acceleration of the moving object. In a second procedure (not shown), a determination is made of the position of the moving object in each microscan image forming a respective super-resolution image, the determination being based upon the estimates of the instantaneous angular velocity and acceleration of the moving object. In a third procedure (not shown), based on the determined position of the moving object in each microscan image, the pixels forming each moving object in all the microscan images are resampled into the respective super-resolution image at the determined corrected position, thereby forming a higher resolution image of the moving object within the super-resolution image. In an alternative to the methods shown and described visually inFIGS.13A,13B and14, the bounded areas of identified moving objects in the microscanned images can be combined to increase the available pixel information of each identified moving object. In this embodiment, the aforementioned correction algorithm is performed for each bounded area using the position of the moving object in each microscan image. As an example, if 9 microscan images are used to form a super-resolution image, then the identified moving object is resampled 9 times as per the above mentioned correction algorithm, each iteration of the correction algorithm resampling the bounded areas of the microscanned images at each respective determined position of each identified moving object for each microscan image. In this respect, not only can the unbounded areas in the microscanned images be formed and displayed as a super-resolution image, the bounded areas as well can be formed as super-resolution images and played consecutively to show video of the identified moving object at a high image resolution as well as a high video frame rate. Thus higher resolution images of the moving object at each position in the microscanned images can be displayed taking into account all the pixel data from all the microscanned images forming a super-resolution image. The pixel data comes from the different positions of the identified moving object in the bounded area of each microscanned image and the resampling of all the microscanned images at each different position of the identified moving object. Thus the identified moving object in each microscan image can be enhanced and displayed at super-resolution as super-resolution video.
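A rough sketch of the resampling idea in the correction algorithm described above: object patches cut from each microscan are shifted back toward a single corrected position before being accumulated on the super-resolution grid, so the object appears sharp rather than smeared. The naive upsampling, the offset estimates, and the use of scipy's sub-pixel shift are illustrative assumptions, not the patented implementation.

```python
import numpy as np
from scipy.ndimage import shift

def resample_object(patches, offsets, upscale=3):
    """patches: equally sized object crops from the microscans; offsets:
    estimated (dy, dx) of each crop relative to the corrected position.
    Each crop is upsampled, shifted back to the corrected position, and
    accumulated so the object stays sharp."""
    acc = None
    for patch, (dy, dx) in zip(patches, offsets):
        up = np.kron(patch, np.ones((upscale, upscale)))        # naive upsampling
        aligned = shift(up, (-dy * upscale, -dx * upscale), order=1)
        acc = aligned if acc is None else acc + aligned
    return acc / len(patches)
```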
Using the position of each identified moving object determined from each microscan image, identified moving objects can be displayed at a high video frame rate while also being displayed at a high resolution (i.e., super-resolution). Thus unlike the embodiment shown inFIGS.13A and13B, the image resolution of moving objects in the bounded areas will be similar to the static images shown in the unbounded areas, at a high resolution. This alternative embodiment is possible due to the difference in the image capture rate of microscanned images, which can be for example around 180 Hz, versus the refresh rate of a display, which is around 30 Hz. The rapid capture rate of microscan images thus enables the captured microscan images to be used to display an identified moving object in each microscan image at super-resolution while also having sufficient time to display the identified moving object as video in the bounded areas within the super-resolution image shown in the unbounded areas. In comparison to the method mentioned above inFIG.13B, in this alternative embodiment, instead of resampling all the microscan images into a respective super-resolution image at a single determined corrected position, all the microscan images are resampled into each position of the moving object in each microscan image, thus enabling super-resolution video for static as well as moving objects to be displayed. Areas of the super-resolution image where the moving object could have been placed can be filled in by the pixel information of the designated bounded areas adjacent to the bounded areas surrounding each moving object as determined in procedure812. In this respect, procedure816displays a super-resolution image of a moving object without any image artifacts by positioning the moving object per super-resolution image at a given corrected position within each respective super-resolution image. Displayed super-resolution images of the moving object as per procedure816can be displayed sequentially to display super-resolution video wherein the image resolution is high but the video frame rate is low. With reference toFIG.14, by selecting a particular position for moving object778to be displayed in super-resolution image776, moving object778is actually displayed as a still image derived from the different images of moving object764captured in microscan images7621-7623along with a weighted estimate of the velocity of moving object778for the super-resolution image, shown by an arrow784. Super-resolution image776shows two bounded areas E and F, referenced as bounded areas780and782. Bounded area780was covered by moving object764in microscan 1 whereas bounded area782was covered by moving object764in microscan 3. Bounded areas780and782can be filled in super-resolution image776by interpolating pixels from the microscan images where the moving object is not located. The method ofFIGS.15A and15Brelates to how super-resolution images can be displayed without image artifacts when there are moving objects present in the microscan images from which the super-resolution image is constructed. The method can either present super-resolution video with a high video frame rate yet a low image resolution (as per procedures808and810and optionally procedures818and820) or super-resolution video with a high image resolution yet a low video frame rate (as per procedures812-816).
It will be appreciated by persons skilled in the art that the disclosed technique is not limited to what has been particularly shown and described hereinabove. | 196,275 |
11861850 | To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized on other embodiments without specific recitation. DETAILED DESCRIPTION In the last decade, vision-based tracking systems have been widely deployed in many professional sports leagues to capture the positional information of players in a match. This data may be used to generate fine-grained trajectories which are valuable for coaches and experts to analyze and train their players. Although players can be detected quite well in every frame, an issue arises in conventional systems due to the amount of manual annotation needed from gaps between player trajectories. The majority of these gaps may be caused by player occlusion or players wandering out-of-scene. One or more techniques disclosed herein improve upon conventional systems by providing a system capable of generating fine-grained player trajectories despite player occlusion or the player wandering out-of-scene. For example, one or more techniques disclosed herein may be directed to operations associated with recognizing and associating a person at different physical locations over time, after that person had been previously observed elsewhere. The problem solved by the techniques described herein is compounded in the domain of team sports because, unlike ordinary surveillance, players' appearances are not discriminative. The techniques described herein overcome this challenge by leveraging player jersey (or uniform) information to aid in identifying players. However, because players constantly change their orientations during the course of the game, simply using jersey information is not a trivial task. To accurately identify player trajectories, the present system may identify frames of video with visible jersey numbers and may associate identities using a deep neural network. FIG.1is a block diagram illustrating a computing environment100, according to example embodiments. Computing environment100may include camera system102, organization computing system104, and one or more client devices108communicating via network105. Network105may be of any suitable type, including individual connections via the Internet, such as cellular or Wi-Fi networks. In some embodiments, network105may connect terminals, services, and mobile devices using direct connections, such as radio frequency identification (RFID), near-field communication (NFC), Bluetooth™, low-energy Bluetooth™ (BLE), Wi-Fi™, ZigBee™, ambient backscatter communication (ABC) protocols, USB, WAN, or LAN. Because the information transmitted may be personal or confidential, security concerns may dictate one or more of these types of connection be encrypted or otherwise secured. In some embodiments, however, the information being transmitted may be less personal, and therefore, the network connections may be selected for convenience over security. Network105may include any type of computer networking arrangement used to exchange data or information. For example, network105may be the Internet, a private data network, virtual private network using a public network and/or other suitable connection(s) that enables components in computing environment100to send and receive information between the components of environment100. Camera system102may be positioned in a venue106.
For example, venue106may be configured to host a sporting event that includes one or more agents112. Camera system102may be configured to capture the motions of all agents (i.e., players) on the playing surface, as well as one or more other objects of relevance (e.g., ball, referees, etc.). In some embodiments, camera system102may be an optically-based system using, for example, a plurality of fixed cameras. For example, a system of six stationary, calibrated cameras, which project the three-dimensional locations of players and the ball onto a two-dimensional overhead view of the court may be used. In another example, a mix of stationary and non-stationary cameras may be used to capture motions of all agents on the playing surface as well as one or more objects of relevance. As those skilled in the art recognize, utilization of such camera system (e.g., camera system102) may result in many different camera views of the court (e.g., high sideline view, free-throw line view, huddle view, face-off view, end zone view, etc.). Generally, camera system102may be utilized for the broadcast feed of a given match. Each frame of the broadcast feed may be stored in a game file110. Camera system102may be configured to communicate with organization computing system104via network105. Organization computing system104may be configured to manage and analyze the broadcast feed captured by camera system102. Organization computing system104may include at least a web client application server114, a data store118, an auto-clipping agent120, a data set generator122, a camera calibrator124, a player tracking agent126, and an interface agent128. Each of auto-clipping agent120, data set generator122, camera calibrator124, player tracking agent126, and interface agent128may be comprised of one or more software modules. The one or more software modules may be collections of code or instructions stored on a media (e.g., memory of organization computing system104) that represent a series of machine instructions (e.g., program code) that implements one or more algorithmic steps. Such machine instructions may be the actual computer code the processor of organization computing system104interprets to implement the instructions or, alternatively, may be a higher level of coding of the instructions that is interpreted to obtain the actual computer code. The one or more software modules may also include one or more hardware components. One or more aspects of an example algorithm may be performed by the hardware components (e.g., circuitry) itself, rather than as a result of the instructions. Data store118may be configured to store one or more game files124. Each game file124may include the broadcast data of a given match. For example, the broadcast data may include a plurality of video frames captured by camera system102. Auto-clipping agent120may be configured to parse the broadcast feed of a given match to identify a unified view of the match. In other words, auto-clipping agent120may be configured to parse the broadcast feed to identify all frames of information that are captured from the same view. In one example, such as in the sport of basketball, the unified view may be a high sideline view. Auto-clipping agent120may clip or segment the broadcast feed (e.g., video) into its constituent parts (e.g., different scenes in a movie, commercials from a match, etc.). To generate a unified view, auto-clipping agent120may identify those parts that capture the same view (e.g., high sideline view).
Accordingly, auto-clipping agent120may remove all (or a portion) of untrackable parts of the broadcast feed (e.g., player close-ups, commercials, half-time shows, etc.). The unified view may be stored as a set of trackable frames in a database. Data set generator122may be configured to generate a plurality of data sets from the trackable frames. In some embodiments, data set generator122may be configured to identify body pose information. For example, data set generator122may utilize body pose information to detect players in the trackable frames. In some embodiments, data set generator122may be configured to further track the movement of a ball or puck in the trackable frames. In some embodiments, data set generator122may be configured to segment the playing surface in which the event is taking place to identify one or more markings of the playing surface. For example, data set generator122may be configured to identify court (e.g., basketball, tennis, etc.) markings, field (e.g., baseball, football, soccer, rugby, etc.) markings, ice (e.g., hockey) markings, and the like. The plurality of data sets generated by data set generator122may be subsequently used by camera calibrator124for calibrating the cameras of each camera system102. Camera calibrator124may be configured to calibrate the cameras of camera system102. For example, camera calibrator124may be configured to project players detected in the trackable frames to real world coordinates for further analysis. Because cameras in camera systems102are constantly moving in order to focus on the ball or key plays, such cameras are unable to be pre-calibrated. Camera calibrator124may be configured to improve or optimize player projection parameters using a homography matrix. Player tracking agent126may be configured to generate tracks for each player on the playing surface. For example, player tracking agent126may leverage player pose detections, camera calibration, and broadcast frames to generate such tracks. In some embodiments, player tracking agent126may further be configured to generate tracks for each player, even if, for example, the player is currently out of a trackable frame. For example, player tracking agent126may utilize body pose information to link players that have left the frame of view. Interface agent128may be configured to generate one or more graphical representations corresponding to the tracks for each player generated by player tracking agent126. For example, interface agent128may be configured to generate one or more graphical user interfaces (GUIs) that include graphical representations of player tracking each prediction generated by player tracking agent126. Client device108may be in communication with organization computing system104via network105. Client device108may be operated by a user. For example, client device108may be a mobile device, a tablet, a desktop computer, or any computing system having the capabilities described herein. Users may include, but are not limited to, individuals such as, for example, subscribers, clients, prospective clients, or customers of an entity associated with organization computing system104, such as individuals who have obtained, will obtain, or may obtain a product, service, or consultation from an entity associated with organization computing system104. Client device108may include at least application132. Application132may be representative of a web browser that allows access to a website or a stand-alone application. 
Client device108may access application132to access one or more functionalities of organization computing system104. Client device108may communicate over network105to request a webpage, for example, from web client application server114of organization computing system104. For example, client device108may be configured to execute application132to access content managed by web client application server114. The content that is displayed to client device108may be transmitted from web client application server114to client device108, and subsequently processed by application132for display through a graphical user interface (GUI) of client device108. FIG.2is a block diagram illustrating a computing environment200, according to example embodiments. As illustrated, computing environment200includes auto-clipping agent120, data set generator122, camera calibrator124, and player tracking agent126communicating via network205. Network205may be of any suitable type, including individual connections via the Internet, such as cellular or Wi-Fi networks. In some embodiments, network205may connect terminals, services, and mobile devices using direct connections, such as radio frequency identification (RFID), near-field communication (NFC), Bluetooth™, low-energy Bluetooth™ (BLE), Wi-Fi™, ZigBee™, ambient backscatter communication (ABC) protocols, USB, WAN, or LAN. Because the information transmitted may be personal or confidential, security concerns may dictate one or more of these types of connection be encrypted or otherwise secured. In some embodiments, however, the information being transmitted may be less personal, and therefore, the network connections may be selected for convenience over security. Network205may include any type of computer networking arrangement used to exchange data or information. For example, network205may be the Internet, a private data network, virtual private network using a public network and/or other suitable connection(s) that enables components in computing environment200to send and receive information between the components of environment200. Auto-clipping agent120may include principal component analysis (PCA) agent202, clustering model204, and neural network206. As recited above, when trying to understand and extract data from a broadcast feed, auto-clipping agent120may be used to clip or segment the video into its constituent parts. In some embodiments, auto-clipping agent120may focus on separating a predefined, unified view (e.g., a high sideline view) from all other parts of the broadcast stream. PCA agent202may be configured to utilize a PCA analysis to perform per frame feature extraction from the broadcast feed. For example, given a pre-recorded video, PCA agent202may extract a frame every X-seconds (e.g., 10 seconds) to build a PCA model of the video. In some embodiments, PCA agent202may generate the PCA model using incremental PCA, through which PCA agent202may select a top subset of components (e.g., top 120 components) to generate the PCA model. PCA agent202may be further configured to extract one frame every X seconds (e.g., one second) from the broadcast stream and compress the frames using PCA model. In some embodiments, PCA agent202may utilize PCA model to compress the frames into 120-dimensional form. For example, PCA agent202may solve for the principal components in a per video manner and keep the top 100 components per frame to ensure accurate clipping. Clustering model204may be configured to cluster the top subset of components into clusters.
For example, clustering model204may be configured to center, normalize, and cluster the top 120 components into a plurality of clusters. In some embodiments, for clustering of compressed frames, clustering model204may implement k-means clustering. In some embodiments, clustering model204may set k=9 clusters. K-means clustering attempts to take some data $x = \{x_1, x_2, \ldots, x_n\}$ and divide it into $k$ subsets, $S = \{S_1, S_2, \ldots, S_k\}$, by optimizing: $\arg\min_S \sum_{j}^{k} \sum_{x \in S_j} \lVert x - \mu_j \rVert^2$, where $\mu_j$ is the mean of the data in the set $S_j$. In other words, clustering model204attempts to find clusters with the smallest inter-cluster variance using k-means clustering techniques. Clustering model204may label each frame with its respective cluster number (e.g., cluster 1, cluster 2, . . . , cluster k). Neural network206may be configured to classify each frame as trackable or untrackable. A trackable frame may be representative of a frame that captures the unified view (e.g., high sideline view). An untrackable frame may be representative of a frame that does not capture the unified view. To train neural network206, an input data set that includes thousands of frames pre-labeled as trackable or untrackable that are run through the PCA model may be used. Each compressed frame and label pair (i.e., cluster number and trackable/untrackable) may be provided to neural network206for training. In some embodiments, neural network206may include four layers. The four layers may include an input layer, two hidden layers, and an output layer. In some embodiments, input layer may include 120 units. In some embodiments, each hidden layer may include 240 units. In some embodiments, output layer may include two units. The input layer and each hidden layer may use sigmoid activation functions. The output layer may use a SoftMax activation function. To train neural network206, auto-clipping agent120may reduce (e.g., minimize) the binary cross-entropy loss between the predicted label $\hat{y}_j$ for sample $j$ and the true label $y_j$ by: $H = -\frac{1}{N}\sum_{j}^{N}\left[y_j \log \hat{y}_j + (1 - y_j)\log(1 - \hat{y}_j)\right]$. Accordingly, once trained, neural network206may be configured to classify each frame as untrackable or trackable. As such, each frame may have two labels: a cluster number and trackable/untrackable classification. Auto-clipping agent120may utilize the two labels to determine if a given cluster is deemed trackable or untrackable. For example, if auto-clipping agent120determines that a threshold number of frames in a cluster are considered trackable (e.g., 80%), auto-clipping agent120may conclude that all frames in the cluster are trackable. Further, if auto-clipping agent120determines that less than a threshold number of frames in a cluster are considered untrackable (e.g., 30% and below), auto-clipping agent120may conclude that all frames in the cluster are untrackable. Still further, if auto-clipping agent120determines that a certain number of frames in a cluster are considered trackable (e.g., between 30% and 80%), auto-clipping agent120may request that an administrator further analyze the cluster. Once each frame is classified, auto-clipping agent120may clip or segment the trackable frames. Auto-clipping agent120may store the segments of trackable frames in database205associated therewith. Data set generator122may be configured to generate a plurality of data sets from auto-clipping agent120. As illustrated, data set generator122may include pose detector212, ball detector214, and playing surface segmenter216. Pose detector212may be configured to detect players within the broadcast feed.
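Before turning to the data set generator's individual components, the compression and clustering steps of the clipping stage described above might look like the following scikit-learn sketch. Frame extraction and the trained trackable/untrackable classifier are assumed to exist elsewhere; the component and cluster counts simply mirror the examples in the text.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA
from sklearn.cluster import KMeans

def compress_and_cluster(frames, n_components=120, n_clusters=9):
    """frames: (num_frames, height*width) array of flattened sampled frames,
    with num_frames assumed to be at least n_components.
    Returns the PCA-compressed frames and one cluster label per frame."""
    pca = IncrementalPCA(n_components=n_components)
    compressed = pca.fit_transform(frames)
    # center and normalize before clustering, as described above
    compressed = (compressed - compressed.mean(axis=0)) / (compressed.std(axis=0) + 1e-8)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(compressed)
    return compressed, labels
```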
Data set generator122may provide, as input, to pose detector212both the trackable frames stored in database205as well as the broadcast video feed. In some embodiments, pose detector212may implement Open Pose to generate body pose data to detect players in the broadcast feed and the trackable frames. In some embodiments, pose detector212may implement sensors positioned on players to capture body pose information. Generally, pose detector212may use any means to obtain body pose information from the broadcast video feed and the trackable frame. The output from pose detector212may be pose data stored in database215associated with data set generator122. Ball detector214may be configured to detect and track the ball (or puck) within the broadcast feed. Data set generator122may provide, as input, to ball detector214both the trackable frames stored in database205and the broadcast video feed. In some embodiments, ball detector214may utilize a faster region-convolutional neural network (R-CNN) to detect and track the ball in the trackable frames and broadcast video feed. Faster R-CNN is a regional proposal based network. Faster R-CNN uses a convolutional neural network to propose a region of interest, and then classifies the object in each region of interest. Because it is a single unified network, the regions of interest and the classification steps may improve each other, thus allowing the classification to handle objects of various sizes. The output from ball detector214may be ball detection data stored in database215associated with data set generator122. Playing surface segmenter216may be configured to identify playing surface markings in the broadcast feed. Data set generator122may provide, as input, to playing surface segmenter216both trackable frames stored in database205and the broadcast video feed. In some embodiments, playing surface segmenter216may be configured to utilize a neural network to identify playing surface markings. The output from playing surface segmenter216may be playing surface markings stored in database215associated with data set generator122. Camera calibrator124may be configured to address the issue of moving camera calibration in sports. Camera calibrator124may include spatial transformer network224and optical flow module226. Camera calibrator124may receive, as input, segmented playing surface information generated by playing surface segmenter216, the trackable clip information, and pose information. Given such inputs, camera calibrator124may be configured to project coordinates in the image frame to real-world coordinates for tracking analysis. Keyframe matching module224may receive, as input, output from playing surface segmenter216and a set of templates. For each frame, keyframe matching module224may match the output from playing surface segmenter216to a template. Those frames that are able to match to a given template are considered keyframes. In some embodiments, keyframe matching module224may implement a neural network to match the one or more frames. In some embodiments, keyframe matching module224may implement cross-correlation to match the one or more frames. Spatial transformer network (STN)224may be configured to receive, as input, the identified keyframes from keyframe matching module224. STN224may implement a neural network to fit a playing surface model to segmentation information of the playing surface. By fitting the playing surface model to such output, STN224may generate homography matrices for each keyframe.
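Returning briefly to ball detector214described above, an off-the-shelf Faster R-CNN can serve as a stand-in for the detection step. The sketch below uses torchvision's pretrained COCO model; the 'sports ball' category id and the score threshold are assumptions, and fine-tuning on broadcast frames is assumed but not shown.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
SPORTS_BALL = 37  # COCO category id commonly used for 'sports ball' (assumed here)

@torch.no_grad()
def detect_ball(frame_tensor, score_thresh=0.5):
    """frame_tensor: float image tensor of shape (3, H, W) scaled to [0, 1].
    Returns the bounding boxes of detections classified as a sports ball."""
    out = model([frame_tensor])[0]
    keep = (out["labels"] == SPORTS_BALL) & (out["scores"] > score_thresh)
    return out["boxes"][keep]
```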
Optical flow module226may be configured to identify the pattern of motion of objects from one trackable frame to another. In some embodiments, optical flow module226may receive, as input, trackable frame information and body pose information for players in each trackable frame. Optical flow module226may use body pose information to remove players from the trackable frame information. Once removed, optical flow module226may determine the motion between frames to identify the motion of a camera between successive frames. In other words, optical flow module226may identify the flow field from one frame to the next. Optical flow module226and STN224may work in conjunction to generate a homography matrix. For example, optical flow module226and STN224may generate a homography matrix for each trackable frame, such that a camera may be calibrated for each frame. The homography matrix may be used to project the track or position of players into real-world coordinates. For example, the homography matrix may indicate a 2-dimensional to 2-dimensional transform, which may be used to project the players' locations from image coordinates to the real world coordinates on the playing surface. Player tracking agent126may be configured to generate a track for each player in a match. Player tracking agent126may include neural network232and re-identification agent232. Player tracking agent126may receive, as input, trackable frames, pose data, calibration data, and broadcast video frames. In a first phase, player tracking agent126may match pairs of player patches, which may be derived from pose information, based on appearance and distance. For example, let $H_j^t$ be the player patch of the $j$th player at time $t$, and let $I_j^t = \{x_j^t, y_j^t, w_j^t, h_j^t\}$ be the image coordinates $x_j^t$, $y_j^t$, the width $w_j^t$, and the height $h_j^t$ of the $j$th player at time $t$. Using this, player tracking agent126may associate any pair of detections using the appearance cross correlation $C_{ij}^t = H_i^t * H_j^{t+1}$ and $L_{ij}^t = \lVert I_i^t - I_j^{t+1} \rVert_2^2$ by finding: $\arg\max_{ij}(C_{ij}^t + L_{ij}^t)$, where $I$ is the bounding box positions (x,y), width $w$, and height $h$; $C$ is the cross correlation between the image patches (e.g., image cutout using a bounding box) and measures similarity between two image patches; and $L$ is a measure of the difference (e.g., distance) between two bounding boxes $I$. Performing this for every pair may generate a large set of short tracklets. The end points of these tracklets may then be associated with each other based on motion consistency and color histogram similarity. For example, let $v_i$ be the extrapolated velocity from the end of the $i$th tracklet and $v_j$ be the velocity extrapolated from the beginning of the $j$th tracklet. Then $c_{ij} = v_i \cdot v_j$ may represent the motion consistency score. Furthermore, let $p_i(h)$ represent the likelihood of a color $h$ being present in an image patch $i$. Player tracking agent126may measure the color histogram similarity using the Bhattacharyya distance: $D_B(p_i, p_j) = -\ln(BC(p_i, p_j))$ with $BC(p_i, p_j) = \sum_h \sqrt{p_i(h)\, p_j(h)}$. Recall, tracking agent120finds the matching pair of tracklets by finding: $\arg\max_{ij}(c_{ij} + D_B(p_i, p_j))$. Solving for every pair of broken tracklets may result in a set of clean tracklets, while leaving some tracklets with large, i.e., many frames, gaps. To connect the large gaps, player tracking agent may augment affinity measures to include a motion field estimation, which may account for the change of player direction that occurs over many frames.
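Before the motion field is detailed, the two tracklet-affinity terms just described (motion consistency and color-histogram similarity) might be sketched as follows; the histograms are assumed to be normalized color distributions of the two image patches.

```python
import numpy as np

def bhattacharyya_distance(p_i, p_j, eps=1e-12):
    """Color-histogram term: D_B = -ln(BC), with BC = sum_h sqrt(p_i(h) p_j(h))."""
    bc = np.sum(np.sqrt(p_i * p_j))
    return -np.log(bc + eps)

def motion_consistency(v_i, v_j):
    """Dot product of the velocity extrapolated from the end of tracklet i and
    the velocity extrapolated from the start of tracklet j."""
    return float(np.dot(v_i, v_j))
```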
The motion field may be a vector field that represents the velocity magnitude and direction as a vector on each location on the playing surface. Given the known velocity of a number of players on the playing surface, the full motion field may be generated using cubic spline interpolation. For example, let $X_i = \{x_i^t\}_{t \in (0,T)}$ be the court position of a player $i$ at every time $t$. Then, there may exist a pair of points that have a displacement $d_i^\lambda(x_i^t) = x_i^t - x_i^{t+\lambda}$ if $\lambda < T$. Accordingly, the motion field may then be: $V(x,\lambda) = G(x,5) * \sum_i d_i^\lambda(x_i^t)$, where $G(x,5)$ may be a Gaussian kernel with standard deviation equal to about five feet. In other words, motion field may be a Gaussian blur of all displacements. Neural network232may be used to predict player trajectories given ground truth player trajectories. Given a set of ground truth player trajectories, $X_i$, the velocity of each player at each frame may be calculated, which may provide the ground truth motion field for neural network232to learn. For example, given a set of ground truth player trajectories $X_i$, player tracking agent126may be configured to generate the set $\hat{V}(x,\lambda)$, where $\hat{V}(x,\lambda)$ may be the predicted motion field. Neural network232may be trained, for example, to minimize $\lVert V(x,\lambda) - \hat{V}(x,\lambda) \rVert_2^2$. Player trajectory agent may then generate the affinity score for any tracking gap of size $\lambda$ by: $K_{ij} = V(x,\lambda) \cdot d_{ij}$, where $d_{ij} = x_i^t - x_j^{t+\lambda}$ is the displacement vector between all broken tracks with a gap size of $\lambda$. Re-identification agent234may be configured to link players that have left the frame of view. Re-identification agent234may include track generator236, conditional autoencoder240, and Siamese network242. Track generator236may be configured to generate a gallery of tracks. Track generator236may receive a plurality of tracks from database205. For each track $X$, there may be a player identity label $y$, and for each player patch $I$, pose information $p$ may be provided by the pose detection stage. Given a set of player tracks, track generator236may build a gallery for each track where the jersey number of a player (or some other static feature) is always visible. The body pose information generated by data set generator122allows track generator236to determine a player's orientation. For example, track generator236may utilize a heuristic method, which may use the normalized shoulder width to determine the orientation: $S_{orient} = \frac{\lVert l_{Lshoulder} - l_{Rshoulder} \rVert_2}{\lVert l_{Neck} - l_{Hip} \rVert_2}$, where $l$ may represent the location of one body part. The width of shoulder may be normalized by the length of the torso so that the effect of scale may be eliminated. As two shoulders should be apart when a player faces towards or backwards from the camera, track generator236may use those patches whose $S_{orient}$ is larger than a threshold to build the gallery. After this stage, each track $X_n$ may include a gallery: $G_n = \{I_i \mid S_{orient,i} > \text{thresh}\}\ \forall I_i \in X_n$. Conditional autoencoder240may be configured to identify one or more features in each track. For example, unlike conventional approaches to re-identification issues, players in team sports may have very similar appearance features, such as clothing style, clothing color, and skin color. One of the more intuitive differences may be the jersey number that may be shown at the front and/or back side of each jersey. In order to capture those specific features, conditional autoencoder240may be trained to identify such features.
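For reference, the shoulder-width orientation heuristic and the gallery filter described above might be sketched as below. The keypoint names are assumptions about the pose detector's output, and the threshold value is illustrative.

```python
import numpy as np

def s_orient(pose):
    """pose: dict of 2D keypoints, e.g. {'l_shoulder': (x, y), ...} (assumed names).
    Normalized shoulder width: large when the player faces toward or away from
    the camera, i.e., when a jersey number is likely visible."""
    shoulder = np.linalg.norm(np.subtract(pose["l_shoulder"], pose["r_shoulder"]))
    torso = np.linalg.norm(np.subtract(pose["neck"], pose["hip"]))
    return shoulder / (torso + 1e-8)

def build_gallery(track, thresh=0.5):
    """track: iterable of (patch, pose) pairs; keep only patches whose
    orientation score passes the (illustrative) threshold."""
    return [patch for patch, pose in track if s_orient(pose) > thresh]
```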
In some embodiments, conditional autoencoder240may be a three-layer convolutional autoencoder, where the kernel sizes may be 3×3 for all three layers, in which there are 64, 128, 128 channels respectively. Those hyper-parameters may be tuned to ensure that jersey number may be recognized from the reconstructed images so that the desired features may be learned in the autoencoder. In some embodiments, $f(I_i)$ may be used to denote the features that are learned from image $i$. Use of conditional autoencoder240improves upon conventional processes for a variety of reasons. First, there is typically not enough training data for every player because some players only play a very short time in each game. Second, different teams can have the same jersey colors and jersey numbers, so classifying those players may be difficult. Siamese network242may be used to measure the similarity between two image patches. For example, Siamese network242may be trained to measure the similarity between two image patches based on their feature representations $f(I)$. Given two image patches, their feature representations $f(I_i)$ and $f(I_j)$ may be flattened, connected, and input into a perception network. In some embodiments, the $L_2$ norm may be used to connect the two sub-networks of $f(I_i)$ and $f(I_j)$. In some embodiments, the perception network may include three layers, which may include 1024, 512, and 216 hidden units, respectively. Such network may be used to measure the similarity $s(I_i, I_j)$ between every pair of image patches of the two tracks that have no time overlapping. In order to increase the robustness of the prediction, the final similarity score of the two tracks may be the average of all pairwise scores in their respective galleries: $S(x_n, x_m) = \frac{1}{\lvert G_n \rvert\, \lvert G_m \rvert} \sum_{i \in G_n, j \in G_m} s(I_i, I_j)$. This similarity score may be computed for every two tracks that do not have time overlapping. If the score is higher than some threshold, those two tracks may be associated. FIG.3is a block diagram300illustrating aspects of operations discussed above and below in conjunction withFIG.2andFIGS.4-10, according to example embodiments. Block diagram300may illustrate the overall workflow of organization computing system104in generating player tracking information. Block diagram300may include set of operations302-308. Set of operations302may be directed to generating trackable frames (e.g., Method500inFIG.5). Set of operations304may be directed to generating one or more data sets from trackable frames (e.g., operations performed by data set generator122). Set of operations306may be directed to camera calibration operations (e.g., Method700inFIG.7). Set of operations308may be directed to generating and predicting player tracks (e.g., Method900inFIG.9and Method1000inFIG.10). FIG.4is a flow diagram illustrating a method400of generating player tracks, according to example embodiments. Method400may begin at step402. At step402, organization computing system104may receive (or retrieve) a broadcast feed for an event. In some embodiments, the broadcast feed may be a live feed received in real-time (or near real-time) from camera system102. In some embodiments, the broadcast feed may be a broadcast feed of a game that has concluded. Generally, the broadcast feed may include a plurality of frames of video data. Each frame may capture a different camera perspective. At step404, organization computing system104may segment the broadcast feed into a unified view.
For example, auto-clipping agent120may be configured to parse the plurality of frames of data in the broadcast feed to segment the trackable frames from the untrackable frames. Generally, trackable frames may include those frames that are directed to a unified view. For example, the unified view may be considered a high sideline view. In other examples, the unified view may be an endzone view. In other examples, the unified view may be a top camera view. At step406, organization computing system104may generate a plurality of data sets from the trackable frames (i.e., the unified view). For example, data set generator122may be configured to generate a plurality of data sets based on trackable clips received from auto-clipping agent120. In some embodiments, pose detector212may be configured to detect players within the broadcast feed. Data set generator122may provide, as input, to pose detector212both the trackable frames stored in database205as well as the broadcast video feed. The output from pose detector212may be pose data stored in database215associated with data set generator122. Ball detector214may be configured to detect and track the ball (or puck) within the broadcast feed. Data set generator122may provide, as input, to ball detector214both the trackable frames stored in database205and the broadcast video feed. In some embodiments, ball detector214may utilize a faster R-CNN to detect and track the ball in the trackable frames and broadcast video feed. The output from ball detector214may be ball detection data stored in database215associated with data set generator122. Playing surface segmenter216may be configured to identify playing surface markings in the broadcast feed. Data set generator122may provide, as input, to playing surface segmenter216both trackable frames stored in database205and the broadcast video feed. In some embodiments, playing surface segmenter216may be configured to utilize a neural network to identify playing surface markings. The output from playing surface segmenter216may be playing surface markings stored in database215associated with data set generator122. Accordingly, data set generator120may generate information directed to player location, ball location, and portions of the court in all trackable frames for further analysis. At step408, organization computing system104may calibrate the camera in each trackable frame based on the data sets generated in step406. For example, camera calibrator124may be configured to calibrate the camera in each trackable frame by generating a homography matrix, using the trackable frames and body pose information. The homography matrix allows camera calibrator124to take those trajectories of each player in a given frame and project those trajectories into real-world coordinates. By projecting player positions and trajectories into real world coordinates for each frame, camera calibrator124may ensure that the camera is calibrated for each frame. At step410, organization computing system104may be configured to generate or predict a track for each player. For example, player tracking agent126may be configured to generate or predict a track for each player in a match. Player tracking agent126may receive, as input, trackable frames, pose data, calibration data, and broadcast video frames. Using such inputs, player tracking agent126may be configured to construct player motion throughout a given match. Further, player tracking agent126may be configured to predict player trajectories given previous motion of each player.
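Once a homography matrix is available for a frame (step408), projecting a player's image-plane position onto the playing surface is a standard 2D-to-2D perspective transform. The following is a minimal sketch using OpenCV, with an assumed per-frame 3×3 matrix H.

```python
import numpy as np
import cv2

def project_to_court(points_xy, H):
    """points_xy: (N, 2) image coordinates, e.g. players' foot positions;
    H: 3x3 homography for the frame. Returns (N, 2) playing-surface coordinates."""
    pts = np.asarray(points_xy, dtype=np.float32).reshape(-1, 1, 2)
    out = cv2.perspectiveTransform(pts, np.asarray(H, dtype=np.float32))
    return out.reshape(-1, 2)
```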
FIG.5is a flow diagram illustrating a method500of generating trackable frames, according to example embodiments. Method500may correspond to operation404discussed above in conjunction withFIG.4. Method500may begin at step502. At step502, organization computing system104may receive (or retrieve) a broadcast feed for an event. In some embodiments, the broadcast feed may be a live feed received in real-time (or near real-time) from camera system102. In some embodiments, the broadcast feed may be a broadcast feed of a game that has concluded. Generally, the broadcast feed may include a plurality of frames of video data. Each frame may capture a different camera perspective. At step504, organization computing system104may generate a set of frames for image classification. For example, auto-clipping agent120may utilize a PCA analysis to perform per frame feature extraction from the broadcast feed. Given, for example, a pre-recorded video, auto-clipping agent120may extract a frame every X-seconds (e.g., 10 seconds) to build a PCA model of the video. In some embodiments, auto-clipping agent120may generate the PCA model using incremental PCA, through which auto-clipping agent120may select a top subset of components (e.g., top 120 components) to generate the PCA model. Auto-clipping agent120may be further configured to extract one frame every X seconds (e.g., one second) from the broadcast stream and compress the frames using PCA model. In some embodiments, auto-clipping agent120may utilize PCA model to compress the frames into 120-dimensional form. For example, auto-clipping agent120may solve for the principal components in a per video manner and keep the top 100 components per frame to ensure accurate clipping. Such subset of compressed frames may be considered the set of frames for image classification. In other words, PCA model may be used to compress each frame to a small vector, so that clustering can be conducted on the frames more efficiently. The compression may be conducted by selecting the top N components from PCA model to represent the frame. In some examples, N may be 100. At step506, organization computing system104may assign each frame in the set of frames to a given cluster. For example, auto-clipping agent120may be configured to center, normalize, and cluster the top 120 components into a plurality of clusters. In some embodiments, for clustering of compressed frames, auto-clipping agent120may implement k-means clustering. In some embodiments, auto-clipping agent120may set k=9 clusters. K-means clustering attempts to take some data $x = \{x_1, x_2, \ldots, x_n\}$ and divide it into $k$ subsets, $S = \{S_1, S_2, \ldots, S_k\}$, by optimizing: $\arg\min_S \sum_{j}^{k} \sum_{x \in S_j} \lVert x - \mu_j \rVert^2$, where $\mu_j$ is the mean of the data in the set $S_j$. In other words, clustering model204attempts to find clusters with the smallest inter-cluster variance using k-means clustering techniques. Clustering model204may label each frame with its respective cluster number (e.g., cluster 1, cluster 2, . . . , cluster k). At step508, organization computing system104may classify each frame as trackable or untrackable. For example, auto-clipping agent120may utilize a neural network to classify each frame as trackable or untrackable. A trackable frame may be representative of a frame that captures the unified view (e.g., high sideline view). An untrackable frame may be representative of a frame that does not capture the unified view.
To train the neural network (e.g., neural network206), an input data set that includes thousands of frames pre-labeled as trackable or untrackable that are run through the PCA model may be used. Each compressed frame and label pair (i.e., cluster number and trackable/untrackable) may be provided to the neural network for training. Accordingly, once trained, auto-clipping agent120may classify each frame as untrackable or trackable. As such, each frame may have two labels: a cluster number and a trackable/untrackable classification. At step510, organization computing system104may compare each cluster to a threshold. For example, auto-clipping agent120may utilize the two labels to determine if a given cluster is deemed trackable or untrackable. In some embodiments, if auto-clipping agent120determines that a threshold number of frames in a cluster are considered trackable (e.g., 80%), auto-clipping agent120may conclude that all frames in the cluster are trackable. Further, if auto-clipping agent120determines that less than a threshold number of frames in a cluster are considered trackable (e.g., 30% and below), auto-clipping agent120may conclude that all frames in the cluster are untrackable. Still further, if auto-clipping agent120determines that a certain number of frames in a cluster are considered trackable (e.g., between 30% and 80%), auto-clipping agent120may request that an administrator further analyze the cluster. If at step510organization computing system104determines that greater than a threshold number of frames in the cluster are trackable, then at step512auto-clipping agent120may classify the cluster as trackable. If, however, at step510organization computing system104determines that less than a threshold number of frames in the cluster are trackable, then at step514, auto-clipping agent120may classify the cluster as untrackable. FIG.6is a block diagram600illustrating aspects of operations discussed above in conjunction with method500, according to example embodiments. As shown, block diagram600may include a plurality of sets of operations602-608. At set of operations602, video data (e.g., broadcast video) may be provided to auto-clipping agent120. Auto-clipping agent120may extract frames from the video. In some embodiments, auto-clipping agent120may extract frames from the video at a low frame rate. An incremental PCA algorithm may be used by auto-clipping agent120to select the top 120 components (e.g., frames) from the set of frames extracted by auto-clipping agent120. Such operations may generate a video specific PCA model. At set of operations604, video data (e.g., broadcast video) may be provided to auto-clipping agent120. Auto-clipping agent120may extract frames from the video. In some embodiments, auto-clipping agent120may extract frames from the video at a medium frame rate. The video specific PCA model may be used by auto-clipping agent120to compress the frames extracted by auto-clipping agent120. At set of operations606, the compressed frames and a pre-selected number of desired clusters may be provided to auto-clipping agent120. Auto-clipping agent120may utilize k-means clustering techniques to group the frames into one or more clusters, as set forth by the pre-selected number of desired clusters. Auto-clipping agent120may assign a cluster label to each compressed frame. Auto-clipping agent120may further be configured to classify each frame as trackable or untrackable. Auto-clipping agent120may label each respective frame as such.
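The cluster-level decision rule described above (trackable, untrackable, or flagged for human review) can be sketched with a small helper. The function name and default thresholds mirror the worked percentages in the text but are otherwise illustrative assumptions.

```python
from collections import Counter

def classify_cluster(frame_flags, upper=0.8, lower=0.3):
    """Decide a cluster-level label from per-frame trackable/untrackable flags.

    frame_flags: iterable of booleans (True = frame classified trackable).
    Thresholds mirror the examples in the text (>=80% trackable -> trackable,
    <=30% trackable -> untrackable, otherwise flag for administrator review).
    """
    counts = Counter(frame_flags)
    frac_trackable = counts[True] / max(sum(counts.values()), 1)
    if frac_trackable >= upper:
        return "trackable"
    if frac_trackable <= lower:
        return "untrackable"
    return "needs_review"

print(classify_cluster([True] * 17 + [False] * 3))   # 'trackable'
print(classify_cluster([True] * 10 + [False] * 10))  # 'needs_review'
```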
At set of operations608, auto-clipping agent120may analyze each cluster to determine if the cluster includes at least a threshold number of trackable frames. For example, as illustrated, if 80% of the frames of a cluster are classified as trackable, then auto-clipping agent120may consider the entire cluster as trackable. If, however, less than 80% of a cluster is classified as trackable, auto-clipping agent120may determine if at least a second threshold number of frames in the cluster are untrackable. For example, as illustrated, if 70% of the frames of a cluster are classified as untrackable, auto-clipping agent120may consider the entire cluster untrackable. If, however, less than 70% of the frames of the cluster are classified as untrackable, i.e., the cluster is between 30% and 80% trackable, then human annotation may be requested. FIG.7is a flow diagram illustrating a method700of calibrating a camera for each trackable frame, according to example embodiments. Method700may correspond to operation408discussed above in conjunction withFIG.4. Method700may begin at step702. At step702, organization computing system104may retrieve video data and pose data for analysis. For example, camera calibrator124may retrieve from database205the trackable frames for a given match and pose data for players in each trackable frame. Following step702, camera calibrator124may execute two parallel processes to generate a homography matrix for each frame. Accordingly, the following operations are not meant to be construed as necessarily being performed sequentially; they may instead be performed in parallel or sequentially. At step704, organization computing system104may remove players from each trackable frame. For example, camera calibrator124may parse each trackable frame retrieved from database205to identify one or more players contained therein. Camera calibrator124may remove the players from each trackable frame using the pose data retrieved from database205. For example, camera calibrator124may identify those pixels corresponding to pose data and remove the identified pixels from a given trackable frame. At step706, organization computing system104may identify the motion of objects (e.g., surfaces, edges, etc.) between successive trackable frames. For example, camera calibrator124may analyze successive trackable frames, with players removed therefrom, to determine the motion of objects from one frame to the next. In other words, optical flow module226may identify the flow field between successive trackable frames. At step708, organization computing system104may match an output from playing surface segmenter216to a set of templates. For example, camera calibrator124may match one or more frames in which the image of the playing surface is clear to one or more templates. Camera calibrator124may parse the set of trackable clips to identify those clips that provide a clear picture of the playing surface and the markings therein. Based on the selected clips, camera calibrator124may compare such images to playing surface templates. Each template may represent a different camera perspective of the playing surface. Those frames that are able to be matched to a given template are considered keyframes. In some embodiments, camera calibrator124may implement a neural network to match the one or more frames. In some embodiments, camera calibrator124may implement cross-correlation to match the one or more frames. At step710, organization computing system104may fit a playing surface model to each keyframe.
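Removing player pixels before estimating camera motion, as described at step704, can be sketched as follows. The masking radius, function name, and keypoint format are assumptions of this illustration rather than the disclosed implementation.

```python
import numpy as np

def mask_players(frame: np.ndarray, keypoints_per_player, radius: int = 12) -> np.ndarray:
    """Zero out pixels around detected body keypoints so that camera-motion
    estimation (optical flow between successive frames) is not biased by
    player motion. 'keypoints_per_player' is a list of (x, y) keypoint arrays
    produced by a pose detector; the radius is an illustrative patch size.
    """
    out = frame.copy()
    h, w = out.shape[:2]
    for kps in keypoints_per_player:
        for x, y in kps:
            x0, x1 = max(int(x) - radius, 0), min(int(x) + radius, w)
            y0, y1 = max(int(y) - radius, 0), min(int(y) + radius, h)
            out[y0:y1, x0:x1] = 0
    return out

frame = np.random.rand(720, 1280)                 # stand-in trackable frame
players = [np.array([[640, 360], [650, 400]])]    # stand-in pose keypoints
clean = mask_players(frame, players)
```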
For example, camera calibrator124may be configured to receive, as input, the identified keyframes. Camera calibrator124may implement a neural network to fit a playing surface model to segmentation information of the playing surface. By fitting the playing surface model to such output, camera calibrator124may generate homography matrices for each keyframe. At step712, organization computing system104may generate a homography matrix for each trackable frame. For example, camera calibrator124may utilize the flow fields identified in step706and the homography matrices for each keyframe to generate a homography matrix for each frame. The homography matrix may be used to project the track or position of players into real-world coordinates. For example, given the geometric transform represented by the homography matrix, camera calibrator124may use this transform to project the location of players on the image to real-world coordinates on the playing surface. At step714, organization computing system104may calibrate each camera based on the homography matrix. FIG.8is a block diagram800illustrating aspects of operations discussed above in conjunction with method700, according to example embodiments. As shown, block diagram800may include inputs802, a first set of operations804, and a second set of operations806. First set of operations804and second set of operations806may be performed in parallel. Inputs802may include video clips808and pose detection810. In some embodiments, video clips808may correspond to trackable frames generated by auto-clipping agent120. In some embodiments, pose detection810may correspond to pose data generated by pose detector212. As illustrated, only video clips808may be provided as input to first set of operations804; both video clips808and pose detection810may be provided as input to second set of operations806. First set of operations804may include semantic segmentation812, keyframe matching814, and STN fitting816. At semantic segmentation812, playing surface segmenter216may be configured to identify playing surface markings in a broadcast feed. In some embodiments, playing surface segmenter216may be configured to utilize a neural network to identify playing surface markings. Such segmentation information may be generated in advance and provided to camera calibrator124from database215. At keyframe matching814, keyframe matching module224may be configured to match one or more frames in which the image of the playing surface is clear to one or more templates. At STN fitting816, STN224may implement a neural network to fit a playing surface model to segmentation information of the playing surface. By fitting the playing surface model to such output, STN224may generate homography matrices for each keyframe. Second set of operations806may include camera motion estimation818. At camera motion estimation818, optical flow module226may be configured to identify the pattern of motion of objects from one trackable frame to another. For example, optical flow module226may use body pose information to remove players from the trackable frame information. Once removed, optical flow module226may determine the motion between frames to identify the motion of the camera between successive frames. First set of operations804and second set of operations806may lead to homography interpolation816. Optical flow module226and STN224may work in conjunction to generate a homography matrix for each trackable frame, such that a camera may be calibrated for each frame.
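One plausible way to realize the homography interpolation described above is sketched below. The composition rule, helper names, and placeholder transforms are assumptions of this sketch, not the patent's algorithm; it only illustrates propagating a keyframe's image-to-court homography through frame-to-frame camera-motion transforms estimated from the flow field.

```python
import numpy as np

def propagate_homographies(H_key: np.ndarray, frame_transforms: list) -> list:
    """Propagate a keyframe's image-to-court homography to the frames that
    follow it. 'frame_transforms' holds 3x3 matrices A_t mapping pixels of
    frame t to pixels of frame t+1 (e.g., estimated from optical flow after
    players are removed)."""
    homographies = [H_key]
    for A_t in frame_transforms:
        # If A_t maps frame t -> t+1, then pixels of frame t+1 map to the
        # court via H_{t+1} = H_t @ inv(A_t).
        homographies.append(homographies[-1] @ np.linalg.inv(A_t))
    return homographies

H_key = np.eye(3)
per_frame = [np.eye(3) for _ in range(5)]   # placeholder camera-motion transforms
print(len(propagate_homographies(H_key, per_frame)))  # 6 calibrated frames
```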
The homography matrix may be used to project the track or position of players into real-world coordinates. FIG.9is a flow diagram illustrating a method900of tracking players, according to example embodiments. Method900may correspond to operation410discussed above in conjunction withFIG.4. Method900may begin at step902. At step902, organization computing system104may retrieve a plurality of trackable frames for a match. Each of the plurality of trackable frames may include one or more sets of metadata associated therewith. Such metadata may include, for example, body pose information and camera calibration data. In some embodiments, player tracking agent126may further retrieve broadcast video data. At step904, organization computing system104may generate a set of short tracklets. For example, player tracking agent126may match pairs of player patches, which may be derived from pose information, based on appearance and distance to generate a set of short tracklets. For example, let $H_j^t$ be the player patch of the $j$-th player at time $t$, and let $I_j^t=\{x_j^t, y_j^t, w_j^t, h_j^t\}$ be the image coordinates $x_j^t, y_j^t$, the width $w_j^t$, and the height $h_j^t$ of the $j$-th player at time $t$. Using this, player tracking agent126may associate any pair of detections using the appearance cross-correlation $C_{ij}^t=H_i^t\star H_j^{t+1}$ and the distance term $L_{ij}^t=\lVert I_i^t-I_j^{t+1}\rVert_2^2$ by finding: $\arg\max_{ij}\,(C_{ij}^t+L_{ij}^t)$. Performing this for every pair may generate a set of short tracklets. The end points of these tracklets may then be associated with each other based on motion consistency and color histogram similarity. For example, let $v_i$ be the velocity extrapolated from the end of the $i$-th tracklet and $v_j$ be the velocity extrapolated from the beginning of the $j$-th tracklet. Then $c_{ij}=v_i\cdot v_j$ may represent the motion consistency score. Furthermore, let $p_i(h)$ represent the likelihood of a color $h$ being present in an image patch $i$. Player tracking agent126may measure the color histogram similarity using the Bhattacharyya distance: $D_B(p_i,p_j)=-\ln\big(BC(p_i,p_j)\big)$, with $BC(p_i,p_j)=\sum_h\sqrt{p_i(h)\,p_j(h)}$. At step906, organization computing system104may connect gaps between each set of short tracklets. For example, recall that player tracking agent126finds the matching pair of tracklets by finding: $\arg\max_{ij}\,(c_{ij}+D_B(p_i,p_j))$. Solving for every pair of broken tracklets may result in a set of clean tracklets, while leaving some tracklets with large, i.e., many frames, gaps. To connect the large gaps, player tracking agent126may augment affinity measures to include a motion field estimation, which may account for the change of player direction that occurs over many frames. The motion field may be a vector field which measures what direction a player at a point $x$ on the playing surface would be after some time $\lambda$. For example, let $X_i=\{x_i^t\}_{t\in(0,T)}$ be the court position of a player $i$ at every time $t$. Then, there may exist a pair of points that have a displacement $d_i^\lambda(x_i^t)=x_i^t-x_i^{t+\lambda}$ if $\lambda<T$. Accordingly, the motion field may then be: $V(x,\lambda)=G(x,5)*\sum_i d_i^\lambda(x_i^t)$, where $G(x,5)$ may be a Gaussian kernel with a standard deviation equal to about five feet. In other words, the motion field may be a Gaussian blur of all displacements. At step908, organization computing system104may predict a motion of an agent based on the motion field. For example, player tracking agent126may use a neural network (e.g., neural network232) to predict player trajectories given ground truth player trajectories.
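The two tracklet-affinity terms defined above, color-histogram similarity via the Bhattacharyya distance and motion consistency via a velocity dot product, can be computed with a few lines. The function names and the toy inputs are illustrative only.

```python
import numpy as np

def bhattacharyya_distance(p: np.ndarray, q: np.ndarray) -> float:
    """D_B(p, q) = -ln(sum_h sqrt(p(h) q(h))) for two normalized color histograms."""
    bc = np.sum(np.sqrt(p * q))
    return -np.log(max(bc, 1e-12))

def motion_consistency(v_end: np.ndarray, v_start: np.ndarray) -> float:
    """c_ij = v_i . v_j: dot product of the velocity extrapolated from the end
    of tracklet i and the velocity extrapolated from the start of tracklet j."""
    return float(np.dot(v_end, v_start))

# Illustrative usage with two 32-bin hue histograms and 2D court velocities.
p = np.ones(32) / 32
q = np.ones(32) / 32
print(bhattacharyya_distance(p, q))          # approximately 0 for identical histograms
print(motion_consistency(np.array([1.0, 0.2]), np.array([0.9, 0.1])))
```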
Given a set of ground truth player trajectories $X_i$, player tracking agent126may be configured to generate the set $\hat{V}(x,\lambda)$, where $\hat{V}(x,\lambda)$ may be the predicted motion field. Player tracking agent126may train neural network232to reduce (e.g., minimize) $\lVert V(x,\lambda)-\hat{V}(x,\lambda)\rVert_2^2$. Player tracking agent126may then generate the affinity score for any tracking gap of size $\lambda$ by: $K_{ij}=V(x,\lambda)\cdot d_{ij}$, where $d_{ij}=x_i^t-x_j^{t+\lambda}$ is the displacement vector between all broken tracks with a gap size of $\lambda$. Accordingly, player tracking agent126may solve for the matching pairs as recited above. For example, given the affinity score, player tracking agent126may assign every pair of broken tracks using a Hungarian algorithm. The Hungarian algorithm (e.g., Kuhn-Munkres) may optimize the best set of matches under a constraint that all pairs are to be matched. At step910, organization computing system104may output a graphical representation of the prediction. For example, interface agent128may be configured to generate one or more graphical representations corresponding to the tracks for each player generated by player tracking agent126. For example, interface agent128may be configured to generate one or more graphical user interfaces (GUIs) that include graphical representations of each player tracking prediction generated by player tracking agent126. In some situations, during the course of a match, players or agents have the tendency to wander outside of the point-of-view of the camera. Such an issue may present itself during an injury, lack of hustle by a player, quick turnover, quick transition from offense to defense, and the like. Accordingly, a player in a first trackable frame may no longer be in a successive second or third trackable frame. Player tracking agent126may address this issue via re-identification agent234. FIG.10is a flow diagram illustrating a method1000of tracking players, according to example embodiments. Method1000may correspond to operation410discussed above in conjunction withFIG.4. Method1000may begin at step1002. At step1002, organization computing system104may retrieve a plurality of trackable frames for a match. Each of the plurality of trackable frames may include one or more sets of metadata associated therewith. Such metadata may include, for example, body pose information and camera calibration data. In some embodiments, player tracking agent126may further retrieve broadcast video data. At step1004, organization computing system104may identify a subset of short tracks in which a player has left the camera's line of vision. Each track may include a plurality of image patches associated with at least one player. An image patch may refer to a subset of a corresponding frame of a plurality of trackable frames. In some embodiments, each track X may include a player identity label y. In some embodiments, each player patch I in a given track X may include pose information generated by data set generator122. For example, given an input video, pose detection, and trackable frames, re-identification agent234may generate a track collection that includes many short broken tracks of players. At step1006, organization computing system104may generate a gallery for each track. For example, given those small tracks, re-identification agent234may build a gallery for each track. Re-identification agent234may build a gallery for each track where the jersey number of a player (or some other static feature) is always visible.
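The Hungarian assignment referred to above is available off the shelf; a minimal sketch using scipy is shown below. The toy affinity matrix is illustrative only.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_broken_tracks(affinity: np.ndarray):
    """Assign ends of broken tracks to beginnings of later tracks so that the
    total affinity (e.g., K_ij = V(x, lambda) . d_ij) is maximized. The
    Hungarian algorithm minimizes cost, so the affinity matrix is negated."""
    row_idx, col_idx = linear_sum_assignment(-affinity)
    return list(zip(row_idx, col_idx))

# Illustrative 3x3 affinity matrix between 3 track endings and 3 track starts.
K = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.8, 0.1],
              [0.0, 0.3, 0.7]])
print(match_broken_tracks(K))  # [(0, 0), (1, 1), (2, 2)]
```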
The body pose information generated by data set generator122allows re-identification agent234to determine each player's orientation. For example, re-identification agent234may utilize a heuristic method, which may use the normalized shoulder width to determine the orientation: $S_{orient}=\dfrac{\lVert l_{Lshoulder}-l_{Rshoulder}\rVert_2}{\lVert l_{Neck}-l_{Hip}\rVert_2}$, where $l$ may represent the location of one body part. The shoulder width may be normalized by the length of the torso so that the effect of scale may be eliminated. As the two shoulders should be far apart when a player faces toward or away from the camera, re-identification agent234may use those patches whose $S_{orient}$ is larger than a threshold to build the gallery. Accordingly, each track $X_n$ may include a gallery: $G_n=\{I_i\mid S_{orient,i}>thresh\},\ \forall I_i\in X_n$. At step1008, organization computing system104may match tracks using a convolutional autoencoder. For example, re-identification agent234may use a conditional autoencoder (e.g., conditional autoencoder240) to identify one or more features in each track. For example, unlike conventional approaches to re-identification issues, players in team sports may have very similar appearance features, such as clothing style, clothing color, and skin color. One of the more intuitive differences may be the jersey number that may be shown at the front and/or back side of each jersey. In order to capture those specific features, re-identification agent234may train the conditional autoencoder to identify such features. In some embodiments, the conditional autoencoder may be a three-layer convolutional autoencoder, where the kernel sizes may be 3×3 for all three layers, in which there are 64, 128, 128 channels respectively. Those hyper-parameters may be tuned to ensure that the jersey number may be recognized from the reconstructed images so that the desired features may be learned in the autoencoder. In some embodiments, $f(I_i)$ may be used to denote the features that are learned from image patch $I_i$. Using a specific example, re-identification agent234may identify a first track that corresponds to a first player. Using conditional autoencoder240, re-identification agent234may learn a first set of jersey features associated with the first track, based on, for example, a first set of image patches included in or associated with the first track. Re-identification agent234may further identify a second track that may initially correspond to a second player. Using conditional autoencoder240, re-identification agent234may learn a second set of jersey features associated with the second track, based on, for example, a second set of image patches included in or associated with the second track. At step1010, organization computing system104may measure a similarity between matched tracks using a Siamese network. For example, re-identification agent234may train a Siamese network (e.g., Siamese network242) to measure the similarity between two image patches based on their feature representations $f(I)$. Given two image patches, their feature representations $f(I_i)$ and $f(I_j)$ may be flattened, connected, and fed into a perception network. In some embodiments, the $L_2$ norm may be used to connect the two sub-networks of $f(I_i)$ and $f(I_j)$. In some embodiments, the perception network may include three layers, which include 1024, 512, and 256 hidden units, respectively. Such a network may be used to measure the similarity $s(I_i,I_j)$ between every pair of image patches of the two tracks that have no time overlap.
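The orientation heuristic and gallery construction can be illustrated directly from pose keypoints. The keypoint names, threshold, and function names below are assumptions of this sketch, not the disclosed implementation.

```python
import numpy as np

def orientation_score(keypoints: dict) -> float:
    """S_orient = ||l_Lshoulder - l_Rshoulder||_2 / ||l_Neck - l_Hip||_2.
    'keypoints' maps body-part names to (x, y) image coordinates from the
    pose detector; the names used here are illustrative."""
    shoulder = np.linalg.norm(np.subtract(keypoints["l_shoulder"], keypoints["r_shoulder"]))
    torso = np.linalg.norm(np.subtract(keypoints["neck"], keypoints["hip"]))
    return shoulder / max(torso, 1e-6)

def build_gallery(track_patches, track_keypoints, thresh: float = 0.5):
    """Keep only the image patches whose orientation score exceeds a threshold,
    i.e., frames where the jersey number is likely visible."""
    return [patch for patch, kps in zip(track_patches, track_keypoints)
            if orientation_score(kps) > thresh]

kps = {"l_shoulder": (100, 50), "r_shoulder": (140, 52), "neck": (120, 55), "hip": (121, 130)}
print(round(orientation_score(kps), 3))
```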
In order to increase the robustness of the prediction, the final similarity score of the two tracks may be the average of all pairwise scores in their respective galleries: $S(X_n,X_m)=\dfrac{1}{\lvert G_n\rvert\,\lvert G_m\rvert}\sum_{i\in G_n,\,j\in G_m}s(I_i,I_j)$. Continuing with the aforementioned example, re-identification agent234may utilize Siamese network242to compute a similarity score between the first set of learned jersey features and the second set of learned jersey features. At step1012, organization computing system104may associate the tracks if their similarity score is higher than a predetermined threshold. For example, re-identification agent234may compute a similarity score for every two tracks that do not overlap in time. If the score is higher than some threshold, re-identification agent234may associate those two tracks. Continuing with the above example, re-identification agent234may associate the first track and the second track if, for example, the similarity score generated by Siamese network242is at least higher than a threshold value. Assuming the similarity score is higher than the threshold value, re-identification agent234may determine that the first player in the first track and the second player in the second track are indeed one and the same. FIG.11is a block diagram1100illustrating aspects of operations discussed above in conjunction with method1000, according to example embodiments. As shown, block diagram1100may include input video1102, pose detection1104, player tracking1106, track collection1108, gallery building and pairwise matching1110, and track connection1112. Block diagram1100illustrates a general pipeline of method1000provided above. Given input video1102, pose detection information1104(e.g., generated by pose detector212), and player tracking information1106(e.g., generated by one or more of player tracking agent126, auto-clipping agent120, and camera calibrator124), re-identification agent234may generate track collection1108. Each track collection1108may include a plurality of short broken tracks (e.g., track1114) of players. Each track1114may include one or more image patches1116contained therein. Given the tracks1114, re-identification agent234may generate a gallery1110for each track. For example, gallery1110may include those image patches1118in a given track that include an image of a player whose orientation satisfies a threshold value. In other words, re-identification agent234may generate gallery1110for each track1114that includes image patches1118of each player, such that the player's number may be visible in each frame. Image patches1118may be a subset of image patches1116. Re-identification agent234may then pairwise match each frame to compute a similarity score via the Siamese network. For example, as illustrated, re-identification agent234may match a first frame from track2with a second frame from track1and feed the frames into the Siamese network. Re-identification agent234may then connect tracks1112based on the similarity scores. For example, if the similarity score of two frames exceeds some threshold, re-identification agent234may connect or associate those tracks. FIG.12is a block diagram illustrating architecture1200of Siamese network242of re-identification agent234, according to example embodiments. As illustrated, Siamese network242may include two sub-networks1202,1204, and a perception network1205. Each of the two sub-networks1202,1204may be configured similarly.
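The gallery-averaged track similarity and the threshold-based association step can be sketched as below. The patch-similarity callable stands in for the Siamese network's output, and the threshold and cosine-similarity stand-in are assumptions of this illustration.

```python
import numpy as np

def track_similarity(gallery_n, gallery_m, patch_similarity) -> float:
    """S(X_n, X_m): average of the pairwise patch similarities s(I_i, I_j)
    over the two galleries. 'patch_similarity' stands in for the Siamese
    network's output."""
    scores = [patch_similarity(i, j) for i in gallery_n for j in gallery_m]
    return float(np.mean(scores)) if scores else 0.0

def associate(gallery_n, gallery_m, patch_similarity, thresh: float = 0.8) -> bool:
    """Associate two non-overlapping tracks when their average similarity
    exceeds a threshold."""
    return track_similarity(gallery_n, gallery_m, patch_similarity) > thresh

# Toy usage with random 'feature vectors' and cosine similarity as a stand-in.
rng = np.random.default_rng(1)
g1 = [rng.random(16) for _ in range(3)]
g2 = [rng.random(16) for _ in range(4)]
cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(associate(g1, g2, cos, thresh=0.7))
```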
For example, sub-network1202may include a first convolutional layer1206, a second convolutional layer1208, and a third convolutional layer1210. First sub-network1202may receive, as input, a player patch I1and output a set of features learned from player patch I1(denoted f(I1)). Sub-network1204may include a first convolutional layer1216, a second convolutional layer1218, and a third convolutional layer1220. Second sub-network1204may receive, as input, a player patch I2and may output a set of features learned from player patch I2(denoted f(I2)). The output from sub-network1202and sub-network1204may be an encoded representation of the respective player patches I1, I2. In some embodiments, the output from sub-network1202and sub-network1204may be followed by a flatten operation, which may generate respective feature vectors f(I1) and f(I2), respectively. In some embodiments, each feature vector f(I1) and f(I2) may include 10240 units. In some embodiments, the L2 norm of f(I1) and f(I2) may be computed and used as input to perception network1205. Perception network1205may include three layers1222-1226. In some embodiments, layer1222may include 1024 hidden units. In some embodiments, layer1224may include 512 hidden units. In some embodiments, layer1226may include 256 hidden units. Perception network1205may output a similarity score between image patches I1and I2. FIG.13Aillustrates a system bus computing system architecture1300, according to example embodiments. System1300may be representative of at least a portion of organization computing system104. One or more components of system1300may be in electrical communication with each other using a bus1305. System1300may include a processing unit (CPU or processor)1310and a system bus1305that couples various system components including the system memory1315, such as read only memory (ROM)1320and random access memory (RAM)1325, to processor1310. System1300may include a cache of high-speed memory connected directly with, in close proximity to, or integrated as part of processor1310. System1300may copy data from memory1315and/or storage device1330to cache1312for quick access by processor1310. In this way, cache1312may provide a performance boost that avoids processor1310delays while waiting for data. These and other modules may control or be configured to control processor1310to perform various actions. Other system memory1315may be available for use as well. Memory1315may include multiple different types of memory with different performance characteristics. Processor1310may include any general purpose processor and a hardware module or software module, such as service11332, service21334, and service31336stored in storage device1330, configured to control processor1310as well as a special-purpose processor where software instructions are incorporated into the actual processor design. Processor1310may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor may be symmetric or asymmetric. To enable user interaction with the computing device1300, an input device1345may represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. An output device1335may also be one or more of a number of output mechanisms known to those of skill in the art. 
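A compact sketch of the comparison architecture described above is given below, with two weight-sharing convolutional encoders feeding a 1024/512/256 perception head. Kernel sizes, strides, the squared-difference joint representation (the text connects the sub-networks via an L2 norm), and the sigmoid output are assumptions of this sketch rather than the disclosed design.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Three-layer convolutional encoder producing a flattened feature vector f(I)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return torch.flatten(self.conv(x), start_dim=1)

class SiameseSimilarity(nn.Module):
    """Weight-sharing encoders followed by a 1024/512/256 perception head that
    scores the similarity of two player patches."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.encoder = Encoder()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, patch_a, patch_b):
        fa, fb = self.encoder(patch_a), self.encoder(patch_b)
        return self.head((fa - fb).pow(2))  # squared difference as the joint representation

# Usage on 64x64 RGB patches: feature dim is 128 * 8 * 8 after three stride-2 convolutions.
model = SiameseSimilarity(feat_dim=128 * 8 * 8)
a, b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
print(model(a, b).item())
```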
In some instances, multimodal systems may enable a user to provide multiple types of input to communicate with computing device1300. Communications interface1340may generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed. Storage device1330may be a non-volatile memory and may be a hard disk or other types of computer readable media which may store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs)1325, read only memory (ROM)1320, and hybrids thereof. Storage device1330may include services1332,1334, and1336for controlling the processor1310. Other hardware or software modules are contemplated. Storage device1330may be connected to system bus1305. In one aspect, a hardware module that performs a particular function may include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as processor1310, bus1305, display1335, and so forth, to carry out the function. FIG.13Billustrates a computer system1350having a chipset architecture that may represent at least a portion of organization computing system104. Computer system1350may be an example of computer hardware, software, and firmware that may be used to implement the disclosed technology. System1350may include a processor1355, representative of any number of physically and/or logically distinct resources capable of executing software, firmware, and hardware configured to perform identified computations. Processor1355may communicate with a chipset1360that may control input to and output from processor1355. In this example, chipset1360outputs information to output1365, such as a display, and may read and write information to storage device1370, which may include magnetic media, and solid state media, for example. Chipset1360may also read data from and write data to RAM1375. A bridge1380for interfacing with a variety of user interface components1385may be provided for interfacing with chipset1360. Such user interface components1385may include a keyboard, a microphone, touch detection and processing circuitry, a pointing device, such as a mouse, and so on. In general, inputs to system1350may come from any of a variety of sources, machine generated and/or human generated. Chipset1360may also interface with one or more communication interfaces1390that may have different physical interfaces. Such communication interfaces may include interfaces for wired and wireless local area networks, for broadband wireless networks, as well as personal area networks. Some applications of the methods for generating, displaying, and using the GUI disclosed herein may include receiving ordered datasets over the physical interface or be generated by the machine itself by processor1355analyzing data stored in storage1370or1375. Further, the machine may receive inputs from a user through user interface components1385and execute appropriate functions, such as browsing functions by interpreting these inputs using processor1355. It may be appreciated that example systems1300and1350may have more than one processor1310or be part of a group or cluster of computing devices networked together to provide greater processing capability. 
While the foregoing is directed to embodiments described herein, other and further embodiments may be devised without departing from the basic scope thereof. For example, aspects of the present disclosure may be implemented in hardware or software or a combination of hardware and software. One embodiment described herein may be implemented as a program product for use with a computer system. The program(s) of the program product define functions of the embodiments (including the methods described herein) and can be contained on a variety of computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory (ROM) devices within a computer, such as CD-ROM disks readable by a CD-ROM drive, flash memory, ROM chips, or any type of solid-state non-volatile memory) on which information is permanently stored; and (ii) writable storage media (e.g., floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access memory) on which alterable information is stored. Such computer-readable storage media, when carrying computer-readable instructions that direct the functions of the disclosed embodiments, are embodiments of the present disclosure. It will be appreciated by those skilled in the art that the preceding examples are exemplary and not limiting. It is intended that all permutations, enhancements, equivalents, and improvements thereto that are apparent to those skilled in the art upon a reading of the specification and a study of the drawings are included within the true spirit and scope of the present disclosure. It is therefore intended that the following appended claims include all such modifications, permutations, and equivalents as fall within the true spirit and scope of these teachings. | 71,956 |
11861851 | DETAILED DESCRIPTION Provided are techniques for making anatomical and functional assessments of coronary artery disease (CAD) by analyzing dynamic angiography image data, using physics-based machine learning from neural networks, high-fidelity model order reduction using Graph Theory, together with machine learning-based anatomical segmentation, and high-fidelity computer simulations of hemodynamics, to produce an automated workflow that can offer superior diagnostic performance for CAD. Techniques are provided for performing non-invasive Fractional Flow Reserve (FFR) quantification based on angiographic data, relying on machine learning algorithms for image segmentation, and relying on physics-based machine learning and computational fluid dynamics (CFD) simulation for more accurate functional assessment of the vessel. In particular, in some examples, two-dimensional dynamic angiography data is used to capture the transport of dye down the vessels of interest. This dynamic information can be used to inform the computational simulation and therefore to obtain more accurate predictions of the FFR, particularly in borderline cases. Further, although angiography data does not offer three-dimensional anatomical information, the present techniques include processes for deploying image reconstruction algorithms to obtain three-dimensional (3D) and one-dimensional (1D) geometric models of a patient's vasculature, which are then used for computer simulation of hemodynamics. While techniques are described herein in terms of determining FFR, the same techniques may be used to calculate other flow reserve metrics. Therefore, references in the examples herein to determinations of FFR include determinations of instantaneous wave-free ratio (iFR), quantitative flow ratio (QFR), etc. In some examples, systems and methods are provided for assessing coronary artery disease. The system may receive angiography image data of a vessel inspection region for a subject. That angiography image data may contain a plurality of angiography images captured over a sampling time period. The system may apply that angiography image data to a first machine learning model, a vessel segmentation machine learning model. The vessel segmentation machine learning model may generate two-dimensional (2D) segmented vessel images for the vessel inspection region, and from these 2D segmented vessel images, a 3D geometric vessel tree model is generated modeling vessels within the vessel inspection region. In other examples, a 1D equivalent vessel tree model may be generated from the 3D vessel tree model. The 3D or 1D geometric vessel tree model may be applied to a second machine learning model, a fluid dynamics machine learning model, to assimilate flow data over a sampling time period for one or more vessels within the vessel inspection region. From that assimilated flow data and from the 3D or 1D geometric vessel tree model, a computational fluid dynamics model is configured to determine states of vessels in the vasculature, where those states may include a state of vessel occlusion and/or a state of microvascular disease/resistance. In particular, to determine microvascular disease/resistance, angiographic images may be acquired under two (2) different hemodynamic states, one being a baseline state and the other a hyperemic (high flow) state, and a comparison may be made between the two. In yet other examples, the microvasculature may be assessed from examining angiographic images captured during the hyperemic state only.
InFIG.1, a CAD assessment system100includes a computing device102(or "signal processor" or "diagnostic device") configured to collect angiography image data from a patient120via an angiography imaging device124in accordance with executing the functions of the disclosed embodiments. As illustrated, the system100may be implemented on the computing device102and in particular on one or more processing units104, which may represent Central Processing Units (CPUs), and/or on one or more Graphical Processing Units (GPUs), including clusters of CPUs and/or GPUs, any of which may be cloud based. Features and functions described for the system100may be stored on and implemented from one or more non-transitory computer-readable media106of the computing device102. The computer-readable media106may include, for example, an operating system108and a CAD machine learning (deep learning) framework110having elements corresponding to those of the deep learning framework described herein. More generally, the computer-readable media106may store trained deep learning models, including vessel segmentation machine learning models, fluid dynamics machine learning models, Graph-theory based reduced order models, executable code, etc. used for implementing the techniques herein. The computer-readable media106and the processing units104may store image data, segmentation models or rules, fluid dynamic classifiers, and other data herein in one or more databases112. As discussed in examples herein, the CAD machine learning framework110applying the techniques and processes herein (e.g., various different neural networks) may generate 3D and/or 1D segmented vessel tree geometric models, FFR and other fluid dynamic assessments, state of vessel occlusion data, and/or microvascular disease data. The computing device102includes a network interface114communicatively coupled to the network116, for communicating to and/or from a portable personal computer, smart phone, electronic document, tablet, and/or desktop personal computer, or other computing devices. The computing device further includes an I/O interface115connected to devices, such as digital displays118, user input devices122, etc. As described herein, the computing device102generates indications of CAD for a subject, which may include states of vessels in the vasculature, such as a state of vessel occlusion (anatomical and functional through an FFR calculation, through an iFR calculation, or through a QFR calculation) and a state of microvascular disease prediction (by contrasting changes in distal resistance when two hemodynamic states are recorded), as an electronic document that can be accessed and/or shared on the network116. In the illustrated example, the computing device102is communicatively coupled, through the network116, to an electronic medical records (EMR) database126. The EMR126may be a network accessible database or dedicated processing system. In some examples, the EMR126includes data on one or more respective patients.
That EMR data may include vital signs data (e.g., pulse oximetry derived hemoglobin oxygen saturation, heart rate, blood pressure, respiratory rate), lab data such as complete blood counts (e.g., mean platelet volume, hematocrit, hemoglobin, mean corpuscular hemoglobin, mean corpuscular hemoglobin concentration, mean corpuscular hemoglobin volume, white blood cell count, platelets, red blood cell count, and red cell distribution width), lab data such as basic metabolic panel (e.g., blood urea nitrogen, potassium, sodium, glucose, chloride, CO2, calcium, creatinine), demographic data (e.g., age, weight, race and gender, zip code), less common lab data (e.g., bilirubin, partial thromboplastin time, international normalized ratio, lactate, magnesium and phosphorous), and any other suitable patient indicators now existing or later developed (e.g., use of O2, Glasgow Coma Score or components thereof, and urine output over past 24 hours, antibiotic administration, blood transfusion, fluid administration, etc.); and calculated values including shock index and mean arterial pressure. The EMR data may additionally or alternatively include chronic medical and/or surgical conditions. The EMR data may include historical data collected from previous examinations of the patient, including historical FFR, iFR, or QFR data, previous determinations of stenosis, vascular disease prediction, vascular resistance, CFD simulation data, and other data produced in accordance with the techniques herein. The EMR126may be updated as new data is collected from the angiography imaging device124and assessed using the computing device102. In some examples, the techniques may provide continuous training of the EMR126. In conventional angiography imaging applications, angiography images are captured by the medical imager and then sent to an EMR for storage and further processing, including, in some examples, image processing, before those images are sent to a medical professional. With the present techniques, the state of occlusion and state of microvascular disease can be determined at the computing device based on the angiography images, and without first offloading those images to the EMR126for processing. In total, the techniques proposed herein are able to reduce analysis times for cardiologists considerably, in part due to this bypassing of the EMR126for processing. The EMR126may be simply polled for data during analysis by the computing device102and used for storage of state determinations and other computations generated by the techniques herein. Indeed, there are numerous benefits that result from the faster and more automated analyses resulting from the present techniques. For example, modelling and vessel occlusion/disease state analysis can be performed on vessels corresponding to either left or right coronary trees, separately and sequentially, while still producing results for the cardiologist in minutes, for example using the 3D modeler or 1D modeler as described herein. In the illustrated example, the system100is implemented on a single server. However, the functions of the system100may be implemented across distributed devices connected to one another through a communication link. In other examples, functionality of the system100may be distributed across any number of devices, including the portable personal computer, smart phone, electronic document, tablet, and desktop personal computer devices shown.
In other examples, the functions of the system100may be cloud based, such as, for example, one or more connected cloud CPU(s) or computing systems, labeled105, customized to perform machine learning processes and computational techniques herein. The network116may be a public network such as the Internet, a private network such as a research institution's or corporation's private network, or any combination thereof. Networks can include local area network (LAN), wide area network (WAN), cellular, satellite, or other network infrastructure, whether wireless or wired. The network can utilize communications protocols, including packet-based and/or datagram-based protocols such as internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), or other types of protocols. Moreover, the network116can include a number of devices that facilitate network communications and/or form a hardware basis for the networks, such as switches, routers, gateways, access points (such as a wireless access point as shown), firewalls, base stations, repeaters, backbone devices, etc. The computer-readable media106may include executable computer-readable code stored thereon for programming a computer (e.g., comprising a processor(s) and GPU(s)) to perform the techniques herein. Examples of such computer-readable storage media include a hard disk, a CD-ROM, digital versatile disks (DVDs), an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory) and a Flash memory. More generally, the processing units of the computing device102may represent a CPU-type processing unit, a GPU-type processing unit, a field-programmable gate array (FPGA), another class of digital signal processor (DSP), or other hardware logic components that can be driven by a CPU. It is noted that while example deep learning frameworks herein are described as configured with example machine learning architectures, any number of suitable convolutional neural network architectures may be used. Broadly speaking, the deep learning frameworks herein may implement any suitable statistical model (e.g., a neural network or other model implemented through a machine learning process) that will be applied to each of the received images. As discussed herein, that statistical model may be implemented in a variety of manners. In some examples, the machine learning model has the form of a neural network, Support Vector Machine (SVM), or other machine learning process and is trained using images or multi-dimensional datasets to develop models for vessel segmentation or fluid dynamics computations. Once these models are adequately trained with a series of training images, the statistical models may be employed in real time to analyze subsequent angiography image data provided as input to the statistical model for determining the presence of CAD and for determining vessel occlusion status and disease. In some examples, when a statistical model is implemented using a neural network, the neural network may be configured in a variety of ways. In some examples, the neural network may be a deep neural network and/or a convolutional neural network. In some examples, the neural network can be a distributed and scalable neural network.
The neural network may be customized in a variety of manners, including providing a specific top layer such as but not limited to a logistic regression top layer. A convolutional neural network can be considered as a neural network that contains sets of nodes with tied parameters. A deep convolutional neural network can be considered as having a stacked structure with a plurality of layers. The neural network or other machine learning processes may include many different sizes, numbers of layers, and levels of connectedness. Some layers can correspond to stacked convolutional layers (optionally followed by contrast normalization and max-pooling) followed by one or more fully-connected layers. The present techniques may be implemented such that machine learning training may be performed using a small dataset, for example less than 10,000 images, less than 1000 images, or less than 500 images. In an example, approximately 400 images were used. To avoid overfitting, a multi-fold cross validation process can be used (e.g., a 5-fold cross validation). In some examples, to avoid overfitting, a regularization process, such as L1 or L2, can be used. For neural networks trained on large datasets, e.g., greater than 10,000 images, the number of layers and layer size can be increased by using dropout to address the potential problem of overfitting. In some instances, a neural network can be designed to forego the use of fully connected upper layers at the top of the network. By forcing the network to go through dimensionality reduction in middle layers, a neural network model can be designed that is quite deep, while dramatically reducing the number of learned parameters. FIG.2illustrates an example deep learning framework200that may be an example of the CAD machine learning framework110of the computing device102. The framework200includes a 3D/1D segmented vessel tree geometric model generator204including a pre-processor stage205, and a vessel segmentation machine learning model206that includes two neural networks, an angiographic processing neural network (APN)202and a second-stage semantic neural network207. In the illustrated example, the pre-processor205receives clinical angiogram images203A, along with data on the contrast agent injection used to form the same. Optionally, the pre-processor205may be coupled to receive synthetic angiogram images203B, for example, for machine learning training. Further, the pre-processor205may be coupled to receive geometrically adjusted vessel images203C. In some examples, these inputs may feed directly to the vessel segmentation machine learning model206, more specifically to the APN202. The pre-processor205is capable of performing various pre-processing on the received image data, which may include a de-noising process, a linear filtering process, an image size normalization process, and a pixel intensity normalization process. The deep learning framework200may operate in two different modes: machine learning training mode and analysis mode. In the machine learning training mode of the framework, the angiogram image data203A, the synthetic angiogram image data203B, and/or the geometrically adjusted angiogram image data203C (such as horizontal or vertical flips, arbitrary levels of zoom, rotation, or shearing) may be provided to the APN202. Different pre-processing functions and values may be applied to the received image data depending on the data type and data source.
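The geometric augmentations named above (flips, zoom, rotation, shear) can be applied with standard tooling; a minimal sketch is shown below. The parameter ranges are illustrative assumptions, not values taken from the disclosure.

```python
import torch
from torchvision import transforms

# Geometric augmentations named in the text: zoom, horizontal/vertical flip,
# rotation, and shear. Parameter ranges here are illustrative only.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomAffine(degrees=15, scale=(0.8, 1.2), shear=10),
])

frame = torch.rand(1, 256, 256)          # stand-in grayscale angiogram frame
augmented = augment(frame)
print(augmented.shape)

# Note: when augmenting image/segmentation-mask pairs for training, the same
# sampled transform must be applied to both; torchvision's functional API
# (torchvision.transforms.functional) is one way to do that.
```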
In analysis mode, in which the machine learning models have been trained, captured angiography image data203A for a subject is provided to the APN202for analysis and CAD determination. In either mode, the pre-processed image data is provided to the 3D/1D segmented vessel tree geometric model generator204, which includes the vessel segmentation machine learning model206that receives the pre-processed image data, performs processes at the APN202and the semantic NN207, and, in analysis mode, generates 2D segmented vessel images. Thus, the vessel segmentation machine learning model206may be a convolutional neural network, such as two different convolutional neural networks in a staged configuration, as shown in the example ofFIG.5. Thus, in some examples, the semantic NN207is configured as a modified or unmodified Deeplab v3+ architecture. The 3D/1D segmented vessel tree geometric model generator204further includes a 3D modeler208configured to generate a 3D vessel tree geometric model of the target region based on the 2D segmented vessel images. Once the 3D vessel tree model is generated, the generator204may apply a further smoothing algorithm and/or surface spline fitting algorithm to further improve the 3D vessel tree model for 3D (e.g., high-fidelity) flow dynamics classification and occlusion analysis. To reduce processing time for analysis of the state of vessel occlusion in larger vessels and the state of microvascular disease in smaller vessels, in some examples, the techniques herein are implemented with a reduced order model. In some examples, the 3D segmented vessel tree geometric model generated from the captured 2D angiography images is further reduced to generate a 1D segmented vessel tree geometric model, in which sufficient data is maintained to nonetheless provide for flow data generation, fluid dynamics modelling, FFR, iFR, or QFR determinations, and computational fluid dynamics modelling. To implement model order reduction, in some examples, the vessel tree geometric model generator204includes a 1D modeler209. The 1D modeler209produces a skeletonization of the 3D segmented vessel tree model, given by pathlines/centerlines in 3D-space of the vessels included in the 3D segmented vessel tree model, and a series of 2D cross-sectional contours separated by arbitrary distances along each pathline/centerline of the tree. An example 1D segmented vessel geometric tree model generated from a 3D segmented vessel geometric tree model is shown inFIG.6. The 3D or 1D vessel tree geometric models from the 3D modeler208or the 1D modeler209are provided to a flow data generator210, which includes a fluid dynamics machine learning model212, which may include at least one network of the type: convolutional neural network (CNN), autoencoder, or long short-term memory (LSTM), or a Graph-theory based reduced order model of flow and pressure. As shown, the fluid dynamics machine learning model212may include many different types of models, trained and untrained. In some examples, the fluid dynamics machine learning model212is a Navier-Stokes informed deep learning framework configured to determine pressure data and velocity data over a 3D vessel space or a 1D vessel space, depending on the modeler208or209providing input. In some examples, the Navier-Stokes informed deep learning framework includes one or more methods of the type: Kalman Filtering, Physics-informed Neural Network, iterative assimilation algorithm based upon contrast arrival time at anatomical landmarks, and TIMI frame counting.
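A reduced-order (1D) vessel tree of the kind described above can be represented very simply as centerline points with cross-sectional information per branch. The data structure below is an illustrative sketch with assumed field names, not the patent's data model.

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class VesselSegment:
    """Reduced-order (1D) representation of one branch of the segmented tree:
    centerline points in 3D space plus a cross-sectional area at each point."""
    name: str
    centerline: np.ndarray          # (N, 3) points along the branch centerline
    areas: np.ndarray               # (N,) lumen cross-sectional areas
    children: List["VesselSegment"] = field(default_factory=list)

    def arc_length(self) -> float:
        return float(np.sum(np.linalg.norm(np.diff(self.centerline, axis=0), axis=1)))

# Toy left-main branch with two daughter vessels, as produced by skeletonizing a 3D model.
lm = VesselSegment("LM", np.array([[0, 0, 0], [0, 0, 5], [0, 0, 10.0]]), np.array([12.0, 11.5, 11.0]))
lad = VesselSegment("LAD", np.array([[0, 0, 10], [2, 0, 20.0]]), np.array([8.0, 6.5]))
lcx = VesselSegment("LCx", np.array([[0, 0, 10], [-2, 0, 18.0]]), np.array([7.0, 6.0]))
lm.children = [lad, lcx]
print(lm.arc_length(), len(lm.children))
```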
Dynamic data on a series of images describing the transport of the dye down the vessels of interest (see, e.g.,FIG.8) is used to assimilate information on blood velocity. In some cases, the contrast agent injection system203D, with precise information on the pressure applied to the bolus of dye, the volume of dye, timing, etc., may be used to acquire the clinical angiograms and to provide additional information for the fluid dynamics machine learning model, as shown as well. In other examples, the fluid dynamics machine learning model212includes a graph theory based reduced order model obtained through a graph of discrepancies between ground truth data (in-silico or clinical) including geometry, pressure, flow, and indices such as FFR, iFR, or QFR. Any of the techniques herein to define a graph-theory based reduced order model can generate faster results in comparison to occlusion analysis techniques based on finite element modeling (FEM) or other 3D techniques. Furthermore, the techniques herein can model and analyze not only large vessels but also the microvasculature and thus are able to determine the state of occlusion in large vessels and the state of microvascular disease in small vessels. More generally, the model212is configured to determine assimilated flow data over the sampling time period for one or more vessels within a 3D vessel tree geometric model or 1D vessel tree geometric model. Such a determination may include determining pressure and/or flow velocity for a plurality of connected vessels in the 3D vessel tree geometric model or 1D vessel tree geometric model. In some examples, lumped parameter boundary condition parameters are determined by the fluid dynamics machine learning model212for one or more vessels in the vessel inspection region. In some examples, the fluid dynamics machine learning model212determines a lumped parameter model of flow for a first vessel and determines a lumped parameter model of flow for each vessel branching from the first vessel. Any of these may then be stored as assimilated flow data. The assimilated flow data from the flow data generator210and the 3D vessel tree model (or 1D vessel tree model) are provided to a computational fluid dynamics model214that may apply physics-based processes to determine a state of vessel occlusion for the one or more vessels within the 3D vessel tree model (or 1D vessel tree model) and/or a state of microvascular disease for the one or more vessels. In some examples, the computational fluid dynamics model includes one or more of: multi-scale 3D Navier-Stokes simulations with reduced-order (lumped parameter) models; reduced-order Navier-Stokes (1D) simulations with reduced-order models; reduced-order models derived from a Graph Theory framework relying on 1D nonlinear theory models; or reduced order model (lumped parameter, 0D) simulations for the entire segmented vessel tree model. In the example shown, the computational fluid dynamics model includes at least a 3D high-fidelity trained model211and a graph-theory informed reduced order model213, in accordance with examples herein. In some examples, the computational fluid dynamics model214is configured to determine FFR, iFR, and/or QFR for the one or more vessels in the 3D vessel tree model or 1D vessel tree model from the flow data. In some examples, the computational fluid dynamics model214is configured to determine the state of vessel occlusion from the FFR, iFR, and/or QFR for the one or more vessels.
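As a deliberately simplified illustration of how a pressure-based index such as FFR follows from a reduced-order flow model, the sketch below uses Poiseuille resistances along serial 1D segments of a single vessel. This is an assumption-laden toy model, not the multi-scale CFD, graph-theory, or lumped-parameter boundary-condition machinery described above; vessel dimensions, viscosity, and flow values are illustrative.

```python
import numpy as np

MU = 0.0035  # blood dynamic viscosity, Pa*s (assumed constant, Newtonian)

def poiseuille_resistance(length_m: float, radius_m: float) -> float:
    """Hydraulic resistance of one 1D segment, R = 8*mu*L / (pi*r^4)."""
    return 8.0 * MU * length_m / (np.pi * radius_m ** 4)

def ffr_single_branch(p_aortic: float, flow_m3s: float, segments) -> float:
    """Crude FFR estimate for a single unbranched vessel: distal pressure after
    serial viscous losses divided by aortic pressure."""
    p = p_aortic
    for length_m, radius_m in segments:
        p -= flow_m3s * poiseuille_resistance(length_m, radius_m)
    return p / p_aortic

# 3 cm healthy segment, 1 cm stenotic segment (radius halved), 2 cm distal segment.
segments = [(0.03, 1.5e-3), (0.01, 0.75e-3), (0.02, 1.4e-3)]
p_aortic = 13_332.0   # ~100 mmHg in Pa
flow = 2.0e-6         # ~2 mL/s hyperemic flow, illustrative
print(round(ffr_single_branch(p_aortic, flow, segments), 3))
```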
In some examples, the computational fluid dynamics model214is configured to determine coronary flow reserve (CFR) for the one or more vessels from the flow data, from one or more physiological states (baseline and hyperemic), and to determine the state of microvascular disease from the CFR for the one or more vessels. Determining the state of vessel occlusion includes determining a presence of stenosis in the one or more vessels. Determining the state of microvascular disease includes determining the lumped parameter models on the boundaries of the vessels in the vessel inspection region. InFIG.3, a process300is shown for assessing coronary artery disease as may be performed by the system100. 2D angiography image data is obtained by the medical imager116and provided to the computer device102, at a process302, where optionally pre-processing operations may be performed. In a training mode, the 2D angiography image data may be captured clinical angiogram image data, but such training data may further include synthetic image data, geometrically adjusted image data, etc., as shown inFIG.2. In an example, training of the vessel segmentation machine learning model206was performed on 462 clinical angiogram images (see, e.g.,FIG.8) augmented through a combination of geometric transformations (zoom, horizontal flip, vertical flip, rotation, and/or shear) to form over 500,000 angiogram images and 461 synthetic angiogram images (see, e.g.,FIG.9) augmented through those geometric transformations into an additional set of over 500,000 images. The clinical angiogram images may include time series data in a plurality of frames with segmentation for extracting velocity throughout the entire vessel tree, as shown. In some examples, the vessel segmentation machine learning model206includes a synthetic image generator configured to generate synthetic images using a combination of transformations such as flipping, shearing, rotation, and/or zooming. In any event, these numbers of images are provided by way of empirical example only, as any suitable number of training images, captured or synthetic, may be used. In a training mode, the image data is applied to the CAD assessment machine learning framework110for generating two types of models, the vessel segmentation machine learning model206and the fluid dynamics machine learning model212. In diagnostic mode, the image data is applied to the CAD assessment machine learning framework110for classification and diagnosis of CAD. At a process304, the CAD assessment machine learning framework110, for example, through the APN202and the semantic NN207of the vessel segmentation machine learning model206, applies the received image data to a vessel segmentation machine learning model and generates 2D segmented vessel images. The CAD assessment machine learning framework110, such as through the 3D modeler208, receives the 2D segmented vessel images and generates a 3D segmented vessel tree model or a 1D segmented vessel tree model, at a process306. At a process308, the 3D segmented vessel tree model or a 1D segmented vessel tree model is applied to the fluid dynamics machine learning model212of the flow data generator210, and assimilated flow data is generated over a sampling period for one or more vessels in the 3D vessel tree model or in the 1D segmented vessel tree model. 
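Returning to the training data augmentation described above, the following is a minimal sketch of geometric augmentation of the kinds listed (flips, rotation, shear, zoom); the transformation ranges, the use of scipy.ndimage, and the function name augment_angiogram are illustrative assumptions rather than details from the disclosure.

    import numpy as np
    from scipy import ndimage

    def augment_angiogram(img, rng):
        # Apply one randomly chosen geometric transformation to a 2D angiogram array.
        choice = rng.integers(5)
        if choice == 0:
            return np.fliplr(img)                                        # horizontal flip
        if choice == 1:
            return np.flipud(img)                                        # vertical flip
        if choice == 2:
            return ndimage.rotate(img, rng.uniform(-15, 15), reshape=False, mode="nearest")
        if choice == 3:
            z = rng.uniform(0.9, 1.1)                                    # zoom, then paste back to size
            zoomed = ndimage.zoom(img, z, mode="nearest")
            out = np.zeros_like(img)
            h = min(img.shape[0], zoomed.shape[0])
            w = min(img.shape[1], zoomed.shape[1])
            out[:h, :w] = zoomed[:h, :w]
            return out
        shear = np.array([[1.0, rng.uniform(-0.1, 0.1)], [0.0, 1.0]])    # shear via an affine transform
        return ndimage.affine_transform(img, shear, mode="nearest")

    rng = np.random.default_rng(0)
    augmented = [augment_angiogram(np.zeros((512, 512), dtype=np.float32), rng) for _ in range(8)]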
At a process310, the assimilated flow data and the 3D vessel tree model or a 1D segmented vessel tree model are applied to the computational fluid dynamics model214, which assesses the data, using either the 3D segmented vessel tree model or a 1D vessel tree model, and determines vessel health, such as through a determination of a state of vessel occlusion via indices such as the FFR, iFR, QFR, or others, by solving either the 3D Navier-Stokes equations or a graph-theory-based reduced order model. If data on two hemodynamic states is available (e.g. baseline and hyperemic conditions), a state of microvascular disease, or CFR, will be determined from the lumped parameter values of the boundary conditions for each hemodynamic state. Process400, shown inFIG.4, is an example implementation of the process304for generating the 2D segmented vessel images from angiography image data that may be performed by the angiographic processing network202and the semantic NN207of the vessel segmentation machine learning model206. Initially, the 2D angiography image data for a target vessel region is received, at the process302. At a process402, image pre-processing is applied (including an image size normalization process and/or a pixel intensity normalization process) before further image processing via an angiographic processing network implemented with a convolutional neural network (e.g., APN202), where the further pre-processing may include a non-linear filtering process, de-noising, and/or contrast enhancement, which, when combined with a process404, filters out objects such as catheters and bony structures. That is, at the process404, the pre-processed 2D angiography image is applied to a second convolutional neural network (CNN) trained using clinical angiography image data, synthetic image data, and/or geometrically altered image data (e.g., the semantic NN207).FIG.5illustrates an example CNN framework500configured with a combination of an angiographic processing network (APN)503and a semantic NN in the form of a Deeplab v3+ network507, configured to generate the 2D segmented vessel images at process408. The Deeplab v3+ network507is a semantic segmentation deep learning technique that includes an encoder502and a decoder504. A series of angiography images501are provided as inputs to an angiographic processing network503, composed of several 3×3 and 5×5 convolutional layers, which applies non-linear filters that perform contrast enhancement, boundary sharpening, and other image processing functions. The APN503together with the semantic NN507form a vessel segmentation machine learning model, such as the model206. The APN503feeds the encoder502, which applies atrous convolutions to the image data, deploying a rate that controls the effective field of view of each convolution layer. The larger the rate, the greater the area of the image captured for convolution and low-level feature extraction, as may be performed on each input image. For example, rates of 6, 12, and 18 may be used to effect different fields of view to capture different features, for example, features at different resolutions. Using the atrous or dilated convolutions, a dilated sample of an image, or image portion, is convolved to a smaller image. The encoder502determines low level features using several atrous convolution strides, then applies a 1×1 (depthwise separable) convolution to combine the outputs of the many different atrous convolutions. 
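The following is a minimal sketch of parallel atrous convolutions at rates 6, 12, and 18 merged by a 1×1 convolution, in the spirit of the encoder just described; it is a simplified stand-in for the full Deeplab v3+ atrous spatial pyramid pooling (no image-level pooling or batch normalization), and the class name and channel counts are assumptions for illustration.

    import torch
    import torch.nn as nn

    class AtrousBlock(nn.Module):
        # Parallel 3x3 atrous (dilated) convolutions at several rates, merged by a 1x1 convolution.
        def __init__(self, in_ch, out_ch, rates=(6, 12, 18)):
            super().__init__()
            self.branches = nn.ModuleList(
                [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r) for r in rates]
            )
            self.merge = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

        def forward(self, x):
            feats = [torch.relu(b(x)) for b in self.branches]   # larger rate -> larger field of view
            return self.merge(torch.cat(feats, dim=1))

    # Hypothetical usage on a single-channel angiogram feature map:
    block = AtrousBlock(in_ch=1, out_ch=16)
    out = block(torch.randn(1, 1, 512, 512))   # padding = dilation keeps the spatial size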
The combined output of the encoder502is a single matrix of features pertaining to different classes, which is input to the decoder504, along with high level features determined from 1×1 convolutions applied to the original input image. The convolutions of the encoder502may be performed using atrous or depthwise convolutions, in some examples. The decoder504concatenates the low level and high-level features into a stacked matrix, then applies transposed convolutions to this matrix, using a fractional stride size to assign class labels to each pixel based on spatial and feature information. These transposed convolutions generate a probability map which determines how likely it is that each pixel belongs to the background or vessel class. The Softmax function is applied to generate the output segmented 2D images505, as shown. InFIG.6, a process600is shown that may be performed by the 3D segmented vessel tree model generator204to generate the 3D segmented vessel tree model from the generated 2D segmented vessel images602. As shown, any one of four different pipelines may be used to generate the 3D segmented vessel tree model, from which a 1D segmented vessel tree model can also be obtained, in accordance with an example. In a first configuration, at a process603, the 3D modeler208receives the 2D segmented vessel images and finds a centerline for each of the 2D segmented vessel images. At a process604, the 3D modeler208co-locates points from each of the 2D segmented vessel images using geometric tools which may include epipolar geometry, projective geometry, Euclidean geometry or any related geometric tools, and, at a process606, triangulates 3D points having a projection that maps on the co-located points (back-projection). The local radius of the vessel is determined from each 2D segmented vessel, and these radius vectors are projected onto the 3D centerline. From there, at process608, the 3D modeler208determines vessel contours based on the triangulated 3D points and the 3D vessel tree model is generated, at a process610. From there, a 1D vessel tree model is generated at a process620. In a second configuration, the 3D modeler208generates a plurality of 3D rotation matrices from the 2D segmented vessel images, at a process612. The 3D modeler208then generates the 3D segmented vessel tree model by solving a linear least squares system of equations mapping the plurality of the 2D segmented vessel images into a 3D space, at a process614. In a third configuration, the 3D modeler208forward projects voxels in a 3D space onto the plurality of 2D segmented vessel images, at a process616, and identifies the set of 3D voxels which project inside the plurality of 2D segmented vessel images, at a process618. The resulting binary volume is then smoothed to ensure a realistic vessel boundary. In a fourth configuration, active contour models are used to reconstruct the 3D geometry of the vessel. Finding the centerline of each 2D segmented vessel image is performed, at a process622. The endpoints of each vessel are identified in the plurality of 2D segmented vessel images and back-projected, at a process624, to identify the endpoints in 3D space. A 3D cylinder is drawn between these 3D endpoints, and external and internal forces on this cylinder are defined by material properties, imaging parameters of the system, and re-projection error between the cylinder and the plurality of 2D segmented images; the active contour model deforms until the projections match the vessel shape in each 2D image, at a process626. 
At a process628, the deformed cylinder is re-projected onto the 2D images and a further deformation may be performed. The forces deform the cylinder in order to minimize the re-projection error. The process628may be repeated for all branches of the vessel until the full coronary tree is reconstructed. The fluid dynamics machine learning models212herein are, in some examples, implemented as physics informed neural networks, in particular neural networks capable of encoding any underlying physical laws that govern a given dataset, and that can be described by partial differential equations. For example, the fluid dynamics machine learning model may include partial differential equations for the 3D Navier-Stokes equations, a set of 4 partial differential equations for balance of mass and momentum, whose unknown fields are a 3D velocity vector (vx, vy, vz) and a scalar p. In an example, a solution to the flow within the 3D vessel tree model is generated using the incompressible 3D Navier-Stokes equations of flow, according to Equations 1 and 2: $\rho\,\frac{\partial\bar{u}}{\partial t}+\rho\,\bar{u}\cdot\nabla\bar{u}=-\nabla p+\operatorname{div}(\tau)+f$ (1) and $\operatorname{div}(\bar{u})=0$ (2), where $\bar{u}$ is the fluid velocity, p is the pressure, f is a body force (here assumed to be zero), and $\tau=2\mu\varepsilon$ with $\varepsilon=\tfrac{1}{2}(\nabla\bar{u}+\nabla\bar{u}^{T})$ is the viscous stress tensor for a Newtonian fluid. Alternatively, in some examples, the fluid dynamics machine learning model212may include a graph-theory based reduced order model, as illustrated inFIG.7. The graph-theory based reduced order model may be run in machine learning mode or in analysis mode. In machine learning mode, training data701on flow, stenosis geometry, measured values of FFR, iFR, QFR, etc. are used to define a dense graph of discrepancies702between ground truth data and a low-fidelity model of flow, given by a 1D non-linear theory model703. The ground truth data can be given by either in silico simulations of 3D Navier-Stokes equations704or by in vivo data acquired in the catheterization laboratory705. A dense graph of discrepancies702may be generated by analyzing a large number of permutations of stenosis diameter, length, eccentricity, flow through the stenosis, or a combination thereof. Once a dense graph of discrepancies is generated, a reduced order model706may be derived through nonlocal calculus, deep neural networks, or direct traversal over the vertices of the graph. The reduced order model may be an algebraic equation, or an ordinary or partial differential equation. In analysis mode, the inputs707to the reduced order model are a geometry given by a 1D segmented vessel tree extracted by process610, and boundary conditions on flow and pressure extracted through data assimilation process308. The reduced order model706and inputs707produce the desired state of vessel occlusion708(anatomical and functional through an FFR calculation, through an iFR calculation, or through a QFR calculation) and a state of microvascular disease. The 1D non-linear theory model703is a 1D non-linear Navier-Stokes model, which includes two partial differential equations for mass balance and momentum balance, with unknowns being the flow Q and the cross-sectional average pressure, P. 
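A toy sketch of the dense-graph-of-discrepancies idea described above is given below, where vertices are keyed by permutations of stenosis diameter, length, eccentricity, and flow, and each vertex stores the gap between a ground-truth pressure drop and the 1D model prediction. The nearest-vertex lookup at the end is only a stand-in for the nonlocal calculus, deep neural network, or graph traversal derivations named in the text, and the callables ground_truth_dp and low_fidelity_dp are hypothetical placeholders.

    from itertools import product
    import numpy as np

    def build_discrepancy_graph(ground_truth_dp, low_fidelity_dp,
                                diameters, lengths, eccentricities, flows):
        # One vertex per stenosis-parameter permutation; value = ground truth minus 1D-model pressure drop.
        graph = {}
        for d, L, e, q in product(diameters, lengths, eccentricities, flows):
            graph[(d, L, e, q)] = ground_truth_dp(d, L, e, q) - low_fidelity_dp(d, L, e, q)
        return graph

    def corrected_pressure_drop(graph, low_fidelity_dp, d, L, e, q):
        # Trivial surrogate: 1D-model prediction plus the discrepancy stored at the nearest vertex.
        key = min(graph, key=lambda k: np.linalg.norm(np.subtract(k, (d, L, e, q))))
        return low_fidelity_dp(d, L, e, q) + graph[key]

    # Hypothetical usage with placeholder pressure-drop models:
    g = build_discrepancy_graph(lambda d, L, e, q: 2.0 * q / d, lambda d, L, e, q: 1.5 * q / d,
                                [2.0, 3.0], [5.0, 10.0], [0.0, 0.5], [1.0, 2.0])
    dp = corrected_pressure_drop(g, lambda d, L, e, q: 1.5 * q / d, 2.5, 7.0, 0.2, 1.5)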
In an example, the solution to the flow obtained with the 1D non-linear theory model is generated using conservation of mass and momentum of an incompressible Newtonian fluid according to the following system of equations, Equations 3 and 4: $\frac{\partial A}{\partial t}+\frac{\partial (AU)}{\partial x}=0$ (3) and $\frac{\partial U}{\partial t}+U\frac{\partial U}{\partial x}+\frac{1}{\rho_f}\frac{\partial P}{\partial x}=\frac{f}{\rho_f A}$ (4), where x is the axial coordinate along the vessel, t is the time, A(x,t) is the cross-sectional area of the lumen, U(x,t) is the axial blood flow velocity averaged over the cross-section, P(x,t) is the blood pressure averaged over the cross-section, $\rho_f$ is the density of blood assumed to be constant, and f(x,t) is the frictional force per unit length. The momentum correction factor in the convection acceleration term of Equation 4 can be assumed to be equal to one. Equations 3 and 4 can also be derived by integrating the incompressible Navier-Stokes equations over a generic cross section of a cylindrical domain. In any event, the fluid dynamics machine learning models herein may be formed with data-driven algorithms for inferring solutions to these general nonlinear partial differential equations, through physics-informed surrogate classification models. For principled physical laws that govern the time-dependent dynamics of a system, or some empirically validated rules or other domain expertise, information about the physical laws may be used as a regularization agent that constrains the space of admissible solutions to a manageable size. In return, encoding such structured information into a machine learning model results in amplifying the information content of the data that the algorithm sees, enabling it to quickly steer itself towards the right solution and generalize well even when only a few training examples are available. Furthermore, the reduced order models proposed herein will be trained using graph theory and discrepancies between a low fidelity 1D nonlinear model of blood flow and ground truth data given by either 3D high-resolution Navier Stokes models or in-vivo anatomical and hemodynamic data, to accurately and efficiently capture hemodynamics around stenoses. In various examples, the reduced order model is defined from the graph of discrepancies via one of the following three methods: a) CNN, b) non-local calculus, c) exploration of graphs using traversal algorithms (see, e.g., Banerjee et al., A graph theoretic framework for representation, exploration and analysis on computed states of physical systems, Computer Methods in Applied Mechanics and Engineering, 2019, which is hereby incorporated by reference). In an example, the fluid dynamics machine learning model is configured to include hidden fluid mechanics (HFM), a physics informed deep learning framework capable of encoding a class of physical laws governing fluid motions, namely the Navier-Stokes equations, as described in Raissi et al.,Hidden Fluid Mechanics, A Navier-Stokes Informed Deep Learning Framework for Assimilating Flow Visualization Data, dated on cover 13 Aug. 2018, herein incorporated by reference. In an example, the fluid dynamics machine learning model applies the underlying conservation laws (i.e., for mass, momentum, and energy) to infer hidden quantities of interest such as velocity and pressure fields merely from the 3D vessel tree model generated at different times from angiography image data taken at different times. The fluid dynamics machine learning model may apply an algorithm that is agnostic to the geometry or the initial and boundary conditions. That makes the HFM configuration highly flexible in choosing the types of vessel image data that can be used for training and for diagnoses by the model. 
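As a concrete, minimal illustration of the physics-informed idea just described, the sketch below penalizes the residual of the mass-balance relation in Equation 3 alongside a data misfit on assimilated observations; the tiny network, its inputs, and the equal loss weighting are assumptions for illustration and are not the disclosed HFM framework itself.

    import torch
    import torch.nn as nn

    # Small network mapping (x, t) to (A, U); purely illustrative.
    net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 2))

    def mass_balance_residual(x, t):
        # Residual of Equation 3, dA/dt + d(AU)/dx, computed with automatic differentiation.
        xt = torch.stack([x, t], dim=-1).requires_grad_(True)
        out = net(xt)
        A, U = out[..., 0], out[..., 1]
        dA = torch.autograd.grad(A.sum(), xt, create_graph=True)[0]
        dAU = torch.autograd.grad((A * U).sum(), xt, create_graph=True)[0]
        return dA[..., 1] + dAU[..., 0]      # dA/dt + d(AU)/dx

    def physics_informed_loss(x, t, A_obs, U_obs):
        # Data misfit on assimilated observations plus the PDE residual as a regularizer.
        pred = net(torch.stack([x, t], dim=-1))
        data = ((pred[..., 0] - A_obs) ** 2 + (pred[..., 1] - U_obs) ** 2).mean()
        physics = (mass_balance_residual(x, t) ** 2).mean()
        return data + physics

    # Hypothetical usage on synthetic collocation/observation points:
    x = torch.linspace(0.0, 1.0, 32)
    t = torch.zeros(32)
    loss = physics_informed_loss(x, t, torch.ones(32), torch.zeros(32))
    loss.backward()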
The fluid dynamics machine learning model is trained to predict pressure and velocity values in both two- and three-dimensional flows of imaged vessels. Such information can be used to determine other physics related properties such as pressure or wall shear stresses in arteries. In some examples, the computational fluid dynamics model is configured to determine a lumped parameter model attached to each vessel of the 3D vessel tree model or the 1D vessel tree model. The computational fluid dynamics model may include a series of lumped parameter models (LPMs) for different vessels as shown inFIG.10A. An example LPM1000is provided for a heart model coupled to the inflow face of the 3D vessel tree1002. The vessel tree1002is formed of multiple different vessels labeled with different letters: A the inlet, B-H the aortic outlets, and a-k the coronary artery outlets. LPM1006is used for each of the outlets B-H, representing the microcirculation of vessels other than the coronary arteries. An LPM1008is provided for the coronary outlets a-k and is coupled to the LPM1000representing the heart. Parameters for this model are estimated from the patient's data on flow and pressure (either measured in the catheterization laboratory or estimated using data assimilation techniques described in308or using morphometric considerations such as Murray's Law). An LPM1004, including both left and right sides of the heart, may be used to run analyses in closed-loop configuration. A model of this nature may be used to calculate hemodynamics under pulsatile conditions. A second example LPM is shown inFIG.10B. Here, inflow boundary conditions may be defined by either the patient's mean arterial pressure or by a mean flow measured or estimated from the dynamic angiographic data. Patient-specific outflow boundary conditions may be defined by flow down each vessel estimated via the fluid dynamics machine learning process308that assimilates flow data over a sampling period, or by LPM (resistors) coupled to each of the outlet faces of the coronary tree. The LPM for each branch of the vessel tree (whether 3D or 1D) may also be estimated once the computed solutions for pressure and flow within the vessel tree are known, assuming a pressure gradient across the LPM down to a fixed level of capillary pressure. A model of this nature may be used to simulate hemodynamics under steady-state conditions. This process can be repeated for each available hemodynamic state (e.g. baseline and hyperemia). LPMs are used to represent cardiovascular regions where the full detail of the flow solution is not required, but it is important for the model to include the relationship between pressures, flows, and in some cases volumes, in these regions. They are ideal in regions where detailed spatial information on the vascular geometry is neither available nor of fundamental importance. Their parameters exert a fundamental influence on the pressure and velocity fields in the vessel tree model, and so the parameters must be tuned to achieve physiological values which agree with the data assimilated from the patient images and with any additional clinical recordings which are available for the individual in question. When insufficient data is available for parameterization for the specific patient, data from the scientific literature on expected values can be used to assist in determining appropriate parameters. In an example, the anatomical and functional assessment of CAD follows the workflow1100depicted inFIG.11. 
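Before turning to that workflow, a minimal sketch of how such lumped (resistor) outlet boundary conditions might be parameterized is given below, splitting outflow among outlets by Murray's Law and converting outlet pressure and flow into a resistance referenced to a fixed capillary pressure; the numeric values, the 20 mmHg capillary pressure, and the function names are illustrative assumptions, not parameters from the disclosure.

    import numpy as np

    def murray_flow_split(total_flow, outlet_diameters, exponent=3.0):
        # Distribute total outflow among outlets in proportion to diameter**3 (Murray's Law).
        d = np.asarray(outlet_diameters, dtype=float)
        w = d ** exponent
        return total_flow * w / w.sum()

    def outlet_resistances(outlet_pressures, outlet_flows, capillary_pressure=20.0):
        # Lumped resistor boundary condition: R = (P_outlet - P_capillary) / Q_outlet.
        p = np.asarray(outlet_pressures, dtype=float)
        q = np.asarray(outlet_flows, dtype=float)
        return (p - capillary_pressure) / q

    # Hypothetical coronary outlets: diameters in mm, flows in mL/s, pressures in mmHg.
    q_out = murray_flow_split(3.0, [2.5, 2.0, 1.5])
    R_out = outlet_resistances([90.0, 88.0, 87.0], q_out)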
In the workflow1100ofFIG.11, several angiographic images1101taken under different orientations are fed to a machine learning segmentation module1102(corresponding to processes302,304, and402and404), which automatically produces 2D segmented images of the angiographic images1103. These images are then fed to an algorithm1104that generates 3D and 1D segmented vessel trees1105, following a combination of processes given inFIG.6. This workflow is used to automatically characterize the lumen diameters and therefore the anatomical severity of CAD. Further, a series of angiographic images1106defining the transport of the dye down the regions of interest in the vessel tree are fed to a series of fluid dynamics machine learning algorithms1107that assimilate information on blood velocity and/or pressure for each vessel of interest in the vessel tree. This system produces boundary conditions on velocity and pressure1108, which, together with the vessel tree1105, are fed as inputs for the Graph-theory derived reduced order model1109, which ultimately produces the desired functional metrics of CAD1110, including FFR, iFR, QFR and microvascular resistance. Additional Aspects Aspect 1. A computer-implemented method for assessing coronary artery disease, the method comprising:(a) receiving, by one or more processors, angiography image data of a vessel inspection region for a subject, wherein the angiography image data comprises angiography images captured over a sampling time period;(b) applying, by the one or more processors, the angiography image data to a vessel segmentation machine learning model and generating, using the vessel segmentation machine learning model, two-dimensional (2D) segmented vessel images for the vessel inspection region;(c) by the one or more processors, generating from the 2D segmented vessel images a three-dimensional (3D) segmented vessel tree geometric model of vessels within the vessel inspection region;(d) applying, by the one or more processors, the 3D segmented vessel tree geometric model to a fluid dynamics machine learning model to assimilate, using the fluid dynamics machine learning model, flow data over the sampling time period for one or more vessels within the vessel inspection region;(e) applying, by the one or more processors, the 3D segmented vessel tree geometric model and the assimilated flow data to a 3D high-fidelity computational fluid dynamics model; and(f) determining, by the one or more processors, a state of vessel occlusion for the one or more vessels within the vessel inspection region. Aspect 2. The computer-implemented method of aspect 1, further comprising: determining, by the one or more processors, a state of microvascular disease for the one or more vessels within the vessel inspection region by performing (a)-(e) at at least two different hemodynamic states. Aspect 3. The computer-implemented method of aspect 1, wherein the vessel segmentation machine learning model is a convolutional neural network. Aspect 4. The computer-implemented method of aspect 1, further comprising:applying, by the one or more processors, to the received angiography image data at least one of a de-noising process, a linear filtering process, an image size normalization process, and a pixel intensity normalization process to produce filtered angiography image data. Aspect 5. 
The computer-implemented method of aspect 4, further comprising:feeding the filtered angiography image data to an angiography processing network (APN), trained to address the main challenges in the angiography image data: low contrast images, catheters, and/or overlapping bony structures. Aspect 6. The computer-implemented method of aspect 5, further comprising:feeding an output of the APN to a semantic image segmentation to produce automatic binary 2D segmented vessel images. Aspect 7. The computer-implemented method of aspect 1, wherein generating the 3D segmented vessel tree geometric model comprises:finding a centerline for each of the 2D segmented vessel images;co-locating points from each of the 2D segmented vessel images using an epipolar geometry;triangulating 3D points having a projection that maps on the co-located points; anddetermining vessel contours based on the triangulated 3D points. Aspect 8. The computer-implemented method of aspect 1, wherein generating the 3D segmented vessel tree geometric model comprises:generating a plurality of 3D rotation matrices from a plurality of the 2D segmented vessel images;generating the 3D segmented vessel tree geometric model by solving a linear least squares system of equations mapping the plurality of the 2D segmented vessel images into a 3D space. Aspect 9. The computer-implemented method of aspect 1, wherein generating the 3D segmented vessel tree geometric model comprises:forward projecting voxels in a 3D space onto the plurality of 2D segmented vessel images; andidentifying the set of 3D voxels which project inside the plurality of 2D segmented vessel images. Aspect 10. The computer-implemented method of aspect 1, wherein generating the 3D segmented vessel tree geometric model comprises:using active contours to deform, via internal and external forces, cylindrical geometries onto the plurality of segmented 2D images. Aspect 11. The computer-implemented method of aspect 1, further comprising:applying, by one or more processors, to the 3D segmented vessel tree geometric model at least one of a smoothing algorithm and a surface spline fitting algorithm. Aspect 12. The computer-implemented method of aspect 1, generating the 3D segmented vessel tree geometric model by performing a back-projection of the 2D segmented vessel images. Aspect 13. The computer-implemented method of aspect 1, wherein the fluid dynamics machine learning model comprises at least one network of the type: convolutional neural network (CNN), autoencoder, or long short-term memory (LSTM). Aspect 14. The computer-implemented method of aspect 1, wherein the fluid dynamics machine learning model is a Navier Stokes informed deep learning framework configured to determine pressure data and velocity data over a 3D vessel space. Aspect 15. The computer-implemented method of aspect 14, wherein the Navier Stokes informed deep learning framework comprises one or more methods of the type: Kalman Filtering, Physics-informed Neural Network, iterative assimilation algorithm based upon contrast arrival time at anatomical landmarks, and TIMI frame counting. Aspect 16. The computer-implemented method of aspect 1, wherein determining the flow data over the sampling time period comprises determining pressure and flow velocity data for the one or more vessels over the sampling time period. Aspect 17. 
The computer-implemented method of aspect 1, wherein determining the flow data over the sampling time period comprises determining pressure and flow velocity data for a plurality of connected vessels in the vessel inspection region. Aspect 18. The computer-implemented method of aspect 1, wherein the computational fluid dynamics model comprises one or more of:multi-scale 3D Navier-Stokes simulations with reduced-order (lumped parameter) models; reduced-order Navier-Stokes (1D) simulations with reduced-order models; or reduced order model simulations (lumped parameter, 0D) models for the segmented vessel tree geometric models. Aspect 19. The computer-implemented method of aspect 1, wherein lumped parameter boundary condition parameters are determined by the fluid dynamics machine learning model for one or more vessels in the vessel inspection region. Aspect 20. The computer-implemented method of aspect 19, further comprising determining a lumped parameter model of flow for a first vessel and determining a lumped parameter model of flow for each vessel branching from the first vessel. Aspect 21. The computer-implemented method of aspect 1, further comprising:determining fractional flow reserve (FFR), instantaneous wave-free ratio (iFR), or quantitative flow ratio (QFR) for the one or more vessels from the flow data; anddetermining the state of vessel occlusion from the FFR, iFR or QFR for the one or more vessels. Aspect 22. The computer-implemented method of aspect 1, further comprising:determining coronary flow reserve (CFR) for the one or more vessels from the flow data, from one or more physiological states; anddetermining the state of microvascular disease from the CFR for the one or more vessels. Aspect 23. The computer-implemented method of aspect 22, wherein the one or more physiological states include a baseline physiological state and a hyperemic physiological state. Aspect 24. The computer-implemented method of aspect 1, wherein determining the state of vessel occlusion comprises determining a presence of stenosis in the one or more vessels. Aspect 25. The computer-implemented method of aspect 1, wherein determining the state of microvascular disease comprises determining the lumped parameter models on the boundaries of the vessels in the vessel inspection region at a plurality of hemodynamic states. Aspect 26. The computer-implemented method of aspect 1, further comprising: feeding a plurality of synthetic angiography images, a plurality of clinical angiography images, and a plurality of augmented angiography images to train the vessel segmentation machine learning model. Aspect 27. The computer-implemented method of aspect 1, wherein the angiography images captured over the sampling time period include angiography images captured during a baseline state and angiography images captured during a pharmacologically-induced hyperemia state. Aspect 28. 
A computer-implemented method for assessing coronary artery disease, the method comprising:(a) receiving, by one or more processors, a plurality of angiography images of a vessel inspection region for a subject, wherein the angiography images are captured over a sampling time period and wherein the vessel inspection region comprises one or more vessels;(b) applying, by the one or more processors, the angiography images to a vessel segmentation machine learning model and generating, using the vessel segmentation machine learning model, two-dimensional (2D) segmented vessel images for the one or more vessels;(c) by the one or more processors, generating, from the 2D segmented vessel images, a one-dimensional (1D) segmented vessel tree geometric model of the one or more vessels;(d) applying, by the one or more processors, the 1D segmented vessel tree geometric model to a fluid dynamics machine learning model to assimilate, using the fluid dynamics machine learning model, flow data over the sampling time period for the one or more vessels;(e) applying, by the one or more processors, the 1D segmented vessel tree geometric model and the assimilated flow data to a graph-theory based reduced order model based on a computational fluid dynamics model; and(f) determining, by the one or more processors, a state of vessel occlusion. Aspect 29. The computer-implemented method of aspect 28, further comprising: determining, by the one or more processors, a state of microvascular disease for the one or more vessels within the vessel inspection region by performing (a)-(e) at at least two different hemodynamic states. Aspect 30. The computer-implemented method of aspect 28, wherein the angiography images captured over the sampling time period comprise angiography images captured during a baseline state and angiography images captured during a hyperemic state. Aspect 31. The computer-implemented method of aspect 30, wherein generating from the 2D segmented vessel images the 1D segmented vessel tree geometric model of vessels within the vessel inspection region comprises producing a skeletonization of the 3D segmented vessel tree geometric model, given by pathlines/centerlines in 3D-space of vessels included in the 3D segmented vessel tree geometric model, and a series of 2D cross-sectional contours separated by arbitrary distances along each pathline/centerline of the tree. Aspect 32. The computer-implemented method of aspect 28, wherein the vessel segmentation machine learning model is a convolutional neural network. Aspect 33. The computer-implemented method of aspect 28, further comprising:applying, by the one or more processors, to the received angiography images a de-noising process, a linear filtering process, an image size normalization process, and a pixel intensity normalization process. Aspect 34. The computer-implemented method of aspect 28, wherein the fluid dynamics machine learning model comprises at least one network of the type: convolutional neural network (CNN), autoencoder, or long short-term memory (LSTM), or a graph-theory based reduced order model of flow and pressure. Aspect 35. The computer-implemented method of aspect 28, wherein the fluid dynamics machine learning model is a Navier Stokes informed deep learning framework configured to determine pressure data and velocity data over a 1D vessel space. Aspect 36. 
The computer-implemented method of aspect 35, wherein the Navier Stokes informed deep learning framework comprises one or more methods of the type: Kalman Filtering, Physics-informed Neural Network, iterative assimilation algorithm based upon contrast arrival time at anatomical landmarks, and TIMI frame counting. Aspect 37. The computer-implemented method of aspect 28, wherein the fluid dynamics machine learning model is a graph-theory based reduced order model, obtained by defining a dense graph of discrepancies between ground truth data and a low-fidelity model of flow, given by a 1D non-linear theory model. Aspect 38. The computer-implemented method of aspect 37, where the ground truth data used to define the dense graph can be either in silico simulations of 3D Navier-Stokes equations or in vivo data acquired in the catheterization laboratory. Aspect 39. The computer-implemented method of aspect 37, where vertices of the dense graph are defined by exploring discrepancies between ground-truth data and a low-fidelity 1D non-linear model of flow over permutations of stenosis diameter, length, eccentricity, flow through the stenosis, and a combination thereof. Aspect 40. The computer-implemented method of aspect 37, where the graph-theory based reduced order model is obtained through non-local calculus, a deep neural network, or a direct traversal over the vertices of the generated graph. Aspect 41. The computer-implemented method of aspect 37, where the graph-theory based reduced order model may be an algebraic equation or an ordinary or partial differential equation. Aspect 42. The computer-implemented method of aspect 28, wherein determining the flow data over the sampling time period comprises determining pressure and flow velocity data for the one or more vessels over the sampling time period. Aspect 43. The computer-implemented method of aspect 42, wherein the one or more vessels comprises a plurality of connected vessels. Aspect 44. The computer-implemented method of aspect 28, wherein the computational fluid dynamics model comprises a graph-theory based reduced-order model simulation, with inputs given by the 1D segmented vessel tree geometric model, and boundary conditions on flow and pressure assimilated by the computational fluid dynamics machine learning model. Aspect 45. The computer-implemented method of aspect 28, further comprising determining a lumped parameter model of flow for each of the one or more vessels. Aspect 46. The computer-implemented method of aspect 28, further comprising:determining fractional flow reserve (FFR), instantaneous wave-free ratio (iFR), or quantitative flow ratio (QFR) for the one or more vessels from the flow data; anddetermining the state of vessel occlusion from the FFR, iFR or QFR for the one or more vessels. Aspect 47. The computer-implemented method of aspect 46, further comprising:determining coronary flow reserve (CFR) for the one or more vessels from the flow data, from one or more physiological states; anddetermining the state of microvascular disease from the CFR for the one or more vessels. Aspect 48. The computer-implemented method of aspect 28, wherein determining the state of vessel occlusion comprises determining a presence of stenosis in the one or more vessels. Aspect 49. 
The computer-implemented method of aspect 28, further comprising: feeding a plurality of synthetic angiography images, a plurality of clinical angiography images, and a plurality of augmented angiography images to train the vessel segmentation machine learning model. Aspect 50. A computing device configured to assess CAD, comprising: one or more processors and one or more computer-readable memories storing instructions that when executed cause the one or more processors to: receive, by one or more processors, angiography image data of a vessel inspection region for a subject, wherein the angiography image data comprises angiography images captured over a sampling time period; apply, by the one or more processors, the angiography image data to a vessel segmentation machine learning model and generate, using the vessel segmentation machine learning model, two-dimensional (2D) segmented vessel images for the vessel inspection region; by the one or more processors, generate from the 2D segmented vessel images a three-dimensional (3D) segmented vessel tree geometric model of vessels within the vessel inspection region; apply, by the one or more processors, the 3D segmented vessel tree model to a fluid dynamics machine learning model to assimilate, using the fluid dynamics machine learning model, flow data over the sampling time period for one or more vessels within the vessel inspection region; apply, by the one or more processors, the 3D segmented vessel tree model and the assimilated flow data to a 3D, high-fidelity, computational fluid dynamics model; and determine, by the one or more processors, a state of vessel occlusion for the one or more vessels within the vessel inspection region. Aspect 51. A computing device configured to assess CAD, comprising: one or more processors and one or more computer-readable memories storing instructions that when executed cause the one or more processors to: receive, by one or more processors, a plurality of angiography images of a vessel inspection region for a subject, wherein the angiography images are captured over a sampling time period and wherein the vessel inspection region comprises one or more vessels; apply, by the one or more processors, the angiography images to a vessel segmentation machine learning model and generate, using the vessel segmentation machine learning model, two-dimensional (2D) segmented vessel images for the one or more vessels; by the one or more processors, generate, from the 2D segmented vessel images, a one-dimensional (1D) segmented vessel tree geometric model of the one or more vessels; apply, by the one or more processors, the 1D segmented vessel tree model to a fluid dynamics machine learning model to assimilate, using the fluid dynamics machine learning model, flow data over the sampling time period for the one or more vessels; apply, by the one or more processors, the 1D segmented vessel tree model and the assimilated flow data to a graph-theory based reduced order model based on a computational fluid dynamics model; and determine, by the one or more processors, a state of vessel occlusion. Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. 
Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein. Additionally, certain embodiments are described herein as including logic or a number of routines, subroutines, applications, or instructions. These may constitute either software (e.g., code embodied on a non-transitory, machine-readable medium) or hardware. In hardware, the routines, etc., are tangible units capable of performing certain operations and may be configured or arranged in a certain manner. In example embodiments, one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion such as a Contrast Agent Injection System shown inFIG.2) as a hardware module that operates to perform certain operations as described herein. In various embodiments, a hardware module may be implemented mechanically or electronically. For example, a hardware module may comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)) to perform certain operations. A hardware module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations. Accordingly, the term “hardware module” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where the hardware modules comprise a general-purpose processor configured using software, the general-purpose processor may be configured as respective different hardware modules at different times. Software may accordingly configure a processor, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time. Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple of such hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) that connect the hardware modules. 
In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory device to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules. Similarly, the methods or routines described herein may be at least partially processor-implemented. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented hardware modules. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment or as a server farm), while in other embodiments the processors may be distributed across a number of locations. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the one or more processors or processor-implemented modules may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the one or more processors or processor-implemented modules may be distributed across a number of geographic locations. Unless specifically stated otherwise, discussions herein using words such as “processing,” “computing,” “calculating,” “determining,” “presenting,” “displaying,” or the like may refer to actions or processes of a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or a combination thereof), registers, or other machine components that receive, store, transmit, or display information. As used herein any reference to “one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment. Some embodiments may be described using the expression “coupled” and “connected” along with their derivatives. 
For example, some embodiments may be described using the term “coupled” to indicate that two or more elements are in direct physical or electrical contact. The term “coupled,” however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. The embodiments are not limited in this context. Those skilled in the art will recognize that a wide variety of modifications, alterations, and combinations can be made with respect to the above described embodiments without departing from the scope of the invention, and that such modifications, alterations, and combinations are to be viewed as being within the ambit of the inventive concept. While the present invention has been described with reference to specific examples, which are intended to be illustrative only and not to be limiting of the invention, it will be apparent to those of ordinary skill in the art that changes, additions and/or deletions may be made to the disclosed embodiments without departing from the spirit and scope of the invention. The foregoing description is given for clearness of understanding; and no unnecessary limitations should be understood therefrom, as modifications within the scope of the invention may be apparent to those having ordinary skill in the art. | 72,661 |
11861852 | DETAILED DESCRIPTION Position tracking systems are used to track the physical positions of people and/or objects in a physical space (e.g., a store). These systems typically use a sensor (e.g., a camera) to detect the presence of a person and/or object and a computer to determine the physical position of the person and/or object based on signals from the sensor. In a store setting, other types of sensors can be installed to track the movement of inventory within the store. For example, weight sensors can be installed on racks and shelves to determine when items have been removed from those racks and shelves. By tracking both the positions of persons in a store and when items have been removed from shelves, it is possible for the computer to determine which person in the store removed the item and to charge that person for the item without needing to ring up the item at a register. In other words, the person can walk into the store, take items, and leave the store without stopping for the conventional checkout process. For larger physical spaces (e.g., convenience stores and grocery stores), additional sensors can be installed throughout the space to track the position of people and/or objects as they move about the space. For example, additional cameras can be added to track positions in the larger space and additional weight sensors can be added to track additional items and shelves. Increasing the number of cameras poses a technical challenge because each camera only provides a field of view for a portion of the physical space. This means that information from each camera needs to be processed independently to identify and track people and objects within the field of view of a particular camera. The information from each camera then needs to be combined and processed as a collective in order to track people and objects within the physical space. Additional information is disclosed in U.S. patent application Ser. No. 16/663,633 entitled, “Scalable Position Tracking System For Tracking Position In Large Spaces” and U.S. patent application Ser. No. 16/664,470 entitled, “Customer-Based Video Feed” which are both hereby incorporated by reference herein as if reproduced in their entirety. Tracking System Overview FIG.1is a schematic diagram of an embodiment of a tracking system100that is configured to track objects within a space102. As discussed above, the tracking system100may be installed in a space102(e.g. a store) so that shoppers need not engage in the conventional checkout process. Although the example of a store is used in this disclosure, this disclosure contemplates that the tracking system100may be installed and used in any type of physical space (e.g. a room, an office, an outdoor stand, a mall, a supermarket, a convenience store, a pop-up store, a warehouse, a storage center, an amusement park, an airport, an office building, etc.). Generally, the tracking system100(or components thereof) is used to track the positions of people and/or objects within these spaces102for any suitable purpose. For example, at an airport, the tracking system100can track the positions of travelers and employees for security purposes. As another example, at an amusement park, the tracking system100can track the positions of park guests to gauge the popularity of attractions. As yet another example, at an office building, the tracking system100can track the positions of employees and staff to monitor their productivity levels. 
InFIG.1, the space102is a store that comprises a plurality of items that are available for purchase. The tracking system100may be installed in the store so that shoppers need not engage in the conventional checkout process to purchase items from the store. In this example, the store may be a convenience store or a grocery store. In other examples, the store may not be a physical building, but a physical space or environment where shoppers may shop. For example, the store may be a grab and go pantry at an airport, a kiosk in an office building, an outdoor market at a park, etc. InFIG.1, the space102comprises one or more racks112. Each rack112comprises one or more shelves that are configured to hold and display items. In some embodiments, the space102may comprise refrigerators, coolers, freezers, or any other suitable type of furniture for holding or displaying items for purchase. The space102may be configured as shown or in any other suitable configuration. In this example, the space102is a physical structure that includes an entryway through which shoppers can enter and exit the space102. The space102comprises an entrance area114and an exit area116. In some embodiments, the entrance area114and the exit area116may overlap or are the same area within the space102. The entrance area114is adjacent to an entrance (e.g. a door) of the space102where a person enters the space102. In some embodiments, the entrance area114may comprise a turnstile or gate that controls the flow of traffic into the space102. For example, the entrance area114may comprise a turnstile that only allows one person to enter the space102at a time. The entrance area114may be adjacent to one or more devices (e.g. sensors108or a scanner115) that identify a person as they enter space102. As an example, a sensor108may capture one or more images of a person as they enter the space102. As another example, a person may identify themselves using a scanner115. Examples of scanners115include, but are not limited to, a QR code scanner, a barcode scanner, a near-field communication (NFC) scanner, or any other suitable type of scanner that can receive an electronic code embedded with information that uniquely identifies a person. For instance, a shopper may scan a personal device (e.g. a smart phone) on a scanner115to enter the store. When the shopper scans their personal device on the scanner115, the personal device may provide the scanner115with an electronic code that uniquely identifies the shopper. After the shopper is identified and/or authenticated, the shopper is allowed to enter the store. In one embodiment, each shopper may have a registered account with the store to receive an identification code for the personal device. After entering the space102, the shopper may move around the interior of the store. As the shopper moves throughout the space102, the shopper may shop for items by removing items from the racks112. The shopper can remove multiple items from the racks112in the store to purchase those items. When the shopper has finished shopping, the shopper may leave the store via the exit area116. The exit area116is adjacent to an exit (e.g. a door) of the space102where a person leaves the space102. In some embodiments, the exit area116may comprise a turnstile or gate that controls the flow of traffic out of the space102. For example, the exit area116may comprise a turnstile that only allows one person to leave the space102at a time. In some embodiments, the exit area116may be adjacent to one or more devices (e.g. 
sensors108or a scanner115) that identify a person as they leave the space102. For example, a shopper may scan their personal device on the scanner115before a turnstile or gate will open to allow the shopper to exit the store. When the shopper scans their personal device on the scanner115, the personal device may provide an electronic code that uniquely identifies the shopper to indicate that the shopper is leaving the store. When the shopper leaves the store, an account for the shopper is charged for the items that the shopper removed from the store. Through this process the tracking system100allows the shopper to leave the store with their items without engaging in a conventional checkout process. Global Plane Overview In order to describe the physical location of people and objects within the space102, a global plane104is defined for the space102. The global plane104is a user-defined coordinate system that is used by the tracking system100to identify the locations of objects within a physical domain (i.e. the space102). Referring toFIG.1as an example, a global plane104is defined such that an x-axis and a y-axis are parallel with a floor of the space102. In this example, the z-axis of the global plane104is perpendicular to the floor of the space102. A location in the space102is defined as a reference location101or origin for the global plane104. InFIG.1, the global plane104is defined such that reference location101corresponds with a corner of the store. In other examples, the reference location101may be located at any other suitable location within the space102. In this configuration, physical locations within the space102can be described using (x,y) coordinates in the global plane104. As an example, the global plane104may be defined such that one unit in the global plane104corresponds with one meter in the space102. In other words, an x-value of one in the global plane104corresponds with an offset of one meter from the reference location101in the space102. In this example, a person that is standing in the corner of the space102at the reference location101will have an (x,y) coordinate with a value of (0,0) in the global plane104. If the person moves two meters in the positive x-axis direction and two meters in the positive y-axis direction, then their new (x,y) coordinate will have a value of (2,2). In other examples, the global plane104may be expressed using inches, feet, or any other suitable measurement units. Once the global plane104is defined for the space102, the tracking system100uses (x,y) coordinates of the global plane104to track the location of people and objects within the space102. For example, as a shopper moves within the interior of the store, the tracking system100may track their current physical location within the store using (x,y) coordinates of the global plane104. Tracking System Hardware In one embodiment, the tracking system100comprises one or more clients105, one or more servers106, one or more scanners115, one or more sensors108, and one or more weight sensors110. The one or more clients105, one or more servers106, one or more scanners115, one or more sensors108, and one or more weight sensors110may be in signal communication with each other over a network107. 
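Referring back to the global plane104example above, the following minimal sketch converts a physical offset from the reference location101into global plane coordinates; the function name and the meters_per_unit scaling are illustrative assumptions rather than part of the disclosure.

    def to_global_plane(dx_meters, dy_meters, meters_per_unit=1.0):
        # Convert an offset (in meters) from the reference location101into (x, y)
        # coordinates of the global plane104, where one unit corresponds to one meter.
        return (dx_meters / meters_per_unit, dy_meters / meters_per_unit)

    # The example from the text: two meters along the positive x-axis and two meters
    # along the positive y-axis from the corner of the store gives (2.0, 2.0).
    assert to_global_plane(2.0, 2.0) == (2.0, 2.0)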
The network107may be any suitable type of wireless and/or wired network including, but not limited to, all or a portion of the Internet, an Intranet, a Bluetooth network, a WIFI network, a Zigbee network, a Z-wave network, a private network, a public network, a peer-to-peer network, the public switched telephone network, a cellular network, a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), and a satellite network. The network107may be configured to support any suitable type of communication protocol as would be appreciated by one of ordinary skill in the art. The tracking system100may be configured as shown or in any other suitable configuration.

Sensors

The tracking system100is configured to use sensors108to identify and track the location of people and objects within the space102. For example, the tracking system100uses sensors108to capture images or videos of a shopper as they move within the store. The tracking system100may process the images or videos provided by the sensors108to identify the shopper, the location of the shopper, and/or any items that the shopper picks up. Examples of sensors108include, but are not limited to, cameras, video cameras, web cameras, printed circuit board (PCB) cameras, depth sensing cameras, time-of-flight cameras, LiDARs, structured light cameras, or any other suitable type of imaging device. Each sensor108is positioned above at least a portion of the space102and is configured to capture overhead view images or videos of at least a portion of the space102. In one embodiment, the sensors108are generally configured to produce videos of portions of the interior of the space102. These videos may include frames or images302of shoppers within the space102. Each frame302is a snapshot of the people and/or objects within the field of view of a particular sensor108at a particular moment in time. A frame302may be a two-dimensional (2D) image or a three-dimensional (3D) image (e.g. a point cloud or a depth map). In this configuration, each frame302is of a portion of a global plane104for the space102. Referring toFIG.4as an example, a frame302comprises a plurality of pixels that are each associated with a pixel location402within the frame302. The tracking system100uses pixel locations402to describe the location of an object with respect to pixels in a frame302from a sensor108. In the example shown inFIG.4, the tracking system100can identify the location of different markers304within the frame302using their respective pixel locations402. The pixel location402corresponds with a pixel row and a pixel column where a pixel is located within the frame302. In one embodiment, each pixel is also associated with a pixel value404that indicates a depth or distance measurement in the global plane104. For example, a pixel value404may correspond with a distance between a sensor108and a surface in the space102. Each sensor108has a limited field of view within the space102. This means that each sensor108may only be able to capture a portion of the space102within its field of view. To provide complete coverage of the space102, the tracking system100may use multiple sensors108configured as a sensor array. InFIG.1, the sensors108are configured as a three by four sensor array. In other examples, a sensor array may comprise any other suitable number and/or configuration of sensors108. In one embodiment, the sensor array is positioned parallel with the floor of the space102.
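As a loose illustration of the frame data described above (the array size, values, and names are assumptions for illustration only, not details from the disclosure), a depth frame302can be pictured as a two-dimensional array indexed by pixel row and pixel column, where each entry plays the role of a pixel value404:

```python
import numpy as np

# Hypothetical 480 x 640 depth frame: frame[row, column] stands in for a
# pixel value404, i.e. a distance in meters between the sensor and the
# nearest surface seen at that pixel location402.
frame = np.full((480, 640), 3.0)   # a flat floor roughly 3 m below the sensor
frame[200:240, 300:340] = 1.2      # something closer to the sensor, e.g. a person

def depth_at(frame: np.ndarray, pixel_row: int, pixel_col: int) -> float:
    """Return the depth measurement stored at a (pixel row, pixel column) location."""
    return float(frame[pixel_row, pixel_col])

print(depth_at(frame, 220, 320))  # 1.2 -> a surface about 1.2 m from the sensor
print(depth_at(frame, 10, 10))    # 3.0 -> the floor
```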
In some embodiments, the sensor array is configured such that adjacent sensors108have at least partially overlapping fields of view. In this configuration, each sensor108captures images or frames302of a different portion of the space102which allows the tracking system100to monitor the entire space102by combining information from frames302of multiple sensors108. The tracking system100is configured to map pixel locations402within each sensor108to physical locations in the space102using homographies118. A homography118is configured to translate between pixel locations402in a frame302captured by a sensor108and (x,y) coordinates in the global plane104(i.e. physical locations in the space102). The tracking system100uses homographies118to correlate a pixel location402in a particular sensor108with a physical location in the space102. In other words, the tracking system100uses homographies118to determine where a person is physically located in the space102based on their pixel location402within a frame302from a sensor108. Since the tracking system100uses multiple sensors108to monitor the entire space102, each sensor108is uniquely associated with a different homography118based on the sensor's108physical location within the space102. This configuration allows the tracking system100to determine where a person is physically located within the entire space102based on which sensor108they appear in and their location within a frame302captured by that sensor108. Additional information about homographies118is described inFIGS.2-7.

Weight Sensors

The tracking system100is configured to use weight sensors110to detect and identify items that a person picks up within the space102. For example, the tracking system100uses weight sensors110that are located on the shelves of a rack112to detect when a shopper removes an item from the rack112. Each weight sensor110may be associated with a particular item which allows the tracking system100to identify which item the shopper picked up. A weight sensor110is generally configured to measure the weight of objects (e.g. products) that are placed on or near the weight sensor110. For example, a weight sensor110may comprise a transducer that converts an input mechanical force (e.g. weight, tension, compression, pressure, or torque) into an output electrical signal (e.g. current or voltage). As the input force increases, the output electrical signal may increase proportionally. The tracking system100is configured to analyze the output electrical signal to determine an overall weight for the items on the weight sensor110. Examples of weight sensors110include, but are not limited to, a piezoelectric load cell or a pressure sensor. For example, a weight sensor110may comprise one or more load cells that are configured to communicate electrical signals that indicate a weight experienced by the load cells. For instance, the load cells may produce an electrical current that varies depending on the weight or force experienced by the load cells. The load cells are configured to communicate the produced electrical signals to a server106and/or a client105for processing. Weight sensors110may be positioned onto furniture (e.g. racks112) within the space102to hold one or more items. For example, one or more weight sensors110may be positioned on a shelf of a rack112. As another example, one or more weight sensors110may be positioned on a shelf of a refrigerator or a cooler. As another example, one or more weight sensors110may be integrated with a shelf of a rack112.
In other examples, weight sensors110may be positioned in any other suitable location within the space102. In one embodiment, a weight sensor110may be associated with a particular item. For instance, a weight sensor110may be configured to hold one or more of a particular item and to measure a combined weight for the items on the weight sensor110. When an item is picked up from the weight sensor110, the weight sensor110is configured to detect a weight decrease. In this example, the weight sensor110is configured to use stored information about the weight of the item to determine a number of items that were removed from the weight sensor110. For example, a weight sensor110may be associated with an item that has an individual weight of eight ounces. When the weight sensor110detects a weight decrease of twenty-four ounces, the weight sensor110may determine that three of the items were removed from the weight sensor110. The weight sensor110is also configured to detect a weight increase when an item is added to the weight sensor110. For example, if an item is returned to the weight sensor110, then the weight sensor110will determine a weight increase that corresponds with the individual weight for the item associated with the weight sensor110.

Servers

A server106may be formed by one or more physical devices configured to provide services and resources (e.g. data and/or hardware resources) for the tracking system100. Additional information about the hardware configuration of a server106is described inFIG.38. In one embodiment, a server106may be operably coupled to one or more sensors108and/or weight sensors110. The tracking system100may comprise any suitable number of servers106. For example, the tracking system100may comprise a first server106that is in signal communication with a first plurality of sensors108in a sensor array and a second server106that is in signal communication with a second plurality of sensors108in the sensor array. As another example, the tracking system100may comprise a first server106that is in signal communication with a plurality of sensors108and a second server106that is in signal communication with a plurality of weight sensors110. In other examples, the tracking system100may comprise any other suitable number of servers106that are each in signal communication with one or more sensors108and/or weight sensors110. A server106may be configured to process data (e.g. frames302and/or video) for one or more sensors108and/or weight sensors110. In one embodiment, a server106may be configured to generate homographies118for sensors108. As discussed above, the generated homographies118allow the tracking system100to determine where a person is physically located within the entire space102based on which sensor108they appear in and their location within a frame302captured by that sensor108. In this configuration, the server106determines coefficients for a homography118based on the physical location of markers in the global plane104and the pixel locations of the markers in an image from a sensor108. Examples of the server106performing this process are described inFIGS.2-7. In one embodiment, a server106is configured to calibrate a shelf position within the global plane104using sensors108. This process allows the tracking system100to detect when a rack112or sensor108has moved from its original location within the space102. In this configuration, the server106periodically compares the current shelf location of a rack112to an expected shelf location for the rack112using a sensor108.
In the event that the current shelf location does not match the expected shelf location, the server106will use one or more other sensors108to determine whether the rack112has moved or whether the first sensor108has moved. An example of the server106performing this process is described inFIGS.8and9. In one embodiment, a server106is configured to hand off tracking information for an object (e.g. a person) as it moves between the fields of view of adjacent sensors108. This process allows the tracking system100to track people as they move within the interior of the space102. In this configuration, the server106tracks an object's movement within the field of view of a first sensor108and then hands off tracking information (e.g. an object identifier) for the object as it enters the field of view of a second adjacent sensor108. An example of the server106performing this process is described inFIGS.10and11. In one embodiment, a server106is configured to detect shelf interactions using a virtual curtain. This process allows the tracking system100to identify items that a person picks up from a rack112. In this configuration, the server106is configured to process an image captured by a sensor108to determine where a person is interacting with a shelf of a rack112. The server106uses a predetermined zone within the image as a virtual curtain that is used to determine which region and which shelf of a rack112a person is interacting with. An example of the server106performing this process is described inFIGS.12-14. In one embodiment, a server106is configured to detect when an item has been picked up from a rack112and to determine which person to assign the item to using a predefined zone that is associated with the rack112. This process allows the tracking system100to associate items on a rack112with the person that picked up the item. In this configuration, the server106detects that an item has been picked up using a weight sensor110. The server106then uses a sensor108to identify a person within a predefined zone that is associated with the rack112. Once the item and the person have been identified, the server106will add the item to a digital cart that is associated with the identified person. An example of the server106performing this process is described inFIGS.15and18. In one embodiment, a server106is configured to identify an object that has a non-uniform weight and to assign the item to a person's digital cart. This process allows the tracking system100to identify items that a person picks up that cannot be identified based on just their weight. For example, the weight of fresh food is not constant and will vary from item to item. In this configuration, the server106uses a sensor108to identify markers (e.g. text or symbols) on an item that has been picked up. The server106uses the identified markers to then identify which item was picked up. The server106then uses the sensor108to identify a person within a predefined zone that is associated with the rack112. Once the item and the person have been identified, the server106will add the item to a digital cart that is associated with the identified person. An example of the server106performing this process is described inFIGS.16and18. In one embodiment, a server106is configured to identify items that have been misplaced on a rack112. This process allows the tracking system100to remove items from a shopper's digital cart when the shopper puts down an item regardless of whether they put the item back in its proper location.
For example, a person may put back an item in the wrong location on the rack112or on the wrong rack112. In this configuration, the server106uses a weight sensor110to detect that an item has been put back on a rack112and to determine that the item is not in the correct location based on its weight. The server106then uses a sensor108to identify the person that put the item on the rack112and analyzes their digital cart to determine which item they put back based on the weights of the items in their digital cart. An example of the server106performing this process is described inFIGS.17and18.

Clients

In some embodiments, one or more sensors108and/or weight sensors110are operably coupled to a server106via a client105. In one embodiment, the tracking system100comprises a plurality of clients105that may each be operably coupled to one or more sensors108and/or weight sensors110. For example, a first client105may be operably coupled to one or more sensors108and/or weight sensors110and a second client105may be operably coupled to one or more other sensors108and/or weight sensors110. A client105may be formed by one or more physical devices configured to process data (e.g. frames302and/or video) for one or more sensors108and/or weight sensors110. A client105may act as an intermediary for exchanging data between a server106and one or more sensors108and/or weight sensors110. The combination of one or more clients105and a server106may also be referred to as a tracking subsystem. In this configuration, a client105may be configured to provide image processing capabilities for images or frames302that are captured by a sensor108. The client105is further configured to send images, processed images, or any other suitable type of data to the server106for further processing and analysis. In some embodiments, a client105may be configured to perform one or more of the processes described above for the server106.

Sensor Mapping Process

FIG.2is a flowchart of an embodiment of a sensor mapping method200for the tracking system100. The tracking system100may employ method200to generate a homography118for a sensor108. As discussed above, a homography118allows the tracking system100to determine where a person is physically located within the entire space102based on which sensor108they appear in and their location within a frame302captured by that sensor108. Once generated, the homography118can be used to translate between pixel locations402in images (e.g. frames302) captured by a sensor108and (x,y) coordinates306in the global plane104(i.e. physical locations in the space102). The following is a non-limiting example of the process for generating a homography118for a single sensor108. This same process can be repeated for generating a homography118for other sensors108. At step202, the tracking system100receives (x,y) coordinates306for markers304in the space102. Referring toFIG.3as an example, each marker304is an object that identifies a known physical location within the space102. The markers304are used to demarcate locations in the physical domain (i.e. the global plane104) that can be mapped to pixel locations402in a frame302from a sensor108. In this example, the markers304are represented as stars on the floor of the space102. A marker304may be formed of any suitable object that can be observed by a sensor108. For example, a marker304may be tape or a sticker that is placed on the floor of the space102. As another example, a marker304may be a design or marking on the floor of the space102.
In other examples, markers304may be positioned in any other suitable location within the space102that is observable by a sensor108. For instance, one or more markers304may be positioned on top of a rack112. In one embodiment, the (x,y) coordinates306for markers304are provided by an operator. For example, an operator may manually place markers304on the floor of the space102. The operator may determine an (x,y) location306for a marker304by measuring the distance between the marker304and the reference location101for the global plane104. The operator may then provide the determined (x,y) location306to a server106or a client105of the tracking system100as an input. Referring to the example inFIG.3, the tracking system100may receive a first (x,y) coordinate306A for a first marker304A in a space102and a second (x,y) coordinate306B for a second marker304B in the space102. The first (x,y) coordinate306A describes the physical location of the first marker304A with respect to the global plane104of the space102. The second (x,y) coordinate306B describes the physical location of the second marker304B with respect to the global plane104of the space102. The tracking system100may repeat the process of obtaining (x,y) coordinates306for any suitable number of additional markers304within the space102. Once the tracking system100knows the physical location of the markers304within the space102, the tracking system100then determines where the markers304are located with respect to the pixels in the frame302of a sensor108. Returning toFIG.2at step204, the tracking system100receives a frame302from a sensor108. Referring toFIG.4as an example, the sensor108captures an image or frame302of the global plane104for at least a portion of the space102. In this example, the frame302comprises a plurality of markers304. Returning toFIG.2at step206, the tracking system100identifies markers304within the frame302of the sensor108. In one embodiment, the tracking system100uses object detection to identify markers304within the frame302. For example, the markers304may have known features (e.g. shape, pattern, color, text, etc.) that the tracking system100can search for within the frame302to identify a marker304. Referring to the example inFIG.3, each marker304has a star shape. In this example, the tracking system100may search the frame302for star shaped objects to identify the markers304within the frame302. The tracking system100may identify the first marker304A, the second marker304B, and any other markers304within the frame302. In other examples, the tracking system100may use any other suitable features for identifying markers304within the frame302. In other embodiments, the tracking system100may employ any other suitable image processing technique for identifying markers304within the frame302. For example, the markers304may have a known color or pixel value. In this example, the tracking system100may use thresholds to identify the markers304within the frame302that correspond with the color or pixel value of the markers304. Returning toFIG.2at step208, the tracking system100determines the number of identified markers304within the frame302. Here, the tracking system100counts the number of markers304that were detected within the frame302. Referring to the example inFIG.3, the tracking system100detects eight markers304within the frame302. Returning toFIG.2at step210, the tracking system100determines whether the number of identified markers304is greater than or equal to a predetermined threshold value.
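Purely as a hedged sketch of the identify, count, and compare loop of steps206-210(the OpenCV calls, the brightness threshold, and the minimum-area filter are illustrative assumptions, not details from the disclosure), marker candidates might be isolated by thresholding and counted before the comparison against the predetermined threshold value:

```python
import cv2
import numpy as np

MIN_MARKERS = 6  # the predetermined threshold value; six is used as an example in the text

def count_candidate_markers(frame_bgr: np.ndarray) -> int:
    """Count bright, marker-like blobs in a frame; a stand-in for marker detection."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Assume the markers are much brighter than the floor (an illustrative assumption).
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Ignore tiny specks so that noise is not counted as a marker.
    return sum(1 for c in contours if cv2.contourArea(c) > 25.0)

def enough_markers(frame_bgr: np.ndarray) -> bool:
    """Step 210: is the number of identified markers at least the threshold value?"""
    return count_candidate_markers(frame_bgr) >= MIN_MARKERS
```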
In some embodiments, the predetermined threshold value is proportional to a level of accuracy for generating a homography118for a sensor108. Increasing the predetermined threshold value may increase the accuracy when generating a homography118while decreasing the predetermined threshold value may decrease the accuracy when generating a homography118. As an example, the predetermined threshold value may be set to a value of six. In the example shown inFIG.3, the tracking system100identified eight markers304which is greater than the predetermined threshold value. In other examples, the predetermined threshold value may be set to any other suitable value. The tracking system100returns to step204in response to determining that the number of identified markers304is less than the predetermined threshold value. In this case, the tracking system100returns to step204to capture another frame302of the space102using the same sensor108to try to detect more markers304. Here, the tracking system100tries to obtain a new frame302that includes a number of markers304that is greater than or equal to the predetermined threshold value. For example, the tracking system100may receive a new frame302of the space102after an operator adds one or more additional markers304to the space102. As another example, the tracking system100may receive a new frame302after lighting conditions have been changed to improve the detectability of the markers304within the frame302. In other examples, the tracking system100may receive a new frame302after any kind of change that improves the detectability of the markers304within the frame302. The tracking system100proceeds to step212in response to determining that the number of identified markers304is greater than or equal to the predetermined threshold value. At step212, the tracking system100determines pixel locations402in the frame302for the identified markers304. For example, the tracking system100determines a first pixel location402A within the frame302that corresponds with the first marker304A and a second pixel location402B within the frame302that corresponds with the second marker304B. The first pixel location402A comprises a first pixel row and a first pixel column indicating where the first marker304A is located in the frame302. The second pixel location402B comprises a second pixel row and a second pixel column indicating where the second marker304B is located in the frame302. At step214, the tracking system100generates a homography118for the sensor108based on the pixel locations402of identified markers304within the frame302of the sensor108and the (x,y) coordinates306of the identified markers304in the global plane104. In one embodiment, the tracking system100correlates the pixel location402for each of the identified markers304with its corresponding (x,y) coordinate306. Continuing with the example inFIG.3, the tracking system100associates the first pixel location402A for the first marker304A with the first (x,y) coordinate306A for the first marker304A. The tracking system100also associates the second pixel location402B for the second marker304B with the second (x,y) coordinate306B for the second marker304B. The tracking system100may repeat the process of associating pixel locations402and (x,y) coordinates306for all of the identified markers304.
The tracking system100then determines a relationship between the pixel locations402of identified markers304within the frame302of the sensor108and the (x,y) coordinates306of the identified markers304in the global plane104to generate a homography118for the sensor108. The generated homography118allows the tracking system100to map pixel locations402in a frame302from the sensor108to (x,y) coordinates306in the global plane104. Additional information about a homography118is described inFIGS.5A and5B. Once the tracking system100generates the homography118for the sensor108, the tracking system100stores an association between the sensor108and the generated homography118in memory (e.g. memory3804). The tracking system100may repeat the process described above to generate and associate homographies118with other sensors108. Continuing with the example inFIG.3, the tracking system100may receive a second frame302from a second sensor108. In this example, the second frame302comprises the first marker304A and the second marker304B. The tracking system100may determine a third pixel location402in the second frame302for the first marker304A, a fourth pixel location402in the second frame302for the second marker304B, and pixel locations402for any other markers304. The tracking system100may then generate a second homography118based on the third pixel location402in the second frame302for the first marker304A, the fourth pixel location402in the second frame302for the second marker304B, the first (x,y) coordinate306A in the global plane104for the first marker304A, the second (x,y) coordinate306B in the global plane104for the second marker304B, and pixel locations402and (x,y) coordinates306for other markers304. The second homography118comprises coefficients that translate between pixel locations402in the second frame302and physical locations (e.g. (x,y) coordinates306) in the global plane104. The coefficients of the second homography118are different from the coefficients of the homography118that is associated with the first sensor108. This process uniquely associates each sensor108with a corresponding homography118that maps pixel locations402from the sensor108to (x,y) coordinates306in the global plane104.

Homographies

An example of a homography118for a sensor108is described inFIGS.5A and5B. Referring toFIG.5A, a homography118comprises a plurality of coefficients configured to translate between pixel locations402in a frame302and physical locations (e.g. (x,y) coordinates306) in the global plane104. In this example, the homography118is configured as a matrix and the coefficients of the homography118are represented as H11, H12, H13, H14, H21, H22, H23, H24, H31, H32, H33, H34, H41, H42, H43, and H44. The tracking system100may generate the homography118by defining a relationship or function between pixel locations402in a frame302and physical locations (e.g. (x,y) coordinates306) in the global plane104using the coefficients. For example, the tracking system100may define one or more functions using the coefficients and may perform a regression (e.g. least squares regression) to solve for values for the coefficients that project pixel locations402of a frame302of a sensor108to (x,y) coordinates306in the global plane104.
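The disclosure describes a homography118with coefficients H11 through H44; the sketch below instead uses the common planar three-by-three formulation fitted by OpenCV, simply to make the least-squares idea, and the forward and inverse mappings discussed in the passages that follow, concrete. The point values are invented, and the correspondences here happen to follow a simple one-pixel-per-centimeter scaling so the result can be checked by eye; a real overhead sensor would produce a less trivial mapping.

```python
import cv2
import numpy as np

# Pixel locations402 of identified markers in one sensor's frame302, given as
# (column, row), paired with the (x, y) coordinates306 of the same markers in
# the global plane104 (in meters).
pixel_pts = np.array([[100, 80], [520, 90], [510, 400],
                      [110, 390], [300, 240], [205, 160]], dtype=np.float32)
world_pts = pixel_pts * 0.01  # invented ground truth: 1 pixel = 1 cm

# Least-squares fit of a planar homography from the marker correspondences.
H, _ = cv2.findHomography(pixel_pts, world_pts)

def pixel_to_world(H: np.ndarray, pixel_xy) -> tuple:
    """Project a (column, row) pixel location to an (x, y) coordinate in the global plane."""
    v = H @ np.array([pixel_xy[0], pixel_xy[1], 1.0])
    return (v[0] / v[2], v[1] / v[2])

def world_to_pixel(H: np.ndarray, world_xy) -> tuple:
    """Use the inverse homography to go from the global plane back to a pixel location."""
    return pixel_to_world(np.linalg.inv(H), world_xy)

print(pixel_to_world(H, (300, 240)))  # approximately (3.0, 2.4)
print(world_to_pixel(H, (3.0, 2.4)))  # approximately (300, 240)
```

With only the minimum number of correspondences the fit is exact; with more markers304than the minimum, the regression spreads any measurement error across all correspondences, which is consistent with the accuracy motivation for the predetermined threshold value discussed earlier.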
Referring to the example inFIG.3, the homography118for the sensor108is configured to project the first pixel location402A in the frame302for the first marker304A to the first (x,y) coordinate306A in the global plane104for the first marker304A and to project the second pixel location402B in the frame302for the second marker304B to the second (x,y) coordinate306B in the global plane104for the second marker304B. In other examples, the tracking system100may solve for coefficients of the homography118using any other suitable technique. In the example shown inFIG.5A, the z-value at the pixel location402may correspond with a pixel value404. In this case, the homography118is further configured to translate between pixel values404in a frame302and z-coordinates (e.g. heights or elevations) in the global plane104.

Using Homographies

Once the tracking system100generates a homography118, the tracking system100may use the homography118to determine the location of an object (e.g. a person) within the space102based on the pixel location402of the object in a frame302of a sensor108. For example, the tracking system100may perform matrix multiplication between a pixel location402in a first frame302and a homography118to determine a corresponding (x,y) coordinate306in the global plane104. For example, the tracking system100receives a first frame302from a sensor108and determines a first pixel location402in the frame302for an object in the space102. The tracking system100may then apply the homography118that is associated with the sensor108to the first pixel location402of the object to determine a first (x,y) coordinate306that identifies a first x-value and a first y-value in the global plane104where the object is located. In some instances, the tracking system100may use multiple sensors108to determine the location of the object. Using multiple sensors108may provide more accuracy when determining where an object is located within the space102. In this case, the tracking system100uses homographies118that are associated with different sensors108to determine the location of an object within the global plane104. Continuing with the previous example, the tracking system100may receive a second frame302from a second sensor108. The tracking system100may determine a second pixel location402in the second frame302for the object in the space102. The tracking system100may then apply a second homography118that is associated with the second sensor108to the second pixel location402of the object to determine a second (x,y) coordinate306that identifies a second x-value and a second y-value in the global plane104where the object is located. When the first (x,y) coordinate306and the second (x,y) coordinate306are the same, the tracking system100may use either the first (x,y) coordinate306or the second (x,y) coordinate306as the physical location of the object within the space102. The tracking system100may employ any suitable clustering technique between the first (x,y) coordinate306and the second (x,y) coordinate306when the first (x,y) coordinate306and the second (x,y) coordinate306are not the same. In this case, the first (x,y) coordinate306and the second (x,y) coordinate306are different, so the tracking system100will need to determine the physical location of the object within the space102based on the first (x,y) location306and the second (x,y) location306.
For example, the tracking system100may generate an average (x,y) coordinate for the object by computing an average between the first (x,y) coordinate306and the second (x,y) coordinate306. As another example, the tracking system100may generate a median (x,y) coordinate for the object by computing a median between the first (x,y) coordinate306and the second (x,y) coordinate306. In other examples, the tracking system100may employ any other suitable technique to resolve differences between the first (x,y) coordinate306and the second (x,y) coordinate306. The tracking system100may use the inverse of the homography118to project from (x,y) coordinates306in the global plane104to pixel locations402in a frame302of a sensor108. For example, the tracking system100receives an (x,y) coordinate306in the global plane104for an object. The tracking system100identifies a homography118that is associated with a sensor108where the object is seen. The tracking system100may then apply the inverse homography118to the (x,y) coordinate306to determine a pixel location402where the object is located in the frame302for the sensor108. The tracking system100may compute the matrix inverse of the homography118when the homography118is represented as a matrix. Referring toFIG.5Bas an example, the tracking system100may perform matrix multiplication between an (x,y) coordinate306in the global plane104and the inverse homography118to determine a corresponding pixel location402in the frame302for the sensor108.

Sensor Mapping Using a Marker Grid

FIG.6is a flowchart of an embodiment of a sensor mapping method600for the tracking system100using a marker grid702. The tracking system100may employ method600to reduce the amount of time it takes to generate a homography118for a sensor108. For example, using a marker grid702reduces the amount of setup time required to generate a homography118for a sensor108. Typically, each marker304is placed within a space102and the physical location of each marker304is determined independently. This process is repeated for each sensor108in a sensor array. In contrast, a marker grid702is a portable surface that comprises a plurality of markers304. The marker grid702may be formed using carpet, fabric, poster board, foam board, vinyl, paper, wood, or any other suitable type of material. Each marker304is an object that identifies a particular location on the marker grid702. Examples of markers304include, but are not limited to, shapes, symbols, and text. The physical location of each marker304on the marker grid702is known and is stored in memory (e.g. marker grid information716). Using a marker grid702simplifies and speeds up the process of placing and determining the location of markers304because the marker grid702and its markers304can be quickly repositioned anywhere within the space102without having to individually move markers304or add new markers304to the space102. Once generated, the homography118can be used to translate between pixel locations402in a frame302captured by a sensor108and (x,y) coordinates306in the global plane104(i.e. physical locations in the space102). At step602, the tracking system100receives a first (x,y) coordinate306A for a first corner704of a marker grid702in a space102. Referring toFIG.7as an example, the marker grid702is configured to be positioned on a surface (e.g. the floor) within the space102that is observable by one or more sensors108.
In this example, the tracking system100receives a first (x,y) coordinate306A in the global plane104for a first corner704of the marker grid702. The first (x,y) coordinate306A describes the physical location of the first corner704with respect to the global plane104. In one embodiment, the first (x,y) coordinate306A is based on a physical measurement of a distance between a reference location101in the space102and the first corner704. For example, the first (x,y) coordinate306A for the first corner704of the marker grid702may be provided by an operator. In this example, an operator may manually place the marker grid702on the floor of the space102. The operator may determine an (x,y) location306for the first corner704of the marker grid702by measuring the distance between the first corner704of the marker grid702and the reference location101for the global plane104. The operator may then provide the determined (x,y) location306to a server106or a client105of the tracking system100as an input. In another embodiment, the tracking system100may receive a signal from a beacon located at the first corner704of the marker grid702that identifies the first (x,y) coordinate306A. An example of a beacon includes, but is not limited to, a Bluetooth beacon. For example, the tracking system100may communicate with the beacon and determine the first (x,y) coordinate306A based on the time-of-flight of a signal that is communicated between the tracking system100and the beacon. In other embodiments, the tracking system100may obtain the first (x,y) coordinate306A for the first corner704using any other suitable technique. Returning toFIG.6at step604, the tracking system100determines (x,y) coordinates306for the markers304on the marker grid702. Returning to the example inFIG.7, the tracking system100determines a second (x,y) coordinate306B for a first marker304A on the marker grid702. The tracking system100comprises marker grid information716that identifies offsets between markers304on the marker grid702and the first corner704of the marker grid702. In this example, the offset comprises a distance between the first corner704of the marker grid702and the first marker304A with respect to the x-axis and the y-axis of the global plane104. Using the marker grid information716, the tracking system100is able to determine the second (x,y) coordinate306B for the first marker304A by adding an offset associated with the first marker304A to the first (x,y) coordinate306A for the first corner704of the marker grid702. In one embodiment, the tracking system100determines the second (x,y) coordinate306B based at least in part on a rotation of the marker grid702. For example, the tracking system100may receive a fourth (x,y) coordinate306D that identifies an x-value and a y-value in the global plane104for a second corner706of the marker grid702. The tracking system100may obtain the fourth (x,y) coordinate306D for the second corner706of the marker grid702using a process similar to the process described in step602. The tracking system100determines a rotation angle712between the first (x,y) coordinate306A for the first corner704of the marker grid702and the fourth (x,y) coordinate306D for the second corner706of the marker grid702. In this example, the rotation angle712is about the first corner704of the marker grid702within the global plane104.
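A compact sketch of this corner, offset, and rotation computation, including the translation-plus-rotation step that the next passage walks through (the numeric values and function names are illustrative assumptions, not details from the disclosure):

```python
import math

def grid_rotation(corner1_xy, corner2_xy):
    """Rotation angle712 of the grid, taken here as the direction of the edge running
    from the first corner to the second corner (an illustrative assumption)."""
    return math.atan2(corner2_xy[1] - corner1_xy[1], corner2_xy[0] - corner1_xy[0])

def marker_global_xy(corner_xy, offset_xy, rotation_rad):
    """Global-plane (x, y) of a marker: rotate its stored grid offset about the first
    corner by the rotation angle, then translate by the first corner's coordinate."""
    ox, oy = offset_xy
    rx = ox * math.cos(rotation_rad) - oy * math.sin(rotation_rad)
    ry = ox * math.sin(rotation_rad) + oy * math.cos(rotation_rad)
    return (corner_xy[0] + rx, corner_xy[1] + ry)

# First corner measured at (3.0, 2.0) m and second corner at (4.732, 3.0) m, so the
# grid lies rotated about 30 degrees; a marker stored 1.0 m along the grid's x edge
# then lands at roughly (3.87, 2.50) in the global plane.
angle = grid_rotation((3.0, 2.0), (4.732, 3.0))
print(math.degrees(angle))                              # about 30.0
print(marker_global_xy((3.0, 2.0), (1.0, 0.0), angle))  # about (3.87, 2.50)
```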
The tracking system100then determines the second (x,y) coordinate306B for the first marker304A by applying a translation by adding the offset associated with the first marker304A to the first (x,y) coordinate306A for the first corner704of the marker grid702and applying a rotation using the rotation angle712about the first (x,y) coordinate306A for the first corner704of the marker grid702. In other examples, the tracking system100may determine the second (x,y) coordinate306B for the first marker304A using any other suitable technique. The tracking system100may repeat this process for one or more additional markers304on the marker grid702. For example, the tracking system100determines a third (x,y) coordinate306C for a second marker304B on the marker grid702. Here, the tracking system100uses the marker grid information716to identify an offset associated with the second marker304B. The tracking system100is able to determine the third (x,y) coordinate306C for the second marker304B by adding the offset associated with the second marker304B to the first (x,y) coordinate306A for the first corner704of the marker grid702. In another embodiment, the tracking system100determines a third (x,y) coordinate306C for a second marker304B based at least in part on a rotation of the marker grid702using a process similar to the process described above for the first marker304A. Once the tracking system100knows the physical location of the markers304within the space102, the tracking system100then determines where the markers304are located with respect to the pixels in the frame302of a sensor108. At step606, the tracking system100receives a frame302from a sensor108. The frame302is of the global plane104that includes at least a portion of the marker grid702in the space102. The frame302comprises one or more markers304of the marker grid702. The frame302is configured similar to the frame302described inFIGS.2-4. For example, the frame302comprises a plurality of pixels that are each associated with a pixel location402within the frame302. The pixel location402identifies a pixel row and a pixel column where a pixel is located. In one embodiment, each pixel is associated with a pixel value404that indicates a depth or distance measurement. For example, a pixel value404may correspond with a distance between the sensor108and a surface within the space102. At step608, the tracking system100identifies markers304within the frame302of the sensor108. The tracking system100may identify markers304within the frame302using a process similar to the process described in step206ofFIG.2. For example, the tracking system100may use object detection to identify markers304within the frame302. Referring to the example inFIG.7, each marker304is a unique shape or symbol. In other examples, each marker304may have any other unique features (e.g. shape, pattern, color, text, etc.). In this example, the tracking system100may search for objects within the frame302that correspond with the known features of a marker304. The tracking system100may identify the first marker304A, the second marker304B, and any other markers304on the marker grid702. In one embodiment, the tracking system100compares the features of the identified markers304to the features of known markers304on the marker grid702using a marker dictionary718. The marker dictionary718identifies a plurality of markers304that are associated with a marker grid702.
In this example, the tracking system100may identify the first marker304A by identifying a star on the marker grid702, comparing the star to the symbols in the marker dictionary718, and determining that the star matches one of the symbols in the marker dictionary718that corresponds with the first marker304A. Similarly, the tracking system100may identify the second marker304B by identifying a triangle on the marker grid702, comparing the triangle to the symbols in the marker dictionary718, and determining that the triangle matches one of the symbols in the marker dictionary718that corresponds with the second marker304B. The tracking system100may repeat this process for any other identified markers304in the frame302. In another embodiment, the marker grid702may comprise markers304that contain text. In this example, each marker304can be uniquely identified based on its text. This configuration allows the tracking system100to identify markers304in the frame302by using text recognition or optical character recognition techniques on the frame302. In this case, the tracking system100may use a marker dictionary718that comprises a plurality of predefined words that are each associated with a marker304on the marker grid702. For example, the tracking system100may perform text recognition to identify text within the frame302. The tracking system100may then compare the identified text to words in the marker dictionary718. Here, the tracking system100checks whether the identified text matches any of the known text that corresponds with a marker304on the marker grid702. The tracking system100may discard any text that does not match any words in the marker dictionary718. When the tracking system100identifies text that matches a word in the marker dictionary718, the tracking system100may identify the marker304that corresponds with the identified text. For instance, the tracking system100may determine that the identified text matches the text associated with the first marker304A. The tracking system100may identify the second marker304B and any other markers304on the marker grid702using a similar process. Returning toFIG.6at step610, the tracking system100determines a number of identified markers304within the frame302. Here, the tracking system100counts the number of markers304that were detected within the frame302. Referring to the example inFIG.7, the tracking system100detects five markers304within the frame302. Returning toFIG.6at step612, the tracking system100determines whether the number of identified markers304is greater than or equal to a predetermined threshold value. The tracking system100may compare the number of identified markers304to the predetermined threshold value using a process similar to the process described in step210ofFIG.2. The tracking system100returns to step606in response to determining that the number of identified markers304is less than the predetermined threshold value. In this case, the tracking system100returns to step606to capture another frame302of the space102using the same sensor108to try to detect more markers304. Here, the tracking system100tries to obtain a new frame302that includes a number of markers304that is greater than or equal to the predetermined threshold value. For example, the tracking system100may receive a new frame302of the space102after an operator repositions the marker grid702within the space102.
As another example, the tracking system100may receive a new frame302after lighting conditions have been changed to improve the detectability of the markers304within the frame302. In other examples, the tracking system100may receive a new frame302after any kind of change that improves the detectability of the markers304within the frame302. The tracking system100proceeds to step614in response to determining that the number of identified markers304is greater than or equal to the predetermined threshold value. Once the tracking system100identifies a suitable number of markers304on the marker grid702, the tracking system100then determines a pixel location402for each of the identified markers304. Each marker304may occupy multiple pixels in the frame302. This means that for each marker304, the tracking system100determines which pixel location402in the frame302corresponds with its (x,y) coordinate306in the global plane104. In one embodiment, the tracking system100uses bounding boxes708to narrow or restrict the search space when trying to identify pixel locations402for markers304. A bounding box708is a defined area or region within the frame302that contains a marker304. For example, a bounding box708may be defined as a set of pixels or a range of pixels of the frame302that comprise a marker304. At step614, the tracking system100identifies bounding boxes708for markers304within the frame302. In one embodiment, the tracking system100identifies a plurality of pixels in the frame302that correspond with a marker304and then defines a bounding box708that encloses the pixels corresponding with the marker304. The tracking system100may repeat this process for each of the markers304. Returning to the example inFIG.7, the tracking system100may identify a first bounding box708A for the first marker304A, a second bounding box708B for the second marker304B, and bounding boxes708for any other identified markers304within the frame302. In another embodiment, the tracking system100may employ text or character recognition to identify the first marker304A when the first marker304A comprises text. For example, the tracking system100may use text recognition to identify pixels within the frame302that comprise a word corresponding with a marker304. The tracking system100may then define a bounding box708that encloses the pixels corresponding with the identified word. In other embodiments, the tracking system100may employ any other suitable image processing technique for identifying bounding boxes708for the identified markers304. Returning toFIG.6at step616, the tracking system100identifies a pixel710within each bounding box708that corresponds with a pixel location402in the frame302for a marker304. As discussed above, each marker304may occupy multiple pixels in the frame302and the tracking system100determines which pixel710in the frame302corresponds with the pixel location402for an (x,y) coordinate306in the global plane104. In one embodiment, each marker304comprises a light source. Examples of light sources include, but are not limited to, light emitting diodes (LEDs), infrared (IR) LEDs, incandescent lights, or any other suitable type of light source. In this configuration, a pixel710corresponds with a light source for a marker304. In another embodiment, each marker304may comprise a detectable feature that is unique to each marker304. For example, each marker304may comprise a unique color that is associated with the marker304.
As another example, each marker304may comprise a unique symbol or pattern that is associated with the marker304. In this configuration, a pixel710corresponds with the detectable feature for the marker304. Continuing with the previous example, the tracking system100identifies a first pixel710A for the first marker304A, a second pixel710B for the second marker304B, and pixels710for any other identified markers304. At step618, the tracking system100determines pixel locations402within the frame302for each of the identified pixels710. For example, the tracking system100may identify a first pixel row and a first pixel column of the frame302that corresponds with the first pixel710A. Similarly, the tracking system100may identify a pixel row and a pixel column in the frame302for each of the identified pixels710. The tracking system100generates a homography118for the sensor108after the tracking system100determines (x,y) coordinates306in the global plane104and pixel locations402in the frame302for each of the identified markers304. At step620, the tracking system100generates a homography118for the sensor108based on the pixel locations402of identified markers304in the frame302of the sensor108and the (x,y) coordinates306of the identified markers304in the global plane104. In one embodiment, the tracking system100correlates the pixel location402for each of the identified markers304with its corresponding (x,y) coordinate306. Continuing with the example inFIG.7, the tracking system100associates the first pixel location402for the first marker304A with the second (x,y) coordinate306B for the first marker304A. The tracking system100also associates the second pixel location402for the second marker304B with the third (x,y) location306C for the second marker304B. The tracking system100may repeat this process for all of the identified markers304. The tracking system100then determines a relationship between the pixel locations402of identified markers304within the frame302of the sensor108and the (x,y) coordinates306of the identified markers304in the global plane104to generate a homography118for the sensor108. The generated homography118allows the tracking system100to map pixel locations402in a frame302from the sensor108to (x,y) coordinates306in the global plane104. The generated homography118is similar to the homography described inFIGS.5A and5B. Once the tracking system100generates the homography118for the sensor108, the tracking system100stores an association between the sensor108and the generated homography118in memory (e.g. memory3804). The tracking system100may repeat the process described above to generate and associate homographies118with other sensors108. The marker grid702may be moved or repositioned within the space102to generate a homography118for another sensor108. For example, an operator may reposition the marker grid702to allow another sensor108to view the markers304on the marker grid702. As an example, the tracking system100may receive a second frame302from a second sensor108. In this example, the second frame302comprises the first marker304A and the second marker304B. The tracking system100may determine a third pixel location402in the second frame302for the first marker304A and a fourth pixel location402in the second frame302for the second marker304B.
The tracking system100may then generate a second homography118based on the third pixel location402in the second frame302for the first marker304A, the fourth pixel location402in the second frame302for the second marker304B, the (x,y) coordinate306B in the global plane104for the first marker304A, the (x,y) coordinate306C in the global plane104for the second marker304B, and pixel locations402and (x,y) coordinates306for other markers304. The second homography118comprises coefficients that translate between pixel locations402in the second frame302and physical locations (e.g. (x,y) coordinates306) in the global plane104. The coefficients of the second homography118are different from the coefficients of the homography118that is associated with the first sensor108. In other words, each sensor108is uniquely associated with a homography118that maps pixel locations402from the sensor108to physical locations in the global plane104. This process uniquely associates a homography118with a sensor108based on the physical location (e.g. (x,y) coordinate306) of the sensor108in the global plane104.

Shelf Position Calibration

FIG.8is a flowchart of an embodiment of a shelf position calibration method800for the tracking system100. The tracking system100may employ method800to periodically check whether a rack112or sensor108has moved within the space102. For example, a rack112may be accidentally bumped or moved by a person, which causes the position of the rack112to move with respect to the global plane104. As another example, a sensor108may come loose from its mounting structure, which causes the sensor108to sag or move from its original location. Any changes in the position of a rack112and/or a sensor108after the tracking system100has been calibrated will reduce the accuracy and performance of the tracking system100when tracking objects within the space102. The tracking system100employs method800to detect when either a rack112or a sensor108has moved and then recalibrates itself based on the new position of the rack112or sensor108. A sensor108may be positioned within the space102such that frames302captured by the sensor108will include one or more shelf markers906that are located on a rack112. A shelf marker906is an object that is positioned on a rack112that can be used to determine a location (e.g. an (x,y) coordinate306and a pixel location402) for the rack112. The tracking system100is configured to store the pixel locations402and the (x,y) coordinates306of the shelf markers906that are associated with frames302from a sensor108. In one embodiment, the pixel locations402and the (x,y) coordinates306of the shelf markers906may be determined using a process similar to the process described inFIG.2. In another embodiment, the pixel locations402and the (x,y) coordinates306of the shelf markers906may be provided by an operator as an input to the tracking system100. A shelf marker906may be an object similar to the marker304described inFIGS.2-7. In some embodiments, each shelf marker906on a rack112is unique from other shelf markers906on the rack112. This feature allows the tracking system100to determine an orientation of the rack112. Referring to the example inFIG.9, each shelf marker906is a unique shape that identifies a particular portion of the rack112. In this example, the tracking system100may associate a first shelf marker906A and a second shelf marker906B with a front of the rack112. Similarly, the tracking system100may also associate a third shelf marker906C and a fourth shelf marker906D with a back of the rack112.
In other examples, each shelf marker906may have any other uniquely identifiable features (e.g. color or patterns) that can be used to identify a shelf marker906. Returning toFIG.8at step802, the tracking system100receives a first frame302A from a first sensor108. Referring toFIG.9as an example, the first sensor108captures the first frame302A which comprises at least a portion of a rack112within the global plane104for the space102. Returning toFIG.8at step804, the tracking system100identifies one or more shelf markers906within the first frame302A. Returning again to the example inFIG.9, the rack112comprises four shelf markers906. In one embodiment, the tracking system100may use object detection to identify shelf markers906within the first frame302A. For example, the tracking system100may search the first frame302A for known features (e.g. shapes, patterns, colors, text, etc.) that correspond with a shelf marker906. In this example, the tracking system100may identify a shape (e.g. a star) in the first frame302A that corresponds with a first shelf marker906A. In other embodiments, the tracking system100may use any other suitable technique to identify a shelf marker906within the first frame302A. The tracking system100may identify any number of shelf markers906that are present in the first frame302A. Once the tracking system100identifies one or more shelf markers906that are present in the first frame302A of the first sensor108, the tracking system100then determines their pixel locations402in the first frame302A so they can be compared to expected pixel locations402for the shelf markers906. Returning toFIG.8at step806, the tracking system100determines current pixel locations402for the identified shelf markers906in the first frame302A. Returning to the example inFIG.9, the tracking system100determines a first current pixel location402A for the shelf marker906within the first frame302A. The first current pixel location402A comprises a first pixel row and first pixel column where the shelf marker906is located within the first frame302A. Returning toFIG.8at step808, the tracking system100determines whether the current pixel locations402for the shelf markers906match the expected pixel locations402for the shelf markers906in the first frame302A. Returning to the example inFIG.9, the tracking system100determines whether the first current pixel location402A matches a first expected pixel location402for the shelf marker906. As discussed above, when the tracking system100is initially calibrated, the tracking system100stores pixel location information908that comprises expected pixel locations402within the first frame302A of the first sensor108for shelf markers906of a rack112. The tracking system100uses the expected pixel locations402as reference points to determine whether the rack112has moved. By comparing the expected pixel location402for a shelf marker906with its current pixel location402, the tracking system100can determine whether there are any discrepancies that would indicate that the rack112has moved. The tracking system100may terminate method800in response to determining that the current pixel locations402for the shelf markers906in the first frame302A match the expected pixel location402for the shelf markers906. In this case, the tracking system100determines that neither the rack112nor the first sensor108has moved since the current pixel locations402match the expected pixel locations402for the shelf marker906. 
The tracking system100proceeds to step810in response to a determination at step808that one or more current pixel locations402for the shelf markers906do not match an expected pixel location402for the shelf markers906. For example, the tracking system100may determine that the first current pixel location402A does not match the first expected pixel location402for the shelf marker906. In this case, the tracking system100determines that the rack112and/or the first sensor108has moved since the first current pixel location402A does not match the first expected pixel location402for the shelf marker906. Here, the tracking system100proceeds to step810to identify whether the rack112has moved or the first sensor108has moved. At step810, the tracking system100receives a second frame302B from a second sensor108. The second sensor108is adjacent to the first sensor108and has at least a partially overlapping field of view with the first sensor108. The first sensor108and the second sensor108are positioned such that one or more shelf markers906are observable by both the first sensor108and the second sensor108. In this configuration, the tracking system100can use a combination of information from the first sensor108and the second sensor108to determine whether the rack112has moved or the first sensor108has moved. Returning to the example inFIG.9, the second frame302B comprises the first shelf marker906A, the second shelf marker906B, the third shelf marker906C, and the fourth shelf marker906D of the rack112. Returning toFIG.8at step812, the tracking system100identifies the shelf markers906that are present within the second frame302B from the second sensor108. The tracking system100may identify shelf markers906using a process similar to the process described in step804. Returning again to the example inFIG.9, the tracking system100may search the second frame302B for known features (e.g. shapes, patterns, colors, text, etc.) that correspond with a shelf marker906. For example, the tracking system100may identify a shape (e.g. a star) in the second frame302B that corresponds with the first shelf marker906A. Once the tracking system100identifies one or more shelf markers906that are present in the second frame302B of the second sensor108, the tracking system100then determines their pixel locations402in the second frame302B so they can be compared to expected pixel locations402for the shelf markers906. Returning toFIG.8at step814, the tracking system100determines current pixel locations402for the identified shelf markers906in the second frame302B. Returning to the example inFIG.9, the tracking system100determines a second current pixel location402B for the shelf marker906within the second frame302B. The second current pixel location402B comprises a second pixel row and a second pixel column where the shelf marker906is located within the second frame302B from the second sensor108. Returning toFIG.8at step816, the tracking system100determines whether the current pixel locations402for the shelf markers906match the expected pixel locations402for the shelf markers906in the second frame302B. Returning to the example inFIG.9, the tracking system100determines whether the second current pixel location402B matches a second expected pixel location402for the shelf marker906.
As discussed above in step808, the tracking system100also stores pixel location information908that comprises expected pixel locations402within the second frame302B of the second sensor108for shelf markers906of a rack112when the tracking system100is initially calibrated. By comparing the second expected pixel location402for the shelf marker906to its second current pixel location402B, the tracking system100can determine whether the rack112has moved or whether the first sensor108has moved. The tracking system100determines that the rack112has moved when the current pixel location402and the expected pixel location402for one or more shelf markers906do not match for multiple sensors108. When a rack112moves within the global plane104, the physical location of the shelf markers906moves, which causes the pixel locations402for the shelf markers906to also move with respect to any sensors108viewing the shelf markers906. This means that the tracking system100can conclude that the rack112has moved when multiple sensors108observe a mismatch between current pixel locations402and expected pixel locations402for one or more shelf markers906. The tracking system100determines that the first sensor108has moved when the current pixel location402and the expected pixel location402for one or more shelf markers906do not match only for the first sensor108. In this case, the first sensor108has moved with respect to the rack112and its shelf markers906, which causes the pixel locations402for the shelf markers906to move with respect to the first sensor108. The current pixel locations402of the shelf markers906will still match the expected pixel locations402for the shelf markers906for other sensors108because the position of these sensors108and the rack112has not changed. The tracking system100proceeds to step818in response to determining that the current pixel location402matches the second expected pixel location402for the shelf marker906in the second frame302B for the second sensor108. In this case, the tracking system100determines that the first sensor108has moved. At step818, the tracking system100recalibrates the first sensor108. In one embodiment, the tracking system100recalibrates the first sensor108by generating a new homography118for the first sensor108. The tracking system100may generate a new homography118for the first sensor108using shelf markers906and/or other markers304. The tracking system100may generate the new homography118for the first sensor108using a process similar to the processes described inFIGS.2and/or6. As an example, the tracking system100may use an existing homography118that is currently associated with the first sensor108to determine physical locations (e.g. (x,y) coordinates306) for the shelf markers906. The tracking system100may then use the current pixel locations402for the shelf markers906with their determined (x,y) coordinates306to generate a new homography118for the first sensor108. For instance, the tracking system100may use an existing homography118that is associated with the first sensor108to determine a first (x,y) coordinate306in the global plane104where a first shelf marker906is located, a second (x,y) coordinate306in the global plane104where a second shelf marker906is located, and (x,y) coordinates306for any other shelf markers906.
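As a non-limiting illustration, steps808and816together reduce to a small decision: a mismatch observed by several sensors108implies that the rack112moved, while a mismatch observed by only one sensor108implies that the sensor in question moved. A sketch of this logic is shown below with illustrative names; the recalibration of the moved sensor continues in the description that follows.

    def diagnose_movement(match_by_sensor):
        """Decide what moved from per-sensor marker-match results.

        match_by_sensor: dict mapping sensor_id -> bool, where True means the
        current shelf-marker pixel locations matched the expected ones.
        Returns 'none', 'sensor', or 'rack'.
        """
        mismatched = [sid for sid, ok in match_by_sensor.items() if not ok]
        if not mismatched:
            return 'none'    # nothing moved; no recalibration is needed
        if len(mismatched) == 1:
            return 'sensor'  # only one view disagrees, so that sensor has moved
        return 'rack'        # several views disagree, so the rack has moved

    # Example: the first sensor disagrees while the second agrees, so the
    # first sensor is assumed to have moved.
    # diagnose_movement({'sensor_1': False, 'sensor_2': True}) returns 'sensor'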
The tracking system100may apply the existing homography118for the first sensor108to the current pixel location402for the first shelf marker906in the first frame302A to determine the first (x,y) coordinate306for the first marker906using a process similar to the process described inFIG.5A. The tracking system100may repeat this process for determining (x,y) coordinates306for any other identified shelf markers906. Once the tracking system100determines (x,y) coordinates306for the shelf markers906and the current pixel locations402in the first frame302A for the shelf markers906, the tracking system100may then generate a new homography118for the first sensor108using this information. For example, the tracking system100may generate the new homography118based on the current pixel location402for the first marker906A, the current pixel location402for the second marker906B, the first (x,y) coordinate306for the first marker906A, the second (x,y) coordinate306for the second marker906B, and (x,y) coordinates306and pixel locations402for any other identified shelf markers906in the first frame302A. The tracking system100associates the first sensor108with the new homography118. This process updates the homography118that is associated with the first sensor108based on the current location of the first sensor108. In another embodiment, the tracking system100may recalibrate the first sensor108by updating the stored expected pixel locations for the shelf marker906for the first sensor108. For example, the tracking system100may replace the previous expected pixel location402for the shelf marker906with its current pixel location402. Updating the expected pixel locations402for the shelf markers906with respect to the first sensor108allows the tracking system100to continue to monitor the location of the rack112using the first sensor108. In this case, the tracking system100can continue comparing the current pixel locations402for the shelf markers906in the first frame302A for the first sensor108with the new expected pixel locations402in the first frame302A. At step820, the tracking system100sends a notification that indicates that the first sensor108has moved. Examples of notifications include, but are not limited to, text messages, short message service (SMS) messages, multimedia messaging service (MMS) messages, push notifications, application popup notifications, emails, or any other suitable type of notifications. For example, the tracking system100may send a notification indicating that the first sensor108has moved to a person associated with the space102. In response to receiving the notification, the person may inspect and/or move the first sensor108back to its original location. Returning to step816, the tracking system100proceeds to step822in response to determining that the current pixel location402does not match the expected pixel location402for the shelf marker906in the second frame302B. In this case, the tracking system100determines that the rack112has moved. At step822, the tracking system100updates the expected pixel location information402for the first sensor108and the second sensor108. For example, the tracking system100may replace the previous expected pixel location402for the shelf marker906with its current pixel location402for both the first sensor108and the second sensor108. 
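As a non-limiting illustration, the recalibration path of step818can reuse the fit_homography and apply_homography helpers sketched earlier. In the sketch below, the stored expected pixel locations402(which correspond to the rack112as it was seen before the sensor moved) are projected through the old homography118to recover the markers' (x,y) coordinates306, and a new homography is fitted against the current pixel locations402; this is one reading of the step, and the helper names and the dictionary-based bookkeeping are illustrative assumptions. The alternative embodiment, updating the expected pixel locations, is a simple overwrite.

    def recalibrate_sensor(old_H, expected_pixel_locations, current_pixel_locations):
        """Re-fit a sensor homography after the sensor (not the rack) has moved.

        old_H: homography previously associated with the sensor.
        expected_pixel_locations: marker_id -> (row, col) stored at initial calibration.
        current_pixel_locations: marker_id -> (row, col) observed in the latest frame.
        """
        marker_ids = [m for m in current_pixel_locations if m in expected_pixel_locations]
        # The rack did not move, so each marker's physical location is still the one
        # the old homography gave at the originally stored pixel location.
        world = [apply_homography(old_H, expected_pixel_locations[m]) for m in marker_ids]
        pixels = [current_pixel_locations[m] for m in marker_ids]
        return fit_homography(pixels, world)

    def update_expected_locations(expected, current):
        """Alternative embodiment: overwrite the stored expected pixel locations."""
        expected.update(current)
        return expected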
Updating the expected pixel locations402for the shelf markers906with respect to the first sensor108and the second sensor108allows the tracking system100to continue to monitor the location of the rack112using the first sensor108and the second sensor108. In this case, the tracking system100can continue comparing the current pixel locations402for the shelf markers906for the first sensor108and the second sensor108with the new expected pixel locations402. At step824, the tracking system100sends a notification that indicates that the rack112has moved. For example, the tracking system100may send a notification indicating that the rack112has moved to a person associated with the space102. In response to receiving the notification, the person may inspect and/or move the rack112back to its original location. The tracking system100may update the expected pixel locations402for the shelf markers906again once the rack112is moved back to its original location. Object Tracking Handoff FIG.10is a flowchart of an embodiment of a tracking handoff method1000for the tracking system100. The tracking system100may employ method1000to hand off tracking information for an object (e.g. a person) as it moves between the fields of view of adjacent sensors108. For example, the tracking system100may track the position of people (e.g. shoppers) as they move around within the interior of the space102. Each sensor108has a limited field of view, which means that each sensor108can only track the position of a person within a portion of the space102. The tracking system100employs a plurality of sensors108to track the movement of a person within the entire space102. Each sensor108operates independently of the others, which means that the tracking system100keeps track of a person as they move from the field of view of one sensor108into the field of view of an adjacent sensor108. The tracking system100is configured such that an object identifier1118(e.g. a customer identifier) is assigned to each person as they enter the space102. The object identifier1118may be used to identify a person and other information associated with the person. Examples of object identifiers1118include, but are not limited to, names, customer identifiers, alphanumeric codes, phone numbers, email addresses, or any other suitable type of identifier for a person or object. In this configuration, the tracking system100tracks a person's movement within the field of view of a first sensor108and then hands off tracking information (e.g. an object identifier1118) for the person as they enter the field of view of a second adjacent sensor108. In one embodiment, the tracking system100comprises adjacency lists1114for each sensor108that identify adjacent sensors108and the pixels within the frame302of the sensor108that overlap with the adjacent sensors108. Referring to the example inFIG.11, a first sensor108and a second sensor108have partially overlapping fields of view. This means that a first frame302A from the first sensor108partially overlaps with a second frame302B from the second sensor108. The pixels that overlap between the first frame302A and the second frame302B are referred to as an overlap region1110. In this example, the tracking system100comprises a first adjacency list1114A that identifies pixels in the first frame302A that correspond with the overlap region1110between the first sensor108and the second sensor108.
For example, the first adjacency list1114A may identify a range of pixels in the first frame302A that correspond with the overlap region1110. The first adjacency list1114A may further comprise information about other overlap regions between the first sensor108and other adjacent sensors108. For instance, a third sensor108may be configured to capture a third frame302that partially overlaps with the first frame302A. In this case, the first adjacency list1114A will further comprise information that identifies pixels in the first frame302A that correspond with an overlap region between the first sensor108and the third sensor108. Similarly, the tracking system100may further comprise a second adjacency list1114B that is associated with the second sensor108. The second adjacency list1114B identifies pixels in the second frame302B that correspond with the overlap region1110between the first sensor108and the second sensor108. The second adjacency list1114B may further comprise information about other overlap regions between the second sensor108and other adjacent sensors108. InFIG.11, the second tracking list1112B is shown as a separate data structure from the first tracking list1112A; however, the tracking system100may use a single data structure to store tracking list information that is associated with multiple sensors108. Once the first person1106enters the space102, the tracking system100will track, in a tracking list1112, the object identifier1118associated with the first person1106as well as the pixel locations402in the sensors108where the first person1106appears. For example, the tracking system100may track the people within the field of view of a first sensor108using a first tracking list1112A, the people within the field of view of a second sensor108using a second tracking list1112B, and so on. In this example, the first tracking list1112A comprises object identifiers1118for people being tracked using the first sensor108. The first tracking list1112A further comprises pixel location information that indicates the location of a person within the first frame302A of the first sensor108. In some embodiments, the first tracking list1112A may further comprise any other suitable information associated with a person being tracked by the first sensor108. For example, the first tracking list1112A may identify (x,y) coordinates306for the person in the global plane104, previous pixel locations402within the first frame302A for a person, and/or a travel direction1116for a person. For instance, the tracking system100may determine a travel direction1116for the first person1106based on their previous pixel locations402within the first frame302A and may store the determined travel direction1116in the first tracking list1112A. In one embodiment, the travel direction1116may be represented as a vector with respect to the global plane104. In other embodiments, the travel direction1116may be represented using any other suitable format. Returning toFIG.10at step1002, the tracking system100receives a first frame302A from a first sensor108. Referring toFIG.11as an example, the first sensor108captures an image or frame302A of a global plane104for at least a portion of the space102. In this example, the first frame1102comprises a first object (e.g. a first person1106) and a second object (e.g. a second person1108). In this example, the first frame302A captures the first person1106and the second person1108as they move within the space102.
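As a non-limiting illustration, the adjacency lists1114and tracking lists1112described above might be represented with structures such as the following; the field names, the use of Python dataclasses, and the encoding of an overlap region1110as row and column ranges are illustrative assumptions rather than requirements of the tracking system100.

    from dataclasses import dataclass, field
    from typing import Dict, Optional, Tuple

    @dataclass
    class OverlapRegion:
        """Pixels of one sensor's frame that overlap an adjacent sensor's frame."""
        row_range: Tuple[int, int]  # inclusive (min_row, max_row)
        col_range: Tuple[int, int]  # inclusive (min_col, max_col)

        def contains(self, pixel: Tuple[int, int]) -> bool:
            row, col = pixel
            return (self.row_range[0] <= row <= self.row_range[1]
                    and self.col_range[0] <= col <= self.col_range[1])

    @dataclass
    class AdjacencyList:
        """Per-sensor map of adjacent sensor id -> overlap region in this sensor's frame."""
        overlaps: Dict[str, OverlapRegion] = field(default_factory=dict)

    @dataclass
    class TrackedObject:
        pixel_location: Tuple[int, int]
        global_xy: Optional[Tuple[float, float]] = None
        travel_direction: Optional[Tuple[float, float]] = None  # vector in the global plane

    @dataclass
    class TrackingList:
        """Per-sensor map of object identifier -> tracking state."""
        objects: Dict[str, TrackedObject] = field(default_factory=dict)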
Returning toFIG.10at step1004, the tracking system100determines a first pixel location402A in the first frame302A for the first person1106. Here, the tracking system100determines the current location for the first person1106within the first frame302A from the first sensor108. Continuing with the example inFIG.11, the tracking system100identifies the first person1106in the first frame302A and determines a first pixel location402A that corresponds with the first person1106. In a given frame302, the first person1106is represented by a collection of pixels within the frame302. Referring to the example inFIG.11, the first person1106is represented by a collection of pixels that show an overhead view of the first person1106. The tracking system100associates a pixel location402with the collection of pixels representing the first person1106to identify the current location of the first person1106within a frame302. In one embodiment, the pixel location402of the first person1106may correspond with the head of the first person1106. In this example, the pixel location402of the first person1106may be located at about the center of the collection of pixels that represent the first person1106. As another example, the tracking system100may determine a bounding box708that encloses the collection of pixels in the first frame302A that represent the first person1106. In this example, the pixel location402of the first person1106may be located at about the center of the bounding box708. As another example, the tracking system100may use object detection or contour detection to identify the first person1106within the first frame302A. In this example, the tracking system100may identify one or more features for the first person1106when they enter the space102. The tracking system100may later compare the features of a person in the first frame302A to the features associated with the first person1106to determine if the person is the first person1106. In other examples, the tracking system100may use any other suitable techniques for identifying the first person1106within the first frame302A. The first pixel location402A comprises a first pixel row and a first pixel column that corresponds with the current location of the first person1106within the first frame302A. Returning toFIG.10at step1006, the tracking system100determines whether the object is within the overlap region1110between the first sensor108and the second sensor108. Returning to the example inFIG.11, the tracking system100may compare the first pixel location402A for the first person1106to the pixels identified in the first adjacency list1114A that correspond with the overlap region1110to determine whether the first person1106is within the overlap region1110. The tracking system100may determine that the first object1106is within the overlap region1110when the first pixel location402A for the first object1106matches or is within a range of pixels identified in the first adjacency list1114A that corresponds with the overlap region1110. For example, the tracking system100may compare the pixel column of the pixel location402A with a range of pixel columns associated with the overlap region1110and the pixel row of the pixel location402A with a range of pixel rows associated with the overlap region1110to determine whether the pixel location402A is within the overlap region1110. In this example, the pixel location402A for the first person1106is within the overlap region1110.
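As a non-limiting illustration, one way to reduce the collection of pixels representing the first person1106to a single pixel location402, and then to test that location against the overlap region1110, is sketched below. It assumes the person has already been segmented into a boolean mask and uses the bounding-box center, which is only one of the options described above; the names are illustrative.

    import numpy as np

    def pixel_location_from_mask(person_mask):
        """Return (pixel_row, pixel_column) at the center of the person's bounding box.

        person_mask: 2-D boolean array, True where the frame's pixels belong to the person.
        """
        rows, cols = np.nonzero(person_mask)
        if rows.size == 0:
            return None  # the person is not present in this frame
        center_row = (int(rows.min()) + int(rows.max())) // 2
        center_col = (int(cols.min()) + int(cols.max())) // 2
        return center_row, center_col

The returned location can then be tested with a check such as adjacency_list.overlaps['sensor_2'].contains(location) from the earlier sketch to decide whether the person is inside the overlap region1110.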
At step1008, the tracking system100applies a first homography118to the first pixel location402A to determine a first (x,y) coordinate306in the global plane104for the first person1106. The first homography118is configured to translate between pixel locations402in the first frame302A and (x,y) coordinates306in the global plane104. The first homography118is configured similar to the homography118described inFIGS.2-5B. As an example, the tracking system100may identify the first homography118that is associated with the first sensor108and may use matrix multiplication between the first homography118and the first pixel location402A to determine the first (x,y) coordinate306in the global plane104. At step1010, the tracking system100identifies an object identifier1118for the first person1106from the first tracking list1112A associated with the first sensor108. For example, the tracking system100may identify an object identifier1118that is associated with the first person1106. At step1012, the tracking system100stores the object identifier1118for the first person1106in a second tracking list1112B associated with the second sensor108. Continuing with the previous example, the tracking system100may store the object identifier1118for the first person1106in the second tracking list1112B. Adding the object identifier1118for the first person1106to the second tracking list1112B indicates that the first person1106is within the field of view of the second sensor108and allows the tracking system100to begin tracking the first person1106using the second sensor108. Once the tracking system100determines that the first person1106has entered the field of view of the second sensor108, the tracking system100then determines where the first person1106is located in the second frame302B of the second sensor108using a homography118that is associated with the second sensor108. This process identifies the location of the first person1106with respect to the second sensor108so they can be tracked using the second sensor108. At step1014, the tracking system100applies a homography118that is associated with the second sensor108to the first (x,y) coordinate306to determine a second pixel location402B in the second frame302B for the first person1106. The homography118is configured to translate between pixel locations402in the second frame302B and (x,y) coordinates306in the global plane104. The homography118is configured similar to the homography118described inFIGS.2-5B. As an example, the tracking system100may identify the homography118that is associated with the second sensor108and may use matrix multiplication between the inverse of the homography118and the first (x,y) coordinate306to determine the second pixel location402B in the second frame302B. At step1016, the tracking system100stores the second pixel location402B with the object identifier1118for the first person1106in the second tracking list1112B. In some embodiments, the tracking system100may store additional information associated with the first person1106in the second tracking list1112B. For example, the tracking system100may be configured to store a travel direction1116or any other suitable type of information associated with the first person1106in the second tracking list1112B. After storing the second pixel location402B in the second tracking list1112B, the tracking system100may begin tracking the movement of the person within the field of view of the second sensor108. 
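As a non-limiting illustration, steps1008through1016amount to projecting the pixel location through the first sensor's homography, projecting the resulting (x,y) coordinate306back through the inverse of the second sensor's homography, and copying the object identifier1118between tracking lists. The sketch below reuses apply_homography and the TrackingList and TrackedObject structures from the earlier sketches; all names are illustrative.

    import numpy as np

    def hand_off(object_id, pixel_in_first, H_first, H_second, first_list, second_list):
        """Transfer a tracked object from the first sensor's tracking list to the second's.

        H_first, H_second: 3x3 homographies mapping each sensor's pixels to the global plane.
        first_list, second_list: TrackingList instances from the earlier sketch.
        """
        # Pixel location in the first frame -> (x,y) coordinate in the global plane.
        x, y = apply_homography(H_first, pixel_in_first)
        # Global (x,y) -> pixel location in the second frame via the inverse homography.
        vec = np.linalg.inv(H_second) @ np.array([x, y, 1.0])
        row, col = vec[0] / vec[2], vec[1] / vec[2]
        # Copy the object identifier and its new pixel location into the second list.
        second_list.objects[object_id] = TrackedObject(
            pixel_location=(int(round(row)), int(round(col))),
            global_xy=(x, y),
        )
        # The first list keeps its entry until the object leaves the first field of view.
        return second_list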
The tracking system100will continue to track the movement of the first person1106to determine when they completely leave the field of view of the first sensor108. At step1018, the tracking system100receives a new frame302from the first sensor108. For example, the tracking system100may periodically receive additional frames302from the first sensor108. For instance, the tracking system100may receive a new frame302from the first sensor108every millisecond, every second, every five seconds, or at any other suitable time interval. At step1020, the tracking system100determines whether the first person1106is present in the new frame302. If the first person1106is present in the new frame302, then this means that the first person1106is still within the field of view of the first sensor108and the tracking system100should continue to track the movement of the first person1106using the first sensor108. If the first person1106is not present in the new frame302, then this means that the first person1106has left the field of view of the first sensor108and the tracking system100no longer needs to track the movement of the first person1106using the first sensor108. The tracking system100may determine whether the first person1106is present in the new frame302using a process similar to the process described in step1004. The tracking system100returns to step1018to receive additional frames302from the first sensor108in response to determining that the first person1106is present in the new frame1102from the first sensor108. The tracking system100proceeds to step1022in response to determining that the first person1106is not present in the new frame302. In this case, the first person1106has left the field of view for the first sensor108and no longer needs to be tracked using the first sensor108. At step1022, the tracking system100discards information associated with the first person1106from the first tracking list1112A. Once the tracking system100determines that the first person1106has left the field of view of the first sensor108, the tracking system100can stop tracking the first person1106using the first sensor108and can free up resources (e.g. memory resources) that were allocated to tracking the first person1106. The tracking system100will continue to track the movement of the first person1106using the second sensor108until the first person1106leaves the field of view of the second sensor108. For example, the first person1106may leave the space102or may transition to the field of view of another sensor108. Shelf Interaction Detection FIG.12is a flowchart of an embodiment of a shelf interaction detection method1200for the tracking system100. The tracking system100may employ method1200to determine where a person is interacting with a shelf of a rack112. In addition to tracking where people are located within the space102, the tracking system100also tracks which items1306a person picks up from a rack112. As a shopper picks up items1306from a rack112, the tracking system100identifies and tracks which items1306the shopper has picked up, so they can be automatically added to a digital cart1410that is associated with the shopper. This process allows items1306to be added to the person's digital cart1410without having the shopper scan or otherwise identify the item1306they picked up. The digital cart1410comprises information about items1306the shopper has picked up for purchase. In one embodiment, the digital cart1410comprises item identifiers and a quantity associated with each item in the digital cart1410.
For example, when the shopper picks up a canned beverage, an item identifier for the beverage is added to their digital cart1410. The digital cart1410will also indicate the number of the beverages that the shopper has picked up. Once the shopper leaves the space102, the shopper will be automatically charged for the items1306in their digital cart1410. InFIG.13, a side view of a rack112is shown from the perspective of a person standing in front of the rack112. In this example, the rack112may comprise a plurality of shelves1302for holding and displaying items1306. Each shelf1302may be partitioned into one or more zones1304for holding different items1306. InFIG.13, the rack112comprises a first shelf1302A at a first height and a second shelf1302B at a second height. Each shelf1302is partitioned into a first zone1304A and a second zone1304B. The rack112may be configured to carry a different item1306(i.e. items1306A,1306B,1306C, and1306D) within each zone1304on each shelf1302. In this example, the rack112may be configured to carry up to four different types of items1306. In other examples, the rack112may comprise any other suitable number of shelves1302and/or zones1304for holding items1306. The tracking system100may employ method1200to identify which item1306a person picks up from a rack112based on where the person is interacting with the rack112. Returning toFIG.12at step1202, the tracking system100receives a frame302from a sensor108. Referring toFIG.14as an example, the sensor108captures a frame302of at least a portion of the rack112within the global plane104for the space102. InFIG.14, an overhead view of the rack112and two people standing in front of the rack112is shown from the perspective of the sensor108. The frame302comprises a plurality of pixels that are each associated with a pixel location402for the sensor108. Each pixel location402comprises a pixel row, a pixel column, and a pixel value. The pixel row and the pixel column indicate the location of a pixel within the frame302of the sensor108. The pixel value corresponds with a z-coordinate (e.g. a height) in the global plane104. The z-coordinate corresponds with a distance between the sensor108and a surface in the global plane104. The frame302further comprises one or more zones1404that are associated with zones1304of the rack112. Each zone1404in the frame302corresponds with a portion of the rack112in the global plane104. Referring to the example inFIG.14, the frame302comprises a first zone1404A and a second zone1404B that are associated with the rack112. In this example, the first zone1404A and the second zone1404B correspond with the first zone1304A and the second zone1304B of the rack112, respectively. The frame302further comprises a predefined zone1406that is used as a virtual curtain to detect where a person1408is interacting with the rack112. The predefined zone1406is an invisible barrier defined by the tracking system100that the person1408reaches through to pick up items1306from the rack112. The predefined zone1406is located proximate to the one or more zones1304of the rack112. For example, the predefined zone1406may be located proximate to the front of the one or more zones1304of the rack112where the person1408would reach to grab an item1306on the rack112. In some embodiments, the predefined zone1406may at least partially overlap with the first zone1404A and the second zone1404B. Returning toFIG.12at step1204, the tracking system100identifies an object within a predefined zone1406of the frame1402.
For example, the tracking system100may detect that the person's1408hand enters the predefined zone1406. In one embodiment, the tracking system100may compare the frame1402to a previous frame that was captured by the sensor108to detect that the person's1408hand has entered the predefined zone1406. In this example, the tracking system100may use differences between the frames302to detect that the person's1408hand enters the predefined zone1406. In other embodiments, the tracking system100may employ any other suitable technique for detecting when the person's1408hand has entered the predefined zone1406. In one embodiment, the tracking system100identifies the rack112that is proximate to the person1408. Returning to the example inFIG.14, the tracking system100may determine a pixel location402A in the frame302for the person1408. The tracking system100may determine a pixel location402A for the person1408using a process similar to the process described in step1004ofFIG.10. The tracking system100may use a homography118associated with the sensor108to determine an (x,y) coordinate306in the global plane104for the person1408. The homography118is configured to translate between pixel locations402in the frame302and (x,y) coordinates306in the global plane104. The homography118is configured similar to the homography118described inFIGS.2-5B. As an example, the tracking system100may identify the homography118that is associated with the sensor108and may use matrix multiplication between the homography118and the pixel location402A of the person1408to determine an (x,y) coordinate306in the global plane104. The tracking system100may then identify which rack112is closest to the person1408based on the person's1408(x,y) coordinate306in the global plane104. The tracking system100may identify an item map1308corresponding with the rack112that is closest to the person1408. In one embodiment, the tracking system100comprises an item map1308that associates items1306with particular locations on the rack112. For example, an item map1308may comprise a rack identifier and a plurality of item identifiers. Each item identifier is mapped to a particular location on the rack112. Returning to the example inFIG.13, a first item1306A is mapped to a first location that identifies the first zone1304A and the first shelf1302A of the rack112, a second item1306B is mapped to a second location that identifies the second zone1304B and the first shelf1302A of the rack112, a third item1306C is mapped to a third location that identifies the first zone1304A and the second shelf1302B of the rack112, and a fourth item1306D is mapped to a fourth location that identifies the second zone1304B and the second shelf1302B of the rack112. Returning toFIG.12at step1206, the tracking system100determines a pixel location402B in the frame302for the object that entered the predefined zone1406. Continuing with the previous example, the pixel location402B comprises a first pixel row, a first pixel column, and a first pixel value for the person's1408hand. In this example, the person's1408hand is represented by a collection of pixels in the predefined zone1406. In one embodiment, the pixel location402of the person's1408hand may be located at about the center of the collection of pixels that represent the person's1408hand. In other examples, the tracking system100may use any other suitable technique for identifying the person's1408hand within the frame302. 
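As a non-limiting illustration, the virtual-curtain check of step1204can be approximated by differencing consecutive frames inside the predefined zone1406and taking the centroid of the changed pixels as the pixel location402B of the hand. The sketch below assumes the frames are two-dimensional numpy arrays (for example depth or grayscale images), uses an illustrative change threshold, and is only one way to realize the comparison described above.

    import numpy as np

    def detect_hand_in_zone(frame, previous_frame, zone_rows, zone_cols, threshold=10):
        """Return the (row, col) of an object entering the predefined zone, or None.

        frame, previous_frame: 2-D numpy arrays captured by the sensor.
        zone_rows, zone_cols: (start, stop) pixel ranges defining the zone (stop exclusive).
        threshold: minimum per-pixel change that counts as motion (illustrative value).
        """
        r0, r1 = zone_rows
        c0, c1 = zone_cols
        zone_now = frame[r0:r1, c0:c1].astype(float)
        zone_before = previous_frame[r0:r1, c0:c1].astype(float)
        changed = np.abs(zone_now - zone_before) > threshold
        if not changed.any():
            return None  # nothing entered the zone between the two frames
        rows, cols = np.nonzero(changed)
        # Centroid of the changed pixels, offset back into full-frame coordinates.
        return int(rows.mean()) + r0, int(cols.mean()) + c0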
Once the tracking system100determines the pixel location402B of the person's1408hand, the tracking system100then determines which shelf1302and zone1304of the rack112the person1408is reaching for. At step1208, the tracking system100determines whether the pixel location402B for the object (i.e. the person's1408hand) corresponds with a first zone1304A of the rack112. The tracking system100uses the pixel location402B of the person's1408hand to determine which side of the rack112the person1408is reaching into. Here, the tracking system100checks whether the person1408is reaching for an item on the left side of the rack112. Each zone1304of the rack112is associated with a plurality of pixels in the frame302that can be used to determine where the person1408is reaching based on the pixel location402B of the person's1408hand. Continuing with the example inFIG.14, the first zone1304A of the rack112corresponds with the first zone1404A, which is associated with a first range of pixels1412in the frame302. Similarly, the second zone1304B of the rack112corresponds with the second zone1404B, which is associated with a second range of pixels1414in the frame302. The tracking system100may compare the pixel location402B of the person's1408hand to the first range of pixels1412to determine whether the pixel location402B corresponds with the first zone1304A of the rack112. In this example, the first range of pixels1412corresponds with a range of pixel columns in the frame302. In other examples, the first range of pixels1412may correspond with a range of pixel rows or a combination of pixel rows and columns in the frame302. In this example, the tracking system100compares the first pixel column of the pixel location402B to the first range of pixels1412to determine whether the pixel location402B corresponds with the first zone1304A of the rack112. In other words, the tracking system100compares the first pixel column of the pixel location402B to the first range of pixels1412to determine whether the person1408is reaching for an item1306on the left side of the rack112. InFIG.14, the pixel location402B for the person's1408hand does not correspond with the first zone1304A of the rack112. The tracking system100proceeds to step1210in response to determining that the pixel location402B for the object corresponds with the first zone1304A of the rack112. At step1210, the tracking system100identifies the first zone1304A of the rack112based on the pixel location402B for the object that entered the predefined zone1406. In this case, the tracking system100determines that the person1408is reaching for an item on the left side of the rack112. Returning to step1208, the tracking system100proceeds to step1212in response to determining that the pixel location402B for the object that entered the predefined zone1406does not correspond with the first zone1304A of the rack112. At step1212, the tracking system100identifies the second zone1304B of the rack112based on the pixel location402B of the object that entered the predefined zone1406. In this case, the tracking system100determines that the person1408is reaching for an item on the right side of the rack112. In other embodiments, the tracking system100may compare the pixel location402B to other ranges of pixels that are associated with other zones1304of the rack112. For example, the tracking system100may compare the first pixel column of the pixel location402B to the second range of pixels1414to determine whether the pixel location402B corresponds with the second zone1304B of the rack112.
In other words, the tracking system100compares the first pixel column of the pixel location402B to the second range of pixels1414to determine whether the person1408is reaching for an item1306on the right side of the rack112. Once the tracking system100determines which zone1304of the rack112the person1408is reaching into, the tracking system100then determines which shelf1302of the rack112the person1408is reaching into. At step1214, the tracking system100identifies a pixel value at the pixel location402B for the object that entered the predefined zone1406. The pixel value is a numeric value that corresponds with a z-coordinate or height in the global plane104that can be used to identify which shelf1302the person1408was interacting with. The pixel value can be used to determine the height the person's1408hand was at when it entered the predefined zone1406which can be used to determine which shelf1302the person1408was reaching into. At step1216, the tracking system100determines whether the pixel value corresponds with the first shelf1302A of the rack112. Returning to the example inFIG.13, the first shelf1302A of the rack112corresponds with a first range of z-values or heights1310A and the second shelf1302B corresponds with a second range of z-values or heights1310B. The tracking system100may compare the pixel value to the first range of z-values1310A to determine whether the pixel value corresponds with the first shelf1302A of the rack112. As an example, the first range of z-values1310A may be a range between 2 meters and 1 meter with respect to the z-axis in the global plane104. The second range of z-values1310B may be a range between 0.9 meters and 0 meters with respect to the z-axis in the global plane104. The pixel value may have a value that corresponds with 1.5 meters with respect to the z-axis in the global plane104. In this example, the pixel value is within the first range of z-values1310A which indicates that the pixel value corresponds with the first shelf1302A of the rack112. In other words, the person's1408hand was detected at a height that indicates the person1408was reaching for the first shelf1302A of the rack112. The tracking system100proceeds to step1218in response to determining that the pixel value corresponds with the first shelf of the rack112. At step1218, the tracking system100identifies the first shelf1302A of the rack112based on the pixel value. Returning to step1216, the tracking system100proceeds to step1220in response to determining that the pixel value does not correspond with the first shelf1302A of the rack112. At step1220, the tracking system100identifies the second shelf1302B of the rack112based on the pixel value. In other embodiments, the tracking system100may compare the pixel value to other z-value ranges that are associated with other shelves1302of the rack112. For example, the tracking system100may compare the pixel value to the second range of z-values1310B to determine whether the pixel value corresponds with the second shelf1302B of the rack112. Once the tracking system100determines which side of the rack112and which shelf1302of the rack112the person1408is reaching into, then the tracking system100can identify an item1306that corresponds with the identified location on the rack112. At step1222, the tracking system100identifies an item1306based on the identified zone1304and the identified shelf1302of the rack112. The tracking system100uses the identified zone1304and the identified shelf1302to identify a corresponding item1306in the item map1308. 
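As a non-limiting illustration, steps1208through1222reduce to two range tests and a table lookup: the pixel column selects a zone1304, the pixel value (height) selects a shelf1302, and the item map1308maps the resulting shelf and zone pair to an item1306. The ranges and identifiers in the sketch below are illustrative and would in practice come from calibration of the rack112.

    def identify_item(pixel_location, pixel_value, zone_col_ranges, shelf_z_ranges, item_map):
        """Look up which item a hand at pixel_location with height pixel_value is reaching for.

        pixel_location: (row, col) of the hand inside the virtual curtain.
        pixel_value: height (z-coordinate) associated with that pixel in the global plane.
        zone_col_ranges: dict zone_id -> (min_col, max_col).
        shelf_z_ranges: dict shelf_id -> (min_z, max_z) in meters.
        item_map: dict (shelf_id, zone_id) -> item identifier.
        """
        _, col = pixel_location
        zone = next((z for z, (lo, hi) in zone_col_ranges.items() if lo <= col <= hi), None)
        shelf = next((s for s, (lo, hi) in shelf_z_ranges.items() if lo <= pixel_value <= hi), None)
        if zone is None or shelf is None:
            return None
        return item_map.get((shelf, zone))

    # Illustrative example mirroring FIG. 13: two shelves, two zones, four items.
    # identify_item((410, 620), 1.5,
    #               {'zone_A': (0, 500), 'zone_B': (501, 1000)},
    #               {'shelf_A': (1.0, 2.0), 'shelf_B': (0.0, 0.9)},
    #               {('shelf_A', 'zone_A'): 'item_1306A', ('shelf_A', 'zone_B'): 'item_1306B',
    #                ('shelf_B', 'zone_A'): 'item_1306C', ('shelf_B', 'zone_B'): 'item_1306D'})
    # returns 'item_1306B'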
Returning to the example inFIG.14, the tracking system100may determine that the person1408is reaching into the right side (i.e. zone1404B) of the rack112and the first shelf1302A of the rack112. In this example, the tracking system100determines that the person1408reached for and picked up item1306B from the rack112. In some instances, multiple people may be near the rack112and the tracking system100may need to determine which person is interacting with the rack112so that it can add a picked-up item1306to the appropriate person's digital cart1410. Returning to the example inFIG.14, a second person1420is also near the rack112when the first person1408is picking up an item1306from the rack112. In this case, the tracking system100should assign any picked-up items to the first person1408and not the second person1420. In one embodiment, the tracking system100determines which person picked up an item1306based on their proximity to the item1306that was picked up. For example, the tracking system100may determine a pixel location402A in the frame302for the first person1408. The tracking system100may also identify a second pixel location402C for the second person1420in the frame302. The tracking system100may then determine a first distance1416between the pixel location402A of the first person1408and the location on the rack112where the item1306was picked up. The tracking system100also determines a second distance1418between the pixel location402C of the second person1420and the location on the rack112where the item1306was picked up. The tracking system100may then determine that the first person1408is closer to the item1306than the second person1420when the first distance1416is less than the second distance1418. In this example, the tracking system100identifies the first person1408as the person that most likely picked up the item1306based on their proximity to the location on the rack112where the item1306was picked up. This process allows the tracking system100to identify the correct person that picked up the item1306from the rack112before adding the item1306to their digital cart1410. Returning toFIG.12at step1224, the tracking system100adds the identified item1306to a digital cart1410associated with the person1408. In one embodiment, the tracking system100uses weight sensors110to determine a number of items1306that were removed from the rack112. For example, the tracking system100may determine a weight decrease amount on a weight sensor110after the person1408removes one or more items1306from the weight sensor110. The tracking system100may then determine an item quantity based on the weight decrease amount. For example, the tracking system100may determine an individual item weight for the items1306that are associated with the weight sensor110. For instance, the weight sensor110may be associated with an item1306that has an individual weight of sixteen ounces. When the weight sensor110detects a weight decrease of sixty-four ounces, the weight sensor110may determine that four of the items1306were removed from the weight sensor110. In other embodiments, the digital cart1410may further comprise any other suitable type of information associated with the person1408and/or items1306that they have picked up. Item Assignment Using a Local Zone FIG.15is a flowchart of an embodiment of an item assigning method1500for the tracking system100.
The tracking system100may employ method1500to detect when an item1306has been picked up from a rack112and to determine which person to assign the item to using a predefined zone1808that is associated with the rack112. In a busy environment, such as a store, there may be multiple people standing near a rack112when an item is removed from the rack112. Identifying the correct person that picked up the item1306can be challenging. In this case, the tracking system100uses a predefined zone1808to reduce the search space when identifying a person that picks up an item1306from a rack112. The predefined zone1808is associated with the rack112and is used to identify an area where a person can pick up an item1306from the rack112. The predefined zone1808allows the tracking system100to quickly ignore people who are not within an area where a person can pick up an item1306from the rack112, for example behind the rack112. Once the item1306and the person have been identified, the tracking system100will add the item to a digital cart1410that is associated with the identified person. At step1502, the tracking system100detects a weight decrease on a weight sensor110. Referring toFIG.18as an example, the weight sensor110is disposed on a rack112and is configured to measure a weight for the items1306that are placed on the weight sensor110. In this example, the weight sensor110is associated with a particular item1306. The tracking system100detects a weight decrease on the weight sensor110when a person1802removes one or more items1306from the weight sensor110. Returning toFIG.15at step1504, the tracking system100identifies an item1306associated with the weight sensor110. In one embodiment, the tracking system100comprises an item map1308A that associates items1306with particular locations (e.g. zones1304and/or shelves1302) and weight sensors110on the rack112. For example, an item map1308A may comprise a rack identifier, weight sensor identifiers, and a plurality of item identifiers. Each item identifier is mapped to a particular weight sensor110(i.e. weight sensor identifier) on the rack112. The tracking system100determines which weight sensor110detected a weight decrease and then identifies the item1306or item identifier that corresponds with the weight sensor110using the item map1308A. At step1506, the tracking system100receives a frame302of the rack112from a sensor108. The sensor108captures a frame302of at least a portion of the rack112within the global plane104for the space102. The frame302comprises a plurality of pixels that are each associated with a pixel location402. Each pixel location402comprises a pixel row and a pixel column. The pixel row and the pixel column indicate the location of a pixel within the frame302. The frame302comprises a predefined zone1808that is associated with the rack112. The predefined zone1808is used for identifying people that are proximate to the front of the rack112and in a suitable position for retrieving items1306from the rack112. For example, the rack112comprises a front portion1810, a first side portion1812, a second side portion1814, and a back portion1814. In this example, a person may be able to retrieve items1306from the rack112when they are either in front of or to the side of the rack112. A person is unable to retrieve items1306from the rack112when they are behind the rack112.
In this case, the predefined zone1808may overlap with at least a portion of the front portion1810, the first side portion1812, and the second side portion1814of the rack112in the frame1806. This configuration prevents people that are behind the rack112from being considered as a person who picked up an item1306from the rack112. InFIG.18, the predefined zone1808is rectangular. In other examples, the predefined zone1808may be semi-circular or in any other suitable shape. After the tracking system100determines that an item1306has been picked up from the rack112, the tracking system100then begins to identify people within the frame302that may have picked up the item1306from the rack112. At step1508, the tracking system100identifies a person1802within the frame302. The tracking system100may identify a person1802within the frame302using a process similar to the process described in step1004ofFIG.10. In other examples, the tracking system100may employ any other suitable technique for identifying a person1802within the frame302. At step1510, the tracking system100determines a pixel location402A in the frame302for the identified person1802. The tracking system100may determine a pixel location402A for the identified person1802using a process similar to the process described in step1004ofFIG.10. The pixel location402A comprises a pixel row and a pixel column that identifies the location of the person1802in the frame302of the sensor108. At step1511, the tracking system100applies a homography118to the pixel location402A of the identified person1802to determine an (x,y) coordinate306in the global plane104for the identified person1802. The homography118is configured to translate between pixel locations402in the frame302and (x,y) coordinates306in the global plane104. The homography118is configured similar to the homography118described inFIGS.2-5B. As an example, the tracking system100may identify the homography118that is associated with the sensor108and may use matrix multiplication between the homography118and the pixel location402A of the identified person1802to determine the (x,y) coordinate306in the global plane104. At step1512, the tracking system100determines whether the identified person1802is within a predefined zone1808associated with the rack112in the frame302. Continuing with the example inFIG.18, the predefined zone1808is associated with a range of (x,y) coordinates306in the global plane104. The tracking system100may compare the (x,y) coordinate306for the identified person1802to the range of (x,y) coordinates306that are associated with the predefined zone1808to determine whether the (x,y) coordinate306for the identified person1802is within the predefined zone1808. In other words, the tracking system100uses the (x,y) coordinate306for the identified person1802to determine whether the identified person1802is within an area suitable for picking up items1306from the rack112. In this example, the (x,y) coordinate306for the person1802corresponds with a location in front of the rack112and is within the predefined zone1808which means that the identified person1802is in a suitable area for retrieving items1306from the rack112. In another embodiment, the predefined zone1808is associated with a plurality of pixels (e.g. a range of pixel rows and pixel columns) in the frame302. The tracking system100may compare the pixel location402A to the pixels associated with the predefined zone1808to determine whether the pixel location402A is within the predefined zone1808. 
In other words, the tracking system100uses the pixel location402A of the identified person1802to determine whether the identified person1802is within an area suitable for picking up items1306from the rack112. In this example, the tracking system100may compare the pixel column of the pixel location402A with a range of pixel columns associated with the predefined zone1808and the pixel row of the pixel location402A with a range of pixel rows associated with the predefined zone1808to determine whether the identified person1802is within the predefined zone1808. In this example, the pixel location402A for the person1802corresponds with a location in front of the rack112and is within the predefined zone1808, which means that the identified person1802is in a suitable area for retrieving items1306from the rack112. The tracking system100proceeds to step1514in response to determining that the identified person1802is within the predefined zone1808. Otherwise, the tracking system100returns to step1508to identify another person within the frame302. In this case, the tracking system100determines that the identified person1802is not in a suitable area for retrieving items1306from the rack112, for example because the identified person1802is standing behind the rack112. In some instances, multiple people may be near the rack112and the tracking system100may need to determine which person is interacting with the rack112so that it can add a picked-up item1306to the appropriate person's digital cart1410. Returning to the example inFIG.18, a second person1826is standing next to the side of the rack112in the frame302when the first person1802picks up an item1306from the rack112. In this example, the second person1826is closer to the rack112than the first person1802; however, the tracking system100can ignore the second person1826because the pixel location402B of the second person1826is outside of the predefined zone1808that is associated with the rack112. For example, the tracking system100may identify an (x,y) coordinate306in the global plane104for the second person1826and determine that the second person1826is outside of the predefined zone1808based on their (x,y) coordinate306. As another example, the tracking system100may identify a pixel location402B within the frame302for the second person1826and determine that the second person1826is outside of the predefined zone1808based on their pixel location402B. As another example, the frame302further comprises a third person1832standing near the rack112. In this case, the tracking system100determines which person picked up the item1306based on their proximity to the item1306that was picked up. For example, the tracking system100may determine an (x,y) coordinate306in the global plane104for the third person1832. The tracking system100may then determine a first distance1828between the (x,y) coordinate306of the first person1802and the location on the rack112where the item1306was picked up. The tracking system100also determines a second distance1830between the (x,y) coordinate306of the third person1832and the location on the rack112where the item1306was picked up. The tracking system100may then determine that the first person1802is closer to the item1306than the third person1832when the first distance1828is less than the second distance1830. In this example, the tracking system100identifies the first person1802as the person that most likely picked up the item1306based on their proximity to the location on the rack112where the item1306was picked up.
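As a non-limiting illustration, the assignment logic described above filters candidates to those inside the predefined zone1808and then selects the candidate closest to the rack location where the item1306was picked up. The sketch below performs the comparison in the global plane104using (x,y) coordinates306, although the same comparison could be made in pixel space; the zone extents and names are illustrative assumptions.

    import math

    def assign_pickup(candidates, zone_x_range, zone_y_range, pickup_xy):
        """Pick which person most likely took the item.

        candidates: dict person_id -> (x, y) coordinate in the global plane.
        zone_x_range, zone_y_range: (min, max) extents of the predefined zone.
        pickup_xy: (x, y) of the rack location where the weight decrease occurred.
        Returns the person_id of the closest in-zone candidate, or None.
        """
        def in_zone(xy):
            x, y = xy
            return (zone_x_range[0] <= x <= zone_x_range[1]
                    and zone_y_range[0] <= y <= zone_y_range[1])

        in_zone_candidates = {pid: xy for pid, xy in candidates.items() if in_zone(xy)}
        if not in_zone_candidates:
            return None  # for example, everyone nearby is behind the rack
        return min(in_zone_candidates,
                   key=lambda pid: math.dist(in_zone_candidates[pid], pickup_xy))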
This process allows the tracking system100to identify the correct person that picked up the item1306from the rack112before adding the item1306to their digital cart1410. As another example, the tracking system100may determine a pixel location402C in the frame302for a third person1832. The tracking system100may then determine the first distance1828between the pixel location402A of the first person1802and the location on the rack112where the item1306was picked up. The tracking system100also determines the second distance1830between the pixel location402C of the third person1832and the location on the rack112where the item1306was picked up. Returning toFIG.15at step1514, the tracking system100adds the item1306to a digital cart1410that is associated with the identified person1802. The tracking system100may add the item1306to the digital cart1410using a process similar to the process described in step1224ofFIG.12. Item Identification FIG.16is a flowchart of an embodiment of an item identification method1600for the tracking system100. The tracking system100may employ method1600to identify an item1306that has a non-uniform weight and to assign the item1306to a person's digital cart1410. For items1306with a uniform weight, the tracking system100is able to determine the number of items1306that are removed from a weight sensor110based on a weight difference on the weight sensor110. However, items1306such as fresh food do not have a uniform weight which means that the tracking system100is unable to determine how many items1306were removed from a shelf1302based on weight measurements. In this configuration, the tracking system100uses a sensor108to identify markers1820(e.g. text or symbols) on an item1306that has been picked up and to identify a person near the rack112where the item1306was picked up. For example, a marker1820may be located on the packaging of an item1806or on a strap for carrying the item1806. Once the item1306and the person have been identified, the tracking system100can add the item1306to a digital cart1410that is associated with the identified person. At step1602, the tracking system100detects a weight decrease on a weight sensor110. Returning to the example inFIG.18, the weight sensor110is disposed on a rack112and is configured to measure a weight for the items1306that are placed on the weight sensor110. In this example, the weight sensor110is associated with a particular item1306. The tracking system100detects a weight decrease on the weight sensor110when a person1802removes one or more items1306from the weight sensor110. After the tracking system100detects that an item1306was removed from a rack112, the tracking system100will use a sensor108to identify the item1306that was removed and the person who picked up the item1306. Returning toFIG.16at step1604, the tracking system100receives a frame302from a sensor108. The sensor108captures a frame302of at least a portion of the rack112within the global plane104for the space102. In the example shown inFIG.18, the sensor108is configured such that the frame302from the sensor108captures an overhead view of the rack112. The frame302comprises a plurality of pixels that are each associated with a pixel location402. Each pixel location402comprises a pixel row and a pixel column. The pixel row and the pixel column indicate the location of a pixel within the frame302. The frame302comprises a predefined zone1808that is configured similar to the predefined zone1808described in step1504ofFIG.15. 
In one embodiment, the frame302may further comprise a second predefined zone that is configured as a virtual curtain similar to the predefined zone1406that is described inFIGS.12-14. For example, the tracking system100may use the second predefined zone to detect that the person's1802hand reaches for an item1306before detecting the weight decrease on the weight sensor110. In this example, the second predefined zone is used to alert the tracking system100that an item1306is about to be picked up from the rack112, which may be used to trigger the sensor108to capture a frame302that includes the item1306being removed from the rack112. At step1606, the tracking system100identifies a marker1820on an item1306within a predefined zone1808in the frame302. A marker1820is an object with unique features that can be detected by a sensor108. For instance, a marker1820may comprise a uniquely identifiable shape, color, symbol, pattern, text, a barcode, a QR code, or any other suitable type of feature. The tracking system100may search the frame302for known features that correspond with a marker1820. Referring to the example inFIG.18, the tracking system100may identify a shape (e.g. a star) on the packaging of the item1806in the frame302that corresponds with a marker1820. As another example, the tracking system100may use character or text recognition to identify alphanumeric text that corresponds with a marker1820when the marker1820comprises text. In other examples, the tracking system100may use any other suitable technique to identify a marker1820within the frame302. Returning toFIG.16at step1608, the tracking system100identifies an item1306associated with the marker1820. In one embodiment, the tracking system100comprises an item map1308B that associates items1306with particular markers1820. For example, an item map1308B may comprise a plurality of item identifiers that are each mapped to a particular marker1820(i.e. marker identifier). The tracking system100identifies the item1306or item identifier that corresponds with the marker1820using the item map1308B. In some embodiments, the tracking system100may also use information from a weight sensor110to identify the item1306. For example, the tracking system100may comprise an item map1308A that associates items1306with particular locations (e.g. zone1304and/or shelves1302) and weight sensors110on the rack112. For instance, an item map1308A may comprise a rack identifier, weight sensor identifiers, and a plurality of item identifiers. Each item identifier is mapped to a particular weight sensor110(i.e. weight sensor identifier) on the rack112. The tracking system100determines which weight sensor110detected a weight decrease and then identifies the item1306or item identifier that corresponds with the weight sensor110using the item map1308A. After the tracking system100identifies the item1306that was picked up from the rack112, the tracking system100then determines which person picked up the item1306from the rack112. At step1610, the tracking system100identifies a person1802within the frame302. The tracking system100may identify a person1802within the frame302using a process similar to the process described in step1004ofFIG.10. In other examples, the tracking system100may employ any other suitable technique for identifying a person1802within the frame302. At step1612, the tracking system100determines a pixel location402A for the identified person1802.
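A minimal sketch of the item map lookups described above, modeling item map1308B (marker to item) and item map1308A (rack and weight sensor to item) as plain dictionaries; all marker, rack, sensor, and item identifiers here are hypothetical:

```python
# Hypothetical item maps; identifiers are illustrative, not from the disclosure.
item_map_markers = {          # marker identifier -> item identifier (item map 1308B)
    "star_logo": "SKU-00417",
    "blue_qr_4821": "SKU-00552",
}
item_map_weight_sensors = {   # (rack id, weight sensor id) -> item identifier (item map 1308A)
    ("rack_112", "sensor_110a"): "SKU-00417",
    ("rack_112", "sensor_110b"): "SKU-00913",
}

def identify_item(marker_id=None, rack_id=None, weight_sensor_id=None):
    """Resolve the item identifier from a detected marker when available,
    otherwise fall back to the weight sensor that reported the weight change."""
    if marker_id is not None and marker_id in item_map_markers:
        return item_map_markers[marker_id]
    return item_map_weight_sensors.get((rack_id, weight_sensor_id))

print(identify_item(marker_id="star_logo"))                               # -> SKU-00417
print(identify_item(rack_id="rack_112", weight_sensor_id="sensor_110b"))  # -> SKU-00913
```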
The tracking system100may determine a pixel location402A for the identified person1802using a process similar to the process described in step1004ofFIG.10. The pixel location402A comprises a pixel row and a pixel column that identify the location of the person1802in the frame302of the sensor108. At step1613, the tracking system100applies a homography118to the pixel location402A of the identified person1802to determine an (x,y) coordinate306in the global plane104for the identified person1802. The tracking system100may determine the (x,y) coordinate306in the global plane104for the identified person1802using a process similar to the process described in step1511ofFIG.15. At step1614, the tracking system100determines whether the identified person1802is within the predefined zone1808. Here, the tracking system100determines whether the identified person1802is in a suitable area for retrieving items1306from the rack112. The tracking system100may determine whether the identified person1802is within the predefined zone1808using a process similar to the process described in step1512ofFIG.15. The tracking system100proceeds to step1616in response to determining that the identified person1802is within the predefined zone1808. In this case, the tracking system100determines that the identified person1802is in a suitable area for retrieving items1306from the rack112, for example, when the identified person1802is standing in front of the rack112. Otherwise, the tracking system100returns to step1610to identify another person within the frame302. In this case, the tracking system100determines that the identified person1802is not in a suitable area for retrieving items1306from the rack112, for example, when the identified person1802is standing behind the rack112. In some instances, multiple people may be near the rack112and the tracking system100may need to determine which person is interacting with the rack112so that it can add a picked-up item1306to the appropriate person's digital cart1410. The tracking system100may identify which person picked up the item1306from the rack112using a process similar to the process described in step1512ofFIG.15. At step1616, the tracking system100adds the item1306to a digital cart1410that is associated with the person1802. The tracking system100may add the item1306to the digital cart1410using a process similar to the process described in step1224ofFIG.12.
Misplaced Item Identification
FIG.17is a flowchart of an embodiment of a misplaced item identification method1700for the tracking system100. The tracking system100may employ method1700to identify items1306that have been misplaced on a rack112. While a person is shopping, the shopper may decide to put down one or more items1306that they have previously picked up. In this case, the tracking system100should identify which items1306were put back on a rack112and which shopper put the items1306back so that the tracking system100can remove the items1306from their digital cart1410. Identifying an item1306that was put back on a rack112is challenging because the shopper may not put the item1306back in its correct location. For example, the shopper may put back an item1306in the wrong location on the rack112or on the wrong rack112. In either of these cases, the tracking system100has to correctly identify both the person and the item1306so that the shopper is not charged for the item1306when they leave the space102.
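A minimal sketch of applying a homography to a pixel location to obtain an (x,y) coordinate in the global plane, as in step1613 above; it assumes the homography can be represented as a 3x3 planar transform, and the matrix values and the row/column-to-x/y convention are assumptions for illustration only:

```python
import numpy as np

# Hypothetical 3x3 homography for one sensor (values are illustrative only).
H = np.array([[0.01, 0.0, -1.2],
              [0.0, 0.01, -0.8],
              [0.0, 0.0, 1.0]])

def pixel_to_global(pixel_location, homography):
    """Apply a planar homography to a (row, column) pixel location to obtain an
    (x, y) coordinate in the global plane of the space."""
    row, col = pixel_location
    u = np.array([col, row, 1.0])      # homogeneous image coordinates
    x, y, w = homography @ u
    return x / w, y / w

print(pixel_to_global((180, 350), H))  # -> approximately (2.3, 1.0) in the global plane
```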
In this configuration, the tracking system100uses a weight sensor110to first determine that an item1306was not put back in its correct location. The tracking system100then uses a sensor108to identify the person that put the item1306on the rack112and analyzes their digital cart1410to determine which item1306they most likely put back based on the weights of the items1306in their digital cart1410. At step1702, the tracking system100detects a weight increase on a weight sensor110. Returning to the example inFIG.18, a first person1802places one or more items1306back on a weight sensor110on the rack112. The weight sensor110is configured to measure a weight for the items1306that are placed on the weight sensor110. The tracking system100detects a weight increase on the weight sensor110when a person1802adds one or more items1306to the weight sensor110. At step1704, the tracking system100determines a weight increase amount on the weight sensor110in response to detecting the weight increase on the weight sensor110. The weight increase amount corresponds with a magnitude of the weight change detected by the weight sensor110. Here, the tracking system100determines how much of a weight increase was experienced by the weight sensor110after one or more items1306were placed on the weight sensor110. In one embodiment, the tracking system100determines that the item1306placed on the weight sensor110is a misplaced item1306based on the weight increase amount. For example, the weight sensor110may be associated with an item1306that has a known individual item weight. This means that the weight sensor110is only expected to experience weight changes that are multiples of the known item weight. In this configuration, the tracking system100may determine that the returned item1306is a misplaced item1306when the weight increase amount does not match the individual item weight or multiples of the individual item weight for the item1306associated with the weight sensor110. As an example, the weight sensor110may be associated with an item1306that has an individual weight of ten ounces. If the weight sensor110detects a weight increase of twenty-five ounces, the tracking system100can determine that the item1306placed on the weight sensor110is not an item1306that is associated with the weight sensor110because the weight increase amount does not match the individual item weight or multiples of the individual item weight for the item1306that is associated with the weight sensor110. After the tracking system100detects that an item1306has been placed back on the rack112, the tracking system100will use a sensor108to identify the person that put the item1306back on the rack112. At step1706, the tracking system100receives a frame302from a sensor108. The sensor108captures a frame302of at least a portion of the rack112within the global plane104for the space102. In the example shown inFIG.18, the sensor108is configured such that the frame302from the sensor108captures an overhead view of the rack112. The frame302comprises a plurality of pixels that are each associated with a pixel location402. Each pixel location402comprises a pixel row and a pixel column. The pixel row and the pixel column indicate the location of a pixel within the frame302. In some embodiments, the frame302further comprises a predefined zone1808that is configured similar to the predefined zone1808described in step1504ofFIG.15. At step1708, the tracking system100identifies a person1802within the frame302.
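A minimal sketch of the weight-increase check described above at step1704 for flagging a misplaced item1306; the function name and tolerance value are assumptions, and the example mirrors the ten-ounce and twenty-five-ounce figures from the text:

```python
def is_misplaced(weight_increase_oz, expected_unit_weight_oz, tolerance_oz=0.5):
    """Flag a returned item as misplaced when the measured weight increase is not
    close to a whole multiple of the unit weight associated with this sensor."""
    multiples = round(weight_increase_oz / expected_unit_weight_oz)
    if multiples < 1:
        return True
    return abs(weight_increase_oz - multiples * expected_unit_weight_oz) > tolerance_oz

print(is_misplaced(20.0, 10.0))   # -> False: two 10 oz items were put back
print(is_misplaced(25.0, 10.0))   # -> True: does not match the associated item's weight
```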
The tracking system100may identify a person1802within the frame302using a process similar to the process described in step1004ofFIG.10. In other examples, the tracking system100may employ any other suitable technique for identifying a person1802within the frame302. At step1710, the tracking system100determines a pixel location402A in the frame302for the identified person1802. The tracking system100may determine a pixel location402A for the identified person1802using a process similar to the process described in step1004ofFIG.10. The pixel location402A comprises a pixel row and a pixel column that identify the location of the person1802in the frame302of the sensor108. At step1712, the tracking system100determines whether the identified person1802is within a predefined zone1808of the frame302. Here, the tracking system100determines whether the identified person1802is in a suitable area for putting items1306back on the rack112. The tracking system100may determine whether the identified person1802is within the predefined zone1808using a process similar to the process described in step1512ofFIG.15. The tracking system100proceeds to step1714in response to determining that the identified person1802is within the predefined zone1808. In this case, the tracking system100determines that the identified person1802is in a suitable area for putting items1306back on the rack112, for example, when the identified person1802is standing in front of the rack112. Otherwise, the tracking system100returns to step1708to identify another person within the frame302. In this case, the tracking system100determines that the identified person is not in a suitable area for putting items1306back on the rack112, for example, when the person is standing behind the rack112. In some instances, multiple people may be near the rack112and the tracking system100may need to determine which person is interacting with the rack112so that it can remove the returned item1306from the appropriate person's digital cart1410. The tracking system100may determine which person put back the item1306on the rack112using a process similar to the process described in step1512ofFIG.15. After the tracking system100identifies which person put back the item1306on the rack112, the tracking system100then determines which item1306from the identified person's digital cart1410has a weight that closest matches the weight of the item1306that was put back on the rack112. At step1714, the tracking system100identifies a plurality of items1306in a digital cart1410that is associated with the person1802. Here, the tracking system100identifies the digital cart1410that is associated with the identified person1802. For example, the digital cart1410may be linked with the identified person's1802object identifier1118. In one embodiment, the digital cart1410comprises item identifiers that are each associated with an individual item weight. At step1716, the tracking system100identifies an item weight for each of the items1306in the digital cart1410. In one embodiment, the tracking system100may comprise a set of item weights stored in memory and may look up the item weight for each item1306using the item identifiers that are associated with the items1306in the digital cart1410. At step1718, the tracking system100identifies an item1306from the digital cart1410with an item weight that closest matches the weight increase amount.
For example, the tracking system100may compare the weight increase amount measured by the weight sensor110to the item weights associated with each of the items1306in the digital cart1410. The tracking system100may then identify which item1306corresponds with an item weight that closest matches the weight increase amount. In some cases, the tracking system100is unable to identify an item1306in the identified person's digital cart1410that has a weight that matches the measured weight increase amount on the weight sensor110. In this case, the tracking system100may determine a probability that an item1306was put down for each of the items1306in the digital cart1410. The probability may be based on the individual item weight and the weight increase amount. For example, an item1306with an individual weight that is closer to the weight increase amount will be associated with a higher probability than an item1306with an individual weight that is further away from the weight increase amount. In some instances, the probabilities are a function of the distance between a person and the rack112. In this case, the probabilities associated with items1306in a person's digital cart1410depend on how close the person is to the rack112where the item1306was put back. For example, the probabilities associated with the items1306in the digital cart1410may be inversely proportional to the distance between the person and the rack112. In other words, the probabilities associated with the items in a person's digital cart1410decay as the person moves further away from the rack112. The tracking system100may identify the item1306that has the highest probability of being the item1306that was put down. In some cases, the tracking system100may consider items1306that are in multiple people's digital carts1410when there are multiple people within the predefined zone1808that is associated with the rack112. For example, the tracking system100may determine that a second person is within the predefined zone1808that is associated with the rack112. In this example, the tracking system100identifies items1306from each person's digital cart1410that may correspond with the item1306that was put back on the rack112and selects the item1306with an item weight that closest matches the weight of the item1306that was put back on the rack112. For instance, the tracking system100identifies item weights for items1306in a second digital cart1410that is associated with the second person. The tracking system100identifies an item1306from the second digital cart1410with an item weight that closest matches the weight increase amount. The tracking system100determines a first weight difference between a first identified item1306from the digital cart1410of the first person1802and the weight increase amount, and a second weight difference between a second identified item1306from the second digital cart1410of the second person and the weight increase amount. In this example, the tracking system100may determine that the first weight difference is less than the second weight difference, which indicates that the item1306identified in the first person's digital cart1410closest matches the weight increase amount, and then removes the first identified item1306from their digital cart1410. After the tracking system100identifies the item1306that was most likely put back on the rack112and the person that put the item1306back, the tracking system100removes the item1306from their digital cart1410. At step1720, the tracking system100removes the identified item1306from the identified person's digital cart1410.
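A minimal sketch of the closest-weight match and the distance-based scoring described above; the cart contents, weights, and the particular scoring formula are illustrative assumptions rather than the disclosed method:

```python
def closest_cart_item(cart, weight_increase_oz):
    """Return the (item_id, weight) pair in a digital cart whose item weight is
    closest to the measured weight increase on the weight sensor."""
    return min(cart.items(), key=lambda kv: abs(kv[1] - weight_increase_oz))

def putback_probability(item_weight_oz, weight_increase_oz, distance_m):
    """Toy score combining weight similarity with proximity to the rack; the
    score decays as the weight mismatch or the person's distance grows."""
    weight_term = 1.0 / (1.0 + abs(item_weight_oz - weight_increase_oz))
    distance_term = 1.0 / (1.0 + distance_m)
    return weight_term * distance_term

cart_a = {"SKU-00417": 10.0, "SKU-00552": 24.5}   # first person's cart (oz)
cart_b = {"SKU-00913": 32.0}                      # second person's cart (oz)
increase = 25.0
best_a = closest_cart_item(cart_a, increase)      # -> ("SKU-00552", 24.5)
best_b = closest_cart_item(cart_b, increase)      # -> ("SKU-00913", 32.0)
# The smaller weight difference wins, so the item is removed from cart_a.
print(best_a if abs(best_a[1] - increase) < abs(best_b[1] - increase) else best_b)
print(putback_probability(24.5, increase, distance_m=0.8))
```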
Here, the tracking system100discards information associated with the identified item1306from the digital cart1410. This process ensures that the shopper will not be charged for an item1306that they put back on a rack112regardless of whether they put the item1306back in its correct location.
Auto-Exclusion Zones
In order to track the movement of people in the space102, the tracking system100should generally be able to distinguish between the people (i.e., the target objects) and other objects (i.e., non-target objects), such as the racks112, displays, and any other non-human objects in the space102. Otherwise, the tracking system100may waste memory and processing resources detecting and attempting to track these non-target objects. As described elsewhere in this disclosure (e.g., inFIGS.24-26and corresponding description below), in some cases, people may be tracked by detecting one or more contours in a set of image frames (e.g., a video) and monitoring movements of the contours between frames. A contour is generally a curve associated with an edge of a representation of a person in an image. While the tracking system100may detect contours in order to track people, in some instances, it may be difficult to distinguish between contours that correspond to people (e.g., or other target objects) and contours associated with non-target objects, such as racks112, signs, product displays, and the like. Even if sensors108are calibrated at installation to account for the presence of non-target objects, in many cases, it may be challenging to reliably and efficiently recalibrate the sensors108to account for changes in positions of non-target objects that should not be tracked in the space102. For example, if a rack112, sign, product display, or other furniture or object in space102is added, removed, or moved (activities which may occur frequently and without warning and/or unintentionally), one or more of the sensors108may require recalibration or adjustment. Without this recalibration or adjustment, it is difficult or impossible to reliably track people in the space102. Prior to this disclosure, there was a lack of tools for efficiently recalibrating and/or adjusting sensors, such as sensors108, in a manner that would provide reliable tracking. This disclosure not only encompasses the recognition of the previously unrecognized problems described above (e.g., with respect to tracking people in space102, which may change over time) but also provides unique solutions to these problems. As described in this disclosure, during an initial time period before people are tracked, pixel regions from each sensor108may be determined that should be excluded during subsequent tracking. For example, during the initial time period, the space102may not include any people such that contours detected by each sensor108correspond only to non-target objects in the space for which tracking is not desired. Thus, pixel regions, or “auto-exclusion zones,” may be determined that correspond to portions of each image generated by sensors108that are not used for object detection and tracking (e.g., the pixel coordinates of contours that should not be tracked). For instance, the auto-exclusion zones may correspond to contours detected in images that are associated with non-target objects, contours that are spuriously detected at the edges of a sensor's field-of-view, and the like.
Auto-exclusion zones can be determined automatically at any desired or appropriate time interval to improve the usability and performance of tracking system100. After the auto-exclusion zones are determined, the tracking system100may proceed to track people in the space102. The auto-exclusion zones are used to limit the pixel regions used by each sensor108for tracking people. For example, pixels corresponding to auto-exclusion zones may be ignored by the tracking system100during tracking. In some cases, a detected person (e.g., or other target object) may be near or partially overlapping with one or more auto-exclusion zones. In these cases, the tracking system100may determine, based on the extent to which a potential target object's position overlaps with the auto-exclusion zone, whether the target object will be tracked. This may reduce or eliminate false positive detection of non-target objects during person tracking in the space102, while also improving the efficiency of tracking system100by reducing wasted processing resources that would otherwise be expended attempting to track non-target objects. In some embodiments, a map of the space102may be generated that presents the physical regions that are excluded during tracking (i.e., a map that presents a representation of the auto-exclusion zone(s) in the physical coordinates of the space). Such a map, for example, may facilitate trouble-shooting of the tracking system by allowing an administrator to visually confirm that people can be tracked in appropriate portions of the space102. FIG.19illustrates the determination of auto-exclusion zones1910,1914and the subsequent use of these auto-exclusion zones1910,1914for improved tracking of people (e.g., or other target objects) in the space102. In general, during an initial time period (t<t0), top-view image frames are received by the client(s)105and/or server106from sensors108and used to determine auto-exclusion zones1910,1914. For instance, the initial time period at t<t0may correspond to a time when no people are in the space102. For example, if the space102is open to the public during a portion of the day, the initial time period may be before the space102is opened to the public. In some embodiments, the server106and/or client105may provide, for example, an alert or transmit a signal indicating that the space102should be emptied of people (e.g., or other target objects to be tracked) in order for auto-exclusion zones1910,1914to be identified. In some embodiments, a user may input a command (e.g., via any appropriate interface coupled to the server106and/or client(s)105) to initiate the determination of auto-exclusion zones1910,1914immediately or at one or more desired times in the future (e.g., based on a schedule). An example top-view image frame1902used for determining auto-exclusion zones1910,1914is shown inFIG.19. Image frame1902includes a representation of a first object1904(e.g., a rack112) and a representation of a second object1906. For instance, the first object1904may be a rack112, and the second object1906may be a product display or any other non-target object in the space102. In some embodiments, the second object1906may not correspond to an actual object in the space but may instead be detected anomalously because of lighting in the space102and/or a sensor error. Each sensor108generally generates at least one frame1902during the initial time period, and these frame(s)1902is/are used to determine corresponding auto-exclusion zones1910,1914for the sensor108. 
For instance, the sensor client105may receive the top-view image1902, and detect contours (i.e., the dashed lines around zones1910,1914) corresponding to the auto-exclusion zones1910,1914as illustrated in view1908. The contours of auto-exclusion zones1910,1914generally correspond to curves that extend along a boundary (e.g., the edge) of objects1904,1906in image1902. The view1908generally corresponds to a presentation of image1902in which the detected contours corresponding to auto-exclusion zones1910,1914are presented but the corresponding objects1904,1906, respectively, are not shown. For an image frame1902that includes color and depth data, contours for auto-exclusion zones1910,1914may be determined at a given depth (e.g., a distance away from sensor108) based on the color data in the image1902. For example, a steep gradient of a color value may correspond to an edge of an object and may be used to determine, or detect, a contour. For example, contours for the auto-exclusion zones1910,1914may be determined using any suitable contour or edge detection method such as Canny edge detection, threshold-based detection, or the like. The client105determines pixel coordinates1912and1916corresponding to the locations of the auto-exclusion zones1910and1914, respectively. The pixel coordinates1912,1916generally correspond to the locations (e.g., row and column numbers) in the image frame1902that should be excluded during tracking. In general, objects associated with the pixel coordinates1912,1916are not tracked by the tracking system100. Moreover, certain objects which are detected outside of the auto-exclusion zones1910,1914may not be tracked under certain conditions. For instance, if the position of the object (e.g., the position associated with region1920, discussed below with respect to frame1918) overlaps at least a threshold amount with an auto-exclusion zone1910,1914, the object may not be tracked. This prevents the tracking system100(e.g., the local client105associated with a sensor108or a subset of sensors108) from attempting to unnecessarily track non-target objects. In some cases, auto-exclusion zones1910,1914correspond to non-target (e.g., inanimate) objects in the field-of-view of a sensor108(e.g., a rack112, which is associated with contour1910). However, auto-exclusion zones1910,1914may also or alternatively correspond to other aberrant features or contours detected by a sensor108(e.g., caused by sensor errors, inconsistent lighting, or the like). Following the determination of pixel coordinates1912,1916to exclude during tracking, objects may be tracked during a subsequent time period corresponding to t>t0. An example image frame1918generated during tracking is shown inFIG.19. In frame1918, region1920is detected as possibly corresponding to a target object. For example, region1920may correspond to a pixel mask or bounding box generated based on a contour detected in frame1918. For example, a pixel mask may be generated to fill in the area inside the contour or a bounding box may be generated to encompass the contour. For example, a pixel mask may include the pixel coordinates within the corresponding contour. For instance, the pixel coordinates1912of auto-exclusion zone1910may effectively correspond to a mask that overlays or “fills in” the auto-exclusion zone1910.
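A minimal sketch, assuming the OpenCV library is available and that a grayscale frame has been captured while the space102is empty, of how contours might be detected and converted into an auto-exclusion pixel mask; the edge-detection thresholds are arbitrary assumptions:

```python
import cv2
import numpy as np

def compute_auto_exclusion_mask(empty_frame_gray):
    """Detect contours in a frame captured while the space is empty and return a
    binary mask marking pixel coordinates to exclude from later tracking."""
    edges = cv2.Canny(empty_frame_gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros_like(empty_frame_gray, dtype=np.uint8)
    cv2.drawContours(mask, contours, -1, color=255, thickness=cv2.FILLED)
    return mask  # nonzero pixels belong to auto-exclusion zones

# During tracking, pixels inside the mask could be ignored, e.g. by zeroing them out:
# tracking_frame[mask > 0] = 0
```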
Following the detection of region1920, the client105determines whether the region1920corresponds to a target object which should be tracked or is sufficiently overlapping with auto-exclusion zone1914to consider region1920as being associated with a non-target object. For example, the client105may determine whether at least a threshold percentage of the pixel coordinates1916overlap with (e.g., are the same as) pixel coordinates of region1920. The overlapping region1922of these pixel coordinates is illustrated in frame1918. For example, the threshold percentage may be about 50% or more. In some embodiments, the threshold percentage may be as small as about 10%. In response to determining that at least the threshold percentage of pixel coordinates overlap, the client105generally does not determine a pixel position for tracking the object associated with region1920. However, if the overlap1922corresponds to less than the threshold percentage, an object associated with region1920is tracked, as described further below (e.g., with respect toFIGS.24-26). As described above, sensors108may be arranged such that adjacent sensors108have overlapping fields-of-view. For instance, fields-of-view of adjacent sensors108may overlap by between about 10% and 30%. As such, the same object may be detected by two different sensors108and either included or excluded from tracking in the image frames received from each sensor108based on the unique auto-exclusion zones determined for each sensor108. This may facilitate more reliable tracking than was previously possible, even when one sensor108may have a large auto-exclusion zone (i.e., where a large proportion of pixel coordinates in image frames generated by the sensor108are excluded from tracking). Accordingly, if one sensor108malfunctions, adjacent sensors108may still provide adequate tracking in the space102. If region1920corresponds to a target object (i.e., a person to track in the space102), the tracking system100proceeds to track the region1920. Example methods of tracking are described in greater detail below with respect toFIGS.24-26. In some embodiments, the server106uses the pixel coordinates1912,1916to determine corresponding physical coordinates (e.g., coordinates2012,2016illustrated inFIG.20, described below). For instance, the client105may determine pixel coordinates1912,1916corresponding to the local auto-exclusion zones1910,1914of a sensor108and transmit these coordinates1912,1916to the server106. As shown inFIG.20, the server106may use the pixel coordinates1912,1916received from the sensor108to determine corresponding physical coordinates2012,2016. For instance, a homography generated for each sensor108(seeFIGS.2-7and the corresponding description above), which associates pixel coordinates (e.g., coordinates1912,1916) in an image generated by a given sensor108to corresponding physical coordinates (e.g., coordinates2012,2016) in the space102, may be employed to convert the excluded pixel coordinates1912,1916(ofFIG.19) to excluded physical coordinates2012,2016in the space102. These excluded coordinates2012,2016may be used along with other coordinates from other sensors108to generate the global auto-exclusion zone map2000of the space102which is illustrated inFIG.20. This map2000, for example, may facilitate trouble-shooting of the tracking system100by allowing quantification, identification, and/or verification of physical regions2002of space102where objects may (and may not) be tracked.
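A minimal sketch of the overlap test described above, treating regions as sets of pixel coordinates and using an assumed 50% threshold; the helper name and toy coordinates are hypothetical:

```python
def should_track(region_pixels, exclusion_pixels, threshold=0.5):
    """Decide whether a candidate region is tracked. The region is discarded when
    the fraction of its pixels falling inside an auto-exclusion zone meets or
    exceeds the threshold (e.g. 50%)."""
    region = set(region_pixels)
    overlap = len(region & set(exclusion_pixels))
    return (overlap / len(region)) < threshold

region = [(r, c) for r in range(10, 20) for c in range(10, 20)]        # 100 pixels
exclusion = [(r, c) for r in range(15, 40) for c in range(10, 40)]     # overlaps 50 pixels
print(should_track(region, exclusion))   # -> False: 50% overlap, so not tracked
```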
This may allow an administrator or other individual to visually confirm that objects can be tracked in appropriate portions of the space102. If regions2002correspond to known high-traffic zones of the space102, system maintenance may be appropriate (e.g., which may involve replacing, adjusting, and/or adding additional sensors108). FIG.21is a flowchart illustrating an example method2100for generating and using auto-exclusion zones (e.g., zones1910,1914ofFIG.19). Method2100may begin at step2102where one or more image frames1902are received during an initial time period. As described above, the initial time period may correspond to an interval of time when no person is moving throughout the space102, or when no person is within the field-of-view of one or more sensors108from which the image frame(s)1902is/are received. In a typical embodiment, one or more image frames1902are generally received from each sensor108of the tracking system100, such that local regions (e.g., auto-exclusion zones1910,1914) to exclude for each sensor108may be determined. In some embodiments, a single image frame1902is received from each sensor108to detect auto-exclusion zones1910,1914. However, in other embodiments, multiple image frames1902are received from each sensor108. Using multiple image frames1902to identify auto-exclusion zones1910,1914for each sensor108may improve the detection of any spurious contours or other aberrations that correspond to pixel coordinates (e.g., coordinates1912,1916ofFIG.19) which should be ignored or excluded during tracking. At step2104, contours (e.g., dashed contour lines corresponding to auto-exclusion zones1910,1914ofFIG.19) are detected in the one or more image frames1902received at step2102. Any appropriate contour detection algorithm may be used, including but not limited to those based on Canny edge detection, threshold-based detection, and the like. In some embodiments, the unique contour detection approaches described in this disclosure may be used (e.g., to distinguish closely spaced contours in the field-of-view, as described below, for example, with respect toFIGS.22and23). At step2106, pixel coordinates (e.g., coordinates1912,1916ofFIG.19) are determined for the detected contours (from step2104). The coordinates may be determined, for example, based on a pixel mask that overlays the detected contours. A pixel mask may, for example, correspond to pixels within the contours. In some embodiments, pixel coordinates correspond to the pixel coordinates within a bounding box determined for the contour (e.g., as illustrated inFIG.22, described below). For instance, the bounding box may be a rectangular box with an area that encompasses the detected contour. At step2108, the pixel coordinates are stored. For instance, the client105may store the pixel coordinates corresponding to auto-exclusion zones1910,1914in memory (e.g., memory3804ofFIG.38, described below). As described above, the pixel coordinates may also or alternatively be transmitted to the server106(e.g., to generate a map2000of the space, as illustrated in the example ofFIG.20). At step2110, the client105receives an image frame1918during a subsequent time period during which tracking is performed (i.e., after the pixel coordinates corresponding to auto-exclusion zones are stored at step2108). The frame is received from sensor108and includes a representation of an object in the space102. At step2112, a contour is detected in the frame received at step2110.
For example, the contour may correspond to a curve along the edge of an object represented in the frame1918. The pixel coordinates determined at step2106may be excluded (or not used) during contour detection. For instance, image data may be ignored and/or removed (e.g., given a value of zero, or the color equivalent) at the pixel coordinates determined at step2106, such that no contours are detected at these coordinates. In some cases, a contour may be detected outside of these coordinates. In some cases, a contour may be detected that is partially outside of these coordinates but overlaps partially with the coordinates (e.g., as illustrated in image1918ofFIG.19). At step2114, the client105generally determines whether the detected contour has a pixel position that sufficiently overlaps with pixel coordinates of the auto-exclusion zones1910,1914determined at step2106. If the coordinates sufficiently overlap, the contour or region1920(and the associated object) is not tracked in the frame. For instance, as described above, the client105may determine whether the detected contour or region1920overlaps at least a threshold percentage (e.g., of 50%) with a region associated with the pixel coordinates (e.g., see overlapping region1922ofFIG.19). If the criteria of step2114are satisfied, the client105generally, at step2116, does not determine a pixel position for the contour detected at step2112. As such, no pixel position is reported to the server106, thereby reducing or eliminating the waste of processing resources associated with attempting to track an object when it is not a target object for which tracking is desired. Otherwise, if the criteria of step2114are not satisfied, the client105determines a pixel position for the contour or region1920at step2118. Determining a pixel position from a contour may involve, for example, (i) determining a region1920(e.g., a pixel mask or bounding box) associated with the contour and (ii) determining a centroid or other characteristic position of the region as the pixel position. At step2120, the determined pixel position is transmitted to the server106to facilitate global tracking, for example, using predetermined homographies, as described elsewhere in this disclosure (e.g., with respect toFIGS.24-26). For example, the server106may receive the determined pixel position, access a homography associating pixel coordinates in images generated by the sensor108from which the frame at step2110was received to physical coordinates in the space102, and apply the homography to the pixel coordinates to generate corresponding physical coordinates for the tracked object associated with the contour detected at step2112. Modifications, additions, or omissions may be made to method2100depicted inFIG.21. Method2100may include more, fewer, or other steps. For example, steps may be performed in parallel or in any suitable order. While at times discussed as tracking system100, client(s)105, server106, or components of any thereof performing steps, any suitable system or components of the system may perform one or more steps of the method.
Contour-Based Detection of Closely Spaced People
In some cases, two people are near each other, making it difficult or impossible to reliably detect and/or track each person (e.g., or other target object) using conventional tools. In some cases, the people may be initially detected and tracked using depth images at an approximate waist depth (i.e., a depth corresponding to the waist height of an average person being tracked).
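A minimal sketch, assuming OpenCV-style contours, of reducing a detected contour to a single pixel position via the centroid of its bounding box, as in step2118 above; the helper name and toy contour are hypothetical:

```python
import cv2
import numpy as np

def contour_pixel_position(contour):
    """Reduce a detected contour to a single (row, column) pixel position by
    taking the centroid of its bounding box."""
    x, y, w, h = cv2.boundingRect(contour)
    return (y + h // 2, x + w // 2)   # (row, column)

# A square toy contour; in practice contours come from cv2.findContours.
contour = np.array([[[10, 20]], [[30, 20]], [[30, 40]], [[10, 40]]], dtype=np.int32)
print(contour_pixel_position(contour))   # -> (30, 20)
```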
Tracking at an approximate waist depth may be more effective at capturing all people regardless of their height or mode of movement. For instance, by detecting and tracking people at an approximate waist depth, the tracking system100is highly likely to detect tall and short individuals and individuals who may be using alternative methods of movement (e.g., wheelchairs, and the like). However, if two people with a similar height are standing near each other, it may be difficult to distinguish between the two people in the top-view images at the approximate waist depth. Rather than detecting two separate people, the tracking system100may initially detect the people as a single larger object. This disclosure encompasses the recognition that at a decreased depth (i.e., a depth nearer the heads of the people), the people may be more readily distinguished. This is because the people's heads are more likely to be imaged at the decreased depth, and their heads are smaller and less likely to be detected as a single merged region (or contour, as described in greater detail below). As another example, if two people enter the space102standing close to one another (e.g., holding hands), they may appear to be a single larger object. Since the tracking system100may initially detect the two people as one person, it may be difficult to properly identify these people if these people separate while in the space102. As yet another example, if two people who briefly stand close together are momentarily “lost” or detected as only a single, larger object, it may be difficult to correctly identify the people after they separate from one another. As described elsewhere in this disclosure (e.g., with respect toFIGS.19-21and24-26), people (e.g., the people in the example scenarios described above) may be tracked by detecting contours in top-view image frames generated by sensors108and tracking the positions of these contours. However, when two people are closely spaced, a single merged contour (see merged contour2220ofFIG.22described below) may be detected in a top-view image of the people. This single contour generally cannot be used to track each person individually, resulting in considerable downstream errors during tracking. For example, even if two people separate after having been closely spaced, it may be difficult or impossible using previous tools to determine which person was which, and the identity of each person may be unknown after the two people separate. Prior to this disclosure, there was a lack of reliable tools for detecting people (e.g., and other target objects) under the example scenarios described above and under other similar circumstances. The systems and methods described in this disclosure provide improvements to previous technology by facilitating the improved detection of closely spaced people. For example, the systems and methods described in this disclosure may facilitate the detection of individual people when contours associated with these people would otherwise be merged, resulting in the detection of a single person using conventional detection strategies. In some embodiments, improved contour detection is achieved by detecting contours at different depths (e.g., at least two depths) to identify separate contours at a second depth within a larger merged contour detected at a first depth used for tracking.
For example, if two people are standing near each other such that contours are merged to form a single contour, separate contours associated with heads of the two closely spaced people may be detected at a depth associated with the persons' heads. In some embodiments, a unique statistical approach may be used to differentiate between the two people by selecting bounding regions for the detected contours with a low similarity value. In some embodiments, certain criteria are satisfied to ensure that the detected contours correspond to separate people, thereby providing more reliable person (e.g., or other target object) detection than was previously possible. For example, two contours detected at an approximate head depth may be required to be within a threshold size range in order for the contours to be used for subsequent tracking. In some embodiments, an artificial neural network may be employed to detect separate people that are closely spaced by analyzing top-view images at different depths. FIG.22is a diagram illustrating the detection of two closely spaced people2202,2204based on top-view depth images2212and angled-view images2214received from sensors108a,busing the tracking system100. In one embodiment, sensors108a,bmay each be one of sensors108of tracking system100described above with respect toFIG.1. In another embodiment, sensors108a,bmay each be one of sensors108of a separate virtual store system (e.g., layout cameras and/or rack cameras) as described in U.S. patent application Ser. No. 16/664,470 entitled, “Customer-Based Video Feed” which is incorporated by reference herein. In this embodiment, the sensors108of tracking system100may be mapped to the sensors108of the virtual store system using a homography. Moreover, this embodiment can retrieve identifiers and the relative position of each person from the sensors108of the virtual store system using the homography between tracking system100and the virtual store system. Generally, sensor108ais an overhead sensor configured to generate top-view depth images2212(e.g., color and/or depth images) of at least a portion of the space102. Sensor108amay be mounted, for example, in a ceiling of the space102. Sensor108amay generate image data corresponding to a plurality of depths which include but are not necessarily limited to the depths2210a-cillustrated inFIG.22. Depths2210a-care generally distances measured from the sensor108a. Each depth2210a-cmay be associated with a corresponding height (e.g., from the floor of the space102in which people2202,2204are detected and/or tracked). Sensor108aobserves a field-of-view2208a. Top-view images2212generated by sensor108amay be transmitted to the sensor client105a. The sensor client105ais communicatively coupled (e.g., via a wired connection or wirelessly) to the sensor108aand the server106. Server106is described above with respect toFIG.1. In this example, sensor108bis an angled-view sensor, which is configured to generate angled-view images2214(e.g., color and/or depth images) of at least a portion of the space102. Sensor108bhas a field-of-view2208b, which overlaps with at least a portion of the field-of-view2208aof sensor108a. The angled-view images2214generated by the angled-view sensor108bare transmitted to sensor client105b. Sensor client105bmay be a client105described above with respect toFIG.1. In the example ofFIG.22, sensors108a,bare coupled to different sensor clients105a,b.
However, it should be understood that the same sensor client105may be used for both sensors108a,b(e.g., such that clients105a,bare the same client105). In some cases, the use of different sensor clients105a,bfor sensors108a,bmay provide improved performance because image data may still be obtained for the area shared by fields-of-view2208a,beven if one of the clients105a,bwere to fail. In the example scenario illustrated inFIG.22, people2202,2204are located sufficiently close together such that conventional object detection tools fail to detect the individual people2202,2204(e.g., such that people2202,2204would not have been detected as separate objects). This situation may correspond, for example, to the distance2206abetween people2202,2204being less than a threshold distance2206b(e.g., of about 6 inches). The threshold distance2206bcan generally be any appropriate distance determined for the system100. For example, the threshold distance2206bmay be determined based on several characteristics of the system2200and the people2202,2204being detected. For example, the threshold distance2206bmay be based on one or more of the distance of the sensor108afrom the people2202,2204, the size of the people2202,2204, the size of the field-of-view2208a, the sensitivity of the sensor108a, and the like. Accordingly, the threshold distance2206bmay range from just over zero inches to over six inches depending on these and other characteristics of the tracking system100. People2202,2204may be any target object an individual may desire to detect and/or track based on data (i.e., top-view images2212and/or angled-view images2214) from sensors108a,b. The sensor client105adetects contours in top-view images2212received from sensor108a. Typically, the sensor client105adetects contours at an initial depth2210a. The initial depth2210amay be associated with, for example, a predetermined height (e.g., from the ground) which has been established to detect and/or track people2202,2204through the space102. For example, for tracking humans, the initial depth2210amay be associated with an average shoulder or waist height of people expected to be moving in the space102(e.g., a depth which is likely to capture a representation for both tall and short people traversing the space102). The sensor client105amay use the top-view images2212generated by sensor108ato identify the top-view image2212corresponding to when a first contour2202aassociated with the first person2202merges with a second contour2204aassociated with the second person2204. View2216illustrates contours2202a,2204aat a time prior to when these contours2202a,2204amerge (i.e., prior to a time (tclose) when the first and second people2202,2204are within the threshold distance2206bof each other). View2216corresponds to a view of the contours detected in a top-view image2212received from sensor108a(e.g., with other objects in the image not shown). A subsequent view2218corresponds to the image2212at or near tclosewhen the people2202,2204are closely spaced and the first and second contours2202a,2204amerge to form merged contour2220. The sensor client105amay determine a region2222which corresponds to a “size” of the merged contour2220in image coordinates (e.g., a number of pixels associated with contour2220). For example, region2222may correspond to a pixel mask or a bounding box determined for contour2220. Example approaches to determining pixel masks and bounding boxes are described above with respect to step2104ofFIG.21. 
For example, region2222may be a bounding box determined for the contour2220using a non-maximum suppression object-detection algorithm. For instance, the sensor client105amay determine a plurality of bounding boxes associated with the contour2220. For each bounding box, the client105amay calculate a score. The score, for example, may represent an extent to which that bounding box is similar to the other bounding boxes. The sensor client105amay identify a subset of the bounding boxes with a score that is greater than a threshold value (e.g., 80% or more), and determine region2222based on this identified subset. For example, region2222may be the bounding box with the highest score or a bounding box comprising regions shared by bounding boxes with a score that is above the threshold value. In order to detect the individual people2202and2204, the sensor client105amay access images2212at a decreased depth (i.e., at one or both of depths2210band2210c) and use this data to detect separate contours2202b,2204b, illustrated in view2224. In other words, the sensor client105amay analyze the images2212at a depth nearer the heads of people2202,2204in the images2212in order to detect the separate people2202,2204. In some embodiments, the decreased depth may correspond to an average or predetermined head height of persons expected to be detected by the tracking system100in the space102. In some cases, contours2202b,2204bmay be detected at the decreased depth for both people2202,2204. However, in other cases, the sensor client105amay not detect both heads at the decreased depth. For example, if a child and an adult are closely spaced, only the adult's head may be detected at the decreased depth (e.g., at depth2210b). In this scenario, the sensor client105amay proceed to a slightly increased depth (e.g., to depth2210c) to detect the head of the child. For instance, in such scenarios, the sensor client105aiteratively increases the depth from the decreased depth towards the initial depth2210ain order to detect two distinct contours2202b,2204b(e.g., for both the adult and the child in the example described above). For instance, the depth may first be decreased to depth2210band then increased to depth2210cif both contours2202band2204bare not detected at depth2210b. This iterative process is described in greater detail below with respect to method2300ofFIG.23. As described elsewhere in this disclosure, in some cases, the tracking system100may maintain a record of features, or descriptors, associated with each tracked person (see, e.g.,FIG.30, described below). As such, the sensor client105amay access this record to determine unique depths that are associated with the people2202,2204, which are likely associated with merged contour2220. For instance, depth2210bmay be associated with a known head height of person2202, and depth2210cmay be associated with a known head height of person2204. Once contours2202band2204bare detected, the sensor client105adetermines a region2202cassociated with pixel coordinates2202dof contour2202band a region2204cassociated with pixel coordinates2204dof contour2204b. For example, as described above with respect to region2222, regions2202cand2204cmay correspond to pixel masks or bounding boxes generated based on the corresponding contours2202b,2204b, respectively. For example, pixel masks may be generated to “fill in” the area inside the contours2202b,2204bor bounding boxes may be generated which encompass the contours2202b,2204b.
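One possible reading of the bounding-box scoring described above, sketched as an average intersection-over-union similarity; the scoring function, box format, and thresholds are assumptions for illustration and are not taken from the disclosure:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def similarity_scores(boxes):
    """Score each candidate bounding box by its average IoU with the others."""
    return [sum(iou(b, other) for other in boxes if other is not b) / (len(boxes) - 1)
            for b in boxes]

boxes = [(10, 10, 50, 50), (12, 11, 52, 49), (40, 40, 90, 90)]
scores = similarity_scores(boxes)
# A non-maximum-suppression-style selection keeps boxes scoring above a high
# threshold (e.g. 0.8); the "non-minimum suppression" variant described below
# instead keeps boxes scoring below a low threshold (e.g. 0.2).
best = boxes[scores.index(max(scores))]
print(best)   # the box most consistent with the others
```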
The pixel coordinates2202d,2204dgenerally correspond to the set of positions (e.g., rows and columns) of pixels within regions2202c,2204c. In some embodiments, a unique approach is employed to more reliably distinguish between closely spaced people2202and2204and determine associated regions2202cand2204c. In these embodiments, the regions2202cand2204care determined using a unique method referred to in this disclosure as “non-minimum suppression.” Non-minimum suppression may involve, for example, determining bounding boxes associated with the contours2202b,2204b(e.g., using any appropriate object detection algorithm, as appreciated by a person skilled in the relevant art). For each bounding box, a score may be calculated. As described above with respect to non-maximum suppression, the score may represent an extent to which the bounding box is similar to the other bounding boxes. However, rather than identifying bounding boxes with high scores (e.g., as with non-maximum suppression), a subset of the bounding boxes is identified with scores that are less than a threshold value (e.g., of about 20%). This subset may be used to determine regions2202c,2204c. For example, regions2202c,2204cmay include regions shared by each bounding box of the identified subsets. In other words, bounding boxes that are not below the minimum score are “suppressed” and not used to identify regions2202c,2204c. Prior to assigning a position or identity to the contours2202b,2204band/or the associated regions2202c,2204c, the sensor client105amay first check whether criteria are satisfied for distinguishing the region2202cfrom region2204c. The criteria are generally designed to ensure that the contours2202b,2204b(and/or the associated regions2202c,2204c) are appropriately sized, shaped, and positioned to be associated with the heads of the corresponding people2202,2204. These criteria may include one or more requirements. For example, one requirement may be that the regions2202c,2204coverlap by less than or equal to a threshold amount (e.g., of about 50% or, in some embodiments, of about 10%). Generally, the separate heads of different people2202,2204should not overlap in a top-view image2212. Another requirement may be that the regions2202c,2204care within (e.g., bounded by or encompassed by) the merged-contour region2222. This requirement, for example, ensures that the head contours2202b,2204bare appropriately positioned above the merged contour2220to correspond to heads of people2202,2204. If the contours2202b,2204bdetected at the decreased depth are not within the merged contour2220, then these contours2202b,2204bare likely not associated with the heads of the people2202,2204associated with the merged contour2220. Generally, if the criteria are satisfied, the sensor client105aassociates region2202cwith a first pixel position2202eof person2202and associates region2204cwith a second pixel position2204eof person2204. Each of the first and second pixel positions2202e,2204egenerally corresponds to a single pixel position (e.g., row and column) associated with the location of the corresponding contour2202b,2204bin the image2212. The first and second pixel positions2202e,2204eare included in the pixel positions2226which may be transmitted to the server106to determine corresponding physical (e.g., global) positions2228, for example, based on homographies2230(e.g., using a previously determined homography for sensor108aassociating pixel coordinates in images2212generated by sensor108ato physical coordinates in the space102).
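A minimal sketch of the criteria check described above (two head regions barely overlapping and both bounded by the merged-contour region2222); the box format, thresholds, and coordinates are assumptions:

```python
def overlap_fraction(a, b):
    """Fraction of box a's area that also lies inside box b; boxes are (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    return (ix * iy) / float((a[2] - a[0]) * (a[3] - a[1]))

def contains(outer, inner):
    """True when box `inner` lies entirely within box `outer`."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def heads_are_distinct(head_a, head_b, merged_region, max_overlap=0.1):
    """Criteria check: the two head regions must barely overlap and both must be
    bounded by the merged-contour region detected at the initial depth."""
    return (overlap_fraction(head_a, head_b) <= max_overlap
            and overlap_fraction(head_b, head_a) <= max_overlap
            and contains(merged_region, head_a)
            and contains(merged_region, head_b))

merged = (100, 100, 300, 260)
head_1 = (120, 130, 180, 190)
head_2 = (210, 140, 270, 200)
print(heads_are_distinct(head_1, head_2, merged))   # -> True: treat as two people
```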
As described above, sensor108bis positioned and configured to generate angled-view images2214of at least a portion of the field-of-view2208aof sensor108a. The sensor client105breceives the angled-view images2214from the second sensor108b. Because of its different (e.g., angled) view of people2202,2204in the space102, an angled-view image2214obtained at tclosemay be sufficient to distinguish between the people2202,2204. A view2232of contours2202f,2204fdetected at tcloseis shown inFIG.22. The sensor client105bdetects a contour2202fcorresponding to the first person2202and determines a corresponding region2202gassociated with pixel coordinates2202hof contour2202f. The sensor client105bdetects a contour2204fcorresponding to the second person2204and determines a corresponding region2204gassociated with pixel coordinates2204hof contour2204f. Since contours2202f,2204fdo not merge and regions2202g,2204gare sufficiently separated (e.g., they do not overlap and/or are at least a minimum pixel distance apart), the sensor client105bmay associate region2202gwith a first pixel position2202iof the first person2202and region2204gwith a second pixel position2204iof the second person2204. Each of the first and second pixel positions2202i,2204igenerally corresponds to a single pixel position (e.g., row and column) associated with the location of the corresponding contour2202f,2204fin the image2214. Pixel positions2202i,2204imay be included in pixel positions2234which may be transmitted to server106to determine physical positions2228of the people2202,2204(e.g., using a previously determined homography for sensor108bassociating pixel coordinates of images2214generated by sensor108bto physical coordinates in the space102). In an example operation of the tracking system100, sensor108ais configured to generate top-view color-depth images of at least a portion of the space102. When people2202and2204are within a threshold distance of one another, the sensor client105aidentifies an image frame (e.g., associated with view2218) corresponding to a time stamp (e.g., tclose) where contours2202a,2204aassociated with the first and second person2202,2204, respectively, are merged and form contour2220. In order to detect each person2202and2204in the identified image frame (e.g., associated with view2218), the client105amay first attempt to detect separate contours for each person2202,2204at a first decreased depth2210b. As described above, depth2210bmay be a predetermined height associated with an expected head height of people moving through the space102. In some embodiments, depth2210bmay be a depth previously determined based on a measured height of person2202and/or a measured height of person2204. For example, depth2210bmay be based on an average height of the two people2202,2204. As another example, depth2210bmay be a depth corresponding to a predetermined head height of person2202(as illustrated in the example ofFIG.22). If two contours2202b,2204bare detected at depth2210b, these contours may be used to determine pixel positions2202e,2204eof people2202and2204, as described above. If only one contour2202bis detected at depth2210b(e.g., if only one of the people2202,2204is tall enough to be detected at depth2210b), the region associated with this contour2202bmay be used to determine the pixel position2202eof the corresponding person, and the next person may be detected at an increased depth2210c. Depth2210cis generally greater than depth2210bbut less than depth2210a.
In the illustrative example ofFIG.22, depth2210ccorresponds to a predetermined head height of person2204. If contour2204bis detected for person2204at depth2210c, a pixel position2204eis determined based on pixel coordinates2204dassociated with the contour2204b(e.g., following determination that the criteria described above are satisfied). If a contour2204bis not detected at depth2210c, the client105amay attempt to detect contours at progressively increased depths until a contour is detected or a maximum depth (e.g., the initial depth2210a) is reached. For example, the sensor client105amay continue to search for the contour2204bat increased depths (i.e., depths between depth2210cand the initial depth2210a). If the maximum depth (e.g., depth2210a) is reached without the contour2204bbeing detected, the client105agenerally determines that the separate people2202,2204cannot be detected. FIG.23is a flowchart illustrating a method2300of operating tracking system100to detect closely spaced people2202,2204. Method2300may begin at step2302where the sensor client105areceives one or more frames of top-view depth images2212generated by sensor108a. At step2304, the sensor client105aidentifies a frame in which a first contour2202aassociated with the first person2202is merged with a second contour2204aassociated with the second person2204. Generally, the merged first and second contours (i.e., merged contour2220) are determined at the first depth2212ain the depth images2212received at step2302. The first depth2212amay correspond to a waist or shoulder depth of persons expected to be tracked in the space102. The detection of merged contour2220corresponds to the first person2202being located in the space within a threshold distance2206bfrom the second person2204, as described above. At step2306, the sensor client105adetermines a merged-contour region2222. Region2222is associated with pixel coordinates of the merged contour2220. For instance, region2222may correspond to coordinates of a pixel mask that overlays the detected contour. As another example, region2222may correspond to pixel coordinates of a bounding box determined for the contour (e.g., using any appropriate object detection algorithm). In some embodiments, a method involving non-maximum suppression is used to detect region2222. In some embodiments, region2222is determined using an artificial neural network. For example, an artificial neural network may be trained to detect contours at various depths in top-view images generated by sensor108a. At step2308, the depth at which contours are detected in the identified image frame from step2304is decreased (e.g., to depth2210billustrated inFIG.22). At step2310a, the sensor client105adetermines whether a first contour (e.g., contour2202b) is detected at the current depth. If the contour2202bis not detected, the sensor client105aproceeds, at step2312a, to an increased depth (e.g., to depth2210c). If the increased depth corresponds to having reached a maximum depth (e.g., to reaching the initial depth2210a), the process ends because the first contour2202bwas not detected. If the maximum depth has not been reached, the sensor client105areturns to step2310aand determines if the first contour2202bis detected at the newly increased current depth. If the first contour2202bis detected at step2310a, the sensor client105a, at step2316a, determines a first region2202cassociated with pixel coordinates2202dof the detected contour2202b.
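The progressive depth search just described (steps2308through2316a) might be sketched as follows. This is a minimal sketch under stated assumptions: detect_contours_at_depth is a hypothetical helper standing in for whatever contour detector is applied at a given depth, and the step size is arbitrary.

    def find_contour_by_depth_sweep(depth_image, start_depth, max_depth,
                                    depth_step, detect_contours_at_depth):
        """Search for a contour starting at a decreased depth (e.g., depth 2210b)
        and moving in increments of depth_step toward the maximum depth (e.g.,
        the initial depth 2210a). Returns (contour, depth) or (None, None) if
        nothing is found before the maximum depth is reached."""
        depth = start_depth
        while depth <= max_depth:
            contours = detect_contours_at_depth(depth_image, depth)
            if contours:
                # Use the largest contour found at this depth.
                return max(contours, key=lambda c: c["area"]), depth
            depth += depth_step
        return None, None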
In some embodiments, region2202cmay be determined using a method of non-minimum suppression, as described above. In some embodiments, region2202cmay be determined using an artificial neural network. The same or a similar approach—illustrated in steps2310b,2312b,2314b, and2316b—may be used to determine a second region2204cassociated with pixel coordinates2204dof the contour2204b. For example, at step2310b, the sensor client105adetermines whether a second contour2204bis detected at the current depth. If the contour2204bis not detected, the sensor client105aproceeds, at step2312b, to an increased depth (e.g., to depth2210c). If the increased depth corresponds to having reached a maximum depth (e.g., to reaching the initial depth2210a), the process ends because the second contour2204bwas not detected. If the maximum depth has not been reached, the sensor client105areturns to step2310band determines if the second contour2204bis detected at the newly increased current depth. If the second contour2204bis detected at step2310b, the sensor client105a, at step2316b, determines a second region2204cassociated with pixel coordinates2204dof the detected contour2204b. In some embodiments, region2204cmay be determined using a method of non-minimum suppression or an artificial neural network, as described above. At step2318, the sensor client105adetermines whether criteria are satisfied for distinguishing the first and second regions determined in steps2316aand2316b, respectively. For example, the criteria may include one or more requirements. For example, one requirement may be that the regions2202c,2204coverlap by less than or equal to a threshold amount (e.g., of about 10%). Another requirement may be that the regions2202c,2204care within (e.g., bounded by, e.g., encompassed by) the merged-contour region2222(determined at step2306). If the criteria are not satisfied, method2300generally ends. Otherwise, if the criteria are satisfied at step2318, the method2300proceeds to steps2320and2322where the sensor client105aassociates the first region2202cwith a first pixel position2202eof the first person2202(step2320) and associates the second region2204cwith a second pixel position2204eof the second person2204(step2322). Associating the regions2202c,2204cwith pixel positions2202e,2204emay correspond to storing in a memory pixel coordinates2202d,2204dof the regions2202c,2204cand/or an average pixel position corresponding to each of the regions2202c,2204calong with an object identifier for the people2202,2204. At step2324, the sensor client105amay transmit the first and second pixel positions (e.g., as pixel positions2226) to the server106. At step2326, the server106may apply a homography (e.g., of homographies2230) for the sensor108ato the pixel positions to determine corresponding physical (e.g., global) positions2228for the first and second people2202,2204. Examples of generating and using homographies2230are described in greater detail above with respect toFIGS.2-7. Modifications, additions, or omissions may be made to method2300depicted inFIG.23. Method2300may include more, fewer, or other steps. For example, steps may be performed in parallel or in any suitable order. While at times discussed as system2200, sensor client105a, server106, or components of any thereof performing steps, any suitable system or components of the system may perform one or more steps of the method.
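A minimal version of the criteria check at step2318(the regions overlap by no more than a threshold amount and both lie within the merged-contour region2222) might look like the sketch below; treating "overlap" as intersection-over-union and the 10% limit are assumptions for the example.

    def criteria_satisfied(region_a, region_b, merged_region, max_overlap=0.10):
        """Regions are (x1, y1, x2, y2) bounding boxes in pixel coordinates."""
        def area(r):
            return max(0, r[2] - r[0]) * max(0, r[3] - r[1])

        def intersection(r, s):
            return (max(r[0], s[0]), max(r[1], s[1]), min(r[2], s[2]), min(r[3], s[3]))

        # Requirement 1: overlap between the two regions is at most max_overlap.
        inter = area(intersection(region_a, region_b))
        union = area(region_a) + area(region_b) - inter
        overlap_ok = (inter / union if union else 0.0) <= max_overlap

        # Requirement 2: both regions are bounded by the merged-contour region.
        def contained(inner, outer):
            return (inner[0] >= outer[0] and inner[1] >= outer[1] and
                    inner[2] <= outer[2] and inner[3] <= outer[3])

        within_merged = contained(region_a, merged_region) and contained(region_b, merged_region)
        return overlap_ok and within_merged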
Multi-Sensor Image Tracking on Local and Global Planes
As described elsewhere in this disclosure (e.g., with respect toFIGS.19-23above), tracking people (e.g., or other target objects) in space102using multiple sensors108presents several previously unrecognized challenges. This disclosure encompasses not only the recognition of these challenges but also unique solutions to these challenges. For instance, systems and methods are described in this disclosure that track people both locally (e.g., by tracking pixel positions in images received from each sensor108) and globally (e.g., by tracking physical positions on a global plane corresponding to the physical coordinates in the space102). Person tracking may be more reliable when performed both locally and globally. For example, if a person is "lost" locally (e.g., if a sensor108fails to capture a frame and a person is not detected by the sensor108), the person may still be tracked globally based on an image from a nearby sensor108(e.g., the angled-view sensor108bdescribed with respect toFIG.22above), an estimated local position of the person determined using a local tracking algorithm, and/or an estimated global position determined using a global tracking algorithm. As another example, if people appear to merge (e.g., if detected contours merge into a single merged contour, as illustrated in view2216ofFIG.22above) at one sensor108, an adjacent sensor108may still provide a view in which the people are separate entities (e.g., as illustrated in view2232ofFIG.22above). Thus, information from an adjacent sensor108may be given priority for person tracking. In some embodiments, if a person tracked via a sensor108is lost in the local view, estimated pixel positions may be determined using a tracking algorithm and reported to the server106for global tracking, at least until the tracking algorithm determines that the estimated positions are below a threshold confidence level. FIGS.24A-Cillustrate the use of a tracking subsystem2400to track a person2402through the space102.FIG.24Aillustrates a portion of the tracking system100ofFIG.1when used to track the position of person2402based on image data generated by sensors108a-c. The position of person2402is illustrated at three different time points: t1, t2, and t3. Each of the sensors108a-cis a sensor108ofFIG.1, described above. Each sensor108a-chas a corresponding field-of-view2404a-c, which corresponds to the portion of the space102viewed by the sensor108a-c. As shown inFIG.24A, each field-of-view2404a-coverlaps with that of the adjacent sensor(s)108a-c. For example, the adjacent fields-of-view2404a-cmay overlap by between about 10% and 30%. Sensors108a-cgenerally generate top-view images and transmit corresponding top-view image feeds2406a-cto a tracking subsystem2400. The tracking subsystem2400includes the client(s)105and server106ofFIG.1. The tracking subsystem2400generally receives top-view image feeds2406a-cgenerated by sensors108a-c, respectively, and uses the received images (seeFIG.24B) to track a physical (e.g., global) position of the person2402in the space102(seeFIG.24C). Each sensor108a-cmay be coupled to a corresponding sensor client105of the tracking subsystem2400. As such, the tracking subsystem2400may include local particle filter trackers2444for tracking pixel positions of person2402in images generated by sensors108a-b, and global particle filter trackers2446for tracking physical positions of person2402in the space102.
FIG.24Bshows example top-view images2408a-c,2418a-c, and2426a-cgenerated by each of the sensors108a-cat times t1, t2, and t3. Certain of the top-view images include representations of the person2402(i.e., if the person2402was in the field-of-view2404a-cof the sensor108a-cat the time the image2408a-c,2418a-c, and2426a-cwas obtained). For example, at time t1, images2408a-care generated by sensors108a-c, respectively, and provided to the tracking subsystem2400. The tracking subsystem2400detects a contour2410associated with person2402in image2408a. For example, the contour2410may correspond to a curve outlining the border of a representation of the person2402in image2408a(e.g., detected based on color (e.g., RGB) image data at a predefined depth in image2408a, as described above with respect toFIG.19). The tracking subsystem2400determines pixel coordinates2412a, which are illustrated in this example by the bounding box2412bin image2408a. Pixel position2412cis determined based on the coordinates2412a. The pixel position2412cgenerally refers to the location (i.e., row and column) of the person2402in the image2408a. Since the object2402is also within the field-of-view2404bof the second sensor108bat t1(seeFIG.24A), the tracking system also detects a contour2414in image2408band determines corresponding pixel coordinates2416a(i.e., associated with bounding box2416b) for the object2402. Pixel position2416cis determined based on the coordinates2416a. The pixel position2416cgenerally refers to the pixel location (i.e., row and column) of the person2402in the image2408b. At time t1, the object2402is not in the field-of-view2404cof the third sensor108c(seeFIG.24A). Accordingly, the tracking subsystem2400does not determine pixel coordinates for the object2402based on the image2408creceived from the third sensor108c. Turning now toFIG.24C, the tracking subsystem2400(e.g., the server106of the tracking subsystem2400) may determine a first global position2438based on the determined pixel positions2412cand2416c(e.g., corresponding to pixel coordinates2412a,2416aand bounding boxes2412b,2416b, described above). The first global position2438corresponds to the position of the person2402in the space102, as determined by the tracking subsystem2400. In other words, the tracking subsystem2400uses the pixel positions2412c,2416cdetermined via the two sensors108a,bto determine a single physical position2438for the person2402in the space102. For example, a first physical position2412dmay be determined from the pixel position2412cassociated with bounding box2412busing a first homography associating pixel coordinates in the top-view images generated by the first sensor108ato physical coordinates in the space102. A second physical position2416dmay similarly be determined using the pixel position2416cassociated with bounding box2416busing a second homography associating pixel coordinates in the top-view images generated by the second sensor108bto physical coordinates in the space102. In some cases, the tracking subsystem2400may compare the distance between first and second physical positions2412dand2416dto a threshold distance2448to determine whether the positions2412d,2416dcorrespond to the same person or different people (see, e.g., step2620ofFIG.26, described below). The first global position2438may be determined as an average of the first and second physical positions2412d,2416d.
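The combination of per-sensor homographies and averaging described above might be expressed as follows. The 3x3 homography matrices are assumed to have been computed in advance (e.g., as described with respect toFIGS.2-7); the function names and the (row, column) pixel convention are assumptions for the sketch.

    import numpy as np

    def pixel_to_physical(homography, pixel_position):
        """Apply a 3x3 homography to a (row, col) pixel position and return
        (x, y) physical coordinates in the space."""
        col, row = pixel_position[1], pixel_position[0]
        vec = homography @ np.array([col, row, 1.0])
        return vec[:2] / vec[2]

    def global_position(pixel_positions_by_sensor, homographies_by_sensor):
        """Average the physical positions obtained from each sensor that
        detected the person (e.g., positions analogous to 2412d and 2416d)."""
        physical = [pixel_to_physical(homographies_by_sensor[sensor_id], pos)
                    for sensor_id, pos in pixel_positions_by_sensor.items()]
        return np.mean(physical, axis=0)

For instance, global_position({"108a": pos_from_108a, "108b": pos_from_108b}, homographies) would yield a single (x, y) location analogous to the first global position2438.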
In some embodiments, the global position is determined by clustering the first and second physical positions2412d,2416d(e.g., using any appropriate clustering algorithm). The first global position2438may correspond to (x,y) coordinates of the position of the person2402in the space102. Returning toFIG.24A, at time t2, the object2402is within fields-of-view2404aand2404bcorresponding to sensors108a,b. As shown inFIG.24B, a contour2422is detected in image2418band corresponding pixel coordinates2424a, which are illustrated by bounding box2424b, are determined. Pixel position2424cis determined based on the coordinates2424a. The pixel position2424cgenerally refers to the location (i.e., row and column) of the person2402in the image2418b. However, in this example, the tracking subsystem2400fails to detect, in image2418afrom sensor108a, a contour associated with object2402. This may be because the object2402was at the edge of the field-of-view2404a, because of a lost image frame from feed2406a, because the position of the person2402in the field-of-view2404acorresponds to an auto-exclusion zone for sensor108a(seeFIGS.19-21and corresponding description above), or because of any other malfunction of sensor108aand/or the tracking subsystem2400. In this case, the tracking subsystem2400may locally (e.g., at the particular client105which is coupled to sensor108a) estimate pixel coordinates2420aand/or corresponding pixel position2420bfor object2402. For example, a local particle filter tracker2444for object2402in images generated by sensor108amay be used to determine the estimated pixel position2420b. FIGS.25A,B illustrate the operation of an example particle filter tracker2444,2446(e.g., for determining estimated pixel position2420b).FIG.25Aillustrates a region2500in pixel coordinates or physical coordinates of space102. For example, region2500may correspond to a pixel region in an image or to a region in physical space. In a first zone2502, an object (e.g., person2402) is detected at position2504. The particle filter determines several estimated subsequent positions2506for the object. The estimated subsequent positions2506are illustrated as the dots or "particles" inFIG.25Aand are generally determined based on a history of previous positions of the object. Similarly, another zone2508shows a position2510for another object (or the same object at a different time) along with estimated subsequent positions2512of the "particles" for this object. For the object at position2504, the estimated subsequent positions2506are primarily clustered in a similar area above and to the right of position2504, indicating that the particle filter tracker2444,2446may provide a relatively good estimate of a subsequent position. Meanwhile, the estimated subsequent positions2512are relatively randomly distributed around position2510for the object, indicating that the particle filter tracker2444,2446may provide a relatively poor estimate of a subsequent position.FIG.25Bshows a distribution plot2550of the particles illustrated inFIG.25A, which may be used to quantify the quality of an estimated position based on a standard deviation value (σ). InFIG.25B, curve2552corresponds to the position distribution of anticipated positions2506, and curve2554corresponds to the position distribution of the anticipated positions2512. Curve2552has a relatively narrow distribution such that the anticipated positions2506are primarily near the mean position (μ).
For example, the narrow distribution corresponds to the particles primarily having a similar position, which in this case is above and to the right of position2504. In contrast, curve2554has a broader distribution, where the particles are more randomly distributed around the mean position (μ). Accordingly, the standard deviation of curve2552(σ1) is smaller than the standard deviation of curve2554(σ2). Generally, a standard deviation (e.g., either σ1or σ2) may be used as a measure of an extent to which an estimated pixel position generated by the particle filter tracker2444,2446is likely to be correct. If the standard deviation is less than a threshold standard deviation (σthreshold), as is the case with curve2552and σ1, the estimated position generated by a particle filter tracker2444,2446may be used for object tracking. Otherwise, the estimated position generally is not used for object tracking. Referring again toFIG.24C, the tracking subsystem2400(e.g., the server106of tracking subsystem2400) may determine a second global position2440for the object2402in the space102based on the estimated pixel position2420bassociated with estimated bounding box2420ain frame2418aand the pixel position2424cassociated with bounding box2424bfrom frame2418b. For example, a first physical position2420cmay be determined using a first homography associating pixel coordinates in the top-view images generated by the first sensor108ato physical coordinates in the space102. A second physical position2424dmay be determined using a second homography associating pixel coordinates in the top-view images generated by the second sensor108bto physical coordinates in the space102. The tracking subsystem2400(i.e., server106of the tracking subsystem2400) may determine the second global position2440based on the first and second physical positions2420c,2424d, as described above with respect to time t1. The second global position2440may correspond to (x,y) coordinates of the person2402in the space102. Turning back toFIG.24A, at time t3, the object2402is within the field-of-view2404bof sensor108band the field-of-view2404cof sensor108c. Accordingly, these images2426b,cmay be used to track person2402.FIG.24Bshows that a contour2428and corresponding pixel coordinates2430a, pixel region2430b, and pixel position2430care determined in frame2426bfrom sensor108b, while a contour2432and corresponding pixel coordinates2434a, pixel region2434b, and pixel position2434care detected in frame2426cfrom sensor108c. As shown inFIG.24Cand as described in greater detail above for times t1and t2, the tracking subsystem2400may determine a third global position2442for the object2402in the space based on the pixel position2430cassociated with bounding box2430bin frame2426band the pixel position2434cassociated with bounding box2434bfrom frame2426c. For example, a first physical position2430dmay be determined using a second homography associating pixel coordinates in the top-view images generated by the second sensor108bto physical coordinates in the space102. A second physical position2434dmay be determined using a third homography associating pixel coordinates in the top-view images generated by the third sensor108cto physical coordinates in the space102. The tracking subsystem2400may determine the global position2442based on the first and second physical positions2430d,2434d, as described above with respect to times t1and t2.
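A bare-bones particle-filter-style estimate with the standard-deviation check ofFIGS.25A,B might look like the following sketch. The motion model (last displacement plus Gaussian noise), the particle count, and the thresholds are assumptions, not values from the disclosure.

    import numpy as np

    def estimate_next_position(position_history, n_particles=200,
                               noise_std=3.0, sigma_threshold=10.0, seed=0):
        """Propagate particles from the last known position using the most
        recent displacement, then accept the particle mean as the estimate
        only if the particle spread (a proxy for sigma in FIG. 25B) is below
        sigma_threshold. position_history is a list of (row, col) positions,
        newest last."""
        rng = np.random.default_rng(seed)
        last = np.asarray(position_history[-1], dtype=float)
        velocity = (last - np.asarray(position_history[-2], dtype=float)
                    if len(position_history) > 1 else np.zeros(2))
        particles = last + velocity + rng.normal(0.0, noise_std, size=(n_particles, 2))
        spread = particles.std(axis=0).mean()
        if spread < sigma_threshold:
            return particles.mean(axis=0), spread   # confident estimate
        return None, spread                         # too uncertain to report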
FIG.26is a flow diagram illustrating the tracking of person2402in the space102based on top-view images (e.g., images2408a-c,2418a-c,2426a-c) from feeds2406a,bgenerated by sensors108a,b, described above. Field-of-view2404aof sensor108aand field-of-view2404bof sensor108bgenerally overlap by a distance2602. In one embodiment, distance2602may be about 10% to 30% of the fields-of-view2404a,b. In this example, the tracking subsystem2400includes the first sensor client105a, the second sensor client105b, and the server106. Each of the first and second sensor clients105a,bmay be a client105described above with respect toFIG.1. The first sensor client105ais coupled to the first sensor108aand configured to track, based on the first feed2406a, a first pixel position2412cof the person2402. The second sensor client105bis coupled to the second sensor108band configured to track, based on the second feed2406b, a second pixel position2416cof the same person2402. The server106generally receives pixel positions from clients105a,band tracks the global position of the person2402in the space102. In some embodiments, the server106employs a global particle filter tracker2446to track a global physical position of the person2402and one or more other people2604in the space102. Tracking people both locally (i.e., at the "pixel level" using clients105a,b) and globally (i.e., based on physical positions in the space102) improves tracking by reducing and/or eliminating noise and/or other tracking errors which may result from relying on either local tracking by the clients105a,bor global tracking by the server106alone. FIG.26illustrates a method2600implemented by sensor clients105a,band server106. Sensor client105areceives the first data feed2406afrom sensor108aat step2606a. The feed may include top-view images (e.g., images2408a-c,2418a-c,2426a-cofFIG.24). The images may be color images, depth images, or color-depth images. In an image from the feed2406a(e.g., corresponding to a certain timestamp), the sensor client105adetermines whether a contour is detected at step2608a. If a contour is detected at the timestamp, the sensor client105adetermines a first pixel position2412cfor the contour at step2610a. For instance, the first pixel position2412cmay correspond to pixel coordinates associated with a bounding box2412bdetermined for the contour (e.g., using any appropriate object detection algorithm). As another example, the sensor client105amay generate a pixel mask that overlays the detected contour and determine pixel coordinates of the pixel mask, as described above with respect to step2104ofFIG.21. If a contour is not detected at step2608a, a first particle filter tracker2444may be used to estimate a pixel position (e.g., estimated position2420b), based on a history of previous positions of the contour2410, at step2612a. For example, the first particle filter tracker2444may generate a probability-weighted estimate of a subsequent first pixel position corresponding to the timestamp (e.g., as described above with respect toFIGS.25A,B). Generally, if the confidence level (e.g., based on a standard deviation) of the estimated pixel position2420bis below a threshold value (e.g., seeFIG.25Band related description above), no pixel position is determined for the timestamp by the sensor client105a, and no pixel position is reported to server106for the timestamp. This prevents the waste of processing resources which would otherwise be expended by the server106in processing unreliable pixel position data.
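Putting these pieces together, the per-frame logic at a sensor client (steps2606athrough2612a) might reduce to something like the sketch below, where detect_contour, pixel_position_of, and the tracker object are hypothetical stand-ins for the detection and particle filter components described above.

    def process_frame(frame, tracker, detect_contour, pixel_position_of,
                      sigma_threshold=10.0):
        """Return a pixel position to report to the server for this frame, or
        None if neither a detection nor a confident estimate is available."""
        contour = detect_contour(frame)
        if contour is not None:
            position = pixel_position_of(contour)   # e.g., bounding-box centroid
            tracker.update(position)                 # keep the position history current
            return position
        # No contour detected: fall back to the local particle filter tracker.
        estimate, spread = tracker.estimate()
        if estimate is not None and spread < sigma_threshold:
            return estimate
        return None   # do not report an unreliable position for this timestamp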
As described below, the server106can often still track person2402, even when no pixel position is provided for a given timestamp, using the global particle filter tracker2446(see steps2626,2632, and2636below). The second sensor client105breceives the second data feed2406bfrom sensor108bat step2606b. The same or similar steps to those described above for sensor client105aare used to determine a second pixel position2416cfor a detected contour2414or estimate a pixel position based on a second particle filter tracker2444. At step2608b, the sensor client105bdetermines whether a contour2414is detected in an image from feed2406bat a given timestamp. If a contour2414is detected at the timestamp, the sensor client105bdetermines a second pixel position2416cfor the contour2414at step2610b(e.g., using any of the approaches described above with respect to step2610a). If a contour2414is not detected, a second particle filter tracker2444may be used to estimate a pixel position at step2612b(e.g., as described above with respect to step2612a). If the confidence level of the estimated pixel position is below a threshold value (e.g., based on a standard deviation value for the tracker2444), no pixel position is determined for the timestamp by the sensor client105b, and no pixel position is reported for the timestamp to the server106. While steps2606a,b-2612a,bare described as being performed by sensor clients105aand105b, it should be understood that in some embodiments, a single sensor client105may receive the first and second image feeds2406a,bfrom sensors108a,band perform the steps described above. Using separate sensor clients105a,bfor separate sensors108a,bor sets of sensors108may provide redundancy in case of client105malfunctions (e.g., such that even if one sensor client105fails, feeds from other sensors may be processed by other still-functioning clients105). At step2614, the server106receives the pixel positions2412c,2416cdetermined by the sensor clients105a,b. At step2616, the server106may determine a first physical position2412dbased on the first pixel position2412cdetermined at step2610aor estimated at step2612aby the first sensor client105a. For example, the first physical position2412dmay be determined using a first homography associating pixel coordinates in the top-view images generated by the first sensor108ato physical coordinates in the space102. At step2618, the server106may determine a second physical position2416dbased on the second pixel position2416cdetermined at step2610bor estimated at step2612bby the second sensor client105b. For instance, the second physical position2416dmay be determined using a second homography associating pixel coordinates in the top-view images generated by the second sensor108bto physical coordinates in the space102. At step2620, the server106determines whether the first and second positions2412d,2416d(from steps2616and2618) are within a threshold distance2448(e.g., of about six inches) of each other. In general, the threshold distance2448may be determined based on one or more characteristics of the tracking system100and/or the person2402or another target object being tracked. For example, the threshold distance2448may be based on one or more of the distance of the sensors108a-bfrom the object, the size of the object, the fields-of-view2404a-b, the sensitivity of the sensors108a-b, and the like.
Accordingly, the threshold distance2448may range from just over zero inches to greater than six inches depending on these and other characteristics of the tracking system100. If the positions2412d,2416dare within the threshold distance2448of each other at step2620, the server106determines that the positions2412d,2416dcorrespond to the same person2402at step2622. In other words, the server106determines that the person detected by the first sensor108ais the same person detected by the second sensor108b. This may occur, at a given timestamp, because of the overlap2602between field-of-view2404aand field-of-view2404bof sensors108aand108b, as illustrated inFIG.26. At step2624, the server106determines a global position2438(i.e., a physical position in the space102) for the object based on the first and second physical positions from steps2616and2618. For instance, the server106may calculate an average of the first and second physical positions2412d,2416d. In some embodiments, the global position2438is determined by clustering the first and second physical positions2412d,2416d(e.g., using any appropriate clustering algorithm). At step2626, a global particle filter tracker2446is used to track the global (e.g., physical) position2438of the person2402. An example of a particle filter tracker is described above with respect toFIGS.25A,B. For instance, the global particle filter tracker2446may generate probability-weighted estimates of subsequent global positions at subsequent times. If a global position2438cannot be determined at a subsequent timestamp (e.g., because pixel positions are not available from the sensor clients105a,b), the particle filter tracker2446may be used to estimate the position. If at step2620the first and second physical positions2412d,2416dare not within the threshold distance2448from each other, the server106generally determines that the positions correspond to different objects2402,2604at step2628. In other words, the server106may determine that the physical positions determined at steps2616and2618are sufficiently different, or far apart, for them to correspond to the first person2402and a different second person2604in the space102. At step2630, the server106determines a global position for the first object2402based on the first physical position2412dfrom step2616. Generally, in the case of having only one physical position2412don which to base the global position, the global position is the first physical position2412d. If other physical positions are associated with the first object (e.g., based on data from other sensors108, which for clarity are not shown inFIG.26), the global position of the first person2402may be an average of the positions or determined based on the positions using any appropriate clustering algorithm, as described above. At step2632, a global particle filter tracker2446may be used to track the first global position of the first person2402, as is also described above. At step2634, the server106determines a global position for the second person2604based on the second physical position2416dfrom step2618. Generally, in the case of having only one physical position2416don which to base the global position, the global position is the second physical position2416d.
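The server-side decision at steps2620through2634(compare the two physical positions to the threshold distance2448and either merge them into one global position or treat them as two people) might be sketched as follows, with an assumed six-inch threshold and simple averaging in place of clustering.

    import math

    def resolve_positions(pos_a, pos_b, threshold_distance=6.0):
        """pos_a and pos_b are (x, y) physical positions (e.g., in inches)
        obtained from two sensors via their homographies. Returns a list of
        global positions: one if the detections are the same person, two
        otherwise."""
        distance = math.hypot(pos_a[0] - pos_b[0], pos_a[1] - pos_b[1])
        if distance <= threshold_distance:
            # Same person seen in the overlap region: average the positions.
            return [((pos_a[0] + pos_b[0]) / 2.0, (pos_a[1] + pos_b[1]) / 2.0)]
        # Different people: keep both positions.
        return [pos_a, pos_b]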
If other physical positions are associated with the second object (e.g., based on data from other sensors108, which are not shown inFIG.26for clarity), the global position of the second person2604may be an average of the positions or determined based on the positions using any appropriate clustering algorithm. At step2636, a global particle filter tracker2446is used to track the second global position of the second object, as described above. Modifications, additions, or omissions may be made to the method2600described above with respect toFIG.26. The method may include more, fewer, or other steps. For example, steps may be performed in parallel or in any suitable order. While at times discussed as a tracking subsystem2400, sensor clients105a,b, server106, or components of any thereof performing steps, any suitable system or components of the system may perform one or more steps of the method2600.
Candidate Lists
When the tracking system100is tracking people in the space102, it may be challenging to reliably identify people under certain circumstances such as when they pass into or near an auto-exclusion zone (seeFIGS.19-21and corresponding description above), when they stand near another person (seeFIGS.22-23and corresponding description above), and/or when one or more of the sensors108, client(s)105, and/or server106malfunction. For instance, after a first person becomes close to or even comes into contact with (e.g., "collides" with) a second person, it may be difficult to determine which person is which (e.g., as described above with respect toFIG.22). Conventional tracking systems may use physics-based tracking algorithms in an attempt to determine which person is which based on estimated trajectories of the people (e.g., estimated as though the people are marbles colliding and changing trajectories according to a conservation of momentum, or the like). However, the identities of people may still be difficult to track reliably with such approaches, because people's movements may be random. As described above, the tracking system100may employ particle filter tracking for improved tracking of people in the space102(see e.g.,FIGS.24-26and the corresponding description above). However, even with these advancements, the identities of people being tracked may be difficult to determine at certain times. This disclosure particularly encompasses the recognition that positions of people who are shopping in a store (i.e., moving about a space, selecting items, and picking up the items) are difficult or impossible to track using previously available technology because movement of these people is random and does not follow a readily defined pattern or model (e.g., such as the physics-based models of previous approaches). Accordingly, there is a lack of tools for reliably and efficiently tracking people (e.g., or other target objects). This disclosure provides a solution to the problems of previous technology, including those described above, by maintaining a record, which is referred to in this disclosure as a "candidate list," of possible person identities, or identifiers (i.e., the usernames, account numbers, etc. of the people being tracked), during tracking. A candidate list is generated and updated during tracking to establish the possible identities of each tracked person. Generally, for each possible identity or identifier of a tracked person, the candidate list also includes a probability that the identity, or identifier, is believed to be correct.
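Concretely, a candidate list can be thought of as nothing more than a mapping from identifiers to probabilities, one per tracked person; the snippet below is an illustrative assumption about the data structure, not a required implementation.

    # Hypothetical candidate list for one tracked person: identifiers mapped to
    # the probability that each identifier is the correct one.
    candidate_list = {
        "identifier_A": 0.80,   # most likely identity
        "identifier_B": 0.20,   # possible identity after an interaction
    }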
The candidate list is updated following interactions (e.g., collisions) between people and in response to other uncertainty events (e.g., a loss of sensor data, imaging errors, intentional trickery, etc.). In some cases, the candidate list may be used to determine when a person should be re-identified (e.g., using methods described in greater detail below with respect toFIGS.29-32). Generally, re-identification is appropriate when the candidate list of a tracked person indicates that the person's identity is not sufficiently well known (e.g., based on the probabilities stored in the candidate list being less than a threshold value). In some embodiments, the candidate list is used to determine when a person is likely to have exited the space102(i.e., with at least a threshold confidence level), and an exit notification is only sent to the person after there is a high confidence level that the person has exited (see, e.g., view2730ofFIG.27, described below). In general, processing resources may be conserved by only performing potentially complex person re-identification tasks when a candidate list indicates that a person's identity is no longer known according to pre-established criteria. FIG.27is a flow diagram illustrating how identifiers2701a-cassociated with tracked people (e.g., or any other target object) may be updated during tracking over a period of time from an initial time t0to a final time t5by tracking system100. People may be tracked using tracking system100based on data from sensors108, as described above.FIG.27depicts a plurality of views2702,2716,2720,2724,2728,2730at different time points during tracking. In some embodiments, views2702,2716,2720,2724,2728,2730correspond to a local frame view (e.g., as described above with respect toFIG.22) from a sensor108with coordinates in units of pixels (e.g., or any other appropriate unit for the data type generated by the sensor108). In other embodiments, views2702,2716,2720,2724,2728,2730correspond to global views of the space102determined based on data from multiple sensors108with coordinates corresponding to physical positions in the space (e.g., as determined using the homographies described in greater detail above with respect toFIGS.2-7). For clarity and conciseness, the example ofFIG.27is described below in terms of global views of the space102(i.e., a view corresponding to the physical coordinates of the space102). The tracked object regions2704,2708,2712correspond to regions of the space102associated with the positions of corresponding people (e.g., or any other target object) moving through the space102. For example, each tracked object region2704,2708,2712may correspond to a different person moving about in the space102. Examples of determining the regions2704,2708,2712are described above, for example, with respect toFIGS.21,22, and24. As one example, the tracked object regions2704,2708,2712may be bounding boxes identified for corresponding objects in the space102. As another example, tracked object regions2704,2708,2712may correspond to pixel masks determined for contours associated with the corresponding objects in the space102(see, e.g., step2104ofFIG.21for a more detailed description of the determination of a pixel mask). Generally, people may be tracked in the space102and regions2704,2708,2712may be determined using any appropriate tracking and identification method. View2702at initial time t0includes a first tracked object region2704, a second tracked object region2708, and a third tracked object region2712.
The view2702may correspond to a representation of the space102from a top view with only the tracked object regions2704,2708,2712shown (i.e., with other objects in the space102omitted). At time t0, the identities of all of the people are generally known (e.g., because the people have recently entered the space102and/or because the people have not yet been near each other). The first tracked object region2704is associated with a first candidate list2706, which includes a probability (PA=100%) that the region2704(or the corresponding person being tracked) is associated with a first identifier2701a. The second tracked object region2708is associated with a second candidate list2710, which includes a probability (PB=100%) that the region2708(or the corresponding person being tracked) is associated with a second identifier2701b. The third tracked object region2712is associated with a third candidate list2714, which includes a probability (PC=100%) that the region2712(or the corresponding person being tracked) is associated with a third identifier2701c. Accordingly, at time t0, the candidate lists2706,2710,2714indicate that the identity of each of the tracked object regions2704,2708,2712is known with all probabilities having a value of one hundred percent. View2716shows positions of the tracked objects2704,2708,2712at a first time t1, which is after the initial time t0. At time t1, the tracking system detects an event which may cause the identities of the tracked object regions2704,2708to be less certain. In this example, the tracking system100detects that the distance2718abetween the first object region2704and the second object region2708is less than or equal to a threshold distance2718b. Because the tracked object regions were near each other (i.e., within the threshold distance2718b), there is a non-zero probability that the regions may be misidentified during subsequent times. The threshold distance2718bmay be any appropriate distance, as described above with respect toFIG.22. For example, the tracking system100may determine that the first object region2704is within the threshold distance2718bof the second object region2708by determining first coordinates of the first object region2704, determining second coordinates of the second object region2708, calculating a distance2718a, and comparing distance2718ato the threshold distance2718b. In some embodiments, the first and second coordinates correspond to pixel coordinates in an image capturing the first and second people, and the distance2718acorresponds to a number of pixels between these pixel coordinates. For example, as illustrated in view2716ofFIG.27, the distance2718amay correspond to the pixel distance between centroids of the tracked object regions2704,2708. In other embodiments, the first and second coordinates correspond to physical, or global, coordinates in the space102, and the distance2718acorresponds to a physical distance (e.g., in units of length, such as inches). For example, physical coordinates may be determined using the homographies described in greater detail above with respect toFIGS.2-7. After detecting that the identities of regions2704,2708are less certain (i.e., that the first object region2704is within the threshold distance2718bof the second object region2708), the tracking system100determines a probability2717that the first tracked object region2704switched identifiers2701a-cwith the second tracked object region2708.
For example, when two contours become close in an image, there is a chance that the identities of the contours may be incorrect during subsequent tracking (e.g., because the tracking system100may assign the wrong identifier2701a-cto the contours between frames). The probability2717that the identifiers2701a-cswitched may be determined, for example, by accessing a predefined probability value (e.g., of 50%). In other cases, the probability2717may be based on the distance2718abetween the object regions2704,2708. For example, as the distance2718adecreases, the probability2717that the identifiers2701a-cswitched may increase. In the example ofFIG.27, the determined probability2717is 20%, because the object regions2704,2708are relatively far apart but there is some overlap between the regions2704,2708. In some embodiments, the tracking system100may determine a relative orientation between the first object region2704and the second object region2708, and the probability2717that the object regions2704,2708switched identifiers2701a-cmay be based on this relative orientation. The relative orientation may correspond to an angle between a direction a person associated with the first region2704is facing and a direction a person associated with the second region2708is facing. For example, if the angle between the directions faced by people associated with first and second regions2704,2708is near 180° (i.e., such that the people are facing in opposite directions), the probability2717that identifiers2701a-cswitched may be decreased because this case may correspond to one person accidentally backing into the other person. Based on the determined probability2717that the tracked object regions2704,2708switched identifiers2701a-c(e.g., 20% in this example), the tracking system100updates the first candidate list2706for the first object region2704. The updated first candidate list2706includes a probability (PA=80%) that the first region2704is associated with the first identifier2701aand a probability (PB=20%) that the first region2704is associated with the second identifier2701b. The second candidate list2710for the second object region2708is similarly updated based on the probability2717that the first object region2704switched identifiers2701a-cwith the second object region2708. The updated second candidate list2710includes a probability (PA=20%) that the second region2708is associated with the first identifier2701aand a probability (PB=80%) that the second region2708is associated with the second identifier2701b. View2720shows the object regions2704,2708,2712at a second time point t2, which follows time t1. At time t2, a first person corresponding to the first tracked region2704stands close to a third person corresponding to the third tracked region2712. In this example case, the tracking system100detects that the distance2722between the first object region2704and the third object region2712is less than or equal to the threshold distance2718b(i.e., the same threshold distance2718bdescribed above with respect to view2716). After detecting that the first object region2704is within the threshold distance2718bof the third object region2712, the tracking system100determines a probability2721that the first tracked object region2704switched identifiers2701a-cwith the third tracked object region2712. As described above, the probability2721that the identifiers2701a-cswitched may be determined, for example, by accessing a predefined probability value (e.g., of 50%).
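The bookkeeping just described (each person keeps their own probabilities with weight one minus the switch probability and takes on the other person's probabilities with the switch probability) can be written compactly. The sketch below is an assumed formulation that reproduces the 20% example of view2716; the function and variable names are illustrative.

    def update_candidate_lists(list_a, list_b, p_switch):
        """Each candidate list maps identifier -> probability. After an
        interaction with switch probability p_switch, each person keeps their
        own probabilities with weight (1 - p_switch) and takes on the other
        person's with weight p_switch."""
        identifiers = set(list_a) | set(list_b)
        new_a = {i: (1 - p_switch) * list_a.get(i, 0.0) + p_switch * list_b.get(i, 0.0)
                 for i in identifiers}
        new_b = {i: (1 - p_switch) * list_b.get(i, 0.0) + p_switch * list_a.get(i, 0.0)
                 for i in identifiers}
        return new_a, new_b

    # Example corresponding to view 2716 (p_switch = 0.20):
    list_2706, list_2710 = update_candidate_lists({"A": 1.0}, {"B": 1.0}, 0.20)
    # list_2706 -> {"A": 0.8, "B": 0.2}; list_2710 -> {"A": 0.2, "B": 0.8}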
In some cases, the probability2721may be based on the distance2722between the object regions2704,2712. For example, since the distance2722is greater than distance2718a(from view2716, described above), the probability2721that the identifiers2701a-cswitched may be greater at time t1than at time t2. In the example of view2720ofFIG.27, the determined probability2721is 10% (which is smaller than the switching probability2717of 20% determined at time t1). Based on the determined probability2721that the tracked object regions2704,2712switched identifiers2701a-c(e.g., of 10% in this example), the tracking system100updates the first candidate list2706for the first object region2704. The updated first candidate list2706includes a probability (PA=73%) that the first object region2704is associated with the first identifier2701a, a probability (PB=17%) that the first object region2704is associated with the second identifier2701b, and a probability (PC=10%) that the first object region2704is associated with the third identifier2701c. The third candidate list2714for the third object region2712is similarly updated based on the probability2721that the first object region2704switched identifiers2701a-cwith the third object region2712. The updated third candidate list2714includes a probability (PA=7%) that the third object region2712is associated with the first identifier2701a, a probability (PB=3%) that the third object region2712is associated with the second identifier2701b, and a probability (PC=90%) that the third object region2712is associated with the third identifier2701c. Accordingly, even though the third object region2712never interacted with (e.g., came within the threshold distance2718bof) the second object region2708, there is still a non-zero probability (PB=3%) that the third object region2712is associated with the second identifier2701b, which was originally assigned (at time to) to the second object region2708. In other words, the uncertainty in object identity that was detected at time t1is propagated to the third object region2712via the interaction with region2704at time t2. This unique “propagation effect” facilitates improved object identification and can be used to narrow the search space (e.g., the number of possible identifiers2701a-cthat may be associated with a tracked object region2704,2708,2712) when object re-identification is needed (as described in greater detail below and with respect toFIGS.29-32). View2724shows third object region2712and an unidentified object region2726at a third time point t3, which follows time t2. At time t3, the first and second people associated with regions2704,2708come into contact (e.g., or “collide”) or are otherwise so close to one another that the tracking system100cannot distinguish between the people. For example, contours detected for determining the first object region2704and the second object region2708may have merged resulting in the single unidentified object region2726. Accordingly, the position of object region2726may correspond to the position of one or both of object regions2704and2708. At time t3, the tracking system100may determine that the first and second object regions2704,2708are no longer detected because a first contour associated with the first object region2704is merged with a second contour associated with the second object region2708. 
The tracking system100may wait until a subsequent time t4(shown in view2728) when the first and second object regions2704,2708are again detected before the candidate lists2706,2710are updated. Time t4generally corresponds to a time when the first and second people associated with regions2704,2708have separated from each other such that each person can be tracked in the space102. Following a merging event such as is illustrated in view2724, the probability2725that regions2704and2708have switched identifiers2701a-cmay be 50%. At time t4, updated candidate list2706includes an updated probability (PA=60%) that the first object region2704is associated with the first identifier2701a, an updated probability (PB=35%) that the first object region2704is associated with the second identifier2701b, and an updated probability (PC=5%) that the first object region2704is associated with the third identifier2701c. Updated candidate list2710includes an updated probability (PA=33%) that the second object region2708is associated with the first identifier2701a, an updated probability (PB=62%) that the second object region2708is associated with the second identifier2701b, and an updated probability (PC=5%) that the second object region2708is associated with the third identifier2701c. Candidate list2714is unchanged. Still referring to view2728, the tracking system100may determine that the highest probability value of a candidate list is less than a threshold value (e.g., Pthreshold=70%). In response to determining that the highest probability of the first candidate list2706is less than the threshold value, the corresponding object region2704may be re-identified (e.g., using any method of re-identification described in this disclosure, for example, with respect toFIGS.29-32). For instance, the first object region2704may be re-identified because the highest probability (PA=60%) is less than the threshold probability (Pthreshold=70%). The tracking system100may extract features, or descriptors, associated with observable characteristics of the first person (or corresponding contour) associated with the first object region2704. The observable characteristics may be a height of the object (e.g., determined from depth data received from a sensor), a color associated with an area inside the contour (e.g., based on color image data from a sensor108), a width of the object, an aspect ratio (e.g., width/length) of the object, a volume of the object (e.g., based on depth data from sensor108), or the like. Examples of other descriptors are described in greater detail below with respect toFIG.30. As described in greater detail below, a texture feature (e.g., determined using a local binary pattern histogram (LBPH) algorithm) may be calculated for the person. Alternatively or additionally, an artificial neural network may be used to associate the person with the correct identifier2701a-c(e.g., as described in greater detail below with respect toFIGS.29-32).
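The way a candidate list triggers and narrows re-identification (re-identify only when the best probability drops below Pthreshold, and compare descriptors only against the plausible identifiers) might be sketched as follows; descriptor extraction and comparison are stood in for by a caller-supplied descriptor_distance function, and the thresholds are the example values used above.

    def reidentify(candidate_list, observed_descriptor, known_descriptors,
                   descriptor_distance, reid_threshold=0.70,
                   min_candidate_probability=0.10):
        """candidate_list maps identifier -> probability; known_descriptors maps
        identifier -> descriptor captured when that person entered the space.
        Re-identification only runs when the best probability falls below
        reid_threshold, and only the plausible identifiers are compared."""
        if max(candidate_list.values()) >= reid_threshold:
            return max(candidate_list, key=candidate_list.get)   # identity still trusted
        subset = [i for i, p in candidate_list.items() if p > min_candidate_probability]
        if not subset:
            subset = list(candidate_list)
        # Compare the observed descriptor only against the plausible identifiers.
        return min(subset, key=lambda i: descriptor_distance(observed_descriptor,
                                                             known_descriptors[i]))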
Using the candidate lists2706,2710,2714may facilitate more efficient re-identification than was previously possible because, rather than checking all possible identifiers2701a-c(e.g., and other identifiers of people in space102not illustrated inFIG.27) for a region2704,2708,2712that has an uncertain identity, the tracking system100may identify a subset of all the other identifiers2701a-cthat are most likely to be associated with the unknown region2704,2708,2712and only compare descriptors of the unknown region2704,2708,2712to descriptors associated with the subset of identifiers2701a-c. In other words, if the identity of a tracked person is not certain, the tracking system100may only check to see if the person is one of the few people indicated in the person's candidate list, rather than comparing the unknown person to all of the people in the space102. For example, only identifiers2701a-cassociated with a non-zero probability, or a probability greater than a threshold value, in the candidate list2706are likely to be associated with the correct identifier2701a-cof the first region2704. In some embodiments, the subset may include identifiers2701a-cfrom the first candidate list2706with probabilities that are greater than a threshold probability value (e.g., of 10%). Thus, the tracking system100may compare descriptors of the person associated with region2704to predetermined descriptors associated with the subset. As described in greater detail below with respect toFIGS.29-32, the predetermined features (or descriptors) may be determined when a person enters the space102and associated with the known identifier2701a-cof the person during the entrance time period (i.e., before any events may cause the identity of the person to be uncertain). In the example ofFIG.27, the object region2708may also be re-identified at or after time t4because the highest probability PB=62% is less than the example threshold probability of 70%. View2730corresponds to a time t5at which only the person associated with object region2712remains within the space102. View2730illustrates how the candidate lists2706,2710,2714can be used to ensure that people only receive an exit notification2734when the system100is certain the person has exited the space102. In these embodiments, the tracking system100may be configured to transmit an exit notification2734to devices associated with these people when the probability that a person has exited the space102is greater than an exit threshold (e.g., Pexit=95% or greater). An exit notification2734is generally sent to the device of a person and includes an acknowledgement that the tracking system100has determined that the person has exited the space102. For example, if the space102is a store, the exit notification2734provides a confirmation to the person that the tracking system100knows the person has exited the store and is, thus, no longer shopping. This may provide assurance to the person that the tracking system100is operating properly and is no longer assigning items to the person or incorrectly charging the person for items that he/she did not intend to purchase. As people exit the space102, the tracking system100may maintain a record2732of exit probabilities to determine when an exit notification2734should be sent. In the example ofFIG.27, at time t5(shown in view2730), the record2732includes an exit probability (PA,exit=93%) that a first person associated with the first object region2704has exited the space102.
Since PA,exitis less than the example threshold exit probability of 95%, an exit notification2734would not be sent to the first person (e.g., to his/her device). Thus, even though the first object region2704is no longer detected in the space102, an exit notification2734is not sent, because there is still a chance that the first person is still in the space102(i.e., because of identity uncertainties that are captured and recorded via the candidate lists2706,2710,2714). This prevents a person from receiving an exit notification2734before he/she has exited the space102. The record2732includes an exit probability (PB,exit=97%) that the second person associated with the second object region2708has exited the space102. Since PB,exitis greater than the threshold exit probability of 95%, an exit notification2734is sent to the second person (e.g., to his/her device). The record2732also includes an exit probability (PC,exit=10%) that the third person associated with the third object region2712has exited the space102. Since PC,exitis less than the threshold exit probability of 95%, an exit notification2734is not sent to the third person (e.g., to his/her device). FIG.28is a flowchart of a method2800for creating and/or maintaining candidate lists2706,2710,2714by tracking system100. Method2800generally facilitates improved identification of tracked people (e.g., or other target objects) by maintaining candidate lists2706,2710,2714which, for a given tracked person, or corresponding tracked object region (e.g., region2704,2708,2712), include possible identifiers2701a-cfor the object and a corresponding probability that each identifier2701a-cis correct for the person. By maintaining candidate lists2706,2710,2714for tracked people, the people may be more effectively and efficiently identified during tracking. For example, costly person re-identification (e.g., in terms of system resources expended) may only be used when a candidate list indicates that a person's identity is sufficiently uncertain. Method2800may begin at step2802where image frames are received from one or more sensors108. At step2804, the tracking system100uses the received frames to track objects in the space102. In some embodiments, tracking is performed using one or more of the unique tools described in this disclosure (e.g., with respect toFIGS.24-26). However, in general, any appropriate method of sensor-based object tracking may be employed. At step2806, the tracking system100determines whether a first person is within a threshold distance2718bof a second person. This case may correspond to the conditions shown in view2716ofFIG.27, described above, where first object region2704is distance2718aaway from second object region2708. As described above, the distance2718amay correspond to a pixel distance measured in a frame or a physical distance in the space102(e.g., determined using a homography associating pixel coordinates to physical coordinates in the space102). If the first and second people are not within the threshold distance2718bof each other, the system100continues tracking objects in the space102(i.e., by returning to step2804). However, if the first and second people are within the threshold distance2718bof each other, method2800proceeds to step2808, where the probability2717that the first and second people switched identifiers2701a-cis determined. As described above, the probability2717that the identifiers2701a-cswitched may be determined, for example, by accessing a predefined probability value (e.g., of 50%). 
In some embodiments, the probability2717is based on the distance2718abetween the people (or corresponding object regions2704,2708), as described above. In some embodiments, as described above, the tracking system100determines a relative orientation between the first person and the second person, and the probability2717that the people (or corresponding object regions2704,2708) switched identifiers2701a-cis determined, at least in part, based on this relative orientation. At step2810, the candidate lists2706,2710for the first and second people (or corresponding object regions2704,2708) are updated based on the probability2717determined at step2808. For instance, as described above, the updated first candidate list2706may include a probability that the first object is associated with the first identifier2701aand a probability that the first object is associated with the second identifier2701b. The second candidate list2710for the second person is similarly updated based on the probability2717that the first object switched identifiers2701a-cwith the second object (determined at step2808). The updated second candidate list2710may include a probability that the second person is associated with the first identifier2701aand a probability that the second person is associated with the second identifier2701b. At step2812, the tracking system100determines whether the first person (or corresponding region2704) is within a threshold distance2718bof a third person (or corresponding region2712). This case may correspond, for example, to the conditions shown in view2720ofFIG.27, described above, where first object region2704is distance2722away from third object region2712. As described above, the threshold distance2718bmay correspond to a pixel distance measured in a frame or a physical distance in the space102(e.g., determined using an appropriate homography associating pixel coordinates to physical coordinates in the space102). If the first and third people (or corresponding regions2704and2712) are within the threshold distance2718bof each other, method2800proceeds to step2814, where the probability2721that the first and third people (or corresponding regions2704and2712) switched identifiers2701a-cis determined. As described above, this probability2721that the identifiers2701a-cswitched may be determined, for example, by accessing a predefined probability value (e.g., of 50%). The probability2721may also or alternatively be based on the distance2722between the people (or corresponding regions2704,2712) and/or a relative orientation of the first and third people, as described above. At step2816, the candidate lists2706,2714for the first and third people (or corresponding regions2704,2712) are updated based on the probability2721determined at step2814. For instance, as described above, the updated first candidate list2706may include a probability that the first person is associated with the first identifier2701a, a probability that the first person is associated with the second identifier2701b, and a probability that the first person is associated with the third identifier2701c. The third candidate list2714for the third person is similarly updated based on the probability2721that the first person switched identifiers with the third person (i.e., determined at step2814).
The updated third candidate list2714may include, for example, a probability that the third object is associated with the first identifier2701a, a probability that the third object is associated with the second identifier2701b, and a probability that the third object is associated with the third identifier2701c. Accordingly, if the steps of method2800proceed in the example order illustrated inFIG.28, the candidate list2714of the third person includes a non-zero probability that the third object is associated with the second identifier2701b, which was originally associated with the second person. If, at step2812, the first and third people (or corresponding regions2704and2712) are not within the threshold distance2718bof each other, the system100generally continues tracking people in the space102. For example, the system100may proceed to step2818to determine whether the first person is within a threshold distance of an nthperson (i.e., some other person in the space102). At step2820, the system100determines the probability that the first and nthpeople switched identifiers2701a-c, as described above, for example, with respect to steps2808and2814. At step2822, the candidate lists for the first and nthpeople are updated based on the probability determined at step2820, as described above, for example, with respect to steps2810and2816before method2800ends. If, at step2818, the first person is not within the threshold distance of the nthperson, the method2800proceeds to step2824. At step2824, the tracking system100determines if a person has exited the space102. For instance, as described above, the tracking system100may determine that a contour associated with a tracked person is no longer detected for at least a threshold time period (e.g., of about 30 seconds or more). The system100may additionally determine that a person exited the space102when a person is no longer detected and a last determined position of the person was at or near an exit position (e.g., near a door leading to a known exit from the space102). If a person has not exited the space102, the tracking system100continues to track people (e.g., by returning to step2802). If a person has exited the space102, the tracking system100calculates or updates record2732of probabilities that the tracked objects have exited the space102at step2826. As described above, each exit probability of record2732generally corresponds to a probability that a person associated with each identifier2701a-chas exited the space102. At step2828, the tracking system100determines if a combined exit probability in the record2732is greater than a threshold value (e.g., of 95% or greater). If a combined exit probability is not greater than the threshold, the tracking system100continues to track objects (e.g., by continuing to step2818). If an exit probability from record2732is greater than the threshold, a corresponding exit notification2734may be sent to the person linked to the identifier2701a-cassociated with the probability at step2830, as described above with respect to view2730ofFIG.27. This may prevent or reduce instances where an exit notification2734is sent prematurely while an object is still in the space102. For example, it may be beneficial to delay sending an exit notification2734until there is a high certainty that the associated person is no longer in the space102. 
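The following sketch illustrates, under simplifying assumptions, the two bookkeeping operations that method2800relies on: blending two candidate lists when their owners may have switched identifiers2701a-c(steps2808-2816) and checking an exit probability against a threshold before an exit notification2734is sent (steps2826-2830). The convex-combination update and the threshold values are assumptions for illustration, not necessarily the exact rules used by the tracking system100.

def update_after_possible_switch(list_a, list_b, p_switch):
    """Blend two candidate lists given the probability that their identifiers switched."""
    identifiers = set(list_a) | set(list_b)
    new_a, new_b = {}, {}
    for ident in sorted(identifiers):
        pa, pb = list_a.get(ident, 0.0), list_b.get(ident, 0.0)
        new_a[ident] = (1.0 - p_switch) * pa + p_switch * pb
        new_b[ident] = (1.0 - p_switch) * pb + p_switch * pa
    return new_a, new_b

def should_send_exit_notification(exit_probability, exit_threshold=0.95):
    """Only confirm an exit once the probability the person has left is high enough."""
    return exit_probability >= exit_threshold

if __name__ == "__main__":
    a, b, c = {"A": 1.0}, {"B": 1.0}, {"C": 1.0}
    a, b = update_after_possible_switch(a, b, 0.5)   # first close pass between two people
    a, c = update_after_possible_switch(a, c, 0.2)   # later close pass with a third person
    print(a)  # {'A': 0.4, 'B': 0.4, 'C': 0.2}
    print(should_send_exit_notification(0.93))       # False: do not notify yet
    print(should_send_exit_notification(0.97))       # True: send the exit notification

The printed candidate list mirrors the kind of probabilities illustrated inFIG.27after two close-proximity events.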
In some cases, several tracked people must exit the space102before an exit probability in record2732for a given identifier2701a-cis sufficiently large for an exit notification2734to be sent to the person (e.g., to a device associated with the person). Modifications, additions, or omissions may be made to method2800depicted inFIG.28. Method2800may include more, fewer, or other steps. For example, steps may be performed in parallel or in any suitable order. While at times discussed as tracking system100or components thereof performing steps, any suitable system or components of the system may perform one or more steps of the method2800.
Person Re-Identification
As described above, in some cases, the identity of a tracked person can become unknown (e.g., when the people become closely spaced or "collide", or when the candidate list of a person indicates the person's identity is not known, as described above with respect toFIGS.27-28), and the person may need to be re-identified. This disclosure contemplates a unique approach to efficiently and reliably re-identifying people by the tracking system100. For example, rather than relying entirely on resource-expensive machine learning-based approaches to re-identify people, a more efficient and specially structured approach may be used where "lower-cost" descriptors related to observable characteristics (e.g., height, color, width, volume, etc.) of people are used first for person re-identification. "Higher-cost" descriptors (e.g., determined using artificial neural network models) are only used when the lower-cost methods cannot provide reliable results. For instance, in some embodiments, a person may first be re-identified based on his/her height, hair color, and/or shoe color. However, if these descriptors are not sufficient for reliably re-identifying the person (e.g., because other people being tracked have similar characteristics), progressively higher-level approaches may be used (e.g., involving artificial neural networks that are trained to recognize people) which may be more effective at person identification but which generally involve the use of more processing resources. As an example, each person's height may be used initially for re-identification. However, if another person in the space102has a similar height, a height descriptor may not be sufficient for re-identifying the people (e.g., because it is not possible to distinguish between people with similar heights based on height alone), and a higher-level approach may be used (e.g., using a texture operator or an artificial neural network to characterize the person). In some embodiments, if the other person with a similar height has never interacted with the person being re-identified (e.g., as recorded in each person's candidate list—seeFIG.27and corresponding description above), height may still be an appropriate feature for re-identifying the person (e.g., because the other person with a similar height is not associated with a candidate identity of the person being re-identified). FIG.29illustrates a tracking subsystem2900configured to track people (e.g., and/or other target objects) based on sensor data2904received from one or more sensors108. In general, the tracking subsystem2900may include one or both of the server106and the client(s)105ofFIG.1, described above. Tracking subsystem2900may be implemented using the device3800described below with respect toFIG.38.
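As a rough sketch of the lower-cost-first strategy described above, the helper below tries a cheap height comparison first and only falls back to a more expensive descriptor when the heights of the relevant candidates are too close to be discriminative. The function names, the 0.05 m separation threshold, and the stand-in "expensive" comparison are assumptions for illustration only.

def height_is_discriminative(candidate_heights, min_separation=0.05):
    """Cheap check: heights can only be trusted if all candidates differ enough."""
    heights = sorted(candidate_heights.values())
    return all(b - a > min_separation for a, b in zip(heights, heights[1:]))

def re_identify(measured_height, candidate_heights, expensive_match):
    """Try the low-cost height descriptor first; escalate only when necessary."""
    if height_is_discriminative(candidate_heights):
        return min(candidate_heights,
                   key=lambda ident: abs(candidate_heights[ident] - measured_height))
    return expensive_match()   # e.g., a texture-operator or neural-network comparison

if __name__ == "__main__":
    candidates = {"A": 1.82, "B": 1.64}
    print(re_identify(1.80, candidates, expensive_match=lambda: "A"))   # 'A' via height alone
    candidates = {"A": 1.75, "B": 1.73}
    print(re_identify(1.74, candidates, expensive_match=lambda: "B"))   # falls back to the expensive check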
Tracking subsystem2900may track object positions2902over a period of time using sensor data2904(e.g., top-view images) generated by at least one of sensors108. Object positions2902may correspond to local pixel positions (e.g., pixel positions2226,2234ofFIG.22) determined at a single sensor108and/or global positions corresponding to physical positions (e.g., positions2228ofFIG.22) in the space102(e.g., using the homographies described above with respect toFIGS.2-7). In some cases, object positions2902may correspond to regions detected in an image, or in the space102, that are associated with the location of a corresponding person (e.g., regions2704,2708,2712ofFIG.27, described above). People may be tracked and corresponding positions2902may be determined, for example, based on pixel coordinates of contours detected in top-view images generated by sensor(s)108. Examples of contour-based detection and tracking are described above, for example, with respect toFIGS.24and27. However, in general, any appropriate method of sensor-based tracking may be used to determine positions2902. For each object position2902, the subsystem2900maintains a corresponding candidate list2906(e.g., as described above with respect toFIG.27). The candidate lists2906are generally used to maintain a record of the most likely identities of each person being tracked (i.e., associated with positions2902). Each candidate list2906includes probabilities which are associated with identifiers2908of people that have entered the space102. The identifiers2908may be any appropriate representation (e.g., an alphanumeric string, or the like) for identifying a person (e.g., a username, name, account number, or the like associated with the person being tracked). In some embodiments, the identifiers2908may be anonymized (e.g., using hashing or any other appropriate anonymization technique). Each of the identifiers2908is associated with one or more predetermined descriptors2910. The predetermined descriptors2910generally correspond to information about the tracked people that can be used to re-identify the people when necessary (e.g., based on the candidate lists2906). The predetermined descriptors2910may include values associated with observable and/or calculated characteristics of the people associated with the identifiers2908. For instance, the descriptors2910may include heights, hair colors, clothing colors, and the like. As described in greater detail below, the predetermined descriptors2910are generally determined by the tracking subsystem2900during an initial time period (e.g., when a person associated with a given tracked position2902enters the space) and are used to re-identify people associated with tracked positions2902when necessary (e.g., based on candidate lists2906). When re-identification is needed (or periodically during tracking) for a given person at position2902, the tracking subsystem2900may determine measured descriptors2912for the person associated with the position2902.FIG.30illustrates the determination of descriptors2910,2912based on a top-view depth image3002received from a sensor108. A representation3004aof a person corresponding to the tracked object position2902is observable in the image3002. The tracking subsystem2900may detect a contour3004bassociated with the representation3004a. The contour3004bmay correspond to a boundary of the representation3004a(e.g., determined at a given depth in image3002).
Tracking subsystem2900generally determines descriptors2910,2912based on the representation3004aand/or the contour3004b. In some cases, the representation3004amust appear within a predefined region-of-interest3006of the image3002in order for descriptors2910,2912to be determined by the tracking subsystem2900. This may facilitate more reliable descriptor2910,2912determination, for example, because descriptors2910,2912may be more reproducible and/or reliable when the person being imaged is located in the portion of the sensor's field-of-view that corresponds to this region-of-interest3006. For example, descriptors2910,2912may have more consistent values when the person is imaged within the region-of-interest3006. Descriptors2910,2912determined in this manner may include, for example, observable descriptors3008and calculated descriptors3010. For example, the observable descriptors3008may correspond to characteristics of the representation3004aand/or contour3004bwhich can be extracted from the image3002and which correspond to observable features of the person. Examples of observable descriptors3008include a height descriptor3012(e.g., a measure of the height in pixels or units of length) of the person based on representation3004aand/or contour3004b, a shape descriptor3014(e.g., width, length, aspect ratio, etc.) of the representation3004aand/or contour3004b, a volume descriptor3016of the representation3004aand/or contour3004b, a color descriptor3018of representation3004a(e.g., a color of the person's hair, clothing, shoes, etc.), an attribute descriptor3020associated with the appearance of the representation3004aand/or contour3004b(e.g., an attribute such as "wearing a hat," "carrying a child," "pushing a stroller or cart"), and the like. In contrast to the observable descriptors3008, the calculated descriptors3010generally include values (e.g., scalar or vector values) which are calculated using the representation3004aand/or contour3004band which do not necessarily correspond to an observable characteristic of the person. For example, the calculated descriptors3010may include image-based descriptors3022and model-based descriptors3024. Image-based descriptors3022may, for example, include any descriptor values (i.e., scalar and/or vector values) calculated from image3002. For example, a texture operator such as a local binary pattern histogram (LBPH) algorithm may be used to calculate a vector associated with the representation3004a. This vector may be stored as a predetermined descriptor2910and measured at subsequent times as a descriptor2912for re-identification. Since the output of a texture operator, such as the LBPH algorithm, may be large (i.e., in terms of the amount of memory required to store the output), it may be beneficial to select a subset of the output that is most useful for distinguishing people. Accordingly, in some cases, the tracking subsystem2900may select a portion of the initial data vector to include in the descriptor2910,2912. For example, principal component analysis may be used to select and retain a portion of the initial data vector that is most useful for effective person re-identification. In contrast to the image-based descriptors3022, model-based descriptors3024are generally determined using a predefined model, such as an artificial neural network.
For example, a model-based descriptor3024may be the value (e.g., a scalar value or vector) output by an artificial neural network trained to recognize people based on their corresponding representation3004aand/or contour3004bin top-view image3002. For example, a Siamese neural network may be trained to associate representations3004aand/or contours3004bin top-view images3002with corresponding identifiers2908and subsequently employed for re-identification2929. Returning toFIG.29, the descriptor comparator2914of the tracking subsystem2900may be used to compare the measured descriptor2912to corresponding predetermined descriptors2910in order to determine the correct identity of a person being tracked. For example, the measured descriptor2912may be compared to a corresponding predetermined descriptor2910in order to determine the correct identifier2908for the person at position2902. For instance, if the measured descriptor2912is a height descriptor3012, it may be compared to predetermined height descriptors2910for identifiers2908, or a subset of the identifiers2908determined using the candidate list2906. Comparing the descriptors2910,2912may involve calculating a difference between scalar descriptor values (e.g., a difference in heights3012, volumes3016, etc.), determining whether a value of a measured descriptor2912is within a threshold range of the corresponding predetermined descriptor2910(e.g., determining if a color value3018of the measured descriptor2912is within a threshold range of the color value3018of the predetermined descriptor2910), or determining a cosine similarity value between vectors of the measured descriptor2912and the corresponding predetermined descriptor2910(e.g., determining a cosine similarity value between a measured vector calculated using a texture operator or neural network and a predetermined vector calculated in the same manner). In some embodiments, only a subset of the predetermined descriptors2910are compared to the measured descriptor2912. The subset may be selected using the candidate list2906for the person at position2902that is being re-identified. For example, the person's candidate list2906may indicate that only a subset (e.g., two, three, or so) of a larger number of identifiers2908are likely to be associated with the tracked object position2902that requires re-identification. When the correct identifier2908is determined by the descriptor comparator2914, the comparator2914may update the candidate list2906for the person being re-identified at position2902(e.g., by sending update2916). In some cases, a descriptor2912may be measured for an object that does not require re-identification (e.g., a person for which the candidate list2906indicates there is 100% probability that the person corresponds to a single identifier2908). In these cases, measured descriptors2912may be used to update and/or maintain the predetermined descriptors2910for the person's known identifier2908(e.g., by sending update2918). For instance, a predetermined descriptor2910may need to be updated if a person associated with the position2902has a change of appearance while moving through the space102(e.g., by adding or removing an article of clothing, by assuming a different posture, etc.).
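The three kinds of comparisons performed by the descriptor comparator2914can be sketched as follows; the helper names and the tolerance values are illustrative assumptions rather than the exact implementation.

import numpy as np

def scalar_difference(measured: float, predetermined: float) -> float:
    """Absolute difference between two scalar descriptor values (e.g., heights)."""
    return abs(measured - predetermined)

def within_range(measured: float, predetermined: float, tolerance: float) -> bool:
    """True when a measured value (e.g., a color value) is within a threshold range."""
    return abs(measured - predetermined) <= tolerance

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between a measured vector and a predetermined vector."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

if __name__ == "__main__":
    print(round(scalar_difference(1.82, 1.79), 3))      # 0.03 height difference
    print(within_range(128.0, 120.0, tolerance=15.0))   # True: color value close enough
    v1 = np.array([0.10, 0.40, 0.50])                   # e.g., a texture-operator vector
    v2 = np.array([0.12, 0.38, 0.50])
    print(round(cosine_similarity(v1, v2), 4))          # close to 1.0, i.e., a likely match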
Trajectory3108corresponds to the history of positions of person3102in the space102during the period of time. Similarly, the second person3104has a corresponding trajectory3110represented by the dashed-dotted line inFIG.31A. Trajectory3110corresponds to the history of positions of person3104in the space102during the period of time. The third person3106has a corresponding trajectory3112represented by the dotted line inFIG.31A. Trajectory3112corresponds to the history of positions of person3106in the space102during the period of time. When each of the people3102,3104,3106first enters the space102(e.g., when they are within region3114), predetermined descriptors2910are generally determined for the people3102,3104,3106and associated with the identifiers2908of the people3102,3104,3106. The predetermined descriptors2910are generally accessed when the identity of one or more of the people3102,3104,3106is not sufficiently certain (e.g., based on the corresponding candidate list2906and/or in response to a "collision event," as described below) in order to re-identify the person3102,3104,3106. For example, re-identification may be needed following a "collision event" between two or more of the people3102,3104,3106. A collision event typically corresponds to an image frame in which contours associated with different people merge to form a single contour (e.g., the detection of merged contour2220shown inFIG.22may correspond to detecting a collision event). In some embodiments, a collision event corresponds to a person being located within a threshold distance of another person (see, e.g., distance2718aand2722inFIG.27and the corresponding description above). More generally, a collision event may correspond to any event that results in a person's candidate list2906indicating that re-identification is needed (e.g., based on probabilities stored in the candidate list2906—seeFIGS.27-28and the corresponding description above). In the example ofFIG.31A, when the people3102,3104,3106are within region3114, the tracking subsystem2900may determine a first height descriptor3012associated with a first height of the first person3102, a first contour descriptor3014associated with a shape of the first person3102, a first anchor descriptor3024corresponding to a first vector generated by an artificial neural network for the first person3102, and/or any other descriptors2910described with respect toFIG.30above. Each of these descriptors is stored for use as a predetermined descriptor2910for re-identifying the first person3102. These predetermined descriptors2910are associated with the first identifier (i.e., of identifiers2908) of the first person3102. When the identity of the first person3102is certain (e.g., prior to the first collision event at position3116), each of the descriptors2910described above may be determined again to update the predetermined descriptors2910. For example, if person3102moves to a position in the space102that allows the person3102to be within a desired region-of-interest (e.g., region-of-interest3006ofFIG.30), new descriptors2912may be determined. The tracking subsystem2900may use these new descriptors2912to update the previously determined descriptors2910(e.g., see update2918ofFIG.29). By intermittently updating the predetermined descriptors2910, changes in the appearance of people being tracked can be accounted for (e.g., if a person puts on or removes an article of clothing, assumes a different posture, etc.).
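A small sketch of the intermittent update just described is shown below, assuming each stored descriptor value is refreshed with an exponential moving average only while the person's identity is certain; the certainty test and the smoothing factor are assumptions for illustration.

def identity_is_certain(candidate_list, threshold=0.99):
    """True when a single identifier dominates the person's candidate list."""
    return max(candidate_list.values(), default=0.0) >= threshold

def refresh_descriptor(old_value, new_value, alpha=0.2):
    """Blend a newly measured descriptor value into the stored predetermined value."""
    return (1.0 - alpha) * old_value + alpha * new_value

if __name__ == "__main__":
    stored_height = 1.80
    candidate_list = {"A": 1.0}          # identity currently certain
    if identity_is_certain(candidate_list):
        stored_height = refresh_descriptor(stored_height, 1.78)
    print(round(stored_height, 3))       # 1.796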
At a first timestamp associated with a time t1, the tracking subsystem2900detects a collision event between the first person3102and third person3106at position3116illustrated inFIG.31A. For example, the collision event may correspond to a first tracked position of the first person3102being within a threshold distance of a second tracked position of the third person3106at the first timestamp. In some embodiments, the collision event corresponds to a first contour associated with the first person3102merging with a third contour associated with the third person3106at the first timestamp. More generally, the collision event may be associated with any occurrence which causes a highest value probability of a candidate list associated with the first person3102and/or the third person3106to fall below a threshold value (e.g., as described above with respect to view2728ofFIG.27). In other words, any event causing the identity of person3102to become uncertain may be considered a collision event. After the collision event is detected, the tracking subsystem2900receives a top-view image (e.g., top-view image3002ofFIG.30) from sensor108. The tracking subsystem2900determines, based on the top-view image, a first descriptor for the first person3102. As described above, the first descriptor includes at least one value associated with an observable, or calculated, characteristic of the first person3102(e.g., of representation3004aand/or contour3004bofFIG.30). In some embodiments, the first descriptor may be a "lower-cost" descriptor that requires relatively few processing resources to determine, as described above. For example, the tracking subsystem2900may be able to determine a lower-cost descriptor more efficiently than it can determine a higher-cost descriptor (e.g., a model-based descriptor3024described above with respect toFIG.30). For instance, a first number of processing cores used to determine the first descriptor may be less than a second number of processing cores used to determine a model-based descriptor3024(e.g., using an artificial neural network). Thus, it may be beneficial to re-identify a person using a lower-cost descriptor whenever possible. However, in some cases, the first descriptor may not be sufficient for re-identifying the first person3102. For example, if the first person3102and the third person3106correspond to people with similar heights, a height descriptor3012generally cannot be used to distinguish between the people3102,3106. Accordingly, before the first descriptor2912is used to re-identify the first person3102, the tracking subsystem2900may determine whether certain criteria are satisfied for distinguishing the first person3102from the third person3106based on the first descriptor2912. In some embodiments, the criteria are not satisfied when a difference, determined during a time interval associated with the collision event (e.g., at a time at or near time t1), between the descriptor2912of the first person3102and a corresponding descriptor2912of the third person3106is less than a minimum value. FIG.31Billustrates the evaluation of these criteria based on the history of descriptor values for people3102and3106over time. Plot3150, shown inFIG.31B, shows a first descriptor value3152for the first person3102over time and a second descriptor value3154for the third person3106over time.
In general, descriptor values may fluctuate over time because of changes in the environment, the orientation of people relative to sensors108, sensor variability, changes in appearance, etc. The descriptor values3152,3154may be associated with a shape descriptor3014, a volume3016, a contour-based descriptor3022, or the like, as described above with respect toFIG.30. At time t1, the descriptor values3152,3154have a relatively large difference3156that is greater than the threshold difference3160, illustrated inFIG.31B. Accordingly, in this example, at or near time t1(e.g., within a brief time interval of a few seconds or minutes following t1), the criteria are satisfied and the descriptor2912associated with descriptor values3152,3154can generally be used to re-identify the first and third people3102,3106. When the criteria are satisfied for distinguishing the first person3102from the third person3106based on the first descriptor2912(as is the case at t1), the descriptor comparator2914may compare the first descriptor2912for the first person3102to each of the corresponding predetermined descriptors2910(i.e., for all identifiers2908). However, in some embodiments, comparator2914may compare the first descriptor2912for the first person3102to predetermined descriptors2910for only a select subset of the identifiers2908. The subset may be selected using the candidate list2906for the person that is being re-identified (see, e.g., step3208of method3200described below with respect toFIG.32). For example, the person's candidate list2906may indicate that only a subset (e.g., two, three, or so) of a larger number of identifiers2908are likely to be associated with the tracked object position2902that requires re-identification. Based on this comparison, the tracking subsystem2900may identify the predetermined descriptor2910that is most similar to the first descriptor2912. For example, the tracking subsystem2900may determine that a first identifier2908corresponds to the first person3102by, for each member of the set (or the determined subset) of the predetermined descriptors2910, calculating an absolute value of a difference in a value of the first descriptor2912and a value of the predetermined descriptor2910. The first identifier2908may be selected as the identifier2908associated with the smallest absolute value. Referring again toFIG.31A, at time t2, a second collision event occurs at position3118between people3102,3106. Turning back toFIG.31B, the descriptor values3152,3154have a relatively small difference3158at time t2(e.g., compared to difference3156at time t1), which is less than the threshold value3160. Thus, at time t2, the descriptor2912associated with descriptor values3152,3154generally cannot be used to re-identify the first and third people3102,3106, and the criteria for using the first descriptor2912are not satisfied. Instead, a different, and likely a "higher-cost" descriptor2912(e.g., a model-based descriptor3024) should be used to re-identify the first and third people3102,3106at time t2. For example, when the criteria are not satisfied for distinguishing the first person3102from the third person3106based on the first descriptor2912(as is the case in this example at time t2), the tracking subsystem2900determines a new descriptor2912for the first person3102. The new descriptor2912is typically a value or vector generated by an artificial neural network configured to identify people in top-view images (e.g., a model-based descriptor3024ofFIG.30).
The tracking subsystem2900may determine, based on the new descriptor2912, that a first identifier2908from the predetermined identifiers2908(or a subset determined based on the candidate list2906, as described above) corresponds to the first person3102. For example, the tracking subsystem2900may determine that the first identifier2908corresponds to the first person3102by, for each member of the set (or subset) of predetermined descriptors2910, calculating an absolute value of a difference in a value of the new descriptor2912and a value of the predetermined descriptor2910. The first identifier2908may be selected as the identifier2908associated with the smallest absolute value. In cases where the second descriptor2912cannot be used to reliably re-identify the first person3102using the approach described above, the tracking subsystem2900may determine a measured descriptor2912for all of the "candidate identifiers" of the first person3102. The candidate identifiers generally refer to the identifiers2908of people (e.g., or other tracked objects) that are known to be associated with identifiers2908appearing in the candidate list2906of the first person3102(e.g., as described above with respect toFIGS.27and28). For instance, the candidate identifiers may be identifiers2908of tracked people (i.e., at tracked object positions2902) that appear in the candidate list2906of the first person3102.FIG.31Cillustrates how predetermined descriptors3162,3164,3166for a first, second, and third identifier2908may be compared to each of the measured descriptors3168,3170,3172for people3102,3104,3106. The comparison may involve calculating a cosine similarity value between vectors associated with the descriptors. Based on the results of the comparison, each person3102,3104,3106is assigned the identifier2908corresponding to the best-matching predetermined descriptor3162,3164,3166. A best-matching descriptor may correspond to a highest cosine similarity value (i.e., nearest to one). FIG.32illustrates a method3200for re-identifying tracked people using tracking subsystem2900illustrated inFIG.29and described above. The method3200may begin at step3202where the tracking subsystem2900receives top-view image frames from one or more sensors108. At step3204, the tracking subsystem2900tracks a first person3102and one or more other people (e.g., people3104,3106) in the space102using at least a portion of the top-view images generated by the sensors108. For instance, tracking may be performed as described above with respect toFIGS.24-26, or using any appropriate object tracking algorithm. The tracking subsystem2900may periodically determine updated predetermined descriptors associated with the identifiers2908(e.g., as described with respect to update2918ofFIG.29). In some embodiments, the tracking subsystem2900, in response to determining the updated descriptors, determines that one or more of the updated predetermined descriptors is different by at least a threshold amount from a corresponding previously predetermined descriptor2910. In this case, the tracking subsystem2900may save both the updated descriptor and the corresponding previously predetermined descriptor2910. This may allow for improved re-identification when characteristics of the people being tracked may change intermittently during tracking. At step3206, the tracking subsystem2900determines whether re-identification of the first tracked person3102is needed.
This may be based on a determination that contours have merged in an image frame (e.g., as illustrated by merged contour2220ofFIG.22) or on a determination that a first person3102and a second person3104are within a threshold distance (e.g., distance2718bofFIG.27) of each other, as described above. In some embodiments, a candidate list2906may be used to determine that re-identification of the first person3102is required. For instance, if a highest probability from the candidate list2906associated with the tracked person3102is less than a threshold value (e.g., 70%), re-identification may be needed (see alsoFIGS.27-28and the corresponding description above). If re-identification is not needed, the tracking subsystem2900generally continues to track people in the space (e.g., by returning to step3204). If the tracking subsystem2900determines at step3206that re-identification of the first tracked person3102is needed, the tracking subsystem2900may determine candidate identifiers for the first tracked person3102at step3208. The candidate identifiers generally include a subset of all of the identifiers2908associated with tracked people in the space102, and the candidate identifiers may be determined based on the candidate list2906for the first tracked person3102. In other words, the candidate identifiers are a subset of the identifiers2908which are most likely to include the correct identifier2908for the first tracked person3102based on a history of movements of the first tracked person3102and interactions of the first tracked person3102with the one or more other tracked people3104,3106in the space102(e.g., based on the candidate list2906that is updated in response to these movements and interactions). At step3210, the tracking subsystem2900determines a first descriptor2912for the first tracked person3102. For example, the tracking subsystem2900may receive, from a first sensor108, a first top-view image of the first person3102(e.g., image3002ofFIG.30). For instance, as illustrated in the example ofFIG.30, in some embodiments, the image3002used to determine the descriptor2912includes the representation3004aof the person within a region-of-interest3006of the full frame of the image3002. This may provide for more reliable descriptor2912determination. In some embodiments, the image data2904include depth data (i.e., image data at different depths). In such embodiments, the tracking subsystem2900may determine the descriptor2912based on a depth region-of-interest, where the depth region-of-interest corresponds to depths in the image associated with the head of person3102. In these embodiments, descriptors2912may be determined that are associated with characteristics or features of the head of the person3102. At step3212, the tracking subsystem2900may determine whether the first descriptor2912can be used to distinguish the first person3102from the candidate identifiers (e.g., one or both of people3104,3106) by, for example, determining whether certain criteria are satisfied for distinguishing the first person3102from the candidates based on the first descriptor2912. In some embodiments, the criteria are not satisfied when a difference, determined during a time interval associated with the collision event, between the first descriptor2912and corresponding descriptors2910of the candidates is less than a minimum value, as described in greater detail above with respect toFIGS.31A,B.
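A minimal sketch of the head-level depth region-of-interest mentioned above is given below, assuming a top-view depth image in which smaller values are closer to the sensor108; the depth band and the example values are illustrative only.

import numpy as np

def head_region_mask(depth_image: np.ndarray, head_depth: float, tolerance: float = 0.15) -> np.ndarray:
    """Boolean mask of pixels whose depth falls within a band around the head depth."""
    return (depth_image >= head_depth - tolerance) & (depth_image <= head_depth + tolerance)

if __name__ == "__main__":
    depth = np.array([[2.4, 1.10, 1.00],
                      [2.4, 1.05, 2.40],
                      [2.4, 2.40, 2.40]])   # metres from the sensor (made-up values)
    mask = head_region_mask(depth, head_depth=1.05)
    print(int(mask.sum()))                  # 3 pixels fall in the head-level band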
If the first descriptor can be used to distinguish the first person3102from the candidates (e.g., as was the case at time t1in the example ofFIG.31A,B), the method3200proceeds to step3214at which point the tracking subsystem2900determines an updated identifier for the first person3102based on the first descriptor2912. For example, the tracking subsystem2900may compare (e.g., using comparator2914) the first descriptor2912to the set of predetermined descriptors2910that are associated with the candidate objects determined for the first person3102at step3208. In some embodiments, the first descriptor2912is a data vector associated with characteristics of the first person in the image (e.g., a vector determined using a texture operator such as the LBPH algorithm), and each of the predetermined descriptors2910includes a corresponding predetermined data vector (e.g., determined for each tracked person3102,3104,3106upon entering the space102). In such embodiments, the tracking subsystem2900compares the first descriptor2912to each of the predetermined descriptors2910associated with the candidate objects by calculating a cosine similarity value between the first data vector and each of the predetermined data vectors. The tracking subsystem2900determines the updated identifier as the identifier2908of the candidate object with the cosine similarity value nearest one (i.e., the vector that is most "similar" to the vector of the first descriptor2912). At step3216, the identifiers2908of the other tracked people3104,3106may be updated as appropriate by updating other people's candidate lists2906. For example, if the first tracked person3102was found to be associated with an identifier2908that was previously associated with the second tracked person3104, steps3208to3214may be repeated for the second person3104to determine the correct identifier2908for the second person3104. In some embodiments, when the identifier2908for the first person3102is updated, the identifiers2908for people (e.g., one or both of people3104and3106) that are associated with the first person's candidate list2906are also updated at step3216. As an example, the candidate list2906of the first person3102may have a non-zero probability that the first person3102is associated with a second identifier2908originally linked to the second person3104and a non-zero probability that the first person3102is associated with a third identifier2908originally linked to the third person3106. In this case, after the identifier2908of the first person3102is updated, the identifiers2908of the second and third people3104,3106may also be updated according to steps3208-3214. If, at step3212, the first descriptor2912cannot be used to distinguish the first person3102from the candidates (e.g., as was the case at time t2in the example of FIG.31A,B), the method3200proceeds to step3218to determine a second descriptor2912for the first person3102. As described above, the second descriptor2912may be a "higher-level" descriptor such as a model-based descriptor3024ofFIG.30. For example, the second descriptor2912may be less efficient (e.g., in terms of processing resources required) to determine than the first descriptor2912. However, the second descriptor2912may be more effective and reliable, in some cases, for distinguishing between tracked people.
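Under the assumptions that each descriptor is a plain vector and that the candidate subset has already been determined from the candidate list2906, the cosine-similarity match of step3214might be sketched as follows; the names are illustrative.

import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two descriptor vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_matching_identifier(measured: np.ndarray, predetermined: dict) -> str:
    """Return the candidate identifier whose stored vector is most similar (cosine nearest one)."""
    return max(predetermined, key=lambda ident: cosine(measured, predetermined[ident]))

if __name__ == "__main__":
    candidates = {"A": np.array([1.0, 0.0, 0.2]),
                  "B": np.array([0.1, 1.0, 0.0])}
    measured_vector = np.array([0.9, 0.1, 0.1])     # e.g., an LBPH-style vector for the person
    print(best_matching_identifier(measured_vector, candidates))   # 'A'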
At step3220, the tracking subsystem2900determines whether the second descriptor2912can be used to distinguish the first person3102from the candidates (from step3218) using the same or a similar approach to that described above with respect to step3212. For example, the tracking subsystem2900may determine if the cosine similarity values between the second descriptor2912and the predetermined descriptors2910are greater than a threshold cosine similarity value (e.g., of 0.5). If the cosine similarity value is greater than the threshold, the second descriptor2912generally can be used. If the second descriptor2912can be used to distinguish the first person3102from the candidates, the tracking subsystem2900proceeds to step3222, and the tracking subsystem2900determines the identifier2908for the first person3102based on the second descriptor2912and updates the candidate list2906for the first person3102accordingly. The identifier2908for the first person3102may be determined as described above with respect to step3214(e.g., by calculating a cosine similarity value between a vector corresponding to the second descriptor2912and previously determined vectors associated with the predetermined descriptors2910). The tracking subsystem2900then proceeds to step3216described above to update identifiers2908(i.e., via candidate lists2906) of other tracked people3104,3106as appropriate. Otherwise, if the second descriptor2912cannot be used to distinguish the first person3102from the candidates, the tracking subsystem2900proceeds to step3224, and the tracking subsystem2900determines a descriptor2912for the first person3102and all of the candidates. In other words, a measured descriptor2912is determined for all people associated with the identifiers2908appearing in the candidate list2906of the first person3102(e.g., as described above with respect toFIG.31C). At step3226, the tracking subsystem2900compares the second descriptor2912to predetermined descriptors2910associated with all people related to the candidate list2906of the first person3102. For instance, the tracking subsystem2900may determine a second cosine similarity value between a second data vector determined using an artificial neural network and each corresponding vector from the predetermined descriptor values2910for the candidates (e.g., as illustrated inFIG.31C, described above). The tracking subsystem2900then proceeds to step3228to determine and update the identifiers2908of all candidates based on the comparison at step3226before continuing to track people3102,3104,3106in the space102(e.g., by returning to step3204). Modifications, additions, or omissions may be made to method3200depicted inFIG.32. Method3200may include more, fewer, or other steps. For example, steps may be performed in parallel or in any suitable order. While at times discussed as tracking subsystem2900(e.g., by server106and/or client(s)105) or components thereof performing steps, any suitable system or components of the system may perform one or more steps of the method3200.
Action Detection for Assigning Items to the Correct Person
As described above with respect toFIGS.12-15, when a weight event is detected at a rack112, the item associated with the activated weight sensor110may be assigned to the person nearest the rack112. However, in some cases, two or more people may be near the rack112and it may not be clear who picked up the item. Accordingly, further action may be required to properly assign the item to the correct person.
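As a simple sketch of the default nearest-person assignment just described (and of the item-tracking fallback discussed later), an item or activated weight sensor110position can be compared against the tracked positions of the nearby people; the coordinates, keys, and distance metric here are illustrative assumptions.

import math

def nearest_person(reference_xy, people_xy: dict) -> str:
    """Key of the tracked person whose position is closest to the reference point."""
    return min(people_xy, key=lambda k: math.dist(reference_xy, people_xy[k]))

if __name__ == "__main__":
    sensor_position = (2.3, 1.2)                      # e.g., location of the activated weight sensor
    people = {"person3302": (2.0, 1.0), "person3304": (4.0, 3.5)}
    print(nearest_person(sensor_position, people))    # person3302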
In some embodiments, a cascade of algorithms (e.g., from simpler approaches based on relatively straightforwardly determined image features to more complex strategies involving artificial neural networks) may be employed to assign an item to the correct person. The cascade may be triggered, for example, by (i) the proximity of two or more people to the rack112, (ii) a hand crossing into the zone (or a "virtual curtain") adjacent to the rack (e.g., see zone3324ofFIG.33Band corresponding description below), and/or (iii) a weight signal indicating an item was removed from the rack112. When it is initially uncertain who picked up an item, a unique contour-based approach may be used to assign an item to the correct person. For instance, if two people may be reaching into a rack112to pick up an item, a contour may be "dilated" from a head height to a lower height in order to determine which person's arm reached into the rack112to pick up the item. However, if the results of this efficient contour-based approach do not satisfy certain confidence criteria, a more computationally expensive approach (e.g., involving neural network-based pose estimation) may be used. In some embodiments, the tracking system100, upon detecting that more than one person may have picked up an item, may store a set of buffer frames that are most likely to contain useful information for effectively assigning the item to the correct person. For instance, the stored buffer frames may correspond to brief time intervals when a portion of a person enters the zone adjacent to a rack112(e.g., zone3324ofFIG.33B, described below) and/or when the person exits this zone. However, in some cases, it may still be difficult or impossible to assign an item to a person even using more advanced artificial neural network-based pose estimation techniques. In these cases, the tracking system100may store further buffer frames in order to track the item through the space102after it exits the rack112. When the item comes to a stopped position (e.g., with a sufficiently low velocity), the tracking system100determines which person is closest to the stopped item, and the item is generally assigned to the nearest person. This process may be repeated until the item is confidently assigned to the correct person. FIG.33Aillustrates an example scenario in which a first person3302and a second person3304are near a rack112storing items3306a-c. Each item3306a-cis stored on corresponding weight sensors110a-c. A sensor108, which is communicatively coupled to the tracking subsystem3300(i.e., to the server106and/or client(s)105), generates a top-view depth image3308for a field-of-view3310which includes the rack112and people3302,3304. The top-view depth image3308includes a representation112aof the rack112and representations3302a,3304aof the first and second people3302,3304, respectively. The rack112(e.g., or its representation112a) may be divided into three zones3312a-cwhich correspond to the locations of weight sensors110a-cand the associated items3306a-c, respectively. In this example scenario, one of the people3302,3304picks up an item3306cfrom weight sensor110c, and tracking subsystem3300receives a trigger signal3314indicating an item3306chas been removed from the rack112. The tracking subsystem3300includes the client(s)105and server106described above with respect toFIG.1. The trigger signal3314may indicate the change in weight caused by the item3306cbeing removed from sensor110c.
After receiving the signal3314, the server106accesses the top-view image3308, which may correspond to a time at, just prior to, and/or just following the time the trigger signal3314was received. In some embodiments, the trigger signal3314may also or alternatively be associated with the tracking system100detecting a person3302,3304entering a zone adjacent to the rack (e.g., as described with respect to the "virtual curtain" ofFIGS.12-15above and/or zone3324described in greater detail below), prompting the tracking system100to determine to which person3302,3304the item3306cshould be assigned. Since representations3302aand3304aindicate that both people3302,3304are near the rack112, further analysis is required to assign item3306cto the correct person3302,3304. Initially, the tracking system100may determine if an arm of either person3302or3304may be reaching toward zone3312cto pick up item3306c. However, as shown in regions3316and3318in image3308, a portion of both representations3302a,3304aappears to possibly be reaching toward the item3306cin zone3312c. Thus, further analysis is required to determine whether the first person3302or the second person3304picked up item3306c. Following the initial inability to confidently assign item3306cto the correct person3302,3304, the tracking system100may use a contour-dilation approach to determine whether person3302or3304picked up item3306c.FIG.33Billustrates implementation of a contour-dilation approach to assigning item3306cto the correct person3302or3304. In general, contour dilation involves iterative dilation of a first contour associated with the first person3302and a second contour associated with the second person3304from a first smaller depth to a second larger depth. The dilated contour that crosses into the zone3324adjacent to the rack112first may correspond to the person3302,3304that picked up the item3306c. Dilated contours may need to satisfy certain criteria to ensure that the results of the contour-dilation approach can reliably be used for item assignment. For example, the criteria may include a requirement that a portion of a contour entering the zone3324adjacent to the rack112is associated with either the first person3302or the second person3304within a maximum number of iterative dilations, as is described in greater detail with respect to the contour-detection views3320,3326,3328, and3332shown inFIG.33B. If these criteria are not satisfied, another method should be used to determine which person3302or3304picked up item3306c. FIG.33Bshows a view3320, which includes a contour3302bdetected at a first depth in the top-view image3308. The first depth may correspond to an approximate head height of a typical person3322expected to be tracked in the space102, as illustrated inFIG.33B. Contour3302bdoes not enter or contact the zone3324which corresponds to the location of a space adjacent to the front of the rack112(e.g., as described with respect to the "virtual curtain" ofFIGS.12-15above). Therefore, the tracking system100proceeds to a second depth in image3308and detects contours3302cand3304bshown in view3326. The second depth is greater than the first depth of view3320. Since neither of the contours3302cor3304benters zone3324, the tracking system100proceeds to a third depth in the image3308and detects contours3302dand3304c, as shown in view3328. The third depth is greater than the second depth, as illustrated with respect to person3322inFIG.33B. In view3328, contour3302dappears to enter or touch the edge of zone3324.
Accordingly, the tracking system100may determine that the first person3302, who is associated with contour3302d, should be assigned the item3306c. In some embodiments, after initially assigning the item3306cto person3302, the tracking system100may project an "arm segment"3330to determine whether the arm segment3330enters the appropriate zone3312cthat is associated with item3306c. The arm segment3330generally corresponds to the expected position of the person's extended arm in the space occluded from view by the rack112. If the location of the projected arm segment3330does not correspond with an expected location of item3306c(e.g., a location within zone3312c), the item is not assigned to (or is unassigned from) the first person3302. Another view3332at a further increased fourth depth shows a contour3302eand contour3304d. Each of these contours3302eand3304dappears to enter or touch the edge of zone3324. However, since the dilated contours associated with the first person3302(reflected in contours3302b-e) entered or touched zone3324within fewer iterations (or at a smaller depth) than did the dilated contours associated with the second person3304(reflected in contours3304b-d), the item3306cis generally assigned to the first person3302. In general, in order for the item3306cto be assigned to one of the people3302,3304using contour dilation, a contour may need to enter zone3324within a maximum number of dilations (e.g., or before a maximum depth is reached). For example, if the item3306cwas not assigned by the fourth depth, the tracking system100may have ended the contour-dilation method and moved on to another approach to assigning the item3306c, as described below. In some embodiments, the contour-dilation approach illustrated inFIG.33Bfails to assign item3306cto the correct person3302,3304. For example, the criteria described above may not be satisfied (e.g., a maximum depth or number of iterations may be exceeded) or dilated contours associated with the different people3302or3304may merge, rendering the results of contour-dilation unusable. In such cases, the tracking system100may employ another strategy to determine which person3302,3304picked up item3306c. For example, the tracking system100may use a pose estimation algorithm to determine a pose of each person3302,3304. FIG.33Cillustrates an example output of a pose-estimation algorithm which includes a first "skeleton"3302ffor the first person3302and a second "skeleton"3304efor the second person3304. In this example, the first skeleton3302fmay be assigned a "reaching pose" because an arm of the skeleton appears to be reaching outward. This reaching pose may indicate that the person3302is reaching to pick up item3306c. In contrast, the second skeleton3304edoes not appear to be reaching to pick up item3306c. Since only the first skeleton3302fappears to be reaching for the item3306c, the tracking system100may assign the item3306cto the first person3302. If the results of pose estimation were uncertain (e.g., if both or neither of the skeletons3302f,3304eappeared to be reaching for item3306c), a different method of item assignment may be implemented by the tracking system100(e.g., by tracking the item3306cthrough the space102, as described below with respect toFIGS.36-37). FIG.34illustrates a method3400for assigning an item3306cto a person3302or3304using tracking system100.
The method3400may begin at step3402where the tracking system100receives an image feed comprising frames of top-view images generated by the sensor108and weight measurements from weight sensors110a-c. At step3404, the tracking system100detects an event associated with picking up an item3306c. In general, the event may be based on a portion of a person3302,3304entering the zone adjacent to the rack112(e.g., zone3324ofFIG.33B) and/or a change of weight associated with the item3306cbeing removed from the corresponding weight sensor110c. At step3406, in response to detecting the event at step3404, the tracking system100determines whether more than one person3302,3304may be associated with the detected event (e.g., as in the example scenario illustrated inFIG.33A, described above). For example, this determination may be based on distances between the people and the rack112, an inter-person distance between the people, and/or a relative orientation between the people and the rack112(e.g., a person3302,3304not facing the rack112may not be a candidate for picking up the item3306c). If only one person3302,3304may be associated with the event, that person3302,3304is associated with the item3306cat step3408. For example, the item3306cmay be assigned to the nearest person3302,3304, as described with respect toFIGS.12-14above. At step3410, the item3306cis assigned to the person3302,3304determined to be associated with the event detected at step3404. For example, the item3306cmay be added to a digital cart associated with the person3302,3304. Generally, if the action (i.e., picking up the item3306c) was determined to have been performed by the first person3302, the action (and the associated item3306c) is assigned to the first person3302, and, if the action was determined to have been performed by the second person3304, the action (and associated item3306c) is assigned to the second person3304. Otherwise, if, at step3406, more than one person3302,3304may be associated with the detected event, a select set of buffer frames of top-view images generated by sensor108may be stored at step3412. In some embodiments, the stored buffer frames may include only three or fewer frames of top-view images following a triggering event. The triggering event may be associated with the person3302,3304entering the zone adjacent to the rack112(e.g., zone3324ofFIG.33B), the portion of the person3302,3304exiting the zone adjacent to the rack112(e.g., zone3324ofFIG.33B), and/or a change in weight determined by a weight sensor110a-c. In some embodiments, the buffer frames may include image frames from the time a change in weight was reported by a weight sensor110until the person3302,3304exits the zone adjacent to the rack112(e.g., zone3324ofFIG.33B). The buffer frames generally include a subset of all possible frames available from the sensor108. As such, by storing, and subsequently analyzing, only these stored buffer frames (or a portion of the stored buffer frames), the tracking system100may assign actions (e.g., and an associated item3306a-c) to a correct person3302,3304more efficiently (e.g., in terms of the use of memory and processing resources) than was possible using previous technology. At step3414, a region-of-interest from the images may be accessed. For example, following storing the buffer frames, the tracking system100may determine a region-of-interest of the top-view images to retain.
For example, the tracking system100may only store a region near the center of each view (e.g., region3006illustrated inFIG.30and described above). At step3416, the tracking system100determines, using at least one of the buffer frames stored at step3412and a first action-detection algorithm, whether an action associated with the detected event was performed by the first person3302or the second person3304. The first action-detection algorithm is generally configured to detect the action based on characteristics of one or more contours in the stored buffer frames. As an example, the first action-detection algorithm may be the contour-dilation algorithm described above with respect toFIG.33B. An example implementation of a contour-based action-detection method is also described in greater detail below with respect to method3500illustrated inFIG.35. In some embodiments, the tracking system100may determine a subset of the buffer frames to use with the first action-detection algorithm. For example, the subset may correspond to when the person3302,3304enters the zone adjacent to the rack112(e.g., zone3324illustrated inFIG.33B). At step3418, the tracking system100determines whether results of the first action-detection algorithm satisfy criteria indicating that the first algorithm is appropriate for determining which person3302,3304is associated with the event (i.e., picking up item3306c, in this example). For example, for the contour-dilation approach described above with respect toFIG.33Band below with respect toFIG.35, the criteria may be a requirement to identify the person3302,3304associated with the event within a threshold number of dilations (e.g., before reaching a maximum depth). Whether the criteria are satisfied at step3416may be based at least in part on the number of iterations required to implement the first action-detection algorithm. If the criteria are satisfied at step3418, the tracking system100proceeds to step3410and assigns the item3306cto the person3302,3304associated with the event determined at step3416. However, if the criteria are not satisfied at step3418, the tracking system100proceeds to step3420and uses a different action-detection algorithm to determine whether the action associated with the event detected at step3404was performed by the first person3302or the second person3304. This may be performed by applying a second action-detection algorithm to at least one of the buffer frames selected at step3412. The second action-detection algorithm may be configured to detect the action using an artificial neural network. For example, the second algorithm may be a pose estimation algorithm used to determine whether a pose of the first person3302or second person3304corresponds to the action (e.g., as described above with respect toFIG.33C). In some embodiments, the tracking system100may determine a second subset of the buffer frames to use with the second action detection algorithm. For example, the subset may correspond to the time when the weight change is reported by the weight sensor110. The pose of each person3302,3304at the time of the weight change may provide a good indication of which person3302,3304picked up the item3306c. At step3422, the tracking system100may determine whether the second algorithm satisfies criteria indicating that the second algorithm is appropriate for determining which person3302,3304is associated with the event (i.e., with picking up item3306c). 
For example, if the poses (e.g., determined from skeletons3302fand3304eofFIG.33C, described above) of each person3302,3304still suggest that either person3302,3304could have picked up the item3306c, the criteria may not be satisfied, and the tracking system100proceeds to step3424to assign the object using another approach (e.g., by tracking movement of the item3306a-cthrough the space102, as described in greater detail below with respect toFIGS.36and37). Modifications, additions, or omissions may be made to method3400depicted inFIG.34. Method3400may include more, fewer, or other steps. For example, steps may be performed in parallel or in any suitable order. While at times discussed as tracking system100or components thereof performing steps, any suitable system or components of the system may perform one or more steps of the method3400. As described above, the first action-detection algorithm of step3416may involve iterative contour dilation to determine which person3302,3304is reaching to pick up an item3306a-cfrom rack112.FIG.35illustrates an example method3500of contour dilation-based item assignment. The method3500may begin from step3416ofFIG.34, described above, and proceed to step3502. At step3502, the tracking system100determines whether a contour is detected at a first depth (e.g., the first depth ofFIG.33Bdescribed above). For example, in the example illustrated inFIG.33B, contour3302bis detected at the first depth. If a contour is not detected, the tracking system100proceeds to step3504to determine if the maximum depth (e.g., the fourth depth ofFIG.33B) has been reached. If the maximum depth has not been reached, the tracking system100iterates (i.e., moves) to the next depth in the image at step3506. Otherwise, if the maximum depth has been reached, method3500ends. If at step3502, a contour is detected, the tracking system proceeds to step3508and determines whether a portion of the detected contour overlaps, enters, or otherwise contacts the zone adjacent to the rack112(e.g., zone3324illustrated inFIG.33B). In some embodiments, the tracking system100determines if a projected arm segment (e.g., arm segment3330ofFIG.33B) of a contour extends into an appropriate zone3312a-cof the rack112. If no portion of the contour extends into the zone adjacent to the rack112, the tracking system100determines whether the maximum depth has been reached at step3504. If the maximum depth has not been reached, the tracking system100iterates to the next larger depth and returns to step3502. At step3510, the tracking system100determines the number of iterations (i.e., the number of times step3506was performed) before the contour was determined to have entered the zone adjacent to the rack112at step3508. At step3512, this number of iterations is compared to the number of iterations for a second (i.e., different) detected contour. For example, steps3502to35010may be repeated to determine the number of iterations (at step3506) for the second contour to enter the zone adjacent to the rack112. If the number of iterations is less than that of the second contour, the item is assigned to the first person3302at step3514. Otherwise, the item may be assigned to the second person3304at step3516. For example, as described above with respect toFIG.33B, the first dilated contours3302b-eentered the zone3324adjacent to the rack112within fewer iterations than did the second dilated contours3304b. In this example, the item is assigned to the person3302associated with the first contour3302b-d. 
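A simplified Python sketch of the dilation-count comparison is shown below; it assumes binary masks (uint8, 0 or 255) for each person's contour and for the zone adjacent to the rack, uses OpenCV dilation, and omits the depth sweep and merged-contour checks described elsewhere.

```python
from typing import Optional

import cv2
import numpy as np

def dilations_to_reach_zone(person_mask: np.ndarray,
                            zone_mask: np.ndarray,
                            max_dilations: int = 10) -> Optional[int]:
    """Dilate a binary person-contour mask until it overlaps the binary
    mask of the zone adjacent to the rack; return the number of dilations
    used, or None if the maximum number of dilations is exceeded."""
    kernel = np.ones((3, 3), np.uint8)
    mask = person_mask.copy()
    for i in range(max_dilations + 1):
        if np.any(np.logical_and(mask > 0, zone_mask > 0)):
            return i
        mask = cv2.dilate(mask, kernel, iterations=1)
    return None

def pick_person(mask_a: np.ndarray, mask_b: np.ndarray,
                zone_mask: np.ndarray) -> Optional[str]:
    """Assign the item to whichever contour reaches the zone in fewer
    dilations; return None when the comparison is inconclusive."""
    a = dilations_to_reach_zone(mask_a, zone_mask)
    b = dilations_to_reach_zone(mask_b, zone_mask)
    if a is None and b is None:
        return None                    # fall back to pose estimation
    if b is None or (a is not None and a < b):
        return "first person"
    if a is None or b < a:
        return "second person"
    return None                        # equal counts: still ambiguous
```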
In some embodiments, a dilated contour (i.e., the contour generated via two or more passes through step3506) must satisfy certain criteria in order for it to be used for assigning an item. For instance, a contour may need to enter the zone adjacent to the rack within a maximum number of dilations (e.g., or before a maximum depth is reached), as described above. As another example, a dilated contour may need to include less than a threshold number of pixels. If a contour is too large it may be a “merged contour” that is associated with two closely spaced people (seeFIG.22and the corresponding description above). Modifications, additions, or omissions may be made to method3500depicted inFIG.35. Method3500may include more, fewer, or other steps. For example, steps may be performed in parallel or in any suitable order. While at times discussed as tracking system100or components thereof performing steps, any suitable system or components of the system may perform one or more steps of the method3500. Item Tracking-Based Item Assignment As described above, in some cases, an item3306a-ccannot be assigned to the correct person even using a higher-level algorithm such as the artificial neural network-based pose estimation described above with respect toFIGS.33C and34. In these cases, the position of the item3306cafter it exits the rack112may be tracked in order to assign the item3306cto the correct person3302,3304. In some embodiments, the tracking system100does this by tracking the item3306cafter it exits the rack112, identifying a position where the item stops moving, and determining which person3302,3304is nearest to the stopped item3306c. The nearest person3302,3304is generally assigned the item3306c. FIGS.36A,B illustrate this item tracking-based approach to item assignment.FIG.36Ashows a top-view image3602generated by a sensor108.FIG.36Bshows a plot3620of the item's velocity3622over time. As shown inFIG.36A, image3602includes a representation of a person3604holding an item3606which has just exited a zone3608adjacent to a rack112. Since a representation of a second person3610may also have been associated with picking up the item3606, item-based tracking is required to properly assign the item3606to the correct person3604,3610(e.g., as described above with respect people3302,3304and item3306cforFIGS.33-35). Tracking system100may (i) track the position of the item3606over time after the item3606exits the rack112, as illustrated in tracking views3610and3616, and (ii) determine the velocity of the item3606, as shown in curve3622of plot3620inFIG.36B. The velocity3622shown inFIG.36Bis zero at the inflection points corresponding to a first stopped time (tstopped,1) and a second stopped time (tstopped,2). More generally, the time when the item3606is stopped may correspond to a time when the velocity3622is less than a threshold velocity3624. Tracking view3612ofFIG.36Ashows the position3604aof the first person3604, a position3606aof item3606, and a position3610aof the second person3610at the first stopped time. At the first stopped time (tstopped,1) the positions3604a,3610aare both near the position3606aof the item3606. Accordingly, the tracking system100may not be able to confidently assign item3606to the correct person3604or3610. Thus, the tracking system100continues to track the item3606. Tracking view3614shows the position3604aof the first person3604, the position3606aof the item3606, and the position3610aof the second person3610at the second stopped time (tstopped,2). 
Since only the position3604aof the first person3604is near the position3606aof the item3606, the item3606is assigned to the first person3604. More specifically, the tracking system100may determine, at each stopped time, a first distance3626between the stopped item3606and the first person3604and a second distance3628between the stopped item3606and the second person3610. Using these distances3626,3628, the tracking system100determines whether the stopped position of the item3606in the first frame is nearer the first person3604or nearer the second person3610and whether the distance3626,3628is less than a threshold distance3630. At the first stopped time of view3612, both distances3626,3628are less than the threshold distance3630. Thus, the tracking system100cannot reliably determine which person3604,3610should be assigned the item3606. In contrast, at the second stopped time of view3614, only the first distance3626is less than the threshold distance3630. Therefore, the tracking system may assign the item3606to the first person3604at the second stopped time.

FIG.37illustrates an example method3700of assigning an item3606to a person3604or3610based on item tracking using tracking system100. Method3700may begin at step3424of method3400illustrated inFIG.34and described above and proceed to step3702. At step3702, the tracking system100may determine that item tracking is needed (e.g., because the action-detection based approaches described above with respect toFIGS.33-35were unsuccessful). At step3704, the tracking system100stores and/or accesses buffer frames of top-view images generated by sensor108. The buffer frames generally include frames from a time period following a portion of the person3604or3610exiting the zone3608adjacent to the rack112.

At step3706, the tracking system100tracks, in the stored frames, a position of the item3606. The position may be a local pixel position associated with the sensor108(e.g., determined by client105) or a global physical position in the space102(e.g., determined by server106using an appropriate homography). In some embodiments, the item3606may include a visually observable tag that can be viewed by the sensor108and detected and tracked by the tracking system100using the tag. In some embodiments, the item3606may be detected by the tracking system100using a machine learning algorithm. To facilitate detection of many item types under a broad range of conditions (e.g., different orientations relative to the sensor108, different lighting conditions, etc.), the machine learning algorithm may be trained using synthetic data (e.g., artificial image data that can be used to train the algorithm).

At step3708, the tracking system100determines whether a velocity3622of the item3606is less than a threshold velocity3624. For example, the velocity3622may be calculated based on the tracked position of the item3606. For instance, the distance moved between frames may be used to calculate a velocity3622of the item3606. A particle filter tracker (e.g., as described above with respect toFIGS.24-26) may be used to calculate item velocity3622based on estimated future positions of the item. If the item velocity3622is below the threshold3624, the tracking system100identifies a frame in which the velocity3622of the item3606is less than the threshold velocity3624and proceeds to step3710. Otherwise, the tracking system100continues to track the item3606at step3706.
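The following hedged sketch illustrates the velocity-threshold and nearest-person checks of steps3706-3714 applied to tracked positions; the threshold values, array layout, and return labels are illustrative assumptions and not parameters of the disclosed system.

```python
import numpy as np

def assign_by_item_tracking(item_xy: np.ndarray,
                            person_a_xy: np.ndarray,
                            person_b_xy: np.ndarray,
                            fps: float,
                            v_thresh: float = 0.2,
                            d_thresh: float = 0.5):
    """item_xy, person_a_xy, person_b_xy: (N, 2) arrays of per-frame
    positions in physical coordinates. v_thresh and d_thresh stand in
    for the threshold velocity 3624 and threshold distance 3630; the
    values here are illustrative only. Returns "A", "B", or None."""
    # Frame-to-frame speed of the item (distance moved times frame rate).
    speed = np.linalg.norm(np.diff(item_xy, axis=0), axis=1) * fps
    for i in np.where(speed < v_thresh)[0]:
        f = i + 1                      # frame in which the item is (nearly) stopped
        d_a = np.linalg.norm(item_xy[f] - person_a_xy[f])
        d_b = np.linalg.norm(item_xy[f] - person_b_xy[f])
        near, far, who = (d_a, d_b, "A") if d_a < d_b else (d_b, d_a, "B")
        if near < d_thresh and far >= d_thresh:
            return who                 # only one person is close to the stopped item
        # Otherwise both (or neither) people are close: keep tracking.
    return None
```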
At step3710, the tracking system100determines, in the identified frame, a first distance3626between the stopped item3606and a first person3604and a second distance3628between the stopped item3606and a second person3610. Using these distances3626,3628, the tracking system100determines, at step3712, whether the stopped position of the item3606in the first frame is nearer the first person3604or nearer the second person3610and whether the distance3626,3628is less than a threshold distance3630. In general, in order for the item3606to be assigned to the first person3604, the item3606should be within the threshold distance3630from the first person3604, indicating the person is likely holding the item3606, and closer to the first person3604than to the second person3610. For example, at step3712, the tracking system100may determine that the stopped position is a first distance3626away from the first person3604and a second distance3628away from the second person3610. The tracking system100may determine an absolute value of a difference between the first distance3626and the second distance3628and may compare the absolute value to a threshold distance3630. If the absolute value is less than the threshold distance3630, the tracking system returns to step3706and continues tracking the item3606. Otherwise, if the absolute value is greater than the threshold distance3630and the item3606is sufficiently close to the first person3604, the tracking system100proceeds to step3714and assigns the item3606to the first person3604.

Modifications, additions, or omissions may be made to method3700depicted inFIG.37. Method3700may include more, fewer, or other steps. For example, steps may be performed in parallel or in any suitable order. While at times discussed as tracking system100or components thereof performing steps, any suitable system or components of the system may perform one or more steps of the method3700.

Hardware Configuration

FIG.38is an embodiment of a device3800(e.g. a server106or a client105) configured to track objects and people within a space102. The device3800comprises a processor3802, a memory3804, and a network interface3806. The device3800may be configured as shown or in any other suitable configuration.

The processor3802comprises one or more processors operably coupled to the memory3804. The processor3802is any electronic circuitry including, but not limited to, state machines, one or more central processing unit (CPU) chips, logic units, cores (e.g. a multi-core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), or digital signal processors (DSPs). The processor3802may be a programmable logic device, a microcontroller, a microprocessor, or any suitable combination of the preceding. The processor3802is communicatively coupled to and in signal communication with the memory3804. The one or more processors are configured to process data and may be implemented in hardware or software. For example, the processor3802may be 8-bit, 16-bit, 32-bit, 64-bit or of any other suitable architecture. The processor3802may include an arithmetic logic unit (ALU) for performing arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that fetches instructions from memory and executes them by directing the coordinated operations of the ALU, registers and other components. The one or more processors are configured to implement various instructions.
For example, the one or more processors are configured to execute instructions to implement a tracking engine3808. In this way, processor3802may be a special purpose computer designed to implement the functions disclosed herein. In an embodiment, the tracking engine3808is implemented using logic units, FPGAs, ASICs, DSPs, or any other suitable hardware. The tracking engine3808is configured operate as described inFIGS.1-18. For example, the tracking engine3808may be configured to perform the steps of methods200,600,800,1000,1200,1500,1600, and1700as described inFIGS.2,6,8,10,12,15,16, and17, respectively. The memory3804comprises one or more disks, tape drives, or solid-state drives, and may be used as an over-flow data storage device, to store programs when such programs are selected for execution, and to store instructions and data that are read during program execution. The memory3804may be volatile or non-volatile and may comprise read-only memory (ROM), random-access memory (RAM), ternary content-addressable memory (TCAM), dynamic random-access memory (DRAM), and static random-access memory (SRAM). The memory3804is operable to store tracking instructions3810, homographies118, marker grid information716, marker dictionaries718, pixel location information908, adjacency lists1114, tracking lists1112, digital carts1410, item maps1308, and/or any other data or instructions. The tracking instructions3810may comprise any suitable set of instructions, logic, rules, or code operable to execute the tracking engine3808. The homographies118are configured as described inFIGS.2-5B. The marker grid information716is configured as described inFIGS.6-7. The marker dictionaries718are configured as described inFIGS.6-7. The pixel location information908is configured as described inFIGS.8-9. The adjacency lists1114are configured as described inFIGS.10-11. The tracking lists1112are configured as described inFIGS.10-11. The digital carts1410are configured as described inFIGS.12-18. The item maps1308are configured as described inFIGS.12-18. The network interface3806is configured to enable wired and/or wireless communications. The network interface3806is configured to communicate data between the device3800and other, systems, or domain. For example, the network interface3806may comprise a WIFI interface, a LAN interface, a WAN interface, a modem, a switch, or a router. The processor3802is configured to send and receive data using the network interface3806. The network interface3806may be configured to use any suitable type of communication protocol as would be appreciated by one of ordinary skill in the art. While several embodiments have been provided in the present disclosure, it should be understood that the disclosed systems and methods might be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system or certain features may be omitted, or not implemented. In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. 
Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and could be made without departing from the spirit and scope disclosed herein. To aid the Patent Office, and any readers of any patent issued on this application in interpreting the claims appended hereto, applicants note that they do not intend any of the appended claims to invoke 35 U.S.C. § 112(f) as it exists on the date of filing hereof unless the words “means for” or “step for” are explicitly used in the particular claim. | 324,528 |
11861853 | DETAILED DESCRIPTION In the drawings, like reference numerals designate identical or corresponding parts throughout the several views. Further, as used herein, the words “a,” “an” and the like generally carry a meaning of “one or more,” unless stated otherwise. The drawings are generally drawn to scale unless specified otherwise or illustrating schematic structures or flowcharts. Furthermore, the terms “approximately,” “approximate,” “about,” and similar terms generally refer to ranges that include the identified value within a margin of 20%, 10%, or preferably 5%, and any values therebetween. The present disclosure provides effective yet simple modular components for autonomous or intelligent traffic systems. Advanced Driver Assistance Systems (ADAS) are being made to improve automotive safety. Vehicles may offer driver assistance technologies including Autonomous Emergency Braking and a safe distance warning. ADAS may take into consideration environmental conditions and vehicle performance characteristics. Environmental conditions can be obtained using vehicle environment sensors. Vehicle cameras can capture a continuous camera stream. The term ego vehicle refers to a vehicle that contains vehicle environment sensors that perceive the environment around the vehicle. Edge computing devices are computing devices that are proximate to the data source, such as vehicle environment sensors. FIG.1illustrates estimation of ego-vehicle speed using a continuous camera stream. The present disclosure includes a 3D Convolutional Neural Network (3D-CNN) architecture trained on short videos using corresponding grayscale image frames102and corresponding focus masks, such as masks that focus on road lane lines104, lane line segmentation masks. The neural network architecture is used to estimate the speed112of the ego vehicle, which can, in turn help in ADAS, including, among other things, to estimate the speed of vehicles of interest (VOI) in the surrounding environment. FIG.2is a top view of an exemplary ego vehicle having video cameras mounted thereon. Video cameras mounted on an ego vehicle may be used to obtain video images to be used to estimate the speed of the ego vehicle. The ego vehicle200can be of any make in the market. A non-limiting ego vehicle can be equipped with a number of exterior cameras204and interior cameras210. One camera214for speed estimation can be mounted on the front dash, the front windshield, or embedded on the front portion of the exterior body and/or on the ego vehicle roof in order to capture images in front of the ego vehicle for external vehicles. The camera is preferably mounted integrally with a rearview/side mirror on the driver's side of the ego vehicle on a forward-facing surface (i.e., facing traffic preceding the ego vehicle). In this position the camera is generally oriented within the view of an individual inside the ego vehicle such that a driver can concurrently check for oncoming traffic behind the ego vehicle using the rearview side mirror and monitor the position of preceding vehicles. FIG.3illustrates an exemplary exterior-facing camera, which may be, but is not limited to, a USB camera310with a base that can be attached to the rearview mirror, side mirror, windshield, dashboard, front body panel, or roof of the ego vehicle200, to name a few. The camera310can be a USB camera for connection to an edge computing device, that is proximate to the USB camera, by a USB cable. The USB camera310may be of any make which can channel a video stream. 
In one embodiment, the speed estimation apparatus is an all-in-one portable module that is removably mounted on a ego vehicle200. Preferably the all-in-one portable module has a camera back plate which is curved to generally match the contours of the forward-facing surface of a side view mirror, e.g., an ovoidal shape having a flat inner surface matching the contours of the forward face of the side view mirror and a curved dome-like front surface with the camera lens/opening located at an apex of the dome shape. The back plate is optionally integral with a neck portion that terminates in a thin plate having a length of 5-20 cm which can be inserted into the gap between the window and patrol vehicle door to secure the all-in-one portable module to the ego vehicle. A cable and/or wireless capability may be included to transfer captured images to the edge computing device while the ego vehicle is moving. The video camera310is capable of capturing a sequence of image frames at a predetermined frame rate. The frame rate may be fixed or may be adjusted in a manual setting, or may be set based on the mode of image capture. For example, a video camera may have an adjustable frame rate for image capture, or may automatically set a frame rate depending on the type of image capture. A burst image may be set for one predetermined frame rate, while video capture may be set for another predetermined frame rate. In embodiments, ego vehicle speed is estimated based on video images of the surrounding environment. In some embodiments, the speed estimation is determined using machine learning technology. 2D Convolutional Neural Networks have proven to be excellent at extracting feature maps for images and are predominantly used for understanding the spatial aspects of images relevant to image classification and object detection. However, 2D Convolutional Neural Networks cannot capture the spatio-temporal features of videos spread across multiple continuous frames. Neural network time series models can be configured for video classification. Neural network approaches that have been used for time series prediction include recurrent neural networks (RNN) and long short-term memory (LSTM) neural networks. In addition, 3D Convolutional Neural Networks can learn spatio-temporal features and thus help in video classification, human action recognition, and sign language recognition. Attention on top of 3D-CNN has also been used. See Rohit Girdhar, Joao Carreira, Carl Doersch, and Andrew Zisserman. Video action transformer network. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 244-253, 2019; Chao-Yuan Wu, Christoph Feichtenhofer, Haoqi Fan, Kaiming He, Philipp Krahenbuhl, and Ross Girshick. Long-term feature banks for detailed video understanding. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition,pages 284-293, 2019; and Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. InProceedings of the IEEE conference on computer vision and pattern recognition,pages 7794-7803, 2018, each incorporated herein by reference in their entirety. However, they are limited to action recognition use cases. Regression can also be performed using 3D-CNNs. See Agne Grinciunaite, Amogh Gudi, Emrah Tasli, and Marten den Uyl. Human pose estimation in space and time using 3d cnn. InEuropean Conference on Computer Vision,pages 32-39. Springer, 2016; Xiaoming Deng, Shuo Yang, Yinda Zhang, Ping Tan, Liang Chang, and Hongan Wang. 
Hand3d: Hand pose estimation using 3d neural network.arXiv preprint arXiv:1704.02224, 2017; and Liuhao Ge, Hui Liang, Junsong Yuan, and Daniel Thalmann. 3d convolutional neural networks for efficient and robust hand pose estimation from single depth images. In Proceedings of theIEEE conference on computer vision and pattern recognition,pages 1991-2000, 2017, each incorporated herein by reference in their entirety. However, these approaches perform regression for spatial localization-related tasks such as human pose or 3D hand pose.

Vision Transformers (ViTs) capitalize on processes used in transformers in the field of Natural Language Processing. A non-overlapping patch embedding takes patches of an image and creates token embeddings after performing linear projection. See Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale.arXiv preprint arXiv:2010.11929, 2020, incorporated herein by reference in its entirety. These embeddings are concatenated with position embeddings, after which they are processed with the transformer block, which contains layer normalization, Multi-Head Attention, and MLP operations to produce a final classification output. Although ViTs have been used to replace CNNs, they lack the inductive bias of CNNs, which are translation invariant due to the local neighborhood structure of the convolution kernels. Moreover, transformers have quadratic complexity for their operations and scale with the input dimensions. On the other hand, ViTs provide global attention and long-range interaction. The inventors have determined that a hybrid CNN-Transformer with a CNN backbone, referred to as 3D-CNN with masked attention (3D-CMA), can outperform the pure ViT approach. Video transformer architectures can be classified based on the embeddings (backbone and minimal embeddings), tokenization (patch tokenization, frame tokenization, clip tokenization), and positional embeddings.

In disclosed embodiments, the ego vehicle speed is estimated by relying purely on video streams from a monocular camera. The ego vehicle speed can be estimated by onboard hardware that implements a neural network time series model. In some embodiments, the ego vehicle speed is estimated using a hybrid CNN-Transformer (3D-CMA).

FIG.4is a block diagram of an onboard hardware implementation of an ego vehicle speed estimation system in accordance with an exemplary aspect of the disclosure. The hardware implementation of the speed estimation system400includes an image/video capturing device (video camera310) and an edge computing device420. The video camera310is capable of capturing a sequence of image frames at a predetermined frame rate. The frame rate may be fixed, may be adjusted in a manual setting, or may be set based on the mode of image capture. For example, a video camera may have an adjustable frame rate for image capture, or may automatically set a frame rate depending on the type of image capture. A burst image may be set for one predetermined frame rate, while video capture may be set for another predetermined frame rate. The edge computing device420is configured as embedded processing circuitry for ego vehicle speed estimation.
In one embodiment, the edge computing device420is a portable, or removably mounted, computing device which is equipped with a Graphical Processing Unit (GPU) or a type of machine learning engine, as well as a general purpose central processing unit (CPU)422, and its internal modules. The edge computing device420provides computing power that is sufficient for machine learning inferencing in real time for tasks including vehicle speed estimation and object detection, preferably all with a single monocular camera. Internal modules can include communication modules, such as Global System for Mobile Communication (GSM)426and Global Positioning System (GPS)424, as well as an input interface414for connection to the vehicle network (Controller Area Network, CAN). A supervisory unit412may control input and output communication with the vehicle internal network. In one embodiment, the GPU/CPU configured edge computing device420is an NVIDIA Jetson Series (including Orin, Xavier, Tx2, Nano) system on module or an equivalent high-performance processing module from any other manufacturer like Intel, etc. The video camera310may be connected to the edge computing device420by a plug-in wired connection, such as USB, or may communicate with the edge computing device420by a wireless connection, such as Bluetooth Low Energy, depending on distance to the edge device and/or communication quality in a vehicle. This set up is powered by the vehicle's battery as a power source. A power management component416may control or regulate power to the GPU/CPU422on an as-needed basis.

A time-series model must be utilized to capture the relative motion between adjacent image data samples. As a basis, a 2D convolution operation over an image I using a kernel K of size m×n is:

S(i,j) = (I*K)(i,j) = \sum_{m}\sum_{n} I(m,n)\,K(i-m,\,j-n)

See Ian J. Goodfellow, Yoshua Bengio, and Aaron Courville.Deep Learning.MIT Press, Cambridge, MA, USA, 2016, which is incorporated herein by reference in its entirety. Expanding further on the above equation, the 3D convolution operation can be expressed as:

S(h,i,j) = (I*K)(h,i,j) = \sum_{l}\sum_{m}\sum_{n} I(l,m,n)\,K(h-l,\,i-m,\,j-n)

where h is the additional dimension that spans the number of frames the kernel has to go through. In one embodiment, the kernel is convoluted with the concatenation of the grayscale images and lane line segmentation masks. To this extent, a 3D-CNN network is incorporated to preserve the temporal information of the input signals and compute the ego vehicle speed.

3D-CNNs can learn spatial and temporal features simultaneously using 3D kernels. In one embodiment, small receptive fields of 3×3×3 are used as the convolutional kernels throughout the network. Many 3D-CNN architectures lose big chunks of temporal information after the first 3D pooling layer. This is especially valid in the case of short-term spatio-temporal features propagated by utilizing smaller temporal windows. The pooling kernel size is d×k×k, where d is the kernel temporal depth and k is the spatial kernel size. In one embodiment, d=1 is used for the first max pooling layer to preserve the temporal information. In this embodiment, it can be ensured that the temporal information does not collapse entirely after the initial convolutional layers.

FIG.5is a block diagram of an architecture of 3D-CNN with masked attention (3D-CMA).FIG.6is a block diagram of an architecture having lane line segmentation including an encoder and a decoder as part of the masked attention layer.
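As a concrete reference for the 3D convolution sum above, a direct (valid-region) NumPy evaluation might look as follows; it is an illustrative sketch rather than the implementation used on the edge computing device420.

```python
import numpy as np

def conv3d(I: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Direct (valid-region) evaluation of
    S(h, i, j) = sum_l sum_m sum_n I(l, m, n) K(h-l, i-m, j-n),
    i.e., a 3D convolution of the input volume I with kernel K."""
    Kf = K[::-1, ::-1, ::-1]           # flipping K turns the sum into a correlation
    d, kh, kw = K.shape
    T, H, W = I.shape
    out = np.empty((T - d + 1, H - kh + 1, W - kw + 1))
    for h in range(out.shape[0]):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[h, i, j] = np.sum(I[h:h + d, i:i + kh, j:j + kw] * Kf)
    return out

# Example: a 10-frame clip of 64x64 inputs convolved with a single 3x3x3
# kernel, matching the receptive field size described above.
clip = np.random.rand(10, 64, 64)
kernel = np.random.rand(3, 3, 3)
features = conv3d(clip, kernel)        # shape (8, 62, 62)
```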
In some embodiments, a masked-attention layer504is added into the 3D-CNN architecture500to guide the model to focus on relevant features that help with ego-vehicle speed computation. In one embodiment, the relevant features are road lane lines. An image of an outdoor scene captured from a moving car typically has significant clutter and random motion that can obscure the model learning. For example, a scene can be obstructed by other moving vehicles, moving pedestrians, or birds and other animals. Road work zones and temporary markers or lane markings may create unusual views of the road. In some cases, road markings may transition from temporary markings in work zones to regular lane line markings. Some roads may offer periodic mile markers. A 3D-CNN model is preferably trained to filter out the irrelevant movements (such as that of other cars, pedestrians, etc.) that do not contribute towards the ego-vehicle speed estimation and focus only on features that matter. However, such a 3D-CNN model typically requires training with large quantities of data. In a more practical scenario where unlimited resources are not available, adding masked-attention helps to attain improved model performance with faster model convergence. As shown herein, the error in speed estimation is reduced by adding masked-attention to the 3D-CNN network500. Further details about the impact of masked-attention are described as part of an ablation study below. Convolutional neural networks comprise a learned set of filters, where each filter extracts a different feature from the image. An object is to inhibit or exhibit the activation of features based on the appearance of objects of interest in the images. Typical scenes captured by car-mounted imaging devices include background objects such as the sky, and other vehicles in the environment, which do not contribute to ego-vehicle speed estimation. In fact, the relative motion of environmental vehicles often contributes negatively to the ability of the neural network to inhibit irrelevant features. To inhibit and exhibit features based on relevance, a masked-attention map506is concatenated to the input image502before passing an input image through the neural network. RegardingFIG.6, a single-shot network504is used with a shared encoder614and three separate decoders that accomplish specific tasks such as object detection, drivable area segmentation, and lane line segmentation. Preferably, there are no complex and/or redundant shared blocks between different decoders, which reduces computational consumption. CSP-Darknet is preferably used as the backbone network614of the encoder, while the neck is mainly composed of Spatial Pyramid Pooling (SPP) module616and Feature Pyramid Network (FPN) module. See Chien-Yao Wang, Alexey Bochkovskiy, and Hong-Yuan Mark Liao. Scaled-yolov4: Scaling cross stage partial network, 2020; Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. InComputer Vision—ECCV2014,pages 346-361. Springer International Publishing, 2014; and Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection, 2016, each incorporated herein by reference in their entirety. SPP generates and fuses features of different scales, and FPN fuses features at different semantic levels, making the generated features contain multiple scales and semantic level information. 
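A minimal sketch of preparing the two-channel input (grayscale frame plus lane-line segmentation mask) is given below; the 64×64 size matches the resizing described below for the 3D-CMA input, while the intensity normalization and the assumption of binary masks are illustrative choices.

```python
import cv2
import numpy as np

def build_model_input(gray_frames, lane_masks, size=(64, 64)) -> np.ndarray:
    """Stack each resized grayscale frame with its resized lane-line
    segmentation mask as a second channel, yielding an array of shape
    (n, 64, 64, 2) as described for the 3D-CMA input.

    gray_frames, lane_masks: sequences of 2D arrays of equal length;
    the masks are assumed to already be binary (0/1)."""
    channels = []
    for frame, mask in zip(gray_frames, lane_masks):
        f = cv2.resize(frame.astype(np.float32), size) / 255.0
        m = cv2.resize(mask.astype(np.float32), size,
                       interpolation=cv2.INTER_NEAREST)
        channels.append(np.stack([f, m], axis=-1))
    return np.stack(channels, axis=0)   # (n, 64, 64, 2)
```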
In one embodiment, the masked-attention map506is generated from input video sequences502using the lane line segmentation branch504. The concatenation512of lane line segmentation as an additional channel to the camera channel allows the 3D-CNN510to focus on the apparent displacement of the lane line segments in the video sequences to best estimate the ego-vehicle speed.

Referring back toFIG.6, the architecture504for lane line segmentation includes an encoder614and a decoder618. The backbone network614is used to extract the features of the input image612. Typically, some classic image classification network serves as the backbone. In one embodiment, CSP Darknet is used as the backbone. The SPP616generates and fuses features of different scales. The lane line segmentation head618is configured such that after three upsampling processes, an output feature map622is restored to the size of (W; H; 2), which represents the probability of each pixel in the input image612for the lane line and the background.

In some embodiments, other road features may be used in the segmentation for masked attention. Other road features can include, but are not limited to, periodic reflectors marking road boundaries, road center rumble ridges, road barriers having reflective markings, and mile marker posts. In some embodiments, the background is used to classify a road condition. Road conditions can include wet road, dry road, icy road, or snow conditions, to name a few. In some embodiments, the background can be used to classify the type of road, including paved road vs an unfinished road. In some embodiments, multiple branches may be used in addition to lane line segmentation branch504for determining masked attention maps. Each of the multiple branches may be for each of the different types of road features that can be used to focus attention for speed estimation.

The 3D-CNN architecture with masked-attention (3D-CMA) for ego vehicle speed estimation is illustrated inFIG.5. In the 3D-CNN architecture ofFIG.5, the RGB stream can be converted to grayscale since color information is not vital for speed estimation. However, a masked-attention map506is concatenated512as an additional channel to the grayscale image502. To reduce the computational complexity and memory requirement, the original input streams are resized to 64×64 before feeding them into the network510. Thus, the input to the model has a dimension of n×64×64×2, where n is the number of frames in the temporal sequence. In one embodiment, all convolutional 3D layers516,522use a fixed kernel size of 3×3×3. The initial pooling layer518uses a kernel size of 1×2×2 to preserve the temporal information. The subsequent pooling layer524, which appears at the center of the network, compresses the temporal and spatial domains with a kernel size of 2×2×2. Six 3D convolutional layers516,522,526,528are incorporated, with the number of filters for layers 1-6 being 32, 32, 64, 64, 128, and 128, respectively. Finally, four fully connected layers532,534,536,538have 512, 256, 64, and 1 nodes, respectively.

The L2 loss function which is used for training the 3D-CNN is as follows:

\mathcal{L}_{speed} = \frac{1}{n}\sum_{i=0}^{n}\left(S_i - \hat{S}_i\right)^2 = \frac{1}{n}\sum_{i=0}^{n}\left(S_i - W^T X\right)^2 = \frac{1}{n}\sum_{i=0}^{n}\left(S_i - W^T(X_I + X_M)\right)^2

where n is the number of frames in the input, Si is the ground truth speed value of the ith corresponding frame, and Ŝi is the inferred speed value. XI is the grayscale image channel, and XM is the masked-attention channel for every frame. W is the weight tensor of the 3D convolutional kernel.
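A hedged PyTorch sketch of a network matching the stated layer configuration is given below. PyTorch and its channels-first layout are used only for illustration; padding, activation functions, and the exact positions of the pooling layers are assumptions, not details disclosed above.

```python
import torch
import torch.nn as nn

class SpeedNet3D(nn.Module):
    """Minimal sketch of the 3D-CNN branch described above: six 3x3x3
    Conv3d layers with 32, 32, 64, 64, 128, 128 filters, a first pool
    that preserves the temporal axis, a second pool that halves it, and
    fully connected layers of 512, 256, 64, and 1 nodes."""
    def __init__(self):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(nn.Conv3d(cin, cout, 3, padding=1), nn.ReLU())
        self.features = nn.Sequential(
            block(2, 32),                      # grayscale + mask channels
            nn.MaxPool3d((1, 2, 2)),           # temporal depth d = 1 preserved
            block(32, 32), block(32, 64),
            nn.MaxPool3d((2, 2, 2)),           # compress time and space
            block(64, 64), block(64, 128), block(128, 128),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),                  # scalar speed estimate
        )

    def forward(self, x):                      # x: (batch, 2, n, 64, 64)
        return self.regressor(self.features(x))

model = SpeedNet3D()
clip = torch.rand(4, 2, 10, 64, 64)            # batch of 10-frame clips
loss = nn.MSELoss()(model(clip).squeeze(1), torch.rand(4))  # L2 loss
```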
The ego vehicle speed estimation may encounter varying conditions, such as varying road markings, varying road conditions, and even varying road surface types. The ego vehicle speed estimation can be configured to go into power conserve modes depending on such varying conditions. In some embodiments, the onboard hardware implementation of an ego vehicle speed estimation system400may be configured to use power efficiently. The hardware implementation400can be configured to halt processing of the 3D-CNN network when the segmented features do not include road features that may be used to determine ego vehicle speed. The hardware implementation400can be configured to monitor ego vehicle speed obtained from internal sensors while the 3D-CNN network is in the halted state. The hardware implementation400can be configured to intermittently perform processing using the 3D-CNN network. The hardware implementation400can be configured to continuously monitor vehicle speed while the ego vehicle is in an operating state and periodically estimate speed of the ego vehicle using the 3D-CNN network.

The effectiveness of the 3D-CMA model was evaluated. First, the public datasets used in experiments are described. Then the metrics used for evaluation are described. The 3D-CMA model architecture is compared against ViViT, a state-of-the-art vision transformer architecture. Additionally, some ablation studies are described to characterize the contribution of masked-attention within the network architecture and compare its performance by discarding the same from the 3D-CNN. A Video Vision Transformer (ViViT) is used for some cases due to its representation of the 3D convolution in the form of tubelet embedding. See Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lučić, and Cordelia Schmid. Vivit: A video vision transformer. InProceedings of the IEEE/CVF International Conference on Computer Vision,pages 6836-6846, 2021, incorporated herein by reference in its entirety. ViViT is easily reproducible and has a good balance between the parameters and accuracy for small datasets. Moreover, ViViT-H scores an accuracy of 95.8, just below the 95.9 accuracy score by Swin-L as per the Video Transformers Survey over HowTo100M. See Javier Selva, Anders S Johansen, Sergio Escalera, Kamal Nasrollahi, Thomas B Moeslund, and Albert Clapés. Video transformers: A survey. arXiv preprint arXiv:2201.05991, 2022; and Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. Howto100m: Learning a text-video embedding by watching hundred million narrated video clips. InProceedings of the IEEE/CVF International Conference on Computer Vision,pages 2630-2640, 2019, each incorporated herein by reference in their entirety.

FIG.7is a block diagram of an architecture of ViViT. In the ViViT, the N frames from the video are tokenized using 3D-convolutional tubelet embeddings and further passed to multiple transformer encoders to finally regress the speed value. The ViViT approach includes extracting non-overlapping, spatio-temporal “tubes” from the input volume and linearly projecting each tube to ℝ^d. This method is an extension of ViT's embedding to 3D, and corresponds to a 3D convolution. For a tubelet of dimension t×h×w, n_t = ⌊T/t⌋, n_h = ⌊H/h⌋, and n_w = ⌊W/w⌋ tokens are extracted from the temporal, height, and width dimensions, respectively. Smaller tubelet dimensions thus result in more tokens, which increases the computation.
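A short PyTorch sketch of tubelet tokenization is given below; it uses a 3D convolution whose kernel and stride equal the tubelet size, with t=6, h=w=8 and an embedding dimension of 128 as reported for the experiments later in this description, and an example frame count chosen purely for illustration.

```python
import torch
import torch.nn as nn

# Illustrative tubelet tokenization: a Conv3d whose kernel and stride both
# equal the tubelet size (t, h, w) extracts n_t * n_h * n_w non-overlapping
# spatio-temporal tubes and linearly projects each one to a d-dimensional
# token.
t, h, w, d = 6, 8, 8, 128
to_tokens = nn.Conv3d(in_channels=1, out_channels=d,
                      kernel_size=(t, h, w), stride=(t, h, w))

clip = torch.rand(1, 1, 12, 64, 64)           # (batch, channels, T, H, W)
tokens = to_tokens(clip)                      # (1, d, T//t, H//h, W//w)
tokens = tokens.flatten(2).transpose(1, 2)    # (1, n_t*n_h*n_w, d)
print(tokens.shape)                           # torch.Size([1, 128, 128])
```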
A ViT extracts N non-overlapping image patches602, xi ∈ ℝ^(h×w), performs a linear projection, and then rasterises them into 1D tokens zi ∈ ℝ^d. The sequence of tokens input to the following transformer encoder is

Z = [zcls, Ex1, Ex2, . . . , ExN] + p

where the projection by E is equivalent to a 2D convolution. As shown inFIG.7, an optional learned classification token zcls704is prepended to this sequence, and its representation at the final layer of the encoder serves as the final representation used by the classification layer. In addition, a learned positional embedding, p ∈ ℝ^(N×d)706, is added to the tokens to retain positional information, as the subsequent self-attention operations in the transformer are permutation invariant. The tokens are then passed through an encoder consisting of a sequence of L transformer layers710. Each layer l comprises Multi-Headed Self-Attention724, layer normalisation (LN)618,626, and MLP blocks716. The Transformer Encoder can be trained with the spatio-temporal embeddings.

There is a lack of standardized datasets available for the estimation of ego-vehicle speed from a monocular camera stream. DBNet is a large-scale dataset for driving behavior research which includes aligned videos and vehicular speed from a 1000 km driving stretch. See Yiping Chen, Jingkang Wang, Jonathan Li, Cewu Lu, Zhipeng Luo, Han Xue, and Cheng Wang. Lidar-video driving dataset: Learning driving policies effectively. In2018IEEE/CVF Conference on Computer Vision and Pattern Recognition,pages 5870-5878, 2018, incorporated herein by reference in its entirety. However, the test set is not available for public usage. Likewise, the test set of the comma.ai speed challenge is not open to the public. See comma.ai speed challenge, 2018, incorporated herein by reference in its entirety. The KITTI dataset has been utilized for speed estimation using motion and monocular depth estimation. See Róbert-Adrian Rill. Speed estimation evaluation on the kitti benchmark based on motion and monocular depth information, 2019, incorporated herein by reference in its entirety. However, there is no information about the train and test splits used for the evaluation of the models.

In the present disclosure, two public datasets are utilized for experiments—nuImages and KITTI. Some sample images extracted from video sequences for nuImages and KITTI are shown inFIGS.7A-7D. nuImages is derived from nuScenes and is a large-scale autonomous driving dataset having 93k video clips of 6 seconds each. See Holger Caesar, Varun Bankiti, Alex H. Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. In CVPR, 2020, incorporated herein by reference in its entirety. The dataset is collated from two diverse cities—Boston and Singapore. Each video clip consists of 13 frames spaced out at 2 Hz. The annotated images include rain, snow, and night time, which are important for autonomous driving applications. Each sample in the nuImages dataset comprises an annotated camera image with an associated timestamp and past and future images. It is to be noted that the six previous and six future images are not annotated. The sample frame has meta-data information available as token ids regarding the previous and future frames associated with the particular sample. The vehicle speed is extracted from the CAN bus data and linked to the sample data through sample tokens.
The train and test splits of the nuImages dataset have been strictly followed for training and evaluating the AI models. The distribution of speed data across train and test splits of the nuImages dataset is shown inFIGS.8A-8D.

The KITTI Vision Benchmark Suite is a public dataset containing raw data recordings that are captured and synchronized at 10 Hz. See Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In2012IEEE Conference on Computer Vision and Pattern Recognition,pages 3354-3361, 2012; and A Geiger, P Lenz, C Stiller, and R Urtasun. Vision meets robotics: The kitti dataset.The International Journal of Robotics Research,32(11):1231-1237, 2013, each incorporated herein by reference in their entirety. Geiger et al., 2012, presented the benchmark challenges, their creation and use for evaluating state-of-the-art computer vision methods, while Geiger et al., 2013, was a follow-up work that provided technical details on the raw data itself, describing the recording platform, the data format and the utilities. The dataset was captured by driving around the mid-size city of Karlsruhe. The “synched+rectified” processed data is utilized, where images are rectified and undistorted and where the data frame numbers correspond across all sensor streams. While the dataset provides both grayscale and color stereo sequences, only an RGB stream extracted from camera ID 03 is utilized. The ego-vehicle speed values are extracted from IMU sensor readings. The raw data is split across six categories—City, Residential, Road, Campus, Person, and Calibration. For the experiments, data from the City and Road categories is utilized. Some video samples in the City category have prolonged periods where the car is stationary; video samples in which the vehicle was stationary for most of the duration are discarded. To facilitate future benchmarks from the research community for ego-vehicle speed estimation, train and test splits are reported in Table 1. The distribution of speed data across train and test splits from the KITTI dataset is shown inFIGS.9A-9D.

TABLE 1
Train and test video samples for the KITTI dataset
Category: City
  Train: 2011_09_26_drive 0002, 0005, 0009, 0011, 0013, 0014, 0048, 0051, 0056, 0059, 0084, 0091, 0095, 0096, 0104, 0106, 0113; 2011_09_28_drive 0001; 2011_09_29_drive 0071
  Test: 2011_09_26_drive 0001, 0117
Category: Road
  Train: 2011_09_26_drive 0015, 0027, 0028, 0029, 0032, 0052; 2011_09_29_drive 0004, 0016, 0042
  Test: 2011_09_26_drive 0070, 0101; 2011_09_29_drive 0047

The conventional evaluation protocol used in the literature for the task of regression—Mean Absolute Error (MAE) and Root Mean Square Error (RMSE)—was used. The MAE and RMSE are computed as follows:

RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}

MAE = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{y}_i - y_i\right|

where yi denotes the ground truth ego-vehicle speed value and ŷi denotes the speed value predicted by the AI model. RGB images from the camera mounted in front of the vehicle and the ego-vehicle velocity coming from the CAN-BUS are used across both public datasets. This information is synchronized. The KITTI dataset has a camera image resolution of 1238×374. The temporal dimension used for the KITTI dataset is ten frames. The KITTI dataset is sampled at 10 Hz, which means that the models are fed with video frames containing visual information from a time window of 1 sec. The ego-vehicle velocity assigned to any temporal sequence is the speed value tagged to the closest time stamp of the 10th frame in the input sequence.
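For reference, the two metrics can be computed directly from paired ground-truth and predicted speed values, as in the illustrative NumPy sketch below.

```python
import numpy as np

def rmse(y_true, y_pred) -> float:
    """Root Mean Square Error between ground-truth and predicted speeds."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred) -> float:
    """Mean Absolute Error between ground-truth and predicted speeds."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.abs(y_pred - y_true)))

# e.g., rmse([10.0, 12.5], [9.5, 13.0]) == 0.5 and mae([10.0, 12.5], [9.5, 13.0]) == 0.5
```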
On the other hand, the camera image resolution for the nuImages dataset is 1600×900. The nuImages dataset is sampled at 2 Hz. Six frames each are taken, preceding and succeeding the sample frame. This means that the models are fed with video frames containing visual information spanning a time window of approximately 6 sec. The ego vehicle velocity assigned to any temporal sequence is the speed value tagged to the closest time-stamp of the sample frame (7th frame in the input sequence).

For the experiments with ViViT, non-overlapping, spatio-temporal tubelet embeddings of dimension t×h×w are taken, where t=6, h=8, and w=8. The number of transformer layers in the implementation is 16. The number of heads for multi-headed self-attention blocks is 16, and the dimension of embeddings is 128. The AI models were trained using an Nvidia GeForce RTX-3070 Max-Q Design GPU having 8 GB VRAM. The learning rate used for training all models is 1×10−3. All models are trained for 100 epochs with early stopping criteria set to terminate the training process if validation loss does not improve for ten epochs consecutively. The optimizer utilized is Adam since it utilizes both momentum and scaling.

The performance of the proposed 3D-CMA architecture is evaluated and compared against the standard ViViT with spatio-temporal attention. The evaluation metrics are reported on the test set for the KITTI and nuImages datasets in the subsections below. The evaluation across all datasets consistently reported better results for the 3D-CMA architecture. Evaluation scores for the nuImages dataset are shown in Table 2. Approximately 27% improvement was observed in RMSE and MAE for 3D-CMA compared to ViViT for the nuImages dataset.

TABLE 2
nuImages evaluation for (a) ViViT and (b) 3D-CMA
Method              RMSE     MAE
ViViT               1.782    1.326
3D-CMA              1.297    0.974

The evaluation shows 34.5% and 41.5% improvement in RMSE and MAE, respectively, on the KITTI dataset for 3D-CMA compared to ViViT. The results are seen in Table 3.

TABLE 3
Evaluation on the KITTI dataset for (a) ViViT and (b) 3D-CMA
Method              RMSE     MAE
ViViT               5.024    4.324
3D-CMA              3.290    2.528

To further understand the importance of masked-attention, an ablation study was conducted by removing the masked-attention input to the 3D-CNN network. It is to be noted that the input to the 3D-CNN model is a single-channel grayscale image after the removal of the masked-attention input. Evaluation scores for the nuImages dataset are shown in Table 4. The addition of masked-attention reduces RMSE by 23.6% and MAE by 25.9% for the nuImages dataset.

TABLE 4
Evaluation on the nuImages dataset for (a) 3D-CNN without masked-attention and (b) 3D-CMA
Method                  RMSE     MAE
3D-CNN without MA       1.698    1.315
3D-CMA                  1.297    0.974

Evaluation scores for the KITTI dataset are shown in Table 5. The addition of masked-attention reduces the RMSE by 25.8% and MAE by 30.1% for the KITTI dataset.

TABLE 5
Evaluation on the KITTI dataset for (a) 3D-CNN without masked-attention and (b) 3D-CMA
Method                  RMSE     MAE
3D-CNN without MA       4.437    3.617
3D-CMA                  3.290    2.528

To take into consideration the generalization ability of the AI models, evaluations were conducted across datasets and their accuracy was reported. It is to be noted that there is a shift in the domain when testing nuImages-trained AI models on the KITTI dataset due to the reasons stated in section 4.3. To test KITTI models on the nuImages dataset, ten frames are needed within a duration of 1 second from nuImages.
Since the frame rate of the nuImages dataset is only 2 FPS, the evaluation was unable to encapsulate ten frames within a temporal window of 1 sec. For this reason, testing of KITTI-trained models on the nuImages dataset was discarded. The KITTI video stream was pre-processed to evaluate nuImages-trained models on the KITTI dataset to ensure the temporal windows are compatible. nuImages-trained models require the temporal window to be 13 frames across 6 secs. However, KITTI dataset video streams are sampled at 10 Hz. Frame decimation was used to sample the video at 2 Hz and concatenate frames across 6 secs of the stream to encapsulate the 13-frame temporal window. The images were resized, and the mismatch in image dimensions between the two datasets was allowed in order to diversify the gap between them in the evaluation. The results for two models are reported below in Table 6.

TABLE 6
Evaluation of nuImages-trained models on KITTI test data for (a) ViViT and (b) 3D-CMA
Method               RMSE (KITTI)    MAE (KITTI)
ViViT (nuImages)     7.420           5.957
3D-CMA (nuImages)    5.880           4.694

The present disclosure includes a modified 3D-CNN architecture with masked-attention employed for ego vehicle speed estimation using single-camera video streams. 3D-CNN is effective in capturing temporal elements within an image sequence. However, it was determined that the presence of background clutter and non-cohesive motion within the video stream often confused the model. To extend some control over the focus regions within the images, the 3D-CNN is modified to employ a masked-attention mechanism to steer the model to focus on relevant regions. In one embodiment, the lane segmentation mask is concatenated as an additional channel to the input images before feeding them to the 3D-CNN. The modified 3D-CNN has demonstrated better performance in several evaluations with the inclusion of the masked-attention. The performance of the modified 3D-CNN architecture was evaluated on two publicly available datasets—nuImages and KITTI. Though there are prior works utilizing the KITTI dataset for the ego vehicle speed estimation task, none clearly stated the train and test splits being used for reporting the results. In the present disclosure, the train and test splits from the KITTI Road and City categories are reported. In terms of evaluation, the 3D-CMA is compared against a recent state-of-the-art transformer network for videos, ViViT. In addition, the impact of employing masked-attention with the 3D-CNN is investigated, and the injection of masked-attention improved the MAE and RMSE scores across all scenarios. The increase in the RMSE and MAE scores for cross-dataset evaluation is due to the domain gap between the two datasets. However, 3D-CMA continued to perform better for the cross-dataset evaluation as well.

Next, further details of the hardware description of an exemplary computing environment according to embodiments are described with reference toFIG.10. InFIG.10, a controller1000is a computing device which includes a CPU1050which can perform the processes described above. The computing device may be an AI workstation running an operating system, for example Ubuntu Linux OS, Windows, a version of Unix OS, or Mac OS. The computer system1000may include one or more central processing units (CPU)1050having multiple cores. The computer system1000may include a graphics board1012having multiple GPUs, each GPU having GPU memory. The graphics board1012may perform many of the mathematical operations of the disclosed machine learning methods.
Next, further details of the hardware description of an exemplary computing environment according to embodiments are described with reference toFIG.10. InFIG.10, a controller1000is a computing device which includes a CPU1050which can perform the processes described above. The computing device may be an AI workstation running an operating system, for example Ubuntu Linux OS, Windows, a version of Unix OS, or Mac OS. The computer system1000may include one or more central processing units (CPU)1050having multiple cores. The computer system1000may include a graphics board1012having multiple GPUs, each GPU having GPU memory. The graphics board1012may perform many of the mathematical operations of the disclosed machine learning methods. The computer system1000includes main memory1002, typically random access memory (RAM), which contains the software being executed by the processing cores1050and GPUs1012, as well as a non-volatile storage device1004for storing data and the software programs. Several interfaces for interacting with the computer system1000may be provided, including an I/O Bus Interface1010, Input/Peripherals1018such as a keyboard, touch pad, mouse, Display Adapter1016and one or more Displays1008, and a Network Controller1006to enable wired or wireless communication through a network 99. The interfaces, memory and processors may communicate over the system bus1026. The computer system1000includes a power supply1021, which may be a redundant power supply. In some embodiments, the computer system1000may include a server CPU and a graphics card by NVIDIA, in which the GPUs have multiple CUDA cores. In some embodiments, the computer system1000may include a machine learning engine1012. The exemplary circuit elements described in the context of the present disclosure may be replaced with other elements and structured differently than the examples provided herein. Moreover, circuitry configured to perform features described herein may be implemented in multiple circuit units (e.g., chips), or the features may be combined in circuitry on a single chipset, as shown inFIG.11. FIG.11shows a schematic diagram of a data processing system1100used within the computing system, according to exemplary aspects of the present disclosure. The data processing system1100is an example of a computer in which code or instructions implementing the processes of the illustrative aspects of the present disclosure may be located. InFIG.11, data processing system1180employs a hub architecture including a north bridge and memory controller hub (NB/MCH)1125and a south bridge and input/output (I/O) controller hub (SB/ICH)1120. The central processing unit (CPU)1130is connected to NB/MCH1125. The NB/MCH1125also connects to the memory1145via a memory bus, and connects to the graphics processor1150via an accelerated graphics port (AGP). The NB/MCH1125also connects to the SB/ICH1120via an internal bus (e.g., a unified media interface or a direct media interface). The CPU processing unit1130may contain one or more processors and even may be implemented using one or more heterogeneous processor systems. For example,FIG.12shows one aspect of the present disclosure of the CPU1130. In one aspect of the present disclosure, the instruction register1238retrieves instructions from the fast memory1240. At least part of these instructions is fetched from the instruction register1238by the control logic1236and interpreted according to the instruction set architecture of the CPU1130. Part of the instructions can also be directed to the register1232. In one aspect of the present disclosure the instructions are decoded according to a hardwired method, and in another aspect of the present disclosure the instructions are decoded according to a microprogram that translates instructions into sets of CPU configuration signals that are applied sequentially over multiple clock pulses. After fetching and decoding the instructions, the instructions are executed using the arithmetic logic unit (ALU)1234that loads values from the register1232and performs logical and mathematical operations on the loaded values according to the instructions. The results from these operations can be fed back into the register and/or stored in the fast memory1240.
According to certain aspects of the present disclosure, the instruction set architecture of the CPU1130can use a reduced instruction set architecture, a complex instruction set architecture, a vector processor architecture, or a very large instruction word architecture. Furthermore, the CPU1130can be based on the Von Neumann model or the Harvard model. The CPU1130can be a digital signal processor, an FPGA, an ASIC, a PLA, a PLD, or a CPLD. Further, the CPU1130can be an x86 processor by Intel or by AMD; an ARM processor; a Power architecture processor by, e.g., IBM; a SPARC architecture processor by Sun Microsystems or by Oracle; or other known CPU architecture. Referring again toFIG.11, the data processing system1180can include that the SB/ICH1120is coupled through a system bus to an I/O Bus, a read only memory (ROM)1156, universal serial bus (USB) port1164, a flash binary input/output system (BIOS)1168, and a graphics controller1158. PCI/PCIe devices can also be coupled to SB/ICH1120through a PCI bus1162. The PCI devices may include, for example, Ethernet adapters, add-in cards, and PC cards for notebook computers. The hard disk drive1160and CD-ROM1156can use, for example, an integrated drive electronics (IDE) or serial advanced technology attachment (SATA) interface. In one aspect of the present disclosure the I/O bus can include a super I/O (SIO) device. Further, the hard disk drive (HDD)1160and optical drive1166can also be coupled to the SB/ICH1120through a system bus. In one aspect of the present disclosure, a keyboard1170, a mouse1172, a parallel port1178, and a serial port1176can be connected to the system bus through the I/O bus. Other peripherals and devices can be connected to the SB/ICH1120using a mass storage controller such as SATA or PATA, an Ethernet port, an ISA bus, an LPC bridge, SMBus, a DMA controller, and an Audio Codec. Moreover, the present disclosure is not limited to the specific circuit elements described herein, nor is the present disclosure limited to the specific sizing and classification of these elements. For example, the skilled artisan will appreciate that the circuitry described herein may be adapted based on changes in battery sizing and chemistry, or based on the requirements of the intended back-up load to be powered. The functions and features described herein may also be executed by various distributed components of a system. For example, one or more processors may execute these system functions, wherein the processors are distributed across multiple components communicating in a network. The distributed components may include one or more client and server machines, which may share processing, as shown byFIG.13, in addition to various human interface and communication devices (e.g., display monitors, smart phones, tablets, personal digital assistants (PDAs)). More specifically,FIG.13illustrates client devices including smart phone1311, tablet1312, mobile device terminal1314and fixed terminals1316. These client devices may be communicatively coupled with a mobile network service1320via base station1356, access point1354, satellite1352or via an internet connection. Mobile network service1320may comprise central processors1322, server1324and database1326. Fixed terminals1316and mobile network service1320may be communicatively coupled via an internet connection to functions in cloud1330that may comprise security gateway1332, data center1334, cloud controller1336, data storage1338and provisioning tool1340.
The network may be a private network, such as a LAN or WAN, or may be a public network, such as the Internet. Input to the system may be received via direct user input and received remotely either in real-time or as a batch process. Additionally, some aspects of the present disclosure may be performed on modules or hardware not identical to those described. Numerous modifications and variations of the present invention are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended claims, the invention may be practiced otherwise than as specifically described herein. As an example, the invention may be practiced to utilize the speed of the ego vehicle to estimate the speeds and locations of environment vehicles for in-vehicle motion and path planning. | 46,303 |
11861854 | DETAILED DESCRIPTION The description that follows includes systems, methods, techniques, instruction sequences, and computing machine program products that embody illustrative embodiments of the disclosure. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments of the inventive subject matter. It will be evident, however, to those skilled in the art, that embodiments of the inventive subject matter may be practiced without these specific details. In general, well-known instruction instances, protocols, structures, and techniques are not necessarily shown in detail. In various example embodiments, tracking one or more objects (e.g., a car) moving within images captured in sequence (e.g., video) can be implemented using one or more convolutional neural networks trained on images of different sizes, where each image of a given size has texture data emphasized via an attention map. The one or more convolutional neural networks can use the images and the attention net to more accurately and efficiently match pixels between images to track the object. Once the object is tracked, image manipulation effects can be performed on the object in the entire image sequence seamlessly. For example, a video sequence of a red car traveling down the street can be tracked via the above approach, and the car may be recolored as blue. The resulting image sequence depicting the blue car can be published on a social media network as an ephemeral message, as discussed in further detail below. FIG.1is a block diagram showing an example messaging system100for exchanging data (e.g., messages and associated content) over a network. The messaging system100includes multiple client devices102, each of which hosts a number of applications including a messaging client application104. Each messaging client application104is communicatively coupled to other instances of the messaging client application104and a messaging server system108via a network106(e.g., the Internet). Accordingly, each messaging client application104is able to communicate and exchange data with another messaging client application104and with the messaging server system108via the network106. The data exchanged between messaging client applications104, and between a messaging client application104and the messaging server system108, includes functions (e.g., commands to invoke functions) as well as payload data (e.g., text, audio, video, or other multimedia data). The messaging server system108provides server-side functionality via the network106to a particular messaging client application104. While certain functions of the messaging system100are described herein as being performed by either a messaging client application104or by the messaging server system108, it will be appreciated that the location of certain functionality within either the messaging client application104or the messaging server system108is a design choice. For example, it may be technically preferable to initially deploy certain technology and functionality within the messaging server system108, but to later migrate this technology and functionality to the messaging client application104where a client device102has a sufficient processing capacity. The messaging server system108supports various services and operations that are provided to the messaging client application104.
Such operations include transmitting data to, receiving data from, and processing data generated by the messaging client application104. This data may include message content, client device information, geolocation information, media annotation and overlays, message content persistence conditions, social network information, and live event information, as examples. Data exchanges within the messaging system100are invoked and controlled through functions available via user interfaces (UIs) of the messaging client application104. Turning now specifically to the messaging server system108, an Application Programming Interface (API) server110is coupled to, and provides a programmatic interface to, an application server112. The application server112is communicatively coupled to a database server118, which facilitates access to a database120in which is stored data associated with messages processed by the application server112. The API server110receives and transmits message data (e.g., commands and message payloads) between the client devices102and the application server112. Specifically, the API server110provides a set of interfaces (e.g., routines and protocols) that can be called or queried by the messaging client application104in order to invoke functionality of the application server112. The API server110exposes various functions supported by the application server112, including account registration; login functionality; the sending of messages, via the application server112, from a particular messaging client application104to another messaging client application104; the sending of media files (e.g., images or video) from a messaging client application104to a messaging server application114for possible access by another messaging client application104; the setting of a collection of media data (e.g., a story); the retrieval of such collections; the retrieval of a list of friends of a user of a client device102; the retrieval of messages and content; the adding and deletion of friends to and from a social graph; the location of friends within the social graph; and opening and application events (e.g., relating to the messaging client application104). The application server112hosts a number of applications and subsystems, including the messaging server application114, an image processing system116, a social network system122, and a dense feature system150. The messaging server application114implements a number of message-processing technologies and functions, particularly related to the aggregation and other processing of content (e.g., textual and multimedia content) included in messages received from multiple instances of the messaging client application104. As will be described in further detail, the text and media content from multiple sources may be aggregated into collections of content (e.g., called stories or galleries). These collections are then made available, by the messaging server application114, to the messaging client application104. Other processor- and memory-intensive processing of data may also be performed server-side by the messaging server application114, in view of the hardware requirements for such processing. The application server112also includes the image processing system116, which is dedicated to performing various image processing operations, typically with respect to images or video received within the payload of a message at the messaging server application114. 
The social network system122supports various social networking functions and services, and makes these functions and services available to the messaging server application114. To this end, the social network system122maintains and accesses an entity graph (e.g., entity graph304inFIG.3) within the database120. Examples of functions and services supported by the social network system122include the identification of other users of the messaging system100with whom a particular user has relationships or whom the particular user is "following", and also the identification of other entities and interests of a particular user. The dense feature system150manages tracking an object in different images, according to some example embodiments. Further details of the dense feature system150are discussed below with reference toFIGS.6-15. Although the dense feature system150is illustrated inFIG.1as being integrated into the application server112, it is appreciated that in some example embodiments, the dense feature system150is integrated into other systems, such as the client device102. Further, in some example embodiments, some engines of the dense feature system150may be integrated into the application server112and some may be integrated into the client device102. For example, the neural network (discussed below) may be trained on the application server112and the trained model may then be transmitted to the client device102for client-side execution to track objects in images generated by the client device102. The application server112is communicatively coupled to a database server118, which facilitates access to a database120in which is stored data associated with messages processed by the messaging server application114. FIG.2is a block diagram illustrating further details regarding the messaging system100, according to example embodiments. Specifically, the messaging system100is shown to comprise the messaging client application104and the application server112, which in turn embody a number of subsystems, namely an ephemeral timer system202, a collection management system204, and an annotation system206. The ephemeral timer system202is responsible for enforcing the temporary access to content permitted by the messaging client application104and the messaging server application114. To this end, the ephemeral timer system202incorporates a number of timers that, based on duration and display parameters associated with a message or collection of messages (e.g., a SNAPCHAT Story), selectively display and enable access to messages and associated content via the messaging client application104. Further details regarding the operation of the ephemeral timer system202are provided below. The collection management system204is responsible for managing collections of media (e.g., collections of text, image, video, and audio data). In some examples, a collection of content (e.g., messages, including images, video, text, and audio) may be organized into an "event gallery" or an "event story". Such a collection may be made available for a specified time period, such as the duration of an event to which the content relates. For example, content relating to a music concert may be made available as a "story" for the duration of that music concert. The collection management system204may also be responsible for publishing an icon that provides notification of the existence of a particular collection to the user interface of the messaging client application104.
The collection management system204furthermore includes a curation interface208that allows a collection manager to manage and curate a particular collection of content. For example, the curation interface208enables an event organizer to curate a collection of content relating to a specific event (e.g., delete inappropriate content or redundant messages). Additionally, the collection management system204employs machine vision (or image recognition technology) and content rules to automatically curate a content collection. In certain embodiments, compensation may be paid to a user for inclusion of user-generated content into a collection. In such cases, the curation interface208operates to automatically make payments to such users for the use of their content. The annotation system206provides various functions that enable a user to annotate or otherwise modify or edit media content associated with a message. For example, the annotation system206provides functions related to the generation and publishing of media overlays for messages processed by the messaging system100. The annotation system206operatively supplies a media overlay (e.g., a SNAPCHAT Geofilter or filter) to the messaging client application104based on a geolocation of the client device102. In another example, the annotation system206operatively supplies a media overlay to the messaging client application104based on other information, such as social network information of the user of the client device102. A media overlay may include audio and visual content and visual effects. Examples of audio and visual content include pictures, texts, logos, animations, and sound effects. An example of a visual effect includes color overlaying. The audio and visual content or the visual effects can be applied to a media content item (e.g., a photo) at the client device102. For example, the media overlay includes text that can be overlaid on top of a photograph generated by the client device102. In another example, the media overlay includes an identification of a location (e.g., Venice Beach), a name of a live event, or a name of a merchant (e.g., Beach Coffee House). In another example, the annotation system206uses the geolocation of the client device102to identify a media overlay that includes the name of a merchant at the geolocation of the client device102. The media overlay may include other indicia associated with the merchant. The media overlays may be stored in the database120and accessed through the database server118. In one example embodiment, the annotation system206provides a user-based publication platform that enables users to select a geolocation on a map, and upload content associated with the selected geolocation. The user may also specify circumstances under which particular content should be offered to other users. The annotation system206generates a media overlay that includes the uploaded content and associates the uploaded content with the selected geolocation. In another example embodiment, the annotation system206provides a merchant-based publication platform that enables merchants to select a particular media overlay associated with a geolocation via a bidding process. For example, the annotation system206associates the media overlay of a highest-bidding merchant with a corresponding geolocation for a predefined amount of time. FIG.3is a schematic diagram illustrating data300which may be stored in the database120of the messaging server system108, according to certain example embodiments. 
While the content of the database120is shown to comprise a number of tables, it will be appreciated that the data could be stored in other types of data structures (e.g., as an object-oriented database). The database120includes message data stored within a message table314. An entity table302stores entity data, including an entity graph304. Entities for which records are maintained within the entity table302may include individuals, corporate entities, organizations, objects, places, events, etc. Regardless of type, any entity regarding which the messaging server system108stores data may be a recognized entity. Each entity is provided with a unique identifier, as well as an entity type identifier (not shown). The entity graph304furthermore stores information regarding relationships and associations between or among entities. Such relationships may be social, professional (e.g., work at a common corporation or organization), interest-based, or activity-based, merely for example. The database120also stores annotation data, in the example form of filters, in an annotation table312. Filters for which data is stored within the annotation table312are associated with and applied to videos (for which data is stored in a video table310) and/or images (for which data is stored in an image table308). Filters, in one example, are overlays that are displayed as overlaid on an image or video during presentation to a recipient user. Filters may be of various types, including user-selected filters from a gallery of filters presented to a sending user by the messaging client application104when the sending user is composing a message. Other types of filters include geolocation filters (also known as geo-filters) which may be presented to a sending user based on geographic location. For example, geolocation filters specific to a neighborhood or special location may be presented within a user interface by the messaging client application104, based on geolocation information determined by a Global Positioning System (GPS) unit of the client device102. Another type of filter is a data filter, which may be selectively presented to a sending user by the messaging client application104, based on other inputs or information gathered by the client device102during the message creation process. Examples of data filters include a current temperature at a specific location, a current speed at which a sending user is traveling, a battery life for a client device102, or the current time. Other annotation data that may be stored within the image table308is so-called "lens" data. A "lens" may be a real-time special effect and sound that may be added to an image or a video. As mentioned above, the video table310stores video data which, in one embodiment, is associated with messages for which records are maintained within the message table314. Similarly, the image table308stores image data associated with messages for which message data is stored in the message table314. The entity table302may associate various annotations from the annotation table312with various images and videos stored in the image table308and the video table310. A story table306stores data regarding collections of messages and associated image, video, or audio data, which are compiled into a collection (e.g., a SNAPCHAT Story or a gallery). The creation of a particular collection may be initiated by a particular user (e.g., each user for whom a record is maintained in the entity table302).
A user may create a "personal story" in the form of a collection of content that has been created and sent/broadcast by that user. To this end, the user interface of the messaging client application104may include an icon that is user-selectable to enable a sending user to add specific content to his or her personal story. A collection may also constitute a "live story", which is a collection of content from multiple users that is created manually, automatically, or using a combination of manual and automatic techniques. For example, a "live story" may constitute a curated stream of user-submitted content from various locations and events. Users whose client devices have location services enabled and are at a common location or event at a particular time may, for example, be presented with an option, via a user interface of the messaging client application104, to contribute content to a particular live story. The live story may be identified to the user by the messaging client application104, based on his or her location. The end result is a "live story" told from a community perspective. A further type of content collection is known as a "location story", which enables a user whose client device102is located within a specific geographic location (e.g., on a college or university campus) to contribute to a particular collection. In some embodiments, a contribution to a location story may require a second degree of authentication to verify that the end user belongs to a specific organization or other entity (e.g., is a student on the university campus). FIG.4is a schematic diagram illustrating a structure of a message400, according to some embodiments, generated by a messaging client application104for communication to a further messaging client application104or the messaging server application114. The content of a particular message400is used to populate the message table314stored within the database120, accessible by the messaging server application114. Similarly, the content of a message400is stored in memory as "in-transit" or "in-flight" data of the client device102or the application server112. The message400is shown to include the following components:
A message identifier402: a unique identifier that identifies the message400.
A message text payload404: text, to be generated by a user via a user interface of the client device102and that is included in the message400.
A message image payload406: image data captured by a camera component of a client device102or retrieved from memory of a client device102, and that is included in the message400.
A message video payload408: video data captured by a camera component or retrieved from a memory component of the client device102, and that is included in the message400.
A message audio payload410: audio data captured by a microphone or retrieved from the memory component of the client device102, and that is included in the message400.
Message annotations412: annotation data (e.g., filters, stickers, or other enhancements) that represents annotations to be applied to the message image payload406, message video payload408, or message audio payload410of the message400.
A message duration parameter414: a parameter value indicating, in seconds, the amount of time for which content of the message400(e.g., the message image payload406, message video payload408, and message audio payload410) is to be presented or made accessible to a user via the messaging client application104.
A message geolocation parameter416: geolocation data (e.g., latitudinal and longitudinal coordinates) associated with the content payload of the message400. Multiple message geolocation parameter416values may be included in the payload, each of these parameter values being associated with respective content items included in the content (e.g., a specific image in the message image payload406, or a specific video in the message video payload408).
A message story identifier418: identifier values identifying one or more content collections (e.g., "stories") with which a particular content item in the message image payload406of the message400is associated. For example, multiple images within the message image payload406may each be associated with multiple content collections using identifier values.
A message tag420: one or more tags, each of which is indicative of the subject matter of content included in the message payload. For example, where a particular image included in the message image payload406depicts an animal (e.g., a lion), a tag value may be included within the message tag420that is indicative of the relevant animal. Tag values may be generated manually, based on user input, or may be automatically generated using, for example, image recognition.
A message sender identifier422: an identifier (e.g., a messaging system identifier, email address, or device identifier) indicative of a user of the client device102on which the message400was generated and from which the message400was sent.
A message receiver identifier424: an identifier (e.g., a messaging system identifier, email address, or device identifier) indicative of a user of the client device102to which the message400is addressed.
The contents (e.g., values) of the various components of the message400may be pointers to locations in tables within which content data values are stored. For example, an image value in the message image payload406may be a pointer to (or address of) a location within the image table308. Similarly, values within the message video payload408may point to data stored within the video table310, values stored within the message annotations412may point to data stored in the annotation table312, values stored within the message story identifier418may point to data stored in the story table306, and values stored within the message sender identifier422and the message receiver identifier424may point to user records stored within the entity table302.
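Purely as an illustration of how the message components enumerated above might be grouped in code, the following Python dataclass mirrors those fields; the field names, types, and defaults are assumptions made for exposition and do not correspond to any API defined by the disclosure.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Message:
    """Illustrative container mirroring the message components described above."""
    message_id: str
    sender_id: str
    receiver_id: str
    text_payload: Optional[str] = None
    image_payload: Optional[bytes] = None            # or a pointer into the image table
    video_payload: Optional[bytes] = None
    audio_payload: Optional[bytes] = None
    annotations: List[str] = field(default_factory=list)
    duration_seconds: Optional[int] = None           # message duration parameter
    geolocations: List[Tuple[float, float]] = field(default_factory=list)
    story_ids: List[str] = field(default_factory=list)
    tags: List[str] = field(default_factory=list)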
FIG.5is a schematic diagram illustrating an access-limiting process500, in terms of which access to content (e.g., an ephemeral message502, and associated multimedia payload of data) or a content collection (e.g., an ephemeral message story504) may be time-limited (e.g., made ephemeral). An ephemeral message502is shown to be associated with a message duration parameter506, the value of which determines an amount of time that the ephemeral message502will be displayed to a receiving user of the ephemeral message502by the messaging client application104. In one embodiment, where the messaging client application104is a SNAPCHAT application client, an ephemeral message502is viewable by a receiving user for up to a maximum of 10 seconds, depending on the amount of time that the sending user specifies using the message duration parameter506. The message duration parameter506and the message receiver identifier424are shown to be inputs to a message timer512, which is responsible for determining the amount of time that the ephemeral message502is shown to a particular receiving user identified by the message receiver identifier424. In particular, the ephemeral message502will only be shown to the relevant receiving user for a time period determined by the value of the message duration parameter506. The message timer512is shown to provide output to a more generalized ephemeral timer system202, which is responsible for the overall timing of display of content (e.g., an ephemeral message502) to a receiving user. The ephemeral message502is shown inFIG.5to be included within an ephemeral message story504(e.g., a personal SNAPCHAT Story, or an event story). The ephemeral message story504has an associated story duration parameter508, a value of which determines a time duration for which the ephemeral message story504is presented and accessible to users of the messaging system100. The story duration parameter508, for example, may be the duration of a music concert, where the ephemeral message story504is a collection of content pertaining to that concert. Alternatively, a user (either the owning user or a curator user) may specify the value for the story duration parameter508when performing the setup and creation of the ephemeral message story504. Additionally, each ephemeral message502within the ephemeral message story504has an associated story participation parameter510, a value of which determines the duration of time for which the ephemeral message502will be accessible within the context of the ephemeral message story504. Accordingly, a particular ephemeral message502may “expire” and become inaccessible within the context of the ephemeral message story504, prior to the ephemeral message story504itself expiring in terms of the story duration parameter508. The story duration parameter508, story participation parameter510, and message receiver identifier424each provide input to a story timer514, which operationally determines whether a particular ephemeral message502of the ephemeral message story504will be displayed to a particular receiving user and, if so, for how long. Note that the ephemeral message story504is also aware of the identity of the particular receiving user as a result of the message receiver identifier424. Accordingly, the story timer514operationally controls the overall lifespan of an associated ephemeral message story504, as well as an individual ephemeral message502included in the ephemeral message story504. 
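As a rough sketch rather than a definitive implementation, the timer behavior described in this and the following paragraphs can be expressed as a few predicates; the function names, parameter representations, and use of Python datetimes are assumptions introduced only for illustration.

from datetime import datetime, timedelta
from typing import List, Optional

def message_visible(now: datetime, viewed_at: datetime, duration_s: int) -> bool:
    """Message timer: an ephemeral message is shown to a receiving user only for
    the period given by its message duration parameter."""
    return now - viewed_at <= timedelta(seconds=duration_s)

def message_in_story(now: datetime, posted_at: datetime, participation_s: int) -> bool:
    """Story timer (per message): a message expires from the story once its story
    participation parameter (e.g., 24 hours) has elapsed."""
    return now - posted_at <= timedelta(seconds=participation_s)

def story_accessible(now: datetime, created_at: datetime,
                     story_duration_s: Optional[int],
                     message_expiry_times: List[datetime]) -> bool:
    """A story expires when its story duration parameter elapses or when every
    message's participation window has expired; with an indefinite duration,
    the last remaining message determines expiry."""
    any_live = any(expiry > now for expiry in message_expiry_times)
    if story_duration_s is None:                     # indefinite story duration
        return any_live
    within_duration = now - created_at <= timedelta(seconds=story_duration_s)
    return within_duration and any_live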
In one embodiment, each and every ephemeral message502within the ephemeral message story504remains viewable and accessible for a time period specified by the story duration parameter508. In a further embodiment, a certain ephemeral message502may expire, within the context of the ephemeral message story504, based on a story participation parameter510. Note that a message duration parameter506may still determine the duration of time for which a particular ephemeral message502is displayed to a receiving user, even within the context of the ephemeral message story504. Accordingly, the message duration parameter506determines the duration of time that a particular ephemeral message502is displayed to a receiving user, regardless of whether the receiving user is viewing that ephemeral message502inside or outside the context of an ephemeral message story504. The ephemeral timer system202may furthermore operationally remove a particular ephemeral message502from the ephemeral message story504based on a determination that it has exceeded an associated story participation parameter510. For example, when a sending user has established a story participation parameter510of 24 hours from posting, the ephemeral timer system202will remove the relevant ephemeral message502from the ephemeral message story504after the specified 24 hours. The ephemeral timer system202also operates to remove an ephemeral message story504either when the story participation parameter510for each and every ephemeral message502within the ephemeral message story504has expired, or when the ephemeral message story504itself has expired in terms of the story duration parameter508. In certain use cases, a creator of a particular ephemeral message story504may specify an indefinite story duration parameter508. In this case, the expiration of the story participation parameter510for the last remaining ephemeral message502within the ephemeral message story504will determine when the ephemeral message story504itself expires. In this case, a new ephemeral message502, added to the ephemeral message story504, with a new story participation parameter510, effectively extends the life of an ephemeral message story504to equal the value of the story participation parameter510. In response to the ephemeral timer system202determining that an ephemeral message story504has expired (e.g., is no longer accessible), the ephemeral timer system202communicates with the messaging system100(e.g., specifically, the messaging client application104) to cause an indicium (e.g., an icon) associated with the relevant ephemeral message story504to no longer be displayed within a user interface of the messaging client application104. Similarly, when the ephemeral timer system202determines that the message duration parameter506for a particular ephemeral message502has expired, the ephemeral timer system202causes the messaging client application104to no longer display an indicium (e.g., an icon or textual identification) associated with the ephemeral message502. FIG.6illustrates a block diagram showing components provided within the dense feature system150, according to some embodiments. The components themselves are communicatively coupled (e.g., via appropriate interfaces) to each other and to various data sources, so as to allow information to be passed between the applications or so as to allow the applications to share and access common data. Furthermore, the components access the database(s)120via the database server(s)118. 
As illustrated, the dense feature system150comprises a scale engine605, a feature detection engine610, an attention map engine615, a logic control engine620, and a training engine625. The scale engine605is responsible for identifying an input image and generating multiple scaled images of the input image. The feature detection engine610manages detecting image features (e.g., edges, curves, prominences, blobs, flat surfaces) within the multiple scaled images. In some example embodiments, the feature detection engine610is implemented using a convolutional neural network. The attention map engine615is responsible for generating an attention map that is based on the texture data of each pixel of the input image. In some example embodiments, the attention map engine615is also implemented using a convolutional neural network. The logic control engine620is responsible for managing the operations of the other engines and directing transfers of data between the engines. For example, the logic control engine620can combine feature data generated by the feature detection engine610with an attention map that is generated by the attention map engine615to track objects through different images. The training engine625is responsible for training the networks used in the feature detection engine610and the attention map engine615, according to some example embodiments. FIG.7illustrates a functional architecture700of the dense feature system150, according to some example embodiments. The two images being compared inFIG.7are the source image705and the target image710. The source image705may be an image that the user has captured via the client device102, and the target image710may be a stored image to be matched to the source image705. For example, if the user takes a picture of a company's logo, the picture can be treated as the source image705, whereas the target image710can then be a stored image of the logo to be matched to the source image705. If the picture and the stored image of the logo are matched, that means the source image705is indeed an image of the logo. Alternatively, in some example embodiments that implement motion estimation in a video stream, the source image705is a first image in the video stream and the target image710is the subsequent image in the video stream. By matching pixels of the source image705to pixels of the target image710, motion estimation can be performed (e.g., motion estimation for interpolation effects). With reference to the source image pipeline (e.g., objects indicated by reference numerals705,715,725, and735), the scale attention net715receives the source image705and generates a dense feature725. The dense feature725is a vector that describes one or more pixels of the source image705. Each pixel in the source image705is described by a dense feature725, according to some example embodiments. With reference to the target image pipeline (e.g., objects indicated by reference numerals710,720,730, and740), the scale attention net720receives the target image710and generates a dense feature730. Source feature735is the dense feature725corresponding to a feature in the source image705. In particular, as shown in the example, the source feature735is a black circular area of interest on the depicted vehicle's left headlamp in the source image705. The source feature735is compared against different target features in a target feature area740in the target image710(each target feature is denoted as a black filled-in circle in target feature area740). In particular, the source feature735is combined with each of the target features using an inner product operation750to generate probability values755. Each of the probability values755corresponds to a combination of the source feature735with one of the target features in the target feature area740. The probability values755are then input into an ArgMax module760, which selects the inner product having the largest value. It is appreciated that while source feature735and target features of target feature area740are illustrated as regions of interest (e.g., circles), the features can be individual pixels that are compared to each other (e.g., a headlight pixel from source image705compared against a headlight pixel from the target image710).
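A minimal sketch of this matching step is shown below, assuming the dense features are available as NumPy arrays; converting the inner products to probability-like values via a softmax is an assumption consistent with, but not dictated by, the description.

import numpy as np

def match_feature(source_feature, target_features):
    """Compare one source dense feature (shape (D,)) against candidate target
    features (shape (N, D)) via inner products and select the best match."""
    scores = target_features @ source_feature        # one inner product per candidate
    best = int(np.argmax(scores))                    # ArgMax: index of the matching target feature
    return best, scores

def scores_to_probabilities(scores):
    """Optionally turn the raw inner products into probability-like values."""
    e = np.exp(scores - scores.max())
    return e / e.sum()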
FIG.8shows internal functional components and processes of a scale attention net (such as scale attention net715or720, as illustrated inFIG.7), according to some example embodiments. As illustrated, the input image is used to generate different scaled images800and805, which are versions of the source image705at different scales (e.g., sizes, full scale, 0.75 scale, 0.5 scale). Though only two scaled images800,805are illustrated, it is appreciated that many different scaled images can be produced. For example, the source image705can be used to generate10different scaled images, with the first being the full-size scale (e.g., scale of 1) and the tenth scaled image having a scale of 0.1. Each of the scaled images800,805is input into a feature net, such as feature nets810and820. In some example embodiments, the feature nets are implemented using a convolutional neural network (CNN). The output of the feature net CNN is image feature data for a given scale. For example, feature x1840is feature data from the scaled image805, and feature x0825is image feature data from the scaled image800. The source image705is also input into an attention net815. In some example embodiments, the attention net815is implemented as a convolutional neural network (CNN). The attention net815generates attention maps, such as attention map x1835and attention map x0830. In some example embodiments, the attention maps are numerical values that place more emphasis on a given scale based on the amount of texture detail of a given pixel. For example,FIG.12Ashows an example input image of two men fighting in a snowy and rocky environment.FIG.12Bshows the same image at attention with scale1. InFIG.12B, the attention is masked in that only pixels with the highest attention on scale1are shown. Each of the pixels included inFIG.12Bfor attention on scale1has rich texture data (e.g., the fighter's beard, clothes, limbs). Continuing,FIG.12Cillustrates masked attention on scale2, where only pixels with the highest attention (e.g., more texture) at scale2are illustrated. Similarly,FIG.12Dillustrates masked attention on scale3, where only pixels with the highest attention (e.g., more texture) at scale3are illustrated. InFIG.12D, at attention on scale3, the background snow is included; thus, background/snow feature data of an image scaled to 3 can be used with attention on scale3to place more emphasis on the snowy background at that scale. In contrast, the men fighting, their clothes, and weapons are included in scale1; thus, feature data of the men fighting, their clothes, and weapons can be used on scale1to place more emphasis on the rich textured objects at that scale (e.g., men, weapons, clothing folds, etc.). Placing attention on different scales via the attention map improves image correlation because the size of details of different areas of an image changes as the depicted object's scale changes (e.g., the image grows larger). The attention map can track the scale changes by placing different attention at different scales using textures to essentially track the object at different scales (e.g., different sizes). In this way, the dense feature system150emphasizes rich texture areas (e.g., beard, clothes) at a small scale but emphasizes larger areas (e.g., areas with less texture or details such as snow) at larger scales. Further details of the attention net are discussed with reference toFIG.10below. Referring toFIG.8, the feature data x1840is then combined with the attention map x1835via a multiplication operation845. Similarly, the attention map x0830is combined with feature x0825via multiplication operation845. Attention map x0830has different values than attention map x1835. In particular, attention map x1835places more emphasis on textured features for the scale of scaled image805, and attention map x0830places more emphasis on textured features of the scale of scaled image800. The values of the attention maps (e.g., the value of attention map x1835and attention map x0830) are adjusted during a training phase, according to some example embodiments. During the training phase, the weighted connections of the convolutional neural network are adjusted (e.g., via back propagation) to maximize the inner product between a dense feature in the source image and its counterpart dense feature in the target image. Next, according to some example embodiments, all combinations of feature and attention data are combined in a weighted sum via addition operation843to produce dense feature847, which is a collection of all dense features for all pixels.
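The multiply-and-sum combination just described can be sketched as follows, assuming each scale's feature data has already been resampled to a common pixel grid and the attention maps are normalized across scales at each pixel; these assumptions are made only for illustration.

import numpy as np

def fuse_scales(features_per_scale, attention_per_scale):
    """Combine per-scale feature data with the matching attention maps: multiply
    each scale's features by its attention map, then sum across scales to obtain
    the dense feature for every pixel.

    features_per_scale: list of (H, W, D) arrays, one per scale.
    attention_per_scale: list of (H, W) arrays, softmax-normalized across scales."""
    dense = np.zeros_like(features_per_scale[0])
    for feat, attn in zip(features_per_scale, attention_per_scale):
        dense += feat * attn[..., None]              # weight features by per-pixel attention
    return dense                                     # (H, W, D) dense features for all pixels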
FIG.9illustrates an architecture for a feature net900, according to some example embodiments. In the example inFIG.9, the feature net900is implemented as a number of layers of a convolutional neural network. In particular, the feature net900comprises an input layer905, which receives the input image, resnet block910, resnet block915, resnet block920, resnet block925, resnet block930, and an output layer935, which outputs the feature data for the given scale.FIG.11, discussed below, shows an example of the layers inside a resnet block (e.g., any one of resnet blocks910-930). In some example embodiments, the last resnet block of the feature net900(e.g., resnet block930) does not have a Rectified Linear Unit (RELU) layer as shown inFIG.11below. FIG.10illustrates an architecture for an attention net1000, according to some example embodiments. In the example ofFIG.10, the attention net1000is implemented as a number of layers of a convolutional neural network. In particular, the attention net comprises an input layer1005, a shared resnet block layer1010, resnet block layer1015, resnet block layer1020, resnet block layer1025, a SoftMax norm block layer1030, and an output layer1035. As illustrated, the shared resnet block layer1010shares the same parameter data as the five resnet blocks910-930of the feature net900(e.g., the attention net1000and the feature net900use the same five resnet blocks). In some example embodiments, the last resnet block of the attention net1000(e.g., resnet block1025) does not have a RELU layer as shown inFIG.11below. FIG.11shows example internal components of a resnet block1100, according to some example embodiments. In particular, with reference toFIG.11, the resnet block1100comprises an input layer1105, a convolutional layer1110, a batch norm layer1115, a RELU layer1120, a convolution layer1125, and a batch norm layer1130. A skip connection from the input can be combined with the output of the batch norm layer1130via an additive operator1145. The output from the additive operator1145is then input to an optional RELU layer1135, and into an output layer1140, which outputs the data for resnet block1100. As discussed above, in some example embodiments, the last resnet block of the feature net and attention net does not include a RELU layer, e.g., RELU layer1135.
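A hedged PyTorch sketch of a residual block with this layer ordering is given below; the 3×3 kernels and equal input/output channel counts are assumptions, and the trailing ReLU is made optional to reflect the last block of the feature net and attention net.

import torch
import torch.nn as nn

class ResnetBlock(nn.Module):
    """Residual block following the layer order described above:
    conv -> batch norm -> ReLU -> conv -> batch norm, plus a skip connection."""
    def __init__(self, channels, final_relu=True):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.final_relu = final_relu   # the last block of the feature/attention net omits this

    def forward(self, x):
        out = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        out = out + x                  # skip connection via the additive operator
        return self.relu(out) if self.final_relu else out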
FIG.13illustrates a flow diagram for a method1300for generating dense feature data, according to some example embodiments. At operation1305, the logic control engine620identifies the input image (e.g., source image705). At operation1310, the attention map engine615generates an attention map for combination with one or more feature data sets generated from different scaled images. At operation1315, the scale engine605uses the input image to generate multiple scaled images having different scales. At operation1320, for each of the generated scaled images, the feature detection engine610generates feature data (e.g., feature x1840, feature x0825inFIG.8). At operation1325, the logic control engine620combines (e.g., multiplies) feature data of a given scale with an attention map for the given scale, and further combines all combinations of attention maps and feature data using a summation operation (e.g., addition operation843). The result of operation1325is dense feature data for the input image (e.g., dense feature847). FIG.14illustrates a flow diagram for a method1400for training the dense feature system150, according to some example embodiments. At operation1405, the training engine625identifies the dense feature in the source image. For example, the training engine625identifies source feature735in source image705. At operation1410, the training engine625identifies the corresponding dense feature in the target image. For example, the training engine625identifies the target feature over the headlight (e.g., within target feature area740). To train the dense feature system150, the training engine625modifies the weights of neuron connections and neuron values until the inner product of the source feature and the target feature is at a maximum. Accordingly, at operation1415, the training engine625maximizes the inner product between the source feature and the target feature to maximize the likelihood of those two features producing the greatest inner product. During the training process, the final output function, e.g., ArgMax module760, is replaced by a SoftMax module, which compares the source features by classifying them. Further, during the training process, as mentioned above, the attention net is trained for the different scales and the appropriate attention map can be set to focus on different scales.
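One common way to realize this objective with a SoftMax during training is a cross-entropy loss over the candidate inner products, sketched below; this particular loss formulation is an assumption consistent with, but not dictated by, the description.

import torch
import torch.nn.functional as F

def matching_loss(source_feature, candidate_features, target_index):
    """Softmax/cross-entropy surrogate for 'maximize the inner product of the
    corresponding pair': the inner products act as logits over the candidates.

    source_feature: (D,) tensor; candidate_features: (N, D) tensor;
    target_index: index of the true corresponding target feature."""
    logits = candidate_features @ source_feature          # (N,) inner products
    target = torch.tensor(target_index, dtype=torch.long)
    return F.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))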
FIG.15illustrates a flow diagram for a method1500for matching pixels in different images, according to some example embodiments. For example, the pixels being matched inFIG.15can be pixels of an object changing size (e.g., scale) in two images, a source image and a target image. At operation1505, the logic control engine620identifies the source image. The source image may be an initial frame sampled from a live video feed being captured on the client device. At operation1510, the logic control engine620identifies the target image. The target image may be a frame from the live video feed captured after the source image, for example. At operation1515, the logic control engine620implements a scale-attention net to generate dense feature data for the source image and the target image. For example, the logic control engine620can implement a scale-attention net by combining outputs from the scale engine605, the feature detection engine610, and the attention map engine615as described above with reference toFIGS.7and8. At operation1520, the logic control engine620determines the inner products between the dense features of the source and target images. At operation1525, the logic control engine620identifies matching dense features by identifying which combinations of dense features have the greatest inner products. At operation1530, the logic control engine620generates a modified image sequence at least in part by using the dense features to track an object depicted in an image sequence. For example, the object being tracked may be a car being video recorded on the client device102. The dense features can be used to more accurately and efficiently track the car on the client device and change the color of the car via pixel manipulation. The image sequence depicting the car with a different color can then be published as an ephemeral message502, as discussed above with reference toFIG.5. FIG.16is a block diagram illustrating an example software architecture1606, which may be used in conjunction with various hardware architectures herein described.FIG.16is a non-limiting example of a software architecture, and it will be appreciated that many other architectures may be implemented to facilitate the functionality described herein. The software architecture1606may execute on hardware such as a machine1600ofFIG.16that includes, among other things, processors, memory, and I/O components. A representative hardware layer1652is illustrated and can represent, for example, the machine1600ofFIG.16. The representative hardware layer1652includes a processing unit1654having associated executable instructions1604. The executable instructions1604represent the executable instructions of the software architecture1606, including implementation of the methods, components, and so forth described herein. The hardware layer1652also includes a memory/storage1656, which also has the executable instructions1604. The hardware layer1652may also comprise other hardware1658. In the example architecture ofFIG.16, the software architecture1606may be conceptualized as a stack of layers where each layer provides particular functionality. For example, the software architecture1606may include layers such as an operating system1602, libraries1620, frameworks/middleware1618, applications1616, and a presentation layer1614. Operationally, the applications1616and/or other components within the layers may invoke application programming interface (API) calls1608through the software stack and receive a response in the form of messages1612. The layers illustrated are representative in nature and not all software architectures have all layers. For example, some mobile or special-purpose operating systems may not provide a frameworks/middleware1618, while others may provide such a layer. Other software architectures may include additional or different layers. The operating system1602may manage hardware resources and provide common services. The operating system1602may include, for example, a kernel1622, services1624, and drivers1626.
The kernel1622may act as an abstraction layer between the hardware and the other software layers. For example, the kernel1622may be responsible for memory management, processor management (e.g., scheduling), component management, networking, security settings, and so on. The services1624may provide other common services for the other software layers. The drivers1626are responsible for controlling or interfacing with the underlying hardware. For instance, the drivers1626include display drivers, camera drivers, Bluetooth® drivers, flash memory drivers, serial communication drivers (e.g., Universal Serial Bus (USB) drivers), Wi-Fi® drivers, audio drivers, power management drivers, and so forth depending on the hardware configuration. The libraries1620provide a common infrastructure that is used by the applications1616and/or other components and/or layers. The libraries1620provide functionality that allows other software components to perform tasks in an easier fashion than by interfacing directly with the underlying operating system1602functionality (e.g., kernel1622, services1624, and/or drivers1626). The libraries1620may include system libraries1644(e.g., C standard library) that may provide functions such as memory allocation functions, string manipulation functions, mathematical functions, and the like. In addition, the libraries1620may include API libraries1646such as media libraries (e.g., libraries to support presentation and manipulation of various media formats such as MPEG4, H.264, MP3, AAC, AMR, JPG, or PNG), graphics libraries (e.g., an OpenGL framework that may be used to render 2D and 3D graphic content on a display), database libraries (e.g., SQLite that may provide various relational database functions), web libraries (e.g., WebKit that may provide web browsing functionality), and the like. The libraries1620may also include a wide variety of other libraries1648to provide many other APIs to the applications1616and other software components/modules. The frameworks/middleware1618provide a higher-level common infrastructure that may be used by the applications1616and/or other software components/modules. For example, the frameworks/middleware1618may provide various graphic user interface (GUI) functions, high-level resource management, high-level location services, and so forth. The frameworks/middleware1618may provide a broad spectrum of other APIs that may be utilized by the applications1616and/or other software components/modules, some of which may be specific to a particular operating system1602or platform. The applications1616include built-in applications1638and/or third-party applications1640. Examples of representative built-in applications1638may include, but are not limited to, a contacts application, a browser application, a book reader application, a location application, a media application, a messaging application, and/or a game application. The third-party applications1640may include an application developed using the ANDROID™ or IOS™ software development kit (SDK) by an entity other than the vendor of the particular platform, and may be mobile software running on a mobile operating system such as IOS™, ANDROID™, WINDOWS® Phone, or other mobile operating systems. The third-party applications1640may invoke the API calls1608provided by the mobile operating system (such as the operating system1602) to facilitate functionality described herein. 
The applications1616may use built-in operating system functions (e.g., kernel1622, services1624, and/or drivers1626), libraries1620, and frameworks/middleware1618to create user interfaces to interact with users of the system. Alternatively, or additionally, in some systems interactions with a user may occur through a presentation layer, such as the presentation layer1614. In these systems, the application/component “logic” can be separated from the aspects of the application/component that interact with a user. FIG.17is a block diagram illustrating components of a machine1700, according to some example embodiments, able to read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies discussed herein. Specifically,FIG.17shows a diagrammatic representation of the machine1700in the example form of a computer system, within which instructions1716(e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine1700to perform any one or more of the methodologies discussed herein may be executed. As such, the instructions1716may be used to implement modules or components described herein. The instructions1716transform the general, non-programmed machine1700into a particular machine1700programmed to carry out the described and illustrated functions in the manner described. In alternative embodiments, the machine1700operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine1700may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine1700may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a personal digital assistant (PDA), an entertainment media system, a cellular telephone, a smartphone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions1716, sequentially or otherwise, that specify actions to be taken by the machine1700. Further, while only a single machine1700is illustrated, the term “machine” shall also be taken to include a collection of machines that individually or jointly execute the instructions1716to perform any one or more of the methodologies discussed herein. The machine1700may include processors1710, memory/storage1730, and I/O components1750, which may be configured to communicate with each other such as via a bus1702. The memory/storage1730may include a memory1732, such as a main memory, or other memory storage, and a storage unit1736, both accessible to the processors1710such as via the bus1702. The storage unit1736and memory1732store the instructions1716embodying any one or more of the methodologies or functions described herein. The instructions1716may also reside, completely or partially, within the memory1732, within the storage unit1736, within at least one of the processors1710(e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine1700. Accordingly, the memory1732, the storage unit1736, and the memory of the processors1710are examples of machine-readable media. 
The I/O components1750may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components1750that are included in a particular machine1700will depend on the type of machine. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components1750may include many other components that are not shown inFIG.17. The I/O components1750are grouped according to functionality merely for simplifying the following discussion and the grouping is in no way limiting. In various example embodiments, the I/O components1750may include output components1752and input components1754. The output components1752may include visual components (e.g., a display such as a plasma display panel (PDP), a light-emitting diode (LED) display, a liquid-crystal display (LCD), a projector, or a cathode ray tube (CRT)), acoustic components (e.g., speakers), haptic components (e.g., a vibratory motor, resistance mechanisms), other signal generators, and so forth. The input components1754may include alphanumeric input components (e.g., a keyboard, a touch screen configured to receive alphanumeric input, a photo-optical keyboard, or other alphanumeric input components), point-based input components (e.g., a mouse, a touchpad, a trackball, a joystick, a motion sensor, or other pointing instruments), tactile input components (e.g., a physical button, a touch screen that provides location and/or force of touches or touch gestures, or other tactile input components), audio input components (e.g., a microphone), and the like. In further example embodiments, the I/O components1750may include biometric components1756, motion components1758, environment components1760, or position components1762among a wide array of other components. For example, the biometric components1756may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components1758may include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environment components1760may include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. 
The position components1762may include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like. Communication may be implemented using a wide variety of technologies. The I/O components1750may include communication components1764operable to couple the machine1700to a network1780or devices1770via a coupling1782and a coupling1772respectively. For example, the communication components1764may include a network interface component or other suitable device to interface with the network1780. In further examples, the communication components1764may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices1770may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a Universal Serial Bus (USB)). Moreover, the communication components1764may detect identifiers or include components operable to detect identifiers. For example, the communication components1764may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional barcodes such as Universal Product Code (UPC) barcode, multi-dimensional barcodes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D barcode, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components1764, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth. Glossary “CARRIER SIGNAL” in this context refers to any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such instructions. Instructions may be transmitted or received over the network using a transmission medium via a network interface device and using any one of a number of well-known transfer protocols. “CLIENT DEVICE” in this context refers to any machine that interfaces to a communications network to obtain resources from one or more server systems or other client devices. A client device may be, but is not limited to, a mobile phone, desktop computer, laptop, PDA, smartphone, tablet, ultrabook, netbook, multi-processor system, microprocessor-based or programmable consumer electronics system, game console, set-top box, or any other communication device that a user may use to access a network. 
“COMMUNICATIONS NETWORK” in this context refers to one or more portions of a network that may be an ad hoc network, an intranet, an extranet, a virtual private network (VPN), a local area network (LAN), a wireless LAN (WLAN), a wide area network (WAN), a wireless WAN (WWAN), a metropolitan area network (MAN), the Internet, a portion of the Internet, a portion of the Public Switched Telephone Network (PSTN), a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, a network or a portion of a network may include a wireless or cellular network and the coupling may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, the coupling may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High-Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long-Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long-range protocols, or other data transfer technology. “EPHEMERAL MESSAGE” in this context refers to a message that is accessible for a time-limited duration. An ephemeral message may be a text, an image, a video, and the like. The access time for the ephemeral message may be set by the message sender. Alternatively, the access time may be a default setting or a setting specified by the recipient. Regardless of the setting technique, the message is transitory. “MACHINE-READABLE MEDIUM” in this context refers to a component, a device, or other tangible media able to store instructions and data temporarily or permanently and may include, but is not limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, optical media, magnetic media, cache memory, other types of storage (e.g., Erasable Programmable Read-Only Memory (EPROM)), and/or any suitable combination thereof. The term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions. The term “machine-readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of storing instructions (e.g., code) for execution by a machine, such that the instructions, when executed by one or more processors of the machine, cause the machine to perform any one or more of the methodologies described herein. Accordingly, a “machine-readable medium” refers to a single storage apparatus or device, as well as “cloud-based” storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” excludes signals per se. “COMPONENT” in this context refers to a device, a physical entity, or logic having boundaries defined by function or subroutine calls, branch points, APIs, or other technologies that provide for the partitioning or modularization of particular processing or control functions. 
Components may be combined via their interfaces with other components to carry out a machine process. A component may be a packaged functional hardware unit designed for use with other components and a part of a program that usually performs a particular function of related functions. Components may constitute either software components (e.g., code embodied on a machine-readable medium) or hardware components. A “hardware component” is a tangible unit capable of performing certain operations and may be configured or arranged in a certain physical manner. In various example embodiments, one or more computer systems (e.g., a standalone computer system, a client computer system, or a server computer system) or one or more hardware components of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware component that operates to perform certain operations as described herein. A hardware component may also be implemented mechanically, electronically, or any suitable combination thereof. For example, a hardware component may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware component may be a special-purpose processor, such as a Field-Programmable Gate Array (FPGA) or an Application-Specific Integrated Circuit (ASIC). A hardware component may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. For example, a hardware component may include software executed by a general-purpose processor or other programmable processor. Once configured by such software, hardware components become specific machines (or specific components of a machine) uniquely tailored to perform the configured functions and are no longer general-purpose processors. It will be appreciated that the decision to implement a hardware component mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations. Accordingly, the phrase “hardware component” (or “hardware-implemented component”) should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware components are temporarily configured (e.g., programmed), each of the hardware components need not be configured or instantiated at any one instance in time. For example, where a hardware component comprises a general-purpose processor configured by software to become a special-purpose processor, the general-purpose processor may be configured as respectively different special-purpose processors (e.g., comprising different hardware components) at different times. Software accordingly configures a particular processor or processors, for example, to constitute a particular hardware component at one instance of time and to constitute a different hardware component at a different instance of time. Hardware components can provide information to, and receive information from, other hardware components. Accordingly, the described hardware components may be regarded as being communicatively coupled. 
Where multiple hardware components exist contemporaneously, communications may be achieved through signal transmission (e.g., over appropriate circuits and buses) between or among two or more of the hardware components. In embodiments in which multiple hardware components are configured or instantiated at different times, communications between or among such hardware components may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware components have access. For example, one hardware component may perform an operation and store the output of that operation in a memory device to which it is communicatively coupled. A further hardware component may then, at a later time, access the memory device to retrieve and process the stored output. Hardware components may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information). The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented components that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented component” refers to a hardware component implemented using one or more processors. Similarly, the methods described herein may be at least partially processor-implemented, with a particular processor or processors being an example of hardware. For example, at least some of the operations of a method may be performed by one or more processors or processor-implemented components. Moreover, the one or more processors may also operate to support performance of the relevant operations in a “cloud computing” environment or as a “software as a service” (SaaS). For example, at least some of the operations may be performed by a group of computers (as examples of machines including processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an API). The performance of certain of the operations may be distributed among the processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processors or processor-implemented components may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the processors or processor-implemented components may be distributed across a number of geographic locations. “PROCESSOR” in this context refers to any circuit or virtual circuit (a physical circuit emulated by logic executing on an actual processor) that manipulates data values according to control signals (e.g., “commands”, “op codes”, “machine code”, etc.) and which produces corresponding output signals that are applied to operate a machine. A processor may, for example, be a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an ASIC, a Radio-Frequency Integrated Circuit (RFIC), or any combination thereof. 
A processor may further be a multi-core processor having two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. “TIMESTAMP” in this context refers to a sequence of characters or encoded information identifying when a certain event occurred, for example giving date and time of day, sometimes accurate to a small fraction of a second. A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever. The following notice applies to the software and data as described below and in the drawings that form a part of this document: Copyright 2017, SNAP INC., All Rights Reserved. | 72,783 |
11861855 | DETAILED DESCRIPTION The system and method of the present teachings can enable the use of aerial sensor data to augment ground sensor data when compiling a scene that can be used for autonomous vehicle navigation. Referring now toFIG.1, terrain data for a geographic area can be gathered from aerial sensors401and ground-based sensors501, and can be combined to form a richer dataset than either aerial or ground-based sensors could form separately. Aerial sensor data can include, but is not limited to including, aerial point cloud data701. Ground-based data can include, but is not limited to including, ground-based point cloud data601. Aerial data701can be collected by airborne devices that can include, but are not limited to including, unmanned aerial vehicles such as drones and all types of piloted aircraft, for example. The airborne device can include aerial sensor401that can include, but is not limited to including, LIDAR technology. A LIDAR system can include the sensor itself as well as a high-precision satellite positioning system and high-accuracy sensors such as inertial measurement units to determine the orientation of the sensor in space. These systems work together to enable processing of the raw aerial sensor data. Ground-based equipment501can collect ground-based data601. Ground-based equipment501can be mounted upon any ground-based structure such as a building, a vehicle, or a tree. Ground-based equipment501can include a LIDAR system as discussed with respect to aerial equipment. The positioning system associated with ground-based equipment501can be subject to occlusion, which can affect the quality of ground-based data601. Continuing to refer toFIG.1, with respect to combining aerial data701with ground data601, and taking into consideration that ground-based data601could have location issues, the locations of the points in the datasets must be rectified relative to each other. Ground truth location of immobile objects can be used to transform the entire ground-based point cloud to repair any issues with location of the points. Immobile objects that both ground-based and aerial sensors can locate can include, but are not limited to including, walls. Immobile objects can include buildings, and buildings can include walls. Any immobile objects can be used as ground truth. The process of the present teachings can be broadened to include using any sort of immobile object as ground truth. The example provided herein, in which buildings and walls are used for ground truth, is provided for illustrative purposes only. Aerial data701can include, for example, roof-like structures102and surface data104. Ground data601can include, for example, wall-like structures106and surface data108. Roof-like structures102and surface data104can be connected to extract wall data from aerial data701. Wall-like structures that occur in both datasets can be used to register aerial and ground data. Referring now toFIG.1A, the process of the present teachings for combining aerial point cloud data701from aerial data database701A, and ground-based point cloud data601from ground data database601A, can include filtering aerial point cloud data701and the ground-based point cloud data601to create datasets that can be efficiently manipulated and combined. Ground data database601A and aerial data database701A can represent any data source, for example, data received directly from sensors, or data stored in a repository for later processing, or a combination.
Ground-based data can be smoothed to account for deformities in the data such as twists. Perimeters of possible surfaces of interest can be found from the datasets, and the surfaces can be subjected to filtering criteria to identify desired features. Transforms used to register the features discovered in the ground data with the same features discovered in the aerial data can be used to register ground-based dataset601to aerial dataset701, thus forming a consistently-georeferenced combined dataset. Continuing to refer toFIG.1A, filtering the point cloud data can include, but is not limited to including, filter201that can be used, for example, to reduce the number of points under consideration. The type of filter selected can be governed by the size of the dataset and the particular features that will be considered of interest, for example. In an aspect, a conventional voxel filter as described herein can be used to filter the data, producing filtered aerial data711and filtered ground-based data605. Surface locator211can locate surfaces137in filtered ground-based data605. Sector alignment209can divide ground PC data601having located surfaces137into sectors137. Sector alignment209can align the sectors with each other, forming aligned sectors138/139. Polygon processor213A can locate aerial features1101from filtered aerial data711. The perimeters of features in both aligned sectors138/139and aerial features1101can be established by, for example, but not limited to, a conventional convex hull process executed in, for example, feature processor212and connect processor215. Connect processor215can connect features in aerial PC data701that can be connected to each other and form connected features1118. For example, roofs of buildings and nearby surfaces that are visible in aerial PC data701can be connected to each other to form further features such as the walls of the buildings. Ground features1102and connected features1118can be used by feature register739to register the features in the ground PC data with the like features in the aerial PC data. For example, if connected features1118are walls, similarly-situated walls can be located in ground PC data601. Transform process727can determine transforms from the registration process, and ground data transform743can apply the transforms to ground data. Referring now toFIG.2, with respect to filtering, given point cloud601, in an aspect, a conventional voxel filter can return point cloud605A with a smaller number of points than were in original point cloud601. Filtered point cloud605A can represent original point cloud601as a whole. The voxel filter can downsample data from gridded point cloud605by choosing point602in each “cell” (voxel)604in gridded point cloud605to represent the rest of the points in cell604. The set of points that lie within the bounds of voxel604can be combined into representative point602. The method for representing points in voxel604by a single point can take any number of forms. For example, a centroid or spatial average of the point distribution can be computed, or the point at the geometrical center of voxel604can be selected to represent the points in voxel604. A specific method can be chosen based on, for example, whether accuracy or computation speed is deemed more important for a particular application. In using the voxel filter, the dataset can be broken into cells of a pre-selected size, for example, but not limited to, 0.25 m.
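As a non-limiting illustration of the voxel filtering just described, the sketch below downsamples an (N, 3) point cloud by keeping the centroid of the points that fall in each 0.25 m cell. The function name and the use of NumPy are assumptions made for the example; any downsampling that yields one representative point per voxel would serve.

```python
import numpy as np

def voxel_downsample(points: np.ndarray, cell_size: float = 0.25) -> np.ndarray:
    """Reduce an (N, 3) point cloud to one representative point per voxel,
    using the centroid of the points in each cubic cell of edge `cell_size`."""
    voxel_idx = np.floor(points / cell_size).astype(np.int64)   # integer cell index per point
    _, inverse, counts = np.unique(voxel_idx, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)            # guard against a 2-D inverse in some NumPy versions
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)         # accumulate the points in each occupied voxel
    return sums / counts[:, None]            # centroid per occupied voxel

# cloud = np.loadtxt("ground_scan.xyz")      # hypothetical (N, 3) input
# filtered = voxel_downsample(cloud, 0.25)   # one centroid per 0.25 m cell
```

Choosing the geometrical center of each cell instead of the centroid would trade some accuracy for less computation, reflecting the accuracy-versus-speed consideration noted above.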
Any downsampling filter can be used, if reducing the number of points in the dataset is desired. Referring now toFIG.3, surface locator211can locate surfaces in filtered ground data605. In an aspect, a progressive morphological filter can include the process of sorting non-surface points from surface points. The process can begin by forming regularly spaced grid136from the point cloud data, and locating cells131within the grid. Within each of cells131, a point having a minimum elevation is chosen. Sliding filter window133can be used to gather subsets of the minimum elevation points identified as minimum point arrays134. Another sliding filter window can be applied to minimum point arrays134to create an array137of points having maximum elevations within minimum point arrays134. The elevations of the points in the maximum elevation arrays137can be compared to a threshold, and the points can be identified as surface or non-surface points based on their relationships to the threshold. The progressive morphological filter gradually increases the window size of the sliding filter windows and, using various pre-selected elevation difference thresholds, removes buildings, vehicles, and vegetation from the dataset while preserving surface data. See Zhang et al., A Progressive Morphological Filter for Removing Nonground Measurements from Airborne LIDAR Data, IEEE Transactions on Geoscience and Remote Sensing, 41:4, April 2003, which is incorporated herein by reference in its entirety. Referring now toFIG.4A, surface data, isolated according to the processes described herein or any similar process, can be broken into sectors of pre-selected sizes that can then be aligned. Sector alignment209can be used to address deformities in the dataset such as, for example, but not limited to, twists. Sector alignment209can therefore perform what amounts to an approximation of the twists in the data. In an aspect, the approximation can be linear, non-linear, or a combination. Surfaces in point sector137can be broken into sectors of a size amenable to being subjected to the alignment process, as discussed herein. In some configurations, the size of the sectors can be selected based upon characteristics such as point density of the dataset. In an aspect, the size of the sectors can include, but is not limited to including, 100 m×100 m. Continuing to refer toFIG.4A, coordinates (such as, for example, but not limited to, universal transverse mercator (UTM)) can be computed for each corner of each sector138/139, and coordinate transform (such as, for example, but not limited to, normal distributions transform (NDT)) values can be computed for each corner as well. Sectors can be aligned by applying the coordinate transform computed for the corners of a first sector to a second sector that is not aligned with the first sector. If a corner of the second sector that is not aligned with the first sector is also not aligned with a third sector at one of the transformed points, a new transform is computed for that corner that is a function of the coordinate transform of the adjacent point of the third sector and the first sector. In an example, to determine which sectors are adjacent and aligned, sectors can be evaluated based on the distance between any two points DA-DD in adjacent sectors138/139. For example, if the shortest distance between any two points in adjacent sectors138/139is below first pre-selected threshold DT1, sectors138/139are considered to be aligned.
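A minimal sketch of this adjacency test appears below, including the average-distance fallback that the following paragraphs elaborate. Nearest-neighbor distances computed with a k-d tree stand in for the point-to-point distances DA-DD shown inFIG.4A, and the value used for the second threshold DT2 is an assumption; only DT1 (0.85 m) is stated in the text.

```python
import numpy as np
from scipy.spatial import cKDTree

def sectors_aligned(sector_a: np.ndarray, sector_b: np.ndarray,
                    dt1: float = 0.85, dt2: float = 1.5) -> bool:
    """Decide whether two adjacent sectors of surface points are aligned.

    sector_a, sector_b: (N, 3) and (M, 3) arrays of surface points.
    dt1: threshold on the shortest distance between the sectors (0.85 m in the text).
    dt2: threshold on the average distance (value assumed for illustration).
    """
    tree = cKDTree(sector_b)
    nn_dist, _ = tree.query(sector_a)   # nearest point in sector_b for every point in sector_a
    if nn_dist.min() <= dt1:            # shortest distance at or below DT1 -> aligned
        return True
    return nn_dist.mean() <= dt2        # otherwise fall back to the average distance
```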
In some configurations, pre-selected threshold DT1can include, but is not limited to including, 0.85 m. If shortest distance DAis greater than DT1, average DAVGof all point distances DA-DDcan be taken. If DAVGis less than second pre-selected threshold DT2, sectors138/139can be considered aligned. On the other hand, if DAVGis greater than or equal to second pre-selected threshold DT2, sectors138/139can be considered not aligned. Note that any coordinate system and any coordinate transform can be used to align the sectors. Referring now toFIG.4B, continuing with the example, distances D1/D2can be computed between the corners of sectors138/139(FIG.4A), and a relationship between distances D1/D2can be computed. For example, average D3of distances D1/D2can be taken. Other relationships are possible. If third pre-selected threshold D4is greater than or equal to average D3, sectors138/139(FIG.4A) can be considered aligned. If, on the other hand, third pre-selected threshold D4is less than average D3, sectors138/139(FIG.4A) are considered not aligned. An example of eight sectors A-H in various states of alignment is shown. Average distances801-819between sectors A-H are evaluated with respect to third pre-selected threshold D4. Table821presents sector status with respect to neighborhood alignment. The contents of table821are tabulated in table823in which sectors A-H are shown alongside their total numbers of aligned neighbors in comparison to their total number of neighbors. Table825is a sorted version of table823, sorted with sectors A-H having the least need for alignment to those having the most need for alignment. For example, sectors F and G are aligned with all their neighbors, while sectors A and E are aligned with none of their neighbors. Sectors B, H, C, D, A, and E need to be aligned with at least one neighbor. The process begins by choosing the sector needing the fewest neighbors aligned, in this case sector B. Referring now toFIG.4C, sector B has three neighbors, sectors E, F, and H, two of which, sectors F and H, are aligned with sector B. To align sector E with sector B, transforms NDT1and NDT2can be applied to the corners of sector E adjacent to the corners of sector B. For example, the point in sector E at the sector E end of line861can be transformed using transform NDT1associated with the point at the sector B end of line861. If sector E had no further neighbors, the point in sector E at the sector E end of line863can be transformed using transform NDT2associated with the point at the sector B end of line863. Sector E, however, has neighbor sector C with which sector E is not aligned. Transform NDT4is associated with the sector C point adjacent to the sector E point at the end of line865. To complete the alignment of sector E, a relationship between NDT2and NDT4can be developed and applied to the sector E point at the ends of lines863and865. The relationship can include, but is not limited to including, averaging NDT2with NDT4and applying the average transform to sector E. This process is repeated until all sectors are aligned. Referring now toFIG.4D, method250for aligning sectors can include, but is not limited to including, choosing251candidate adjacent sectors, and determining253a minimum distance between points in adjacent sectors. If255the minimum distance is less than or equal to a first threshold, method250can include marking257the candidate adjacent sectors as aligned and returning to choosing251more candidate adjacent sectors. 
If255the minimum distance is greater than the first threshold, method250can include taking259the average distance between all points in the adjacent sectors. If261the average distance is less than or equal to a second threshold, method250can include marking257the candidate adjacent sectors as aligned and returning to choosing251more candidate adjacent sectors. If261the average distance is greater than the second threshold, method250can include determining263coordinates for each corner of each possibly misaligned sector. Referring now toFIG.4E, method250can include determining265a transform for each corner of each possibly misaligned sector. Method250can include determining267distances between the corners of the possibly misaligned sector based on the coordinates, and determining269a relationship between the corner distances in adjacent corners, the relationship being associated with the line connecting the corners. The relationship can include, but is not limited to including, an average, a weighted average, or any other way to identify a value representative of the distance between the two possibly misaligned sectors. If271the average is less than or equal to a third threshold, method250can include marking257the candidate adjacent sectors as aligned and returning to choosing251more candidate adjacent sectors. If271the average is greater than a third threshold, method250can include evaluating273alignment between all adjacent sectors, and sorting275the sectors based on the number of aligned neighbors. Method250can include choosing277the sector from the sorted list of sectors that has the largest number, but not all, of aligned neighbors. Referring now toFIG.4F, method250can include applying279the transform from the chosen sector to its non-aligned neighbor at adjacent corners. If281the non-aligned sector has another non-aligned neighbor, and if283a corner of the non-aligned neighbor would be used for alignment with a same corner that is being aligned with the chosen sector, method250can include averaging285the transforms of the same corner and applying the averaged transform to the non-aligned sector. If287there are more sectors to align, method250can include returning to step251. If287there are no more sectors to align, method250can include ending the process. If281the non-aligned sector does not have another non-aligned neighbor, and if287there are more sectors to align, method250can include returning to the beginning of method250by choosing251candidate adjacent sectors that have not been aligned. If283a corner of the non-aligned neighbor would not be used for alignment with a same corner that is being aligned with the chosen sector, method250can include continuing method250at step281. Referring now toFIG.5, aligned surface point sectors138including surface points determined previously (seeFIG.3) can include features that can include possible walls1001/1003/1005. To determine if any wall-like features are included in the dataset, a conventional polygon-forming algorithm such as, for example, but not limited to, convex hull, as described herein, can be used. Using, for example, a convex hull algorithm, origin1017/1027/1037can be chosen from a group of points that could represent a feature. Points that lie in the space relatively close to origin1017, for example, points1011/1013/1015can each be examined. Whichever point is associated with the smallest angle from the vertical axis is chosen as a first corner of a polygon that will result from the analysis. 
The next point is chosen based on the next smallest angle. The first corner is connected to the next point. The next point is chosen based on the next smallest angle and is connected to the previously determined point. Finally, the last chosen point is connected to the origin to form a polygon. Angle1001A that the polygon makes with x-axis1043can be compared to a pre-selected threshold value. If angle1001A is greater than the absolute value of the threshold angle, the polygon is categorized as a wall feature. If angle1001A is less than or equal to the absolute value of the threshold, the polygon is categorized as not a wall feature. Examples of other possible wall features are shown inFIG.5. The threshold angle can include, but is not limited to including, |45°| from horizontal. Referring now toFIG.6, filter201can be applied to aerial data701to create a uniform grid of points. For example, but not limited to, a voxel filter can be applied. Voxels1105can be evaluated as discussed herein to determine one representative point1103for each voxel, creating grid1107. The convex hull process, or any process that can determine shapes from point cloud data, can be applied to filtered aerial data711to sort out possible aerial features1101(FIG.1A), for example, but not limited to, roof and surface features1109/1111/1113. Angles1115/1117/1119with a horizontal axis can be evaluated with respect to a threshold angle, and shapes whose angle exceeds the threshold angle are not used for further analysis because those shapes are not likely to be roofs or ground surfaces. For example, angle1115of shape1109with respect to horizontal axis1116, if determined to be greater than the absolute value of the threshold angle, would cause shape1109to be eliminated from further analysis. Likewise with shapes1111/1113and angles1117/1119. In some configurations, the threshold angle can include |15°| from horizontal. Features that are elevated with respect to each other can represent roofs. For example, feature1117, elevated with respect to feature1113, can be used to estimate the location of connected features1118(FIG.1A) such as, but not limited to, walls, by connecting the corners of feature1117with surface points from surface features such as feature1113. Wall features located in ground data601(FIG.1A) can be registered to wall features located in aerial data701. Any of a number of conventional methods, such as the generalized Iterative Closest Point (ICP) algorithm, Normal-Distribution Transform, Fast Point-Feature Histograms, and 4-Points Congruent Sets, can be used. The transformation determined from registration can be applied to ground data601(FIG.1A). Referring now toFIG.7, system1200for combining aerial point cloud data701and ground-based point cloud data601can include surface ground processor1201creating surface grid1217from ground PC data601. Surface grid1217can include surfaces located in ground PC data601. Surface ground processor1201can receive ground PC data601and provide surface grid1217to sector processor1203. Sector processor1203can break surface grid1217into sectors1219. Sectors1219can be large enough to include a representative number of points, but not too large so that processing time becomes an issue. Thus, sector size1219can vary with the density of ground PC data601. Sector processor1203can align unaligned sectors within surface grid1217, and can supply aligned sectors1219to ground feature processor1205.
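Returning to the angle tests described with reference toFIG.5andFIG.6, the sketch below approximates the angle a candidate feature makes with horizontal by the tilt of a least-squares plane fit through the feature's points, then applies the |45°| wall threshold for ground features and the |15°| roof/surface threshold for aerial features. The plane-fit formulation and the function names are assumptions made for illustration; the disclosure frames the test in terms of the angle the polygon makes with the x-axis.

```python
import numpy as np

def angle_from_horizontal(points: np.ndarray) -> float:
    """Angle, in degrees, between a best-fit plane through `points` and horizontal.

    The plane normal is the singular vector associated with the smallest singular
    value of the mean-centered points; the plane's tilt equals the angle between
    that normal and the vertical (z) axis."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]                                    # direction of least variance
    cos_tilt = abs(normal[2]) / np.linalg.norm(normal)
    return float(np.degrees(np.arccos(np.clip(cos_tilt, -1.0, 1.0))))

def is_wall(points: np.ndarray, threshold_deg: float = 45.0) -> bool:
    # Ground features steeper than |45 deg| from horizontal are treated as walls.
    return angle_from_horizontal(points) > threshold_deg

def is_roof_or_surface(points: np.ndarray, threshold_deg: float = 15.0) -> bool:
    # Aerial features within |15 deg| of horizontal are kept as roof/surface candidates.
    return angle_from_horizontal(points) <= threshold_deg
```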
Ground feature processor1205can locate ground rigid features1223within aligned sectors1219, and supply ground rigid features1223to registration processor1209. Aerial rigid feature processor1207can receive aerial PC data701, locate aerial rigid features1215, and supply aerial rigid features1215to registration processor1209. Registration processor1209can register ground rigid features1223to aerial rigid features1215, can determine transform1213from that process, and can supply transform1213to transform processor1211. Transform processor1211can apply transform1213to ground PC data601, and can provide ground PC data transform602to autonomous processor1212. In some configurations, the ground rigid features can include walls. In some configurations, the aerial rigid features can include roofs and surfaces. System1200can optionally include a filter filtering aerial PC data701and ground PC data601. The filter can optionally include a voxel filter. Continuing to refer toFIG.7, sector processor1203can optionally include computer code executing the steps of (a) if there are more of the sectors to align, (b) choosing candidate adjacent sectors, and (c) determining a minimum distance between points in the adjacent sectors. (d) If the minimum distance is less than or equal to a first threshold, sector processor1203can include marking the candidate adjacent sectors as aligned and returning to (b). (e) If the minimum distance is greater than the first threshold, sector processor1203can include taking the average distance between all points in the adjacent sectors. (f) If the average distance is less than or equal to a second threshold, sector processor1203can include marking the candidate adjacent sectors as aligned and returning to (b). (g) If the average distance is greater than the second threshold, (1) sector processor1203can include determining coordinates for each corner of each possibly misaligned sector and (2) determining a transform for each corner of each possibly misaligned sector. (h) Sector processor1203can include determining distances between the corners of the possibly misaligned sector based on the coordinates, and (i) determining a relationship between the corner distances in adjacent corners. The relationship can be associated with the line connecting the corners. (j) If the average is less than or equal to a third threshold, sector processor1203can include marking the candidate adjacent sectors as aligned and returning to (b). (k) If the average is greater than a third threshold, evaluating alignment between all adjacent sectors, (l) sorting the sectors based on the number of aligned neighbors, (m) choosing the sector from the sorted list of sectors that has the largest number, but not all, of aligned neighbors, and (n) applying the transform from the chosen sector to its non-aligned neighbor at adjacent corners. (o) If the non-aligned sector has another non-aligned neighbor, and if a corner of the non-aligned neighbor would be used for alignment with a same corner that is being aligned with the chosen sector, sector processor1203can include (1) averaging the transforms of the same corner, and (2) applying the averaged transform to the non-aligned sector. (p) If the non-aligned sector does not have another non-aligned neighbor, and if there are more sectors to align, sector processor1203can include returning to (b).
(q) If a corner of the non-aligned neighbor would not be used for alignment with a same corner that is being aligned with the chosen sector, sector processor1203can include continuing sector processor at (o). The relationship can optionally include an average. Sector processor1203can optionally include identifying, as the relationship, a value representative of the distance between the two possibly misaligned sectors. Sectors1218can optionally include about a 50-150 m2range. Referring now toFIG.8, system1300for locating walls in aerial point cloud (PC) data701and ground PC data601can include, but is not limited to including filter processor1301filtering aerial PC data701and ground PC data601, and sector processor1303creating sectors1317of a pre-selected size within filtered aerial PC data1315and ground PC data1313. Sector processor1303can align ground sectors1313in the ground PC data, and ground wall processor1305can identify possible walls1319in aligned ground PC sectors1313. System1300can include aerial wall processor1311identifying possible roofs and possible surfaces1321in the aerial PC data, polygon processor1309forming polygons1323around possible walls1319, and roofs/surfaces1321, and wall processor1307forming walls1323from the roofs and surfaces and providing them to autonomous device processor1212. Filter processor1301can optionally include downsampling ground PC data601. Filter processor1301can optionally include a voxel filter filtering ground PC data601. Configurations of the present teachings are directed to computer systems for accomplishing the methods discussed in the description herein, and to computer readable media containing programs for accomplishing these methods. The raw data and results can be stored for future retrieval and processing, printed, displayed, transferred to another computer, and/or transferred elsewhere. Communications links can be wired or wireless, for example, using cellular communication systems, military communications systems, and satellite communications systems. Parts of the system can operate on a computer having a variable number of CPUs. Other alternative computer platforms can be used. The present configuration is also directed to software for accomplishing the methods discussed herein, and computer readable media storing software for accomplishing these methods. The various modules described herein can be accomplished on the same CPU, or can be accomplished on different computers. In compliance with the statute, the present configuration has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the present configuration is not limited to the specific features shown and described, since the means herein disclosed comprise preferred forms of putting the present configuration into effect. Methods can be, in whole or in part, implemented electronically. Signals representing actions taken by elements of the system and other disclosed configurations can travel over at least one live communications network. Control and data information can be electronically executed and stored on at least one computer-readable medium. The system can be implemented to execute on at least one computer node in at least one live communications network. 
Common forms of at least one computer-readable medium can include, for example, but not be limited to, a floppy disk, a flexible disk, a hard disk, magnetic tape, or any other magnetic medium, a compact disk read only memory or any other optical medium, punched cards, paper tape, or any other physical medium with patterns of holes, a random access memory, a programmable read only memory, an erasable programmable read only memory (EPROM), a Flash EPROM, or any other memory chip or cartridge, or any other medium from which a computer can read. Further, the at least one computer readable medium can contain graphs in any form, subject to appropriate licenses where necessary, including, but not limited to, Graphic Interchange Format (GIF), Joint Photographic Experts Group (JPEG), Portable Network Graphics (PNG), Scalable Vector Graphics (SVG), and Tagged Image File Format (TIFF). While the present teachings have been described above in terms of specific configurations, it is to be understood that they are not limited to these disclosed configurations. Many modifications and other configurations will come to mind to those skilled in the art to which this pertains, and which are intended to be and are covered by both this disclosure and the appended claims. It is intended that the scope of the present teachings should be determined by proper interpretation and construction of the appended claims and their legal equivalents, as understood by those of skill in the art relying upon the disclosure in this specification and the attached drawings. | 29,207 |
11861856 | DETAILED DESCRIPTION In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant disclosure. However, it should be apparent to those skilled in the art that the present disclosure may be practiced without such details. In other instances, well-known methods, procedures, systems, components, and/or circuitry have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present disclosure. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present disclosure is not limited to the embodiments shown, but to be accorded the widest scope consistent with the claims. The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprise,” “comprises,” and/or “comprising,” “include,” “includes,” and/or “including,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that the term “system,” “engine,” “unit,” “module,” and/or “block” used herein are one method to distinguish different components, elements, parts, sections or assembly of different levels in ascending order. However, the terms may be displaced by another expression if they achieve the same purpose. Generally, the word “module,” “unit,” or “block,” as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions. A module, a unit, or a block described herein may be implemented as software and/or hardware and may be stored in any type of non-transitory computer-readable medium or another storage device. In some embodiments, a software module/unit/block may be compiled and linked into an executable program. It will be appreciated that software modules can be callable from other modules/units/blocks or from themselves, and/or may be invoked in response to detected events or interrupts. Software modules/units/blocks configured for execution on computing devices (e.g., processor210as illustrated inFIG.2) may be provided on a computer-readable medium, such as a compact disc, a digital video disc, a flash drive, a magnetic disc, or any other tangible medium, or as a digital download (and can be originally stored in a compressed or installable format that needs installation, decompression, or decryption prior to execution). Such software code may be stored, partially or fully, on a storage device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware modules/units/blocks may be included in connected logic components, such as gates and flip-flops, and/or can be included of programmable units, such as programmable gate arrays or processors. 
The modules/units/blocks or computing device functionality described herein may be implemented as software modules/units/blocks but may be represented in hardware or firmware. In general, the modules/units/blocks described herein refer to logical modules/units/blocks that may be combined with other modules/units/blocks or divided into sub-modules/sub-units/sub-blocks despite their physical organization or storage. The description may be applicable to a system, an engine, or a portion thereof. It will be understood that when a unit, engine, module or block is referred to as being “on,” “connected to,” or “coupled to,” another unit, engine, module, or block, it may be directly on, connected or coupled to, or communicate with the other unit, engine, module, or block, or an intervening unit, engine, module, or block may be present unless the context clearly indicates otherwise. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. These and other features, and characteristics of the present disclosure, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, may become more apparent upon consideration of the following description with reference to the accompanying drawings, all of which form a part of this disclosure. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended to limit the scope of the present disclosure. It is understood that the drawings are not to scale. The term “image” in the present disclosure is used to collectively refer to image data (e.g., scan data, projection data) and/or images of various forms, including a two-dimensional (2D) image, a three-dimensional (3D) image, a four-dimensional (4D) image, etc. The terms “pixel” and “voxel” in the present disclosure are used interchangeably to refer to an element of an image. The terms “region,” “location,” and “area” in the present disclosure may refer to a location of an anatomical structure shown in the image or an actual location of the anatomical structure existing in or on a target subject's body, since the image may indicate the actual location of a certain anatomical structure existing in or on the target subject's body. In some embodiments, an image of an object may be referred to as the object for brevity. Segmentation of an image of an object may be referred to as segmentation of the object. For example, segmentation of an organ refers to segmentation of a region corresponding to the organ in an image. The present disclosure provides mechanisms (which can include methods, systems, a computer-readable medium, etc.) for determining a target region of interest (ROI) for image registration. The methods provided in the present disclosure may include obtaining a target image related to the subject including a target portion of the subject and obtaining feature information related to a movement of one or more feature portions (e.g., organs and/or tissue) of the subject. For example, the movement of the one or more feature portions may include a physiological movement, such as a respiratory movement, a cardiac movement, an artery pulsation, etc. As another example, the movement of the one or more feature portions may include a movement caused by a pose difference relating to the subject.
The methods may further include identifying, from the one or more feature portions, one or more reference portions of the subject based on the feature information. A position of the target portion of the subject may be unrelated to a movement of the one or more reference portions. The target ROI may be determined based on corresponding feature information related to the one or more reference portions. In some conventional methods for determining the target ROI, feature portions of the subject related to a physiological movement or a pose difference are often excluded from the target ROI. For example, such feature portions may be segmented from the target image and removed from the target image before the determination of the target ROI. The methods provided by the present disclosure may include determining whether the movement of a feature portion related to a physiological movement or a pose difference affects the position of the target portion of the subject. If the movement of a feature portion affects the position of the target portion, the position of the feature portion may be highly relevant to the position of the target portion. In some embodiments of the present disclosure, the target ROI may incorporate such a feature portion, which may effectively improve the accuracy of the result of image registration of medical images, thereby improving the efficiency and/or accuracy of diagnosis and/or treatment performed based thereon. Additionally or alternatively, if the accuracy of image registration is improved, it may be more efficiently for further processing including, e.g., modification of the result of image registration by a user (e.g., an operator). FIG.1is a schematic diagram illustrating an exemplary medical system100according to some embodiments of the present disclosure. As shown, the medical system100may include a medical device110, a network120, one or more terminals130, a processing device140, and a storage device150. In some embodiments, the medical device110, the terminal(s)130, the processing device140, and/or the storage device150may be connected to and/or communicate with each other via a wireless connection (e.g., the network120), a wired connection, or a combination thereof. The connection between the components of the medical system100may be variable. Merely byway of example, the medical device110may be connected to the processing device140through the network120, as illustrated inFIG.1. As another example, the medical device110may be connected to the processing device140directly. As a further example, the storage device150may be connected to the processing device140through the network120, as illustrated inFIG.1, or connected to the processing device140directly. As still a further example, a terminal130may be connected to the processing device140through the network120, as illustrated inFIG.1, or connected to the processing device140directly. The medical device110may include an imaging device, a radiotherapy device, or a combination thereof. The imaging device may generate or provide image data via scanning a subject (e.g., a patient) disposed on a scanning table of the medical device110. In some embodiments, the medical device110may include a single-modality scanner and/or a multi-modality scanner. The single-modality scanner may include, for example, a computed tomography (CT) scanner. 
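To make the overall flow easier to follow, the following Python sketch outlines the method summarized above. It is purely illustrative; every name in it (FeaturePortion, identify_reference_portions, the example values) is a hypothetical label chosen for this sketch rather than an identifier from the disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FeaturePortion:
    """Hypothetical record holding the feature information of one feature portion."""
    name: str
    movement_range_mm: float        # movement range from the feature information
    affects_target_position: bool   # whether its movement shifts the target portion

def identify_reference_portions(feature_portions: List[FeaturePortion]) -> List[FeaturePortion]:
    # A reference portion is a feature portion whose movement is unrelated to
    # the position of the target portion of the subject.
    return [p for p in feature_portions if not p.affects_target_position]

# Example: the diaphragm moves a thoracic target portion, a hand does not.
portions = [
    FeaturePortion("diaphragm", 11.0, affects_target_position=True),
    FeaturePortion("left hand", 5.0, affects_target_position=False),
]
print([p.name for p in identify_reference_portions(portions)])  # ['left hand']
```

The target ROI would then be built from the feature information of the returned reference portions, as detailed later in the disclosure.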
The multi-modality scanner may include a single-photon emission computed tomography-computed tomography (SPECT-CT) scanner, a positron emission tomography-computed tomography (PET-CT) scanner, a computed tomography-ultra-sonic (CT-US) scanner, a digital subtraction angiography-computed tomography (DSA-CT) scanner, or the like, or a combination thereof. In some embodiments, the image data may include projection data, images relating to the subject, etc. The projection data may be raw data generated by the medical device110by scanning the subject or data generated by a forward projection on an image relating to the subject. In some embodiments, the subject may include a body, a substance, an object, or the like, or a combination thereof. In some embodiments, the subject may include a specific portion of a body, such as a head, a thorax, and abdomen, or the like, or a combination thereof. In some embodiments, the subject may include a specific organ or region of interest, such as an esophagus, a trachea, a bronchus, a stomach, a gallbladder, a small intestine, a colon, a bladder, a ureter, a uterus, a fallopian tube, etc. In some embodiments, the medical device110may include a radiotherapy device for performing a radiation treatment. For example, the medical device110may include a multi-modality (e.g., two-modality) apparatus to acquire a medical image and perform the radiation treatment. The medical image may be a Computed Tomography (CT) image, a (Magnetic Resonance Imaging) MRI image, an ultrasonic image, a four-dimensional (4D) image, a three-dimensional (3D) image, a two-dimensional (2D) image, a diagnostic image, and a non-diagnostic image, or the like, or a combination thereof. The medical device110may include one or more diagnostic devices and/or treatment devices. For example, a CT device, a Cone beam CT, a Positron Emission Tomography (PET) CT, a Volume CT, an RT device, and a couch, or the like, or a combination thereof. As illustrated inFIG.1, the medical device110(e.g., an IGRT device) may include an imaging device111, a radiotherapy device112, a couch113, or the like, or a combination thereof. The imaging device111may obtain scan data of a subject. The radiotherapy device112may perform a radiation treatment according to a treatment image generated based on the scan data. The imaging device111and the RT device112may share the couch113in an IGRT process. In some embodiments, the medical device110may be integrated with one or more other devices that may facilitate the scanning or treatment of the subject, such as an image-recording device. The image-recording device may be configured to take various types of images related to the subject. For example, the image-recording device may be a two-dimensional (2D) camera that takes pictures of the exterior or outline of the subject. As another example, the image-recording device may be a 3D scanner (e.g., a laser scanner, an infrared scanner, a 3D CMOS sensor) that records the spatial representation of the subject. The network120may include any suitable network that can facilitate the exchange of information and/or data for the medical system100. In some embodiments, one or more components of the medical system100(e.g., the medical device110, the processing device140, the storage device150, the terminal(s)130) may communicate information and/or data with one or more other components of the medical system100via the network120. For example, the processing device140may obtain image data from the medical device110via the network120. 
As another example, the processing device140may obtain user instruction(s) from the terminal(s)130via the network120. The network120may be or include a public network (e.g., the Internet), a private network (e.g., a local area network (LAN)), a wired network, a wireless network (e.g., an 802.11 network, a Wi-Fi network), a frame relay network, a virtual private network (VPN), a satellite network, a telephone network, routers, hubs, switches, server computers, and/or any combination thereof. For example, the network120may include a cable network, a wireline network, a fiber-optic network, a telecommunications network, an intranet, a wireless local area network (WLAN), a metropolitan area network (MAN), a public switched telephone network (PSTN), a Bluetooth™ network, a ZigBee™ network, a near field communication (NFC) network, or the like, or any combination thereof. In some embodiments, the network120may include one or more network access points. For example, the network120may include wired and/or wireless network access points such as base stations and/or internet exchange points through which one or more components of the medical system100may be connected to the network120to exchange data and/or information. The terminal(s)130may be connected to and/or communicate with the medical device110, the processing device140, and/or the storage device150. For example, the terminal(s)130may obtain a processed image from the processing device140, such as a display image including a target ROI. As another example, the terminal(s)130may enable user interactions with the medical system100. In some embodiments, the terminal(s)130may include a mobile device131, a tablet computer132, a laptop computer133, or the like, or any combination thereof. For example, the mobile device131may include a mobile phone, a personal digital assistant (PDA), a gaming device, a navigation device, a point of sale (POS) device, a laptop, a tablet computer, a desktop, or the like, or any combination thereof. In some embodiments, the terminal(s)130may include an input device, an output device, etc. The input device may include alphanumeric and other keys that may be input via a keyboard, a touch screen (for example, with haptics or tactile feedback), a speech input, an eye-tracking input, a brain monitoring system, or any other comparable input mechanism. The input information received through the input device may be transmitted to the processing device140via, for example, a bus, for further processing. Other types of input devices may include a cursor control device, such as a mouse, a trackball, or cursor direction keys, etc. The output device may include a display, a speaker, a printer, or the like, or a combination thereof. In some embodiments, the terminal(s)130may be part of the processing device140. The processing device140may process data and/or information obtained from the medical device110, the storage device150, the terminal(s)130, or other components of the medical system100. For example, the processing device140may determine, for each of one or more feature portions of the subject, whether a position of the target portion of the subject is unrelated to the movement of the feature portion. In response to determining that the position of the target portion of the subject is unrelated to the movement of the feature portion, the processing device140may designate the feature portion as one of one or more reference portions. As another example, the processing device140may determine the target ROI based on feature information related to the one or more reference portions.
In some embodiments, the processing device140may be a single server or a server group. The server group may be centralized or distributed. In some embodiments, the processing device140may be local to or remote from the medical system100. For example, the processing device140may access information and/or data from the medical device110, the storage device150, and/or the terminal(s)130via the network120. As another example, the processing device140may be directly connected to the medical device110, the terminal(s)130, and/or the storage device150to access information and/or data. In some embodiments, the processing device140may be implemented on a cloud platform. For example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, and inter-cloud, a multi-cloud, or the like, or a combination thereof. In some embodiments, the processing device140may be implemented by a computing device200having one or more components as described in connection withFIG.2. The storage device150may store data, instructions, and/or any other information. In some embodiments, the storage device150may store data obtained from the processing device140, the terminal(s)130, and/or the storage device150. In some embodiments, the storage device150may store data and/or instructions that the processing device140may execute or use to perform exemplary methods described in the present disclosure. In some embodiments, the storage device150may include a mass storage device, a removable storage device, a volatile read-and-write memory, a read-only memory (ROM), or the like, or any combination thereof. Exemplary mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc. Exemplary removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc. Exemplary volatile read-and-write memory may include a random access memory (RAM). Exemplary RAM may include a dynamic RAM (DRAM), a double date rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), and a zero-capacitor RAM (Z-RAM), etc. Exemplary ROM may include a mask ROM (MROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a compact disk ROM (CD-ROM), and a digital versatile disk ROM, etc. In some embodiments, the storage device150may be implemented on a cloud platform as described elsewhere in the disclosure. In some embodiments, the storage device150may be connected to the network120to communicate with one or more other components of the medical system100(e.g., the processing device140, the terminal(s)130). One or more components of the medical system100may access the data or instructions stored in the storage device150via the network120. In some embodiments, the storage device150may be part of the processing device140. In some embodiments, a three-dimensional coordinate system160may be used in the medical system100as illustrated inFIG.1. A first axis may be parallel to the lateral direction of the couch (e.g., the X-direction as shown inFIG.1). A second axis may be parallel to the longitudinal direction of the couch (e.g., the Z-direction as shown inFIG.1). A third axis may be parallel to a vertical direction of the couch (e.g., the Y direction as shown inFIG.1). The origin of the three-dimensional coordinate system may be any point in the space. 
The origin of the three-dimensional coordinate system may be determined by an operator. The origin of the three-dimensional coordinate system may be determined by the medical system100. In some embodiments, the position of the one or more portions of the subject (e.g., the one or more feature portions) may be described using the 3D coordinate system160. In some embodiments, the position of different pixels or voxels of an image may be described using the 3D coordinate system160. This description is intended to be illustrative, and not to limit the scope of the present disclosure. Many alternatives, modifications, and variations will be apparent to those skilled in the art. The features, structures, methods, and other characteristics of the exemplary embodiments described herein may be combined in various ways to obtain additional and/or alternative exemplary embodiments. For example, the storage device150may be a data storage including cloud computing platforms, such as public cloud, private cloud, community, and hybrid clouds, etc. However, those variations and modifications do not depart the scope of the present disclosure. FIG.2is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary computing device200on which the processing device140may be implemented according to some embodiments of the present disclosure. As illustrated inFIG.2, the computing device200may include a processor210, a storage220, an input/output (I/O)230, and a communication port240. The processor210may execute computer instructions (e.g., program code) and perform functions of the processing device140in accordance with techniques described herein. The computer instructions may include, for example, routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions described herein. For example, the processor210may process image data obtained from the medical device110, the terminals130, the storage device150, and/or any other component of the medical system100. In some embodiments, the processor210may include one or more hardware processors, such as a microcontroller, a microprocessor, a reduced instruction set computer (RISC), an application-specific integrated circuits (ASICs), an application-specific instruction-set processor (ASIP), a central processing unit (CPU), a graphics processing unit (GPU), a physics processing unit (PPU), a microcontroller unit, a digital signal processor (DSP), a field programmable gate array (FPGA), an advanced RISC machine (ARM), a programmable logic device (PLD), any circuit or processor capable of executing one or more functions, or the like, or any combinations thereof. Merely for illustration, only one processor is described in the computing device200. However, it should be noted that the computing device200in the present disclosure may also include multiple processors, and thus operations and/or method operations that are performed by one processor as described in the present disclosure may also be jointly or separately performed by the multiple processors. 
For example, if in the present disclosure the processor of the computing device200executes both operation A and operation B, it should be understood that operation A and operation B may also be performed by two or more different processors jointly or separately in the computing device200(e.g., a first processor executes operation A and a second processor executes operation B, or the first and second processors jointly execute operation s A and B). The storage220may store data/information obtained from the medical device110, the terminals130, the storage device150, and/or any other component of the medical system100. In some embodiments, the storage220may include a mass storage, a removable storage, a volatile read-and-write memory, a read-only memory (ROM), or the like, or any combination thereof. For example, the mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc. The removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc. The volatile read-and-write memory may include a random access memory (RAM). The RAM may include a dynamic RAM (DRAM), a double date rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), and a zero-capacitor RAM (Z-RAM), etc. The ROM may include a mask ROM (MROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a compact disk ROM (CD-ROM), and a digital versatile disk ROM, etc. In some embodiments, the storage220may store one or more programs and/or instructions to perform exemplary methods described in the present disclosure. For example, the storage220may store a program for the processing device140for determining the position of a target region of a subject (e.g., a target portion of a patient). The I/O230may input and/or output signals, data, information, etc. In some embodiments, the I/O230may enable a user interaction with the processing device140. In some embodiments, the I/O230may include an input device and an output device. Examples of the input device may include a keyboard, a mouse, a touch screen, a microphone, or the like, or a combination thereof. Examples of the output device may include a display device, a loudspeaker, a printer, a projector, or the like, or a combination thereof. Examples of the display device may include a liquid crystal display (LCD), a light-emitting diode (LED)-based display, a flat panel display, a curved screen, a television device, a cathode ray tube (CRT), a touch screen, or the like, or a combination thereof. The communication port240may be connected to a network (e.g., the network120) to facilitate data communications. The communication port240may establish connections between the processing device140and the medical device110, the terminals130, and/or the storage device150. The connection may be a wired connection, a wireless connection, any other communication connection that can enable data transmission and/or reception, and/or any combination of these connections. The wired connection may include, for example, an electrical cable, an optical cable, a telephone wire, or the like, or any combination thereof. The wireless connection may include, for example, a Bluetooth™ link, a Wi-Fi™ link, a WiMax™ link, a WLAN link, a ZigBee™ link, a mobile network link (e.g., 3G, 4G, 5G), or the like, or a combination thereof. 
In some embodiments, the communication port240may be and/or include a standardized communication port, such as RS232, RS485, etc. In some embodiments, the communication port240may be a specially designed communication port. For example, the communication port240may be designed in accordance with the digital imaging and communications in medicine (DICOM) protocol. FIG.3is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary mobile device300on which the terminals130may be implemented according to some embodiments of the present disclosure. As illustrated inFIG.3, the mobile device300may include a communication platform310, a display320, a graphic processing unit (GPU)330, a central processing unit (CPU)340, an I/O350, a memory360, and a storage390. In some embodiments, any other suitable component, including but not limited to a system bus or a controller (not shown), may also be included in the mobile device300. In some embodiments, a mobile operating system370(e.g., iOS™, Android™, Windows Phone™) and one or more applications380may be loaded into the memory360from the storage390in order to be executed by the CPU340. The applications380may include a browser or any other suitable mobile apps for receiving and rendering information relating to image processing or other information from the processing device140. User interactions with the information stream may be achieved via the I/O350and provided to the processing device140and/or other components of the medical system100via the network120. To implement various modules, units, and their functionalities described in the present disclosure, computer hardware platforms may be used as the hardware platform(s) for one or more of the elements described herein. A computer with user interface elements may be used to implement a personal computer (PC) or any other type of workstation or terminal device. A computer may also act as a server if appropriately programmed. FIG.4is a block diagram illustrating an exemplary processing device140according to some embodiments of the present disclosure. As illustrated inFIG.4, the processing device140may include an obtaining module410, an identification module420, a determination module430, and an image registration module440. The modules may be hardware circuits of all or part of the processing device140. The modules may also be implemented as an application or set of instructions read and executed by the processing device140. Further, the modules may be any combination of the hardware circuits and the application/instructions. For example, the modules may be part of the processing device140when the processing device140is executing the application/set of instructions. The obtaining module410may obtain data related to the medical system100. In some embodiments, the obtaining module410may obtain a target image related to the subject. The target image may include a target portion of the subject. For instance, the subject may be a patient or an animal (e.g., a dog, a cat). The target portion may include one or more organs or different tissue, or a portion thereof, of the subject. In some embodiments, the target portion may relate to an injury (e.g., a fracture), a lesion, etc. Merely by way of example, the target portion may include a tumor. In some embodiments, the obtaining module410may obtain feature information related to a movement of one or more feature portions of the subject. The identification module420may identify one or more reference portions of the subject. 
In some embodiments, the identification module420may determine, for each of the one or more first feature portions, whether the first feature portion is one of the one or more reference portions based on a first distance, a second distance, and the feature information related to the first feature portion. In some embodiments, the processing device140may determine, for each of the one or more second feature portions, whether the second feature portion is one of the one or more reference portions. For example, the identification module420may determine, for each of the one or more second feature portions of the subject, a third distance between the target portion and the second feature portion. The identification module420may segment, from the target image, the one or more portions at risk, the one or more first feature portions, the one or more second feature portions, and the target portion. The first distance, the second distance, and/or the third distance may be determined based on a result of the segmentation. The determination module430may determine a target ROI for image registration. In some embodiments, the determination module430may determine the target ROI in the target image based on feature information related to the one or more reference portions. In some embodiments, the determination module430may determine an initial ROI in the target image. The determination module430may determine the target ROI by performing an iterative process including a plurality of iterations based on the initial ROI. The iterative process may correspond to one or more rules. In each of the plurality of iterations, the determination module430may determine whether at least one of the one or more rules is satisfied. In response to determining that at least one of the one or more rules is satisfied, the determination module430may terminate the iterative process. The image registration module440may register the target image with a reference image. Merely by way of example, the processing device140may register the target image with the reference image using a feature-based algorithm, e.g., by aligning the target ROI with the reference ROI. In some embodiments, the image registration module440may generate a reference ROI in the reference image. The reference ROI may correspond to the target ROI, indicating that both the reference ROI and the target ROI correspond to the same portion of the subject. There may be a correlation between the reference ROI and the target ROI. The reference ROI may have the same shape and/or size as the target ROI. For example, the target image may be a planning image related to a radiation treatment, and the reference image may be a treating image related to the radiation treatment. The center of the target ROI in the planning image may be a planning isocenter, and the center of the reference ROI in the treatment image may be a treatment isocenter. In some embodiments, when the dimensions of the target image and the reference image are the same, the dimensions of the target ROI and the reference ROI may be the same. For example, the target ROI and the reference ROI may both be a 2D region. Alternatively, the dimensions of the target image and the reference image may be different. For instance, the target image may be a 3D image and the reference image may be a 2D image. The target ROI may be a 3D region and the reference ROI may be a 2D region. The registration may include a 3D-3D registration, a 3D-2D registration, a 2D-2D registration, or the like.
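A minimal skeleton of how the four modules described above could be organized in software is sketched below; the class and method names are placeholders chosen for illustration and are not part of the disclosure.

```python
class ObtainingModule:
    def obtain_target_image(self, source):
        """Fetch the target image (e.g., a planning image) from a device or storage."""
        raise NotImplementedError

    def obtain_feature_information(self, subject_info, library):
        """Look up feature information related to the movement of feature portions."""
        raise NotImplementedError

class IdentificationModule:
    def identify_reference_portions(self, target_image, feature_information):
        """Select feature portions whose movement does not affect the target portion."""
        raise NotImplementedError

class DeterminationModule:
    def determine_target_roi(self, target_image, reference_portions):
        """Grow an initial ROI iteratively until a termination rule is satisfied."""
        raise NotImplementedError

class ImageRegistrationModule:
    def register(self, target_image, reference_image, target_roi, reference_roi):
        """Align the target ROI with the reference ROI, e.g., with a feature-based algorithm."""
        raise NotImplementedError
```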
It should be noted that the above description is merely provided for the purposes of illustration, and is not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations or modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, a module mentioned above may be divided into two or more units. For example, the identification module may be divided into two units, one of which may be configured to identify one or more reference portions of the subject, and the other one may be configured to identify a reference region in the reference image. The reference region may correspond to the target image data. In some embodiments, the processing device140may include one or more additional modules. For example, the processing device140may further include a control module configured to generate control signals for one or more components in the medical system100. In some embodiments, one or more modules of the processing device140described above may be omitted. For example, the processing device140may be configured to determine a target ROI, and another computing device may be configured to perform an image registration operation based on the target ROI. Thus, the image registration module440may be omitted in the processing device140. FIG.5is a flowchart illustrating an exemplary process for image registration according to some embodiments of the present disclosure. At least a portion of process500may be implemented by the processing device140(e.g., the processor210of the computing device200as illustrated inFIG.2, the CPU340of the mobile device300as illustrated inFIG.3, one or more modules as shown inFIG.4). The operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process500may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order of the operations of the process500as illustrated inFIG.5and described below is not intended to be limiting. In502, the processing device140(e.g., the obtaining module410) may obtain a target image related to the subject. The target image may include a target portion of the subject. For instance, the subject may be a patient or an animal (e.g., a dog, a cat). The target portion may include one or more organs or different tissue, or a portion thereof, of the subject. In some embodiments, the target portion may relate to an injury (e.g., a fracture), a lesion, etc. Merely by way of example, the target portion may include a tumor. The processing device140may obtain the target image from one or more components of the medical system100(e.g., the medical device110or the storage device150). The target image may be a two-dimensional (2D) image, a three-dimensional (3D) image, a four-dimensional (4D) image (e.g., a temporal series of 3D images), etc. In some embodiments, the target image may be generated based on a first scan of the subject at a first time point. The first scan may be performed by a first imaging device, such as a CT scanner, a PET scanner, an MRI scanner, etc. The subject may hold a first pose at the first time point when the first scan is performed by the first imaging device. In some embodiments, the target image may be a planning image related to a radiation treatment.
In504, the processing device140(e.g., the obtaining module410) may obtain a reference image related to the subject. The reference image may include a representation of the target portion of the subject. The processing device140may obtain the reference image from one or more components of the medical system100(e.g., the medical device110or the storage device150). In some embodiments, the reference image may be generated based on a second scan of the subject at a second time point. The second time point may be different from the first time point. For example, the second time point may be later than the first time point. The second scan may be performed by a second imaging device, such as a CT scanner, a PET scanner, an MRI scanner, etc. The subject may hold a second pose at the second time point when the second scan is performed by the second imaging device. In some embodiments, the first imaging device and the second imaging device may be of the same type. For instance, the target image may be a planning image configured for generating a radiation treatment plan. The planning image may be generated based on a scan before a radiation treatment. In some embodiments, when generating the radiation treatment plan, the planning image may be segmented to obtain one or more portions at risk. For example, the contour of the one or more portions at risk may be marked in the planning image. The radiation treatment may include multiple radiation fractions. In each of the multiple radiation fractions, a portion of a total radiation dose is delivered to the target portion of the subject. The reference image may be a treatment image configured for guiding the radiation treatment. The treatment image may be generated based on a scan before each of the multiple radiation fractions. The target portion of the subject may move, for example, due to a physiological movement. The treatment image may include a representation of the target portion of the subject to be treated, a position of the target portion of the subject to be treated, and/or other information. Alternatively, the first imaging device and the second imaging device may be of different types. For instance, the first imaging device may be a CT scanner and the second imaging device may be an MRI scanner. In506, the processing device140(e.g., the determination module430) may determine a target ROI in the target image. In some embodiments, the target ROI may include at least a part of a representation of the target portion of the subject. In some embodiments, one or more feature portions of the subject may move due to various reasons. For example, the movement of the one or more feature portions may include a physiological movement, such as a respiratory movement, a cardiac movement, an artery pulsation, etc. The physiological movement may cause a position of the feature portion to change and/or cause the shape/volume of the feature portion to change. A feature portion related to the physiological movement may be referred to as a first feature portion hereafter. The first feature portion may include, for example, the diaphragm, the rib-cage, the bladder, the rectum, or the like, or a portion thereof, or any combination thereof, of the subject. As another example, the position of the one or more feature portions may depend on the pose of the subject when receiving a scan. Such a feature portion is referred to as a second feature portion hereafter. 
The movement of the one or more second feature portions may be caused by a pose difference between the first pose and the second pose. Such movement caused by the pose difference is also referred to as an interfractional movement. The second feature portion may include, for example, an arm, a hand, a finger, a foot, or the like, or a portion thereof, or any combination thereof, of the subject. The processing device140may obtain feature information related to the movement of the one or more feature portions of the subject. Merely by way of example, the feature information related to the movement of a feature portion may include a moving range related to the feature portion. Additionally or alternatively, the feature information may include whether the movement of the feature portion affects the position of a lesion (e.g., a tumor, a fracture) located at a specific site. The processing device140may identify one or more reference portions of the subject from the one or more feature portions based on the feature information. A position of the target portion of the subject may be unrelated to the movement of the one or more reference portions at the time when the scan is performed. The processing device140may further determine the target ROI in the target image based on corresponding feature information related to the one or more reference portions. More details regarding the determination of the target ROI may be found elsewhere in the present disclosure, for example, inFIG.6,FIG.7, and the description thereof. In508, the processing device140(e.g., the image registration module440) may generate a reference ROI in the reference image. The reference ROI may correspond to the target ROI, indicating that both the reference ROI and the target ROI correspond to the same portion of the subject. In some embodiments, the processing device140may generate the reference ROI based on information relating to the target ROI. Merely by way of example, the target image may be a planning image related to a radiation treatment, and the reference image may be a treating image related to the radiation treatment. The processing device140may identify, in the reference image, a feature point corresponding to an isocenter (e.g., treatment isocenter). The processing device140may further generate the reference ROI in the reference image based on the feature point corresponding to the isocenter. For instance, the processing device140may designate the feature point as a center (e.g., a geometric center) of the reference ROI and generate the reference ROI having the same shape and/or size as the target ROI. In510, the processing device140(e.g., the image registration module440) may register the target image with the reference image based on the target ROI and the reference ROI. Merely by way of example, the processing device140may register the target image with the reference image using a feature-based algorithm, e.g., by aligning the target ROI with the reference ROI. In some embodiments, when the dimensions of the target image and the reference image are the same, the dimensions of the target ROI and the reference ROI may be the same. For example, the target ROI and the reference ROI may both be a 2D region. Alternatively, the dimensions of the target image and the reference image may be different. For instance, the target image may be a 3D image and the reference image may be a 2D image. The target ROI may be a 3D region and the reference image may be a 2D image. 
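As one possible reading of operation508described above, the sketch below centers a box-shaped reference ROI of the same size as the target ROI on the treatment isocenter; the function name and the axis-aligned box representation of an ROI are assumptions made for illustration, not definitions from the disclosure.

```python
def make_reference_roi(isocenter, target_roi_size):
    """Center a reference ROI of the same size as the target ROI on the isocenter.

    isocenter: (x, y, z) coordinates of the treatment isocenter in the reference image.
    target_roi_size: (sx, sy, sz) extents of the target ROI along each axis.
    Returns ((xmin, xmax), (ymin, ymax), (zmin, zmax)) bounds of the reference ROI.
    """
    return tuple((c - s / 2.0, c + s / 2.0) for c, s in zip(isocenter, target_roi_size))

# Example: a 60 x 60 x 40 (mm) reference ROI centered on the treatment isocenter.
print(make_reference_roi((0.0, 10.0, -25.0), (60.0, 60.0, 40.0)))
```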
The registration may include a 3D-3D registration, a 3D-2D registration, a 2D-2D registration, or the like. To register the target ROI with the reference image, the processing device140may project the 3D target ROI onto a plane to generate a 2D target ROI. The plane may include, for example, a transverse plane, a coronal plane, or a sagittal plane, or an oblique plane that is other than a transverse plane, a coronal plane, and a sagittal plane. The 2D target ROI may be used for registering the target image with the 2D reference image. The 2D target ROI and the 2D reference ROI may both be in a same plane. For example, when the target image is a planning image, and the reference image is a treatment image, the target image may be registered with the reference image to correlate a first position of the subject in the first scan to a second position of the subject in the second scan. In some embodiments, the processing device140and/or a user (e.g., an operator) may cause a position of one or more movable components of the radiotherapy device to be adjusted based on a result of the image registration between the target image and the reference image (or referred to as a registration result for brevity) so that the radiation treatment may be accurately delivered to the target portion of the subject. As another example, the target image and the reference image may be both used for diagnosis purposes. For instance, the target image and the reference image may both be CT images of the subject. The image registration may aid a user (e.g., a doctor) in visualizing and monitoring pathological changes in the subject over time. For example, the image registration may help the user monitor and/or detect a change in, e.g., size, density, or the like, or a combination thereof, of a tumor or a nodule over time. When the first imaging device and the second imaging device are of different types, the target image may be registered with the reference image so that a composite image may be generated by fusing the target image and the reference image. For instance, the target image may be a CT image and the reference image may be an MRI image. The composite image may provide anatomical information and/or functional information related to the subject with improved accuracy for diagnosis and/or treatment purposes. In some embodiments, the processing device140may further determine a target sub-ROI in the target ROI in operation506. The processing device140may further identify a reference sub-ROI in the reference image in operation508. The reference sub-ROI may correspond to the target sub-ROI. The target image and the reference image may be registered based on the target sub-ROI and the reference sub-ROI. Merely byway of example, the processing device140may define an entropy measure H of the target ROI and select a region within the target ROI corresponding to the maximum entropy measure as the target sub-ROI. In this way, the target sub-ROI may include relatively rich information which may help improve the accuracy of the image registration result. It should be noted that the above description regarding the process500is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations or modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. 
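The entropy measure H mentioned above is not spelled out in this excerpt; the sketch below assumes it is the Shannon entropy of the intensity histogram and scans the target ROI with a sliding window, keeping the window with the largest entropy as the target sub-ROI. Window size, stride, and bin count are illustrative choices.

```python
import numpy as np

def shannon_entropy(patch, bins=32):
    # Shannon entropy (in bits) of the intensity histogram of a patch.
    hist, _ = np.histogram(patch, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def select_sub_roi(roi, window=(32, 32), stride=8):
    """Return the (row, col, height, width) of the window inside the 2D ROI
    with the maximum entropy measure H, together with that entropy value."""
    best, best_h = None, -np.inf
    rows, cols = roi.shape
    wh, ww = window
    for r in range(0, rows - wh + 1, stride):
        for c in range(0, cols - ww + 1, stride):
            h = shannon_entropy(roi[r:r + wh, c:c + ww])
            if h > best_h:
                best_h, best = h, (r, c, wh, ww)
    return best, best_h

# Example with random intensities standing in for the target ROI.
print(select_sub_roi(np.random.default_rng(0).integers(0, 255, (128, 128))))
```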
In some embodiments, one or more operations may be omitted and/or one or more additional operations may be added. For example, the process500may further include an operation to generate, based on the target image, a display image representing the target ROI. Additionally or alternatively, the process500may further include transmitting the display image to a terminal device (e.g., the terminal device130) of a user. The user may view and/or modify the target ROI via the terminal device. FIG.6is a flowchart illustrating an exemplary process for determining a target ROI in the target image according to some embodiments of the present disclosure. At least a portion of process600may be implemented by the processing device140(e.g., the processor210of the computing device200as illustrated inFIG.2, the CPU340of the mobile device300as illustrated inFIG.3, one or more modules as shown inFIG.4). The operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process600may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order of the operations of the process600as illustrated inFIG.6and described below is not intended to be limiting. In602, the processing device140(e.g., the obtaining module410) may obtain a target image related to the subject. The target image may include a representation of a target portion of the subject. The target image may be registered with a reference image. For example, the target image may be a planning image related to a radiation treatment, and the reference image may be a treating image related to the radiation treatment. Operation602may be performed in a manner similar to operation502. In604, the processing device140(e.g., the obtaining module410) may obtain feature information related to a movement of one or more feature portions of the subject. In some embodiments, the processing device140may determine the one or more feature portions of the subject based on information available in a library stored in a storage device (e.g., the storage device150). Merely for illustration purposes, the target portion of the subject may include a tumor. In some embodiments, a user (e.g., an operator) may input at least some subject information via the terminal device130. The processing device140may obtain the subject information from the terminal device130. In some embodiments, the processing device140may retrieve at least some subject information from a profile of the subject including, e.g., a patient record of the subject. Exemplary subject information may include the age, the gender, the body weight, the body height, the thickness of the body, information related to the target portion (e.g., a tumor site, a tumor type) of the subject, etc. The processing device140may retrieve, based on at least some of the subject information and from the library, one or more features of one or more specific portions of the subject.
For example, the one or more features of a specific portion may include whether the specific portion of the subject is at risk surrounding a tumor (which may receive undesired radiation during a radiation treatment targeted at the tumor) corresponding to a tumor site and/or a tumor type related to the target portion, whether the specific portion of the subject tends to move or remain static, whether a position of the specific portion of the subject is affected by a pose of the subject for receiving a scan by an imaging device, a movement range of the specific portion, or the like, or any combination thereof, of the subject. The processing device140may select the one or more feature portions from the one or more specific portions. The processing device140may further determine feature information related to the movement of the one or more feature portions based on the one or more features corresponding to the one or more feature portions. The library may include information regarding multiple tumor types, multiple tumor sites, and multiple feature portions (e.g., organs and/or tissue) of one or more reference subjects corresponding to the multiple tumor types and tumor sites. Exemplary tumor types may include carcinoma, sarcoma, myeloma, leukemia, lymphoma, or the like, or any combination thereof, of the one or more reference subjects. Exemplary tumor sites may include the skin, a lung, a breast, the prostate, the colon, the rectum, the colon, the cervix, the uterus, or the like, or any combination thereof, of the one or more reference subjects. Each of the one or more feature portions of a reference subject may include one or more features corresponding to a tumor type and/or a tumor site of the reference subject. For instance, the library may be generated based on historical data related to different feature portions of the one or more reference subjects. As another example, a user may manually add to the library one or more features of a feature portion of a reference subject, or modify one or more features of a feature portion of a reference subject that is already in the library. In some embodiments, the library may be stored in the storage device150in the form of a table. The one or more features of a feature portion of a reference subject may include the age, the gender, the body weight, the body height, the thickness of the body, whether the portion of the subject is a portion at risk surrounding the tumor (e.g., an organ or tissue that may receive undesired radiation during a radiation treatment), whether the portion tends to move or remain static, a moving direction, whether a position of the portion is affected by a pose of the subject for receiving a scan by an imaging device, a movement range of the portion, or the like, or any combination thereof. Additionally or alternatively, the one or more features of the portion may further include whether the portion of the subject is a first feature portion relating to a physiological movement, whether the portion of the subject is a second feature portion whose position depends on the pose of the subject for receiving a scan, whether the movement of the second feature portion affects a position of the tumor, or the like, or any combination thereof. The processing device140may further determine one or more feature portions of the subject based on the library and at least some of the subject information related to the subject (e.g., the tumor site, the tumor type, the gender). 
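The library described above can be thought of as a table keyed by subject information such as the tumor site and tumor type. The sketch below shows one way to structure such a table; the entries, keys, and numeric values are invented solely to illustrate the structure and are not taken from the disclosure.

```python
# Hypothetical library: (tumor_site, tumor_type) -> features of candidate feature portions.
FEATURE_LIBRARY = {
    ("lung", "carcinoma"): [
        {"portion": "diaphragm", "at_risk": False, "tends_to_move": True,
         "pose_dependent": False, "movement_range_mm": 15.0,
         "affects_tumor_position": True},
        {"portion": "left hand", "at_risk": False, "tends_to_move": True,
         "pose_dependent": True, "movement_range_mm": None,
         "affects_tumor_position": False},
    ],
}

def lookup_feature_portions(tumor_site, tumor_type):
    """Return the recorded features for the feature portions matching the subject information."""
    return FEATURE_LIBRARY.get((tumor_site, tumor_type), [])

print([entry["portion"] for entry in lookup_feature_portions("lung", "carcinoma")])
```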
As described in connection with operation506, the one or more feature portions of the subject may include one or more first feature portions related to the physiological movement and one or more second feature portions related to a pose of the subject for performing a scan. The processing device140may further obtain the feature information related to the one or more feature portions from the library. The feature information of a feature portion of the subject may include reference feature information of one or more features of a feature portion of each of one or more reference subjects. The feature portion of the subject may be the same as or similar to the feature portion of each of the one or more reference subjects. For instance, the feature portion of the subject and that of each of the reference subject(s) may be the same, e.g., a tumor in the liver of the subject and of each of the reference subject(s). As another example, the feature portion of the subject and that of each of the reference subject(s) may be the same, e.g., a tumor in the left lung of the subject and of each of the reference subject(s), all deemed at a same progression stage. In some embodiments, the feature information of a similar feature portion of multiple reference subjects may be presented in the library as individual groups of feature information each corresponding to one reference subject, or a single group of feature information (e.g., averaged across the reference subjects). In606, the processing device140(e.g., the identification module420) may identify one or more reference portions of the subject from the one or more feature portions based on the feature information. In some embodiments, the processing device140may determine, for each of the one or more first feature portions, whether the first feature portion is one of the one or more reference portions based on a first distance, a second distance, and the feature information related to the first feature portion. If a first feature portion is determined as one of the one or more reference portions, the first feature portion is also referred to as a first reference portion. The first distance refers to a distance between the target portion and each of one or more portions at risk of the subject in the target image. The second distance refers to a distance between the target portion and the first feature portion in the target image. In some embodiments, the first distance and/or the second distance may be a distance along the moving direction of the first feature portion. The moving direction may be described using the coordinate system160illustrated inFIG.1. For instance, the moving direction may be denoted as a +X direction or a −X direction. When a first feature portion moves along a +X direction, the first distance and/or a second distance may be a distance along the +X direction. The feature information may include a movement range of the first feature portion. The movement range may include the moving direction of a first feature portion and the moving distance of the first feature portion along the moving direction. For instance, the movement distance of the liver (i.e., a first feature portion relating to a respiratory movement) may be approximately 11 millimeters along a moving direction related to the respiratory movement. In some embodiments, the feature information related to the movement of the first feature portion may include whether the movement of the first feature portion affects the position of a lesion (e.g., a tumor, a fracture) located at various sites.
The processing device140may determine, for each of the one or more first feature portions, whether the first feature portion is one of the one or more reference portions based on the feature information related to the first feature portion. For example, the feature information related to the first feature portion may include whether the movement of the first feature portion affects the position of a tumor located on the diaphragm, a tumor located in the brain, a tumor located in the stomach, etc. The processing device140may determine whether the first feature portion is one of the one or more reference portions based on information related to the target portion (e.g., a tumor site) and the feature information related to the first feature portion. In some embodiments, to identify the one or more reference portions, the processing device140may segment, from the target image, the one or more portions at risk, the one or more first feature portions, the one or more second feature portions, and the target portion. As another example, the segmentation of at least some of the fore-mentioned portions may be performed in advance. The processing device140may obtain a result of the segmentation from a storage (e.g., the storage150). For instance, the target image may be a planning image related to a radiation treatment. The target portion and the one or more portions at risk may be segmented from the planning image when planning for the radiation treatment. The first distance between the target portion and a portion at risk may be a minimum distance along the moving direction between a feature point corresponding to the target portion in the target image and a point corresponding to the portion at risk. For example, the feature point may be a center point (e.g., a geometric center, a center of mass, etc.) of the target portion in the target image. The point corresponding to the portion at risk may be a center point (e.g., a geometric center, a center of mass, etc.) of the portion at risk. Similarly, the second distance between the target portion and a first feature portion may be a minimum distance along the moving direction between the feature point corresponding to the target portion in the target image and a point corresponding to the first feature portion. In some embodiments, the target image may be a planning image for generating a radiation treatment plan. The target portion may be a 2D planning target region or a 3D planning target volume (PTV). The feature point may be an isocenter related to a radiation treatment, such as a planning isocenter. As used herein, the term “planning isocenter” refers to a point in space where radiation beams emitted from different angles are expected (planned) to intersect when the gantry of the radiotherapy device is rotating. In some embodiments, the processing device140may determine a minimum first distance among the one or more first distances. The processing device140may determine a distance difference between the second distance and the minimum first distance. The processing device140may compare the distance difference with the moving range of the first feature portion. In response to determining that the distance difference is greater than the moving range of the first feature portion, the processing device140may determine that the movement of the first feature portion does not affect the position of the target portion of the subject. The processing device140may designate the first feature portion as one of the one or more reference portions. 
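The comparison described in the preceding paragraph can be written compactly as follows; the sign convention for the distance difference is an assumption, since the disclosure only states that the difference between the second distance and the minimum first distance is compared with the movement range.

```python
def is_first_reference_portion(second_distance_mm, first_distances_mm, movement_range_mm):
    """Return True if the first feature portion qualifies as a reference portion.

    second_distance_mm: distance between the target portion and the first feature portion.
    first_distances_mm: distances between the target portion and each portion at risk.
    movement_range_mm: movement range of the first feature portion along its moving direction.
    """
    minimum_first_distance = min(first_distances_mm)
    distance_difference = second_distance_mm - minimum_first_distance
    # The movement cannot reach the target portion, so the portion is a reference portion.
    return distance_difference > movement_range_mm

# Example: the feature portion sits 40 mm beyond the closest portion at risk but can
# only move about 11 mm, so it is designated a reference portion.
print(is_first_reference_portion(70.0, [30.0, 55.0], 11.0))  # True
```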
In some embodiments, the processing device140may determine, for each of the one or more second feature portions, whether the second feature portion is one of the one or more reference portions. If a second feature portion is determined as one of the one or more reference portions, the second feature portion is also referred to as a second reference portion. For example, the processing device140may determine, for each of the one or more second feature portions of the subject, a third distance between the target portion and the second feature portion. A third distance may be determined similarly to how a first distance or a second distance is determined, the description of which is not repeated here. The processing device140may further compare the third distance with a distance threshold. In response to determining that the third distance is less than the distance threshold, the processing device140may designate the second feature portion as one of the one or more reference portions. In some embodiments, the feature information related to the second feature portion may include whether the movement of the second feature portion affects the position of a lesion (e.g., a tumor, a fracture) located at various sites. For example, the feature information related to the second feature portion may include whether the movement of the second feature portion affects the position of a tumor located in the breast, a tumor located in the brain, a tumor located in the stomach, etc. Merely as an example, the feature information related to a left hand (i.e., an exemplary second feature portion) may include that the movement of the left hand does not affect the position of a tumor located in the breast. The processing device140may determine whether the second feature portion is one of the one or more reference portions based on information related to the target portion (e.g., a tumor site) and the feature information related to the second feature portion. In608, the processing device140(e.g., the determination module430) may determine the target ROI in the target image based on feature information related to the one or more reference portions. In some embodiments, the processing device140may determine an initial ROI in the target image. The processing device140may determine the target ROI by performing an iterative process including a plurality of iterations based on the initial ROI. The iterative process may correspond to one or more rules. In each of the plurality of iterations, the processing device140may determine whether at least one of the one or more rules is satisfied. In response to determining that at least one of the one or more rules is satisfied, the processing device140may terminate the iterative process. More details regarding the determination of the target ROI in the target image via the iterative process may be found elsewhere in the present disclosure, for example, inFIG.7and the description thereof. It should be noted that the above description regarding the process600is merely provided for the purposes of illustration, and is not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations or modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, one or more operations may be omitted and/or one or more additional operations may be added.
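For the pose-dependent (second) feature portions, the third-distance check described in operation606above reduces to a simple threshold comparison. The sketch below is one reading of that check; the threshold value and the additional feature-information flag are illustrative assumptions.

```python
def is_second_reference_portion(third_distance_mm, distance_threshold_mm,
                                affects_target_position=False):
    """Return True if the second feature portion qualifies as a reference portion.

    third_distance_mm: distance between the target portion and the second feature portion.
    distance_threshold_mm: threshold against which the third distance is compared.
    affects_target_position: feature information indicating whether the portion's
        movement shifts the position of the target portion (assumed available).
    """
    return third_distance_mm < distance_threshold_mm and not affects_target_position

# Example: a hand 80 mm from the target portion with a 100 mm threshold, whose
# movement does not shift the target portion, is designated a reference portion.
print(is_second_reference_portion(80.0, 100.0, affects_target_position=False))  # True
```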
FIG.7is a flowchart illustrating an exemplary process for determining the target ROI through a plurality of iterations according to some embodiments of the present disclosure. At least a portion of process700may be implemented by the processing device140(e.g., the processor210of the computing device200as illustrated inFIG.2, the CPU340of the mobile device300as illustrated inFIG.3, one or more modules as shown inFIG.4). The operations of the illustrated process presented below are intended to be illustrative. In some embodiments, the process700may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of the process700as illustrated inFIG.7and described below are performed is not intended to be limiting. In702, the processing device140(e.g., the determination module430) may determine an initial ROI in the target image. For instance, the target image may be a planning image for a radiation treatment. The processing device140may determine the initial ROI based on an isocenter (e.g., the planning isocenter) related to the radiation treatment. The size of the initial ROI may be a preset value stored in the storage device150or may be set and/or modified by a user (e.g., an operator). Merely by way of example, the isocenter may be located at a center (e.g., a geometric center, a center of mass, etc.) of the initial ROI. In some embodiments, the target image may be a 2D image and the initial ROI may have a 2D shape. For example, the initial ROI may be a circle, a triangle, a parallelogram, a square, a rectangle, a trapezoid, a pentagon, a hexagon, or the like. Alternatively, the initial ROI may have an irregular shape. In some embodiments, the target image may be a 3D image, and the initial ROI may have a 3D shape. For instance, the initial ROI may be a sphere, a cube, a cylinder, a pyramid, a circular cone, a circular truncated cone, or the like. In704, the processing device140(e.g., the determination module430) may adjust (e.g., extend or reduce) an intermediate ROI to obtain an adjusted ROI. As used herein, the intermediate ROI refers to the initial ROI or an ROI obtained in a prior iteration. For instance, in the first iteration, the intermediate ROI may be the initial ROI. In the i-th iteration, the intermediate ROI may be an ROI obtained in the (i−1)-th iteration. In some embodiments, the processing device140may adjust the intermediate ROI by extending at least a portion of the boundary of the intermediate ROI. The amount of extension of the boundary along at least two different directions may be the same or different. Merely by way of example, the intermediate ROI may be a cube. The processing device140may extend the twelve sides of the cube along six orthogonal directions. For instance, the six orthogonal directions may be described using the 3D coordinate system160illustrated inFIG.1. The six orthogonal directions may include a +X direction, a −X direction, a +Y direction, a −Y direction, a +Z direction, a −Z direction, or the like, or any combination thereof. In some embodiments, for each of at least some of the twelve sides of the cube, the side may be extended by a predetermined distance (e.g., 2 millimeters, 5 pixels/voxels, etc.) along one of the six orthogonal directions that is perpendicular to the side. The adjusted ROI may be a cube that has a greater volume than the intermediate ROI.
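As a rough sketch of the extension step described above, assuming the cubic ROI is represented as an axis-aligned bounding box in millimeters centered on the planning isocenter, and assuming a uniform extension step in all six directions (the amounts may differ in practice, as noted above and in the further example below):

```python
import numpy as np

def extend_roi(bounds, step=2.0):
    """Extend an axis-aligned cubic ROI by a fixed step along +/-X, +/-Y, +/-Z.

    bounds: array [[xmin, ymin, zmin], [xmax, ymax, zmax]] in millimeters.
    step: predetermined extension distance per iteration (e.g., 2 mm).
    """
    bounds = np.asarray(bounds, dtype=float)
    lower = bounds[0] - step   # push the min faces outward along -X, -Y, -Z
    upper = bounds[1] + step   # push the max faces outward along +X, +Y, +Z
    return np.stack([lower, upper])

initial = np.array([[-10.0, -10.0, -10.0], [10.0, 10.0, 10.0]])  # centered on the isocenter
print(extend_roi(initial))  # each face pushed 2 mm outward
```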
As another example, for an intermediate ROI that is a cube, the processing device140may extend eight of the twelve sides of the cube along four orthogonal directions. For each of the eight sides of the cube, the side may be extended along one of the four orthogonal directions that is perpendicular to the side. The extension along two different directions may be the same or different. The adjusted ROI may have a greater volume than the intermediate ROI. In706, the processing device140(e.g., the determination module430) may determine whether at least one of one or more rules is satisfied. The one or more rules may relate to the termination of the iterative process for determining the target ROI in the target image. In response to determining that none of the one or more rules is satisfied, the processing device140may proceed to operation706to continue to adjust the intermediate ROI. In response to determining that at least one of the one or more rules is satisfied, the processing device140may terminate the iterative process and proceed to operation710. For example, the one or more rules may include a first rule relating to the feature information of the one or more reference portions. The first rule may include that the adjusted ROI includes at least one pixel or voxel corresponding to at least one of the one or more reference portions. If the first rule is satisfied, the target ROI may include at least a part of a reference portion (e.g., a second reference portion related to the pose of the subject). The movement of the reference portion (e.g., a physiological movement or a movement caused by a pose difference) does not affect the position of the target portion of the subject. If the at least a part of the reference portion of the subject is included in the target ROI that is used for registering the target image with the reference image, the accuracy of the registration result may be decreased. Thus, the iterative process may need to be terminated when the processing device140determines that the first rule is satisfied. As another example, when the one or more reference portions include one or more first reference feature portions relating to the physiological movement, the first rule may include that the adjusted ROI includes a pixel or voxel corresponding to a movement region of at least one of the one or more first reference feature portions. As used herein, the term “movement region” of a first reference feature portion refers to a minimum region within which the movement of the first reference feature portion occurs. The movement region of the first reference feature portion may be determined in the target image based on the feature information related to the movement of the first reference feature portion. For example, the movement region may be determined based on the moving distance and the moving direction of the first reference feature portion, and a size of the first reference feature portion in the target image. As another example, the one or more rules may include a second rule. The second rule may be associated with a predefined limit. The predefined limit may relate to factors such as a count of iterations that have been performed, an area/volume of the adjusted ROI, or the like. For instance, the second rule may include that the count of iterations that have been performed reaches a predetermined value. Additionally or alternatively, the second rule may include that the area/volume of the adjusted ROI reaches a predetermined area value/volume threshold. 
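The iterative adjustment and rule checking described above might be sketched as follows. The mask input, step size, limits, and world-to-voxel mapping are assumptions for illustration; only the first and second rules are shown, and further rules (such as the outside-the-body check introduced below) could be added at the marked point.

```python
import numpy as np

def grow_target_roi(initial_bounds, reference_mask, voxel_size=1.0, step=2.0,
                    max_iterations=50, max_volume=1.0e6):
    """Iteratively extend an axis-aligned ROI until a termination rule is met.

    reference_mask: boolean volume marking voxels that belong to a reference
    portion (or to a first reference portion's movement region).
    """
    bounds = np.asarray(initial_bounds, dtype=float)
    for iteration in range(1, max_iterations + 1):
        adjusted = np.stack([bounds[0] - step, bounds[1] + step])  # extend every face
        lo = np.maximum(np.floor(adjusted[0] / voxel_size).astype(int), 0)
        hi = np.minimum(np.ceil(adjusted[1] / voxel_size).astype(int), reference_mask.shape)
        region = tuple(slice(a, b) for a, b in zip(lo, hi))

        rule1 = bool(reference_mask[region].any())   # first rule: ROI reached a reference portion
        volume = float(np.prod(adjusted[1] - adjusted[0]))
        rule2 = iteration >= max_iterations or volume >= max_volume  # second rule: predefined limit
        # Further rules (e.g., an outside-the-body check) can be added here.

        if rule1 or rule2:
            return adjusted   # target ROI determined based on the adjusted ROI
        bounds = adjusted
    return bounds
```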
As yet another example, the one or more rules may include a third rule. The third rule may include that the adjusted ROI includes at least one pixel or voxel that corresponds to an object located outside the body of the subject. For instance, the object may include air surrounding the subject, the couch of the first imaging device, etc. In708, the processing device140(e.g., the determination module430) may determine the target ROI base on the intermediate ROI or the adjusted ROI. For instance, in response to determining that at least one of the first rule, the second rule, or the third rule is satisfied, the processing device140may determine the target ROI based on the adjusted ROI. The target ROI may be used in the registration of the target image with the reference image. More details regarding the registration may be found elsewhere in the present disclosure, for example, in the description related to operation510. In some embodiments, the processing device140may further generate a display image based on the target image and the target ROI. The processing device140may mark the target ROI on the target image to generate the display image. The user may view the display image via the terminal device130. For example, the contour of the target ROI may be marked by a specific color that is different from other parts of the target image so that the user may easily identify the target ROI from the target image. As another example, the entire target ROI may be presented using the specific color. Additionally or alternatively, the terminal device130may provide a modification option for the user. The user may modify the target ROI in the display image, for example, by adjusting the contour of the target ROI using an input device such as a mouse or a touch screen. It should be noted that the above description regarding the process600is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations or modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. Having thus described the basic concepts, it may be rather apparent to those skilled in the art after reading this detailed disclosure that the foregoing detailed disclosure is intended to be presented by way of example only and is not limiting. Various alterations, improvements, and modifications may occur and are intended to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested by this disclosure and are within the spirit and scope of the exemplary embodiments of this disclosure. Moreover, certain terminology has been used to describe embodiments of the present disclosure. For example, the terms “one embodiment,” “an embodiment,” and “some embodiments” mean that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Therefore, it is emphasized and should be appreciated that two or more references to “an embodiment” or “one embodiment” or “an alternative embodiment” in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the present disclosure. 
Further, it will be appreciated by one skilled in the art, aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or context including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely hardware, entirely software (including firmware, resident software, micro-code, etc.) or combining software and hardware implementation that may all generally be referred to herein as a “module,” “unit,” “component,” “device,” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon. A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electro-magnetic, optical, or the like, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C #, VB. NET, Python or the like, conventional procedural programming languages, such as the “C” programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, dynamic programming languages such as Python, Ruby, and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider) or in a cloud computing environment or offered as a service such as a Software as a Service (SaaS). Furthermore, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefore, is not intended to limit the claimed processes and methods to any order except as may be specified in the claims. Although the above disclosure discusses through various examples what is currently considered to be a variety of useful embodiments of the disclosure, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover modifications and equivalent arrangements that are within the spirit and scope of the disclosed embodiments. 
For example, although the implementation of various components described above may be embodied in a hardware device, it may also be implemented as a software-only solution, e.g., an installation on an existing server or mobile device. Similarly, it should be appreciated that in the foregoing description of embodiments of the present disclosure, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, claimed subject matter may lie in less than all features of a single foregoing disclosed embodiment. | 80,529 |
11861857 | DETAILED DESCRIPTION This disclosure describes methods, apparatuses, and systems for using sensor data to identify objects in an environment and for controlling a vehicle relative to those identified objects. For example, an autonomous vehicle can include a plurality of sensors to capture sensor data corresponding to an environment of a vehicle. The sensor data can include data associated with an environment where, in some instances, multiple objects (e.g., pedestrians, vehicles, bicyclists, etc.) are located. Oftentimes, different sensor modalities are used for redundancy purposes and/or because of inherent shortcomings in the sensors. For example, in the case of time-of-flight sensors, intensity and distance information may be unreliable due to one or more of oversaturation, under exposure, ambiguous returns, or the like. Moreover, some data resolution techniques, such as techniques used to resolve a depth of an object from multiple returns associated with an object can sometimes result in inaccurate depths Techniques described herein can be used to improve sensor data, including time-of-flight sensor returns. For example, in implementations described herein, returns for a time-of-flight sensor may be obtained at different modulation frequencies to provide new techniques for resolving distance of objects based on those returns. Moreover, techniques described herein may be used to more accurately determine whether distances associated with those returns are within a nominal maximum range of the time-of-flight sensor. Techniques disclosed herein can also provide for improved filtering of pixels based on depth of those pixels, including depths determined according to the techniques described herein. The techniques described herein may, in some instances, result in a better understanding of the environment of the vehicle, thereby leading to safer and/or more confident controlling of the vehicle. As discussed herein, sensor data can be captured by one or more sensors, which can include time-of-flight sensors, RADAR sensors, LIDAR sensors, SONAR sensors, image sensors, microphones, or any combination thereof. The sensor can include an emitter to emit a signal and a sensor to sense a return signal from the environment. Further, the sensor can comprise a sensor computing device to determine a data format of the captured sensor data. In the context of a time-of-flight (ToF) sensor, the captured sensor data can be represented, for example, in a raw (e.g., a quadrature) format. In some instances, the sensor data in the quadrature format can include one or more pixels where each pixel can be represented as a numerical value (e.g., 12 bits with a value range of 2−11to (211−1)). After determining the sensor data in the quadrature format, a sensor computing device (e.g., an FPGA, SoC, ASIC, CPU, etc.) can determine the sensor data in an intensity and depth format. In some instances, the intensity and depth format can be associated with the pixels (e.g., an intensity and depth value for each pixel) in the sensor data. The sensor computing device can also, using the sensor data in the intensity and depth format, perform an unprojection operation to determine the sensor data in an unprojected format to project each pixel in the sensor data into a multi-dimensional space (e.g., 3D-space using an x-position, a y-position, and a z-position). 
In implementations described herein, the sensor and/or the sensor computing device may vary attributes of the sensor and/or processes using the sensor to generate improved sensor data. For example, in some implementations, the carrier may be modified to establish different configurations of the sensor. For example, in implementations described herein, the modulation frequency of the emitted carrier signal may be varied between successive frames. The modulation frequency may be varied between a first, relatively lower modulation frequency and a second, relatively higher modulation frequency. In examples, the lower modulation frequency will configure the sensor with a larger nominal maximum range, but in some instances at a lower accuracy, and the higher modulation frequency will configure the sensor with a shorter nominal maximum range, with usually with higher accuracy. In some examples, the relatively lower modulation frequency is selected to provide a considerably larger nominal maximum depth than the nominal maximum depth associated with the relatively higher modulation frequency. In some examples, the higher modulation frequency may be on the order of three- to eight-times higher than the lower modulation frequency. For instance, the lower modulation frequency may be between about 3 MHz and about 5 MHz, which may provide a nominal maximum depth on the order of about 35 meters or more. The higher modulation frequency may be about 20 MHz or greater, which may provide a nominal maximum depth of 4 to 5 meters or less. In examples, returns associated with the different modulation frequencies can be considered together to determine a more accurate depth measurement for detected surfaces. In some examples, returns associated with the lower modulation frequency may be used, e.g., independent of the returns associated with the higher modulation frequency, to determine an estimated depth. For example, because the nominal maximum depth associated with the lower modulation frequency is relatively large, e.g., 35 meters or more, most detected surfaces will be within the nominal maximum range. Accordingly, the nominal depth measured by the first modulation frequency is likely the depth of the surface, and in implementations can be used as an estimated depth. In an example, a nominal measurement of 9.7 meters measured at the first, e.g., lower, modulation frequency is likely to be from a surface 9.7 meters from the time-of-flight sensor. Unlike the measurement at the lower modulation frequency, the depth measured at the higher modulation frequency may be more likely to be ambiguous, e.g., corresponding to the nominal depth or to a sum of the nominal depth and a multiple of the nominal maximum range (or maximum unambiguous range). For instance, consider a return having a nominal measurement of 1.5 meters generated with the sensor operating at the higher modulation frequency and having a nominal maximum range of 4 meters. The 1.5 meter nominal measurement may correspond to a number of candidate depths, including a depth of 1.5 meters (the nominal depth), a depth of 5.5 meters (the nominal measurement plus the nominal maximum range −4 meters), 9.5 meters (the nominal measure plus two times the nominal maximum range −8 meters), and so forth. In examples of this disclosure, because the depth has already been estimated from the lower modulation frequency return(s), the actual depth of the surface may be the candidate depth of the higher modulation frequency closest to this estimated depth. 
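As a hedged aside, the nominal maximum (unambiguous) range of a continuous-wave time-of-flight sensor is commonly approximated by the textbook relation d = c/(2f). The sketch below uses that relation, so its figures are indicative only and need not match a particular device's specified ranges.

```python
C = 299_792_458.0  # speed of light (m/s)

def nominal_max_range(modulation_frequency_hz):
    """Maximum unambiguous range of a continuous-wave ToF sensor, d = c / (2 f)."""
    return C / (2.0 * modulation_frequency_hz)

for f_mhz in (4, 5, 20, 30):
    print(f"{f_mhz:>3} MHz -> {nominal_max_range(f_mhz * 1e6):5.1f} m")
# ~4-5 MHz gives roughly 30-37 m; 20-30 MHz gives roughly 5-7.5 m, consistent
# with the low/high-frequency configurations contrasted above.
```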
For instance, consider the example above in which estimated depth (from the lower modulation frequency return(s)) is 9.7 meters, the techniques described herein will determine the actual depth to be 9.5 meters, e.g., the candidate depth closest to 9.7 meters. The return associated with the higher modulation frequency may be more reliable than the return associated with the (significantly) lower modulation frequency, so the techniques described herein may provide improved accuracy over conventional techniques. Some example implementations of this disclosure may also incorporate disambiguation techniques. For instance, in some implementations, disambiguation techniques may be used to determine whether a return is actually within the nominal maximum depth of the sensor in the first configuration, e.g., at the lower modulation frequency. As noted above, in some instances it may be desirable to select a lower modulation frequency that provides a relatively large nominal maximum depth, such that most objects beyond the nominal maximum depth at the lower modulation frequency will not be detected. However, in some instances, some objects beyond the nominal maximum depth may be detected. For instance, highly reflective objects, such as retroreflectors, may still be detectable beyond the nominal maximum sensor depth. In examples, techniques described herein may determine when detected objects are such highly reflective objects. In some example implementations, disambiguation techniques may be used to determine that a measured surface is beyond the nominal maximum range of the sensor in the first, e.g., lower modulation frequency, configuration. For instance, techniques described herein may determine first candidate depths for returns at the lower modulation frequency and second candidate depths for returns at the higher modulation frequency. The first candidate depths may include the nominal depth measured for a return, as well as the sum of the nominal depth and multiples of the nominal maximum range associated with the lower modulation frequency. The second candidate depths can include the depth measured for a return, as well as the sum of the nominal depth and multiples of the nominal maximum range associated with the higher modulation frequency. The techniques described herein including determining a disambiguated depth from the first candidate depths and the second candidate depths. For instance, the disambiguated depth may be based on the candidate depth of the first candidate depths and the candidate depth of the second candidate depths that are closest to each other. In some examples, the disambiguated depth can be an average, e.g., a weighted average, of these candidate depths. As will be appreciated, because the modulation frequencies are associated with different maximum unambiguous ranges, depth measurements larger than the maximum unambiguous ranges of the individual sensors can be disambiguated. Accordingly, depths beyond the maximum range distance(s) associated with the lower modulation frequency may be determined. In some examples, modulation frequencies may be chosen that are associated with coprime maximum unambiguous ranges, and depths up to the product of the ranges can be disambiguated with information from two successive frames. In some examples, once a surface is determined, e.g., using the disambiguation techniques, to have a depth beyond the nominal maximum depth of the sensor in the first configuration, techniques described herein may further confirm the depth. 
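A minimal sketch of the two ideas above, using the worked numbers from the text (a 9.7 m low-frequency estimate and a 1.5 m high-frequency nominal measurement with a 4 m unambiguous range). The function names, the 200 m search limit, and the 0.8 weighting are assumptions, not values from the source.

```python
def candidate_depths(nominal_depth, max_range, limit=200.0):
    """Nominal depth plus multiples of the maximum unambiguous range."""
    depths, k = [], 0
    while nominal_depth + k * max_range <= limit:
        depths.append(nominal_depth + k * max_range)
        k += 1
    return depths

def resolve_depth(low_freq_depth, high_freq_depth, high_max_range):
    """Pick the high-frequency candidate closest to the low-frequency estimate."""
    return min(candidate_depths(high_freq_depth, high_max_range),
               key=lambda d: abs(d - low_freq_depth))

def disambiguate(low_freq_depth, low_max_range, high_freq_depth, high_max_range,
                 weight_high=0.8):
    """Closest pair of candidates across both frequencies, blended by a weighted average."""
    best = min(((a, b) for a in candidate_depths(low_freq_depth, low_max_range)
                for b in candidate_depths(high_freq_depth, high_max_range)),
               key=lambda pair: abs(pair[0] - pair[1]))
    return weight_high * best[1] + (1.0 - weight_high) * best[0]

print(resolve_depth(9.7, 1.5, 4.0))              # 9.5, the candidate closest to the estimate
print(round(disambiguate(9.7, 35.0, 1.5, 4.0), 2))
```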
For instance, and as noted above, the first modulation frequency is chosen to provide a relatively large nominal maximum depth. Accordingly, surfaces beyond the nominal maximum depth must be highly reflective to be imaged at the sensor. In some examples, the techniques described herein determine whether the surface having a disambiguated depth beyond the nominal maximum sensor depth also has an intensity equal to or above a threshold intensity. For instance, the intensity image may have intensity information for the surface, which information may be compared to a threshold intensity. When the measured intensity is equal to or above the threshold intensity, the surface may be confirmed to be beyond the nominal maximum depth. Alternatively, if the intensity does not equal or exceed the threshold intensity, the depth of the surface may be determined to be located at the nominal depth measured by the sensor in the first configuration, or at an actual depth determined according to additional techniques described herein. Once a surface is confirmed to be beyond the nominal maximum sensor depth, in some examples the surface can be modeled at the correct, distant depth. Alternatively, some processes described herein may include generating filtered data that excludes returns associated with the surface, e.g., because such returns are too far in the distance and/or less reliable because of this distance. For example, in some implementations, the techniques described herein can filter pixels, e.g., from a depth image and/or an intensity image, based on the depth determined using the techniques described herein. In at least some instances, different threshold intensities may be applied to pixels associated with different depths. In at least one example, and as described herein, surfaces that are sensed and that are beyond a nominal maximum depth may generally be associated with highly reflective objects, such as retroreflectors or the like. Because these objects may be of less importance, e.g., because of their remote distance, pixels associated therewith that have a relatively higher intensity can be removed from consideration. In contrast, surfaces that are relatively closer to the sensor may be of more importance for navigation of the autonomous vehicle, and thus a lower intensity threshold may be used to filter pixels. In at least some examples, an intensity threshold may be applied across all depths within a given range. For instance, any surfaces beyond a certain depth, such as the nominal maximum depth of the sensor, may be filtered with a first intensity threshold, whereas pixels within the nominal maximum depth may be filtered with a second, lower threshold intensity. In other instances, however, different threshold intensities may be applied to groupings of pixels at different depths. Accordingly, pixels associated with a surface at a first depth in a first location in the environment may be filtered differently from a second group of pixels associated with a different object at a similar depth in the environment. Based on the intensity thresholds, pixels may be filtered to generate filtered data that may then be passed to a computing system associated with the vehicle, e.g., to identify objects in and/or characteristics of the environment. In some examples, once characteristics of the environment are determined according to the techniques described herein, the computing device of the autonomous vehicle may determine one or more trajectories for proceeding relative to the object(s).
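One plausible reading of the depth-dependent filtering described above is sketched below: bright returns beyond the nominal maximum depth are dropped as remote, retroreflector-like returns, while dim returns within it are dropped as unreliable. The threshold values and the exact keep/drop polarity are assumptions for illustration, not taken from the source.

```python
import numpy as np

def filter_returns(depth_image, intensity_image, nominal_max_depth,
                   far_threshold=0.8, near_threshold=0.1):
    """Depth-dependent intensity filtering (one plausible reading of the scheme above).

    Thresholds are illustrative normalized intensities. Dropped pixels are set
    to NaN so downstream consumers can ignore them.
    """
    beyond = depth_image > nominal_max_depth
    drop = np.where(beyond,
                    intensity_image >= far_threshold,   # distant, very bright returns
                    intensity_image < near_threshold)   # nearby but dim, unreliable returns
    filtered_depth = np.where(drop, np.nan, depth_image)
    filtered_intensity = np.where(drop, np.nan, intensity_image)
    return filtered_depth, filtered_intensity
```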
In some instances, depth and/or intensity information generated according to techniques described herein may be combined, or fused, with data from other sensor modalities to determine the one or more trajectories. Techniques described herein may be directed to leveraging sensor and perception data to enable a vehicle, such as an autonomous vehicle, to navigate through an environment while circumventing objects in the environment. Techniques described herein can utilize information sensed about the objects in the environment, e.g., by a single, configurable sensor, to more accurately determine features of the objects. By capturing image data at different sensor configurations, depth data can be disambiguated. For example, techniques described herein may be faster and/or more robust than conventional techniques, as they may increase the reliability of depth and/or intensity information, alleviating the need for successive images. That is, techniques described herein provide a technological improvement over existing object detection, classification, prediction and/or navigation technology. In addition to improving the accuracy with which sensor data can be used to determine objects and correctly characterize motion of those objects, techniques described herein can provide a smoother ride and improve safety outcomes by, for example, more accurately providing safe passage to an intended destination. While this disclosure uses an autonomous vehicle in examples, techniques described herein are not limited application in autonomous vehicles. For example, any system in which sensor ambiguity and/or inconsistent sensor data exists may benefit from the techniques described. By way of non-limiting example, techniques described herein may be used on aircrafts, e.g., to identify and disambiguate depths associated with objects in an airspace or on the ground. Moreover, non-autonomous vehicles could also benefit from techniques described herein, e.g., for collision detection and/or avoidance systems. FIGS.1-9provide additional details associated with the techniques described herein. FIG.1illustrates an example environment100through which an example vehicle102is traveling. The example vehicle102can be a driverless vehicle, such as an autonomous vehicle configured to operate according to a Level 5 classification issued by the U.S. National Highway Traffic Safety Administration, which describes a vehicle capable of performing all safety-critical functions for the entire trip, with the driver (or occupant) not being expected to control the vehicle at any time. In such examples, because the vehicle102can be configured to control all functions from start to completion of the trip, including all parking functions, it may not include a driver and/or controls for driving the vehicle102, such as a steering wheel, an acceleration pedal, and/or a brake pedal. This is merely an example, and the systems and methods described herein may be incorporated into any ground-borne, airborne, or waterborne vehicle, including those ranging from vehicles that need to be manually controlled by a driver at all times, to those that are partially or fully autonomously controlled. In some instances, the techniques can be implemented in any system using machine vision and is not limited to vehicles. The example vehicle102can be any configuration of vehicle, such as, for example, a van, a sport utility vehicle, a cross-over vehicle, a truck, a bus, an agricultural vehicle, and/or a construction vehicle. 
The vehicle102can be powered by one or more internal combustion engines, one or more electric motors, hydrogen power, any combination thereof, and/or any other suitable power source(s). Although the example vehicle102has four wheels, the systems and methods described herein can be incorporated into vehicles having fewer or a greater number of wheels, tires, and/or tracks. The example vehicle102can have four-wheel steering and can operate generally with equal performance characteristics in all directions, for example, such that a first end of the vehicle102is the front end of the vehicle102when traveling in a first direction, and such that the first end becomes the rear end of the vehicle102when traveling in the opposite direction. Similarly, a second end of the vehicle102is the front end of the vehicle when traveling in the second direction, and such that the second end becomes the rear end of the vehicle102when traveling in the opposite direction. These example characteristics may facilitate greater maneuverability, for example, in small spaces or crowded environments, such as parking lots and/or urban areas. A vehicle such as the example vehicle102can be used to travel through an environment and collect data. For example, the vehicle102can include one or more sensor systems104. The sensor system(s)104can be, for example, one or more time-of-flight sensors, LIDAR sensors, RADAR sensors, SONAR sensors, image sensors, audio sensors, infrared sensors, location sensors, etc., or any combination thereof. Certain implementations described herein may be particularly well-suited for use with time-of-flight sensors, although other types of sensors also are contemplated. The sensor system(s)104may be disposed to capture sensor data associated with the environment. For example, the sensor data may be processed to identify and/or classify objects in the environment, e.g., trees, vehicles, pedestrians, buildings, road surfaces, signage, barriers, road marking, or the like. As also illustrated inFIG.1, the sensor system(s)104can include one or more processors106and memory108communicatively coupled to the processor(s)106. The processor(s)106and/or the memory108may be physically integrated into the sensor, e.g., as an SoC, FPGA, ASIC, or the like, or, in some implementations, the processor(s)106and/or the memory108may be available to, e.g., connected to receive signals from and/or send signals to, the sensor system(s)104. As discussed above, the one or more sensor system(s)104can determine the sensor data in various formats (e.g., a quadrature format, an intensity and depth format, and/or an unprojected format) using the one or more processors106. In the example ofFIG.1, the sensor system may include a time-of-flight sensor, which may be configured to emit a carrier (e.g., a signal) and receive, e.g., capture, a response carrier (e.g., a response signal) comprising the carrier reflected off a surface in the environment. The time-of-flight sensor may be configured to determine sensor data in a quadrature format based on the carrier and the response carrier. In some instances, the sensor can measure a phase shift between the carrier and the response carrier and/or perform numerical integration calculation to determine the sensor data in the quadrature format (e.g., determining one or more of a quadrature from the response signal). In some implementations, the sensor can also determine an intensity and depth format of the sensor data, which may also be referred to as depth image. 
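For orientation, a common four-phase ("four-bucket") demodulation that converts quadrature correlation samples into a wrapped phase, an active amplitude, and an ambient term is sketched below. The patent does not specify the sensor's actual pipeline, and sign and phase-offset conventions vary between devices, so this is a textbook form only.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def quadrature_to_depth_intensity(q0, q90, q180, q270, modulation_frequency_hz):
    """Four-bucket demodulation of correlation samples taken at 0/90/180/270 degrees."""
    q0, q90, q180, q270 = (np.asarray(x, dtype=float) for x in (q0, q90, q180, q270))
    i = q0 - q180
    q = q270 - q90
    phase = np.mod(np.arctan2(q, i), 2.0 * np.pi)    # wrapped phase in [0, 2*pi)
    amplitude = 0.5 * np.hypot(i, q)                 # modulated (active) intensity
    ambient = 0.25 * (q0 + q90 + q180 + q270)        # unmodulated offset
    max_range = C / (2.0 * modulation_frequency_hz)
    depth = (phase / (2.0 * np.pi)) * max_range      # nominal depth, ambiguous beyond max_range
    return depth, amplitude, ambient
```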
For example, using the sensor data, the sensor system can determine depth and intensity values for each point associated with an object in an environment. In still further examples, the sensor system(s) can also determine the sensor data in an unprojected format. For example, an unprojection can refer to a transformation from a two-dimensional frame of reference into a three-dimensional frame of reference, while a projection can refer to a transformation from a three-dimensional frame of reference into a two-dimensional frame of reference. In some instances, processing techniques can determine a relative location of the sensor system(s)104(e.g., relative to one or more other sensors or systems) and can unproject the data into the three-dimensional representation based at least in part on the intensity and depth format, intrinsic and extrinsic information associated with the sensor system(s)104(e.g., focal length, center, lens parameters, height, direction, tilt, distortion, etc.), and the known location of the sensor system(s)104. In some instances, the point can be unprojected into the three-dimensional frame, and the distances between the sensor system(s)104and the points in the three-dimensional frame can be determined (e.g., <x, y, z>). As also illustrated inFIG.1, the sensor system(s)104may be configured to output the sensor data, e.g., the intensity and depth information, the quadrature values, or the like, as a series of frames, e.g., image frames. For instance, the frames can include a stream of serially-generated (e.g., at a predetermined interval) frames including a first frame110(1), a second frame110(2), through an nth frame110(N) (collectively referred to herein as “the frames110”). Generally, each of the frames110may include the same type of data, e.g., data related to the intensity and depth information for each of a plurality of pixels comprising the receiver of the sensor, but techniques described herein may vary aspects of the sensor such that the data is captured at different sensor configurations. For example, as illustrated inFIG.1, the first frame110(1) includes first intensity information112and first depth data114captured at a first sensor configuration. Similarly, the second frame110(2) includes second intensity information116and second depth information118captured at a second sensor configuration. Techniques described herein can generate composite or blended data from multiple frames, e.g., from two or more of the frames110. For example, the composite or blended data may better represent the environment and/or may have a higher associated confidence. For instance, and as illustrated inFIG.1, data associated with the first frame110(1), e.g., the first intensity information112and the first depth information114, and data associated with the second frame110(2), e.g., the second intensity information116and the second depth information118, may be used to generate processed sensor data120. In some examples, the processed sensor data can include a resolved depth122for a sensed object. For example, the resolved depth122may be determined based at least in part on the first depth information114and the second depth information118, as detailed further herein. The processed sensor data120may also include filtered data124. 
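A minimal sketch of such an unprojection under a pinhole model, assuming the measured depth is along the optical axis and ignoring lens distortion; the intrinsic values and the sensor-to-vehicle transform below are placeholders.

```python
import numpy as np

def unproject(u, v, depth, fx, fy, cx, cy, extrinsic=np.eye(4)):
    """Unproject a pixel (u, v) with measured depth into a 3D point <x, y, z>.

    fx, fy: focal lengths; cx, cy: principal point; extrinsic: 4x4
    sensor-to-vehicle transform (identity by default).
    """
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    point_sensor = np.array([x, y, depth, 1.0])
    return (extrinsic @ point_sensor)[:3]

print(unproject(420, 310, 7.2, fx=580.0, fy=580.0, cx=320.0, cy=240.0))
```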
For instance, and as detailed further herein, the filtered data124may be a subset of the sensed data contained in the frames110, e.g., a subset of intensity and/or depth pixels determined based on the first intensity information112, the second intensity information116, and/or the resolved depth122. As noted above, the first frame110(1) includes data captured by the sensor system(s)104in an associated first configuration and the second frame110(2) includes data captured by the sensor system(s)104in an associated second configuration. In some examples, the different configurations of the sensor system104have different modulation frequencies. Time-of-flight sensors generally determine depth of an object by determining a phase shift of the reflected carrier signal, relative to the emitted carrier signal. The value of the phase shift directly corresponds to the depth of the object relative to the sensor system(s)104. Conventionally, the modulation frequency of the time-of-flight sensor may be determined based on a maximum distance at which objects are expected to be. However, because the carrier signal repeats itself every period, e.g., the wave may be a sinusoid or the like, the returned depth may be ambiguous. By way of non-limiting example, the modulation frequency of the time-of-flight sensor may be chosen to determine depths of up to 5 meters. Thus, a phase shift of 30 degrees may correspond to a 2-meter depth, for example. However, that 2 meters may be a modulus associated with the phase shift, and in fact, the distance may be 7 meters, or 12 meters, or some different depth equal to the sum of the nominal, 2-meter depth and a multiple of the maximum, 5-meter nominal distance. Altering the modulation frequency of light emitted by the time-of-flight sensor will vary the nominal maximum depth at which the sensor will detect objects. More specifically, light at a higher modulation frequency will travel a relatively shorter distance before repeating an emission pattern (e.g., will have a shorter period) than light at a lower modulation frequency. Techniques described herein alter the modulation frequency of the sensor system(s)104to receive measured depths for surfaces at multiple modulation frequencies. In some examples, the first modulation frequency and the second modulation frequency, e.g., for first and second sensor configurations, are chosen based on their associated nominal maximum depths. The first configuration may be chosen to have a relatively large nominal maximum depth, e.g., on the order of about 20 meters or more. In some examples, the modulation frequency for the first configuration is selected to provide a relatively large depth beyond which surfaces may not impact operation of the vehicle102. In contrast, the second configuration may be chosen to have a much smaller nominal maximum depth, e.g., on the order of a few meters, such as 2 to 5 meters. For example, the first configuration may provide a nominal maximum depth that is four or more times that of the nominal maximum depth associated with the second configuration. In one non-limiting example, the first configuration may have a modulation frequency of about 5 MHz or less, and the second configuration may have a modulation frequency of about 20 MHz or more. In some examples, it may be desirable that the modulation frequencies be selected to ensure that the larger nominal maximum depth (in the first configuration) is not an exact multiple of the smaller nominal maximum depth (in the second configuration).
Moreover, some techniques described herein may use disambiguation techniques, and disambiguation may be improved by selecting two nominal maximum depths that are coprime, e.g., 5 m and 32 m. For example, when nominal maximum depths associated with the first and second configurations are coprime, depths up to the product of the two maximum depths (160 meters in the example) can be disambiguated using techniques described herein. In other implementations in which coprime numbers are not used, the depth to which values can be disambiguated may correspond to the lowest common multiple of the two depths, which may be a reason to avoid nominal maximum depths that are multiples. As in the examples above, techniques according to this disclosure use configurations having relatively large differences in modulation frequencies, and therefore, relatively disparate nominal maximum depths, to provide more accurate depth measurements. For example, in processes detailed further below in connection withFIG.2, techniques described herein can include determining an estimated depth of an object based on returns captured with the sensor system(s)104in the first configuration, e.g., the nominal depth of the measurement. Then, an actual depth of the object may be determined from the sensor data generated in the second configuration, e.g., as a candidate depth based on the nominal depth measured in the second configuration that is closest to the estimated depth. For example, techniques herein recognize that sensor returns at shorter modulation frequencies are more frequently erroneous and/or have a higher uncertainty than returns at higher modulation frequencies, which tend to be more accurate. Thus, the techniques described herein estimate a depth of a surface based on the lower modulation frequency return, but use higher modulation frequency return information to determine an actual depth, based on the estimate. For example, the depth determined according to these techniques may be the resolved depth122. As described, in some aspects of this disclosure, one or more sensor returns captured at a first sensor configuration are used to estimate a depth of a surface, e.g., as the nominal measured depth, and the actual depth is determined as the candidate depth from the second sensor configuration closest to the estimated depth. This process assumes that the measured surface is at a distance less than the nominal maximum depth of the sensor in the first configuration. In additional aspects of this disclosure, including aspects detailed below in connection withFIG.3, aspects of this disclosure can also confirm this assumption. More specifically, and as detailed below, some implementations of this disclosure include determining a disambiguated depth based on depth data from the sensor in the first configuration and depth data from the sensor in the second configuration. For instance, examples include confirming that the disambiguated depth is below the nominal maximum depth of the sensor in the first configuration, e.g., for added certainty of the depth of the surface. In contrast, if the disambiguated depth is beyond the nominal maximum depth of the sensor in the first configuration, e.g., indicating the measured surface is farther away, processes described herein can perform other operations relative to the surface. Determining more accurate depths of surfaces according to the techniques described herein can also improve data filtering. 
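Treating the two nominal maximum ranges on a common grid, the combined unambiguous range is their least common multiple, which reduces to their product when the two ranges are coprime; a small sketch using the illustrative numbers from the text (the 1 m resolution is an assumption):

```python
from math import gcd

def combined_unambiguous_range(range_a_m, range_b_m, resolution_m=1.0):
    """Extended unambiguous range of a two-frequency scheme.

    Working in integer multiples of a common resolution, the combined range is
    the least common multiple of the two nominal maximum ranges.
    """
    a = round(range_a_m / resolution_m)
    b = round(range_b_m / resolution_m)
    return (a * b // gcd(a, b)) * resolution_m

print(combined_unambiguous_range(5, 32))   # coprime ranges: 160 m
print(combined_unambiguous_range(5, 30))   # one range a multiple of the other: only 30 m
```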
For example, once depths of surfaces are more accurately determined according to the techniques discussed above, the filtered data124can be generated by remove data, e.g., pixels, from the sensor data that are not relevant to processing at the vehicle. Without limitation, portions of the sensor data that are not indicative of objects relative to which the vehicle travels may be filtered out, e.g., to reduce the amount of data sent from the sensor. For example, and as detailed further below in connection withFIG.4, techniques described herein can filter data by comparing intensity information, e.g., from the first intensity data112and/or the second intensity data116, to a threshold intensity. The threshold intensity may vary based on depth information. For instance, pixels having a greater depth, e.g., associated with surfaces relatively farther from the sensor, may be filtered using a higher intensity threshold, whereas pixels having a smaller depth, e.g., associated with surfaces closer to the sensor, may be filtered using a lower intensity threshold. These and other examples are described in more detail below. FIG.2includes textual and graphical flowcharts illustrative of a process200for determining a depth of a sensed surface, according to implementations of this disclosure. For example, the process200can be implemented using components and systems illustrated inFIG.1and described above, although the process200is not limited to being performed by such components and systems. Moreover, the components and systems ofFIG.1are not limited to performing the process200. In more detail, the process200can include an operation202that includes receiving first sensor data associated with a first sensor configuration. As noted above, techniques described herein may be particularly applicable for use with time-of-flight sensors, and the example ofFIG.2may use time-of-flight sensors as one specific example. The disclosure is not limited to use with time-of-flight sensors, as techniques described herein may be applicable to other types of sensors that may be adversely affected by glare. In some examples, the operation202can include receiving both depth and intensity data measured by the time-of-flight sensor, although only the depth information may be required for aspects ofFIG.2. An example204accompanying the operation202illustrates a vehicle206, which may be the vehicle102in some examples. One or more time-of-flight sensors208are mounted on the vehicle206, e.g., to sense an environment surrounding the vehicle206. For instance, the time-of-flight sensor(s)208may be arranged to sense objects generally in a direction of travel of the vehicle206, although the sensors may be otherwise disposed and more sensors than the one illustrated may be present. The time-of-flight sensor(s)208may be configured to generate first sensor data210, which includes depth information (and in some instances may also include intensity information). In at least some examples, the depth information can include or be embodied as a depth image and the intensity information can include or be embodied as an intensity image. For example, the depth image can represent a depth of sensed objects in a scene on a pixel-by-pixel basis and the intensity image can represent an intensity (e.g., brightness) of sensed objects in the scene on the same pixel-by-pixel basis. The depth image and the intensity image include information about objects in an environment of the vehicle. 
A vehicle212travelling on a road proximate the vehicle206may be an example of such an object. The first sensor data210is generated with the time-of-flight sensor206in a first configuration. For example, the first configuration may correspond to a first, relatively low modulation frequency. For instance, the first configuration may have a modulation frequency of about 5 MHz or less, which may be selected to have a relatively large nominal maximum depth, as described herein. At an operation214, the process200can include determining an estimated depth from the first sensor data. As noted above, the first sensor data210can include depth or range data and/or data from which depth or range data can be generated. For instance, a depth or range of a surface may be determined based on a time it takes light emitted by the sensor208in the first configuration to reflect off the surface and return to the sensor. An example216accompanying the operation214demonstrates a return218associated with the sensor208. For example, the return may be associated with the vehicle212or some other object in the environment of the vehicle206. The return218(as with all returns) is a measured distance, e.g., in meters, from the sensor208. In this example, the measured distance of the return218is an estimated depth220of the object (e.g., the vehicle212). Because the first configuration of the sensor208is associated with a relatively low modulation frequency, the sensor208in the first configuration has a relatively large nominal maximum depth222. In examples, the modulation frequency is chosen such that the nominal maximum depth222is likely to be larger than a depth at which most objects that could be sensed by the sensor208are located. For instance, objects beyond a certain distance or depth from the sensor208are likely to lack sufficient reflectivity that they can be sensed by the sensor208. In some aspects of this disclosure, the nominal maximum depth222may be equal to or greater than about 30 meters or more. Because of the foregoing, the nominal measured depth of each surface, or associated with each return, is likely to be the actual depth of the surface. This is in contrast to configurations with higher modulation frequencies (and lower nominal maximum depths) in which the measured depth is ambiguous, e.g., the measured depth could be the actual depth or it could be the measured depth plus some multiple of the nominal maximum depth associated with the configuration. In the first configuration, while the sensed depth of the return218is likely to be the actual depth of the return, the relatively low modulation frequency may be more prone to errors or have a lower associated confidence. In aspects of this disclosure, the estimated depth220may be the actual depth, or depth measured by the sensor in the first configuration, but the depth may be “estimated” because of the lower associated confidence or higher tolerance resulting from the lower frequency. At an operation224, the process200can include receiving second sensor data associated with a second sensor configuration. As noted above, techniques described herein may be particularly applicable for use with time-of-flight sensors. An example226accompanying the operation224illustrates the vehicle206the sensor208mounted on the vehicle206. In the example226, the sensor208may be configured to generate second sensor data228, which includes depth information (and in some instances may also include intensity information). 
In at least some examples, the depth information can include or be embodied as a depth image and the intensity information can include or be embodied as an intensity image. For example, the depth image can represent a depth of sensed objects in a scene on a pixel-by-pixel basis and the intensity image can represent an intensity (e.g., brightness) of sensed objects in the scene on the same pixel-by-pixel basis. The depth image and the intensity image include information about objects in an environment of the vehicle. The second sensor data228is generated with the time-of-flight sensor208in a second configuration. For example, the second configuration may correspond to a second, relatively high modulation frequency. For instance, the second configuration may have a modulation frequency of about 15 MHz to 20 MHz or more, which may be selected to have a relatively small nominal maximum depth, as described herein. In the example204the sensor208is configured in the first configuration to generate the first sensor data210and in the example226the sensor208is configured in the second configuration to generate the second sensor data228. In other examples, however, the first sensor data210may be generated by a first sensor, e.g., configured in the first configuration, and the second sensor data228may be generated by a second sensor, e.g., configured in the second configuration. For instance, returns from a first sensor on the vehicle206may be correlated with returns from a second sensor on the vehicle206, e.g., such that returns associated with a same surface are grouped or otherwise associated. By way of example and not limitation, returns associated with the vehicle212may be associated regardless of the sensor detecting the vehicle212. In such examples, the first sensor and the second sensor will have overlapping fields of view. At an operation230, the process200can include determining candidate depths from the second sensor data. As noted above, the second sensor data228can include depth or range data and/or data from which depth or range data can be generated. For instance, a depth or range of a surface may be determined based on a time it takes light emitted by the sensor208in the second configuration to reflect off the surface and return to the sensor. Because the second configuration has a relative high modulation frequency, it also has a relatively small nominal maximum depth. In some examples, the nominal maximum depth associated with the second configuration can be 5 meters or less. As a result, the depth measured in the second configuration is ambiguous. That is, the measured depth may be the actual depth of the surface or the actual depth plus some multiple of the nominal maximum depth associated with the second configuration. An example232accompanying the operation230demonstrates a plurality of candidate depths234associated with a measured return at the sensor208. More specifically, the candidate depths234include a first candidate depth234(1) that corresponds to the depth determined by the sensor in the second configuration, e.g., the measured depth. The sensor208in the second configuration has a nominal maximum depth236, which, like the nominal maximum depth222, is the maximum depth that the sensor208in the second configuration will determine, e.g., based on the modulation frequency of the emitted light. 
However, and because of the relatively high modulation frequency associated with the second configuration, the sensor will generate returns for surfaces beyond the nominal maximum depth236. The determined depths of those returns will be less than the nominal maximum depth236, but the determined depth may be a remainder. The actual depth of those surfaces will be ambiguous. Specifically, and as shown in the example232, the actual depth of the surface could be any of the candidate depths234. In the illustration, the first candidate depth234(1), as noted above, is the measured depth, a second candidate depth234(2) is the sum of the measured depth and the nominal maximum depth236, a third candidate depth234(3) is the sum of the measured depth and two times the nominal maximum depth236, and a fourth candidate depth234(4) is the sum of the measured depth and three times the nominal maximum depth236. Although the candidate depths234are illustrated as including four candidate depths, additional candidate depths will correspond to other sums of the measured depth and multiples of the nominal maximum depth236. At an operation238, the process200include determining a measured depth as the candidate depth closest to the estimated depth. As noted above, the sensor208generates returns in a first configuration and in a second configuration (or different sensors in the different configurations generate the returns). An example240accompanying the operation238shows the return218(corresponding to the first sensor configuration) and the candidate depths234(corresponding to the second sensor configuration) on the same plot line. In this example, the return218and the candidate depths are associated with the same surface, e.g., which may be a surface of the vehicle212. As illustrated in the example240, a measured depth242for the surface according to the process200is the third candidate depth234(3), because the third candidate depth234(3) is the closest of the candidate depths234to the estimated depth220. In examples, the measured depth242may be the depth used as an actual depth of the object, e.g., for downstream processing as described herein. According to the example ofFIG.2, the first sensor data210and the second sensor data228both include information about the same surface(s), e.g., about the vehicle212. Some conventional systems may merge different data associated with the same surface, e.g., using disambiguation techniques, to determine a measured distance of the surface. For instance, the measured distance in these conventional systems can be an average, weighted average, or the like, of depths determined in multiple configurations and/or sensed at different times. In contrast, because aspects of the current disclosure use a relatively low modulation frequency in the first configuration, the depth measured in the first configuration, e.g., the measured depth of the first return218, is not ambiguous and can therefore be used as the estimated depth220, as discussed in connection with the example216. However, the depth measured in the second configuration is ambiguous, as described in connection with the example232. Although the depth measured in the second configuration is ambiguous, it is also a more accurate depth, e.g., because of the higher modulation frequency. Thus, aspects of this disclosure may use only the depth information determined from the second configuration as the distance to the surface, but the estimated depth220is used to determine which of the candidate depths234is the correct depth. 
Accordingly, unlike techniques that determine an average, weighted average, or other formula including two depth measurements to determine a measured depth, the current disclosure uses the more accurate measurement, informed by the less accurate measurement. As noted, the first configuration and the second configuration are chosen to provide the nominal maximum depths222,236that vary significantly. For instance, the nominal maximum depth222in the first configuration may be on the order of three or more times the nominal maximum depth236in the second configuration. As also described above, the first configuration may be determined such that the nominal maximum depth is sufficiently large that most objects that can be sensed will be within the nominal maximum depth, thus allowing the measured depth determined with the first configuration to be used as the estimated depth. However, in some limited examples it may be that an object sensed in the first configuration is actually beyond the nominal maximum distance, e.g., the measured distance is a remainder. For instance, highly reflective objects, such as retroreflectors, may be capable of being sensed at relatively far distances, e.g., beyond 35 meters or more. Aspects of this disclosure also are suited to determine whether a sensed object is beyond the nominal maximum depth of the sensor in the first configuration. FIG.3includes textual and graphical flowcharts illustrative of a process300for determining whether a sensed surface is beyond a sensor nominal maximum depth, according to implementations of this disclosure. For example, the process300can be implemented using components and systems illustrated inFIG.1and described above, although the process300is not limited to being performed by such components and systems. Moreover, the components and systems ofFIG.1are not limited to performing the process300. In more detail, the process300includes an operation302that includes receiving first sensor data and second sensor data. For example, the first sensor data is associated with a first sensor configuration and the second sensor data is associated with a second sensor configuration. The first sensor data received at the operation302can be the first sensor data210and/or the second sensor data can be the second sensor data228discussed above in connection withFIG.2. An example304accompanying the operation302illustrates a vehicle306, which may be the vehicle102and/or the vehicle206in some examples. One or more time-of-flight sensors308are mounted on the vehicle306, e.g., to sense an environment surrounding the vehicle306. For instance, the time-of-flight sensor(s)308may be arranged to sense objects generally in a direction of travel of the vehicle306, although the sensors may be otherwise disposed and more sensors than the one illustrated may be present. In examples, the sensor(s)308may correspond to the sensor(s)208. The sensor(s)308may be configured to generate sensor data310which includes depth information and intensity information. In at least some examples, the depth information can include or be embodied as a depth image and the intensity information can include or be embodied as an intensity image. For example, the depth image can represent a depth of sensed objects in a scene on a pixel-by-pixel basis and the intensity image can represent an intensity (e.g., brightness) of sensed objects in the scene on the same pixel-by-pixel basis. The depth image and the intensity image include information about objects in an environment of the vehicle. 
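The disclosure does not state a formula for the nominal maximum depth, but for continuous-wave time-of-flight sensing the unambiguous range is commonly approximated as c/(2·f_mod); the sketch below uses that textbook relation, as an assumption, to show why a low modulation frequency yields a large nominal maximum depth and a high modulation frequency a small one.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def nominal_max_depth(modulation_frequency_hz: float) -> float:
    """Idealized unambiguous range of a continuous-wave time-of-flight sensor."""
    return SPEED_OF_LIGHT / (2.0 * modulation_frequency_hz)

# A ~5 MHz first configuration gives roughly a 30 m unambiguous range, while a
# ~20 MHz second configuration gives roughly 7.5 m, i.e., several times smaller.
for f_mhz in (5.0, 15.0, 20.0):
    print(f"{f_mhz:5.1f} MHz -> {nominal_max_depth(f_mhz * 1e6):5.1f} m")
```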
A vehicle312travelling on a road proximate the vehicle306may be an example of such an object. The sensor data310includes first data generated with the sensor(s)308in a first configuration and second data generated with the sensor(s)308in a second configuration. For example, the first configuration may correspond to a first, relatively low modulation frequency. For instance, the first configuration may have a relatively low modulation frequency, e.g., of about 5 MHz or less, which may be selected to have a relatively large nominal maximum depth, as described herein. In contrast, the second configuration may correspond to a second, relatively high modulation frequency. For instance, the second configuration may have a modulation frequency of about 15 MHz to 20 MHz or higher, which may be selected to have a relatively small nominal maximum depth, as described herein. Without limitation, the sensor data310can include the first sensor data210as the first data and the second sensor data228as the second data. At an operation314, the process300can also include disambiguating a depth based on the first sensor data and the second sensor data. For instance, the operation314can include comparing first candidate depths from the first sensor data with second candidate depths from the second sensor data. An example316accompanying the operation314illustrates this concept. Specifically, the example316shows first candidate depths318including an individual candidate depth318(1) corresponding to a measured depth associated with the first configuration and an individual candidate depth318(2) corresponding to the sum of the measured depth and the nominal maximum depth of the sensor in the first configuration. The example316also shows second candidate depths320including an individual candidate depth320(1) corresponding to a measured depth associated with the second configuration and additional candidate depths (e.g., candidate depths320(2)-320(6)) corresponding to sums of the measured depth and multiples of the nominal maximum depth of the sensor in the second configuration. Although the example316shows only two of the first candidate depths318and six of the second candidate depths320, more candidate depths can also be considered. In some instances, the individual candidate depth318(1) may correspond to the depth of the return218inFIG.2and the individual candidate depth318(2) may correspond to a depth corresponding to a sum of the depth of the return218and the nominal maximum depth222. Also in the example316, the candidate depths320may correspond to the candidate depths234. In the example316, the individual candidate depth318(2) of the first candidate depths318and the individual candidate depth320(5) of the second candidate depths320are closest to each other, e.g., within a threshold distance. This distance is shown inFIG.3as dashed rectangles around the first candidate depths318. Because the individual candidate depth318(2) of the first candidate depths318and the individual candidate depth320(5) of the second candidate depths320are closest to each other, a disambiguated depth322is determined using those values. In some examples, the disambiguated depth322may be a range indicated by the rectangle associated with the individual candidate depth318(2). In other examples, the disambiguated depth322can be an average of the depths represented by the individual candidate depths318(2),320(5), a weighted average of those depths, or some other function of the two depths.
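A sketch of the comparison shown in the example316, assuming candidate depths are enumerated for both configurations and the closest pair is found by brute force; averaging the closest pair is only one of the options the description lists, and all names below are hypothetical.

```python
from itertools import product

def disambiguate(depth_low: float, max_low: float,
                 depth_high: float, max_high: float,
                 wraps_low: int = 2, wraps_high: int = 6) -> float:
    """Return a disambiguated depth from two ambiguous measurements.

    depth_low / max_low:   measured depth and nominal maximum depth of the
                           low-modulation-frequency configuration
    depth_high / max_high: measured depth and nominal maximum depth of the
                           high-modulation-frequency configuration
    """
    cands_low = [depth_low + i * max_low for i in range(wraps_low)]
    cands_high = [depth_high + j * max_high for j in range(wraps_high)]
    # Find the pair of candidates, one from each configuration, closest together.
    best_low, best_high = min(product(cands_low, cands_high),
                              key=lambda pair: abs(pair[0] - pair[1]))
    # Here the pair is simply averaged; a weighted average or either candidate
    # alone could be used instead, as the description notes.
    return 0.5 * (best_low + best_high)

# Low config: 6.2 m measured, 30 m unambiguous; high config: 1.1 m measured, 5 m unambiguous.
print(round(disambiguate(6.2, 30.0, 1.1, 5.0), 2))  # 6.15
```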
In still further examples, one of the individual candidate depths318(2),320(5) may be used as the disambiguated depth322. As detailed above in connection withFIG.2and elsewhere herein, because the first configuration of the sensor308is associated with a relatively low modulation frequency, the sensor308in the first configuration has a relatively large nominal maximum depth, whereas the second configuration, which is associated with a relatively higher modulation frequency, has a relatively smaller nominal maximum depth. This is illustrated in the example316by the larger distance between the first candidate depths318(which are separated by a first nominal maximum depth associated with the first configuration) than the distance between the second candidate depths320(which are separated by a second nominal maximum depth associated with the second configuration). In examples, the modulation frequency is chosen such that the first nominal maximum depth is likely to be larger than a depth at which most objects that could be sensed by the sensor308are located. However, and as noted above, some surfaces, especially highly reflective surfaces, that are beyond the first nominal maximum depth may be sensed by the sensor308. At an operation324, the process300includes determining that the disambiguated distance is beyond a nominal range of the sensor in the first configuration. More specifically, and as shown in an example326accompanying the operation324, the disambiguated depth322is compared to the nominal maximum depth associated with the first configuration of the sensor. If the disambiguated depth322is larger than the nominal maximum depth associated with the first configuration, further investigation of the return and/or the surface is warranted, because, as noted above, the first configuration includes a low modulation frequency selected to provide a larger nominal maximum depth that is greater than the depth at which most objects will be detected. At an operation328, the process300includes determining whether an intensity meets or exceeds a threshold intensity. For example, and as noted above, the sensor data can include first depth information and first intensity information (generated in the first configuration) and second depth information and second intensity information (generated in the second configuration). The depth information and the intensity information are correlated, that is, each return at the sensor can include depth information for the return (e.g., a depth pixel) and intensity information for the return (e.g., an intensity pixel). Thus, at the operation328the process300can include determining whether intensity information (e.g., from intensity pixel(s)) for the return(s) determined to be beyond the nominal maximum depth associated with the first configuration is equal to or greater than a threshold intensity. An example330shows that first intensity information associated with the first configuration, e.g., having the first modulation frequency, f1, and second intensity information associated with the second configuration, e.g., having the second modulation frequency, f2, are compared to the intensity threshold. In some examples only one of the first intensity information or the second intensity information may be compared to the intensity threshold. As detailed herein, surfaces beyond the nominal maximum depth associated with the first configuration are unlikely to be sensed unless they are highly reflective.
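Purely as an illustration of the checks at the operations324and328, the sketch below flags a return as potentially beyond the first configuration's nominal range only when its disambiguated depth exceeds that range and its intensity meets a threshold; the threshold value and the function name are assumptions, not values from the disclosure.

```python
def may_be_beyond_nominal_range(disambiguated_depth: float,
                                intensity: float,
                                nominal_max_depth_low: float,
                                threshold_intensity: float = 1000.0) -> bool:
    """Flag a return whose disambiguated depth exceeds the low-frequency
    configuration's nominal maximum depth and whose intensity is high enough
    to be plausible at that distance (e.g., a retroreflector)."""
    return (disambiguated_depth > nominal_max_depth_low
            and intensity >= threshold_intensity)

# A bright return disambiguated to 42 m against a 30 m nominal maximum depth is
# flagged for the confirmation described at the operation332; a dim one is not.
print(may_be_beyond_nominal_range(42.0, intensity=2600.0, nominal_max_depth_low=30.0))  # True
print(may_be_beyond_nominal_range(42.0, intensity=150.0, nominal_max_depth_low=30.0))   # False
```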
Such highly reflective surfaces will have a relatively high intensity measured at the sensor308. In examples, the threshold intensity may be determined heuristically or experimentally, e.g., by sensing objects with known properties at distances beyond the nominal maximum depth associated with the first configuration. Also in some examples, the threshold intensity may be depth dependent, e.g., greater for greater disambiguated depths and lower for relatively shorter disambiguated depths. In examples in which the intensity does not meet or exceed the threshold intensity, yet the disambiguated distance exceeds the nominal maximum depth associated with the first sensor configuration, the pixel may be determined to be unreliable or, in some instances, disregarded. At an operation332, the process300includes confirming that the return is associated with a surface outside the nominal maximum range of the sensor in the first configuration. For instance, the disambiguated depth being beyond the first nominal maximum depth associated with the first configuration will suggest that the return is beyond the nominal maximum depth. However, because of sensor errors, inaccuracies, or the like, the determination of the disambiguated depth may be inaccurate. The process300uses the associated intensity being equal to or above the threshold intensity to further confirm that the return is likely beyond the nominal maximum depth. In some instances, as described herein, pixels that are confirmed to be associated with surfaces beyond the nominal maximum depth may be excluded or filtered out. In still further instances, the surfaces may be modeled or otherwise determined to be at these relatively greater distances. As described in connection with the process200and the process300, aspects of this disclosure relate to determining distances of measured surfaces using depth data generated from multiple sensor configurations. Moreover, these determined depths may be more accurate than depths determined using conventional techniques. This disclosure also includes improved techniques for filtering data based on this improved depth information. FIG.4includes textual and graphical flowcharts illustrative of a process400for filtering pixels based on determined depth, according to implementations of this disclosure. For example, the process400can be implemented using components and systems illustrated inFIG.1and described above, although the process400is not limited to being performed by such components and systems. Moreover, the components and systems ofFIG.1are not limited to performing the process400. In more detail, the process400can include an operation402that includes capturing sensor data using a time-of-flight sensor. As noted above, techniques described herein may be particularly applicable to use with time-of-flight sensors, and the example ofFIG.4uses time-of-flight sensors as one specific example. The disclosure is not limited to use with time-of-flight sensors, as techniques described herein may be applicable to other types of ranging sensors. In some examples, the operation402can include receiving both depth and intensity data measured by the time-of-flight sensor. An example404accompanying the operation402illustrates a vehicle406, which may be the vehicle102, the vehicle206, and/or the vehicle306in some examples. One or more time-of-flight sensors408are mounted on the vehicle406, e.g., to sense an environment surrounding the vehicle406.
For instance, the time-of-flight sensor(s)408may be arranged to sense objects generally in a direction of travel of the vehicle406, although the sensors may be otherwise disposed and more or fewer sensors than the two illustrated may be present. The time-of-flight sensor(s)408may be configured to generate sensor or image data410, which can include depth information412and intensity information414. In at least some examples, the depth information412can include or be embodied as a depth image416and the intensity information414can include or be embodied as an intensity image418. As illustrated in the example404, the depth image416can represent a depth of sensed objects (or surfaces) in a scene on a pixel-by-pixel basis and the intensity image418can represent an intensity (e.g., brightness) of sensed objects (or surfaces) in the scene on the same pixel-by-pixel basis. In the example, the depth image416and the intensity image418may include information about a first object420(1), which, for example, may be a vehicle travelling on a road proximate the vehicle406, and information about a second object420(2), which, for example, may be a street sign. The depth image416and the intensity image418may be generated at substantially the same time. In the representation of the depth image416, relatively lighter pixels may represent objects that are farther away (e.g., background objects) whereas relatively darker pixels may represent relatively closer objects. The example404specifically identifies a first depth pixel422(1) in the depth image416associated with the first object420(1), a second depth pixel422(2) associated with the second object420(2), and a third depth pixel422(3) associated with a background of the sensed environment. The example404also outlines the objects420in the intensity image418. In the intensity image418, relatively lighter pixels may represent higher intensity whereas relatively darker pixels may represent lower intensity. The example404specifically identifies a first intensity pixel422(1)′, a second intensity pixel422(2)′, and a third intensity pixel422(3)′ in the intensity image418. The first depth pixel422(1) and the first intensity pixel422(1)′ generally correspond to the same portion of the first object420(1), e.g., they are associated and include depth and intensity information, respectively, for the same portion of the same surface in the environment. Similarly, the second depth pixel422(2) and the second intensity pixel422(2)′ generally correspond to the same portion of the second object420(2), and the third depth pixel422(3) and the third intensity pixel422(3)′ generally correspond to the same portion of the background of the environment, e.g., they are associated and include depth and intensity information, respectively, for the same portion of the same surface or feature on the second object420(2) and the background, respectively. Stated differently, the depth associated with the first depth pixel422(1) and the intensity associated with the first intensity pixel422(1)′ may describe attributes (depth and intensity) of the same pixel or the same return. In the example404, the depth image416and the intensity image418may represent a single instance of the image data410and may be generated with the sensor(s)408in a single sensor configuration.
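A minimal sketch of the pixel-wise pairing of a depth image and an intensity image, assuming both are stored as arrays of the same shape so that a depth pixel and its associated intensity pixel share an index; the frame class and field names are hypothetical.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ToFFrame:
    """One time-of-flight frame: depth and intensity share the same pixel grid."""
    depth: np.ndarray      # meters, shape (H, W)
    intensity: np.ndarray  # arbitrary brightness units, shape (H, W)

    def pixel(self, row: int, col: int) -> tuple[float, float]:
        """Return (depth, intensity) for the same surface point."""
        return float(self.depth[row, col]), float(self.intensity[row, col])

frame = ToFFrame(depth=np.full((4, 6), 12.5), intensity=np.full((4, 6), 900.0))
print(frame.pixel(2, 3))  # (12.5, 900.0) -- depth and intensity for one return
```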
In examples, and as described herein, the sensor(s)408may be capable of capturing sensor data at a plurality of sensor configurations, including a first configuration associated with a first modulation frequency and a second configuration associated with a second modulation frequency. Without limitation, and for example only, a first instance of the image data410may correspond to the first sensor data210and a second instance of the image data410may correspond to the second sensor data228. In other examples, an instance of the image data410may correspond to a blending or combination of one or more instances of the first sensor data210and one or more instances of the second sensor data228. For example, and without limitation, the depth values of the depth pixels may be determined using the techniques discussed herein in connection withFIG.2and/orFIG.3, detailed above. At an operation424, the process400can include identifying a pixel beyond a nominal maximum sensor depth and additional pixel(s) at a corresponding depth. An example426accompanying the operation424shows the depth image416. In the depth image416, the second depth pixel422(2), which is associated with the object420(2), has a depth that exceeds a nominal maximum depth associated with the sensor(s)408. In this example, as in other examples of this disclosure, the nominal maximum depth may be associated with a first sensor configuration chosen to have a relatively large nominal maximum depth. For instance, the depth pixel may be associated with a surface determined using the process300. In the example426, pixels immediately surrounding the second depth pixel422(2), e.g., other pixels associated with the object and having a depth similar to that of the second depth pixel422(2), may be grouped with the second depth pixel422(2), e.g., as a subset428of the depth pixels. In some instances, the subset428may include pixels having the same depth. In other examples, the subset428may include pixels within a threshold depth, e.g., within 0.1 or 0.5 meters of a neighboring pixel. For instance, neighboring pixels within the threshold depth may be grouped together to generate the subset428. In examples, the subset428may include only a continuous group of pixels, e.g., with each pixel in the subset428having at least one neighbor included. In other examples, the subset428can include discontinuous pixels, e.g., one or more pixels having the depth or a depth within a range, but not neighboring any other pixel(s) having the same depth. The example426also shows a second subset430of the depth pixels. The second subset430includes pixels, including the first depth pixel422(1), that may be associated with a depth or range of depths different from the subset428. In the example, the second subset430is generally associated with the first object420(1). As indicated by the differently colored pixels within the second subset430in the depth image416, there is some variation in depth among the pixels. As noted above, the pixels in the subset430may be within a range of depths and/or each of the pixels may have a depth that is within a threshold depth of a neighboring pixel. In at least some examples, subsets of pixels in addition to the first subset428and the second subset430also may be identified.
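A sketch of one way the subsets428and430might be formed, assuming neighboring pixels are grouped when their depths differ by no more than a threshold; the 4-connected flood fill below is an illustrative choice, not the disclosure's method.

```python
import numpy as np

def group_by_depth(depth: np.ndarray, depth_threshold: float = 0.5) -> np.ndarray:
    """Label connected regions whose neighboring pixels differ by <= depth_threshold.

    Returns an integer label image the same shape as `depth`; pixels in the
    same subset share a label.
    """
    labels = np.full(depth.shape, -1, dtype=int)
    next_label = 0
    rows, cols = depth.shape
    for r in range(rows):
        for c in range(cols):
            if labels[r, c] != -1:
                continue
            # Start a new subset and flood-fill its 4-connected neighbors.
            stack = [(r, c)]
            labels[r, c] = next_label
            while stack:
                y, x = stack.pop()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols and labels[ny, nx] == -1
                            and abs(depth[ny, nx] - depth[y, x]) <= depth_threshold):
                        labels[ny, nx] = next_label
                        stack.append((ny, nx))
            next_label += 1
    return labels

# A far surface (~35 m) and a nearer surface (~10 m) separate into two subsets.
depth = np.array([[35.0, 35.1, 10.0],
                  [35.0, 10.2, 10.1],
                  [10.3, 10.2, 10.1]])
print(group_by_depth(depth))
```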
In the example426, the pixels in the first subset428are beyond the nominal maximum sensor depth and the pixels in the second subset430are at depths less than the nominal maximum sensor depth, although the first subset428may include pixels that are associated with depths less than the nominal maximum sensor depth. For example, and without limitation, the first subset428may include pixels that are determined to be beyond the nominal maximum sensor depth using the process300described herein. The second subset430may include pixels the depths of which are determined using the process200described herein. However, other processes for determining depths may also be used. At an operation432, the process400can also include filtering the sensor data using different intensity thresholds for different depths. For instance, individual subsets of the pixels may be filtered using different intensity thresholds. An example434accompanying the operation432shows the intensity image418. The example434also shows that the pixels in the first subset428are compared to a first intensity threshold and that the pixels in the second subset430are compared to a second intensity threshold. The thresholds may be determined to filter out an increased number of pixels that are not associated with a detected object proximate the vehicle406, while retaining a maximum number of pixels that are associated with such detected objects. For instance, the first threshold may be relatively higher such that more pixels farther away are filtered, whereas the second threshold is relatively lower, such that more pixels associated with nearer objects are kept, thereby retaining more information about objects closer to the sensor. At an operation436, the process400can include generating filtered data. For example, the operation436(and/or the operation432) can include comparing an intensity value, e.g., a measured intensity value contained in the intensity image418, to the threshold discussed in connection with the operation432. In examples, if the measured intensity associated with the intensity pixel is lower than the threshold intensity, the pixel is discarded, removed, ignored, or otherwise filtered out (e.g., in either one or both of the intensity image418and the depth image416). Alternatively, if the measured intensity is equal to or greater than the threshold intensity, the pixel is retained. In an example434accompanying the operation432, filtered data can include a filtered depth image440and a filtered intensity image442. As illustrated, the filtered depth image440and the filtered intensity image442may include information about depth pixels and intensity pixels associated with the first object420(1) in the dynamic range of the time-of-flight sensor(s)408, but may exclude pixels associated with background elements or with objects outside the dynamic range, such as the second object420(2). With particular reference to the intensity image418and the filtered intensity image442, note that the intensity pixel422(2)′ is filtered out, despite having a higher intensity than some pixels retained from the second subset430. FIG.5depicts a block diagram of an example system500for implementing the techniques discussed herein. In at least one example, the system500can include a vehicle502, which can be similar to (or the same as) the vehicle102described above with reference toFIG.1. In the illustrated example500, the vehicle502is an autonomous vehicle; however, the vehicle502can be any other type of vehicle.
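A sketch of the per-subset filtering of the example434, assuming the subset labels and their intensity thresholds are already known; the threshold values are illustrative only.

```python
import numpy as np

def filter_pixels(depth: np.ndarray, intensity: np.ndarray,
                  labels: np.ndarray, thresholds: dict[int, float]) -> np.ndarray:
    """Return a boolean keep-mask: a pixel survives if its intensity meets the
    threshold assigned to its subset (label)."""
    keep = np.zeros(depth.shape, dtype=bool)
    for label, threshold in thresholds.items():
        in_subset = labels == label
        keep |= in_subset & (intensity >= threshold)
    return keep

depth = np.array([[36.0, 36.0, 12.0],
                  [36.0, 12.0, 12.0]])
intensity = np.array([[1500.0, 400.0, 300.0],
                      [900.0, 250.0, 80.0]])
labels = np.array([[0, 0, 1],
                   [0, 1, 1]])
# The far subset 0 uses a higher threshold than the nearer subset 1.
keep = filter_pixels(depth, intensity, labels, thresholds={0: 1000.0, 1: 100.0})
print(keep)
# Filtered images simply zero out (or otherwise drop) the rejected pixels.
print(np.where(keep, depth, 0.0))
```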
The vehicle502can include one or more computing devices504, one or more sensor systems506, which may include one or more sensor computing devices508, one or more emitter(s)510, one or more communication connections512, at least one direct connection514(e.g., for physically coupling with the vehicle502to exchange data and/or to provide power), and one or more drive modules516. In some instances, the vehicle502can include more or fewer instances of the computing device(s)504. The one or more sensor systems506can be configured to capture sensor data associated with an environment. The vehicle computing device(s)504can include one or more processors518and memory520communicatively coupled with the one or more processors518. In at least one instance, the one or more processors518can be similar to the processor106and the memory520can be similar to the memory108described above with reference toFIG.1. In the illustrated example, the memory520of the computing device(s)504stores a localization component522, a perception component524, a planning component526, one or more system controllers528, and one or more maps530. Though depicted as residing in the memory520for illustrative purposes, it is contemplated that the localization component522, the perception component524, the planning component526, and the one or more system controllers528can additionally, or alternatively, be accessible to the computing device(s)504(e.g., stored in a different component of vehicle502and/or be accessible to the vehicle502(e.g., stored remotely)). In at least one example, the localization component522can include functionality to receive data from the sensor system(s)506to determine a position of the vehicle502. In instances described herein, in which the sensor system(s) include(s) a time-of-flight sensor, the localization component522can receive data, e.g., raw data, such as quadrature data, processed data, such as intensity and/or depth information, or the like. In other implementations, the localization component522can include and/or request/receive a three-dimensional map, e.g., of the map(s)530of an environment and can continuously determine a location of the autonomous vehicle within the map. In some instances, the localization component522can use SLAM (simultaneous localization and mapping) or CLAMS (calibration, localization and mapping, simultaneously) to receive image data, such as from the time-of-flight sensor, LIDAR data, RADAR data, SONAR data, IMU data, GPS data, wheel encoder data, or any combination thereof, and the like to accurately determine a location of the autonomous vehicle502. In some instances, the localization component522can provide data to various components of the vehicle502to determine an initial position of an autonomous vehicle for generating a candidate trajectory, as discussed herein. In some examples, the perception component524can include functionality to perform object detection, segmentation, and/or classification. In some examples, the perception component524can provide processed sensor data that indicates a presence of an entity that is proximate to the vehicle502and/or a classification of the entity as an entity type (e.g., car, pedestrian, cyclist, building, tree, road surface, curb, sidewalk, unknown, etc.). In additional and/or alternative examples, the perception component524can provide processed sensor data that indicates one or more characteristics associated with a detected entity and/or the environment in which the entity is positioned.
In some examples, characteristics associated with an entity can include, but are not limited to, an x-position (global position), a y-position (global position), a z-position (global position), an orientation, an entity type (e.g., a classification), a velocity of the entity, an extent of the entity (size), etc. Characteristics associated with the environment can include, but are not limited to, a presence of another entity in the environment, a state of another entity in the environment, a time of day, a day of a week, a season, a weather condition, an indication of darkness/light, etc. In some instances, the planning component526can determine a path for the vehicle502to follow to traverse through an environment. For example, the planning component526can determine various routes and trajectories and various levels of detail. For example, the planning component526can determine a route to travel from a first location (e.g., a current location) to a second location (e.g., a target location). For the purpose of this discussion, a route can be a sequence of waypoints for traveling between two locations. As non-limiting examples, waypoints include streets, intersections, global positioning system (GPS) coordinates, etc. Further, the planning component526can generate an instruction for guiding the autonomous vehicle along at least a portion of the route from the first location to the second location. In at least one example, the planning component526can determine how to guide the autonomous vehicle from a first waypoint in the sequence of waypoints to a second waypoint in the sequence of waypoints. In some examples, the instruction can be a trajectory, or a portion of a trajectory. In some examples, multiple trajectories can be substantially simultaneously generated (i.e., within technical tolerances) in accordance with a receding horizon technique. A single trajectory of the multiple trajectories in a receding horizon having the highest confidence level may be selected to operate the vehicle. In other examples, the planning component526can alternatively, or additionally, use data from the perception component524to determine a path for the vehicle502to follow to traverse through an environment. For example, the planning component526can receive data from the perception component524regarding objects associated with an environment. Using this data, the planning component526can determine a route to travel from a first location (e.g., a current location) to a second location (e.g., a target location) to avoid objects in an environment. In at least one example, the computing device(s)504can include one or more system controllers528, which can be configured to control steering, propulsion, braking, safety, emitters, communication, and other systems of the vehicle502. These system controller(s)528can communicate with and/or control corresponding systems of the drive module(s)516and/or other components of the vehicle502, which may be configured to operate in accordance with a trajectory provided from the planning system526. In some examples, the one or more maps530can be stored on a remote computing device. In some examples, multiple maps530can be stored based on, for example, a characteristic (e.g., type of entity, time of day, day of week, season of the year, etc.). Storing multiple maps530can have similar memory requirements, but increase the speed at which data in a map can be accessed. 
In at least one example, the sensor system(s)506can be similar to the sensor system(s)104described above with reference toFIG.1. The sensor system(s)506can include time-of-flight sensors, location sensors (e.g., GPS, compass, etc.), inertial sensors (e.g., inertial measurement units (IMUs), accelerometers, magnetometers, gyroscopes, etc.), LIDAR sensors, RADAR sensors, SONAR sensors, infrared sensors, cameras (e.g., RGB, IR, intensity, depth, etc.), microphone sensors, environmental sensors (e.g., temperature sensors, humidity sensors, light sensors, pressure sensors, etc.), ultrasonic transducers, wheel encoders, etc. The sensor system(s)506can include multiple instances of each of these or other types of sensors. For instance, the time-of-flight sensors can include individual time-of-flight sensors located at the corners, front, back, sides, and/or top of the vehicle502. As another example, the camera sensors can include multiple cameras disposed at various locations about the exterior and/or interior of the vehicle502. The sensor system(s)506can provide input to the computing device(s)504. The sensor system(s)506can include the sensor computing device(s)508, which can include one or more processors532and memory534communicatively coupled with the one or more processors532. The one or more processors532can be similar to the processor(s)106and/or the processor(s)518and/or the memory534can be similar to the memory108and/or the memory520, described above. In the illustrated example, the memory534of the sensor system(s)506can store a depth determination component536, a disambiguation component538, a pixel filtering component540, and a data transmission component542. Though depicted as residing in the memory534for illustrative purposes, it is contemplated that the depth determination component536, the disambiguation component538, the pixel filtering component540, and/or the data transmission component542can additionally, or alternatively, be accessible to the sensor system(s)506(e.g., stored in a different component of vehicle502and/or be accessible to the vehicle502(e.g., stored remotely)). Moreover, although the depth determination component536, the disambiguation component538, the pixel filtering component540, and the data transmission component542are illustrated as being stored in and/or part of the sensor computing device(s)508, in other implementations any or all of these components may be stored in the memory520. That is, althoughFIG.5illustrates several components as being part of the sensor system(s)506, the processing associated with any or all of those components may be performed other than at the sensor. In one example, the sensor system506may output raw data, e.g., the quadrature data discussed above, for processing in accordance with functionality ascribed herein to one or more of the depth determination component536, the disambiguation component538, the pixel filtering component540, and the data transmission component542, but that processing may be performed other than at the location of the emitter and the receiver. The sensor computing device(s)508, including the depth determination component536, the disambiguation component538, the pixel filtering component540, and the data transmission component542, may be configured to generate and/or process data in many formats.
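For illustration only, a hypothetical wiring of the four sensor-side components named above into a simple pipeline; the class, method names, and stand-in callables are assumptions and do not reflect the actual component interfaces.

```python
class SensorPipeline:
    """Illustrative wiring of the sensor computing device components."""

    def __init__(self, depth_determination, disambiguation, pixel_filtering, data_transmission):
        self.depth_determination = depth_determination
        self.disambiguation = disambiguation
        self.pixel_filtering = pixel_filtering
        self.data_transmission = data_transmission

    def process(self, first_frame, second_frame):
        # Resolve per-pixel depths from the two sensor configurations.
        depths = self.depth_determination(first_frame, second_frame)
        # Flag returns that lie beyond the nominal range of the first configuration.
        flags = self.disambiguation(first_frame, second_frame, depths)
        # Remove low-intensity / out-of-range pixels before handing data downstream.
        filtered = self.pixel_filtering(depths, flags, second_frame)
        # Send the filtered data on to localization, perception, and planning.
        return self.data_transmission(filtered)

# Usage with stand-in callables (real components would implement the processes above).
pipeline = SensorPipeline(
    depth_determination=lambda f1, f2: {"depths": [16.2]},
    disambiguation=lambda f1, f2, d: {"wrap_around": [False]},
    pixel_filtering=lambda d, flags, f2: {"kept_pixels": 1},
    data_transmission=lambda filtered: filtered,
)
print(pipeline.process(first_frame=None, second_frame=None))  # {'kept_pixels': 1}
```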
For example, and as noted above, the sensor computing device(s)508can measure a phase shift between the carrier and the response carrier and/or perform numerical integration calculations to determine the sensor data in the quadrature format. In other examples, the sensor computing device(s)508can determine an intensity and depth format of the sensor data. For purposes of illustration only, the sensor system(s)506can determine the sensor data in the intensity and depth format where an individual pixel in the sensor data is associated with an 8-bit value for the intensity and a 12-bit value for the depth. In some implementations, the sensor computing device(s)508can also determine the sensor data in an unprojected format. For example, an unprojection can refer to a transformation from a two-dimensional frame (or a 2.5-dimensional frame) of reference into a three-dimensional frame of reference or a three-dimensional surface, while a projection can refer to a transformation from a three-dimensional frame of reference into a two-dimensional frame of reference. In some instances, techniques described herein can determine a location of the sensor system(s)506relative to the three-dimensional surface and unproject the data into the three-dimensional frame based at least in part on the depth information, pixel coordinate, intrinsic and extrinsic information associated with the sensor system(s)506(e.g., focal length, center, lens parameters, height, direction, tilt, etc.), and the known location of the sensor system(s)506. In some instances, the depth information can be unprojected into the three-dimensional frame, and the distances between the sensor system(s)506and the various object contact points unprojected into the three-dimensional frame can be determined. In some instances, the unprojected three-dimensional points can correspond to a detailed map representing an environment that has been generated or built up over time using measurements from the sensor system(s)506or other mapping software and/or hardware. Because locations of the object contact points are known with respect to a three-dimensional surface, as the object moves over time (and accordingly, as various frames of object contact points are captured over time), various observations about the object such as orientation, length, width, velocity, etc. also can be determined over time. As used herein, the term “unproject,” “unprojected,” or “unprojecting” can refer to a conversion of two-dimensional data into three-dimensional data, while in some cases, the term “project,” “projected,” or “projecting” can refer to a conversion of three-dimensional data into two-dimensional data. In some instances, determining the various formats of sensor data (e.g., the quadrature format, the intensity and depth format, and the unprojected format) can require different amounts of computational resources to determine and/or require different amounts of bandwidth to transmit. The depth determination component536can be configured to receive depth information over multiple frames and determine depths of surfaces based on that information. For example, the depth determination component536can implement the functionality of the process200described above with reference toFIG.2. For instance, and without limitation, the depth determination component536can determine an estimated depth of a surface based on first sensor data generated in a first sensor configuration having a relatively low modulation frequency.
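A minimal pinhole-camera sketch of the unprojection described above, assuming known focal lengths and principal point; a real sensor model would also account for lens distortion and the sensor's extrinsic pose on the vehicle.

```python
import numpy as np

def unproject(u: float, v: float, depth: float,
              fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Convert a pixel coordinate (u, v) and a depth along the optical axis
    into a 3D point in the sensor frame, using a simple pinhole model."""
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.array([x, y, depth])

# A pixel near the image center at 12 m maps to a point almost straight ahead.
point = unproject(u=330.0, v=245.0, depth=12.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(point)  # approximately [0.24, 0.12, 12.0]
```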
The depth determination component536can also determine candidate depths for a surface based on second sensor data generated in a second sensor configuration having a relatively high modulation frequency. The depth determination component536can also identify one of the candidate depths as the measured depth for the surface, e.g., based on the estimated depth. In some examples, the first modulation frequency and the second modulation frequency may be determined based on a nominal maximum depth, or non-ambiguous range, associated with each. For example, the nominal maximum depth may be inversely proportional to the modulation frequency. The modulation frequency may also determine the wavelength of the carrier signal. In some examples, the first modulation frequency may be relatively low and have a relatively high first nominal maximum depth, and the second modulation frequency may be relatively high and have a relatively low second nominal maximum depth. As described herein, sensor data captured for a modulation frequency resulting in a relatively larger nominal maximum depth will have more error than sensor data captured for a modulation frequency resulting in a relatively shorter nominal maximum depth. The first nominal maximum depth may be two or more times that of the second nominal maximum depth, and, in some instances, the first nominal maximum depth and the second nominal maximum depth may be coprime. Moreover, the first modulation frequency may be selected such that the first nominal maximum depth is larger than a distance at which the sensor is likely to sense most objects. The disambiguation component538can be configured to receive depth information over multiple frames and determine a depth of objects according to those frames. For example, as detailed herein, the disambiguation component538can determine whether a return is beyond a nominal maximum depth of a sensor in a first sensor configuration having a relatively large nominal maximum depth. In some examples, the disambiguation component538may implement the process300discussed above with reference toFIG.3. In some instances, the disambiguation component538can determine first candidate depths (e.g., distances) based on the depth data captured at a first modulation frequency and second candidate depths (e.g., distances) based on depth data captured at a second modulation frequency. The disambiguation component538may determine the depth corresponding to one of the first candidate depths and to one of the second candidate depths as an actual depth for each pixel. In some instances, the first nominal maximum depth and the second nominal maximum depth may be coprime. In implementations described herein, the disambiguation component538can further disambiguate the depth using error measurements for the two (or more) modulation frequencies, e.g., by weighting more heavily the return from the modulation frequency with the lower non-ambiguous range. The disambiguation component538can also determine whether measured surfaces are beyond the relatively larger nominal maximum depth associated with the sensor, e.g., by comparing an intensity of a surface to a threshold intensity. The pixel filtering component540can be configured to receive sensor data generated by the sensor system(s)506, e.g., by a time-of-flight sensor, and filter the sensor data to remove certain pixels. For example, and without limitation, the pixel filtering component540can implement the process400detailed above with reference toFIG.4.
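A sketch of the error-weighted combination mentioned for the disambiguation component538, assuming the closest pair of candidate depths has already been found; the weighting favors the higher-modulation-frequency (lower non-ambiguous range) candidate, and the weight value is an assumption.

```python
def weighted_depth(candidate_low: float, candidate_high: float,
                   weight_high: float = 0.8) -> float:
    """Combine the closest pair of candidate depths, weighting the
    higher-frequency (more accurate, smaller unambiguous range) candidate
    more heavily than the lower-frequency candidate."""
    weight_low = 1.0 - weight_high
    return weight_high * candidate_high + weight_low * candidate_low

# Closest candidates of 16.3 m (low frequency) and 16.1 m (high frequency)
# combine to a depth dominated by the more accurate 16.1 m value.
print(round(weighted_depth(candidate_low=16.3, candidate_high=16.1), 2))  # 16.14
```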
In examples described herein, filtering sensor data can include retaining pixels associated with sensed objects and rejecting pixels associated with background objects and/or objects beyond a nominal maximum depth associated with the sensor. For example, and as detailed herein, data generated by time-of-flight sensors can include noise, especially from stray light caused by a number of factors. Removing noisy pixels can provide down-stream systems with improved data. In examples described herein, pixel noise can be particularly problematic in implementations in which a distance to objects in the environment is required, e.g., to safely travel through an environment relative to such objects. The data transmission component542can transmit the sensor data from the sensor computing device(s)508, e.g., to the localization component522, the perception component524, and/or the planning component526. The vehicle502can also include the emitter(s)510for emitting light and/or sound, as described above. The emitter(s)510in this example include interior audio and visual emitters to communicate with passengers of the vehicle502. By way of example and not limitation, interior emitters can include speakers, lights, signs, display screens, touch screens, haptic emitters (e.g., vibration and/or force feedback), mechanical actuators (e.g., seatbelt tensioners, seat positioners, headrest positioners, etc.), and the like. The emitter(s)510in this example also include exterior emitters. By way of example and not limitation, the exterior emitters in this example include lights to signal a direction of travel or other indicator of vehicle action (e.g., indicator lights, signs, light arrays, etc.), and one or more audio emitters (e.g., speakers, speaker arrays, horns, etc.) to audibly communicate with pedestrians or other nearby vehicles, one or more of which may comprise acoustic beam steering technology. The vehicle502can also include the communication connection(s)512that enable communication between the vehicle502and one or more other local or remote computing device(s). For instance, the communication connection(s)512can facilitate communication with other local computing device(s) on the vehicle502and/or the drive module(s)516. Also, the communication connection(s)512can allow the vehicle to communicate with other nearby computing device(s) (e.g., other nearby vehicles, traffic signals, etc.). The communications connection(s)512can also enable the vehicle502to communicate with a remote teleoperations computing device or other remote services. The communications connection(s)512can include physical and/or logical interfaces for connecting the computing device(s)504to another computing device or an external network (e.g., the Internet). For example, the communications connection(s)512can enable Wi-Fi-based communication such as via frequencies defined by the IEEE 802.11 standards, short range wireless frequencies such as Bluetooth, cellular communication (e.g., 3G, 4G, 4G LTE, 5G, etc.) or any suitable wired or wireless communications protocol that enables the respective computing device to interface with the other computing device(s). In at least one example, the vehicle502can include one or more drive modules516. In some examples, the vehicle502can have a single drive module516. In at least one example, if the vehicle502has multiple drive modules516, individual drive modules516can be positioned on opposite ends of the vehicle502(e.g., the front and the rear, etc.).
In at least one example, the drive module(s)516can include one or more sensor systems to detect conditions of the drive module(s)516and/or the surroundings of the vehicle502. By way of example and not limitation, the sensor system(s) can include one or more wheel encoders (e.g., rotary encoders) to sense rotation of the wheels of the drive modules, inertial sensors (e.g., inertial measurement units, accelerometers, gyroscopes, magnetometers, etc.) to measure orientation and acceleration of the drive module, cameras or other image sensors, ultrasonic sensors to acoustically detect objects in the surroundings of the drive module, LIDAR sensors, RADAR sensors, etc. Some sensors, such as the wheel encoders can be unique to the drive module(s)516. In some cases, the sensor system(s) on the drive module(s)516can overlap or supplement corresponding systems of the vehicle502(e.g., sensor system(s)506). The drive module(s)516can include many of the vehicle systems, including a high voltage battery, a motor to propel the vehicle, an inverter to convert direct current from the battery into alternating current for use by other vehicle systems, a steering system including a steering motor and steering rack (which can be electric), a braking system including hydraulic or electric actuators, a suspension system including hydraulic and/or pneumatic components, a stability control system for distributing brake forces to mitigate loss of traction and maintain control, an HVAC system, lighting (e.g., lighting such as head/tail lights to illuminate an exterior surrounding of the vehicle), and one or more other systems (e.g., cooling system, safety systems, onboard charging system, other electrical components such as a DC/DC converter, a high voltage junction, a high voltage cable, charging system, charge port, etc.). Additionally, the drive module(s)516can include a drive module controller which can receive and preprocess data from the sensor system(s) and to control operation of the various vehicle systems. In some examples, the drive module controller can include one or more processors and memory communicatively coupled with the one or more processors. The memory can store one or more modules to perform various functionalities of the drive module(s)516. Furthermore, the drive module(s)516also include one or more communication connection(s) that enable communication by the respective drive module with one or more other local or remote computing device(s). The processor(s)518of the vehicle502, the processor(s)532of the sensor computing device(s), and/or the processor(s)106of the sensor system(s)104can be any suitable processor capable of executing instructions to process data and perform operations as described herein. By way of example and not limitation, the processor(s)518,532,106can comprise one or more Central Processing Units (CPUs), Graphics Processing Units (GPUs), or any other device or portion of a device that processes electronic data to transform that electronic data into other electronic data that can be stored in registers and/or memory. In some examples, integrated circuits (e.g., ASICs, etc.), gate arrays (e.g., FPGAs, etc.), and other hardware devices can also be considered processors in so far as they are configured to implement encoded instructions. The memory520,534,108are examples of non-transitory computer-readable media. 
The memory520,534,108can store an operating system and one or more software applications, instructions, programs, and/or data to implement the methods described herein and the functions attributed to the various systems. In various implementations, the memory520,534,108can be implemented using any suitable memory technology, such as static random-access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory capable of storing information. The architectures, systems, and individual elements described herein can include many other logical, programmatic, and physical components, of which those shown in the accompanying figures are merely examples that are related to the discussion herein. In some instances, aspects of some or all of the components discussed herein can include any models, algorithms, and/or machine learning algorithms. For example, in some instances, the components in the memory520,534,108can be implemented as a neural network. As described herein, an exemplary neural network is a biologically inspired algorithm which passes input data through a series of connected layers to produce an output. Each layer in a neural network can also comprise another neural network or can comprise any number of layers (whether convolutional or not). As can be understood in the context of this disclosure, a neural network can use machine learning, which can refer to a broad class of such algorithms in which an output is generated based on learned parameters. Although discussed in the context of neural networks, any type of machine learning can be used consistent with this disclosure. For example, machine learning algorithms can include, but are not limited to, regression algorithms (e.g., ordinary least squares regression (OLSR), linear regression, logistic regression, stepwise regression, multivariate adaptive regression splines (MARS), locally estimated scatterplot smoothing (LOESS)), instance-based algorithms (e.g., ridge regression, least absolute shrinkage and selection operator (LASSO), elastic net, least-angle regression (LARS)), decision tree algorithms (e.g., classification and regression tree (CART), iterative dichotomiser 3 (ID3), Chi-squared automatic interaction detection (CHAID), decision stump, conditional decision trees), Bayesian algorithms (e.g., naïve Bayes, Gaussian naïve Bayes, multinomial naïve Bayes, average one-dependence estimators (AODE), Bayesian belief network (BNN), Bayesian networks), clustering algorithms (e.g., k-means, k-medians, expectation maximization (EM), hierarchical clustering), artificial neural network algorithms (e.g., perceptron, back-propagation, Hopfield network, Radial Basis Function Network (RBFN)), deep learning algorithms (e.g., Deep Boltzmann Machine (DBM), Deep Belief Networks (DBN), Convolutional Neural Network (CNN), Stacked Auto-Encoders), Dimensionality Reduction Algorithms (e.g., Principal Component Analysis (PCA), Principal Component Regression (PCR), Partial Least Squares Regression (PLSR), Sammon Mapping, Multidimensional Scaling (MDS), Projection Pursuit, Linear Discriminant Analysis (LDA), Mixture Discriminant Analysis (MDA), Quadratic Discriminant Analysis (QDA), Flexible Discriminant Analysis (FDA)), Ensemble Algorithms (e.g., Boosting, Bootstrapped Aggregation (Bagging), AdaBoost, Stacked Generalization (blending), Gradient Boosting Machines (GBM), Gradient Boosted Regression Trees (GBRT), Random Forest), SVM (support vector machine), supervised learning, unsupervised learning,
semi-supervised learning, etc. Additional examples of architectures include neural networks such as ResNet70, ResNet101, VGG, DenseNet, PointNet, and the like. FIGS.6-9illustrate example processes in accordance with embodiments of the disclosure. These processes, as well as the processes ofFIGS.2-4, are illustrated as logical flow graphs, each operation of which represents a sequence of operations that can be implemented in hardware, software, or a combination thereof. In the context of software, the operations represent computer-executable instructions stored on one or more computer-readable storage media that, when executed by one or more processors, perform the recited operations. Generally, computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types. The order in which the operations are described is not intended to be construed as a limitation, and any number of the described operations can be combined in any order and/or in parallel to implement the processes. FIG.6depicts an example process600for determining depths of surfaces (or objects) in an environment using a sensor system. For example, some or all of the process600can be performed by the sensor system104and/or by one or more components illustrated inFIG.5, as described herein. For example, some or all of the process600can be performed by the sensor system(s)506, including, but not limited to, the depth determination component536. At an operation602, the process600includes receiving, for a first sensor configuration, first depth information for an object in an environment. For example, techniques described herein may be useful for improving depth determinations based on sensor data received from a time-of-flight sensor. In these examples, the time-of-flight sensor generates raw data in the form of quadrature data, which may be used to determine depth and/or intensity values for pixels of a receiver of the time-of-flight sensor. So, for example, the operation602may include receiving the quadrature data, e.g., as a group of four frames, each corresponding to one of four different phases of reading return signals, and processing those four frames to generate a single frame. The single frame can include intensity and depth data for each pixel in the frame. As noted, the first depth information is associated with a first sensor configuration. For example, the configuration may include a first integration time, a first illumination power, and/or a first modulation frequency. As described herein, the modulation frequency may be inversely proportional to a nominal maximum depth associated with the first sensor configuration. In examples, the first sensor configuration includes a relatively low modulation frequency, e.g., 5 MHz or less, to provide a relatively large nominal maximum depth in the first sensor configuration. At an operation604, the process600can include determining, based on the first depth information, an estimated depth of the object. As noted above, the first sensor configuration is selected to provide a relatively large nominal maximum sensor depth. In examples, the nominal maximum sensor depth is selected such that most objects detectable by the sensor are within the nominal maximum sensor depth. For example, the nominal maximum sensor depth may be 25 meters or more.
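The description notes that four quadrature frames are processed into a single frame of intensity and depth at the operation602but does not spell out the arithmetic; the sketch below uses the common four-phase demodulation for continuous-wave time-of-flight sensors (phase from the arctangent of quadrature differences, depth proportional to phase) as an assumption.

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def quadrature_to_depth_intensity(a0, a90, a180, a270, modulation_frequency_hz):
    """Common 4-phase demodulation for a continuous-wave time-of-flight pixel.

    a0..a270 are per-pixel samples taken at 0, 90, 180 and 270 degree reference
    phases (arrays or scalars). Returns (depth_m, intensity).
    """
    i = np.asarray(a0, dtype=float) - np.asarray(a180, dtype=float)
    q = np.asarray(a90, dtype=float) - np.asarray(a270, dtype=float)
    phase = np.mod(np.arctan2(q, i), 2.0 * np.pi)          # 0 .. 2*pi
    max_depth = SPEED_OF_LIGHT / (2.0 * modulation_frequency_hz)
    depth = phase / (2.0 * np.pi) * max_depth               # wraps at max_depth
    intensity = 0.5 * np.sqrt(i * i + q * q)                # signal amplitude
    return depth, intensity

# One pixel measured at a 20 MHz modulation frequency.
depth, intensity = quadrature_to_depth_intensity(100.0, 140.0, 60.0, 60.0, 20e6)
print(round(float(depth), 3), round(float(intensity), 1))  # ~1.321 m, ~44.7
```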
Because of this relatively large nominal maximum depth, the process600can estimate the depth of a measured object as the measured depth. For instance, if the first depth information indicates that an object is at 16 meters and the nominal maximum depth associated with the first sensor configuration is 30 meters, the operation604may estimate the depth of the object as 16 meters, as opposed to the sum of 16 meters and some multiple of the nominal maximum depth, e.g., 46 meters, 76 meters, or the like, as described herein. At an operation606, the process600can include receiving, for a second sensor configuration, second depth information for the object. The operation606may be substantially the same as the operation602, but the data may be captured at a second sensor configuration different from the first sensor configuration. In some examples, the second configuration may include at least a different modulation frequency. For instance, the modulation frequency associated with the second sensor configuration may be significantly higher than the modulation frequency associated with the first sensor configuration. In some examples, the first image data may represent a first image frame and the second image data may represent a second image frame. In some examples, the first image frame and the second image frame may be successive image frames in a stream of image frames captured by the sensor. At an operation608, the process600can include determining, based on the second depth information, a measured depth of the object. For example, the operation608can include determining candidate depths for the object based on the second depth information. For example, because the second sensor configuration has a relatively smaller nominal maximum depth, the sensed depth of the surface in the second configuration is more likely to be ambiguous. Considering the example above in which the estimated depth is 16 meters, the second depth information may indicate a sensed depth of 0.2 meters. In this example, if the nominal maximum depth associated with the second sensor configuration is 4 meters, the actual depth of the surface could be 0.2 meters (the sensed depth) or could be the sum of the sensed depth and a multiple of the nominal maximum depth of 4 meters. Accordingly, the depth of the surface may be 0.2 meters, 4.2 meters, 8.2 meters, 12.2 meters, 16.2 meters, 20.2 meters, and so forth. In this example, 16.2 meters is the candidate depth based on the second depth information closest to the estimated depth. Thus, the operation608determines the measured depth of the object to be 16.2 meters. According to examples described herein, the first depth information is used to determine an estimate, but the second depth information is used as the actual measured value of the surface. Thus, the process600may not require depth disambiguation techniques, which can require additional processing power and/or result in errors or the like. Although examples used to discussFIG.6can include using one first frame, captured at a first sensor system configuration, and one second frame, captured at a second sensor system configuration, to determine information about objects in the environment, other implementations may consider more frames. For example, the first frame and the second frame may be used to determine the blended intensity, but one or more different frames may be used to determine the estimated, candidate, and measured depth values. For instance, two of the frames may be captured at the same modulation frequency.
For example, the multiple returns at the same modulation frequency may be used to verify each other. This may be particularly beneficial in the case of a rapidly-changing environment, e.g., because the vehicle is moving relatively quickly and/or because many different objects are in the environment. Similarly, more than two frames may provide even more robust intensity information than the two frames provide in the foregoing example(s). FIG.7depicts an example process700of determining whether a sensed surface is beyond a nominal range, e.g., a nominal maximum depth, of the sensor. In some examples, the process700may be a part of the process300, although the process300does not require the process700and the process700may have additional uses. At an operation702, the process700includes receiving first sensor data including first depth information and first intensity information with a sensor in a first configuration. For example, the first sensor data may be quadrature data generated by the sensor in response to receiving a reflected carrier signal. In other examples, the first sensor data may be a depth image and/or an intensity image, e.g., generated from quadrature data as described herein. In some examples, the first sensor configuration can include a relatively small modulation frequency, e.g., on the order of about 10 MHz or smaller or 5 MHz or smaller. At an operation704, the process700includes receiving second sensor data including second depth information and second intensity information with the sensor in a second configuration. For example, the second sensor data may be quadrature data generated by the sensor in response to receiving a reflected carrier signal in the second configuration. In other examples, the second sensor data may be a depth image and/or an intensity image, e.g., generated from quadrature data as described herein. In some examples, the second sensor configuration can include a relatively high modulation frequency, e.g., on the order of about 15 MHz or higher or 20 MHz or higher. At an operation706, the process700includes determining a disambiguated depth of the object based on the first depth information and the second depth information. For example, the disambiguated depth may be determined by determining a plurality of first candidate depths, based on the first depth information, and a plurality of second candidate depth, based on the second depth information. The disambiguated depth may be based on the first candidate depth and the second candidate depth that are closest to each other. For example, the operation706may be similar to or the same as the operation314inFIG.3. At an operation708, the process700includes determining whether the disambiguated depth is beyond a nominal range of the sensor in the first configuration. For example, the operation708may include comparing the disambiguated depth to the relatively large nominal maximum depth associated with the first sensor configuration. If, at the operation708it is determined that the disambiguated depth is beyond the nominal range, the process700includes, as an operation710, determining an intensity of the object from the intensity information. As described herein, the first sensor data and/or the second sensor data can include both depth and intensity information. The operation710can include determining, from the sensor data, an intensity of the surface determined to be beyond the nominal range of the sensor. 
For example, the operation710can include determining the intensity from a depth image associated with one or both of the first sensor data and the second sensor data. At an operation712, the process700includes determining whether the intensity exceeds a threshold intensity. For example, the operation712can include comparing the intensity determined at the operation710with a threshold intensity. As described herein, the first sensor configuration (or one of the sensor configurations) is associated with a relatively low modulation frequency, resulting in a relatively large nominal range for the sensor. Accordingly, most objects beyond the nominal range of the sensor in the first configuration are not detectable by the sensor. However, highly reflective objects, e.g., retroreflectors, may reflect sufficient emitted light to be sensed by the sensor. So, although most surfaces beyond the nominal range of the sensor are not detectable, highly reflective surfaces that are detectable at such distances may be expected to have relatively high intensities. Thus, the threshold intensity may be selected based on properties of highly reflective objects at significant distances. If, at the operation712, it is determined that the intensity does exceed the threshold intensity, at an operation714, the process700includes identifying the point (or the surface) as being a wrap-around point. That is, the measured surface is confirmed to be beyond the nominal range of the sensor. In examples, the wrap-around point may be filtered out, or the depth (confirmed to be beyond the nominal range) and/or the intensity may be used for other purposes, e.g., to model a surface or object at the distance beyond the nominal range. If, at the operation708, it is determined that the disambiguated depth is not beyond the nominal range of the sensor in the first configuration, and/or if, at the operation712, it is determined that the intensity does not exceed the threshold intensity, the process700can include, as an operation718, determining a depth based on the second depth information. For example, if the disambiguated depth is not beyond the nominal range of the sensor, and/or the intensity does not suggest a highly reflective surface, the return may be determined to have a depth corresponding to the disambiguated depth, a depth determined from the first depth information, a depth determined from the second depth information, and/or from a combination of the depths. In still further examples, the operation718can include disregarding the surface, e.g., by filtering out pixels associated with the surface. FIG.8depicts an example process800of filtering sensor data based at least in part on depth and intensity information. In some examples, the process800may be a part of the process400, although the process400does not require the process800and the process800may have additional uses. At an operation802, the process800includes receiving first sensor data including first depth information and first intensity information with a sensor in a first configuration. For example, the first sensor data may be quadrature data generated by the sensor in response to receiving a reflected carrier signal. In other examples, the first sensor data may be a depth image and/or an intensity image, e.g., generated from quadrature data as described herein. In some examples, the first sensor configuration can include a relatively small modulation frequency, e.g., on the order of about 10 MHz or smaller or 5 MHz or smaller.
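Before continuing with the process800, the wrap-around determination of the process700(operations708-714) described above can be illustrated with a brief, hedged sketch. The helper name and the idea of passing the threshold as a parameter are assumptions, not elements of the disclosure.

# Sketch: a return whose disambiguated depth exceeds the first configuration's nominal
# range is accepted as a genuine far ("wrap-around") surface only if its intensity is
# consistent with a highly reflective object; otherwise an in-range depth is reported
# (here, taken from the second depth information).
from typing import Tuple

def classify_return(disambiguated_depth_m: float,
                    intensity: float,
                    nominal_range_m: float,
                    wraparound_intensity_threshold: float,
                    second_config_depth_m: float) -> Tuple[str, float]:
    if (disambiguated_depth_m > nominal_range_m
            and intensity >= wraparound_intensity_threshold):
        # Confirmed beyond the nominal range: keep (or later filter) as a wrap-around point.
        return "wrap_around", disambiguated_depth_m
    # Either within range, or the intensity does not suggest a retroreflector-like surface.
    return "in_range", second_config_depth_m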
At an operation804, the process800includes receiving second sensor data including second depth information and second intensity information with the sensor in a second configuration. For example, the second sensor data may be quadrature data generated by the sensor in response to receiving a reflected carrier signal in the second configuration. In other examples, the second sensor data may be a depth image and/or an intensity image, e.g., generated from quadrature data as described herein. In some examples, the second sensor configuration can include a relatively high modulation frequency, e.g., on the order of about 15 MHz or higher or 20 MHz or higher. At an operation806, the process800includes determining a measured depth of the object based on the first depth information and the second depth information. In some examples, the depth may be determined using the process200and/or as the disambiguated depth determined using the operation314in the process300. At an operation808, the process800also includes identifying additional pixels at the depth. For example, the process400described above illustrates determining subsets of pixels having the same or similar depths. In some instances, the subsets can include neighboring pixels that have the same depth or a depth within a predetermined range. In some instances, one or more subsets may be associated with pixels beyond a nominal maximum depth of the sensor and one or more subsets may be associated with pixels within the nominal maximum depth. At an operation810, the process800includes determining depth-based intensity thresholds. For example, the operation810can include determining intensity thresholds for the different subsets of the pixels. For instance, and as detailed further above with reference toFIG.4, the threshold intensity may be higher for greater depths (or depth ranges) and lower for shorter depths (or depth ranges). In some instances, a first threshold intensity may be applied to all pixels beyond the nominal maximum depth of the sensor in the first configuration (as described herein) and a second, lower threshold intensity may be applied to all pixels within the nominal maximum depth. At an operation812, the process800includes determining whether an intensity of a pixel exceeds the threshold intensity. For example, the operation812can include determining, from the first intensity information or the second intensity information, which may be a first intensity image and a second intensity image, whether pixels in a subset of pixels (determined at the operation808) meet or exceed the associated threshold intensity determined at the operation810. If, at the operation812, it is determined that the intensity of the pixel does exceed the threshold intensity, at an operation814, the process800includes retaining the pixel in filtered data. Alternatively, if, at the operation812, it is determined that the intensity of the pixel does not exceed the threshold intensity, at an operation816the process800includes filtering the pixel, e.g., by removing the pixel from data subsequently used. According to the process800, the amount of data transmitted from the sensor may be reduced, thereby reducing processing requirements for systems and components using the sensor data. For example, the process800may retain only sensor data that is relevant for controlling an autonomous vehicle. Moreover, by filtering pixels based on varying intensity thresholds determined based on pixel depth, the accuracy of filtering may be improved.
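The depth-dependent filtering of the operations808-816can likewise be sketched. The two threshold values below are placeholders chosen for illustration, not values taken from the disclosure.

# Sketch: pixels are retained only if their intensity meets the threshold assigned to
# their depth, with a higher threshold applied beyond the first configuration's
# nominal maximum depth and a lower threshold applied within it.
import numpy as np

def filter_pixels(depth: np.ndarray,
                  intensity: np.ndarray,
                  nominal_max_depth_m: float,
                  near_threshold: float = 50.0,
                  far_threshold: float = 800.0) -> np.ndarray:
    """Return a boolean mask of pixels to retain in the filtered data."""
    thresholds = np.where(depth > nominal_max_depth_m, far_threshold, near_threshold)
    return intensity >= thresholds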
FIG.9depicts an example process900for controlling an autonomous vehicle relative to objects in an environment, as discussed herein. For example, some or all of the process900can be performed by the vehicle102ofFIG.1and/or the vehicle502and its related components illustrated in and discussed with reference to,FIG.5. For example, some or all of the process900can be performed by the localization component522, the perception component524, the planning component526, and/or the one or more system controllers528. At operation902, the process can include receiving sensor data, including depth and/or intensity information. For example, the sensor data may be received from a time-of-flight sensor. The sensor data may be raw data, e.g., quadrature data, from which the depth and/or intensity information can be determined in accordance with techniques described herein, or the sensor data may include the intensity and/or depth values. The sensor data may also be received from the time-of-flight sensor on a frame-by-frame basis or the sensor data may be a resolved frame (or data associated therewith). At operation904, the process900can include determining, based at least in part on the sensor data, a distance to an object in the environment. For example, the process200and/or the process300may be used to determine a depth of an object. Moreover, the localization component522and/or the perception component524may receive the depth and/or intensity data at902and identify objects in the environment at the depth. For example, the vehicle computing device(s)504may classify objects based on the sensor data and map the objects in the environment relative to the vehicle502. At operation906, the process900can include generating, based on the distance to the object and additional sensor data (e.g., LiDAR data, radar data, vision data), a trajectory relative to the object(s). For example, the planning component526of the vehicle computing device(s)504can further determine relative movement, e.g., velocity and acceleration, of the objects in the environment using one or more sensor modalities, object classification data, and the maps530and/or other information to determine the travel path. In some examples, the travel path may be based at least in part on fused data including data from one or more sensor modalities, including a time-of-flight sensor, LiDAR, radar, or the like. At operation908, the process900can include controlling an autonomous vehicle to follow the travel path. In some instances, the commands generated in the operation908can be relayed to a controller onboard an autonomous vehicle to control the autonomous vehicle to drive the travel path. Although discussed in the context of an autonomous vehicle, the process900, and the techniques and systems described herein, can be applied to a variety of systems utilizing sensors. The various techniques described herein can be implemented in the context of computer-executable instructions or software, such as program modules, that are stored in computer-readable storage and executed by the processor(s) of one or more computers or other devices such as those illustrated in the figures. Generally, program modules include routines, programs, objects, components, data structures, etc., and define operating logic for performing particular tasks, or implement particular abstract data types. Other architectures can be used to implement the described functionality, and are intended to be within the scope of this disclosure. 
Furthermore, although specific distributions of responsibilities are defined above for purposes of discussion, the various functions and responsibilities might be distributed and divided in different ways, depending on circumstances. Similarly, software can be stored and distributed in various ways and using different means, and the particular software storage and execution configurations described above can be varied in many different ways. Thus, software implementing the techniques described above can be distributed on various types of computer-readable media, not limited to the forms of memory that are specifically described. EXAMPLE CLAUSES A: An example vehicle includes: a time-of-flight sensor; one or more processors; and memory storing processor-executable instructions that, when executed by the one or more processors, configure the vehicle to perform operations comprising: receiving first depth information and first intensity information generated by the time-of-flight sensor, the first depth information being based, at least in part, on a first modulation frequency of the time-of-flight sensor; receiving second depth information and second intensity information generated by the time-of-flight sensor, the second depth information being based, at least in part, on a second modulation frequency higher than the first modulation frequency; determining, based at least in part on the first depth information, a plurality of first candidate depths for a surface; determining, based at least in part on the second depth information, a plurality of second candidate depths for the surface; determining, based on the plurality of first candidate depths and the plurality of second candidate depths, a disambiguated depth for the surface; determining that the disambiguated depth is greater than a nominal maximum sensor depth associated with the time-of-flight sensor operating at the first modulation frequency; determining, from at least one of the first intensity information or the second intensity information, an intensity associated with the surface; and determining, based at least in part on the disambiguated depth being greater than the nominal maximum sensor depth and the intensity being equal to or greater than a threshold intensity, that an actual depth of the surface is beyond the nominal maximum sensor depth of the time-of-flight sensor operating at the first modulation frequency. B: The vehicle of example A, wherein the determining the disambiguated depth comprises: determining a first candidate depth of the plurality of first candidate depths closest to a second candidate depth of the plurality of second candidate depths; and determining the disambiguated depth based at least in part on at least one of the first candidate depth or the second candidate depth. C: The vehicle of example A or example B, wherein the first modulation frequency is equal to or less than about 10 MHz and the second modulation frequency is equal to or greater than about 15 MHz. D: The vehicle of any one of example A through example C, the operations further comprising: determining, based on the disambiguated depth, information about an object at a position beyond the nominal maximum sensor depth; determining a trajectory relative to the object; and controlling the vehicle to travel along the trajectory. 
E: The vehicle of any one of example A through example D, the operations further comprising: generating filtered data excluding depth information associated with the surface, based on the actual depth of the surface being beyond the nominal maximum sensor depth; determining, based at least in part on the filtered data, one or more objects in an environment of the vehicle; determining a trajectory through the environment relative to the one or more objects; and controlling the vehicle to travel along the trajectory. F: An example method includes: determining, based at least in part on sensor data generated by a sensor, that a depth of a surface is beyond a nominal maximum depth of the sensor, the sensor data including depth information and intensity information; and based at least in part on the depth being greater than the nominal maximum depth of the sensor, one of: determining, based on the intensity information indicating that an intensity associated with the surface is equal to or greater than a threshold intensity, that the surface is beyond the nominal maximum depth of the sensor; or determining, based on the intensity information indicating that an intensity associated with the surface is below the threshold intensity, a measured depth of the surface less than the nominal maximum sensor depth. G: The method of example F, wherein: the sensor data comprises first sensor data including first depth information and first intensity information generated in a first sensor configuration and second sensor data including second depth information and second intensity information generated in a second sensor configuration. H: The method of example F or example G, further comprising: determining the depth of the surface as a disambiguated depth from the first depth information and the second depth information. I: The method of any one of example F through example H, wherein the determining that the surface is beyond the nominal maximum depth of the sensor comprises comparing at least one of a first intensity of the surface from the first intensity information to the threshold intensity or a second intensity of the surface from the second intensity information to the threshold intensity. J: The method of any one of example F through example I, wherein: the first sensor configuration is associated with a first modulation frequency and the second sensor configuration is associated with a second modulation frequency higher than the first modulation frequency; and the determining the measured depth comprises: determining an estimated depth of the surface using the first depth information that is shorter than a first nominal maximum depth associated with the first sensor configuration, and determining the measured depth as a candidate depth of a plurality of candidate depths determined based at least in part on the second depth information, the candidate depth being the one of the plurality of candidate depths closest to the estimated depth. K: The method of any one of example F through example J, wherein the plurality of candidate depths comprises a first candidate depth comprising a sensed depth and a second candidate depth comprising a sum of the sensed depth and a second nominal maximum depth associated with the second sensor configuration. L: The method of any one of example F through example K, wherein the first modulation frequency is equal to or less than about 10 MHz and the second modulation frequency is equal to or greater than about 15 MHz. 
M: The method of any one of example F through example L, further comprising: determining, based at least in part on the surface being located beyond the nominal maximum sensor depth, filtered data excluding the first depth information and the second depth information associated with the surface; determining, based at least in part on the filtered data, one or more objects in an environment of the vehicle; determining a trajectory relative to the one or more objects; and controlling a vehicle to travel along the trajectory. N: The method of any one of example F through example M, further comprising: determining, based at least in part on the actual depth of the surface, a position of an object in an environment of a vehicle; determining a trajectory of the vehicle through the environment relative to the object; and controlling the vehicle to travel along the trajectory. O: An example system includes: one or more processors; and computer-readable storage media storing instructions executable by the one or more processors to perform acts comprising: receiving first sensor data associated with a first sensor configuration including a first modulation frequency, the first sensor data including first depth information; receiving second sensor data associated with a second sensor configuration including a second modulation frequency higher than the first modulation frequency, the second sensor data including second depth information; determining, based at least in part on the first depth information and the second depth information, a disambiguated depth of the surface; and determining, based at least in part on the disambiguated depth being greater than a first nominal maximum depth associated with the first sensor configuration and further based at least in part on an intensity associated with the surface being equal to or above a threshold intensity, that the surface is beyond the first nominal maximum depth associated with the first sensor configuration. P: The system of example O, wherein the first sensor data further comprises first intensity information and the second sensor data further comprises second intensity information, the acts further comprising: determining, from at least one of the first intensity information or the second intensity information the intensity associated with the surface; and comparing the intensity to the threshold intensity. Q: The system of example O or example P, the acts further comprising: determining, based at least in part on the surface being located beyond the nominal maximum sensor depth associated with the first configuration, filtered data excluding the first depth information and the second depth information associated with the surface; determining, based at least in part on the filtered data, one or more objects in an environment of the vehicle; determining a trajectory relative to the one or more objects; and controlling a vehicle to travel along the trajectory. R: The system of any one of example O through example Q, the acts further comprising: determining, based on the intensity information being less than the threshold intensity, an actual depth of the surface based on the second depth information, the actual depth of the surface being less than the first nominal maximum depth of the sensor in the first sensor configuration. 
S: The system of any one of example O through example R, further comprising: determining the estimated depth of the surface as a nominal depth determined from the first depth information; and determining the actual depth as one of a plurality of second candidate depths determined from the second depth information and closest to the estimated depth. T: The system of any one of example O through example S, wherein the first sensor data is generated by a first sensor configured in the first sensor configuration and the second sensor data is generated aby a second sensor configured in the second sensor configuration, the first sensor and the second sensor having overlapping fields of view. CONCLUSION While one or more examples of the techniques described herein have been described, various alterations, additions, permutations and equivalents thereof are included within the scope of the techniques described herein. In the description of examples, reference is made to the accompanying drawings that form a part hereof, which show by way of illustration specific examples of the claimed subject matter. It is to be understood that other examples can be used and that changes or alterations, such as structural changes, can be made. Such examples, changes or alterations are not necessarily departures from the scope with respect to the intended claimed subject matter. While the steps herein can be presented in a certain order, in some cases the ordering can be changed so that certain inputs are provided at different times or in a different order without changing the function of the systems and methods described. The disclosed procedures could also be executed in different orders. Additionally, various computations described herein need not be performed in the order disclosed, and other examples using alternative orderings of the computations could be readily implemented. In addition to being reordered, in some instances, the computations could also be decomposed into sub-computations with the same results. | 130,383 |
11861858 | Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present invention. The apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein. DETAILED DESCRIPTION Generally speaking, shipping companies seek to accurately and efficiently detail the load status of each container for which they are responsible. Many companies incorporate imaging systems to provide this analysis (e.g., load monitoring units (LMUs)). However, these traditional imaging systems suffer from a number of drawbacks, such as being unable to effectively localize containers within the system's field of view (FOV). The methods/systems of the present disclosure provide solutions to the localization problems associated with the traditional imaging systems. Namely, the methods/systems of the present disclosure alleviate problems associated with rotation and clipping parameter determination in traditional imaging systems used for ULD localization. For example, a method of the present disclosure includes capturing a set of image data featuring a ULD; locating a fiducial marker proximate to the ULD within the set of image data; cropping the set of image data, based upon the located fiducial marker, to generate a set of marker point data and a set of floor point data; rotating the set of image data based upon the set of marker point data and the set of floor point data; and clipping the rotated set of image data based upon the set of marker point data and the set of floor point data. As used herein, the term “container” shall refer to any container transportable by at least one of a vehicle, a train, a marine vessel, and airplane, and configured to store transportable goods such as boxed and/or unboxed items and/or other types of freight. Accordingly, an example of a container includes an enclosed container fixedly attached to a platform with wheels and a hitch for towing by a powered vehicle. An example of a container also includes an enclosed container removably attached to a platform with wheels and a hitch for towing by a powered vehicle. An example of a container also includes an enclosure that is fixedly attached to a frame of a powered vehicle, such as the case may be with a delivery truck, box truck, etc. As such, while the exemplary embodiment(s) described below may appear to reference one kind of a container, the scope of the invention shall extend to other kinds of container, as defined above. FIG.1is a perspective view, as seen from above, of a load point101within a loading facility that depicts a load monitoring unit (LMU)202having a 3D camera (e.g., a 3D-depth camera) oriented in a direction to capture 3D image data of a shipping container102(also referenced herein as a “ULD” and/or a “unit load device”), in accordance with example embodiments herein. As depicted, the shipping container102has a shipping container type of “AMJ.” Generally, a shipping container is selected from one of several differently dimensioned containers. 
In various embodiments, shipping containers may comprise any type of unit load device (ULD). For example, a shipping container type may be of any ULD type, including, for example, any of an AMJ type, an AAD type, an AKE type, an AYY type, a SAA type, and APE type, or an AQF type. For ULD shipping containers, the first letter (e.g., “A” for “Certified aircraft container”) indicates a specific type of ULD container, such as certified, thermal, etc., the second letter represents base size in terms of dimensions (e.g., “M” for 96×125 inch), and the third letter represents a side contour size and shape (e.g., “J” for a cube shaped ULD container having a diagonal sloping roof portion on one side only). More generally, however, a shipping container may be any aircraft-based shipping container. The load point101may be a predefined search space determined based on the shipping container102size, dimensions, or otherwise configuration and/or the area in which the shipping area is localized. For example, in one embodiment, the predefined search space may be determined based on ULD type, shape, or position within a general area. As shown inFIG.1, for example, the predefined search space is determined based on the size and dimensions of the shipping container102which is of type AMJ. In general, load point101is defined so as to completely (or at least partially) include or image the shipping container102. The load point101may further include a frontal area103that generally defines a front position of the predefined search space and/or the shipping container102. FIG.1additionally depicts, within the load point101, personnel or loaders105and106that load packages104and107into the shipping container102. In the embodiment ofFIG.1, the shipping container102is being loaded by the loaders105with the packages104and107during a loading session. The loading session includes loading a set or group of identified packages into the shipping container102. The loaders105and106and the packages104and107, by movement through the load point101, may generally cause occlusion and interference with the LMU202(as discussed forFIG.2) capturing 3D image data, over time, of the shipping container102. Thus, accurately localizing the container102within the load point101is critical to ensure that improper localization does not further complicate the imaging difficulties posed by occlusion and interference during normal operations of a loading session. FIG.2is a perspective view of the LMU202ofFIG.1, in accordance with example embodiments herein. In various embodiments, the LMU202is a mountable device. Generally, an LMU (e.g., LMU202) comprises camera(s) and a processing board, and is configured to capture data of a loading scene (e.g., a scene including space101). The LMU202may run container fullness estimation and other advanced analytical algorithms. The LMU202may include a mounting bracket252for orienting or otherwise positioning the LMU202within a loading facility associated with the load point101, as described herein. The LMU202may further include one or more processors and one or more memories for processing image data as described herein. For example, the LMU202may include flash memory used for determining, storing, or otherwise processing the imaging data/datasets and/or post-scanning data. In addition, the LMU202may further include a network interface to enable communication with other devices. 
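For reference, the three-letter ULD type designator scheme described above can be decomposed programmatically. The following sketch is illustrative only and maps just the letter meanings given in the text; the dictionaries are otherwise placeholders.

def split_uld_type(code: str) -> dict:
    """Split a ULD designator such as 'AMJ' into its three positional components."""
    container_type = {"A": "Certified aircraft container"}.get(code[0], "unknown type")
    base_size = {"M": "96x125 inch base"}.get(code[1], "unknown base size")
    contour = {"J": "cube with diagonal sloping roof on one side"}.get(code[2], "unknown contour")
    return {"type": container_type, "base": base_size, "contour": contour}

# split_uld_type("AMJ") describes the AMJ container shown in FIG.1.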
The LMU202may include a 3D camera254(also referenced herein as a “Time-of-Flight (ToF) camera”) for capturing, sensing, and/or scanning 3D image data/datasets. For example, in some embodiments, the 3D camera254may include an Infra-Red (IR) projector and a related IR camera. In such embodiments, the IR projector projects a pattern of IR light or beams onto an object or surface, which, in various embodiments herein, may include surfaces or areas of a predefined search space (e.g., load point101) or objects within the predefined search area, such as boxes or packages (e.g., packages104and107) and the storage container102. The IR light or beams may be distributed on the object or surface in a pattern of dots or points by the IR projector, which may be sensed or scanned by the IR camera. A depth-detection app, such as a depth-detection app executing on the one or more processors or memories of the LMU202, can determine, based on the pattern of dots or points, various depth values, for example, depth values of the predefined search area. For example, a near-depth object (e.g., nearby boxes, packages, etc.) may be determined where the dots or points are dense, and distant-depth objects (e.g., far boxes, packages, etc.) may be determined where the points are more spread out. The various depth values may be used by the depth-detection app and/or the LMU202to generate a depth map. The depth map may represent a 3D image of, or contain 3D image data of, the objects or surfaces that were sensed or scanned by the 3D camera254, for example, the load point101and any objects, areas, or surfaces therein. The 3D camera254may also be configured to capture other sets of image data in addition to the 3D image data, such as grayscale image data. The LMU202may further include a photo-realistic camera256for capturing, sensing, or scanning 2D image data. The photo-realistic camera256may be an RGB (red, green, blue) based camera for capturing 2D images having RGB-based pixel data. In some embodiments, the photo-realistic camera256may capture 2D images, and related 2D image data, at the same or similar point in time as the 3D camera254such that the LMU202can have both sets of 3D image data and 2D image data available for a particular surface, object, area, and/or scene at the same or similar instance in time. Further in these embodiments, the LMU202may include a depth alignment module (e.g., as part of the depth detection app) to depth-align 3D image data with 2D image data. In other embodiments, the 3D camera254and the photo-realistic camera256may be a single imaging apparatus configured to capture 3D depth image data simultaneously with 2D image data. Consequently, in these embodiments, the captured 2D images and the corresponding 2D image data may be depth-aligned with the 3D images and 3D image data. The LMU202may also include a processing board258configured to, for example, perform container fullness estimation and other advanced analytical algorithms based on images captured by the cameras254,256. Generally, the processing board258may include one or more processors and one or more computer memories for storing image data, and/or for executing apps that perform analytics or other functions as described herein. The processing board258may also include transceivers and/or other components configured to communicate with external devices/servers. 
The processing board258may thus transmit and/or receive data or other signals to/from external devices/servers before, during, and/or after performing the analytical algorithms described herein. In various embodiments, and as shown inFIG.1, the LMU202may be mounted within a loading facility and oriented in the direction of the loading point101to capture 3D and/or 2D image data of the shipping container102. For example, as shown inFIG.1, the LMU202may be oriented such that the 3D and 2D cameras of the LMU202may capture 3D image data of the shipping container102, e.g., where the LMU202may scan or sense the walls, floor, ceiling, packages, or other objects or surfaces within the load point101to determine the 3D and 2D image data. The image data may be processed by the processing board258of the LMU202(or, in some embodiments, one or more remote processors and/or memories of a server) to implement analysis, functions, such as graphical or imaging analytics, as described by the one or more various flowcharts, block diagrams, methods, functions, or various embodiments herein. It should be noted that the LMU202may capture 3D and/or 2D image data/datasets of a variety of loading facilities or other areas, such that additional loading facilities or areas (e.g., warehouses, etc.) in addition to the predefined search spaces (e.g., load point101) are contemplated herein. In some embodiments, for example, the LMU202may process the 3D and 2D image data/datasets, as scanned or sensed from the 3D camera254and the photo-realistic camera256, for use by other devices (e.g., an external server). For example, the processing board258of the LMU202may process the image data or datasets captured, scanned, or sensed from the load point101. The processing of the image data may generate post-scanning data that may include metadata, simplified data, normalized data, result data, status data, or alert data as determined from the original scanned or sensed image data. In some embodiments, the image data and/or the post-scanning data may be sent to a client device/client application, such as a container feature assessment app that may be, for example, installed and executing on a client device, for viewing, manipulation, or otherwise interaction. In other embodiments, the image data and/or the post-scanning data may be sent to a server for storage or for further manipulation. For example, the image data and/or the post-scanning data may be sent to a server. In such embodiments, the server or servers may generate post-scanning data that may include metadata, simplified data, normalized data, result data, status data, or alert data as determined from the original scanned or sensed image data provided by the LMU202. As described herein, the server or other centralized processing unit and/or storage may store such data, and may also send the image data and/or the post-scanning data to a dashboard app, or other app, implemented on a client device, such as the container feature assessment app implemented on a client device. FIG.3is a block diagram representative of an example logic circuit capable of implementing, for example, one or more components of the example TMU202ofFIG.2or, more specifically, the example processing board258ofFIG.2. The example logic circuit ofFIG.3is a processing platform300capable of executing instructions to, for example, implement operations of the example methods described herein, as may be represented by the flowcharts of the drawings that accompany this description. 
Other example logic circuits capable of, for example, implementing operations of the example methods described herein include field programmable gate arrays (FPGAs) and application specific integrated circuits (ASICs). The example processing platform300ofFIG.3includes a processor302such as, for example, one or more microprocessors, controllers, and/or any suitable type of processor. The example processing platform300ofFIG.3includes memory (e.g., volatile memory, non-volatile memory)304accessible by the processor302(e.g., via a memory controller). The example processor302interacts with the memory304to obtain, for example, machine-readable instructions stored in the memory304corresponding to, for example, the operations represented by the flowcharts of this disclosure. The memory304also includes a region of interest (ROI) estimation algorithm306that is accessible by the example processor302. The ROI estimation algorithm306may comprise rule-based instructions, an artificial intelligence (AI) and/or machine learning-based model, and/or any other suitable algorithm architecture or combination thereof configured to determine rotation and clipping parameters for images of a ULD (e.g., shipping container102). For example, the example processor302may access the memory304to execute the ROI estimation algorithm306when the LMU202captures a set of image data featuring a ULD. Additionally or alternatively, machine-readable instructions corresponding to the example operations described herein may be stored on one or more removable media (e.g., a compact disc, a digital versatile disc, removable flash memory, etc.) that may be coupled to the processing platform300to provide access to the machine-readable instructions stored thereon. The example processing platform300ofFIG.3also includes a network interface308to enable communication with other machines via, for example, one or more networks. The example network interface308includes any suitable type of communication interface(s) (e.g., wired and/or wireless interfaces) configured to operate in accordance with any suitable protocol(s) (e.g., Ethernet for wired communications and/or IEEE 802.11 for wireless communications). The example, processing platform300ofFIG.3also includes input/output (I/O) interfaces310to enable receipt of user input and communication of output data to the user. Such user input and communication may include, for example, any number of keyboards, mice, USB drives, optical drives, screens, touchscreens, etc. FIG.4is a flowchart representative of a method400for determining rotation and clipping parameters for images of ULDs, in accordance with embodiments described herein. Method400describes various methods for determining rotation and clipping parameters for images of ULDs, and embodiments of the method400are discussed below in context with reference toFIGS.5,6, and7. Generally speaking, and as mentioned above, the method400for determining rotation and clipping parameters for images of ULDs utilizes fiducial markers located in image data featuring a ULD within a load point (e.g., load point101) to determine the rotation and clipping parameters associated with images of ULDs placed at the load point. The rotation and clipping parameters may include angular values (e.g., pitch and yaw angles) and Cartesian coordinate values representing vertical, lateral, and depth position values of a ROI containing the ULD within the image representing the load point. 
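One possible representation of the rotation and clipping parameters described above is a simple record of two angles and the Cartesian bounds of the ROI. This data structure is an assumption for illustration, not the patent's data model.

from dataclasses import dataclass

@dataclass
class RotationClippingParams:
    pitch_deg: float   # rotation about the lateral (horizontal) axis
    yaw_deg: float     # rotation about the vertical axis
    x_min: float       # lateral clipping bounds of the ROI
    x_max: float
    y_min: float       # vertical clipping bounds of the ROI
    y_max: float
    z_min: float       # depth clipping bounds of the ROI
    z_max: float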
It is to be appreciated that any suitable coordinate system and/or any other measurement metric or combinations thereof may be used to represent the rotation and clipping parameters associated with images of ULDs placed at the load point. Further, it is to be understood that any of the steps of the method400may be performed by, for example, the LMU202, the ToF camera254, the processor302, the ROI estimation algorithm306, and/or any other suitable components or combinations thereof discussed herein. At block402, the method400includes capturing a set of image data featuring a ULD. Broadly, the set of image data may represent the load point, such that the set of image data may feature the ULD when the ULD is located within the load point (e.g., during a loading session). The LMU202ofFIG.2may automatically capture or receive a signal from an operator instructing the LMU202to capture a set of image data in response to the presence of a ULD in the load point (e.g., load point101). The LMU202may capture image data of the ULD using any number of cameras included in the LMU202, such as the ToF camera254and/or the photo-realistic camera256. More specifically, the LMU202may capture the set of image data in response to a signal from an operator attempting to initially configure the LMU202to accurately and consistently capture images of ULDs at the load point for container analytics purposes. For example, the LMU202may have been recently installed at the load point, and the operator may attempt to initially configure the LMU202by capturing a set of image data with the LMU202, and proceeding to analyze (e.g., via the ROI estimation algorithm306) the set of image data in accordance with the method400, as further described herein. In reference toFIG.5, the LMU202may capture, as part of the set of image data, a set of 3D point data502representing a ULD from a side perspective using, e.g., the ToF camera254. As illustrated inFIG.5, the set of 3D point data502provides a very sparse approximation of the ULD within the FOV of the ToF camera254. As previously mentioned, the material(s) used to construct most typical ULDs (e.g., airplane-grade aluminum) results in a high reflection rate of most incident signals. The ToF camera254may utilize IR projection to calculate depth values corresponding to the IR signals received back at the ToF camera254. The projected IR beams/pulses may interact with the curved, metal surfaces of many common ULDs in such a manner as to generate distorted and/or otherwise obscured depth values. As illustrated, the 3D point data502features a floor plane that is angled downwards, and a missing back wall. More generally, the obscured/erroneous depth values do not accurately correspond to the location of the ULD within the load point. Accordingly, the 3D data present in the set of 3D point data502is virtually unusable for container analytics purposes, particularly localization. Unlike the 3D depth image504, the grayscale image505may be, for example, an ambient image or amplitude image captured by the ToF camera254. Consequently, the grayscale image505may represent an amplitude of the signals captured by the ToF camera254in other images (e.g., the 3D depth image504). The grayscale image505may thus represent a more accurate representation of the load point because the data comprising the grayscale image505remains relatively unaffected by the signal distortion associated with the reflective, metal surfaces of many ULDs. 
As illustrated, the grayscale image505features two fiducial markers506a,506band an unobscured floor plane506c. In embodiments, the set of image data featuring the ULD may comprise (i) a 3D depth image504that includes 3D point data and (ii) a grayscale image505that includes two-dimensional (2D) point data and that is depth-aligned with the 3D depth image504. The LMU202may capture both images using, for example, the ToF camera254. In this manner, the grayscale image505will automatically be aligned with the 3D depth image504because both images were captured by the ToF camera254. However, it is to be understood that the LMU202may capture the 3D depth image504using the ToF camera254, and the grayscale image505, for example, using the photo-realistic camera256and/or any other combination of cameras. In these embodiments, the LMU202may also include a depth alignment module (e.g., as part of the depth detection app) to depth-align the 3D depth image504with the grayscale image505. Moreover, the LMU202may capture the 3D depth image504and the grayscale image505from a frontal perspective of the ULD, in contrast to the side perspective of the ULD illustrated by the set of 3D point data502. In other embodiments, the set of image data featuring the ULD includes (i) a 3D depth image and (ii) a red-green-blue (RGB) image. The LMU202may capture the 3D depth image using the ToF camera254, and may capture the RGB image using, for example, the photo-realistic camera256. In practice, the ToF camera254and the photo-realistic camera256may be positioned at different locations within the LMU202, such that the resulting images (three-dimensional depth image and RGB image, respectively) may need to be aligned. The LMU202may align (e.g., via a depth alignment module) the images such that each pixel representing an object included in the RGB image corresponds to a depth value from a pixel representing the object in the 3D image. At block404, the method400includes locating a fiducial marker proximate to the ULD within the set of image data. Generally, the fiducial markers506a,506bare patterns printed and/or otherwise displayed near a front edge of the ULD that are used as points of reference by the LMU202(e.g., via the ROI estimation algorithm306) to determine the rotation and clipping parameters associated with images of the ULD. More specifically, the fiducial markers506a,506bmay be placed at a fixed distance from the front edge(s) and the side edges of the ULD. While illustrated inFIG.5as two fiducial markers, it will be appreciated that any suitable number of fiducial markers may be placed within and/or near the ULD. In some embodiments, the fiducial marker(s) may comprise a plurality of fiducial markers that are proximate to the ULD. In any event, the ROI estimation algorithm306may locate the fiducial markers506a,506bby determining the coordinate values within the set of image data that are associated with each marker506a,506b. The ROI estimation algorithm306may then use these coordinate values as reference coordinates within the set of image data during the techniques described herein. Typically, the ROI estimation algorithm306may locate the fiducial markers506a,506bwithin the grayscale image505, due to the higher fidelity image characteristics of the grayscale image505compared to the 3D depth image504. 
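As one hedged illustration of the marker-locating step, and not the disclosed implementation, the markers could be detected in the grayscale image with an off-the-shelf detector after a CLAHE-style contrast enhancement of the kind discussed below. The ArUco marker family and the pre-4.7 OpenCV aruco interface used here are assumptions.

import cv2
import numpy as np

def locate_markers(grayscale: np.ndarray):
    """Return detected marker corner coordinates (2D pixel space) in the grayscale image."""
    # Contrast-limited adaptive histogram equalization to reduce noise and enhance contrast.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(grayscale.astype(np.uint8))
    # Marker detection; the ArUco dictionary is an assumed choice of recognizable pattern.
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_100)
    corners, ids, _rejected = cv2.aruco.detectMarkers(enhanced, dictionary)
    return corners, ids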
However, prior to determining coordinate values associated with the fiducial markers506a,506b, the ROI estimation algorithm306may apply various filters and/or filtering techniques (e.g., CLAHE filtering techniques, etc.) to reduce the noise and generally enhance the quality of the grayscale image505. The ROI estimation algorithm306may then locate the fiducial markers506a,506bwithin the grayscale image505using any suitable image analysis/recognition technique. When the ROI estimation algorithm306locates the fiducial markers506a,506b, the LMU202may further locate the unobscured floor plane506cand project the 2D coordinate values for the fiducial markers506a,506band the unobscured floor plane506cfrom the grayscale image505to the 3D depth image504. For example, the ROI estimation algorithm306may project the 2D coordinate values corresponding to the exterior corners of the fiducial marker(s) and the floor plane (e.g., 2D coordinate values for the four corners of fiducial markers506a,506b), 2D coordinate values corresponding to the exterior edges of the fiducial marker(s) and the floor plane, all 2D coordinate values corresponding to the fiducial marker(s) and the floor plane, an average 2D coordinate value for one or more edges of the fiducial marker(s) and the floor plane, and/or any other suitable quantity, orientation, statistical representation, and/or otherwise indication of the coordinate value(s) corresponding to the fiducial marker(s) and the floor plane or combinations thereof. In embodiments, the ROI estimation algorithm306may locate the unobscured floor plane506cby analyzing a set of edge values corresponding to the fiducial markers506a,506b. The ROI estimation algorithm306may retrieve (e.g., from memory304) predetermined gap/distance values representing the distance from edge values of the fiducial markers506a,506bto edge values of the unobscured floor plane506c. The ROI estimation algorithm306may then adjust the coordinates of the edge values of the fiducial markers506a,506bby the predetermined gap/distance values to determine the coordinates of the edge values of the unobscured floor plane506c. Additionally or alternatively, the ROI estimation algorithm306may apply a statistical adjustment factor to the edge values of the fiducial markers506a,506bto determine the edge values of the unobscured floor plane506c. For example, a predetermined gap/distance value may indicate that an edge of the unobscured floor plane506cmay begin ten pixels to the left/right of an edge associated with a fiducial marker506a,506b. A statistical adjustment factor may indicate, for example, that the coordinate values corresponding to an edge of the unobscured floor plane506care approximately twelve pixels to the left/right of an average pixel coordinate value corresponding to an edge of the fiducial marker506a,506b. The LMU202may receive a predetermined gap/distance value and/or a statistical adjustment factor prior to capturing image data (e.g., block402), for example, via a network interface (e.g., network interface308) based on an input received from an operator or a predetermined gap/distance value and/or a statistical adjustment factor retrieved from an external device (e.g., external server). For example, assume that the grayscale image505represents the FOV of the LMU202cameras, and further assume that the grayscale image505may be overlaid with a coordinate mapping (e.g., a Cartesian coordinate mapping). 
The coordinate mapping may include a series of 100 equally spaced divisions in a lateral and a vertical direction that divide the grayscale image505into a set of 10,000 equal area regions. Moreover, each of the 100 equally spaced divisions may include a numerical identifier, and the numerical identifiers may monotonically increase as the divisions extend further away in the respective directions. Thus, the coordinate mapping may designate the bottom left corner of the grayscale image505as the origin (e.g., coordinates (0,0)), the top left corner of the grayscale image505having coordinates (0, 100), the bottom right corner of the grayscale image505having coordinates (100,0), and the top right corner of the grayscale image505having coordinates (100,100). Further in this example, assume that the 3D depth image504also represents the FOV of the LMU202cameras, and that the 3D depth image504may also be overlaid with the coordinate mapping, as described with respect to the grayscale image502. The 3D depth image504may also include a depth component, such that the coordinates describing any particular point (e.g., pixel) in the 3D depth image504may have a lateral component, a vertical component, and a depth component. Thus, the coordinate mapping of any particular pixel in the 3D depth image504may be represented as (x, y, z), where x is the lateral component, y is the vertical component, and z is the depth component. The depth component for each pixel included in the 3D depth image504may describe, for example, a distance of an object represented by the pixel from the LMU202. The depth component corresponding to a pixel may be represented in feet, inches, meters, and/or any other suitable units, or combinations thereof. It is to be understood that a particular pixel within the 3D depth image504with a coordinate mapping represented as (x, y, z), where x and y represent any suitable coordinate values (as described above) and z represents a depth value, may have a corresponding coordinate mapping (x, y) within the grayscale image505. When the coordinate mappings for each pixel in the 3D depth image504and the grayscale image505are aligned in this manner, the images are considered “depth-aligned.” Accordingly, when the ROI estimation algorithm306identifies a particular pixel within the grayscale image505to perform cropping and/or any other suitable analysis with respect to the particular pixel, the ROI estimation algorithm306may perform a similar or identical analysis with respect to the particular pixel within the 3D depth image504. Hence, ensuring that the 3D depth image504and the grayscale image505(or in embodiments, the RGB image or other suitable image) are depth-aligned is critical to accurately perform a depth-based cropping and/or any other suitable analysis between the 3D depth image504and any other suitable two-dimensional image. In any event, the ROI estimation algorithm306may determine that a right edge of the fiducial marker506ais located at (25, 15-35), a left edge of the fiducial marker506bis located at (75, 15-35), and the LMU202may receive a predetermined gap/distance value indicating that the right/left edge of the unobscured floor plane506cmay begin at (x±10, --) to the right/left of the right/left edge associated with the fiducial markers506a,506b. 
Using the coordinates of the right edge of the fiducial marker506aand the left edge of the fiducial marker506b, the ROI estimation algorithm306may determine that the right/left edges of the unobscured floor plane506cbegin at (65, --) and (35, --), respectively. Similarly, the LMU202may receive a predetermined gap/distance value indicating that the top/bottom edges of the unobscured floor plane506cmay begin at (--, y±10) relative to the top/bottom edges associated with the fiducial markers506a,506b, and may determine that the top/bottom edges of the unobscured floor plane506cbegin at (--, 45) and (--, 5), respectively. Combining these ranges, the ROI estimation algorithm306may determine 2D coordinates of the unobscured floor plane506cas defined by a box having corners at coordinates (35, 5), (35, 45), (65, 5), and (65, 45). As illustrated inFIG.5, the ROI estimation algorithm306(e.g., via processor302) may project the 2D coordinate values of the fiducial markers506a,506band the unobscured floor plane506cfrom the grayscale image505onto the 3D depth image504to further determine depth values corresponding to the fiducial markers506a,506band the unobscured floor plane506c. The 3D depth image504and the grayscale image505are automatically depth-aligned or are depth-aligned by the LMU202(e.g., via a depth alignment module), so there may generally be a one-to-one correspondence between the 2D coordinates of the fiducial markers506a,506band the unobscured floor plane506cextracted from the grayscale image505and the corresponding 2D coordinates (e.g., vertical and lateral coordinates) of the fiducial markers506a,506band the unobscured floor plane506cwithin the 3D point data of the 3D depth image504. The ROI estimation algorithm306may therefore assume that the 3D point data identified in the 3D depth image504as representing the fiducial markers506a,506bor the unobscured floor plane506cincludes depth values that similarly represent the fiducial markers506a,506bor the unobscured floor plane506c. At block406, the method400includes cropping the set of image data based upon the located fiducial marker(s) to generate a set of marker point data and a set of floor point data. Generally, as illustrated inFIG.6, the ROI estimation algorithm306may crop the 3D point data identified in the 3D depth image504as representing the fiducial markers506a,506bto generate the set of marker point data and the 3D point data identified in the 3D depth image504as representing the unobscured floor plane506cto generate the set of floor point data. The ROI estimation algorithm306may perform the cropping by removing all 3D point data from the 3D depth image504that is not included in the regions of the 3D point data identified in the 3D depth image504as representing the fiducial markers506a,506band the unobscured floor plane506c. However, in some embodiments, the ROI estimation algorithm306may also crop the 3D depth image504based on the depth values within the 3D point data representing the fiducial markers506a,506band/or the unobscured floor plane506c. The LMU202may receive a depth threshold indicating that any pixels including a depth value that exceeds the depth threshold should be excluded from the set of marker point data and/or the set of floor point data.
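A minimal sketch of the gap-based floor-plane box and the region cropping just described is given below (the depth-threshold variant is discussed next). The numeric values mirror the worked example above, with marker edges at x=25 and x=75, a y range of 15 to 35, and a ±10 pixel gap; the function names and the N×3 point-array layout are illustrative assumptions.

```python
import numpy as np

def floor_box_from_markers(right_edge_x_a, left_edge_x_b, top_y, bottom_y, gap=10):
    """Shrink/expand the marker edges by a fixed gap to get the floor box."""
    x_min = right_edge_x_a + gap   # 25 + 10 = 35
    x_max = left_edge_x_b - gap    # 75 - 10 = 65
    y_min = bottom_y - gap         # 15 - 10 = 5
    y_max = top_y + gap            # 35 + 10 = 45
    return x_min, y_min, x_max, y_max

def crop_points(points_xyz: np.ndarray, box) -> np.ndarray:
    """Keep only 3D points whose lateral/vertical coordinates fall in the box."""
    x_min, y_min, x_max, y_max = box
    x, y = points_xyz[:, 0], points_xyz[:, 1]
    mask = (x >= x_min) & (x <= x_max) & (y >= y_min) & (y <= y_max)
    return points_xyz[mask]

box = floor_box_from_markers(25, 75, top_y=35, bottom_y=15)   # -> (35, 5, 65, 45)
```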
The ROI estimation algorithm306may then scan each pixel included in the 3D point data representing the fiducial markers506a,506band/or the unobscured floor plane506c, evaluate the depth value for each pixel, and exclude each pixel in the 3D depth image504that has a depth value exceeding the depth threshold. For example, the depth threshold may be twenty meters, such that any pixel in the 3D point data representing the fiducial markers506a,506band/or the unobscured floor plane506cincluding a depth component that is greater than twenty (e.g., any pixel with a coordinate mapping (--, --, z>20)) may be cropped out of the set of marker point data and/or the set of floor point data. With the sets of marker point data and floor point data, the ROI estimation algorithm306may accurately calculate the orientation of the floor and fiducial markers with respect to the orientation of the LMU202, in part, by fitting planes to the data. Accordingly, as illustrated in the floor plane image602, the ROI estimation algorithm306may fit a floor plane606to the set of floor point data (e.g., 3D point data representing the unobscured floor plane506ccropped from the 3D depth image504). The floor plane606may be a planar surface that approximates the orientation of the floor with respect to the orientation of the LMU202. Similarly, the ROI estimation algorithm306may fit a marker plane608to the set of marker point data (e.g., 3D point data representing the fiducial markers506a,506bcropped from the 3D depth image504), and the marker plane608may be a planar surface that approximates the orientation of the fiducial markers506a,506bwith respect to the orientation of the LMU202. With the floor plane606and the marker plane608, the ROI estimation algorithm306may calculate a pitch angle and a yaw angle of the 3D point data that collectively describe an orientation of the 3D point data relative to the orientation of the LMU202. For example, using the floor plane606, the ROI estimation algorithm306may calculate a pitch angle relative to the orientation of the LMU202using the dimensions of the floor plane606(e.g., the calculated (x, y, z) coordinates) in conjunction with trigonometric relationships. The pitch angle may generally refer to a difference in orientation between the floor plane606and the LMU202along a horizontal (e.g., a lateral) axis609. For example, if the floor plane606has a pitch angle of 0° or 180° with respect to the LMU202, the floor plane606would be parallel to the line of sight of the LMU202(e.g., minimal light reflection from the floor plane606directly to the LMU202). If the floor plane606has a pitch angle of 90° or 270° with respect to the LMU202, the floor plane606would be perpendicular to the line of sight of the LMU202(e.g., maximum light reflection from the floor plane606directly to the LMU202. As another example, the ROI estimation algorithm306may calculate a yaw angle relative to the orientation of the LMU202using the dimensions of the marker plane608(e.g., the calculated (x, y, z) coordinates) in conjunction with trigonometric relationships. The yaw angle may generally refer to a difference in orientation between the marker plane608and the LMU202along a vertical axis610. For example, if the marker plane608has a yaw angle of 0° with respect to the LMU202, the marker plane608would be perfectly vertically aligned to the line of sight of the LMU202(e.g., the ULD featured in the set of image data would be perfectly vertically aligned with the LMU202). 
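The depth-threshold filter and the plane fits from which the pitch and yaw angles are derived may be sketched as follows. The least-squares (SVD) plane fit and the specific trigonometric conventions are assumptions made for illustration; the method only requires that planes be fit to the floor and marker point data and that pitch and yaw be derived from them relative to the LMU's line of sight.

```python
import numpy as np

def filter_by_depth(points_xyz: np.ndarray, depth_threshold: float = 20.0) -> np.ndarray:
    """Drop points whose depth (z) component exceeds the threshold (e.g., 20 m)."""
    return points_xyz[points_xyz[:, 2] <= depth_threshold]

def plane_normal(points_xyz: np.ndarray) -> np.ndarray:
    """Fit a plane to the points; the normal is the direction of least variance."""
    centered = points_xyz - points_xyz.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]   # sign is arbitrary; a real implementation orients it consistently

def pitch_from_floor(floor_points: np.ndarray) -> float:
    """Pitch about the lateral (x) axis: 0 deg when the floor is parallel to the
    line of sight, 90 deg when it is perpendicular to it."""
    n = plane_normal(floor_points)
    return float(np.degrees(np.arctan2(n[2], n[1])))

def yaw_from_markers(marker_points: np.ndarray) -> float:
    """Yaw about the vertical (y) axis: 0 deg when the markers squarely face the LMU."""
    n = plane_normal(marker_points)
    return float(np.degrees(np.arctan2(n[0], n[2])))
```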
If the marker plane608has a yaw angle of 90° with respect to the LMU202, the marker plane608would be parallel to the line of sight of the LMU202. Thus, including two fiducial markers506a,506bwhen performing the method400may allow the ROI estimation algorithm306to accurately determine a yaw angle because the fiducial markers506a,506bprovide two independent, known features of the ULD that should be equidistant from the LMU202(e.g., resulting in a 0° yaw angle). If the fiducial markers506a,506bare not equidistant from the LMU202(e.g., the yaw angle is non-zero), then the ROI estimation algorithm306may determine that the ULD is rotated around the vertical axis relative to the LMU202. For example, the ROI estimation algorithm306may determine that the right side of the ULD is slightly closer to the LMU202than the left side of the ULD, resulting in a non-zero yaw angle. At block408, the method400includes rotating the set of image data based upon the set of marker point data and the set of floor point data. Generally, the ROI estimation algorithm306may rotate the set of image data by the yaw angle and the pitch angle determined based upon the set of marker point data and the set of floor point data to bring the set of image data into vertical and horizontal alignment with the line of sight of the LMU202. For example, if the ROI estimation algorithm306calculates a yaw angle of five degrees in a clockwise direction based upon the set of marker point data (e.g., the marker plane608), the ROI estimation algorithm306may rotate the entire set of image data five degrees in a counterclockwise direction to bring the set of image data into vertical alignment with the line of sight of the LMU202. Similarly, if the ROI estimation algorithm306calculates a pitch angle of seven degrees in a counterclockwise direction based upon the set of floor point data (e.g., the floor plane606), the ROI estimation algorithm306may rotate the entire set of image data seven degrees in a clockwise direction to bring the set of image data into horizontal alignment with the line of sight of the LMU202. At block410, the method400includes clipping the rotated set of image data based upon the set of marker point data and the set of floor point data. Generally, clipping may refer to determining a region within the 3D point data that optimally represents the ULD. The ROI estimation algorithm306may calculate/estimate optimal clipping distances (e.g., 3D coordinates, each represented in clipping image700) for each of the three axes by leveraging previously identified/determined parameters of the set of marker point data and the set of floor point data along with one or more fixed distance(s) associated with each set of data. The fixed distance(s) may be stored locally on the LMU202(e.g., via memory304), and/or the LMU202may retrieve/receive the fixed distance(s) via a network interface (e.g., network interface308) based on an input received from an operator or fixed distance(s) retrieved from an external device (e.g., external server). 
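Returning briefly to block408, the compensating rotation described above may be sketched as follows; the rotation order (yaw about the vertical axis, then pitch about the lateral axis) and the sign convention are illustrative assumptions. The clipping estimates of block410continue below.

```python
import numpy as np

def rotate_points(points_xyz: np.ndarray, pitch_deg: float, yaw_deg: float) -> np.ndarray:
    """Rotate the point cloud by the negatives of the measured pitch and yaw,
    bringing it into horizontal/vertical alignment with the LMU line of sight."""
    p = np.radians(-pitch_deg)   # compensate pitch about the lateral (x) axis
    y = np.radians(-yaw_deg)     # compensate yaw about the vertical (y) axis
    rot_x = np.array([[1, 0, 0],
                      [0, np.cos(p), -np.sin(p)],
                      [0, np.sin(p),  np.cos(p)]])
    rot_y = np.array([[ np.cos(y), 0, np.sin(y)],
                      [0, 1, 0],
                      [-np.sin(y), 0, np.cos(y)]])
    return points_xyz @ (rot_y @ rot_x).T
```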
In embodiments, the ROI estimation algorithm306may estimate (i) a set of depth clipping coordinates (e.g., a frontal clipping distance and a rear clipping distance) for the rotated set of image data based upon the set of marker point data, (ii) a set of longitudinal clipping coordinates (e.g., a top clipping distance and a bottom clipping distance) for the rotated set of image data based upon the set of floor point data, and (iii) a set of lateral clipping coordinates (e.g., side clipping distances) for the rotated set of image data based upon the set of marker point data. Further in these embodiments, estimating the set of depth clipping coordinates may further comprise calculating a statistical depth value of the set of marker point data that is adjusted by a depth displacement of the fiducial marker(s)506a,506bwithin the ULD. Estimating the set of longitudinal clipping coordinates may further comprise calculating a statistical height value of the set of floor point data, and estimating the set of lateral clipping coordinates may further comprise calculating a first set of extreme lateral coordinates corresponding to the ULD based upon a second set of extreme lateral coordinates (e.g., maximum side values) corresponding to the set of marker point data. As an example, the ROI estimation algorithm306may estimate a frontal clipping distance (e.g., Zmin, illustrated inFIG.7by element702) to include 3D point data extending up to, but not exceeding, the front of the ULD by utilizing the set of marker point data. Namely, the ROI estimation algorithm306may statistically estimate a depth value associated with the front of the fiducial markers506a,506bby, for example, calculating an average depth value for all 3D point data included as part of the set of marker point data. The ROI estimation algorithm306may then calculate a minimum depth corresponding to the frontal clipping distance702by adjusting the estimated depth value associated with the front of the fiducial markers506a,506bby a fixed distance between the fiducial markers and the ULD. If, for example, the fiducial markers506a,506bare placed three inches inside the ULD (e.g., the front face of both fiducial markers506a,506bare three inches from the front edge of the ULD interior), then the ROI estimation algorithm306may subtract three inches from the estimated depth value associated with the front of the fiducial markers506a,506bto estimate the frontal clipping distance702. Correspondingly, the ROI estimation algorithm306may estimate the maximum depth corresponding to a rear clipping distance (e.g., Zmax) by adjusting the frontal clipping distance702by the known depth of the container (e.g., determined by ULD type). As another example, the ROI estimation algorithm306may estimate a side clipping distance (e.g., Xmax, illustrated inFIG.7by element704) to include 3D point data extending up to, but not exceeding, the sides of the ULD by utilizing the set of marker point data. Namely, the ROI estimation algorithm306may estimate a maximum side value associated with the left/right sides of the fiducial markers506a,506bby, for example, calculating an average maximum side value for all 3D point data included as part of the sides of the respective fiducial markers506a,506b. 
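The frontal/rear (depth) clipping estimate described above may be sketched as follows; the three-inch marker inset and the container depth are treated here as illustrative fixed distances, whereas in the method the actual values depend on ULD type and are retrieved as described above, and the units are assumed consistent across the point data.

```python
import numpy as np

def depth_clipping(marker_points: np.ndarray,
                   marker_inset: float = 3.0,       # markers placed ~3 units inside the ULD
                   container_depth: float = 60.0):  # known depth for this ULD type
    """Estimate Zmin (frontal clip) and Zmax (rear clip) from marker point data."""
    marker_depth = marker_points[:, 2].mean()   # statistical (average) marker depth
    z_min = marker_depth - marker_inset         # ULD front sits just in front of the markers
    z_max = z_min + container_depth             # rear clip from the known ULD depth
    return z_min, z_max
```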
The ROI estimation algorithm306may calculate an average maximum side value corresponding to the right side of the fiducial marker506bplaced on the right side of the ULD, and an average maximum side value corresponding to the left side of the fiducial marker506aplaced on the left side of the ULD (from the perspective of the LMU202). The ROI estimation algorithm306may then calculate maximum side values corresponding to the side clipping distance704by adjusting the maximum side value associated with the left/right sides of the fiducial markers506a,506bby a fixed distance between the sides of the fiducial markers506a,506band the sides of the ULD. If, for example, the right side of the right fiducial marker506bis three inches from the right side of the ULD, then the ROI estimation algorithm306may add three inches to the maximum side value associated with the right side of the right fiducial marker506bto estimate the side clipping distance704. Accordingly, the ROI estimation algorithm306may estimate the side clipping distance704for the left side of the ULD (e.g., Xmin) by similarly adjusting the maximum side value of the left side of the left fiducial marker506aby the known distance from the left side of the left fiducial marker506ato the left side of the ULD. As yet another example, the ROI estimation algorithm306may estimate a bottom clipping distance (e.g., Ymax, illustrated inFIG.7by element706) to include 3D point data extending up to, but not exceeding, the bottom of the ULD by utilizing the set of floor point data. Namely, the ROI estimation algorithm306may statistically estimate a height value associated with the floor of the unobscured floor plane506cby, for example, calculating an average height value for all 3D point data included as part of the set of floor point data. The ROI estimation algorithm306may simply designate the statistically estimated height value as the bottom clipping distance706, and/or the ROI estimation algorithm306may adjust the statistically estimated height value by, for example, a fixed distance between the unobscured floor plane506cand the bottom surface of the ULD to calculate the bottom clipping distance706. Correspondingly, the ROI estimation algorithm306may estimate a minimum height corresponding to a top clipping distance (e.g., Ymin) by adjusting the bottom clipping distance706by the known height of the container (e.g., determined by ULD type). When the ROI estimation algorithm306estimates clipping parameters for the set of image data, the ROI estimation algorithm306may identify all 3D point data (e.g., within the 3D point data502and/or the 3D depth image504) representative of the ULD. The ROI estimation algorithm306may also generate a projected image708by projecting the rotated and clipped 3D point data (e.g., from the clipping image700) back into the originally captured 3D point data710. As a result of the method400, the rotated and clipped 3D point data represents a clear, well-defined ULD, and the originally captured 3D point data710represents many erroneous and/or otherwise unintelligible 3D point data signal captures. In embodiments, the ROI estimation algorithm306may be and/or include a machine learning model. Thus, some or all of the steps of the method400may be performed by the machine learning model. In these embodiments, the method400may further include training a machine learning model to locate the fiducial marker(s) within the set of image data, crop the set of image data, rotate the set of image data, and clip the rotated set of image data. 
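Before continuing with the machine-learning variant, the remaining clipping estimates described above may be sketched in the same spirit. The fixed side inset and the container height are placeholders, and the axis convention (floor at Ymax, top at Ymin) follows the description above.

```python
import numpy as np

def lateral_clipping(marker_left_pts: np.ndarray, marker_right_pts: np.ndarray,
                     side_inset: float = 3.0):
    """Xmin/Xmax from the outermost marker sides plus a fixed marker-to-wall gap."""
    x_min = marker_left_pts[:, 0].min() - side_inset
    x_max = marker_right_pts[:, 0].max() + side_inset
    return x_min, x_max

def vertical_clipping(floor_points: np.ndarray, container_height: float = 96.0):
    """Ymax from the average floor height; Ymin from the known ULD height."""
    y_max = floor_points[:, 1].mean()
    y_min = y_max - container_height
    return y_min, y_max

def clip_points(points_xyz: np.ndarray, x_lim, y_lim, z_lim) -> np.ndarray:
    """Keep only the 3D points that fall inside the estimated clip box."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    mask = ((x >= x_lim[0]) & (x <= x_lim[1]) &
            (y >= y_lim[0]) & (y <= y_lim[1]) &
            (z >= z_lim[0]) & (z <= z_lim[1]))
    return points_xyz[mask]
```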
For example, the processing platform300may include the machine learning model in memory304. The machine learning model may include, for example, a convolutional neural network and/or any other suitable machine learning technique. The processing platform300may train the machine learning model using (i) a plurality of sets of image data, each set of image data featuring a respective ULD, (ii) a plurality of sets of marker point data, each set of marker point data corresponding to a respective set of image data, (iii) a plurality of sets of floor point data, each set of floor point data corresponding to a respective set of image data, and (iv) a plurality of sets of rotated and clipped image data. Generally, the machine learning model training may take place in two steps. First, the machine learning model may analyze each set of image data of the plurality of sets of image data in an attempt to determine the corresponding sets of marker point data and the corresponding sets of floor point data. The machine learning model may determine a set of marker point data and floor point data for each set of image data that may be compared to the known marker point data and floor point data for those respective sets of image data. Based on how closely the marker point data and floor point data match the known marker point data and floor point data for each respective set of image data, the model may be adjusted to more accurately identify marker point data and floor point data in future iterations. Second, the machine learning model may analyze the marker point data and floor point data for each respective set of image data in an attempt to determine the rotation and clipping parameters for each respective set of image data. The machine learning model may determine rotation and clipping parameters for each respective set of image data that may be compared to known rotation and clipping parameters for each respective set of image data. Based on how closely the rotation and clipping parameters for each respective set of image data match the known rotation and clipping parameters for each respective set of image data, the model may be adjusted to more accurately identify/calculate/determine rotation and clipping parameters in future iterations. Similarly, in these embodiments, the method400may include applying the machine learning model to the set of image data featuring the ULD to locate the fiducial marker within the set of image data, crop the set of image data, rotate the set of image data, and clip the rotated set of image data. In this manner, the processing platform300may train and apply the machine learning model to automatically crop, rotate, and clip sets of image data featuring ULDs, thereby increasing the overall system efficiency by reducing the processing bandwidth necessary to perform the initial LMU configuration. The above description refers to a block diagram of the accompanying drawings. Alternative implementations of the example represented by the block diagram includes one or more additional or alternative elements, processes and/or devices. Additionally or alternatively, one or more of the example blocks of the diagram may be combined, divided, re-arranged or omitted. Components represented by the blocks of the diagram are implemented by hardware, software, firmware, and/or any combination of hardware, software and/or firmware. In some examples, at least one of the components represented by the blocks is implemented by a logic circuit. 
As used herein, the term "logic circuit" is expressly defined as a physical device including at least one hardware component configured (e.g., via operation in accordance with a predetermined configuration and/or via execution of stored machine-readable instructions) to control one or more machines and/or perform operations of one or more machines. Examples of a logic circuit include one or more processors, one or more coprocessors, one or more microprocessors, one or more controllers, one or more digital signal processors (DSPs), one or more application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), one or more microcontroller units (MCUs), one or more hardware accelerators, one or more special-purpose computer chips, and one or more system-on-a-chip (SoC) devices. Some example logic circuits, such as ASICs or FPGAs, are specifically configured hardware for performing operations (e.g., one or more of the operations described herein and represented by the flowcharts of this disclosure, if such are present). Some example logic circuits are hardware that executes machine-readable instructions to perform operations (e.g., one or more of the operations described herein and represented by the flowcharts of this disclosure, if such are present). Some example logic circuits include a combination of specifically configured hardware and hardware that executes machine-readable instructions. The above description refers to various operations described herein and flowcharts that may be appended hereto to illustrate the flow of those operations. Any such flowcharts are representative of example methods disclosed herein. In some examples, the methods represented by the flowcharts implement the apparatus represented by the block diagrams. Alternative implementations of example methods disclosed herein may include additional or alternative operations. Further, operations of alternative implementations of the methods disclosed herein may be combined, divided, re-arranged or omitted. In some examples, the operations described herein are implemented by machine-readable instructions (e.g., software and/or firmware) stored on a medium (e.g., a tangible machine-readable medium) for execution by one or more logic circuits (e.g., processor(s)). In some examples, the operations described herein are implemented by one or more configurations of one or more specifically designed logic circuits (e.g., ASIC(s)). In some examples, the operations described herein are implemented by a combination of specifically designed logic circuit(s) and machine-readable instructions stored on a medium (e.g., a tangible machine-readable medium) for execution by logic circuit(s). As used herein, each of the terms "tangible machine-readable medium," "non-transitory machine-readable medium" and "machine-readable storage device" is expressly defined as a storage medium (e.g., a platter of a hard disk drive, a digital versatile disc, a compact disc, flash memory, read-only memory, random-access memory, etc.) on which machine-readable instructions (e.g., program code in the form of, for example, software and/or firmware) are stored for any suitable duration of time (e.g., permanently, for an extended period of time (e.g., while a program associated with the machine-readable instructions is executing), and/or a short period of time (e.g., while the machine-readable instructions are cached and/or during a buffering process)).
Further, as used herein, each of the terms "tangible machine-readable medium," "non-transitory machine-readable medium" and "machine-readable storage device" is expressly defined to exclude propagating signals. That is, as used in any claim of this patent, none of the terms "tangible machine-readable medium," "non-transitory machine-readable medium," and "machine-readable storage device" can be read to be implemented by a propagating signal. In the foregoing specification, specific embodiments have been described. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of present teachings. Additionally, the described embodiments/examples/implementations should not be interpreted as mutually exclusive, and should instead be understood as potentially combinable if such combinations are permissive in any way. In other words, any feature disclosed in any of the aforementioned embodiments/examples/implementations may be included in any of the other aforementioned embodiments/examples/implementations. The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of any or all the claims. The claimed invention is defined solely by the appended claims including any amendments made during the pendency of this application and all equivalents of those claims as issued. Moreover, in this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," "has", "having," "includes", "including," "contains", "containing" or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element preceded by "comprises . . . a", "has . . . a", "includes . . . a", "contains . . . a" does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, contains the element. The terms "a" and "an" are defined as one or more unless explicitly stated otherwise herein. The terms "substantially", "essentially", "approximately", "about" or any other version thereof, are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1% and in another embodiment within 0.5%. The term "coupled" as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is "configured" in a certain way is configured in at least that way, but may also be configured in ways that are not listed.
The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter may lie in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter.
11861859 | DETAILED DESCRIPTION Hereinafter, embodiments of the disclosure are described in detail with reference to the accompanying drawings. It should be noted that the same elements will be designated by the same reference numerals although they are shown in different drawings. In the following description, specific details such as detailed configurations and components are merely provided to assist with the overall understanding of the embodiments of the present disclosure. Therefore, it should be apparent to those skilled in the art that various changes and modifications of the embodiments described herein may be made without departing from the scope of the present disclosure. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness. The terms described below are terms defined in consideration of the functions in the present disclosure, and may be different according to users, intentions of the users, or customs. Therefore, the definitions of the terms should be determined based on the contents throughout this specification. The present disclosure may have various modifications and various embodiments, among which embodiments are described below in detail with reference to the accompanying drawings. However, it should be understood that the present disclosure is not limited to the embodiments, but includes all modifications, equivalents, and alternatives within the scope of the present disclosure. Although the terms including an ordinal number such as first, second, etc. may be used for describing various elements, the structural elements are not restricted by the terms. The terms are only used to distinguish one element from another element. For example, without departing from the scope of the present disclosure, a first structural element may be referred to as a second structural element. Similarly, the second structural element may also be referred to as the first structural element. As used herein, the term “and/or” includes any and all combinations of one or more associated items. The terms used herein are merely used to describe various embodiments of the present disclosure but are not intended to limit the present disclosure. Singular forms are intended to include plural forms unless the context clearly indicates otherwise. In the present disclosure, it should be understood that the terms “include” or “have” indicate existence of a feature, a number, a step, an operation, a structural element, parts, or a combination thereof, and do not exclude the existence or probability of the addition of one or more other features, numerals, steps, operations, structural elements, parts, or combinations thereof. Unless defined differently, all terms used herein have the same meanings as those understood by a person skilled in the art to which the present disclosure belongs. Terms such as those defined in a generally used dictionary are to be interpreted to have the same meanings as the contextual meanings in the relevant field of art, and are not to be interpreted to have ideal or excessively formal meanings unless clearly defined in the present disclosure. The electronic device according to one embodiment may be one of various types of electronic devices. The electronic devices may include, for example, a portable communication device (e.g., a smart phone), a computer, a portable multimedia device, a portable medical device, a camera, a wearable device, or a home appliance. 
According to one embodiment of the disclosure, an electronic device is not limited to those described above. The terms used in the present disclosure are not intended to limit the present disclosure but are intended to include various changes, equivalents, or replacements for a corresponding embodiment. With regard to the descriptions of the accompanying drawings, similar reference numerals may be used to refer to similar or related elements. A singular form of a noun corresponding to an item may include one or more of the things, unless the relevant context clearly indicates otherwise. As used herein, each of such phrases as “A or B,” “at least one of A and B,” “at least one of A or B,” “A, B, or C,” “at least one of A, B, and C,” and “at least one of A, B, or C,” may include all possible combinations of the items enumerated together in a corresponding one of the phrases. As used herein, terms such as “1st,” “2nd,” “first,” and “second” may be used to distinguish a corresponding component from another component, but are not intended to limit the components in other aspects (e.g., importance or order). It is intended that if an element (e.g., a first element) is referred to, with or without the term “operatively” or “communicatively”, as “coupled with,” “coupled to,” “connected with,” or “connected to” another element (e.g., a second element), it indicates that the element may be coupled with the other element directly (e.g., wired), wirelessly, or via a third element. As used herein, the term “module” may include a unit implemented in hardware, software, or firmware, and may interchangeably be used with other terms, for example, “logic,” “logic block,” “part,” and “circuitry.” A module may be a single integral component, or a minimum unit or part thereof, adapted to perform one or more functions. For example, according to one embodiment, a module may be implemented in a form of an application-specific integrated circuit (ASIC). Conventional disparity estimation methods focus only on estimating disparity from a specific domain, such as for indoor scenarios only, or for street view only. Accordingly, when testing on a different scenario using a conventional method, accuracy may be very bad. FIG.1illustrates a deep learning system for robust disparity estimation based on cost-volume attention, according to an embodiment. Referring toFIG.1, the deep learning system includes a feature map extraction module101, a cost volume calculation module102, a cost volume attention (CVA) module103, a cost aggregation module104, and a disparity fusion module105. The feature extraction module101extracts feature maps from left and right images. The cost volume calculation module102calculates a matching cost based on the left/right feature maps. The CVA module103adjusts (emphasizes/suppresses) portions of the cost volume based on attention, providing different weights for different disparities in the cost volume. The cost aggregation module104aggregates the attention-aware cost volume to output a disparity. The disparity fusion module105fuses two aggregated disparities (e.g., trained on different disparity ranges) to provide a final output disparity. AlthoughFIG.1illustrates each module as a separate element, the modules may be included in a single element such as a processor or an ASIC. FIG.2illustrates a process of generating a final output disparity by a deep learning system, according to an embodiment. 
Specifically,FIG.2illustrates a process of a deep learning system based on CVA, which works well for various scenarios using a single model. Herein, the system may be referred to as CVANet. For example, the process illustrated inFIG.2may be performed by the deep learning system illustrated inFIG.1. Referring toFIG.2, a disparity fusion scheme is provided, based on two networks trained on different disparity ranges. A first network is optimized on a partial disparity range [0, a], and a second network is optimized on a full disparity range [0, b], where b>a. The feature extraction modules, cost volume calculation modules, CVA modules, and cost aggregation modules may be the same for the two disparity estimation networks of different disparity ranges. In both networks, the feature extraction module extracts feature maps from left and right images. Thereafter, the cost volume calculation module calculates a matching cost between the left/right feature maps. The output is a cost volume, which represents the matching cost between the left and right feature maps at every disparity level. In the ideal case, the matching cost of the true disparity level will be 0. The CVA module revises the cost volume based on attention technology, which provides different weights for different disparities in the cost volume. For different scenarios, this attention module will focus on different parts of the cost volume. For example, if it is an outdoor scenario, the attention module may give a small disparity more weight (because outdoor objects are far away), but may give a large disparity more weight for an indoor scenario. The CVA module may refine the matching cost volume in either a multi-branch way or a single-branch way. The cost aggregation module aggregates the attention-aware cost volume to output a disparity map from each network. Thereafter, the disparity fusion module fuses the aggregated disparities from each network (based on the different disparity ranges) to provide a final estimated disparity. The feature extraction may be implemented using a conventional feature extraction backbone, such as ResNet or a stacked hourglass network. The inputs of feature extraction are left and right images, each having a size H×W, wherein H is height and W is width, and the outputs are the corresponding feature maps C×W×H for left and right images respectively, where C is the number of channels. The cost volume may also be implemented using existing cost volumes, such as a standard cost volume based on feature map correlation, or an extended cost volume that integrates multiple cost volumes. The output of the cost volume calculation may be a four-dimensional (4D) feature map C×D×H×W, wherein C is the number of channels, D is the disparity level, H is the height, and W is the width. Regarding the CVA, because the cost volume is a 4D feature map, conventional attention algorithms based on a 3D feature map cannot be directly adopted. Accordingly, various embodiments are provided herein for performing the attention on the cost volume. CVA Based On Multi-Branch Attention A concept of multi-branch CVA is partitioning 4D feature maps CV∈R(C×D×H×W)into several 3D feature maps, and then applying an attention mechanism on each 3D feature map.
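Before turning to the partitioning methods, the cost volume construction described above may be sketched as follows in a PyTorch-style example. An elementwise-product correlation is used here as the matching measure purely for illustration; the extended cost volume mentioned above would integrate several measures, and the function name is an assumption.

```python
import torch

def correlation_cost_volume(left_feat: torch.Tensor,
                            right_feat: torch.Tensor,
                            max_disp: int) -> torch.Tensor:
    """Build a (B, C, D, H, W) cost volume from left/right feature maps.

    For each candidate disparity d, the right feature map is shifted by d and
    compared elementwise with the left feature map; the response at the true
    disparity level should dominate for a correct match.
    """
    b, c, h, w = left_feat.shape
    volume = left_feat.new_zeros(b, c, max_disp, h, w)
    for d in range(max_disp):
        if d == 0:
            volume[:, :, d] = left_feat * right_feat
        else:
            volume[:, :, d, :, d:] = left_feat[..., d:] * right_feat[..., :-d]
    return volume
```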
Below, two different methods are described for 4D to 3D partitioning, (a) partitioning along the channel dimension of CV, i.e., channel-wise disparity attention on the cost volume (CVA-CWDA), and (b) partitioning along the disparity dimension of CV, i.e., disparity-wise channel attention on the cost volume (CVA-DWCA). FIG.3illustrates a process of CVA-CWDA, according to an embodiment. For example, the process ofFIG.3may be performed by the CVA module103ofFIG.1. InFIG.3, M identifies a 3D map for each of the channels 1 through C of the cost volume. Y is the output attention-based cost volume corresponding to M. The attention map is D×D, which is able to show different attention on disparity for different datasets. Referring toFIG.3, a 4D feature map is partitioned into C 3D feature maps, each with size D×H×W (labeled as M). Specifically, a 4D feature map CV∈R(C×D×H×W)is partitioned along the channel dimension of CV, resulting in 3D feature maps CV={M1, . . . , MC}, Mi∈R(D×H×W), 1≤i≤C. Thereafter, channel attention is applied to each of the C feature maps at attention blocks A1to ACto obtain the attention-aware feature map Yi∈R(D×H×W). The attention may be calculated along the disparity dimension for each Yi, which results in a D×D attention matrix. The attention-aware feature maps are then concatenated back to a 4D feature map CV′={Y1, Y2, . . . , YC} as the output of the CVA module. FIG.4illustrates a detailed process of an attention block in CVA-CWDA, according to an embodiment. Referring toFIG.4, an attention block reshapes a D×H×W map M into a reshaped (WH)×D map Mrand a reshaped and transposed D×(WH) map MrT. Mrand MrTare then multiplied and a softmax is adopted to obtain a D×D attention map, i.e., an attention matrix X∈R(D×D), which is then multiplied by Mr, reshaped back to D×H×W, and then added to M in order to output a D×H×W attention-aware feature map Y. FIG.5illustrates a process of CVA-DWCA, according to an embodiment. For example, the process ofFIG.5may be performed by the CVA module103ofFIG.1. InFIG.5, N identifies a 3D map for each of the disparity levels 1 through D of the cost volume. Y is the output attention-based cost volume corresponding to N. The attention map is C×C, which is able to show different attention for different channels of the cost volume. Referring toFIG.5, a 4D feature map is partitioned into D 3D feature maps, each with size C×H×W (labeled as N). Specifically, a 4D feature map CV∈R(C×D×H×W)is partitioned along the disparity dimension of CV, resulting in 3D feature maps CV={N1, . . . , ND}, Ni∈R(C×H×W), 1≤i≤D. Thereafter, channel attention is applied to each of the D feature maps at attention blocks N1to NDto obtain the attention-aware feature map Yi∈R(C×H×W). The attention may be calculated along the channel dimension for each Yi, which results in a C×C attention matrix. The attention-aware feature maps are then concatenated back to a 4D feature map CV′={Y1, Y2, . . . , YD} as the output of the CVA module. FIG.6illustrates a detailed process of an attention block in CVA-DWCA, according to an embodiment. Referring toFIG.6, an attention block reshapes a C×H×W map N into a reshaped (WH)×C map Nrand a reshaped and transposed C×(WH) map NrT. Nrand NrTare then multiplied and a softmax is adopted to obtain a C×C attention map, i.e., an attention matrix X∈R(C×C), which is then multiplied by Nr, reshaped back to C×H×W, and then added to N in order to output a C×H×W attention-aware feature map Y.
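A sketch of a single CVA-CWDA branch, following the reshape, multiply, softmax, and residual-add flow ofFIG.4, is given below in PyTorch. The softmax axis and the side on which the attention matrix multiplies Mr are not fully pinned down by the description above, so one consistent reading is shown; the CVA-DWCA branch ofFIG.6is analogous with the roles of C and D exchanged.

```python
import torch
import torch.nn.functional as F

def cwda_attention_branch(m: torch.Tensor) -> torch.Tensor:
    """One CVA-CWDA branch on a (D, H, W) slice of the cost volume."""
    d, h, w = m.shape
    m_r = m.reshape(d, h * w).t()               # Mr:  (WH, D)
    attn = F.softmax(m_r.t() @ m_r, dim=-1)     # X:   (D, D) disparity attention map
    y = (m_r @ attn).t().reshape(d, h, w)       # re-weighted slice, back to (D, H, W)
    return m + y                                # residual connection, as in FIG.4

def cva_cwda(cost_volume: torch.Tensor) -> torch.Tensor:
    """Apply the branch to every channel of a (C, D, H, W) cost volume."""
    return torch.stack([cwda_attention_branch(cost_volume[c])
                        for c in range(cost_volume.shape[0])], dim=0)
```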
In the above-described embodiments, the CVA-CWDA and CVA-DWCA modules capture different information. More specifically, CVA-CWDA tries to find a correlation between different disparity levels. For example, if the input image is a close-view indoor scenario, CVA-CWDA may emphasize the cost volume with a large disparity level. However, if the input image is an outdoor scenario, CVA-CWDA may emphasize the cost volume with small disparity level. CVA-DWCA focuses on a correlation between different channels of the cost volume, which may be useful when the cost volume consists of multiple types of information, such as the extended cost volume in AMNet. When the cost volume consists of feature map correlation and differences, CVA-DWCA may revise which kind of information used in cost volume is better for a specific image. CVA Based on Single-Branch Attention A concept of single-branch CVA is directly working on the 4D cost volume. Before calculating an attention matrix, high-dimensional feature maps are “flattened” into low-dimensional feature maps. This is achieved by a one-shot attention module, where the input cost volume are flattened into 2D feature maps. Below, four different methods are provided for flattening the high-dimensional feature maps for attention calculation, (a) CVA-SBDA, (b) CVA-SBCA, (c) CVA-SBCDCA, and (d) CVA-SBSA. FIG.7illustrates a process of CVA-SBDA, according to an embodiment. Referring toFIG.7, the input to the CVA-SBDA is a 4D feature map CV∈R(C×D×H×W). CV is reshaped into a 2D (WHC)×(D) map CVr∈R((CWH)×D)and reshaped and transposed into a 2D (D)×(WHC) map CVrT∈R(D×(CWH)). CVrand CVrTare multiplied and a softmax is adopted in order to obtain an attention matrix X∈R(D×D). The D×D attention matrix X is multiplied with CVr, reshaped to 4D, and then added to CV in order to output an attention-aware cost volume CV′∈R(C×D×H×W). FIG.8illustrates a process of single-branch channel attention on the cost volume (CVA-SBCA), according to an embodiment. Referring toFIG.8, the input to the CVA-SBCA is a 4D feature map CV∈R(C×D×H×W). CV is reshaped into a 2D (DWH)×(C) map CVr∈R((DWH)×C)and reshaped and transposed into a 2D (C)×(DWH) map CVrT∈R(C×(DWH)). CVrand CVrTare multiplied and a softmax is adopted in order to obtain an attention matrix X∈R(C×C). The C×C attention matrix X is multiplied with CVr, reshaped to 4D, and then added to CV in order to output an attention-aware cost volume CV′∈R(C×D×H×W). FIG.9illustrates a process of single-branch combined disparity-channel attention on the cost volume (CVA-SBCDCA), according to an embodiment. Referring toFIG.9, the input to the CVA-SBCDCA is a 4D feature map CV∈R(C×D×H×W). CV is reshaped into a 2D (WH)×(CD) map CVr∈R((WH)×(CD))and reshaped and transposed into a 2D (CD)×(WH) map CVrT∈R((CD)×(WH)). CVrand CVrTare multiplied and a softmax is adopted in order to obtain an attention matrix X∈R((CD)×(CD)). The CD×CD attention matrix X is multiplied with CVr, reshaped to 4D, and then added to CV in order to output an attention-aware cost volume CV′∈R(C×D×H×W). FIG.10illustrates a process of single-branch spatial attention on the cost volume (CVA-SBSA), according to an embodiment. Referring toFIG.10, the input to the CVA-SBSA is a 4D feature map CV∈R(C×D×H×W). CV is reshaped into a 2D (CD)×(WH) map CVr∈R((CD)×(WH))and reshaped and transposed into a 2D (WH)×(CD) map CVrT∈R((WH)×(CD)). CVrand CVrTare multiplied and a softmax is adopted in order to obtain an attention matrix X∈R((WH)×(WH)). 
The WH×WH attention matrix X is multiplied with CVr, reshaped to 4D, and then added to CV in order to output an attention-aware cost volume CV′∈R(C×D×H×W). In comparing the above-described embodiments, CVA-SBDA and CVA-SBCA have same size attention matrices as CVA-CWDA and CVA-DWCA, but their attention matrices are calculated from all of the channels of the cost volume, instead of the multi-branch CVAs, where the attention matrices are calculated per channel. Since the size of the attention matrices does not change, their computational costs are similar. CVA-SBCDCA has attention matrix with size CD×CD, which is a kind of combined attention between the disparity level and channel, but results in much higher computational cost. CVA-SBSA has attention matrix with size WH×WH, which is a kind of spatial attention, which also has higher computational cost. Dual Cost Volume Attention A concept of dual cost volume attention may utilize any two of the above-described CVA modules. As the dual attention is constructed by using two CVA modules together, sequential ordering or parallel ordering may be utilized. FIG.11illustrates a process of dual cost volume attention utilizing sequential ordering and parallel ordering, according to embodiment. Referring toFIG.11, in sequential ordering flow (a), two CVA modules are used utilized in series, and in parallel ordering flow (b), two CVA modules are used utilized in parallel and the results thereof are combined to provide a final cost volume estimate. Since the different attention matrices capture different information, dual cost volume attention may be utilized, as illustrated inFIG.11, by organizing CVAs in either a sequential order or a parallel order. Cost Aggregation A cost aggregation module will output a disparity map by inputting attention-aware cost volume. It may be implemented by any existing cost aggregation modules, such as the semi-global like cost aggregation in Guided Aggregation Net (GANet) as illustrated by components101,102, and104ofFIG.1, or a stacked atrous multi-scale (AM) as illustrated by components101,102,104ofFIG.1. Disparity Fusion To further improve the accuracy and robustness, two networks may be trained on different disparity ranges. Both of these two networks may use the same feature extraction/cost volume/cost attention/cost aggregation, but use different maximum disparity ranges. For example, the two networks (CVANets) may be based on two commonly-used backbones, AMNet and GANet. AMNet uses a depthwise separable version of ResNet-50 as feature extractor, followed by an AM module, which captures deep global contextual information at multiple scales. An extended cost volume (ECV) that simultaneously computes different cost matching metrics may be adopted for cost aggregation. The output of ECV may be processed by a stacked AM module to output the final disparity. GANet implements a feature extractor by an hourglass network and uses feature map correlation as cost volume. GANet designs a semi-global guided aggregation (SGA) layer that implements a differentiable approximation of semi-global matching and aggregates the matching cost in different directions over the whole image. This allows for accurate estimation on occluded and reflective regions. 
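Before the formal expressions that follow, the two-range disparity fusion (soft combining) may be sketched as follows. The weights w1, w2, and w3 are placeholders to be set by validation, the 0-based disparity indexing is an implementation convenience, and the probability tensors are assumed to come from the two CVANets formalized below.

```python
import torch

def soft_combine(p1: torch.Tensor, p2: torch.Tensor, a: int,
                 w1: float = 0.5, w2: float = 0.5, w3: float = 1.0) -> torch.Tensor:
    """Soft (probability) combining of two CVANets.

    p1: (a+1, H, W) disparity probabilities from the network trained on [0, a].
    p2: (b+1, H, W) disparity probabilities from the network trained on [0, b], b > a.
    Returns the fused disparity map of shape (H, W).
    """
    fused = p2.clone()
    fused[:a] = w1 * p1[:a] + w2 * p2[:a]   # i < a: weighted sum of both networks
    fused[a:] = w3 * p2[a:]                 # i >= a: only the full-range network
    # p1's last bin (disparity >= a) is not used by this rule.
    fused = fused / fused.sum(dim=0, keepdim=True)          # normalize over disparities
    disparities = torch.arange(fused.shape[0], dtype=fused.dtype).view(-1, 1, 1)
    return (disparities * fused).sum(dim=0)                  # expectation over disparities
```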
More specifically, a first CVANet is trained on a disparity range [0, a], and outputs a first disparity map D_1 = \sum_{i=1}^{a} i \times P_{1,i}, where P_{1,i} is the probability of a pixel having estimated disparity equal to i when i<a, and P_{1,i} is the probability of a pixel having estimated disparity greater than or equal to a when i=a. A second CVANet is trained on the full disparity range [0, b], where a<b, and outputs a second disparity map D_2 = \sum_{i=1}^{b} i \times P_{2,i}, where P_{2,i} is the probability of a pixel having estimated disparity equal to i when i<b, and P_{2,i} is the probability of a pixel having estimated disparity greater than or equal to b when i=b. D_1 and D_2 may be fused using a disparity combining based on D_1 and D_2 directly, or a soft combining (or probability combining) that utilizes the probability vectors P_{1,i}, P_{2,i}. When disparity combining, the final output disparity D_{fused} may be obtained as a simple weighted sum as follows:

D_1 = \sum_{i=1}^{a} i \times P_{1,i}, \qquad D_2 = \sum_{i=1}^{b} i \times P_{2,i}

D_{fused} = \begin{cases} w_1 D_1 + w_2 D_2, & \text{if } D_1 < a \text{ and } D_2 < a \\ D_2, & \text{otherwise} \end{cases}

where w_1 and w_2 are constants lying in [0, 1], set based on validation results. When soft combining, the fusion occurs on the probability vectors, where w_1, w_2, and w_3 are constants lying in [0, 1], as follows:

P_{fused,i} = \begin{cases} w_1 P_{1,i} + w_2 P_{2,i}, & i < a \\ w_3 P_{2,i}, & i \ge a \end{cases}

P_{fused,i} should be further normalized as

P_{fused,i} \leftarrow \frac{P_{fused,i}}{\sum_{j=1}^{b} P_{fused,j}}

The output disparity based on soft combining may be represented by:

D_{fused} = \sum_{i=1}^{b} i \times P_{fused,i}

Using a single model, the above-described procedure may generate reasonable disparity outputs for both indoor and outdoor scenarios. An accuracy and efficiency (AE) comparison of CVANets with different attention modules is provided below in Table 1, which shows that the multi-branch attention modules generally have better accuracy/efficiency than the single-branch attention modules.

TABLE 1
Network         CVA module   Branch   Complexity   AE
AMNet           N/A          N/A      N/A          0.6499
CVANet-AMNet    CWDA         Multi    O(CD³HW)     0.6299
CVANet-AMNet    DWCA         Multi    O(C³DHW)     0.6277
CVANet-AMNet    SBDA         Single   O(CD³HW)     0.6378
CVANet-AMNet    SBCA         Single   O(C³DHW)     0.6369
CVANet-AMNet    SBCDCA       Single   O(C³D³HW)    0.6299
CVANet-AMNet    SBSA         Single   O(CDH³W³)    0.6451
GANet           N/A          N/A      N/A          0.6493
CVANet-AMNet    CWDA         Multi    O(CD³HW)     0.6259
CVANet-AMNet    DWCA         Multi    O(C³DHW)     0.6277
CVANet-AMNet    SBDA         Single   O(C³DHW)     0.6370
CVANet-AMNet    SBCA         Single   O(C³DHW)     0.6380
CVANet-AMNet    SBCDCA       Single   O(C³D³HW)    0.6274
CVANet-AMNet    SBSA         Single   O(CDH³W³)    0.6441

The attention maps also show that the above-described cost-volume attention modules work well for images with different scenarios. FIG.12illustrates graphs demonstrating effectiveness of cost-volume attention modules, according to an embodiment. Referring toFIG.12, to demonstrate the effectiveness of the above-described techniques, graphs (a) to (c) in the top row show column-wise sums of the values of the attention matrices (D×D), which exhibit patterns consistent with the disparity distributions in graphs (d) to (f) in the bottom row. FIG.13illustrates a block diagram of an electronic device in a network environment, according to one embodiment. Referring toFIG.13, the electronic device1301in the network environment1300may communicate with an electronic device1302via a first network1398(e.g., a short-range wireless communication network), or an electronic device1304or a server1308via a second network1399(e.g., a long-range wireless communication network). The electronic device1301may communicate with the electronic device1304via the server1308.
The electronic device1301may include a processor1320, a memory1330, an input device1350, a sound output device1355, a display device1360, an audio module1370, a sensor module1376, an interface1377, a haptic module1379, a camera module1380, a power management module1388, a battery1389, a communication module1390, a subscriber identification module (SIM)1396, or an antenna module1397. In one embodiment, at least one (e.g., the display device1360or the camera module1380) of the components may be omitted from the electronic device1301, or one or more other components may be added to the electronic device1301. In one embodiment, some of the components may be implemented as a single integrated circuit (IC). For example, the sensor module1376(e.g., a fingerprint sensor, an iris sensor, or an illuminance sensor) may be embedded in the display device1360(e.g., a display). The processor1320may execute, for example, software (e.g., a program1340) to control at least one other component (e.g., a hardware or a software component) of the electronic device1301coupled with the processor1320, and may perform various data processing or computations. As at least part of the data processing or computations, the processor1320may load a command or data received from another component (e.g., the sensor module1376or the communication module1390) in volatile memory1332, process the command or the data stored in the volatile memory1332, and store resulting data in non-volatile memory1334. The processor1320may include a main processor1321(e.g., a central processing unit (CPU) or an application processor (AP)), and an auxiliary processor1323(e.g., a graphics processing unit (GPU), an image signal processor (ISP), a sensor hub processor, or a communication processor (CP)) that is operable independently from, or in conjunction with, the main processor1321. Additionally or alternatively, the auxiliary processor1323may be adapted to consume less power than the main processor1321, or execute a particular function. The auxiliary processor1323may be implemented as being separate from, or a part of, the main processor1321. The auxiliary processor1323may control at least some of the functions or states related to at least one component (e.g., the display device1360, the sensor module1376, or the communication module1390) among the components of the electronic device1301, instead of the main processor1321while the main processor1321is in an inactive (e.g., sleep) state, or together with the main processor1321while the main processor1321is in an active state (e.g., executing an application). According to one embodiment, the auxiliary processor1323(e.g., an image signal processor or a communication processor) may be implemented as part of another component (e.g., the camera module1380or the communication module1390) functionally related to the auxiliary processor1323. The memory1330may store various data used by at least one component (e.g., the processor1320or the sensor module1376) of the electronic device1301. The various data may include, for example, software (e.g., the program1340) and input data or output data for a command related thereto. The memory1330may include the volatile memory1332or the non-volatile memory1334. The program1340may be stored in the memory1330as software, and may include, for example, an operating system (OS)1342, middleware1344, or an application1346. 
The input device1350may receive a command or data to be used by another component (e.g., the processor1320) of the electronic device1301, from the outside (e.g., a user) of the electronic device1301. The input device1350may include, for example, a microphone, a mouse, or a keyboard. The sound output device1355may output sound signals to the outside of the electronic device1301. The sound output device1355may include, for example, a speaker or a receiver. The speaker may be used for general purposes, such as playing multimedia or recording, and the receiver may be used for receiving an incoming call. According to one embodiment, the receiver may be implemented as being separate from, or a part of, the speaker. The display device1360may visually provide information to the outside (e.g., a user) of the electronic device1301. The display device1360may include, for example, a display, a hologram device, or a projector and control circuitry to control a corresponding one of the display, hologram device, and projector. According to one embodiment, the display device1360may include touch circuitry adapted to detect a touch, or sensor circuitry (e.g., a pressure sensor) adapted to measure the intensity of force incurred by the touch. The audio module1370may convert a sound into an electrical signal and vice versa. According to one embodiment, the audio module1370may obtain the sound via the input device1350, or output the sound via the sound output device1355or a headphone of an external electronic device1302directly (e.g., wired) or wirelessly coupled with the electronic device1301. The sensor module1376may detect an operational state (e.g., power or temperature) of the electronic device1301or an environmental state (e.g., a state of a user) external to the electronic device1301, and then generate an electrical signal or data value corresponding to the detected state. The sensor module1376may include, for example, a gesture sensor, a gyro sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an infrared (IR) sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor. The interface1377may support one or more specified protocols to be used for the electronic device1301to be coupled with the external electronic device1302directly (e.g., wired) or wirelessly. According to one embodiment, the interface1377may include, for example, a high definition multimedia interface (HDMI), a universal serial bus (USB) interface, a secure digital (SD) card interface, or an audio interface. A connecting terminal1378may include a connector via which the electronic device1301may be physically connected with the external electronic device1302. According to one embodiment, the connecting terminal1378may include, for example, an HDMI connector, a USB connector, an SD card connector, or an audio connector (e.g., a headphone connector). The haptic module1379may convert an electrical signal into a mechanical stimulus (e.g., a vibration or a movement) or an electrical stimulus which may be recognized by a user via tactile sensation or kinesthetic sensation. According to one embodiment, the haptic module1379may include, for example, a motor, a piezoelectric element, or an electrical stimulator. The camera module1380may capture a still image or moving images. According to one embodiment, the camera module1380may include one or more lenses, image sensors, image signal processors, or flashes. 
The power management module1388may manage power supplied to the electronic device1301. The power management module1388may be implemented as at least part of, for example, a power management integrated circuit (PMIC). The battery1389may supply power to at least one component of the electronic device1301. According to one embodiment, the battery1389may include, for example, a primary cell which is not rechargeable, a secondary cell which is rechargeable, or a fuel cell. The communication module1390may support establishing a direct (e.g., wired) communication channel or a wireless communication channel between the electronic device1301and the external electronic device (e.g., the electronic device1302, the electronic device1304, or the server1308) and performing communication via the established communication channel. The communication module1390may include one or more communication processors that are operable independently from the processor1320(e.g., the AP) and support a direct (e.g., wired) communication or a wireless communication. According to one embodiment, the communication module1390may include a wireless communication module1392(e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module1394(e.g., a local area network (LAN) communication module or a power line communication (PLC) module). A corresponding one of these communication modules may communicate with the external electronic device via the first network1398(e.g., a short-range communication network, such as Bluetooth™, wireless-fidelity (Wi-Fi) direct, or a standard of the Infrared Data Association (IrDA)) or the second network1399(e.g., a long-range communication network, such as a cellular network, the Internet, or a computer network (e.g., LAN or wide area network (WAN))). These various types of communication modules may be implemented as a single component (e.g., a single IC), or may be implemented as multiple components (e.g., multiple ICs) that are separate from each other. The wireless communication module1392may identify and authenticate the electronic device1301in a communication network, such as the first network1398or the second network1399, using subscriber information (e.g., international mobile subscriber identity (IMSI)) stored in the subscriber identification module1396. The antenna module1397may transmit or receive a signal or power to or from the outside (e.g., the external electronic device) of the electronic device1301. According to one embodiment, the antenna module1397may include one or more antennas, and, therefrom, at least one antenna appropriate for a communication scheme used in the communication network, such as the first network1398or the second network1399, may be selected, for example, by the communication module1390(e.g., the wireless communication module1392). The signal or the power may then be transmitted or received between the communication module1390and the external electronic device via the selected at least one antenna. At least some of the above-described components may be mutually coupled and communicate signals (e.g., commands or data) therebetween via an inter-peripheral communication scheme (e.g., a bus, a general purpose input and output (GPIO), a serial peripheral interface (SPI), or a mobile industry processor interface (MIPI)). 
According to one embodiment, commands or data may be transmitted or received between the electronic device1301and the external electronic device1304via the server1308coupled with the second network1399. Each of the electronic devices1302and1304may be a device of a same type as, or a different type, from the electronic device1301. All or some of the operations to be executed at the electronic device1301may be executed at one or more of the external electronic devices1302,1304, or1308. For example, if the electronic device1301should perform a function or a service automatically, or in response to a request from a user or another device, the electronic device1301, instead of, or in addition to, executing the function or the service, may request the one or more external electronic devices to perform at least part of the function or the service. The one or more external electronic devices receiving the request may perform the at least part of the function or the service requested, or an additional function or an additional service related to the request, and transfer an outcome of the performing to the electronic device1301. The electronic device1301may provide the outcome, with or without further processing of the outcome, as at least part of a reply to the request. To that end, a cloud computing, distributed computing, or client-server computing technology may be used, for example. One embodiment may be implemented as software (e.g., the program1340) including one or more instructions that are stored in a storage medium (e.g., internal memory1336or external memory1338) that is readable by a machine (e.g., the electronic device1301). For example, a processor of the electronic device1301may invoke at least one of the one or more instructions stored in the storage medium, and execute it, with or without using one or more other components under the control of the processor. Thus, a machine may be operated to perform at least one function according to the at least one instruction invoked. The one or more instructions may include code generated by a compiler or code executable by an interpreter. A machine-readable storage medium may be provided in the form of a non-transitory storage medium. The term “non-transitory” indicates that the storage medium is a tangible device, and does not include a signal (e.g., an electromagnetic wave), but this term does not differentiate between where data is semi-permanently stored in the storage medium and where the data is temporarily stored in the storage medium. According to one embodiment, a method of the disclosure may be included and provided in a computer program product. The computer program product may be traded as a product between a seller and a buyer. The computer program product may be distributed in the form of a machine-readable storage medium (e.g., a compact disc read only memory (CD-ROM)), or be distributed (e.g., downloaded or uploaded) online via an application store (e.g., Play Store™), or between two user devices (e.g., smart phones) directly. If distributed online, at least part of the computer program product may be temporarily generated or at least temporarily stored in the machine-readable storage medium, such as memory of the manufacturer's server, a server of the application store, or a relay server. According to one embodiment, each component (e.g., a module or a program) of the above-described components may include a single entity or multiple entities. 
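The request-and-reply offloading described above can be illustrated with a minimal sketch, assuming a hypothetical HTTP endpoint exposed by the external electronic device or server; the endpoint URL, payload fields, and helper name are illustrative assumptions and not part of the disclosure.

import json
import urllib.request

# Hypothetical endpoint on the external electronic device or server (assumption).
OFFLOAD_URL = "http://192.168.0.10:8080/execute"

def offload_function(name, arguments, post_process=None):
    # Request that an external device perform at least part of a function or
    # service and transfer an outcome of the performing back to this device.
    payload = json.dumps({"function": name, "arguments": arguments}).encode("utf-8")
    request = urllib.request.Request(
        OFFLOAD_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        outcome = json.loads(response.read().decode("utf-8"))
    # The outcome may be provided as-is or after further processing.
    return post_process(outcome) if post_process else outcome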
One or more of the above-described components may be omitted, or one or more other components may be added. Alternatively or additionally, a plurality of components (e.g., modules or programs) may be integrated into a single component. In this case, the integrated component may still perform one or more functions of each of the plurality of components in the same or similar manner as they are performed by a corresponding one of the plurality of components before the integration. Operations performed by the module, the program, or another component may be carried out sequentially, in parallel, repeatedly, or heuristically, or one or more of the operations may be executed in a different order or omitted, or one or more other operations may be added. Although certain embodiments of the present disclosure have been described in the detailed description of the present disclosure, the present disclosure may be modified in various forms without departing from the scope of the present disclosure. Thus, the scope of the present disclosure shall not be determined merely based on the described embodiments, but rather determined based on the accompanying claims and equivalents thereto. | 38,879 |
11861860 | DETAILED DESCRIPTION As is set forth in greater detail below, implementations of the present disclosure are directed to the collection of two-dimensional (“2D”) body images of a body of a user and the determination of one or more body dimensions of the body based on the collected 2D image. Body dimensions, as used herein, include any length, circumference, ratio, etc., of any part of a body. For example, body dimensions include, but are not limited to, shoulder circumference, chest circumference, waist circumference, hip circumference, inseam length, bicep circumference, leg circumference, waist to hip ratio, chest to waist ratio, waist to height ratio, etc. Also disclosed is the generation and presentation of a 3D body model from the 2D body images. Two-dimensional body images may be obtained from any device that includes a 2D camera, such as cell phones, tablets, laptops, etc. In other implementations, the 2D body images may be obtained from any other source, such as data storage. The 2D body images may be sent by an application executing on the device to remote computing resources that process the 2D body images to determine personalized 3D body features, to generate a personalized 3D body model of the body of the user, to determine body dimensions of the body represented in the 2D image, and/or to determine body measurements of the body of the user. Body measurements include, but are not limited to, body composition (e.g., weight, body fat, bone mass, body mass, body volume, etc.). The application executing on the portable device receives the current body dimension information, current body measurement information, and personalized 3D body features, generates the personalized 3D body model, and presents some or all of the body dimensions, some or all of the body measurements, and/or the personalized 3D body model to the user. In some implementations, the user may interact with the personalized 3D body model to view different sides of the personalized 3D body model and/or to visualize differences in the personalized 3D body model and/or corresponding body dimensions if one or more body measurements change. For example, a user may provide a target body measurement, such as a decrease in body fat, and the disclosed implementations may generate one or more predicted personalized 3D body models and corresponding predicted body dimensions that represent a predicted appearance and predicted dimensions of the body of the user with the target body measurement(s). In some implementations, the predicted appearance and/or predicted dimensions of the body may be presented as a 3D body slider and/or other adjustor that the user may interact with to view progressive changes to the body appearance and dimensions at different body measurements. FIG.1Ais a transition diagram of 2D body image collection and processing to produce a personalized 3D body model of a body of a user100and/or body dimensions of the body of the user that may be presented back to the user,FIG.1Bis a transition diagram of a generation of a predicted 3D body model and corresponding predicted body dimensions, andFIG.2illustrates examples of different orientations or body directions of a body200, in accordance with implementations of the present disclosure. In some implementations, a user100/200may execute an application125/225on a portable device130/230, such as a cellular phone, tablet, laptop, etc., that includes an imaging element (e.g., camera) and interact with the application. 
The imaging element may be any conventional imaging element, such as a standard 2D Red, Green, Blue (“RGB”) digital camera that is included on many current portable devices. Likewise, images, as discussed herein may be still images generated by the imaging element and/or images or frames extracted from video generated by the imaging element. The user may provide user information, such as username, password, etc., to the application so that the application can identify the user and determine a user account associated with the user. Likewise, the user may provide other user information, such as body information, including but not limited to weight, height, age, gender, ethnicity, etc. The user may select which user information is provided or choose not to provide any user information. In addition, in some implementations, the user may interact with the application executing on the portable device130/230without providing any user identifying information (e.g., operate as a guest to the application). Upon user identification and/or receipt of user information, the user100/200positions the portable device130/230such that a field of view of the imaging element of the portable device is substantially horizontal and facing toward the user. In some implementations, the application125/225executing on the portable device130/230may provide visual and/or audible instructions that guide the user100/200in the placement and positioning of the portable device130/230. For example, the application may instruct the user100/200to place the portable device130/230between waist and head height of the user and in a substantially vertical direction (e.g., between 2 and 10 degrees of vertical) such that the imaging element is pointed toward the user and the field of view of the imaging element is substantially horizontal. In some implementations, the application may request that the user wear a minimal amount of clothing, such as undergarments shown inFIGS.1A,1B, and2. By wearing minimal clothing, processing of the 2D body image may be more accurate. Once the portable device is properly positioned, 2D body images of the user100/200are captured by the imaging element of the portable device130/230. The 2D body images are processed to determine that the user is in a defined pose, such as an “A Pose,” and to determine a body direction of the body of the user with respect to the imaging element. The defined pose may be any body position that enables image capture of components of the body. In one example, the defined pose is an “A Pose” in which the arms are separated from the sides of the body and the legs are separated, for example by separating the feet of the body to about shoulder width. The A Pose allows image processing of 2D body images to distinguish between body parts (e.g., legs, arms, torso) from different angles and also aids in body direction determination. The body direction may be any direction or orientation of the body with respect to the imaging element. Example body directions include, but are not limited to, a front side body direction in which the body is facing the imaging element, a right side body direction in which the body is turned such that a right side of the body is facing the imaging element, a left side body direction in which a left side of the body is facing the imaging element, and a back side body direction in which a back of the body is facing the imaging element. 
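A minimal sketch of the guided capture described above follows; the camera_frames, detect_pose, detect_body_direction, and prompt helpers are hypothetical placeholders for the imaging element, pose detection, and user-guidance steps, not APIs defined by the disclosure.

BODY_DIRECTIONS = ["front", "right", "back", "left"]

def capture_body_direction_images(camera_frames, detect_pose, detect_body_direction,
                                  prompt, defined_pose="A Pose"):
    # Collect one representative 2D body direction image per requested direction
    # while the user holds the defined pose.
    selected = {}
    for direction in BODY_DIRECTIONS:
        prompt(f"Remain in the {defined_pose} and turn so your {direction} side faces the camera.")
        for frame in camera_frames():
            if detect_pose(frame) == defined_pose and detect_body_direction(frame) == direction:
                selected[direction] = frame
                break
    return selected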
As will be appreciated, any number of body directions and corresponding orientations of the body may be utilized with the disclosed implementations, and the four discussed (front side, right side, back side, and left side) are provided only as examples. In some implementations, the application125/225executing on the portable device130/230may guide the user through different body directions and select one or more 2D images as representative of each body direction. For example, referring toFIG.2, an application225executing on the portable device230may guide the user into the proper pose, such as the “A Pose” illustrated by the body200of the user and then guide the user through a series of body directions200A,200B,200C,200D,200E,200F,200G, and200H in which the user rotates their body to the requested body direction and remains in the A Pose while 2D body images are generated and one or more of those 2D body images are selected by the application as a 2D body direction image corresponding to the current body direction of the body of the user. In the example illustrated inFIG.2, eight different 2D body direction images are selected by the application225executing on the portable device230, one for each respective body direction200A,200B,200C,200D,200E,200F,200G, and200H. Returning toFIG.1A, as each 2D body direction image is selected by the application, or after all 2D body direction images are selected, the 2D body direction images are sent from the application125/225executing on the portable device130/230via a network290(FIG.2) to remote computing resources103/203for further processing. In addition, the user information provided to the application by the user100/200may be sent from the application executing on the portable device130/230to the remote computing resources103/203. In other implementations, all processing may be done on the portable device. In still other examples, as images are generated, the images may be sent to the remote computing resources103/203and processed by the remote computing resources103/203to select the body direction images. The remote computing resources103/203may include a 3D body model system101that receives the user information and/or the 2D body direction images and processes those images using one or more neural networks, such as a convolutional neural network, to generate personalized 3D body features corresponding to a personalized 3D body model of the body of the user100/200. In addition, one or more of the 2D body direction images, such as the front side 2D body direction image, may be processed to determine one or more additional body measurements, such as body fat percentage, body mass, bone density, muscle mass, etc. Still further, one or more of the 2D body direction images may be processed, as discussed further below, to determine one or more body dimensions of the user, such as shoulder circumference, waist circumference, waist-to-hip ratio, etc. The 3D body model system101, upon generating the personalized 3D body features, body dimensions, and body measurements, sends the personalized 3D body features, body dimensions, and body measurements back to the application125/225executing on the portable device130/230. 
The application125/225, upon receipt of the personalized 3D body features, body dimensions, and body measurements, generates, from the personalized 3D body features, a personalized 3D body model that is representative of the body100/200of the user and presents the personalized 3D body model, one or more body dimensions, and one or more body measurements on a display of the portable device130/230. In addition to rendering and presenting the personalized 3D body model, one or more body dimensions and/or one or more body measurements may be presented. In some implementations, the user100/200can interact with the presented personalized 3D body model, body dimensions, and body measurements. For example, the user may view historical information that was previously collected for the user via the application125/225. The user may also interact with the presented personalized 3D body model to rotate and/or turn the presented personalized 3D body model. For example, if the portable device130/230includes a touch-based display, the user may use the touch-based display to interact with the application and rotate the presented personalized 3D body model to view different views (e.g., front, side, back) of the personalized 3D body model. Likewise, in some implementations, the user may view body dimension information with respect to a larger population or cohort with which the user is associated (e.g., based on age, fitness level, height, weight, gender, etc.). For example, the user may view body dimension information relative to body dimension information of other people that are within five years of age of the user and of a same gender as the user. In some implementations, as part of interaction with the application125/225, the user100/200may provide one or more adjustments to body measurements, referred to herein as targets. For example, a user may request to alter the body fat measurement value of the body by a defined amount (e.g., from 25% to 20%), alter the muscle mass by a defined amount, alter the body weight by a defined amount, etc. In other implementations, in addition to altering one or more body measurements, the user may specify one or more activities (e.g., exercise, nutrition, sleep) that should cause adjustments to one or more body measurements. In the example illustrated inFIG.1B, the user provides a body fat measurement adjustment to a target body fat measurement value. Upon receipt of the target body fat measurement value, the application125/225executing on the portable device130/230sends the target body fat measurement value to the remote computing resources103/203for further processing. The remote computing resources103/203and the 3D body model system101process the received target body fat measurement value along with other current body measurements and the personalized 3D body features to generate predicted personalized 3D body features, predicted body dimensions, and predicted body measurements that correspond to the target body fat measurement value and/or the selected activity. The remote computing resources103/203may then send the predicted personalized 3D body features, predicted body dimensions, and predicted body measurements to the application125/225and the application125/225may render a predicted 3D body model based on the received predicted personalized 3D body features. 
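One way the data returned to the application might be bundled is sketched below; the field names and units are assumptions for illustration and are not a format defined by the disclosure.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class BodyScanResult:
    body_features: List[float]           # personalized 3D body features (e.g., neural network weights)
    body_dimensions: Dict[str, float]    # e.g., {"shoulder_circumference_cm": 112.0}
    body_measurements: Dict[str, float]  # e.g., {"body_fat_percent": 27.0, "weight_lbs": 136.0}
    predicted: bool = False              # True when generated for a target body measurement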
Similar to the personalized 3D body model, the application125/225may present the predicted 3D body model, one or more predicted body dimensions, and/or one or more of the predicted body measurements to the user and enable interaction by the user with the predicted personalized 3D body model, predicted body dimensions, and/or predicted body measurements. As discussed further below, in some implementations, the user may be able to alter views between the personalized 3D body model and the predicted personalized 3D body model. In other implementations, the application125/225may integrate the personalized 3D body model and the predicted personalized 3D body model to produce a 3D body slider and/or other adjustor (e.g., radio button, dial, etc.) that provides the user with a continuous view of different appearances of the body and/or different body dimensions of the body at different body measurements between the current body measurements and the predicted body measurements. The 3D body slider and/or other adjustor, which relates to any type of controller or adjustor that may be used to present different appearances of the body and/or different body dimensions of the body at different body measurements, is referred to herein generally as a “3D body model adjustor.” FIG.3Ais a user interface301-1presented by an application executing on a portable device, such as the application125/225executing on the portable device130/230discussed above with respect toFIGS.1A,1B, and2, in accordance with implementations of the present disclosure. In this example, the user interface301-1illustrates a 2D body direction image300-1captured by an imaging element of the portable device that was used to generate and present a personalized 3D body model, corresponding body dimension information and corresponding body measurement information. In this example, the illustrated user interface301-1shows the 2D body direction image, body dimensions, including the shoulder circumference333-1, waist circumference333-2, and waist/hip ratio333-3, and body measurements, including the body fat percentage302-1determined for the body, and the weight304of the body. As will be appreciated, additional or fewer body dimensions and/or body measurements may be included on the user interface301-1. For example, additional body dimensions, such as bicep circumference, waist to height ratio, thigh circumference, etc., may optionally be presented on the user interface. In some examples, a user may select which body dimensions and/or body measurements are presented. As discussed further below, the body dimensions may be determined from the 2D body direction image300-1. Likewise, the body measurements may be determined from the 2D body direction image300-1and/or provided as user information by the user. In other implementations, additional or fewer body dimensions and/or additional or fewer body measurements may be presented on the user interface301-1by the application125/225. A user interacting with the user interface301-1may also select to view other 2D body direction images that were used to generate a personalized 3D body model, other body dimensions determined for the body, and/or other body measurements determined for the body, by selecting the indicators310and/or swiping or otherwise interacting with the user interface301-1to alter the currently presented 2D body direction image300-1. 
The user may also alternate between a view of 2D body direction images300-1, as illustrated in the user interface301-1ofFIG.3A, and the rendered and presented personalized 3D body model300-2, as illustrated in the small image presentation of the personalized 3D body model300-2inFIG.3Aand as illustrated as the primary image300-2in user interface301-2ofFIG.3B. Referring briefly toFIG.3B, the user may rotate and/or change the view of the personalized 3D body model300-2by directly interacting with the personalized 3D body model300-2. For example, the user may rotate the presentation of the personalized 3D body model to view different portions of the personalized 3D body model, zoom out to view more of the personalized 3D body model, or zoom in to view details corresponding to a portion of the personalized 3D body model. In some implementations, if the user has utilized the application125/225over a period of time to generate multiple instances of personalized 3D body models of the user, the user interface may also present historical body measurements and/or body dimensions316corresponding to the different dates on which 2D body images of the body of the user were captured and used to generate a personalized 3D body model, body dimensions, and body measurements of the body of the user. In the illustrated example, the user may select between viewing historical waist/hip ratio316-1(a body dimension), as illustrated inFIG.3A, and body fat percentage316-2(a body measurement), as illustrated inFIG.3B, through selection of the toggle control318. In other implementations, different or additional historical body dimensions and/or body measurements may be accessible through the user interface301. In addition to viewing historical body dimensions and/or body measurements, the user may also access and view the 2D body images that were collected at those prior points in time and/or the personalized 3D body models generated from those prior 2D body images, through selection of the date control322-1or the arrow control322-2. The user may also interact with the user interface301-1to select to take a new scan of their body by selecting the Take A New Scan control314. In response to a user selecting the Take A New Scan control314, the application executing on the portable device will provide instructions to the user to position the user in the defined pose (e.g., A Pose) and at proper body directions so that 2D body direction images can be generated and used to produce a personalized 3D body model, body dimensions, and body measurements of the body of the user, as discussed herein. In some implementations, a user may also interact with the application125/225to predict an appearance of the body with different body measurements (e.g., changes in body fat percentage and/or changes in muscle mass). For example,FIG.3Cis a user interface illustrating an example 3D body model adjustor in the form of a slider adjustment and resulting predicted three-dimensional body model, corresponding body dimensions, and corresponding body measurements, in accordance with implementations of the present disclosure. As illustrated, a user may interact with the user interface301-3to alter one or more body measurements and the application executing on the device will generate a predicted personalized 3D body model300-3, predicted body dimensions, and predicted body measurements in accordance with the altered body measurements, in accordance with implementations of the present disclosure. 
In the illustrated example, the user is using their hand360to interact with a single slider302-2presented on the user interface301-3to alter the body fat measurement value, in this example from the computed 27% to 10%. In response to receiving the target body measurement, in this example the reduced body fat measurement value, the disclosed implementations, as discussed further below, generate and present a predicted personalized 3D body model300-3, predicted body dimensions, and predicted body measurements representative of a predicted appearance of the body of the user with the target body measurement. The predicted personalized 3D body model300-3may be predicted and rendered based on the personalized 3D body model and corresponding personalized 3D body features determined for the body of the user. Likewise, shading and contours, such as shading to show stomach muscle definition303-1or body dimension changes, such as increased bicep circumference303-2, increased shoulder circumference333-1, decreased waist circumference333-2, decreased waist/hip ratio333-3, etc., may be generated and presented with the presentation of the predicted personalized 3D body model. The predicted body dimensions may be determined from the predicted personalized 3D body model and/or from the trained machine learning model. For example, the predicted personalized 3D body model may be used to generate a predicted personalized silhouette that may be used, as discussed herein, to generate predicted body dimensions. Alternatively, based on the target body measurement, a synthetic body model may be selected or generated and a silhouette generated from the synthetic body model that corresponds to the target body measurements. The silhouette may then be used, as discussed herein, to determine predicted body dimensions corresponding to the target body measurement. Like the other rendered and presented personalized 3D body models, the user may interact with the presented predicted personalized 3D body model300-3to view different portions or aspects of the predicted personalized 3D body model. While the example illustrated inFIG.3Cshows alteration of the body fat percentage, in other examples, a user may select to alter other body measurements, such as body weight, muscle mass, etc. Likewise, in some examples, based on a change to one body measurement, other body measurements and/or body dimensions may be automatically changed to correspond to the changed body measurement. For example, if the user changes the body fat percentage from 27% to 10%, as in the illustrated example, the application executing on the portable device may determine that in most instances a change in that amount of body fat percentage also typically results in a weight change from the determined 136 pounds to 115 pounds. The user may accept this anticipated change to other body measurements and/or body dimensions, provide other inputs for those body measurements and/or body dimensions, or select to leave those body measurements/body dimensions unchanged. In still other examples, a user may be able to interact with a multi-dimensional slider and specify different changes to body measurements and/or activities. In some implementations, some or all of the sliders of the multi-dimensional slider may be interconnected such that a change to one slider may result in a change or adjustment to another slider. In other implementations, other forms of multi-dimensional 3D body model adjustors may also be presented. 
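The disclosure does not specify how intermediate appearances between the current and predicted body measurements are produced; a simple assumption is to blend the two sets of personalized 3D body features linearly as the slider moves, as in the sketch below.

import numpy as np

def interpolate_body_features(current_features, predicted_features,
                              current_value, target_value, slider_value):
    # Blend current and predicted personalized 3D body features for an
    # intermediate slider position (e.g., a body fat percentage between the
    # computed 27% and the target 10%).
    current = np.asarray(current_features, dtype=float)
    predicted = np.asarray(predicted_features, dtype=float)
    t = (slider_value - current_value) / (target_value - current_value)
    t = float(np.clip(t, 0.0, 1.0))
    return (1.0 - t) * current + t * predicted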
FIG.3Dis a user interface301-4illustrating another example 3D body model adjustor in the form of a multi-dimensional slider312-3adjustment and resulting predicted personalized 3D body model300-4and predicted body dimensions333-1,333-2,333-3, in accordance with implementations of the present disclosure. In this example, the user may interact with a multi-dimensional slider312-3to adjust one or more body measurements and/or activity levels. In this example, the user may adjust the body fat measurement value, muscle mass measurement of the body, weight of the body, the amount of time they do cardio exercises, lift weights, the number of calories consumed, and/or the number of hours the user sleeps. In other implementations, the sliders may represent other body measurements (e.g., muscle mass, weight, etc.) and/or other activities that may be changed by the user and utilized by the disclosed implementations as targets for use in computing predicted personalized 3D body features and corresponding predicted personalized 3D body models. FIG.4is a transition diagram400of processing 2D body images of a body to produce a personalized 3D body model and body dimensions of that body, in accordance with implementations of the present disclosure. 3D modeling and body dimension determination of a body from 2D body images begins with the receipt or creation of a 2D body image402that includes a representation of the body403of the user to be modeled. As discussed above, 2D body images402for use with the disclosed implementations may be generated using any conventional imaging element, such as a standard 2D Red, Green, Blue (“RGB”) digital camera that is included on many current portable devices (e.g., tablets, cellular phones, laptops, etc.). The 2D body image may be a still image generated by the imaging element or an image extracted from video generated by the imaging element. Likewise, any number of images may be used with the disclosed implementations. As discussed, the user may be instructed to stand in a particular orientation (e.g., front facing toward the imaging element, side facing toward the imaging element, back facing toward the imaging element, etc.) and/or to stand in a particular pose, such as an “A pose”. Likewise, the user may be instructed to stand a distance from the camera such that the body of the user is completely or partially included in a field of view of the imaging element and represented in the generated image402. Still further, in some implementations, the imaging element may be aligned or positioned at a fixed or stationary point and at a fixed or stationary angle so that images generated by the imaging element are each from the same perspective and encompass the same field of view. As will be appreciated, a user may elect or opt-in to having a personalized 3D body model of the body of the user generated and may also select whether the generated personalized 3D body model and/or other information, such as determined body dimensions, may be used for further training of the disclosed implementations and/or for other purposes. The 2D body image402that includes a representation of the body403of the user may then be processed to produce a segmented silhouette404of the body403of the user represented in the image402. A variety of techniques may be used to generate the silhouette404. 
For example, background subtraction may be used to subtract or black out pixels of the image that correspond to a background of the image while pixels corresponding to the body403of user (i.e., foreground) may be assigned a white or other color values. In another example, a semantic segmentation algorithm may be utilized to label background and body (foreground) pixels and/or to identify different segments of the body. For example, a convolutional neural network (“CNN”) may be trained with a semantic segmentation algorithm to determine bodies, such as human bodies, in images and/or to determine body segments (e.g., head segment, neck segment, torso segment, left arm segment, etc.). In addition or as an alternative thereto, the segmented silhouette may be segmented into one or more body segments, such as hair segment404-1, head segment404-2, neck segment404-3, upper clothing segment404-4, upper left arm404-5, lower left arm404-6, left hand404-7, torso404-8, upper right arm404-9, lower right arm404-10, right hand404-11, lower clothing404-12, upper left leg404-13, upper right leg404-16, etc. For example, the CNN may be trained with a semantic segmentation algorithm to predict for each pixel of an image the likelihood that the pixel corresponds to a segment label (e.g., hair, upper clothing, lower clothing, head, upper right arm, etc.). For example, the CNN may be trained to process each 2D body image and output, for each pixel of each image, a vector that indicates a probability for each label that the pixel corresponds to that label. For example, if there are twenty-three labels (e.g., body segments) for which the CNN is trained, the CNN may generate, for each pixel of a 2D image, a vector that includes a probability score for each of the twenty-three labels indicating the likelihood that the pixel corresponds to the respective label. As a result, each pixel of an image may be associated with a segment based on the probability scores indicated in the vector. For segments for which the CNN is trained but are not represented in the 2D image, the CNN will provide low or zero probability scores for each label indicated in the vector, thereby indicating that the segment is not visible in the 2D body image. In some implementations, the silhouette of the body of the user may be normalized in height and centered in the image. This may be done to further simplify and standardize inputs to a CNN to those on which the CNN was trained. Likewise, a silhouette of the body of the user may be preferred over the representation of the body of the user so that the CNN can focus only on body shape and not skin tone, texture, clothing, etc. The silhouette404of the body may then be processed by one or more other CNNs406that are trained to determine body traits, also referred to herein as body features, representative of the body and to produce personalized 3D body features that are used to determine body dimensions of the body and a personalized 3D body model of the body. The body features may be represented as a set of neural network weights representative of different aspects of the body. In some implementations, the CNN406may be trained for multi-mode input to receive as inputs to the CNN the silhouette404, and one or more known body attributes405of the body of the user. For example, a user may provide a height of the body of the user, a weight of the body of the user, a gender of the body of the user, etc., and the CNN may receive one or more of those provided attributes as an input. 
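Assuming the CNN outputs a per-pixel probability vector over the trained segment labels as described above, the label assignment and the resulting binary silhouette can be sketched as follows; the background label index is an assumption.

import numpy as np

def silhouette_from_label_probabilities(label_probs, background_label=0):
    # label_probs: (H, W, L) array of per-pixel probabilities over L segment labels.
    labels = np.argmax(label_probs, axis=-1)                                     # most likely label per pixel
    silhouette = np.where(labels == background_label, 0, 255).astype(np.uint8)   # background black, body white
    return labels, silhouette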
Based on the received inputs, the CNN406generates body features407corresponding to the body and personalized 3D body features, such as 3D joint locations, body volume, shape of the body, pose angles, etc. In some implementations, the CNN406may be trained to predict hundreds of body features of the body represented in the image402. The body dimensions CNN470processes the body features407and determines body dimensions472for the body, as discussed further below. Likewise, a personalized 3D body model of the body is generated based on the personalized 3D body features. For example, to generate the personalized 3D body model, the personalized 3D body features may be provided to a body model, such as the Shape Completion and Animation of People (“SCAPE”) body model, a Skinned Multi-Person Linear (“SMPL”) body model, etc., and the body model may generate the personalized 3D body model of the body of the user based on those predicted body features. In some implementations, as discussed further below, personalized 3D model refinement408may be performed to refine or revise the generated personalized 3D body model to better represent the body of the user. For example, the personalized 3D body model may be compared to the representation of the body403of the user in the image402to determine differences between the shape of the body403of the user represented in the image402and the shape of the personalized 3D body model. Based on the determined differences, the silhouette404may be refined and the refined silhouette processed by the CNN406to produce a refined personalized 3D body model of the body of the user. This refinement may continue until there is no or little difference between the shape of the body403of the user represented in the image402and the shape of the personalized 3D body model410. In other implementations, a 2D model image may be generated from the personalized 3D body model and that 2D model image may be compared to the silhouette and/or the 2D body image to determine differences between the 2D model image and the 2D body image or silhouette. Based on the determined differences, the personalized 3D body features and/or the personalized 3D body model may be refined until the personalized 3D body model corresponds to the body of the user represented in the 2D body image and/or the silhouette. Still further, in some implementations, the personalized 3D body model410of the body of the user may be augmented with one or more textures, texture augmentation412, determined from the image402of the body of the user. For example, the personalized 3D body model may be augmented to have a same or similar color to a skin color of the body403represented in the image402, clothing or clothing colors represented in the image402may be used to augment the personalized 3D body model, facial features, hair, hair color, etc., of the body of the user represented in the image402may be determined and used to augment the personalized 3D body model, etc. The result of the processing illustrated in the transition400is a personalized 3D body model414or avatar representative of the body of the user, that has been generated from 2D body images of the body of the user. In addition, determined body dimensions may be presented with the personalized 3D body model414, as illustrated above. FIG.5Ais another transition diagram500of processing 2D body images502of a body to produce a personalized 3D body model and body dimensions of that body, in accordance with implementations of the present disclosure. 
In some implementations, multiple 2D body images of a body from different views (e.g., front view, side view, back view, three-quarter view, etc.), such as 2D body images502-1,502-2,502-3,502-4through502-N, may be utilized with the disclosed implementations to generate a personalized 3D body model of the body. In the illustrated example, the first 2D body image502-1is an image of a human body503oriented in a front view facing a 2D imaging element. The second 2D body image502-2is an image of the human body503oriented in a first side view facing the 2D imaging element. The third 2D body image502-3is an image of the human body503oriented in a back view facing the 2D imaging element. The fourth 2D body image502-4is an image of the human body503oriented in a second side view facing the 2D imaging element. As will be appreciated, any number of 2D body images502-1through502-N may be generated with the view of the human body503in any number of orientations with respect to the 2D imaging element. Each of the 2D body images502-1through502-N is processed to segment pixels of the image that represent the human body from pixels of the image that do not represent the human body to produce a silhouette504of the human body as represented in that image. Segmentation may be done through, for example, background subtraction, semantic segmentation, etc. In one example, a baseline image of the background may be known and used to subtract out pixels of the image that correspond to pixels of the baseline image, thereby leaving only foreground pixels that represent the human body. The background pixels may be assigned RGB color values for black (i.e., 0, 0, 0). The remaining pixels may be assigned RGB values for white (i.e., 255, 255, 255) to produce the silhouette504or binary segmentation of the human body. In another example, a CNN utilizing a semantic segmentation algorithm may be trained using images of human bodies, or simulated human bodies, to distinguish between pixels that represent human bodies and pixels that do not represent human bodies and optionally to identify pixels of different segments of the human body. In such an example, the CNN may process the image502and indicate or label pixels that represent the body (foreground) and pixels that do not represent the body (background). The background pixels may be assigned RGB color values for black (i.e., 0, 0, 0). The remaining pixels may be assigned RGB values for white (i.e., 255, 255, 255) to produce the silhouette or binary segmentation of the human body. For segmentation, pixels may be further processed to determine body segments of the body to which the pixels correspond. In other implementations, other forms or algorithms, such as edge detection, shape detection, etc., may be used to determine pixels of the image502that represent the body and pixels of the image502that do not represent the body and a silhouette504of the body produced therefrom. Returning toFIG.5A, the first 2D body image502-1is processed to segment a plurality of pixels of the first 2D body image502-1that represent the human body from a plurality of pixels of the first 2D body image502-1that do not represent the human body, to produce a front silhouette504-1of the human body. The second 2D body image502-2is processed to segment a plurality of pixels of the second 2D body image502-2that represent the human body from a plurality of pixels of the second 2D body image502-2that do not represent the human body, to produce a first side silhouette504-2of the human body. 
The third 2D body image502-3is processed to segment a plurality of pixels of the third 2D body image502-3that represent the human body from a plurality of pixels of the third 2D body image502-3that do not represent the human body, to produce a back silhouette504-3of the human body. The fourth 2D body image502-4is processed to segment a plurality of pixels of the fourth 2D body image502-4that represent the human body from a plurality of pixels of the fourth 2D body image502-4that do not represent the human body, to produce a second side silhouette504-4of the human body. Processing of the 2D body images502-1through502-N to produce silhouettes504-1through504-N from different orientations of the human body503may be performed for any number of images502. As discussed above with respect toFIG.4, in some implementations, the silhouette may be segmented into different body segments by processing the pixels of the 2D image to determine a likelihood that the pixel corresponds to a segment label (e.g., hair, upper clothing, lower clothing, head, upper right arm, upper left leg, etc.). In some implementations, in addition to generating a silhouette504from the 2D body image, the silhouette may be normalized in size and centered in the image. For example, the silhouette may be cropped by computing a bounding rectangle around the silhouette504. The silhouette504may then be resized according to s, which is a function of a known height h of the user represented in the 2D body image (e.g., the height may be provided by the user):

s = (h * 0.8 * image_h) / μ_h    (1)

where image_h is the input image height, which may be based on the pixels of the image, and μ_h is the average height of a person (e.g., ˜160 centimeters for females; ˜176 centimeters for males). Each silhouette504representative of the body may then be processed to determine body traits or features of the human body. For example, different CNNs may be trained using silhouettes of bodies, such as human bodies, from different orientations with known features. In some implementations, different CNNs may be trained for different orientations. For example, a first CNN506A-1may be trained to determine front view features from front view silhouettes504-1. A second CNN506A-2may be trained to determine right side features from right side silhouettes. A third CNN506A-3may be trained to determine back view features from back view silhouettes. A fourth CNN506A-4may be trained to determine left side features from left side silhouettes. Different CNNs506A-1through506A-N may be trained for each of the different orientations of silhouettes504-1through504-N. Alternatively, one CNN may be trained to determine features from any orientation silhouette. In implementations that utilize multiple images of the body503to produce multiple sets of features, such as the example illustrated inFIG.5A, those features may be concatenated and the concatenated features processed together with a CNN to generate a set of personalized body features507. For example, a CNN may be trained to receive features generated from different silhouettes504to produce personalized body features507. The personalized body features507may indicate any aspect or information related to the body503represented in the images502. For example, the personalized body features507may indicate 3D joint locations, body volume, shape of the body, pose angles, neural network weights corresponding to the body, etc. 
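A worked sketch of the cropping, resizing, and centering rule in equation (1) above is given below, assuming the silhouette is a binary array with non-zero body pixels and using a simple nearest-neighbor resize; the helper name and the default average height are illustrative assumptions.

import numpy as np

def normalize_silhouette(silhouette, user_height_cm, mean_height_cm=176.0):
    image_h, image_w = silhouette.shape
    # Equation (1): target pixel height of the body after resizing.
    s = user_height_cm * 0.8 * image_h / mean_height_cm

    # Crop the bounding rectangle around the body pixels.
    rows, cols = np.nonzero(silhouette)
    crop = silhouette[rows.min():rows.max() + 1, cols.min():cols.max() + 1]

    # Nearest-neighbor resize so that the cropped body height equals s.
    scale = s / crop.shape[0]
    new_h = max(1, int(round(crop.shape[0] * scale)))
    new_w = max(1, int(round(crop.shape[1] * scale)))
    row_idx = np.minimum((np.arange(new_h) / scale).astype(int), crop.shape[0] - 1)
    col_idx = np.minimum((np.arange(new_w) / scale).astype(int), crop.shape[1] - 1)
    resized = crop[row_idx][:, col_idx]

    # Center the resized body in a blank image of the original size.
    out = np.zeros((image_h, image_w), dtype=silhouette.dtype)
    top, left = max(0, (image_h - new_h) // 2), max(0, (image_w - new_w) // 2)
    h_fit, w_fit = min(new_h, image_h - top), min(new_w, image_w - left)
    out[top:top + h_fit, left:left + w_fit] = resized[:h_fit, :w_fit]
    return out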
In some implementations, the concatenated CNN506B may be trained to predict hundreds of personalized body features507corresponding to the body503represented in the images502. Utilizing the personalized body features507, a body dimensions CNN570processes the features and determines body dimensions572for the body, as discussed further below. Likewise, a personalized 3D body model of the body is generated based on the personalized body features507. For example, the personalized body features507may be provided to a body model, such as the SCAPE body model, the SMPL body model, etc., and the body model may generate the personalized 3D body model of the body503represented in the images502based on those personalized body features507. In the illustrated example, personalized 3D model refinement508may be performed to refine or revise the generated personalized 3D body model to better represent the body503represented in the 2D body images502. For example, the personalized 3D body model may be compared to the body503represented in one or more of the 2D body images502to determine differences between the shape of the body503represented in the 2D body image502and the shape of the personalized 3D body model generated from the body features. In some implementations, the personalized 3D body model may be compared to a single image, such as image502-1. In other implementations, the personalized 3D body model may be compared to each of the 2D body images502-1through502-N in parallel or sequentially. In still other implementations, one or more 2D model images may be generated from the personalized 3D body model and those 2D model images may be compared to the silhouettes and/or the 2D body images to determine differences between the 2D model images and the silhouette/2D body images. Comparing the personalized 3D body model and/or a 2D model image with a 2D body image502or silhouette504may include determining an approximate pose of the body503represented in the 2D body image and adjusting the personalized 3D body model to the approximate pose. The personalized 3D body model or rendered 2D model image may then be overlaid or otherwise compared to the body503represented in the 2D body image502and/or represented in the silhouette504to determine a difference between the personalized 3D body model and the 2D body image. Based on the determined differences between the personalized 3D body model and the body503represented in the 2D body image502, the silhouette504generated from that image may be refined to account for those differences. For example, if the personalized 3D body model is compared with the body503represented in the first image502-1and differences are determined, the silhouette504-1may be refined based on those differences. Alternatively, the body features and/or the personalized 3D body model may be refined to account for those differences. If a silhouette is refined as part of the personalized 3D model refinement508, the refined silhouette may be processed to determine refined features for the body503represented in the 2D body image based on the refined silhouette. The refined features may then be concatenated with the features generated from the other silhouettes or with refined features generated from other refined silhouettes that were produced by the personalized 3D model refinement508. 
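The per-view feature CNNs (e.g., 506A-1 through 506A-N) and the concatenation stage (506B) described above can be sketched as follows; the layer sizes, number of views, and feature dimensions are illustrative placeholders, not the architecture of the disclosure.

import torch
import torch.nn as nn

class ViewFeatureCNN(nn.Module):
    # Extracts a feature vector from a single-view binary silhouette.
    def __init__(self, feature_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feature_dim),
        )

    def forward(self, silhouette):
        return self.backbone(silhouette)

class ConcatenationCNN(nn.Module):
    # Concatenates per-view features and predicts a set of personalized body features.
    def __init__(self, num_views=4, feature_dim=256, num_body_features=300):
        super().__init__()
        self.views = nn.ModuleList(ViewFeatureCNN(feature_dim) for _ in range(num_views))
        self.head = nn.Sequential(
            nn.Linear(num_views * feature_dim, 512), nn.ReLU(),
            nn.Linear(512, num_body_features),
        )

    def forward(self, silhouettes):
        # silhouettes: list of (N, 1, H, W) tensors, one per body direction.
        per_view = [cnn(s) for cnn, s in zip(self.views, silhouettes)]
        return self.head(torch.cat(per_view, dim=1))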
For example, the personalized 3D model refinement508may compare the generated personalized 3D body model with the body503as represented in two or more 2D body images502, such as a front image502-1and a back image502-3, determine differences for each of those images, generate refined silhouettes from those differences, and generate refined front view features and refined back view features. Those refined features may then be concatenated with the two side view features to produce refined body model features. In other implementations, personalized 3D model refinement508may compare the personalized 3D body model with all views of the body503represented in the 2D body images502to determine differences and generate refined silhouettes for each of those 2D body images502-1through502-N. Those refined silhouettes may then be processed by the CNNs506A-1through506A-N to produce refined features and those refined features concatenated to produce refined body features507. Finally, the refined body features507may be processed by personalized 3D modeling510to generate a refined personalized 3D body model. This process of personalized 3D refinement may continue until there is no or limited difference (e.g., below a threshold difference) between the generated personalized 3D body model and the body503represented in the 2D body images502. In another implementation, personalized 3D model refinement508may sequentially compare the personalized 3D body model with representations of the body503in the different 2D body images502. For example, personalized 3D model refinement508may compare the personalized 3D body model with a first representation of the body503in a first 2D body image502-1to determine differences that are then used to generate a refined silhouette504-1corresponding to that first 2D body image502-1. The refined silhouette may then be processed to produce refined features and those refined features may be concatenated506B with the features generated from the other silhouettes504-2through504-N to generate refined body features, which may be used to generate a refined personalized 3D body model. The refined personalized 3D body model may then be compared with a next image of the plurality of 2D body images502to determine any differences and the process repeated. This process of personalized 3D refinement may continue until there is no or limited difference (e.g., below a threshold difference) between the generated personalized 3D body model and the body503represented in the 2D body images502. In some implementations, upon completion of personalized 3D model refinement508, the personalized 3D body model of the body represented in the 2D body images502may be augmented with one or more textures, texture augmentation512, determined from one or more of the 2D body images502-1through502-N. For example, the personalized 3D body model may be augmented to have a same or similar color to a skin color of the body503represented in the 2D body images502; clothing or clothing colors represented in the 2D body images502may be used to augment the personalized 3D body model; and facial features, hair, hair color, etc., of the body503represented in the 2D body image502may be determined and used to augment the personalized 3D body model. 
Similar to personalized 3D model refinement, the approximate pose of the body in one of the 2D body images502may be determined and the personalized 3D body model adjusted accordingly so that the texture obtained from that 2D body image502may be aligned and used to augment that portion of the personalized 3D body model. In some implementations, alignment of the personalized 3D body model with the approximate pose of the body503may be performed for each 2D body image502-1through502-N so that texture information or data from the different views of the body503represented in the different 2D body images502may be used to augment the different poses of the resulting personalized 3D body model. The result of the processing illustrated in the transition500is a personalized 3D body model514or avatar representative of the body of the user that has been generated from 2D body images502of the body503of the user. In addition, determined body dimensions572may be presented with the personalized 3D body model, as illustrated above. FIG.5Bis another transition diagram550of processing 2D body images552of a body to produce a personalized three-dimensional model of that body, in accordance with implementations of the present disclosure. In some implementations, multiple 2D body images of a body from different views (e.g., front view, side view, back view, three-quarter view, etc.), such as 2D body images552-1,552-2,552-3,552-4through552-N, may be utilized with the disclosed implementations to generate a personalized 3D body model of the body. In the illustrated example, the first 2D body image552-1is an image of a human body553oriented in a front view facing a 2D imaging element. The second 2D body image552-2is an image of the human body553oriented in a first side view facing the 2D imaging element. The third 2D body image552-3is an image of the human body553oriented in a back view facing the 2D imaging element. The fourth 2D body image552-4is an image of the human body553oriented in a second side view facing the 2D imaging element. As will be appreciated, any number of 2D body images552-1through552-N may be generated with the view of the human body553in any number of orientations with respect to the 2D imaging element. Each of the 2D body images552-1through552-N is processed to segment pixels of the image that represent the human body from pixels of the image that do not represent the human body to produce a silhouette554of the human body as represented in that image. Segmentation may be done through, for example, background subtraction, semantic segmentation, etc. In one example, a baseline image of the background may be known and used to subtract out pixels of the image that correspond to pixels of the baseline image, thereby leaving only foreground pixels that represent the human body. The background pixels may be assigned RGB color values for black (i.e., 0, 0, 0). The remaining pixels may be assigned RGB values for white (i.e., 255, 255, 255) to produce the silhouette554or binary segmentation of the human body. In another example, a CNN utilizing a semantic segmentation algorithm may be trained using images of human bodies, or simulated human bodies, to distinguish between pixels that represent human bodies and pixels that do not represent human bodies. In such an example, the CNN may process the image552and indicate or label pixels that represent the body (foreground) and pixels that do not represent the body (background). 
The background pixels may be assigned RGB color values for black (i.e., 0, 0, 0). The remaining pixels may be assigned RGB values for white (i.e., 255, 255, 255) to produce the silhouette or binary segmentation of the human body. In other implementations, other forms or algorithms, such as edge detection, shape detection, etc., may be used to determine pixels of the image552that represent the body and pixels of the image552that do not represent the body and a silhouette554of the body produced therefrom. Returning toFIG.5B, the first 2D body image552-1is processed to segment a plurality of pixels of the first 2D body image552-1that represent the human body from a plurality of pixels of the first 2D body image552-1that do not represent the human body, to produce a front silhouette554-1of the human body. The second 2D body image552-2is processed to segment a plurality of pixels of the second 2D body image552-2that represent the human body from a plurality of pixels of the second 2D body image552-2that do not represent the human body, to produce a first side silhouette554-2of the human body. The third 2D body image552-3is processed to segment a plurality of pixels of the third 2D body image552-3that represent the human body from a plurality of pixels of the third 2D body image552-3that do not represent the human body, to produce a back silhouette554-3of the human body. The fourth 2D body image552-4is processed to segment a plurality of pixels of the fourth 2D body image552-4that represent the human body from a plurality of pixels of the fourth 2D body image552-4that do not represent the human body, to produce a second side silhouette554-4of the human body. Processing of the 2D body images552-1through552-N to produce silhouettes554-1through554-N from different orientations of the human body553may be performed for any number of images552. As discussed above with respect toFIG.4, in some implementations, the silhouette may be segmented into different body segments by processing the pixels of the 2D image to determine a likelihood that the pixel corresponds to a segment label (e.g., hair, upper clothing, lower clothing, head, upper right arm, upper left leg, etc.). Similar toFIG.5A, in some implementations, in addition to generating a silhouette554from the 2D body image, the silhouette may be normalized in size and centered in the image. Each silhouette554representative of the body may then be processed to determine body traits or features of the human body. For example, different CNNs may be trained using silhouettes of bodies, such as human bodies, from different orientations with known features. In some implementations, different CNNs may be trained for different orientations. For example, a first CNN556A-1may be trained to determine front view features from front view silhouettes554-1. A second CNN556A-2may be trained to determine right side features from right side silhouettes554-2. A third CNN556A-3may be trained to determine back view features from back view silhouettes554-3. A fourth CNN556A-4may be trained to determine left side features from left side silhouettes554-4. Different CNNs556A-1through556A-N may be trained for each of the different orientations of silhouettes554-1through554-N. Alternatively, one CNN may be trained to determine features from any orientation silhouette. In some implementations, the same or different CNNs may also utilize the 2D body image502as an input to the CNN that is used to generate and determine the body features. 
For example, the first CNN556A-1may be trained to determine front view features based on inputs of the front view silhouettes554-1and/or the 2D body image552-1. The second CNN556A-2may be trained to determine right side features from right side silhouettes554-2and/or the right side 2D body image552-2. The third CNN556A-3may be trained to determine back view features from back view silhouettes554-3and/or the back view 2D body image552-3. The fourth CNN556A-4may be trained to determine left side features from left side silhouettes554-4and/or the left side 2D body image552-4. Different CNNs556A-1through556A-N may be trained for each of the different orientations of silhouettes554-1through554-N and/or 2D body images502-1through502-N. In still other implementations, different CNNs may be trained for each of the silhouettes554and the 2D body images. For example, the first CNN556A-1may be trained to determine front view features from the silhouette554-1and another front view CNN may be trained to determine front view features from the 2D body image552-1. The second CNN556A-2may be trained to determine right side view features from the silhouette554-2and another right side view CNN may be trained to determine right side view features from the 2D body image552-2. The third CNN556A-3may be trained to determine back view features from the silhouette554-3and another back view CNN may be trained to determine back view features from the 2D body image552-3. The fourth CNN556A-4may be trained to determine left side view features from the silhouette554-4and another left side view CNN may be trained to determine left side view features from the 2D body image552-4. In implementations that utilize multiple images of the body553and/or multiple silhouettes to produce multiple sets of features, such as the example illustrated inFIG.5B, those features may be concatenated556B and the concatenated features processed together with a CNN to generate a set of personalized body features557. For example, a CNN may be trained to receive features generated from different silhouettes554, features generated from different 2D body images552, and/or features generated by a CNN that processes both silhouettes554and the 2D body images552to produce personalized body features557. The personalized body features557may indicate any aspect or information related to the body553represented in the images552. For example, the personalized body features557may indicate 3D joint locations, body volume, shape of the body, pose angles, neural network weights corresponding to the body, etc. In some implementations, the concatenation CNN556B may be trained to predict hundreds of personalized body features557corresponding to the body553represented in the images552. Utilizing the personalized body features557, a body dimensions CNN570processes the features and determines body dimensions572for the body, as discussed further below. Likewise, a personalized 3D body model of the body is generated based on the personalized body features557. For example, the personalized body features557may be provided to a body model, such as the SCAPE body model, the SMPL body model, etc., and the body model may generate the personalized 3D body model of the body553represented in the images552based on those personalized body features557. In the illustrated example, personalized 3D model refinement558may be performed to refine or revise the generated personalized 3D body model to better represent the body553represented in the 2D body images552.
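Before the refinement discussion continues in the next paragraph, the concatenation step just described can be sketched in PyTorch. The per-view feature size, the number of views, the small fully connected head, and the count of output body features are illustrative assumptions, not the trained networks of the disclosure.

```python
import torch
import torch.nn as nn

class ConcatenationNet(nn.Module):
    """Concatenate per-view feature vectors and map them to body features."""

    def __init__(self, feature_size: int = 256, num_views: int = 4,
                 num_body_features: int = 300):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feature_size * num_views, 512),
            nn.ReLU(),
            nn.Linear(512, num_body_features),
        )

    def forward(self, view_features):
        # view_features: list of (batch, feature_size) tensors, one per view
        # (e.g., front, right side, back, left side).
        concatenated = torch.cat(view_features, dim=1)
        return self.head(concatenated)

# Example: four views, batch of one body.
views = [torch.randn(1, 256) for _ in range(4)]
body_features = ConcatenationNet()(views)   # shape: (1, 300)
```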
For example, as discussed above, the personalized 3D body model may be compared to the body553represented in one or more of the 2D body images to determine differences between the shape of the body553represented in the 2D body image552and the shape of the personalized 3D body model generated from the body features. In some implementations, the personalized 3D body model may be compared to a single image, such as image552-1. In other implementations, the personalized 3D body model may be compared to each of the 2D body images552-1through552-N in parallel or sequentially. In still other implementations, one or more 2D model images may be generated from the personalized 3D body model and those 2D model images may be compared to the silhouettes and/or the 2D body images to determine differences between the 2D model images and the silhouette/2D body images. Comparing the personalized 3D body model and/or a 2D model image with a 2D body image552or silhouette554may include determining an approximate pose of the body553represented in the 2D body image and adjusting the personalized 3D body model to the approximate pose. The personalized 3D body model or rendered 2D model image may then be overlaid or otherwise compared to the body553represented in the 2D body image552and/or represented in the silhouette554to determine a difference between the personalized 3D body model image and the 2D body image/silhouette. Based on the determined differences between the personalized 3D body model and the body553represented in the 2D body image552, the silhouette554generated from that image may be refined to account for those differences. Alternatively, the body features and/or the personalized 3D body model may be refined to account for those differences. In some implementations, upon completion of personalized 3D model refinement558, the personalized 3D body model of the body represented in the 2D body images552may be augmented with one or more textures, texture augmentation562, determined from one or more of the 2D body images552-1through552-N. For example, the personalized 3D body model may be augmented to have a same or similar color to a skin color of the body553represented in the 2D body images552; clothing or clothing colors represented in the 2D body images552may be used to augment the personalized 3D body model; and facial features, hair, hair color, etc., of the body553represented in the 2D body image552may be determined and used to augment the personalized 3D body model. Similar to personalized 3D model refinement, the approximate pose of the body in one of the 2D body images552may be determined and the personalized 3D body model adjusted accordingly so that the texture obtained from that 2D body image552may be aligned and used to augment that portion of the personalized 3D body model. In some implementations, alignment of the personalized 3D body model with the approximate pose of the body553may be performed for each 2D body image552-1through552-N so that texture information or data from the different views of the body553represented in the different 2D body images552may be used to augment the different poses of the resulting personalized 3D body model. The result of the processing illustrated in the transition550is a personalized 3D body model564or avatar representative of the body of the user that has been generated from 2D body images552of the body553of the user. In addition, determined body dimensions572may be presented with the personalized 3D body model, as illustrated above.
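Taken together, the refinement described above amounts to a loop that re-renders the model in each image's approximate pose, measures the remaining disagreement with the captured silhouettes, and stops once the difference is small. The sketch below assumes the rendering, silhouette refinement, feature, and model-fitting steps are supplied as callables; none of the names are from this disclosure, and the 2% stopping threshold simply mirrors the example minimum threshold given later in this description.

```python
import numpy as np

def pixel_difference(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of pixels that differ between two binary silhouettes."""
    return float(np.mean(a.astype(bool) != b.astype(bool)))

def refine_body_model(silhouettes, poses, features_fn, model_fn,
                      render_fn, refine_fn,
                      max_iterations=10, threshold=0.02):
    """Iteratively refine a body model until it matches the silhouettes.

    The callables stand in for the steps described in the text:
      features_fn(silhouettes)        -> body features
      model_fn(features)              -> 3D body model
      render_fn(model, pose)          -> binary 2D model image for that pose
      refine_fn(silhouette, rendered) -> refined silhouette
    """
    features = features_fn(silhouettes)
    model = model_fn(features)
    for _ in range(max_iterations):
        refined, worst = [], 0.0
        for silhouette, pose in zip(silhouettes, poses):
            rendered = render_fn(model, pose)                 # 2D model image
            worst = max(worst, pixel_difference(rendered, silhouette))
            refined.append(refine_fn(silhouette, rendered))   # refined silhouette
        if worst < threshold:          # no or limited difference remains
            break
        silhouettes = refined
        features = features_fn(silhouettes)                   # refined features
        model = model_fn(features)                            # refined model
    return model
```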
As discussed above, features or objects expressed in imaging data, such as human bodies, colors, textures or outlines of the features or objects, may be extracted from the data in any number of ways. For example, colors of pixels, or of groups of pixels, in a digital image may be determined and quantified according to one or more standards, e.g., the RGB color model, in which the portions of red, green or blue in a pixel are expressed in three corresponding numbers ranging from 0 to 255 in value, or a hexadecimal model, in which a color of a pixel is expressed in a six-character code, wherein each of the characters may have a range of sixteen. Moreover, textures or features of objects expressed in a digital image may be identified using one or more computer-based methods, such as by identifying changes in intensities within regions or sectors of the image, or by defining areas of an image corresponding to specific surfaces. Furthermore, edges, contours, outlines, colors, textures, silhouettes, shapes or other characteristics of objects, or portions of objects, expressed in images may be identified using one or more algorithms or machine-learning tools. The objects or portions of objects may be identified at single, finite periods of time, or over one or more periods or durations. Such algorithms or tools may be directed to recognizing and marking transitions (e.g., the edges, contours, outlines, colors, textures, silhouettes, shapes or other characteristics of objects or portions thereof) within the digital images as closely as possible, and in a manner that minimizes noise and disruptions, and does not create false transitions. Some detection algorithms or techniques that may be utilized in order to recognize characteristics of objects or portions thereof in digital images in accordance with the present disclosure include, but are not limited to, Canny edge detectors or algorithms; Sobel operators, algorithms or filters; Kayyali operators; Roberts edge detection algorithms; Prewitt operators; Frei-Chen methods; semantic segmentation algorithms; background subtraction; or any other algorithms or techniques that may be known to those of ordinary skill in the pertinent arts. Image processing algorithms, other machine learning algorithms or CNNs may be operated on computer devices of various sizes or types, including but not limited to smartphones or other cell phones, tablets, video cameras or other computer-based machines. Such mobile devices may have limited available computer resources, e.g., network bandwidth, storage capacity or processing power, as compared to larger or more complex computer devices. Therefore, executing computer vision algorithms, other machine learning algorithms, or CNNs on such devices may occupy all or much of the available resources, without any guarantee, or even a reasonable assurance, that the execution of such algorithms will be successful. For example, processing digital 2D body images captured by a user of a portable device (e.g., smartphone, tablet, laptop, webcam) according to one or more algorithms in order to produce a personalized 3D body model from the digital images may be an ineffective use of the limited resources that are available on the smartphone or tablet. Accordingly, in some implementations, as discussed herein, some or all of the processing may be performed by one or more computing resources that are remote from the portable device. 
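As one concrete illustration of the detection techniques listed above, the OpenCV library exposes both a Canny edge detector and Sobel operators. The blur kernel, thresholds, and kernel size below are illustrative values only, not parameters from this disclosure.

```python
import cv2
import numpy as np

def detect_edges(image_bgr: np.ndarray) -> np.ndarray:
    """Return a binary edge map for a BGR image using the Canny detector."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)          # suppress noise
    return cv2.Canny(blurred, threshold1=100, threshold2=200)

def sobel_gradients(image_bgr: np.ndarray):
    """Return horizontal and vertical Sobel gradients of a BGR image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return gx, gy
```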
In some implementations, initial processing of the images to generate binary segmented silhouettes may be performed on the device. Subsequent processing to generate and refine the personalized 3D body model may be performed on one or more remote computing resources. For example, the silhouettes may be sent from the portable device to the remote computing resources for further processing. Still further, in some implementations, texture augmentation of the personalized 3D body model of the body may be performed on the portable device or remotely. In some implementations, to increase privacy of the user, only the binary segmented silhouette may be sent from the device for processing on the remote computing resources and the original 2D images that include the representation of the user may be maintained locally on the portable device. In such an example, the rendered personalized 3D body model and body dimensions may be sent back to the device and the device may perform texture augmentation of the received personalized 3D body model based on those images. Utilizing such a distributed computing arrangement retains user identifiable information on the portable device of the user while at the same time leveraging the increased computing capacity available at remote computing resources. Machine learning tools, such as artificial neural networks, have been utilized to identify relations between respective elements of apparently unrelated sets of data. An artificial neural network, such as CNN, is a parallel distributed computing processor comprised of individual units that may collectively learn and store experimental knowledge, and make such knowledge available for use in one or more applications. Such a network may simulate the non-linear mental performance of the many neurons of the human brain in multiple layers by acquiring knowledge from an environment through one or more flexible learning processes, determining the strengths of the respective connections between such neurons, and utilizing such strengths when storing acquired knowledge. Like the human brain, an artificial neural network may use any number of neurons in any number of layers, including an input layer, an output layer, and one or more intervening hidden layers. In view of their versatility, and their inherent mimicking of the human brain, machine learning tools including not only artificial neural networks but also nearest neighbor methods or analyses, factorization methods or techniques, K-means clustering analyses or techniques, similarity measures such as log likelihood similarities or cosine similarities, latent Dirichlet allocations or other topic models, or latent semantic analyses have been utilized in image processing applications. Artificial neural networks may be trained to map inputted data to desired outputs by adjusting the strengths of the connections between one or more neurons, which are sometimes called synaptic weights. An artificial neural network may have any number of layers, including an input layer, an output layer, and any number of intervening hidden layers. Each of the neurons in a layer within a neural network may receive one or more inputs and generate one or more outputs in accordance with an activation or energy function, with features corresponding to the various strengths or synaptic weights. Likewise, each of the neurons within a network may be understood to have different activation or energy functions; in this regard, such a network may be dubbed a heterogeneous neural network. 
In some neural networks, at least one of the activation or energy functions may take the form of a sigmoid function, wherein an output thereof may have a range of zero to one or 0 to 1. In other neural networks, at least one of the activation or energy functions may take the form of a hyperbolic tangent function, wherein an output thereof may have a range of negative one to positive one, or −1 to +1. Thus, the training of a neural network according to an identity function results in the redefinition or adjustment of the strengths or weights of such connections between neurons in the various layers of the neural network, in order to provide an output that most closely approximates or associates with the input to the maximum practicable extent. Artificial neural networks may typically be characterized as either feedforward neural networks or recurrent neural networks, and may be fully or partially connected. In a feedforward neural network, e.g., a convolutional neural network, information specifically flows in one direction from an input layer to an output layer, while in a recurrent neural network, at least one feedback loop returns information regarding the difference between the actual output and the targeted output for training purposes. Additionally, in a fully connected neural network architecture, each of the neurons in one of the layers is connected to all of the neurons in a subsequent layer. By contrast, in a sparsely connected neural network architecture, the number of activations of each of the neurons is limited, such as by a sparsity parameter. Moreover, the training of a neural network is typically characterized as supervised or unsupervised. In supervised learning, a training set comprises at least one input and at least one target output for the input. Thus, the neural network is trained to identify the target output, to within an acceptable level of error. In unsupervised learning of an identity function, such as that which is typically performed by a sparse autoencoder, target output of the training set is the input, and the neural network is trained to recognize the input as such. Sparse autoencoders employ backpropagation in order to train the autoencoders to recognize an approximation of an identity function for an input, or to otherwise approximate the input. Such backpropagation algorithms may operate according to methods of steepest descent, conjugate gradient methods, or other like methods or techniques, in accordance with the systems and methods of the present disclosure. Those of ordinary skill in the pertinent art would recognize that any algorithm or method may be used to train one or more layers of a neural network. Likewise, any algorithm or method may be used to determine and minimize the error in an output of such a network. Additionally, those of ordinary skill in the pertinent art would further recognize that the various layers of a neural network may be trained collectively, such as in a sparse autoencoder, or individually, such that each output from one hidden layer of the neural network acts as an input to a subsequent hidden layer. 
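The two activation functions referenced above, together with their output ranges, can be written compactly; the assertions below simply confirm the ranges stated in the text.

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    """Sigmoid activation: outputs lie in the range 0 to 1."""
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x: np.ndarray) -> np.ndarray:
    """Hyperbolic tangent activation: outputs lie in the range -1 to +1."""
    return np.tanh(x)

x = np.linspace(-5.0, 5.0, 11)
assert np.all((sigmoid(x) > 0.0) & (sigmoid(x) < 1.0))
assert np.all((tanh(x) > -1.0) & (tanh(x) < 1.0))
```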
Once a neural network has been trained to recognize dominant characteristics of an input of a training set, e.g., to associate an image with a label, a category, a cluster or a pseudolabel thereof, to within an acceptable tolerance, an input and/or multiple inputs, in the form of an image, silhouette, features, known traits corresponding to the image, etc., may be provided to the trained network, and an output generated therefrom. For example, the CNN discussed above may receive as inputs a generated silhouette and one or more body attributes (e.g., height, weight, gender) corresponding to the body represented by the silhouette. The trained CNN may then produce as outputs the predicted features corresponding to those inputs. Referring toFIG.6, a block diagram of components of one image processing system600in accordance with implementations of the present disclosure is shown. The system600ofFIG.6includes a body model system610, an imaging element620that is part of a portable device630of a user, such as a tablet, a laptop, a cellular phone, a webcam, etc., and an external media storage facility670connected to one another across a network680, such as the Internet. The body model system610ofFIG.6includes M physical computer servers612-1,612-2. . .612-M having one or more databases (or data stores)614associated therewith, as well as N computer processors616-1,616-2. . .616-N provided for any specific or general purpose. For example, the body model system610ofFIG.6may be independently provided for the exclusive purpose of generating personalized 3D body models, body dimensions, and/or body measurements from 2D body images captured by imaging elements, such as imaging element620, or silhouettes produced therefrom, or alternatively, provided in connection with one or more physical or virtual services configured to manage or monitor such information, as well as one or more other functions. The servers612-1,612-2. . .612-M may be connected to or otherwise communicate with the databases614and the processors616-1,616-2. . .616-N. The databases614may store any type of information or data, including simulated silhouettes, body features, simulated 3D body models, etc. The servers612-1,612-2. . .612-M and/or the computer processors616-1,616-2. . .616-N may also connect to or otherwise communicate with the network680, as indicated by line618, through the sending and receiving of digital data. The imaging element620may comprise any form of optical recording sensor or device that may be used to photograph or otherwise record information or data regarding a body of the user, or for any other purpose. As is shown inFIG.6, the portable device630that includes the imaging element620is connected to the network680and includes one or more sensors622, one or more memory or storage components624(e.g., a database or another data store), one or more processors626, and any other components that may be required in order to capture, analyze and/or store imaging data, such as the 2D body images discussed herein. For example, the imaging element620may capture one or more still or moving images and may also connect to or otherwise communicate with the network680, as indicated by the line628, through the sending and receiving of digital data. Although the system600shown inFIG.6includes just one imaging element620therein, any number or type of imaging elements, portable devices, or sensors may be provided within any number of environments in accordance with the present disclosure. 
The portable device630may be used in any location and any environment to generate 2D body images that represent a body of the user. In some implementations, the portable device may be positioned such that it is stationary and approximately vertical (within approximately ten-degrees of vertical) and the user may position their body within a field of view of the imaging element620of the portable device at different orientations so that the imaging element620of the portable device may generate 2D body images that include a representation of the body of the user from different orientations. The portable device630may also include one or more applications623stored in memory that may be executed by the processor626of the portable device to cause the processor of the portable device to perform various functions or actions. For example, when executed, the application623may provide instructions to a user regarding placement of the portable device, positioning of the body of the user within the field of view of the imaging element620of the portable device, orientation of the body of the user, etc. Likewise, in some implementations, the application may present a personalized 3D body model, body dimensions, and/or body measurements determined and generated from the 2D body images in accordance with the described implementations, to the user and allow the user to interact with the personalized 3D body model. For example, a user may rotate the personalized 3D body model to view different angles of the personalized 3D body model, view accurate body dimensions determined from the 2D images, view body measurements, such as body fat, body mass, body volume, etc. Likewise, in some implementations, the personalized 3D body model may be modified by request of the user to simulate what the body of the user may look like under certain conditions, such as loss of weight, gain of muscle, etc. The external media storage facility670may be any facility, station or location having the ability or capacity to receive and store information or data, such as silhouettes, simulated or rendered personalized 3D body models of bodies, textures, body dimensions, etc., received from the body model system610, and/or from the portable device630. As is shown inFIG.6, the external media storage facility670includes J physical computer servers672-1,672-2. . .672-J having one or more databases674associated therewith, as well as K computer processors676-1,676-2. . .676-K. The servers672-1,672-2. . .672-J may be connected to or otherwise communicate with the databases674and the processors676-1,676-2. . .676-K. The databases674may store any type of information or data, including digital images, silhouettes, personalized 3D body models, etc. The servers672-1,672-2. . .672-J and/or the computer processors676-1,676-2. . .676-K may also connect to or otherwise communicate with the network680, as indicated by line678, through the sending and receiving of digital data. The network680may be any wired network, wireless network, or combination thereof, and may comprise the Internet in whole or in part. In addition, the network680may be a personal area network, local area network, wide area network, cable network, satellite network, cellular telephone network, or combination thereof. The network680may also be a publicly accessible network of linked networks, possibly operated by various distinct parties, such as the Internet. In some implementations, the network680may be a private or semi-private network, such as a corporate or university intranet. 
The network680may include one or more wireless networks, such as a Global System for Mobile Communications (GSM) network, a Code Division Multiple Access (CDMA) network, a Long Term Evolution (LTE) network, or some other type of wireless network. Protocols and components for communicating via the Internet or any of the other aforementioned types of communication networks are well known to those skilled in the art of computer communications and thus, need not be described in more detail herein. The computers, servers, devices and the like described herein have the necessary electronics, software, memory, storage, databases, firmware, logic/state machines, microprocessors, communication links, displays or other visual or audio user interfaces, printing devices, and any other input/output interfaces to provide any of the functions or services described herein and/or achieve the results described herein. Also, those of ordinary skill in the pertinent art will recognize that users of such computers, servers, devices and the like may operate a keyboard, keypad, mouse, stylus, touch screen, or other device (not shown) or method to interact with the computers, servers, devices and the like, or to “select” an item, link, node, hub or any other aspect of the present disclosure. The body model system610, the portable device630or the external media storage facility670may use any web-enabled or Internet applications or features, or any other client-server applications or features including E-mail or other messaging techniques, to connect to the network680, or to communicate with one another, such as through short or multimedia messaging service (SMS or MMS) text messages. For example, the servers612-1,612-2. . .612-M may be adapted to transmit information or data in the form of synchronous or asynchronous messages from the body model system610to the processor626or other components of the portable device630, or any other computer device in real time or in near-real time, or in one or more offline processes, via the network680. Those of ordinary skill in the pertinent art would recognize that the body model system610, the portable device630or the external media storage facility670may operate any of a number of computing devices that are capable of communicating over the network, including but not limited to set-top boxes, personal digital assistants, digital media players, web pads, laptop computers, desktop computers, electronic book readers, cellular phones, and the like. The protocols and components for providing communication between such devices are well known to those skilled in the art of computer communications and need not be described in more detail herein. The data and/or computer executable instructions, programs, firmware, software and the like (also referred to herein as “computer executable” components) described herein may be stored on a computer-readable medium that is within or accessible by computers or computer components such as the servers612-1,612-2. . .612-M, the processor626, the servers672-1,672-2. . .672-J, or any other computers or control systems utilized by the body model system610, the portable device630, applications623, or the external media storage facility670, and having sequences of instructions which, when executed by a processor (e.g., a central processing unit, or “CPU”), cause the processor to perform all or a portion of the functions, services and/or methods described herein. 
Such computer executable instructions, programs, software and the like may be loaded into the memory of one or more computers using a drive mechanism associated with the computer-readable medium, such as a floppy drive, CD-ROM drive, DVD-ROM drive, network interface, or the like, or via external connections. Some implementations of the systems and methods of the present disclosure may also be provided as a computer-executable program product including a non-transitory machine-readable storage medium having stored thereon instructions (in compressed or uncompressed form) that may be used to program a computer (or other electronic device) to perform processes or methods described herein. The machine-readable storage media of the present disclosure may include, but is not limited to, hard drives, floppy diskettes, optical disks, CD-ROMs, DVDs, ROMs, RAMs, erasable programmable ROMs (“EPROM”), electrically erasable programmable ROMs (“EEPROM”), flash memory, magnetic or optical cards, solid-state memory devices, or other types of media/machine-readable medium that may be suitable for storing electronic instructions. Further, implementations may also be provided as a computer executable program product that includes a transitory machine-readable signal (in compressed or uncompressed form). Examples of machine-readable signals, whether modulated using a carrier or not, may include, but are not limited to, signals that a computer system or machine hosting or running a computer program can be configured to access, or including signals that may be downloaded through the Internet or other networks. FIG.7is a block diagram of a trained body composition model700that determines body features709and body dimensions707of a body represented in two-dimensional images, in accordance with implementations of the present disclosure. As discussed above, the model700may be a neural network, such as a CNN that is trained to receive one or more inputs that are processed to generate one or more outputs, such as the body features709and body dimensions707. In the illustrated example, the trained body composition model706may include several component CNNs that receive different inputs and provide different outputs. Likewise, outputs from one of the component CNNs may be provided as an input to one or more other component CNNs of the trained body composition model. For example, the trained body composition model may include three parts of component CNNs. In one implementation, a first component part may include one or more feature determination CNNs706A and a second component part may include a concatenation CNN706B. The third part may be a body dimensions CNN706C. In the illustrated example, there may be different feature determination CNNs706A for each of the different body orientations (e.g., front view, right side view, back view, left side view, three-quarter view), different silhouettes704corresponding to those different body orientations, and/or different 2D body images corresponding to those different body orientations, each CNN trained for inputs having the particular orientation. Likewise, in some implementations, the feature determination CNNs706A may receive multiple different types of inputs. For example, in addition to receiving a silhouette704and/or 2D body image, each feature determination CNN706A may receive one or more body attributes705corresponding to the body represented by the silhouettes704and/or 2D body images. 
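A compact PyTorch sketch of the three component parts just described (feature determination CNNs, a concatenation network, and a body dimensions network) follows. The layer sizes, the number of output features and dimensions, and the use of height, weight, and gender as the body attributes (enumerated in the next paragraph) are illustrative assumptions, not the trained networks of the disclosure.

```python
import torch
import torch.nn as nn

class FeatureDeterminationCNN(nn.Module):
    """Per-orientation network: binary silhouette + body attributes -> features."""

    def __init__(self, num_attributes: int = 3, feature_size: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc = nn.Linear(32 + num_attributes, feature_size)

    def forward(self, silhouette, attributes):
        return self.fc(torch.cat([self.conv(silhouette), attributes], dim=1))

class BodyCompositionModel(nn.Module):
    """Feature CNNs per view -> concatenation net -> body dimensions net."""

    def __init__(self, num_views: int = 4, feature_size: int = 256,
                 num_body_features: int = 300, num_dimensions: int = 10):
        super().__init__()
        self.feature_cnns = nn.ModuleList(
            [FeatureDeterminationCNN(feature_size=feature_size)
             for _ in range(num_views)])
        self.concatenation = nn.Sequential(
            nn.Linear(feature_size * num_views, 512), nn.ReLU(),
            nn.Linear(512, num_body_features))
        self.dimensions = nn.Linear(num_body_features, num_dimensions)

    def forward(self, silhouettes, attributes):
        features = [cnn(s, attributes)
                    for cnn, s in zip(self.feature_cnns, silhouettes)]
        body_features = self.concatenation(torch.cat(features, dim=1))
        return body_features, self.dimensions(body_features)

# Example: four 256x256 binary silhouettes and (height, weight, gender) attributes.
views = [torch.rand(1, 1, 256, 256) for _ in range(4)]
attrs = torch.tensor([[1.70, 70.0, 1.0]])
body_features, body_dimensions = BodyCompositionModel()(views, attrs)
```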
The body attributes705may include, but are not limited to, height, weight, gender, etc. As discussed, the trained feature determination CNNs706A may process the inputs and generate features representative of the bodies represented in the 2D body images that were used to produce the silhouettes704. For example, if there are four silhouettes, one for a front view, one for a right side view, one for a back view, and one for a left side view, the four feature determination CNNs706A trained for those views each produce a set of features representative of the body represented in the 2D body image used to generate the silhouette. Utilizing binary silhouettes704of bodies improves the accuracy of the feature determination CNN706A as it can focus purely on size, shape, etc. of the body, devoid of any other aspects (e.g., color, clothing, hair, etc.). In other implementations, the use of the 2D body images in conjunction with or independent of the silhouettes provides additional data, such as shadows, skin tone, etc., that the feature determination CNN706A may utilize in determining and producing a set of features representative of the body represented in the 2D body image. The features output from the feature determination CNNs706A, in the disclosed implementation, are received as inputs to the concatenation CNN706B. Likewise, in some implementations, the concatenation CNN706B may be trained to receive other inputs, such as body attributes705. As discussed, the concatenation CNN706B may be trained to receive the inputs of features, and optionally other inputs, produce concatenated features, and produce as outputs a set of body features709corresponding to the body represented in the 2D body images. In some implementations, the body features may include hundreds of features, including, but not limited to, shape, pose, volume, joint position, etc., of the represented body. The body dimensions CNN706C, which may be trained using real and/or synthetic data, as discussed further below, may receive as inputs the outputs from the concatenation CNN706B, and optionally other inputs, and process those inputs to determine one or more body dimensions707for the body represented in the input silhouette704. FIG.8is an example body dimensions model training process800, in accordance with implementations of the present disclosure. The example process800may be used to generate labeled simulated data that is used to train a neural network, such as a CNN, to determine body dimensions from a silhouette of a body, such as a human body, as discussed herein. The example process800begins by obtaining existing body scans/dimensions of a group of bodies, as in802. Existing body scans may include, but are not limited to, body scans provided by users, public databases of body dimensions and corresponding 3D body scans, etc. In general, the only requirement for the existing body scans and dimensions is that the body scan be a 3D body scan and that the dimensions include some or all of the dimensions for which the body dimension model is to be trained. In some implementations, the existing body scan/dimensions may be a limited set of data, such as 5,000-10,000 existing body scans/dimensions of different bodies of different dimensions. The existing body scans/dimensions may then be fit to a topology, such as the SMPL model, or other similar topology, as in804.
Using the topology and the different body dimensions corresponding to the different body scans, the example process800computes a distribution of any number of derivative body scans and corresponding body dimensions across the topology space, as in806. For example, the body scans/dimensions may be integrated between two existing body scans/dimensions to compute essentially an infinite number of additional bodies and corresponding dimensions between those two existing body scans and corresponding body dimensions. Utilizing the computed distribution, an unlimited number of synthetic 3D body models and corresponding body dimensions may then be generated, as in808. For example, the initial set of 5,000-10,000 existing body scans and corresponding body models may be expanded to include hundreds of thousands, millions, or more 3D body models and corresponding body dimensions of varying sizes and shapes for which the body dimensions and synthetic body model are known. The synthetic 3D body model and corresponding body dimensions may then be used to generate labeled simulated body data using the labeled training data generation process900, as discussed further below with respect toFIG.9. The result of the labeled training data generation process900(FIG.9) is labeled simulated body data. As discussed below, the labeled simulated body data includes silhouettes generated from the synthetic 3D body models and the corresponding body dimensions for that synthetic 3D body model. In other implementations, the labeled simulated body data includes body features representative of the synthetic 3D body model, as determined from one or more silhouettes of that synthetic 3D body model. Utilizing the labeled simulated body data, the body dimensions model may be trained to determine body dimensions from one or more silhouettes generated from a 2D body image and/or from personalized body features generated from one or more 2D body images of a body, as discussed herein, as in812. Training of the body dimension model may be performed using supervised learning and the labeled simulated body data as the training inputs. FIG.9is an example labeled training data generation process900, in accordance with implementations of the present disclosure. The example process900begins by obtaining samples of the shape/pose parameters from synthetic body scans and corresponding body dimensions, as in901. A mesh of a body may then be generated for each sampled shape parameters, as in902. Generating a mesh of a body may be performed using any of a variety of 3D modeling techniques or engines. For example, any one or more of Blender, OpenGL, Neural Mesh Render, etc., may be used to generate a mesh of a body. Each mesh of a body may then be positioned to correspond to the obtained sampled pose parameters, as in904. In some implementations, the position of the mesh may also be varied slightly between body meshes and/or positioned in a defined pose, such as an A pose. For example the orientation, rotation, distance, amount of leg and/or arm separation, etc., may be varied between meshes of bodies, thereby increasing the realistic aspect of the synthetic data. For each posed mesh of a body, one or more silhouettes are generated, as in906. For example multiple silhouettes may be generated from the posed mesh of the body, each silhouette from different orientations (e.g., front view, right side view, back view, left side view, etc.). 
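The interpolation between fitted body scans described above might be realized as follows. This sketch assumes each fitted body is summarized by a vector of SMPL-style shape coefficients with known body dimensions, and the linear blending of both shapes and dimensions between random pairs of bodies is an illustrative simplification rather than the distribution computation of the disclosure.

```python
import numpy as np

def synthesize_bodies(shapes: np.ndarray, dimensions: np.ndarray,
                      count: int, rng=None):
    """Generate synthetic shape parameters and corresponding body dimensions.

    `shapes` is (N, S): one row of shape coefficients per fitted body scan.
    `dimensions` is (N, D): the corresponding known body dimensions.
    New samples are produced by linearly blending random pairs of existing
    bodies, yielding intermediate bodies with blended dimensions.
    """
    rng = rng or np.random.default_rng(0)
    n = shapes.shape[0]
    i = rng.integers(0, n, size=count)
    j = rng.integers(0, n, size=count)
    t = rng.random(size=(count, 1))
    synthetic_shapes = (1.0 - t) * shapes[i] + t * shapes[j]
    synthetic_dimensions = (1.0 - t) * dimensions[i] + t * dimensions[j]
    return synthetic_shapes, synthetic_dimensions

# Example: 5,000 fitted scans with 10 shape coefficients and 8 dimensions each,
# expanded to 100,000 synthetic bodies and labels.
shapes = np.random.rand(5000, 10)
dims = np.random.rand(5000, 8)
synthetic_shapes, synthetic_dims = synthesize_bodies(shapes, dims, 100000)
```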
Generation of a silhouette from a posed mesh of a body may be performed in a manner similar to the discussion above for generating a silhouette from a 2D body image. For example, a 2D representation of the posed mesh of the body may be generated, thereby indicative of a 2D body image of a body, and then the 2D representation utilized to generate a silhouette of the posed mesh of the body. In other implementations, the silhouette(s) of the 3D mesh may be determined directly from the posed mesh of the body. In some implementations, body features representative of the posed mesh of the body may be generated from the silhouette(s), as discussed above. Finally, the silhouette(s) and/or body features determined from the silhouette(s) for each synthetic body may be combined with the body dimensions generated for that synthetic body to generate labeled simulated body data, as in908. FIG.10is illustrative of an example training1000of a body dimension model1006using synthetically generated labeled training data, in accordance with implementations of the present disclosure. As illustrated, synthetic data is used to generate a mesh of a body1002and the body dimensions1003, such as the chest circumference, waist circumference, hip circumference, etc., are known from the simulated data. The mesh of the body is then positioned in a defined pose, such as an A pose1004. As discussed above the pose may vary slightly for different meshes of different bodies of the simulated data. One or more silhouettes1008are then generated from the pose1004of the mesh of the body, thereby representing input data1010that will typically be received by the body dimension model1006once trained. Finally, the silhouette and corresponding body dimensions, which are the labels of the training data for training of the body dimension model1006are used to train the body dimension model so that it can accurately determine body dimensions from silhouettes of bodies represented in 2D images, as discussed herein. As noted above, in some implementations, multiple silhouettes may be generated from each pose1004of the mesh of the body and provided as inputs to train the body dimension model1006. In still other examples, body features may be generated from the one or more silhouettes generated for the posed mesh of the body and those body features and corresponding body dimensions provided as inputs to the body dimension model1006to train the body dimension model. FIG.11is an example flow diagram of a personalized 3D body model generation process1100, in accordance with implementations of the present disclosure. The example process1100begins upon receipt of one or more 2D body images of a body, as in1102. As noted above, the disclosed implementations are operable with any number of 2D body images for use in generating a personalized 3D body model of that body. For example, in some implementations, a single 2D body image may be used. In other implementations, two, three, four, or more 2D body images may be used. As discussed above, the 2D body images may be generated using any 2D imaging element, such as a camera on a portable device, a webcam, etc. The received 2D body images are then segmented to produce a binary silhouette of the body represented in the one or more 2D body images, as in1104. 
As discussed above, one or more segmentation techniques, such as background subtraction, semantic segmentation, Canny edge detectors or algorithms, Sobel operators, algorithms or filters, Kayyali operators, Roberts edge detection algorithms, Prewitt operators, Frei-Chen methods, or any other algorithms or techniques that may be known to those of ordinary skill in the pertinent arts, may be used. In some implementations, the silhouette may be further segmented into body segments. In addition, in some implementations, the silhouettes may be normalized in height and centered in the image before further processing, as in1106. For example, the silhouettes may be normalized to a standard height based on a function of a known or provided height of the body of the user represented in the image and an average height (e.g., average height of female body, average height of male body). In some implementations, the average height may be more specific than just gender. For example, the average height may be the average height of a gender and a race corresponding to the body, or a gender and a location (e.g., United States) of the user, etc. The normalized and centered silhouette may then be processed by one or more neural networks, such as one or more CNNs as discussed above, to generate body parameters representative of the body represented in the 2D body images, as in1108. As discussed above, there may be multiple steps involved in body parameter prediction. For example, each silhouette may be processed using CNNs trained for the respective orientation of the silhouette to generate sets of features of the body as determined from the silhouette. The sets of features generated from the different silhouettes may then be processed using a neural network, such as a CNN, to concatenate the features and generate the body parameters representative of the body represented in the 2D body images. The body parameters may then be provided to one or more body models, such as an SMPL body model or a SCAPE body model, and the body model may generate a personalized 3D body model for the body represented in the 2D body images, as in1110. In addition, in some implementations, the personalized 3D body model may be refined, if necessary, to more closely correspond to the actual image of the body of the user, as in1200. Personalized 3D body model refinement is discussed above, and discussed further below with respect toFIGS.12A and12B. As discussed below, the personalized 3D body model refinement process1200(FIG.12A) returns a refined silhouette, as in1114. Upon receipt of the refined silhouette, the example process1100again generates body parameters, as in1108, and continues. This may be done until no further refinements are to be made to the silhouette. In comparison, the personalized 3D body model refinement process1250(FIG.12B) generates and returns a refined personalized 3D body model and the example process1100continues at block1116. After refinement of the silhouette and generation of a personalized 3D body model from refined body parameters, or after receipt of the refined personalized 3D body model fromFIG.12B, one or more textures (e.g., skin tone, hair, clothing, etc.) from the 2D body images may be applied to the personalized 3D body model, as in1116. Finally, the personalized 3D body model may be provided to the user as representative of the body of the user and/or other personalized 3D body model information (e.g., body mass, joint locations, arm length, body fat percentage, etc.) may be determined from the model, as in1118.
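A sketch of the normalization and centering step described above (block1106) might look like the following. The 640×256 canvas size is borrowed from the normalized body image discussed later in this disclosure, and the target fraction of canvas height and the average-height value are illustrative assumptions only.

```python
import numpy as np

def normalize_silhouette(silhouette: np.ndarray, body_height_m: float,
                         average_height_m: float = 1.62,
                         out_shape=(640, 256)) -> np.ndarray:
    """Scale and center a binary silhouette on a fixed-size canvas.

    The cropped body is scaled so that its pixel height is a standard fraction
    of the canvas multiplied by (known body height / average height), then
    pasted in the center of an `out_shape` canvas.
    """
    rows = np.where(silhouette.any(axis=1))[0]
    cols = np.where(silhouette.any(axis=0))[0]
    if rows.size == 0 or cols.size == 0:
        return np.zeros(out_shape, dtype=silhouette.dtype)
    body = silhouette[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

    target_h = int(0.9 * out_shape[0] * body_height_m / average_height_m)
    target_h = min(max(target_h, 1), out_shape[0])
    target_w = min(max(int(body.shape[1] * target_h / body.shape[0]), 1),
                   out_shape[1])

    # Nearest-neighbor resize of the cropped body, with no extra dependencies.
    r_idx = np.arange(target_h) * body.shape[0] // target_h
    c_idx = np.arange(target_w) * body.shape[1] // target_w
    resized = body[np.ix_(r_idx, c_idx)]

    canvas = np.zeros(out_shape, dtype=silhouette.dtype)
    top = (out_shape[0] - target_h) // 2
    left = (out_shape[1] - target_w) // 2
    canvas[top:top + target_h, left:left + target_w] = resized
    return canvas
```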
FIG.12Ais an example flow diagram of a personalized 3D body model refinement process1200, in accordance with implementations of the present disclosure. The example process1200begins by determining a pose of a body represented in one of the 2D body images, as in1202. A variety of techniques may be used to determine the approximate pose of the body represented in a 2D body image. For example, camera parameters (e.g., camera type, focal length, shutter speed, aperture, etc.) included in the metadata of the 2D body image may be obtained and/or additional camera parameters may be determined and used to estimate the approximate pose of the body represented in the 2D body image. For example, a personalized 3D body model may be used to approximate the pose of the body in the 2D body image and then a position of a virtual camera with respect to that model that would produce the 2D body image of the body may be determined. Based on the determined position of the virtual camera, the height and angle of the camera used to generate the 2D body image may be inferred. In some implementations, the camera tilt may be included in the metadata and/or provided by a portable device that includes the camera. For example, many portable devices include an accelerometer and information from the accelerometer at the time the 2D body image was generated may be provided as the tilt of the camera. Based on the received and/or determined camera parameters, the pose of the body represented in the 2D body image with respect to the camera may be determined, as in1202. The personalized 3D body model of the body of the user may then be adjusted to correspond to the determined pose of the body in the 2D body image, as in1204. With the personalized 3D body model adjusted to approximately the same pose as the user represented in the image, the shape of the personalized 3D body model may be compared to the shape of the body in the 2D body image and/or the silhouette to determine any differences between the personalized 3D body model and the representation of the body in the 2D body image and/or silhouette, as in1206. In some implementations, it may be determined whether any determined difference is above a minimum threshold (e.g., 2%). If it is determined that there is a difference between the personalized 3D body model and the body represented in one or more of the 2D body images, the silhouette may be refined. The silhouette may then be used to generate refined body parameters for the body represented in the 2D body images, as discussed above with respect toFIG.11. If the silhouette is refined, the refined silhouette is returned to the example process1100, as discussed above and as illustrated in block1114(FIG.11). If no difference is determined or if it is determined that the difference does not exceed a minimum threshold, an indication may be returned to the example process1100that there are no differences between the personalized 3D body model and the 2D body image/silhouette. FIG.12Bis an example flow diagram of another personalized 3D body model refinement process1250, in accordance with implementations of the present disclosure. The example process1250begins by determining a pose of a body represented in one of the 2D body images, as in1252. A variety of techniques may be used to determine the approximate pose of the body represented in a 2D body image. For example, camera parameters (e.g., camera type, focal length, shutter speed, aperture, etc.) 
included in the metadata of the 2D body image may be obtained and/or additional camera parameters may be determined and used to estimate the approximate pose of the body represented in the 2D body image. For example, a personalized 3D body model may be used to approximate the pose of the body in the 2D body image and then a position of a virtual camera with respect to that model that would produce the 2D body image of the body may be determined. Based on the determined position of the virtual camera, the height and angle of the camera used to generate the 2D body image may be inferred. In some implementations, the camera tilt may be included in the metadata and/or provided by a portable device that includes the camera. For example, many portable devices include an accelerometer and information from the accelerometer at the time the 2D body image was generated may be provided as the tilt of the camera. Based on the received and/or determined camera parameters, the pose of the body represented in the 2D body image with respect to the camera may be determined, as in1252. The personalized 3D body model of the body of the user may then be adjusted to correspond to the determined pose of the body in the 2D body image, as in1254. With the personalized 3D body model adjusted to approximately the same pose as the user represented in the image, a 2D model image from the personalized 3D body model is generated, as in1256. The 2D model image may be generated, for example, by converting or imaging the personalized 3D body model into a 2D image with the determined pose, as if a digital 2D image of the personalized 3D body model had been generated. Likewise, the 2D model image may be a binary image with pixels corresponding to the model having a first set of values (e.g., white—RGB values of 255, 255, 255) and pixels that do not represent the model having a second set of values (e.g., black—RGB values of 0, 0, 0). The 2D model image is then compared with the 2D body image and/or the silhouette to determine any differences between the 2D model image and the representation of the body in the 2D body image and/or silhouette, as in1258. For example, the 2D model image may be aligned with the 2D body image and/or the silhouette and pixels between the images compared to determine differences between the pixel values. In implementations in which the pixels are binary (e.g., white or black), an error (e.g., % difference) may be determined as a difference in pixel values between the 2D model image and the 2D body image. That error is differentiable and may be utilized to adjust the body parameters and, as a result, the shape of the personalized 3D body model. In some implementations, it may be determined whether any determined difference is above a minimum threshold (e.g., 2%). If it is determined that there is a difference between the 2D model image and the body represented in one or more of the 2D body images/silhouette, the personalized 3D body model and/or the body parameters may be refined to correspond to the shape and/or size of the body represented in the 2D body image and/or the silhouette, as in1260. This example process1250may continue until there is no difference between the 2D model image and the 2D body image/silhouette, or the difference is below a minimum threshold. As discussed above, the refined personalized 3D body model produced from the example process1250, or the personalized 3D body model if no refinements are necessary, is returned to example process1100at block1112and the process1100continues.
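The binary comparison at block1258reduces to counting disagreeing pixels between the aligned 2D model image and the body silhouette. A short sketch using the example 2% minimum threshold follows; the function names are illustrative, not part of the disclosure.

```python
import numpy as np

def silhouette_error(model_image: np.ndarray, body_silhouette: np.ndarray) -> float:
    """Percentage of pixels that differ between two aligned binary images."""
    a = model_image.astype(bool)
    b = body_silhouette.astype(bool)
    return 100.0 * np.count_nonzero(a != b) / a.size

def needs_refinement(model_image: np.ndarray, body_silhouette: np.ndarray,
                     minimum_threshold: float = 2.0) -> bool:
    """True if the difference exceeds the minimum threshold (e.g., 2%)."""
    return silhouette_error(model_image, body_silhouette) > minimum_threshold
```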
FIG.13is an example body dimensions generation process1300, in accordance with disclosed implementations. The example process1300begins upon receipt of one or more 2D body images of a body, as in1302. As noted above, the disclosed implementations are operable with any number of 2D body images for use in generating body dimensions of the body represented in the image. For example, in some implementations, a single 2D body image may be used. In other implementations, two, three, four, or more 2D body images may be used. As discussed above, the 2D body images may be generated using any 2D imaging element, such as a camera on a portable device, a webcam, etc. The received 2D body images are then segmented to produce a binary silhouette of the body represented in the one or more 2D body images, as in1304. As discussed above, one or more segmentation techniques, such as background subtraction, semantic segmentation, Canny edge detectors or algorithms, Sobel operators, algorithms or filters, Kayyali operators, Roberts edge detection algorithms, Prewitt operators, Frei-Chen methods, or any other algorithms or techniques that may be known to those of ordinary skill in the pertinent arts, may be used. In some implementations, the silhouette may be further segmented into body segments. In addition, in some implementations, the silhouette(s) may be normalized in height and centered in the image before further processing, as in1306. For example, the silhouettes may be normalized to a standard height based on a function of a known or provided height of the body of the user represented in the image and an average height (e.g., average height of female body, average height of male body). In some implementations, the average height may be more specific than just gender. For example, the average height may be the average height of a gender and a race corresponding to the body, or a gender and a location (e.g., United States) of the user, etc. The silhouette(s) of the body represented in the 2D image(s) may then be provided to a trained body dimension model, as discussed above, to generate body dimensions for the body represented in the 2D images, as in1308. In some implementations, the silhouette(s) may be sent directly to the trained body dimension model for processing. In other implementations, as discussed above, the silhouettes, if there are more than one, may be concatenated and/or further processed to generate personalized body features representative of the body and corresponding silhouette. Those personalized body features may then be provided to the trained body dimension model and the body dimension model may generate body dimensions for the body represented in the 2D image(s) based on the received personalized body features. Finally, the body dimensions determined by the trained body dimension model may be provided, as in1310. In some implementations, the determined body dimensions may be included in a presentation along with a generated personalized 3D body model of the body, with other body measurements, etc. In other examples, the body dimensions may be used to group or classify the body into a cohort and/or to provide information regarding the body dimensions determined for the body compared to body dimensions of others in the same cohort, having a similar age, gender, etc. FIG.14is a block diagram of an example system1400operable to determine body dimensions1414from a 2D body image, in accordance with implementations of the present disclosure.
As discussed above, the input 2D body image may include a representation of an entire body or a representation of a portion of the body (e.g., head, torso, leg, arm, head to knee, neck to knee, neck to torso, etc.). Likewise, while the discussions herein focus primarily on receiving and processing a 2D body image, the disclosed implementations may likewise be used with 2D video. In such an implementation, a frame from the video may be extracted and processed with the disclosed implementations to determine body dimensions for a body represented in the extracted frame. Each component of the system1400may be performed as computer executable instructions executing on a computing device, such as the computing resources103/203(FIGS.1A,1B,2) and/or the portable device130/230(FIGS.1A,1B,2). In some implementations, all aspects of the system may execute on one set of computing resources, such as the computing resources103/203or the portable device130/230. In other implementations, a first portion of the system1400may execute on one set of computing resources, such as the portable device130/230while a second portion of the system1400executes on a second set of computing resources, such as the computing resources103/203. Regardless of the source, the 2D body image is received by an input handling component1402. The input handling component processes the received 2D body image and produces a normalized body image1405. The normalized body image is of a defined size, such as 640×256 pixels by 3 channels (red, green, blue). Likewise, pixels that do not represent the body may be suppressed by setting their color values to a defined color, such as black (0,0,0). The normalized body image decreases the number of input variations into the remainder of the system1400. In some implementations, the body image may be segmented into multiple body segments and pixels of each body segment may include an identifier associating the pixel with a respective body segment. The normalized body image is then passed to a modeling component1404that may include one or more neural networks1407. For example, the neural network1407may be a modified version of a residual network, such as ResNet-50. Residual learning, or a residual network, such as ResNet-50 utilizes several layers or bottlenecks that are stacked and trained to the task to be performed (such as image classification). The network learns several low/mid/high level features at the end of its layers. In residual learning, the neural network1407is trained to learn the residual of each bottleneck. A residual can be simply understood as a subtraction of the feature learned from input of that layer. Some residual networks, such as ResNet-50 do this by connecting the output of one bottleneck to the input of another bottleneck. The disclosed implementations modify the residual network by extracting the features learned in each layer and concatenating those features with the output of the network to determine body dimensions1414of the body represented in the received 2D body image. In addition to determining a body dimensions1414of the body represented in the 2D image, in some implementations, an update component1406may be used to determine one or more loss functions1411from the determined body dimensions and from anchor body dimensions1415(e.g., synthetically determined body dimensions and/or existing body dimensions) that are maintained by the system1400. 
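The input handling component1402and the normalized body image1405described above can be approximated as follows. This is a hedged sketch: the nearest-neighbour resize helper, the function names, and the assumption that a body mask is already available (for example, from the segmentation discussed earlier) are illustrative choices, not the disclosed implementation.

```python
# Illustrative sketch: produce a normalized body image of a defined size
# (640x256, 3 channels) with non-body pixels suppressed to black (0, 0, 0).
import numpy as np

def nn_resize(image: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Simple nearest-neighbour resize; a real pipeline would use an image library."""
    in_h, in_w = image.shape[:2]
    r = (np.arange(out_h) * in_h // out_h).clip(0, in_h - 1)
    c = (np.arange(out_w) * in_w // out_w).clip(0, in_w - 1)
    return image[r][:, c]

def normalize_body_image(image_rgb: np.ndarray, body_mask: np.ndarray,
                         out_hw=(640, 256)) -> np.ndarray:
    """image_rgb: HxWx3 uint8; body_mask: HxW bool, True where the body is."""
    suppressed = image_rgb * body_mask[..., None]        # background -> (0, 0, 0)
    return nn_resize(suppressed, out_hw[0], out_hw[1])   # defined 640x256 size
```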
Anchor body dimensions, may be baseline or known body dimensions for different images, different body parts, body dimensions corresponding to different body shapes, muscle definitions, etc. The determined loss functions1411may be fed back into the modeling component1404and/or directly to the neural network1407as feedback1413. The feedback may be used to improve the accuracy of the system1400. In some implementations, additional information may be received by the system1400and used as additional inputs to the system1400. For example, additional information about the body, such as age, gender, ethnicity, height, weight, etc., may also be received as inputs and used by the neural network1407in determining body dimensions, as discussed herein. FIG.15is a block diagram of another example system1500operable to determine body dimensions from multiple 2D body images, in accordance with implementations of the present disclosure. In the example illustrated inFIG.15, multiple input images are received by the input handling component1502and each image is processed, as discussed herein, to generate respective normalized body images. For example, if a first image is a front side view image, the front side view image may be processed by the input handling component1502to produce a normalized front body image1505-1. Likewise, if the second image is a back side view image, the back side view image may be processed by the input handling component1502to produce a normalized back body image1505-2. Each normalized body image1505is passed to the modeling component1504and processed by one or more neural networks, such as neural network1507-1or neural network1507-2to determine respective body dimensions of the body. The outputs of those processes may be combined to produce a single set of dimensions1514representative of the body represented in the input images. In addition, the determined body dimensions may be processed by an update component1506along with anchor body measurements1515to determine one or more loss functions1511that are provided as feedback1513to the modeling component and/or the neural networks1507to improve the accuracy of the system1500. In some implementations the final body dimensions1514may be processed by the update component1506to determine the loss functions. In other implementations, the determined body dimensions1514determined for each of the normalized body images may be individually processed by the update component1506and the respective loss function1511provided as feedback1513to the respective portion of the modeling component and/or the neural network that processed the normalized body image. For example, the update component1506may determine a first loss function1511based on the determined body dimensions1514generated by the neural network1507-1and provide first loss functions1511as first feedback1513to the neural network1507-1. Likewise, the update component1506may also determine a second loss function1511based on the determined body dimensions1514generated by the neural network1507-2and provide the second loss functions1511as second feedback1513to the neural network1507-2. In still other examples, rather than utilizing a single neural network to process each received normalized input image, neural networks may be trained to process a combination of normalized input images to determine body dimensions. 
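The update component1406/1506and its feedback1413can be illustrated with a conventional training step. The sketch below uses PyTorch and a mean-squared-error loss purely as an example; the disclosure does not specify a framework, a loss form, or the layer sizes assumed here.

```python
# Assumed training-style sketch: compare predicted body dimensions against
# anchor body dimensions and feed the resulting loss back into the network.
import torch
import torch.nn as nn

def feedback_step(model: nn.Module, optimizer: torch.optim.Optimizer,
                  features: torch.Tensor, anchor_dimensions: torch.Tensor) -> float:
    """features: (B, F) inputs; anchor_dimensions: (B, D) known/synthetic values."""
    predicted = model(features)                                   # determined dimensions
    loss = nn.functional.mse_loss(predicted, anchor_dimensions)   # loss function 1411
    optimizer.zero_grad()
    loss.backward()                                               # feedback 1413
    optimizer.step()
    return loss.item()

# Example usage with a stand-in model (all shapes are assumptions):
model = nn.Sequential(nn.Linear(2048, 256), nn.ReLU(), nn.Linear(256, 16))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_value = feedback_step(model, optimizer,
                           torch.randn(8, 2048), torch.randn(8, 16))
```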
For example, if the combination of front side view body image and back side view body image is often received, a single neural network may be trained to process both normalized body images concurrently to determine body dimensions from the two images. In other implementations, other combinations of images, body directions in the images, or number of images may likewise be used to train a neural network for processing those images and determining body dimensions for the body represented in those images. In some implementations, additional information may be received by the system1500and used as additional inputs to the system1500. For example, additional information about the body, such as age, gender, ethnicity, height, weight, etc., may also be received as inputs and used by the neural networks1507in determining body dimensions, as discussed herein. In some implementations, the system1400/1500may also produce other outputs in addition to the body dimensions. For example, in some implementations, the disclosed implementations may also produce information indicating body measurements (e.g., body fat, body mass index, weight, etc.) and/or age, gender, ethnicity, height, etc. FIG.16is an example body fat measurement determination process1600, in accordance with implementations of the present disclosure. The example process1600begins upon receipt of the normalized body image, as in1602. The normalized body image may be, for example, produced from a 2D image as discussed above. The normalized body image is processed as an input to a first bottleneck of the neural network and the first bottleneck outputs a downsampled feature representation, as in1604. For example, a neural network may include multiple bottlenecks, such as five bottlenecks, each of which process an input and generate a downsampled feature representation as an output. Each bottleneck is a stack of deep-learning units, such as convolution layers, non-linear activation functions (Rectified Linear Units (“ReLU”)), pooling operations (MaxPooling, Average Pooling) and batch normalization. Each bottleneck may reduce the spatial resolution of the input by a factor of two. In other implementations, the spatial resolution may be downsampled differently. In this example, the first bottleneck receives the normalized body image as the input and reduces the spatial resolution of the normalized body image from 640×256, by a factor of two, down to 320×128. Likewise, in this example, the channels are increased to 64 channels. In other implementations, channel increase may be different based on, for example, computing capacity, computation time, etc. Accordingly, in this example, the output of the first bottleneck is a feature representation with a height of 320, a width of 128, and 64 channels. The example process1600then generates extracted features from the downsampled feature representation, as in1606. For example, the features from any one or more bottlenecks may be extracted by averaging the outputs of the bottleneck across the spatial dimensions. For example, if the features are extracted from the output of the first bottleneck, the 64 feature channels are averaged across the 320×128 spatial dimensions. In some implementations, features may not be extracted from all bottlenecks of the neural network. For example, features may not be extracted from the first output of the first bottleneck for use in determining the body fat measurement. In other examples, features may not be extracted from other bottlenecks of the neural network. 
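The per-bottleneck feature extraction of block 1606 amounts to averaging each output across its spatial dimensions. The sketch below reproduces the shape progression given in the example (640×256 input, five bottlenecks, each halving the spatial resolution); the zero tensors merely stand in for real bottleneck outputs.

```python
# Hedged sketch of spatial averaging: (C, H, W) -> (C,) per bottleneck output.
import torch

def extract_features(bottleneck_output: torch.Tensor) -> torch.Tensor:
    """Average a bottleneck output over its height and width dimensions."""
    return bottleneck_output.mean(dim=(1, 2))

# Shape walk-through matching the channel counts stated in the text.
shapes = [(64, 320, 128), (256, 160, 64), (512, 80, 32), (1024, 40, 16), (2048, 20, 8)]
for channels, height, width in shapes:
    features = extract_features(torch.zeros(channels, height, width))
    print(features.shape)   # torch.Size([64]), ..., torch.Size([2048])
```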
In comparison, in some implementations, features may be extracted from all bottleneck outputs and utilized with the disclosed implementations. As the features are extracted, a determination is made as to whether additional bottlenecks remain to be processed, as in1610. If it is determined that additional bottlenecks remain, the downsampled feature representation from the upstream bottleneck is used as the input to the next bottleneck, as in1612, and the process1600continues. Continuing with the example of five bottlenecks, the first downsampled feature representation output from the first bottleneck may be provided as an input to the second bottleneck. The second bottleneck receives the first downsampled feature representation, which has spatial dimensions of 320×128 and 64 channels, and processes that input to produce a second downsampled feature representation that has spatial dimensions of 160×64 and 256 channels. As the example process1600(FIG.16) continues, the third bottleneck receives the second downsampled feature representation and processes that input to produce a third downsampled feature representation that has spatial dimensions of 80×32 and 512 channels. The fourth bottleneck receives the third downsampled feature representation and processes that input to produce a fourth downsampled feature representation that has spatial dimensions of 40×16 and 1024 channels. The fifth bottleneck receives the fourth downsampled feature representation and processes that input to produce a fifth downsampled feature representation that has spatial dimensions of 20×8 and 2048 channels. As illustrated in the example process1600, extracted features are generated from the output downsampled feature representations, as in1606. For example, and continuing with the above discussion, the 256 channels of the second downsampled feature representation may be averaged across the 160×64 spatial dimensions to get second extracted features F2 ∈ ℝ^(256×1). The 512 channels of the third downsampled feature representation may be averaged across the 80×32 spatial dimensions to get third extracted features F3 ∈ ℝ^(512×1). The 1024 channels of the fourth downsampled feature representation may be averaged across the 40×16 spatial dimensions to get fourth extracted features F4 ∈ ℝ^(1024×1). The 2048 channels of the fifth downsampled feature representation may be averaged across the 20×8 spatial dimensions to get fifth extracted features F5 ∈ ℝ^(2048×1). If there are no additional bottlenecks to process, the example process1600utilizes a multi-scale representation which combines the extracted features from each of the downsampled inputs and concatenates them with a 1000-channel feature output from the neural network to produce concatenated features, as in1612. A linear function may then be applied to the concatenated features to determine a body fat measurement representation, as in1614. For example, continuing with the above example, a linear function may be applied to the concatenated features to produce a determined body fat measurement representation which, in this example, is a 65×1 dimensional vector.
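The multi-scale concatenation and linear head of blocks 1612-1614 can be written directly from the channel counts above. The sketch below is illustrative: the zero tensors stand in for the extracted features, F1 is included even though the text notes it may be omitted in some implementations, and the 65-dimensional output simply follows the example.

```python
# Assumed sketch: concatenate F1..F5 with the 1000-channel network output and
# apply a linear function to obtain a 65-dimensional body fat representation.
import torch
import torch.nn as nn

f1 = torch.zeros(64)         # averaged first-bottleneck output (may be omitted)
f2 = torch.zeros(256)        # F2
f3 = torch.zeros(512)        # F3
f4 = torch.zeros(1024)       # F4
f5 = torch.zeros(2048)       # F5
net_out = torch.zeros(1000)  # 1000-channel feature output of the neural network

concatenated = torch.cat([f1, f2, f3, f4, f5, net_out])    # multi-scale representation
linear_head = nn.Linear(concatenated.numel(), 65)           # linear function (illustrative)
body_fat_representation = linear_head(concatenated)         # 65x1 dimensional vector
print(body_fat_representation.shape)                        # torch.Size([65])
```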
Although the disclosure has been described herein using exemplary techniques, components, and/or processes for implementing the systems and methods of the present disclosure, it should be understood by those skilled in the art that other techniques, components, and/or processes or other combinations and sequences of the techniques, components, and/or processes described herein may be used or performed that achieve the same function(s) and/or result(s) described herein and which are included within the scope of the present disclosure. Additionally, in accordance with the present disclosure, the training of machine learning tools (e.g., artificial neural networks or other classifiers) and the use of the trained machine learning tools to detect body pose, determine body point locations, determine body direction, determine body dimensions of the body, determine body measurements of the body, and/or to generate personalized 3D body models of a body based on one or more 2D body images of that body may occur on multiple, distributed computing devices, or on a single computing device, as described herein. Likewise, while the above discussions focus primarily on a personalized 3D body model, body dimensions, and/or body measurements of a body being generated from multiple 2D body direction images, in some implementations, the personalized 3D body model, body dimensions, and/or body measurements may be generated based on a single 2D body direction image of the body. In other implementations, two or more 2D direction body images may be used with the disclosed implementations. Still further, while the above implementations are described with respect to generating personalized 3D body models, body dimensions, and/or body measurements of human bodies represented in 2D body images, in other implementations, non-human bodies, such as dogs, cats, or other animals may be modeled in 3D, body dimensions, and/or body measurements determined based on 2D images of those bodies. Accordingly, the use of a human body in the disclosed implementations should not be considered limiting. It should be understood that, unless otherwise explicitly or implicitly indicated herein, any of the features, characteristics, alternatives or modifications described regarding a particular implementation herein may also be applied, used, or incorporated with any other implementation described herein, and that the drawings and detailed description of the present disclosure are intended to cover all modifications, equivalents and alternatives to the various implementations as defined by the appended claims. Moreover, with respect to the one or more methods or processes of the present disclosure described herein, including but not limited to the flow charts illustrated and discussed herein, orders in which such methods or processes are presented are not intended to be construed as any limitation on the claimed inventions, and any number of the method or process steps or boxes described herein can be combined in any order and/or in parallel to implement the methods or processes described herein. Likewise, in some implementations, one or more steps or orders of the methods or processes may be omitted. Also, the drawings herein are not drawn to scale. 
Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey in a permissive manner that certain implementations could include, or have the potential to include, but do not mandate or require, certain features, elements and/or steps. In a similar manner, terms such as “include,” “including” and “includes” are generally intended to mean “including, but not limited to.” Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more implementations or that one or more implementations necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular implementation. Disjunctive language such as the phrase “at least one of X, Y, or Z,” or “at least one of X, Y and Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (e.g., X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain implementations require at least one of X, at least one of Y, or at least one of Z to each be present. Unless otherwise explicitly stated, articles such as “a” or “an” should generally be interpreted to include one or more described items. Accordingly, phrases such as “a device configured to” are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, “a processor configured to carry out recitations A, B and C” can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C. Language of degree used herein, such as the terms “about,” “approximately,” “generally,” “nearly” or “substantially” as used herein, represent a value, amount, or characteristic close to the stated value, amount, or characteristic that still performs a desired function or achieves a desired result. For example, the terms “about,” “approximately,” “generally,” “nearly” or “substantially” may refer to an amount that is within less than 10% of, within less than 5% of, within less than 1% of, within less than 0.1% of, and within less than 0.01% of the stated amount. Although the invention has been described and illustrated with respect to illustrative implementations thereof, the foregoing and various other additions and omissions may be made therein and thereto without departing from the spirit and scope of the present disclosure. | 130,038 |
11861861 | DETAILED DESCRIPTION Various exemplary embodiments and details are described hereinafter, with reference to the figures when relevant. It should be noted that the figures may or may not be drawn to scale and that elements of similar structures or functions are represented by like reference numerals throughout the figures. It should also be noted that the figures are only intended to facilitate the description of the embodiments. They are not intended as an exhaustive description of the invention or as a limitation on the scope of the invention. In addition, an illustrated embodiment needs not have all the aspects or advantages shown. An aspect or an advantage described in conjunction with a particular embodiment is not necessarily limited to that embodiment and can be practiced in any other embodiments even if not so illustrated, or if not so explicitly described. A method of determining a device parameter for a hearing device is disclosed. The hearing device may be a hearable or a hearing aid, wherein the processor is configured to compensate for a hearing loss of a user. The method may further relate to a listening device or hearing device customization. The hearing device may be of the behind-the-ear (BTE) type, in-the-ear (ITE) type, in-the-canal (ITC) type, receiver-in-canal (RIC) type or receiver-in-the-ear (RITE) type. The hearing aid may be a binaural hearing aid. The hearing device may comprise a first earpiece and a second earpiece, wherein the first earpiece and/or the second earpiece is an earpiece as disclosed herein. FIG.1shows an outer human ear1, where the outer human ear has a number of anatomical structures, that have a distinct shape and position on the outer human ear. The anatomical structures may be e.g. the Helix2, Crus Antihelcis11, Fossa Triangularis3, Crus Helics4, Intertragic Notch5, Antihelix10, Cavum Concae9, Tragus13, Antitragus8, Lobule6, External acoustic meatus12, External auditory meatus (not shown), and Cymba Concae7. Other anatomical structures that are recognizable on the outer human ear may also be utilized in the present method. FIG.2shows a side view of a BTE hearing device20, where the hearing device20comprises a housing21which is intended to be positioned behind the ear of a user, a hearing device tube22which transmits auditory signals to an earpiece23. A first end24of the hearing device tube22is attached to a first tube connector25of the housing21, and where a second end26of the hearing device tube22is attached to a second connector27of the earpiece23, which is positioned opposite to the insertion part28of the earpiece, which is configured to be inserted into the ear canal of a user. The hearing device tube22is often provided with a primary bend29and a secondary bend30, which allow a longitudinal axis of the earpiece to be substantially coaxial with the auditory canal of the user. When a BTE hearing device is personalised or customised for a user, one of the device parameters that is important for the personalisation is the length A of the hearing device tube22. If the length is too short then the earpiece will not fit properly into the ear canal, and the longitudinal axis of the earpiece will not be parallel to the central axis of the ear canal and may cause a reduced comfort for the user. If the length A of the hearing device tube22is too long, the hearing device tube22may stick out from the side of the ear and become visually displeasing for the user. 
Furthermore, if the hearing device tube22is too long, a BTE housing21may be improperly secured to the ear of the user, which may lead to the BTE housing easily falling off the ear and being lost. Thus, it may be important for the personalisation to get a proper and fitting length for the hearing device tube22for the specific user. FIG.3is a schematic block diagram of a method of determining a device parameter for a hearing device. The first step of the method may be to obtain an image100of an outer ear of a user. The image may be obtained by, e.g., taking a photograph of the outer ear and introducing the photograph into a processor, memory of a processor, a computer, a mobile phone or other types of processing devices that are capable of running computer programs. When the image has been obtained, the first anatomical landmark may be identified110from the image data received in step100. Following this, the second anatomical landmark may be identified120from the image data received in step100. When the first and the second anatomical landmarks have been identified, the positions of the landmarks, or of parts of the landmarks, may be identified130. When the positions of the first and second anatomical landmarks have been identified, it is possible to utilize these positions to provide e.g. a distance between the two positions and/or a distance between two points defined by the anatomical landmarks, where the distance may be utilized to establish a device parameter140, such as the length A of a hearing device tube, as shown inFIG.2, where the length of the tube may be provided as an output150for a specialist that is customising the hearing device for a user. The output may then be utilised to cut the hearing device tube to the correct length, and to bend it in the correct places in accordance with the anatomy of the ear, and thereby provide a reliable personalisation of the hearing device tube (device parameter) for the user. Within the understanding of the present disclosure, other device parameters, such as the positioning of the housing, the size of the housing, the size of the earpiece, the curvature of the tubing and other parts of the hearing device, may also be determined using the present method. FIG.4shows an example of the identification of a first and a second anatomical landmark, where the identification of a certain position of the landmarks may be utilized to measure a distance between the two positions, where the distance may be representative of a device parameter such as the tube length of the hearing device. In this example, the first anatomical landmark of the outer human ear1is the Helix2, and the second anatomical landmark is the cavum concae9. The position of the first anatomical landmark may be the lowest leftmost point of the Helix2, marked as B inFIG.4, while the position of the second anatomical landmark may be the center of the cavum concae9, marked as C inFIG.4. By identifying the specific anatomical landmarks, it is possible to indicate which part of the anatomical landmark is important for the specific device parameter. If another device parameter is to be obtained, it may be possible to identify a different first and second anatomical landmark for that purpose. As an example, for the shape of a BTE housing, the curvature of the Helix and e.g. the position and/or curvature of the Crus Antihelix or Fossa Triangularis may be assessed in a similar manner.
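A minimal sketch of steps 130-150 follows: from the pixel positions of point B (on the first anatomical landmark) and point C (on the second anatomical landmark) and a known image scale, a device parameter such as the tube length A can be estimated. The millimetre-per-pixel scale, the bend allowance, and the numeric values in the usage example are hypothetical and used only for illustration.

```python
# Illustrative sketch: estimate hearing device tube length A from two landmark
# positions identified in the ear image.
import math

def estimate_tube_length_mm(point_b_px, point_c_px, mm_per_pixel,
                            bend_allowance_mm=0.0):
    """point_b_px, point_c_px: (x, y) pixel coordinates; returns a length in mm."""
    dx = point_c_px[0] - point_b_px[0]
    dy = point_c_px[1] - point_b_px[1]
    distance_mm = math.hypot(dx, dy) * mm_per_pixel      # landmark-to-landmark distance
    return distance_mm + bend_allowance_mm               # assumed margin for the bends

# Example usage with made-up coordinates and scale:
length_a = estimate_tube_length_mm((412, 318), (455, 402),
                                   mm_per_pixel=0.12, bend_allowance_mm=4.0)
print(f"Suggested tube length A: {length_a:.1f} mm")
```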
FIG.5shows a block diagram of a device for determining a hearing device parameter200. The device comprises a processor201, memory202, and an interface203to provide a communication with a user or a secondary device, e.g. a server. The use of the terms “first”, “second”, “third” and “fourth”, “primary”, “secondary”, “tertiary” etc. does not imply any particular order; the terms are included to identify individual elements. Moreover, the use of the terms “first”, “second”, “third” and “fourth”, “primary”, “secondary”, “tertiary” etc. does not denote any order or importance, but rather the terms “first”, “second”, “third” and “fourth”, “primary”, “secondary”, “tertiary” etc. are used to distinguish one element from another. Note that the words “first”, “second”, “third” and “fourth”, “primary”, “secondary”, “tertiary” etc. are used here and elsewhere for labelling purposes only and are not intended to denote any specific spatial or temporal ordering. Furthermore, the labelling of a first element does not imply the presence of a second element and vice versa. Although features have been shown and described, it will be understood that they are not intended to limit the claimed invention, and it will be made obvious to those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the claimed invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense. The claimed invention is intended to cover all alternatives, modifications, and equivalents. LIST OF REFERENCES
1. Outer human ear
2. Helix
3. Fossa Triangularis
4. Crus Helics
5. Intertragic Notch
6. Lobule
7. Cymba Concae
8. Antitragus
9. Cavum Concae
10. Antihelix
11. Crus Antihelics
12. External acoustic meatus
13. Tragus
20. BTE hearing device
21. Housing
22. Hearing device tube
23. Earpiece
24. First end of hearing device tube
25. First tube connector
26. Second end of the hearing device tube
27. Second connector of the housing
28. Insertion part of earpiece
29. Primary bend of hearing device tube
30. Secondary bend of hearing device tube
100. Obtaining of image
110. Identification of first anatomical landmark
120. Identification of second anatomical landmark
130. Identification of positions of anatomical landmarks
140. Establish a device parameter
150. Output
200. Device for determining a hearing device parameter
201. Processor
202. Memory
203. Interface
A. Length of hearing device tube
B. Position of first anatomical landmark
C. Position of second anatomical landmark. | 9,494 |
11861863 | The detailed description explains embodiments of the invention, together with advantages and features, by way of example with reference to the drawings. DETAILED DESCRIPTION One or more embodiments of the present invention relate to capturing point clouds and, further, to the technical problem of recognizing meaningful objects from the millions of points in each of these point clouds. Particularly, the technical problem addressed by one or more embodiments of the present invention is to autonomously identify a movable object in a point cloud based on a computer aided design (CAD) model of the movable object, taking into consideration that a captured shape of the object can vary based on one or more "joints" of the object. A "joint" can be a connecting point, or a movable junction of the object, that allows one or more parts of the object to turn/change direction along one or more axes. Referring now toFIGS.1-3, a laser scanner20is shown for optically scanning and measuring the environment surrounding the laser scanner20. The laser scanner20has a measuring head22and a base24. The measuring head22is mounted on the base24such that the laser scanner20may be rotated about a vertical axis23. In one embodiment, the measuring head22includes a gimbal point27that is a center of rotation about the vertical axis23and a horizontal axis25. The measuring head22has a rotary mirror26, which may be rotated about the horizontal axis25. The rotation about the vertical axis may be about the center of the base24. The terms vertical axis and horizontal axis refer to the scanner in its normal upright position. It is possible to operate a 3D coordinate measurement device on its side or upside down, and so to avoid confusion, the terms azimuth axis and zenith axis may be substituted for the terms vertical axis and horizontal axis, respectively. The term pan axis or standing axis may also be used as an alternative to vertical axis. The measuring head22is further provided with an electromagnetic radiation emitter, such as light emitter28, for example, that emits an emitted light beam30. In one embodiment, the emitted light beam30is a coherent light beam such as a laser beam. The laser beam may have a wavelength range of approximately 300 to 1600 nanometers, for example 790 nanometers, 905 nanometers, 1550 nm, or less than 400 nanometers. It should be appreciated that other electromagnetic radiation beams having greater or smaller wavelengths may also be used. The emitted light beam30is amplitude or intensity modulated, for example, with a sinusoidal waveform or with a rectangular waveform. The emitted light beam30is emitted by the light emitter28onto a beam steering unit, such as mirror26, where it is deflected to the environment. A reflected light beam32is reflected from the environment by an object34. The reflected or scattered light is intercepted by the rotary mirror26and directed into a light receiver36. The directions of the emitted light beam30and the reflected light beam32result from the angular positions of the rotary mirror26and the measuring head22about the axes25and23, respectively. These angular positions in turn depend on the corresponding rotary drives or motors. Coupled to the light emitter28and the light receiver36is a controller38. The controller38determines, for a multitude of measuring points X, a corresponding number of distances d between the laser scanner20and the points X on object34.
The distance to a particular point X is determined based at least in part on the speed of light in air through which electromagnetic radiation propagates from the device to the object point X. In one embodiment the phase shift of modulation in light emitted by the laser scanner20and the point X is determined and evaluated to obtain a measured distance d. The speed of light in air depends on the properties of the air such as the air temperature, barometric pressure, relative humidity, and concentration of carbon dioxide. Such air properties influence the index of refraction n of the air. The speed of light in air is equal to the speed of light in vacuum c divided by the index of refraction. In other words, cair=c/n. A laser scanner of the type discussed herein is based on the time-of-flight (TOF) of the light in the air (the round-trip time for the light to travel from the device to the object and back to the device). Examples of TOF scanners include scanners that measure round trip time using the time interval between emitted and returning pulses (pulsed TOF scanners), scanners that modulate light sinusoidally and measure phase shift of the returning light (phase-based scanners), as well as many other types. A method of measuring distance based on the time-of-flight of light depends on the speed of light in air and is therefore easily distinguished from methods of measuring distance based on triangulation. Triangulation-based methods involve projecting light from a light source along a particular direction and then intercepting the light on a camera pixel along a particular direction. By knowing the distance between the camera and the projector and by matching a projected angle with a received angle, the method of triangulation enables the distance to the object to be determined based on one known length and two known angles of a triangle. The method of triangulation, therefore, does not directly depend on the speed of light in air. In one mode of operation, the scanning of the volume around the laser scanner20takes place by rotating the rotary mirror26relatively quickly about axis25while rotating the measuring head22relatively slowly about axis23, thereby moving the assembly in a spiral pattern. In an exemplary embodiment, the rotary mirror rotates at a maximum speed of 5820 revolutions per minute. For such a scan, the gimbal point27defines the origin of the local stationary reference system. The base24rests in this local stationary reference system. In addition to measuring a distance d from the gimbal point27to an object point X, the scanner20may also collect gray-scale information related to the received optical power (equivalent to the term “brightness.”) The gray-scale value may be determined at least in part, for example, by integration of the bandpass-filtered and amplified signal in the light receiver36over a measuring period attributed to the object point X. The measuring head22may include a display device40integrated into the laser scanner20. The display device40may include a graphical touch screen41, as shown inFIG.1, which allows the operator to set the parameters or initiate the operation of the laser scanner20. For example, the screen41may have a user interface that allows the operator to provide measurement instructions to the device, and the screen may also display measurement results. The laser scanner20includes a carrying structure42that provides a frame for the measuring head22and a platform for attaching the components of the laser scanner20. 
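The time-of-flight relationship described above (c_air = c/n, with the one-way distance following from the round-trip travel time) can be written compactly as below. The refractive index used is a typical placeholder value, not one derived from measured air temperature, barometric pressure, humidity, or carbon dioxide concentration.

```python
# Hedged sketch of the pulsed time-of-flight distance calculation.
C_VACUUM_M_PER_S = 299_792_458.0

def tof_distance_m(round_trip_time_s: float, refractive_index: float = 1.000273) -> float:
    c_air = C_VACUUM_M_PER_S / refractive_index   # c_air = c / n
    return 0.5 * c_air * round_trip_time_s        # half of the round trip

# Example: a pulse returning after ~66.7 ns corresponds to roughly 10 m.
print(tof_distance_m(66.7e-9))
```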
In one embodiment, the carrying structure42is made from a metal such as aluminum. The carrying structure42includes a traverse member44having a pair of walls46,48on opposing ends. The walls46,48are parallel to each other and extend in a direction opposite the base24. Shells50,52are coupled to the walls46,48and cover the components of the laser scanner20. In the exemplary embodiment, the shells50,52are made from a plastic material, such as polycarbonate or polyethylene for example. The shells50,52cooperate with the walls46,48to form a housing for the laser scanner20. On an end of the shells50,52opposite the walls46,48a pair of yokes54,56are arranged to partially cover the respective shells50,52. In the exemplary embodiment, the yokes54,56are made from a suitably durable material, such as aluminum for example, that assists in protecting the shells50,52during transport and operation. The yokes54,56each includes a first arm portion58that is coupled, such as with a fastener for example, to the traverse member44adjacent the base24. The arm portion58for each yoke54,56extends from the traverse member44obliquely to an outer corner of the respective shell50,52. From the outer corner of the shell, the yokes54,56extend along the side edge of the shell to an opposite outer corner of the shell. Each yoke54,56further includes a second arm portion that extends obliquely to the walls46,48. It should be appreciated that the yokes54,56may be coupled to the traverse member44, the walls46,48and the shells50,54at multiple locations. The pair of yokes54,56cooperate to circumscribe a convex space within which the two shells50,52are arranged. In the exemplary embodiment, the yokes54,56cooperate to cover all of the outer edges of the shells50,54, while the top and bottom arm portions project over at least a portion of the top and bottom edges of the shells50,52. This provides advantages in protecting the shells50,52and the measuring head22from damage during transportation and operation. In other embodiments, the yokes54,56may include additional features, such as handles to facilitate the carrying of the laser scanner20or attachment points for accessories for example. On top of the traverse member44, a prism60is provided. The prism extends parallel to the walls46,48. In the exemplary embodiment, the prism60is integrally formed as part of the carrying structure42. In other embodiments, the prism60is a separate component that is coupled to the traverse member44. When the mirror26rotates, during each rotation the mirror26directs the emitted light beam30onto the traverse member44and the prism60. Due to non-linearities in the electronic components, for example in the light receiver36, the measured distances d may depend on signal strength, which may be measured in optical power entering the scanner or optical power entering optical detectors within the light receiver36, for example. In an embodiment, a distance correction is stored in the scanner as a function (possibly a nonlinear function) of distance to a measured point and optical power (generally unscaled quantity of light power sometimes referred to as “brightness”) returned from the measured point and sent to an optical detector in the light receiver36. Since the prism60is at a known distance from the gimbal point27, the measured optical power level of light reflected by the prism60may be used to correct distance measurements for other measured points, thereby allowing for compensation to correct for the effects of environmental variables such as temperature. 
In the exemplary embodiment, the resulting correction of distance is performed by the controller38. In an embodiment, the base24is coupled to a swivel assembly (not shown) such as that described in commonly owned U.S. Pat. No. 8,705,012 ('012), which is incorporated by reference herein. The swivel assembly is housed within the carrying structure42and includes a motor138that is configured to rotate the measuring head22about the axis23. In an embodiment, the angular/rotational position of the measuring head22about the axis23is measured by angular encoder134. An auxiliary image acquisition device66may be a device that captures and measures a parameter associated with the scanned area or the scanned object and provides a signal representing the measured quantities over an image acquisition area. The auxiliary image acquisition device66may be, but is not limited to, a pyrometer, a thermal imager, an ionizing radiation detector, or a millimeter-wave detector. In an embodiment, the auxiliary image acquisition device66is a color camera. In an embodiment, a central color camera (first image acquisition device)112is located internally to the scanner and may have the same optical axis as the 3D scanner device. In this embodiment, the first image acquisition device112is integrated into the measuring head22and arranged to acquire images along the same optical pathway as emitted light beam30and reflected light beam32. In this embodiment, the light from the light emitter28reflects off a fixed mirror116and travels to dichroic beam-splitter118that reflects the light117from the light emitter28onto the rotary mirror26. In an embodiment, the mirror26is rotated by a motor136and the angular/rotational position of the mirror is measured by angular encoder134. The dichroic beam-splitter118allows light to pass through at wavelengths different than the wavelength of light117. For example, the light emitter28may be a near infrared laser light (for example, light at wavelengths of 780 nm or 1150 nm), with the dichroic beam-splitter118configured to reflect the infrared laser light while allowing visible light (e.g., wavelengths of 400 to 700 nm) to transmit through. In other embodiments, the determination of whether the light passes through the beam-splitter118or is reflected depends on the polarization of the light. The digital camera, which can be the first image acquisition device112, obtains 2D images of the scanned area to capture color data to add to the scanned image. In the case of a built-in color camera having an optical axis coincident with that of the 3D scanning device, the direction of the camera view may be easily obtained by simply adjusting the steering mechanisms of the scanner—for example, by adjusting the azimuth angle about the axis23and by steering the mirror26about the axis25. Referring now toFIG.4with continuing reference toFIGS.1-3, elements are shown of the laser scanner20. Controller38is a suitable electronic device capable of accepting data and instructions, executing the instructions to process the data, and presenting the results. The controller38includes one or more processing elements, such as the processors78. The processors may be microprocessors, field programmable gate arrays (FPGAs), digital signal processors (DSPs), and generally any device capable of performing computing functions. 
The one or more processors78have access to memory80for storing information. Controller38is capable of converting the analog voltage or current level provided by light receiver36into a digital signal to determine a distance from the laser scanner20to an object in the environment. Controller38uses the digital signals that act as input to various processes for controlling the laser scanner20. The digital signals represent one or more laser scanner20data including but not limited to distance to an object, images of the environment, images acquired by panoramic camera126, angular/rotational measurements by a first or azimuth encoder132, and angular/rotational measurements by a second axis or zenith encoder134. In general, controller38accepts data from encoders132,134, light receiver36, light source/emitter28, and panoramic camera126and is given certain instructions for the purpose of generating a 3D point cloud of a scanned environment. Controller38provides operating signals to the light source28, light receiver36, panoramic camera126, zenith motor136and azimuth motor138. The controller38compares the operational parameters to predetermined variances and if the predetermined variance is exceeded, generates a signal that alerts an operator to a condition. The data received by the controller38may be displayed on a user interface via the display device40coupled to controller38. The user interface may be one or more LEDs (light-emitting diodes)82, an LCD (liquid crystal display), a CRT (cathode ray tube) display, a touch-screen display or the like. A keypad may also be coupled to the user interface for providing data input to controller38. In one embodiment, the user interface is arranged or executed on a mobile computing device that is coupled for communication, such as via a wired or wireless communications medium (e.g. Ethernet, serial, USB, Bluetooth™ or WiFi) for example, to the laser scanner20. The controller38may also be coupled to external computer networks such as a local area network (LAN) and the Internet. A LAN interconnects one or more remote computers, which are configured to communicate with controller38using a well-known computer communications protocol such as TCP/IP (Transmission Control Protocol/Internet Protocol), RS-232, ModBus, and the like. Additional systems may also be connected to the LAN with the controllers38in each of these systems20being configured to send and receive data to and from remote computers and other systems20. The LAN may be connected to the Internet. This connection allows controller38to communicate with one or more remote computers connected to the Internet. The processors78are coupled to memory80. The memory80may include random access memory (RAM) device84, a non-volatile memory (NVM) device86, and a read-only memory (ROM) device88. In addition, the processors78may be connected to one or more input/output (I/O) controllers90and a communications circuit92. In an embodiment, the communications circuit92provides an interface that allows wireless or wired communication with one or more external devices or networks, such as the LAN discussed above. Controller38includes operation control methods embodied in application code. These methods are embodied in computer instructions written to be executed by processors78, typically in the form of software.
The software can be encoded in any language, including, but not limited to, assembly language, VHDL (Verilog Hardware Description Language), VHSIC HDL (Very High Speed IC Hardware Description Language), Fortran (formula translation), C, C++, C#, Objective-C, Visual C++, Java, ALGOL (algorithmic language), BASIC (beginners all-purpose symbolic instruction code), visual BASIC, ActiveX, HTML (HyperText Markup Language), Python, Ruby and any combination or derivative of at least one of the foregoing. Referring now toFIG.5, a method150is shown for generating a model or layout of the environment. A model/layout of the environment can be a 2D map, a 3D point cloud, or a combination thereof. The model can be generated using a scanner20. A 2D scan is captured by capturing point clouds in a single plane by the 3D scanner20. Unless specified, as used henceforth, a "model" or an "environment model" can be a 2D map or a 3D point cloud or a combination thereof scanned using a corresponding 2D/3D scanner system. In one or more examples, the scanning is performed on a periodic basis (daily, weekly, monthly) so that an accurate model of the environment can be made available. This periodic scanning makes automated object recognition desirable in order to save time, so that the shape-changing objects do not have to be reset to an initial (or any other predetermined) state. For example, a scan may be performed in a manufacturing facility while the facility is operating; therefore, any movable objects (e.g. robots) may be automatically recognized no matter what position they are in at the instant the scan is performed. In this embodiment, the method150starts in block152with the operator initiating the scanning of an area or facility with the scanner20as described herein. The method150then proceeds to block154wherein the operator acquires images with a camera during the scanning process. The images may be acquired by a camera located in a mobile computing device (e.g. personal digital assistant, cellular phone, tablet or laptop) carried by the operator for example. In one or more embodiments, the scanner20may include a holder (not shown) that couples the mobile computing device to the scanner20. In block154, the operator may further record notes. These notes may be audio notes or sounds recorded by a microphone in the mobile computing device. These notes may further be textual notes input using a keyboard on the mobile computing device. It should be appreciated that the acquiring of images and recording of notes may be performed simultaneously, such as when the operator acquires a video. In one or more embodiments, the recording of the images or notes may be performed using a software application executed on a processor of the mobile computing device. The software application may be configured to communicate with the scanner20, such as by a wired or wireless (e.g. BLUETOOTH™) connection for example, to transmit the acquired images or recorded notes to the scanner20. In one embodiment, the operator may initiate the image acquisition by actuating actuator38that causes the software application to transition to an image acquisition mode. The method150then proceeds to block156where the images and notes are stored in memory, such as memory80for example. In one or more embodiments, the data on the pose of the scanner20is stored with the images and notes. In still another embodiment, the time or the location of the scanner20when the images are acquired or notes were recorded is also stored.
Once the scanning of the area or facility is completed, the method150then proceeds to block158where the environment model164(FIG.6) is generated as described herein. The method150then proceeds to block160where an annotated environment model166(FIG.6) is generated. The annotated environment model166may include user-defined annotations, such as dimensions140or room size178described herein above with respect toFIG.10. The annotations may further include user-defined free-form text or hyperlinks for example. Further, in the exemplary embodiment, the acquired images168and recorded notes are integrated into the annotated environment model166. In one or more embodiments, the image annotations are positioned to the side of the environment model164where the image was acquired or the note was recorded. It should be appreciated that the images allow the operator to provide information to the map user on the location of objects, obstructions and structures, such as but not limited to robot144, barrier174and counter/desk176for example. Finally, the method150proceeds to block162where the annotated map is stored in memory. It should be appreciated that the image or note annotations may be advantageous in embodiments where the annotated environment model166is generated for public safety personnel, such as a fire fighter for example. The images allow the fire fighter to anticipate obstructions that may not be seen in the limited visibility conditions such as during a fire in the facility. The image or note annotations may further be advantageous in police or criminal investigations for documenting a crime scene and allow the investigator to make contemporaneous notes on what they find while performing the scan. Referring now toFIG.7, another method180is shown of generating an environment model having annotations that include 3D coordinates of objects within the scanned area. The method180begins in block182with the operator scanning the area. During the scanning process, the operator may see an object, such as robot144(FIG.6) or any other equipment for example, that the operator may desire to locate more precisely within the environment model or for which the operator may desire to acquire additional information. In one or more embodiments, the scanner20includes a laser projector (not shown) that the operator may activate. The laser projector emits a visible beam of light that allows the operator to see the direction the scanner20is pointing. Once the operator locates the light beam from the laser projector on the desired object, the method180proceeds to block186where the coordinates of the spot on the object of interest are determined. In one embodiment, the coordinates of the object are determined by first determining a distance from scanner20to the object. In one or more embodiments, this distance may be determined by a 3D measurement device, such as but not limited to a laser scanner20, or a 3D camera (not shown) (e.g. an RGBD camera) for example. In addition to the distance, the 3D camera60also may acquire an image of the object. Based on knowing the distance along with the pose of the scanner20, the coordinates of the object may be determined. The method180then proceeds to block188where the information (e.g. coordinates and image) of the object is stored in memory. It should be appreciated that in some embodiments, the operator may desire to obtain a three-dimensional (3D) representation of the object of interest in addition to its location relative to the environment model.
In this embodiment, the method180proceeds to scanning block190and acquires 3D coordinates of points on the object of interest. In one or more embodiments, the object is scanned with a 3D coordinate measurement device, such as a laser scanner20, or the 3D camera60in block192. The scanner20then proceeds to determine the 3D coordinates of points on the surface of the object of interest in block194. In one or more embodiments, the 3D coordinates may be determined by determining the pose of the scanner when the image is acquired by the 3D camera. The pose information along with the distances and a registration of the images acquired by the 3D camera may allow the generation of a 3D point cloud of the object of interest. In one embodiment, the orientation of the object of interest relative to the environment is also determined from the acquired images. This orientation information may also be stored and later used to accurately represent the object of interest on the environment model. The method180then proceeds to block196where the 3D coordinate data is stored in memory. The method180then proceeds to block198where the environment model166is generated as described herein. In one or more embodiments, the location of the objects of interest (determined in blocks184-186) is displayed on the environment model166as a symbol147, such as a small circle, icon, or any other such indicator. It should be appreciated that the environment model166may include additional user-defined annotations added in block200, such as those described herein with reference toFIG.6. The environment model166and the annotations are then stored in block202. In use, the map user may select one of the symbols, such as symbol147for example. In response, an image of the object of interest191,193may be displayed. Where the object of interest191,193was scanned to obtain 3D coordinates of the object, the 3D representation of the object of interest191,193may be displayed. A technical challenge exists in cases such as modern digital factories, where objects are mobile. In such cases, it is challenging to precisely track the objects so as to capture a point cloud representation of the object from which the object is recognizable. Those objects, for example robot144, may not only be mobile, but also be capable of reshaping themselves by actuating one or more components or parts (e.g. an arm) about one or more axes. It should be noted that the embodiments herein are described using robot144as such a shape-changing object; however, in other embodiments the shape-changing objects can be other types of objects. FIG.8andFIG.9illustrate shape-change of a robot144according to one or more embodiments. Typically, CAD-models of robots144and other manufacturing units represent a baseline, or an "All-Zero" state (FIG.8) of the unit's axes (204,206). It should be noted that the robot144depicted herein is shown with two axes204,206along which the robot144can change its shape; however, in other examples, the robot144can include any other number of axes, and the axes can facilitate a change in shape in a different manner than what is depicted. The "in-use" shape (FIG.9) of the robot144differs from the CAD model (FIG.8).
This is challenging when considering an artificial intelligence (AI) or other autonomous process to perform recognition or identification of the robot in the point cloud, because AI would need training data for all possible shapes of the robot144, increasing training time, training data generation and overhead. The technical solutions described herein address such challenges by using a shaped CAD model, representing the actual axes data of the robot144, and then using the AI to allocate the shaped model's representation in the point cloud that is captured. The technical solutions described herein facilitate autonomously identifying CAD models in the point cloud, the CAD models being such that the real-world object's shape changes over time (FIGS.8, and9). To facilitate this, the robot144is a smart device, that includes one or more sensors210(FIG.8,9). The robot144provides access to the sensor data, such as position, speed, timestamp, and the like. The sensors210can include position sensors, gyroscope, timer, and other such sensors that can provide an identification of the movement of the one or more axes204,206, and hence to the real-world shape of the robot144at time of measurement. In such cases, the sensor data is used with a timestamp equal or close to the scanning timestamp (from the scanner20) to shape the CAD model in order to gain similarity between the CAD model and the real-world representation of the robot144. This shaped CAD model can then be used to search for the respective object in the point cloud that is captured. It should be noted that the sensors210depicted inFIG.8andFIG.9are exemplary, and that in other examples, the sensors210can be located at different points on the robot144. Further, other types of sensors may also be used other than a rotational sensor. For example, where the object has a portion that moves linearly, a linear encoder may be used to determine its position. FIG.10depicts an object recognition system that can identify a shape-changing object in a point cloud according to one or more embodiments.FIG.11depicts a flowchart of a method for object recognition for a shape changing object in a point cloud according to one or more embodiments. The method240includes capturing a point cloud226containing the robot144using the scanner20, at241. At the same time, the method also includes collecting sensor data222for the axes (204,206) from the robot144, at242. In one or more examples, the sensor data222from the robot144can be received using a messaging framework like OPC uniform architecture, near field communication, or any other communication protocol/framework (e.g. Data Distribution Service or DDS). Those messaging frameworks provide messaging streams or similar technologies, where the sensor and axes data can be consumed. In one or more examples, the timestamps of the captured point cloud226and the captured sensor data222are stored. Accordingly, for an offline analysis at a later time the timestamps can be compared and the point cloud226and the sensor data222with substantially matching timestamps are used together to recognize the robot144in the point cloud226. Both inputs, the point cloud226from the laser scanning and the sensor data222, can be collected by a central computing device230, such as a networked computer, an enterprise computer/server, and the like. The central computer accesses a baseline CAD model of the robot144and manipulates the baseline CAD model to represent a real-world shape224of the robot144, at243. 
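A hedged sketch of blocks 242-243 follows: the sensor record closest in time to the scan timestamp is selected and used to pose a simple articulated stand-in for the CAD model. The record layout, the planar two-link geometry, and the link lengths are assumptions made for illustration; a real system would reshape the actual CAD model of robot144about its axes204,206.

```python
# Illustrative sketch: pair the nearest-timestamp sensor record with the scan,
# then pose a planar two-axis stand-in model from the reported axis angles.
import math

def nearest_sensor_record(sensor_records, scan_timestamp):
    """sensor_records: iterable of dicts like {'t': float, 'axis1': deg, 'axis2': deg}."""
    return min(sensor_records, key=lambda rec: abs(rec['t'] - scan_timestamp))

def shaped_model_points(axis1_deg, axis2_deg, link1=1.0, link2=0.6):
    """Return base, elbow and tool points of a planar two-link arm (assumed geometry)."""
    a1 = math.radians(axis1_deg)
    a2 = math.radians(axis2_deg)
    elbow = (link1 * math.cos(a1), link1 * math.sin(a1))
    tool = (elbow[0] + link2 * math.cos(a1 + a2),
            elbow[1] + link2 * math.sin(a1 + a2))
    return [(0.0, 0.0), elbow, tool]

# Example usage with made-up records and timestamps:
records = [{'t': 1000.0, 'axis1': 0.0, 'axis2': 0.0},
           {'t': 1000.5, 'axis1': 35.0, 'axis2': -20.0}]
rec = nearest_sensor_record(records, scan_timestamp=1000.4)
print(shaped_model_points(rec['axis1'], rec['axis2']))
```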
In one or more examples, the manipulation is performed using a CAD program. It should be appreciated that while the examples provided herein refer to the central computing device230as a single device, this is for exemplary purposes, and in other embodiments the central computing device may comprise a plurality of computing devices or be part of a distributed computing system. An object recognition module228uses the obtained real-world shape224to identify the robot144in the point cloud226, at244. In one or more examples, the object recognition module228uses an AI, which searches the point cloud226for the real-world shape224. Accordingly, objects without a fixed shape can be recognized or even identified in the point cloud226. The technical solutions in this manner provide a robust and efficient recognition algorithm, compared to a "brute force" combinatorial algorithm. Further, objects of the same class can be distinguished based on the sensor data. Typically, multiple instances of one object, for example, multiple robots of the same type, are present in one point cloud. The object surface information is the same for each instance, but the pose can be extracted from the general control system. Because robots typically have different poses, which can be read from the control system, one robot can be distinguished from another robot of the same type. Objects that are recognized in the point cloud226in this manner can be filtered, if desired, at245. The method240can be repeated for each point cloud capture. The filtered point cloud is then stored as part of the 2D floorplan, at246. Accordingly, the technical solutions described herein facilitate autonomously identifying an object that can change shape in a point cloud based on a baseline CAD model of the object. To facilitate such object recognition, at the time the point cloud is captured by a scanner, data from one or more sensors associated with the object are also captured. The sensor data is stored with the point cloud along with a timestamp indicating that the two inputs were captured substantially simultaneously. The sensor data identify a position of one or more axes (or configurations) of the object. The position of the axes is used to determine a shifted/changed CAD model from the baseline CAD model. The shifted/changed shape represented by the new CAD model is subsequently searched for and identified in the point cloud. According to one or more embodiments, the CAD model can be a parametric-based model, which includes constraints for the robot144represented by the CAD model to move to a position within a range of motion defined by the constraints. Alternatively, or in addition, the CAD model includes a catalog of files/data structures with the robot144defined in the different positions that the robot144takes in operation. It should be appreciated that this provides advantages when environments are scanned on a periodic basis in that the system can distinguish an object that was newly placed in the environment from one that is simply in a different position (e.g. arms are in a different configuration). This allows the operator to maintain an updated catalog or schedule of objects within the environment. The term "about" is intended to include the degree of error associated with measurement of the particular quantity based upon the equipment available at the time of filing the application. For example, "about" can include a range of ±8% or 5%, or 2% of a given value.
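To make the shaping and recognition steps at blocks243-244more concrete, the following sketch (a simplified assumption-laden stand-in, not the disclosed algorithm) rotates a sampled baseline model by one measured axis angle and then scores candidate placements in the captured cloud by mean nearest-neighbor distance; a production system would apply a full kinematic chain and a more robust registration or learned search:

```python
import numpy as np
from scipy.spatial import cKDTree

# Stand-in data: sampled points of the baseline CAD surface and a captured cloud.
baseline_pts = np.random.rand(200, 3)            # sampled "all-zero" model surface
captured_cloud = np.random.rand(5000, 3) * 10.0  # scanned environment point cloud

def shape_model(baseline: np.ndarray, axis_origin: np.ndarray, angle_deg: float) -> np.ndarray:
    """Rotate the baseline model points about a vertical axis through axis_origin.

    A real kinematic chain would apply one transform per joint; a single rotation
    is used here only to illustrate 'shaping' the all-zero model from sensor data.
    """
    a = np.radians(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])
    return (baseline - axis_origin) @ rot.T + axis_origin

def placement_score(model_pts: np.ndarray, cloud: np.ndarray, offset: np.ndarray) -> float:
    """Mean distance from the placed model points to their nearest cloud points."""
    dists, _ = cKDTree(cloud).query(model_pts + offset)
    return float(dists.mean())

# Shape the model with the angle read from the robot near the scan timestamp,
# then keep the candidate placement with the lowest score.
shaped = shape_model(baseline_pts, axis_origin=np.zeros(3), angle_deg=35.0)
candidates = [np.array([x, y, 0.0]) for x in range(10) for y in range(10)]
best_offset = min(candidates, key=lambda off: placement_score(shaped, captured_cloud, off))
print(best_offset)
```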
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, element components, and/or groups thereof. While the invention has been described in detail in connection with only a limited number of embodiments, it should be readily understood that the invention is not limited to such disclosed embodiments. Rather, the invention can be modified to incorporate any number of variations, alterations, substitutions or equivalent arrangements not heretofore described, but which are commensurate with the spirit and scope of the invention. Additionally, while various embodiments of the invention have been described, it is to be understood that aspects of the invention may include only some of the described embodiments. Accordingly, the invention is not to be seen as limited by the foregoing description, but is only limited by the scope of the appended claims. | 35,954 |
11861864 | DETAILED DESCRIPTION Embodiments will now be described with reference to the figures. For simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the Figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the embodiments described herein. However, it will be understood by those of ordinary skill in the art that the embodiments described herein may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the embodiments described herein. Also, the description is not to be considered as limiting the scope of the embodiments described herein. Various terms used throughout the present description may be read and understood as follows, unless the context indicates otherwise: “or” as used throughout is inclusive, as though written “and/or”; singular articles and pronouns as used throughout include their plural forms, and vice versa; similarly, gendered pronouns include their counterpart pronouns so that pronouns should not be understood as limiting anything described herein to use, implementation, performance, etc. by a single gender; “exemplary” should be understood as “illustrative” or “exemplifying” and not necessarily as “preferred” over other embodiments. Further definitions for terms may be set out herein; these may apply to prior and subsequent instances of those terms, as will be understood from a reading of the present description. Any module, unit, component, server, computer, terminal, engine, or device exemplified herein that executes instructions may include or otherwise have access to computer-readable media such as storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information, and which can be accessed by an application, module, or both. Any such computer storage media may be part of the device or accessible or connectable thereto. Further, unless the context clearly indicates otherwise, any processor or controller set out herein may be implemented as a singular processor or as a plurality of processors. The plurality of processors may be arrayed or distributed, and any processing function referred to herein may be carried out by one or by a plurality of processors, even though a single processor may be exemplified. Any method, application, or module herein described may be implemented using computer readable/executable instructions that may be stored or otherwise held by such computer-readable media and executed by the one or more processors. 
The following relates generally to geospatial data collection; and more particularly, to systems and methods for determining mediated reality positioning offset for a virtual camera pose to display geospatial object data. In many cases, Augmented Reality (AR) can be used in geospatial data collection and verification. High-accuracy AR geospatial data collection generally relies on a combination of global navigation satellite system (GNSS) and/or real-time kinematic (RTK) positioning, a surveying prism, or other types of position tracking approaches, in combination with AR software, to display spatial data in relation to a positioning device. AR visuals are generally displayed in relation to the visualization device's camera. To improve accuracy of visuals, the AR software can make use of a defined offset to take into account where the positioning device is located relative to the visualization device's camera. However, as illustrated in the examples ofFIGS.4and5, typical operations require frequent device tilting, and thus, the offset frequently changes. The offset due to tilting becomes especially pronounced when a surveying pole, a backpack-based antenna, or a head-mounted antenna is used, as illustrated in the examples ofFIGS.6and7. Such tilt introduces unavoidable inaccuracies during operations, as the device's camera is not set correctly relative to the positioning device. For example, in such situations, the antenna's position is usually defined as being several tens of centimeters behind the user. However, if the user begins to move sideways, the antenna begins to move in parallel with the device rather than trailing it. Thus, the offset between the camera and the antenna changes from being several tens of centimeters behind the user to several tens of centimeters beside the user. It is very important for high-precision applications, such as in preparation for engineering work, that the accuracy of the AR visuals remain within centimeters of the true location. As the tilt may introduce a positioning shift of a meter or more (especially for pole-mounted GNSS devices), compensating for the shift is substantially important for maintaining positioning accuracy. While the following disclosure refers to mediated reality, it is contemplated that this includes any suitable mixture of virtual aspects and real aspects; for example, augmented reality (AR), mixed reality, modulated reality, holograms, and the like. The mediated reality techniques described herein can utilize any suitable hardware; for example, smartphones, tablets, mixed reality devices (for example, Microsoft™ HoloLens™), true holographic systems, purpose-built hardware, and the like. In some cases, geospatial data can be stored in a database after having been collected using any suitable approach; as an example, using geographic information systems (GIS) with, for example, a total station receiver, a high-precision global navigation satellite system (GNSS) receiver and/or a real-time kinematic (RTK) positioning receiver. Advantageously, the present embodiments employ advanced visualization technologies, such as mediated reality techniques, to work in conjunction with other data collection techniques, such as visual positioning systems, to collect, adjust and/or verify the positioning of representations of geospatial data. Turning toFIG.1, a system150for determining mediated reality (MR) positioning offset for a virtual camera pose to display geospatial object data is shown, according to an embodiment.
In this embodiment, the system150is run on a local computing device (for example, a mobile device). In further embodiments, the system150can be run on any other computing device; for example, a server, a dedicated piece of hardware, a laptop computer, a smartphone, a tablet, a mixed reality device (for example, a Microsoft™ HoloLens™), true holographic systems, purpose-built hardware, or the like. In some embodiments, the components of the system150are stored by and executed on a single computing device. In other embodiments, the components of the system150are distributed among two or more computer systems that may be locally or remotely distributed; for example, using cloud-computing resources. FIG.1shows various physical and logical components of an embodiment of the system150. As shown, the system150has a number of physical and logical components, including a central processing unit ("CPU")152(comprising one or more processors), random access memory ("RAM")154, a user interface156, a device interface158, a network interface160, non-volatile storage162, and a local bus164enabling CPU152to communicate with the other components. CPU152executes an operating system, and various modules, as described below in greater detail. RAM154provides relatively responsive volatile storage to CPU152. The user interface156enables an administrator or user to provide input via an input device, for example a mouse or a touchscreen. The user interface156also outputs information to output devices; for example, a mediated reality device192, a display or multiple displays, a holographic visualization unit, and the like. The mediated reality device192can include any device suitable for displaying augmented or mixed reality visuals; for example, smartphones, tablets, holographic goggles, purpose-built hardware, or other devices. The mediated reality device192may include other output sources, such as speakers. In some cases, the system150can be collocated with or part of the mediated reality device192. In some cases, the user interface156can have the input device and the output device be the same device (for example, via a touchscreen). The mediated reality device192can display a mediated reality 'live' view (such as a video stream or a sequential stream of captured images) received from a camera. This live view is oriented in a particular determined direction. In embodiments using holographic devices, in some cases, receiving the 'live view' can be omitted because the visual representation itself is displayed in the physical space. Non-volatile storage162stores the operating system and programs, including computer-executable instructions for implementing the operating system and modules, as well as any data used by these services. Additional stored data can be stored in a database166. During operation of the system150, the operating system, the modules, and the related data may be retrieved from the non-volatile storage162and placed in RAM154to facilitate execution. In an embodiment, the system150further includes a number of functional modules to be executed on the one or more processors152, including a positioning module170, an offset module174, and a correction module176. In further cases, the functions of the modules can be combined or executed by other modules.
FIG.9illustrates a diagrammatic example showing that, depending on how the MR device and positioning device move (i.e., straight ahead, at an angle, or completely sideways), the position of the MR device relative to the positioning device changes, and the positioning device shifts from being behind the MR device to beside it. Thus, if the position of the virtual display of the MR device is determined relative to the positioning device based on fixed offsets, it will be incorrect. In the present embodiments, the positioning module170of the system150determines the position, velocity, and precise time by processing the signals received from multiple satellites via a GNSS receiver computing device190. The position and velocity can be determined periodically or at any suitable times. GNSS can use satellites from one or more positioning systems, including the global positioning system (GPS), global navigation satellite system (GLONASS), Galileo, Beidou, quasi-zenith satellite system (QZSS), and other regional systems. Based on distance estimates determined from the multiple GNSS satellites, the GNSS receiver can determine position data; for example, including one or more of latitude, longitude, and elevation relative to the Earth or relative to a structure or object on the Earth. While the present embodiments describe using GNSS, it is appreciated that any spatial positioning framework of commercially suitable accuracy can be used; for example, total station, GPS, GLONASS, Galileo, Beidou, QZSS, Wi-Fi positioning system (WPS), cellular network positioning, and/or other approaches. FIG.8illustrates an example of a high-accuracy MR visualization device with a positioning device mounted above the visualization device. The example ofFIG.8illustrates a user using the mediated reality device192(a tablet computer) to view an object in the geospatial data (a bus station) superimposed on an image captured by the camera or a virtual representation. The camera can be part of the mediated reality device192, a stand-alone computing device, or part of another computing device. In some cases, there can be multiple cameras. In some cases, the mediated reality device192, the positioning module170, or a combination, can determine position using computer vision recognition of objects in the environment. In some cases, the mediated reality device192can include one or more other sensors to determine position and/or orientation; for example, accelerometers, inertial sensors, and/or gyroscopes. Position includes a coordinate (for example, three-dimensional) representing the relative position of the mediated reality device192, and orientation (for example, a three-dimensional vector) includes a representation of the relative heading or direction of the mediated reality device192. The positioning module170can determine position and orientation periodically, or at any suitable time. For accurate placement of the geospatial object relative to the environment by the mediated reality device192, positioning and orientation of the system150need to be determined. The MR positioning data determined by the positioning module170and the geographic positioning data determined by the positioning module170can be used conjunctively by the offset module174to determine an accurate position of the system150. Generally, due to inherent inaccuracies, the MR positioning data determined or received by the positioning module170and the geographic positioning data determined or received by the offset module174are going to differ.
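Returning to the GNSS step described above, the following minimal sketch (not from the source; receiver clock bias and atmospheric effects are ignored, and all names and values are hypothetical) shows how a receiver position could be estimated from range estimates to satellites at known positions using a Gauss-Newton least-squares refinement:

```python
import numpy as np

def estimate_position(sat_positions: np.ndarray, ranges: np.ndarray,
                      guess: np.ndarray, iterations: int = 10) -> np.ndarray:
    """Refine a receiver position from satellite range estimates.

    sat_positions: (N, 3) satellite coordinates in a local frame (assumed known)
    ranges:        (N,) measured distances to each satellite
    guess:         (3,) initial position estimate (e.g., last known position)
    """
    pos = guess.astype(float)
    for _ in range(iterations):
        diffs = pos - sat_positions                 # (N, 3)
        predicted = np.linalg.norm(diffs, axis=1)   # predicted ranges from current guess
        residual = ranges - predicted
        jacobian = diffs / predicted[:, None]       # d(range)/d(position)
        # Solve jacobian * delta = residual in the least-squares sense
        delta, *_ = np.linalg.lstsq(jacobian, residual, rcond=None)
        pos += delta
    return pos

# Synthetic, noise-free example: four satellites and a receiver at (2, 3, 0).
sats = np.array([[0, 0, 20], [15, 0, 18], [0, 15, 22], [15, 15, 16]], float)
true_pos = np.array([2.0, 3.0, 0.0])
measured = np.linalg.norm(sats - true_pos, axis=1)
print(estimate_position(sats, measured, guess=np.array([5.0, 5.0, 1.0])))
```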
MR positioning data is generally less accurate due to requiring initialization, such as having to establish a temporary local reference frame with a first predetermined point. Additionally, MR positioning data generally has unsatisfactory accuracy and is prone to drift over time and distance. Further, coordinate positioning data (such as from GNSS) can also have inherent error and imprecision. Thus, the MR positioning data and the geographic positioning data require periodic reconciliation to ensure high accuracy and proper placement of the geospatial objects relative to the environment. Generally, it is difficult to maintain accurate headings when using geographic positioning, such as with the GNSS receiver. In some cases, the MR positioning data and the geographic positioning data can be reconciled using a vector approach, a multi-antenna setup, a compass, MR/physical path convergence, or other approaches. Once accurate positioning and heading are established, any shift of the positioning device relative to an MR virtual viewpoint will introduce inaccuracies to the MR visual placement. Geospatial objects are displayed in MR relative to the assumed position of the virtual viewpoint. The virtual viewpoint position is calculated based on the positioning device output. Shift of the positioning device will update the position of the virtual viewpoint. Although the virtual viewpoint did not itself move, the MR device will consider that it did, causing it to reproject the visuals. FIGS.4and5demonstrate that even the slightest tilt of the MR device can cause the positioning device to move relative to the virtual viewpoint. Such moves are unavoidable under normal operating conditions due to how the operator uses the device. This issue is magnified with antennas that are located at the end of an elongated object, such as the pole-mounted system exemplified inFIG.10, causing much greater shift. As illustrated inFIG.6, tilting the MR device/rig moves the antenna lower and horizontally away, relative to the visualization device, from the "neutral" upright position illustrated inFIG.10, thus shifting the displayed visuals by the same amount vertically and horizontally. The tilt of the MR device/rig can occur in any direction. Since the MR device can be used to display geospatial objects and data, it is important to determine an accurate location of the camera in space (e.g., latitude, longitude, elevation and heading) to set a pose of the virtual camera displayed by the MR device relative to the virtual (digital) representations of the geospatial objects. The position of the camera of the MR device relative to the positioning device can be determined by obtaining the position of the positioning device (e.g., GNSS) and applying offsets to the virtual camera. This works well if the positioning device stays in the same position relative to the camera of the MR device. However, once tilt is introduced, the position of the positioning device (e.g., GNSS) relative to the camera of the MR device changes. If the offsets are applied using the original offset settings, then the position of the camera of the MR device will be calculated incorrectly. This will lead to defining the wrong virtual pose for the virtual camera, thus placing digital geospatial objects in incorrect positions.
In an example, if a pole-based positioning device is used, and the pole is tilted, fixed offsets between the MR device camera and the pole-based positioning device will make it look like the user moved forward by 1 m (if the pole is tilted, and the antenna at the end of the positioning device moved forward by 1 m relative to its neutral position). However, the user did not move, so the virtual objects will shift by 1 m in the virtual camera view, causing significant inaccuracies in the MR object representation. This situation can be further complicated by separating the positioning device (such as an antenna) and the visualization device, by, for example, placing the positioning device in a backpack (as illustrated in the example ofFIG.7). Under strict operating conditions, the MR device can determine the distance between the positioning device and the virtual viewpoint of the MR device. As the user moves forward, the MR device can offset the visuals by the same distance. However, as exemplified inFIG.9, if the user starts walking sideways, for example while examining a fence or a wall, the MR device will continue calculating the offset as if the positioning device were behind the virtual viewpoint along the moving path. However, since the user is moving sideways, the positioning device moves beside the virtual viewpoint, and not behind it, introducing significant positioning inaccuracies to the projected visuals. Advantageously, the present embodiments address the substantial technical challenges present in having to deal with the accurate placement of geospatial data relative to the virtual viewpoint using high-accuracy positioning data of the MR device as a reference. In many cases, the system150, via the user interface156, directs the mediated reality device192to present at least a subset of a library of geospatial objects to the user. The objects can come from an internal library of geospatial object definitions, an external library of geospatial object definitions, or a library of geospatial object definitions defined by the user. In some cases, the user can provide input to create or edit object definitions. An example screenshot of such a presentation is shown inFIG.15. In some cases, object definitions can be generated automatically based on one or many dynamic factors. The object definition can have associated therewith attributes of the object; for example, geometry type, 2D or 3D model parameters, the object type (for example, hydrant or manhole), object condition, colour, shape, and other parameters. In other cases, the object can be defined as a simple point, line, or area, without any additional attributes. In some cases, a machine vision model can be used to identify objects in a scene captured by the mediated reality device192, and then identify an aspect of the object; for example, a closest point of the object or a center of the object. In some cases, the object library can be integrated with external systems. The format of the objects in the library can include GeoJSON or other protocols. In an example external integration arrangement, the external system crafts a token that contains information necessary for the system150to understand what object is being collected and what the properties of such object are. This token can then be passed to the system150via the network interface160using "pull" or "push" request approaches. The object definition can include, for example, the type of object, such as a pipe or a point.
In further cases, the object definition can also include attributes or characteristics of the object. As an example, the object definition can include: a manhole 1.2 m wide and 3.2 m deep with grey cover installed in 1987 oriented 14d North. Generally, the definition can be as extensive or as simple as required to generate a visual representation. In many cases, the system150presents a visual representation to the user via the user interface156, where the visual representation encompasses the geospatial objects defined above. The visual representation can be, for example, a three-dimensional (3D) digital-twin model resembling the collected object. In further cases, the visual representation can be, for example, a symbol representing the object, such as a point, a flag, a tag, or the like. In further cases, the visual representation can be, for example, a schematic representation, a raster image, or the like. In some cases, the type of visual representation can be associated with the object in the library; and in other cases, the type of visual representation can be selected by the user. In some cases, along with the visual representation, other information can be displayed; for example, distance, elevation, size, shape, colours, and the like, can be displayed to assist with visualization and/or precise placement. In some cases, such as with GIS, the visual representation location can be represented by a single point, line, or outline, and to help the user understand where the object is placed, a point, a cross, a line, or other means can be used within the visual representation. FIG.16illustrates an example screenshot of a visual representation (3D model) of a sewer pipe502presented on the screen collocated with a real-life sewer manhole504. As illustrated, the background of the screen can be the mediated reality 'live' view received from the camera that is oriented in the direction of the system150. Turning toFIG.2, a flowchart of a method200for determining mediated reality (MR) positioning offset for a virtual camera pose to display geospatial object data is shown, according to an embodiment. At block202, the positioning module170determines, or receives an indication, that the system150is in a neutral position. In some cases, the neutral position is defined by the point where the MR visualization device192is leveled horizontally and the GNSS receiver computing device190is at or near the topmost point of its possible rotation about the x-axis in the y-z plane relative to its fulcrum point. For example, as illustrated inFIG.11and by the middle position illustrated inFIG.12. In further cases, the neutral position can be retrieved from previous sessions stored in the database166, entered manually, or defined through other means. For example, via neutral x1, y1, z1 offset entry as depicted in the example ofFIG.3. At block204, the positioning module170records, receives, or determines positioning parameters associated with the neutral position. For example, the roll, pitch, and yaw associated with the neutral position may be determined using suitable internal and/or external sensors, such as an inertial measurement unit (IMU) with gyroscopic sensors. At block206, the positioning module170receives positioning data representing a physical position from a spatial sensor type computing device190, where the physical position includes geographical coordinates. In most cases, the geographical coordinates are relative to the surface of the earth, for example latitude and longitude.
In other cases, the geographical coordinates can be relative to another object; for example, relative to a building or landmark. In some cases, the physical position includes an elevation. In some cases, the positioning module170also receives or determines an orientation or bearing. In an example, the positioning module170can determine the position and orientation in 2D or 3D space (latitude, longitude, and, in some cases, elevation) using internal or external spatial sensors and positioning frameworks; for example, GNSS and/or RTK, Wi-Fi positioning system (WPS), manual calibration, vGIS calibration, markers, and/or other approaches. The positioning module170can then track the position and/or the orientation during operation. In some cases, machine vision combined with distance finders can be used to determine position, for example, using triangulation. In some cases, distance can be determined, for example, by using a built-in or external range finder spatial sensor directed at the object. In some cases, distance can be determined by a spatial sensor by capturing several images of the scene and comparing pixel shift. In some cases, distance can be determined using time-of-flight (TOF) spatial sensors. In some cases, beacon-based positioning can be used; for example, using iBeacon™. At block208, the positioning module170records updated positioning parameters (for example, the roll, pitch, and yaw of the MR device) at the time associated with receiving the position from the positioning module170in order to determine updated coordinates based on the recorded roll, pitch, and yaw, as described herein. At block210, the offset module174determines an updated offset for the positioning device. Using initial offsets between the camera of the MR device and the antenna of the positioning device in the neutral position, tilting the positioning device will move the antenna relative to the neutral axis. The initial offsets can be received as input from the user, received over a communication network such as the internet, or factory-set based on device type. The updated positioning parameters can thus be used to determine the distance between the camera of the MR device and the positioning device. Using geometric relationships, such as the Pythagorean theorem, the difference between the initial (x, y, z) angles of the neutral position and the (x, y, z) angles (roll, pitch, yaw) of the updated positioning parameters, together with the distance between the camera of the MR device and the positioning device, provides the updated coordinates of the positioning device in (x, y, z) space relative to the camera of the MR device. In a particular case, using the neutral position values (neutral x1, y1, z1 offset) and the updated coordinates (x2, y2, z2), the offset module174determines a diagonal length for a rectangular prism configuration (as illustrated inFIG.13) or a hypotenuse in a triangle configuration. The difference between the neutral position values and the end point of the diagonal length or hypotenuse represented by the updated offset values (x2, y2, z2) represents the distance that the positioning device has moved from the neutral position, and thus, the updated offset. At block212, the offset module174outputs the updated offset and/or the updated coordinates of the camera of the MR device.
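A minimal sketch of the offset update at blocks208-212, assuming the roll/pitch/yaw angles are available in radians and the neutral camera-to-antenna offset is known (the rotation convention and helper names are assumptions, not taken from the source):

```python
import numpy as np

def rotation_matrix(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """Rotation built as Rz(yaw) @ Ry(pitch) @ Rx(roll), angles in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return rz @ ry @ rx

def updated_offset(neutral_offset, neutral_rpy, current_rpy):
    """Return (new camera-to-antenna offset, displacement from the neutral offset).

    neutral_offset: (x1, y1, z1) camera-to-antenna offset in the neutral pose
    neutral_rpy / current_rpy: (roll, pitch, yaw) of the rig at the two poses
    """
    # Rotation describing how the rig has tilted away from the neutral pose
    delta_rot = rotation_matrix(*current_rpy) @ rotation_matrix(*neutral_rpy).T
    new_offset = delta_rot @ np.asarray(neutral_offset, float)
    displacement = np.linalg.norm(new_offset - neutral_offset)  # "diagonal" length
    return new_offset, displacement

# Pole-mounted antenna 2 m above the camera; the rig pitches forward by 30 degrees.
offset, shift = updated_offset(neutral_offset=[0.0, 0.0, 2.0],
                               neutral_rpy=(0.0, 0.0, 0.0),
                               current_rpy=(0.0, np.radians(30), 0.0))
print(offset, shift)   # antenna moves forward and down; shift is roughly 1.04 m
```

The printed displacement illustrates the meter-scale shift described above for a tilted pole-mounted antenna.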
At block214, with the updated offset value indicating the placement of the positioning device relative to the MR viewpoint, the correction module176adjusts the placement of georeferenced/geospatial virtual objects in MR relative to the MR viewpoint. Using the offsets, the correction module176determines the position of the camera of the MR device relative to the positioning device. The correction module176can determine a corrected position of the camera of the MR device using the offset, the viewing direction of the MR device, and the absolute position of the positioning device. From the corrected position, the MR viewpoint (i.e., the camera pose in the virtual space) can be determined relative to other virtual geospatial objects. The correction module176can instruct the MR device how to render the virtual geospatial objects on the MR display to render them positionally accurately (position, orientation, and scale) in relation to the physical world. In some cases, if a substantial horizontal offset between the visualization viewpoint and the positioning device exists, the positioning module170can track direction of movement. The tracking is performed by calculating a movement angle between the direction of the movement and the virtual viewpoint direction, and incorporating such movement into the offset determination described herein. The system150can repeat blocks206to214either periodically (e.g., 40 times per second), when input is received via the user interface156, or at any other suitable time when the updated offset is to be determined. FIG.14is a flowchart showing an example of applying movement compensation to detect the position of the visualization viewpoint relative to the positioning device.FIG.9illustrates an example of such movement compensation. At1402, the positioning module170detects the vector representing the heading or orientation of the visualization viewpoint by, for example, determining an array of geographic coordinate points based on their time stamp, and then determining a trend line that forms the orientation vector. The heading is represented by vectors V12, V22and V32in the example ofFIG.9. At1404, the positioning module170determines the direction of movement. The direction can be determined using any suitable approach; for example, compass and GNSS/MR path convergence. The direction of movement is represented by vectors V11, V21and V31in the example ofFIG.9. At1406, the offset module174determines an offset by determining an angle between the heading of the visualization viewpoint and the direction of movement. The determined angle is represented as the angle between V11and V12, V21and V22, and V31and V32in the example ofFIG.9. The angle is determined by any suitable approach, such as using trigonometric equations (e.g., calculating sides of the right triangles). The offset module174uses the determined angle to update x and y offsets between the visualization viewpoint and the positioning device. At block1408, using the determined offsets, the correction module176adjusts the placement of georeferenced virtual objects in MR relative to the MR viewpoint. While the mediated reality device192can include a camera to capture a physical scene and a screen to display a mixture of physical and virtual objects, it is contemplated that any apparatus for blending virtual and real objects can be used; for example, a holographic system that displays holographic augmentation or projects holograms.
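As a sketch of the movement compensation at blocks1402-1408(a simplified 2D treatment under assumed conventions, not the exact computation of this disclosure), the movement direction can be fit as a trend line through recent positions and the trailing offset rotated by the angle between the viewpoint direction and the movement direction:

```python
import numpy as np

def direction_from_track(points: np.ndarray) -> np.ndarray:
    """Fit a trend line through time-ordered 2D positions and return its unit direction."""
    t = np.arange(len(points))
    vx = np.polyfit(t, points[:, 0], 1)[0]   # slope of x over time
    vy = np.polyfit(t, points[:, 1], 1)[0]   # slope of y over time
    v = np.array([vx, vy])
    return v / np.linalg.norm(v)

def compensated_offset(viewpoint_dir: np.ndarray, movement_dir: np.ndarray,
                       trailing_distance: float) -> np.ndarray:
    """Offset of the antenna in viewpoint coordinates (x = right, y = forward),
    assuming the antenna trails behind the user along the movement path."""
    angle = np.arctan2(movement_dir[1], movement_dir[0]) - \
            np.arctan2(viewpoint_dir[1], viewpoint_dir[0])
    return trailing_distance * np.array([np.sin(angle), -np.cos(angle)])

# User faces +y but walks along +x (sideways); antenna trails 0.4 m along the path.
track = np.array([[0.0, 0.0], [0.2, 0.0], [0.4, 0.0], [0.6, 0.0]])
move_dir = direction_from_track(track)
offset_xy = compensated_offset(viewpoint_dir=np.array([0.0, 1.0]),
                               movement_dir=move_dir,
                               trailing_distance=0.4)
print(offset_xy)   # about [-0.4, 0.0]: the antenna is beside the viewpoint, not behind it
```

When the user walks straight ahead the same code returns an offset of about (0, -0.4), i.e., the antenna stays behind the viewpoint, which matches the behavior described above.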
Although the foregoing has been described with reference to certain specific embodiments, various modifications thereto will be apparent to those skilled in the art without departing from the spirit and scope of the invention as outlined in the appended claims. The entire disclosures of all references recited above are incorporated herein by reference. | 31,247 |
11861865 | In the drawings, like reference numbers generally indicate identical or similar elements. Additionally, generally, the left-most digit(s) of a reference number identifies the drawing in which the reference number first appears. DETAILED DESCRIPTION Provided herein are system, apparatus, device, method and/or computer program product embodiments, and/or combinations and sub-combinations thereof, for autonomous vehicle pose validation. The process for automating an autonomous vehicle typically includes several phases: sensing, map creation, localization, perception, prediction, routing, motion planning, and control modules. Localization includes the task of finding the autonomous vehicle pose relative to a position on a high-definition cloud map. Legacy autonomous vehicle systems utilize registration algorithms to localize a vehicle's position and orientation in the world prior to navigating traffic. To determine an autonomous vehicle pose, autonomous vehicle systems may use a high definition map, in accordance with aspects of the disclosure. This high definition map is a dense 3D point cloud containing data from several lidar sweeps over time. The high definition map may also contain important semantic features representing various traffic rules. Localization is crucial because, by combining the dense 3D point cloud and semantic features, the autonomous vehicle system uses important prior information from the detailed high-definition map to derive the autonomous vehicle pose. The high definition map, derived from a 3D point cloud stitching of several lidar sweeps over time in accordance with aspects of the disclosure, may be created offline. At runtime, an autonomous vehicle system localizes the autonomous vehicle to a particular destination in the high definition map using an initial pose estimate from GPS information. In addition to the initial pose estimate in the high definition map, the autonomous vehicle system may use real-time lidar data to localize the autonomous vehicle. Autonomous vehicle systems can be configured to combine the 3D point cloud generated from real-time lidar data with the high definition map to generate an autonomous vehicle pose solution. However, when aligning these two 3D point clouds, the autonomous vehicle system may derive an invalid autonomous vehicle pose solution containing a rotation error, a translation error, an error from various occlusions, or an inaccurate GPS initial pose estimate, etc. An error may be introduced at every step of computing the autonomous vehicle pose solution. As a result, human operators still need to remain involved in validating autonomous vehicle pose solutions. However, this introduces human error into the localization process. For example, a human operator may inaccurately validate a false positive autonomous vehicle pose solution. This presents dangerous ramifications because the autonomous vehicle may be navigating traffic based on an incorrect position and orientation. As a result, a solution is needed to automate autonomous vehicle pose validation. This would bring the transportation industry a step closer to reaching a fully autonomous vehicle system and reducing human error from localization. The term “vehicle” refers to any moving form of conveyance that is capable of carrying either one or more human occupants and/or cargo and is powered by any form of energy. The term “vehicle” includes, but is not limited to, cars, trucks, vans, trains, autonomous vehicles, aircraft, aerial drones and the like. 
An “autonomous vehicle” (or “AV”) is a vehicle having a processor, programming instructions and drivetrain components that are controllable by the processor without requiring a human operator. An autonomous vehicle may be fully autonomous in that it does not require a human operator for most or all driving conditions and functions, or it may be semi-autonomous in that a human operator may be required in certain conditions or for certain operations, or that a human operator may override the vehicle's autonomous system and may take control of the vehicle. Notably, the present solution is being described herein in the context of an autonomous vehicle. However, the present solution is not limited to autonomous vehicle applications. The present solution may be used in other applications such as robotic applications, radar system applications, metric applications, and/or system performance applications. FIG.1illustrates an exemplary autonomous vehicle system100, in accordance with aspects of the disclosure. System100comprises a vehicle102athat is traveling along a road in a semi-autonomous or autonomous manner. Vehicle102ais also referred to herein as AV102a. AV102acan include, but is not limited to, a land vehicle (as shown inFIG.1), an aircraft, or a watercraft. AV102ais generally configured to detect objects102b,114,116in proximity thereto. The objects can include, but are not limited to, a vehicle102b, cyclist114(such as a rider of a bicycle, electric scooter, motorcycle, or the like) and/or a pedestrian116. As illustrated inFIG.1, the AV102amay include a sensor system111, an on-board computing device113, a communications interface117, and a user interface115. Autonomous vehicle101may further include certain components (as illustrated, for example, inFIG.2) included in vehicles, which may be controlled by the on-board computing device113using a variety of communication signals and/or commands, such as, for example, acceleration signals or commands, deceleration signals or commands, steering signals or commands, braking signals or commands, etc. The sensor system111may include one or more sensors that are coupled to and/or are included within the AV102a, as illustrated inFIG.2. For example, such sensors may include, without limitation, a lidar system, a radio detection and ranging (RADAR) system, a laser detection and ranging (LADAR) system, a sound navigation and ranging (SONAR) system, one or more cameras (e.g., visible spectrum cameras, infrared cameras, etc.), temperature sensors, position sensors (e.g., global positioning system (GPS), etc.), location sensors, fuel sensors, motion sensors (e.g., inertial measurement units (IMU), etc.), humidity sensors, occupancy sensors, or the like. The sensor data can include information that describes the location of objects within the surrounding environment of the AV102a, information about the environment itself, information about the motion of the AV102a, information about a route of the vehicle, or the like. As AV102atravels over a surface, at least some of the sensors may collect data pertaining to the surface. As will be described in greater detail in association withFIG.3, AV102amay be configured with a lidar system300. Lidar system300may include a light emitter system304(transmitter) that transmits a light pulse104to detect objects located within a distance or range of distances of AV102a. Light pulse104may be incident on one or more objects (e.g., AV102b) and be reflected back to lidar system300. 
Reflected light pulse106incident on light detector308is processed by lidar system300to determine a distance of that object to AV102a. Light detector308may, in some embodiments, contain a photodetector or array of photodetectors positioned and configured to receive the light reflected back into the system. Lidar information, such as detected object data, is communicated from lidar system300to an on-board computing device220(FIG.2). AV102amay also communicate lidar data to a remote computing device110(e.g., cloud processing system) over communications network108. Remote computing device110may be configured with one or more servers to process one or more processes of the technology described herein. Remote computing device110may also be configured to communicate data/instructions to/from AV102aover network108, to/from server(s) and/or database(s)112. It should be noted that the lidar systems for collecting data pertaining to the surface may be included in systems other than the AV102asuch as, without limitation, other vehicles (autonomous or driven), robots, satellites, etc. Network108may include one or more wired or wireless networks. For example, the network108may include a cellular network (e.g., a long-term evolution (LTE) network, a code division multiple access (CDMA) network, a 3G network, a 4G network, a 5G network, another type of next generation network, etc.). The network may also include a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a telephone network (e.g., the Public Switched Telephone Network (PSTN)), a private network, an ad hoc network, an intranet, the Internet, a fiber optic-based network, a cloud computing network, and/or the like, and/or a combination of these or other types of networks. AV102amay retrieve, receive, display, and edit information generated from a local application or delivered via network108from database112. Database112may be configured to store and supply raw data, indexed data, structured data, map data, program instructions or other configurations as is known. The communications interface117may be configured to allow communication between AV102aand external systems, such as, for example, external devices, sensors, other vehicles, servers, data stores, databases etc. The communications interface117may utilize any now or hereafter known protocols, protection schemes, encodings, formats, packaging, etc. such as, without limitation, Wi-Fi, an infrared link, Bluetooth, etc. The user interface115may be part of peripheral devices implemented within the AV102aincluding, for example, a keyboard, a touch screen display device, a microphone, and a speaker, etc. FIG.2illustrates an exemplary system architecture200for a vehicle, in accordance with aspects of the disclosure. Vehicles102aand/or102bofFIG.1can have the same or similar system architecture as that shown inFIG.2. Thus, the following discussion of system architecture200is sufficient for understanding vehicle(s)102a,102bofFIG.1. However, other types of vehicles are considered within the scope of the technology described herein and may contain more or less elements as described in association withFIG.2. As a non-limiting example, an airborne vehicle may exclude brake or gear controllers, but may include an altitude sensor. In another non-limiting example, a water-based vehicle may include a depth sensor. 
One skilled in the art will appreciate that other propulsion systems, sensors and controllers may be included based on a type of vehicle, as is known. As shown inFIG.2, system architecture200includes an engine or motor202and various sensors204-218for measuring various parameters of the vehicle. In gas-powered or hybrid vehicles having a fuel-powered engine, the sensors may include, for example, an engine temperature sensor204, a battery voltage sensor206, an engine Rotations Per Minute ("RPM") sensor208, and a throttle position sensor210. If the vehicle is an electric or hybrid vehicle, then the vehicle may have an electric motor, and accordingly includes sensors such as a battery monitoring system212(to measure current, voltage and/or temperature of the battery), motor current214and voltage216sensors, and motor position sensors218such as resolvers and encoders. Operational parameter sensors that are common to both types of vehicles include, for example: a position sensor236such as an accelerometer, gyroscope and/or inertial measurement unit; a speed sensor238; and an odometer sensor240. The vehicle also may have a clock242that the system uses to determine vehicle time during operation. The clock242may be encoded into the vehicle on-board computing device, it may be a separate device, or multiple clocks may be available. The vehicle also includes various sensors that operate to gather information about the environment in which the vehicle is traveling. These sensors may include, for example: a location sensor260(e.g., a Global Positioning System ("GPS") device); object detection sensors such as one or more cameras262; a lidar system264; and/or a radar and/or a sonar system266. The sensors also may include environmental sensors268such as a precipitation sensor and/or ambient temperature sensor. The object detection sensors may enable the vehicle to detect objects that are within a given distance range of the vehicle200in any direction, while the environmental sensors collect data about environmental conditions within the vehicle's area of travel. During operations, information is communicated from the sensors to a vehicle on-board computing device220. The vehicle on-board computing device220analyzes the data captured by the sensors and optionally controls operations of the vehicle based on results of the analysis. For example, the vehicle on-board computing device220may control: braking via a brake controller222; direction via a steering controller224; speed and acceleration via a throttle controller226(in a gas-powered vehicle) or a motor speed controller228(such as a current level controller in an electric vehicle); a differential gear controller230(in vehicles with transmissions); and/or other controllers. Auxiliary device controller254may be configured to control one or more auxiliary devices, such as testing systems, auxiliary sensors, mobile devices transported by the vehicle, etc. Geographic location information may be communicated from the location sensor260to the on-board computing device220, which may then access a map of the environment that corresponds to the location information to determine known fixed features of the environment such as streets, buildings, stop signs and/or stop/go signals. Captured images from the cameras262and/or object detection information captured from sensors such as the lidar system264are communicated from those sensors to the on-board computing device220.
The object detection information and/or captured images are processed by the on-board computing device220to detect objects in proximity to the vehicle200. Any known or to be known technique for making an object detection based on sensor data and/or captured images can be used in the embodiments disclosed in this document. Lidar information is communicated from lidar system264to the on-board computing device220. Additionally, captured images are communicated from the camera(s)262to the vehicle on-board computing device220. The lidar information and/or captured images are processed by the vehicle on-board computing device220to detect objects in proximity to the vehicle200. The manner in which the object detections are made by the vehicle on-board computing device220includes such capabilities detailed in this disclosure. When the vehicle on-board computing device220detects a moving object, the vehicle on-board computing device220generates one or more possible object trajectories for the detected object, and analyzes the possible object trajectories to assess the probability of a collision between the object and the AV. If the probability exceeds an acceptable threshold, the on-board computing device220performs operations to determine whether the collision can be avoided if the AV follows a defined vehicle trajectory and/or performs one or more dynamically generated emergency maneuvers in a pre-defined time period (e.g., N milliseconds). If the collision can be avoided, then the vehicle on-board computing device220may cause the vehicle200to perform a cautious maneuver (e.g., mildly slow down, accelerate, or swerve). In contrast, if the collision cannot be avoided, then the vehicle on-board computing device220causes the vehicle200to take an emergency maneuver (e.g., brake and/or change direction of travel). FIG.3illustrates an exemplary architecture for a lidar system300, in accordance with aspects of the disclosure. Lidar system264ofFIG.2may be the same as or substantially similar to the lidar system300. As such, the discussion of lidar system300is sufficient for understanding lidar system264ofFIG.2. As shown inFIG.3, the lidar system300includes a housing306which may be rotatable 360° about a central axis such as hub or axle315of motor316. The housing may include an emitter/receiver aperture312made of a material transparent to light. Although a single aperture is shown inFIG.3, the present solution is not limited in this regard. In other scenarios, multiple apertures for emitting and/or receiving light may be provided. Either way, the lidar system300can emit light through one or more of the aperture(s)312and receive reflected light back toward one or more of the aperture(s)312as the housing306rotates around the internal components. In an alternative scenario, the outer shell of housing306may be a stationary dome, at least partially made of a material that is transparent to light, with rotatable components inside of the housing306. Inside the rotating shell or stationary dome is a light emitter system304that is configured and positioned to generate and emit pulses of light through the aperture312or through the transparent dome of the housing306via one or more laser emitter chips or other light emitting devices. The light emitter system304may include any number of individual emitters (e.g., 8 emitters, 64 emitters, or 128 emitters). The emitters may emit light of substantially the same intensity or of varying intensities.
The lidar system also includes a light detector308containing a photodetector or array of photodetectors positioned and configured to receive light reflected back into the system. The light emitter system304and light detector308would rotate with the rotating shell, or they would rotate inside the stationary dome of the housing306. One or more optical element structures310may be positioned in front of the light emitter system304and/or the light detector308to serve as one or more lenses or waveplates that focus and direct light that is passed through the optical element structure310. One or more optical element structures310may be positioned in front of a mirror (not shown) to focus and direct light that is passed through the optical element structure310. As shown below, the system includes an optical element structure310positioned in front of the mirror and connected to the rotating elements of the system so that the optical element structure310rotates with the mirror. Alternatively or in addition, the optical element structure310may include multiple such structures (for example lenses and/or waveplates). Optionally, multiple optical element structures310may be arranged in an array on or integral with the shell portion of the housing306. Lidar system300includes a power unit318to power the light emitter system304, a motor316, and electronic components. Lidar system300also includes an analyzer314with elements such as a processor322and non-transitory computer-readable memory320containing programming instructions that are configured to enable the system to receive data collected by the light detector unit, analyze it to measure characteristics of the light received, and generate information that a connected system can use to make decisions about operating in an environment from which the data was collected. Optionally, the analyzer314may be integral with the lidar system300as shown, or some or all of it may be external to the lidar system and communicatively connected to the lidar system via a wired or wireless communication network or link. FIG.4illustrates an example of real-time lidar coordinates collected from a real-time lidar sweep, in accordance with aspects of the disclosure.FIG.4is described with reference toFIGS.1-3. Localization enables vehicle on-board computing device220to ascertain the position and orientation of autonomous vehicle102in the world before navigating traffic. Lidar system264may scan the environment surrounding autonomous vehicle102to collect lidar information. On-board computing device220may use real-time lidar coordinates405retrieved from a real-time lidar sweep410to localize autonomous vehicle102. Real-time lidar coordinates405may consist of 3D spherical coordinates obtained from sensing the surroundings of autonomous vehicle102, which may be represented as P(x, y, z). The 3D spherical coordinates may represent an observed range (r)415, an elevation angle (φ)420, and an azimuth angle (θ)425. Observed range415represents the range between light emitter system304and real-time lidar coordinates405retrieved during real-time lidar sweep410. Elevation angle420represents the angle between the plane extending from the xy plane of lidar system264and real-time lidar coordinates405. To obtain observed range415, light emitter system304may generate and emit pulses of light via one or more laser emitter chips. Analyzer314may comprise a stopwatch that begins counting when the pulses travel outwards from lidar system264towards a target object. 
Light detector308containing a photodetector may receive light reflected back into the system. Analyzer314may determine the observed range415between lidar system264and the target object by computing half of the product of the speed of the pulse and the time elapsed between when the light emitter system304emitted the pulse and when light detector308received the pulse. Moreover, analyzer314may determine elevation angle (φ)420and azimuth angle (θ)425based upon the orientation of the mirror. Using these measurements, analyzer314may derive real-time lidar coordinates405by converting the 3D spherical coordinates (r, θ, φ) to Cartesian coordinates as follows: P(x, y, z)=P(r cos θ cos φ, r sin θ cos φ, r sin φ) Lidar system264may communicate real-time lidar coordinates405and other lidar information collected from the real-time lidar sweep410to vehicle on-board computing device220. On-board computing device220may generate a query point cloud430(not shown), which may be a 3D point cloud containing the real-time lidar coordinates405retrieved from lidar system264during the real-time lidar sweep410. FIG.5illustrates an example of an initial vehicle pose estimate in a high-definition map, in accordance with aspects of the disclosure.FIG.5is described with reference toFIGS.1-4. To localize an autonomous vehicle, vehicle on-board computing device220may determine the autonomous vehicle pose relative to a reference position in a high-definition map505. High-definition map505may be a visual representation of a 3D environment and various semantic features representing traffic rules. High-definition map505may include a reference point cloud510containing 3D geometric information of various objects surrounding autonomous vehicle102. High-definition map505may be a map comprising 3D point cloud information and other relevant information regarding a vehicle's surroundings. Lidar system264may transmit to vehicle on-board computing device220lidar coordinates from several lidar sweeps collected from more than one autonomous vehicle102over time. Using 3D point cloud processing, vehicle on-board computing device220may generate reference point cloud510by aggregating data collected from multiple lidar sweeps over time. Reference point cloud510may contain reference point cloud coordinates520from previous lidar sweeps. In some embodiments, high-definition map505may be a previously generated offline map from which vehicle on-board computing device220retrieves reference point cloud coordinates520. High-definition map505may also visually depict semantic features525representing various traffic rules, such as, traffic lanes, traffic signs, road boundaries, etc. On-board computing device220may use machine learning techniques (e.g., convolutional neural networks) known to a person of ordinary skill in the art to extract various features from images from the cameras262and/or ground images captured from lidar system264. The values of each pixel in the lidar ground images may include the ground height and the laser reflectivity of each lidar beam. On-board computing device220may then segment the lidar ground images using machine learning techniques based on the values of each pixel. On-board computing device220may use image segmentation machine learning techniques known to a person of ordinary skill in the art to extract labels for various features from the lidar ground images and camera images. On-board computing device220may project the semantic labels for the various semantic features525into the reference point cloud510of the high-definition map505.
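The time-of-flight range computation and the spherical-to-Cartesian relationship given earlier in this passage can be illustrated with a small numeric sketch. This is illustrative only; the variable names, the use of Python/NumPy, and the assumed pulse speed (the speed of light) are not part of the disclosure.

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # m/s, assumed pulse speed

def observed_range_from_time_of_flight(elapsed_s: float) -> float:
    # Half the product of the pulse speed and the round-trip time (out and back).
    return 0.5 * SPEED_OF_LIGHT * elapsed_s

def spherical_to_cartesian(r: float, azimuth: float, elevation: float) -> np.ndarray:
    # Matches the convention used above: P(r cos θ cos φ, r sin θ cos φ, r sin φ),
    # with θ the azimuth angle and φ the elevation angle, both in radians.
    return np.array([
        r * np.cos(azimuth) * np.cos(elevation),
        r * np.sin(azimuth) * np.cos(elevation),
        r * np.sin(elevation),
    ])

# Example: a pulse returning after roughly 66.7 ns corresponds to a ~10 m observed range.
r = observed_range_from_time_of_flight(66.7e-9)
point = spherical_to_cartesian(r, azimuth=np.deg2rad(30.0), elevation=np.deg2rad(5.0))
print(round(r, 2), np.round(point, 2))
```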
As a result, high-definition map505may result in a combination of the reference point cloud510obtained from previous lidar sweeps and a semantic feature map containing semantic features525. High-definition map505may be an offline map, which is given a particular destination at runtime. High-definition map505plays an important role in providing prior information about an environment prior to the autonomous vehicle navigating traffic. The reference point cloud510obtained from previous lidar sweeps over time and the semantic features525provide an important guidepost for autonomous vehicle localization. The prior information from reference point cloud510may be used to register query point cloud430to the high-definition map505. This enables vehicle on-board computing device220to localize the autonomous vehicle pose in real-time. On-board computing device220may use high-definition map505to calibrate the autonomous vehicle pose with global precision. Accordingly, vehicle on-board computing device220may derive an initial pose estimate530relative to high-definition map505using GPS information. GPS device260may retrieve satellite data representing the global location of autonomous vehicle102. GPS device260may calibrate the position and orientation of autonomous vehicle102to identify its initial vehicle pose estimate530. The initial vehicle pose estimate530may be represented as a set of 2D or 3D coordinates. An initial vehicle pose estimate530represented as a set of 2D coordinates may be described, by way of non-limiting example, by the coordinates (x, y, azimuth angle (θ)425). An initial vehicle pose estimate530represented as a set of 3D coordinates may be described, by way of non-limiting example, by the coordinates (x, y, z, roll angle, pitch angle, yaw angle), as described further inFIG.6B. GPS device260may transmit the initial vehicle pose estimate530to vehicle on-board computing device220. On-board computing device220may generate the initial pose estimate530in high-definition map505. On-board computing device220may retrieve map tiles within a specified range of the initial vehicle pose estimate530. Each map tile surrounding the initial pose estimate530may include a data structure for organizing coordinates into reference point cloud510. On-board computing device220may use the data stored in the data structure for each map tile into a coordinate system comprising reference point cloud coordinates520. Using the updated high-definition map505with the initial pose estimate530, vehicle on-board computing device220can compare the query point cloud430to reference point cloud510. FIG.6illustrates an example of aligning a query point cloud and reference point cloud, according to some embodiments.FIG.6is described with reference toFIGS.1-5. A precise high-definition map505is crucial for registering query point cloud430in order to determine an autonomous vehicle pose. On-board computing device220may derive the autonomous vehicle pose from the initial pose estimate530in reference point cloud510and query point cloud430. As discussed inFIG.5, 3D point cloud registration enables autonomous vehicle systems to create highly precise high-definition maps useful for various stages of autonomous driving. High-definition map505may be built offline, and on-board computing device220may download relevant subsections of high-definition map505during runtime. On-board computing device220may align query point cloud430to high-definition map505. 
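The initial pose estimate and the retrieval of map tiles within a specified range of that estimate, as described above, might be sketched as follows. The pose convention (x, y, z, roll, pitch, yaw), the 2D distance test, and the radius value are assumptions made for illustration rather than requirements of the disclosure.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class PoseEstimate3D:
    # 3D pose convention described above: (x, y, z, roll angle, pitch angle, yaw angle).
    x: float
    y: float
    z: float
    roll: float
    pitch: float
    yaw: float

def tiles_within_range(tile_centers: np.ndarray, pose: PoseEstimate3D,
                       radius_m: float) -> np.ndarray:
    """Return indices of map tiles whose centers lie within radius_m of the
    initial pose estimate (2D distance in the map plane)."""
    deltas = tile_centers[:, :2] - np.array([pose.x, pose.y])
    return np.flatnonzero(np.linalg.norm(deltas, axis=1) <= radius_m)

# Example: a GPS-derived initial pose and a handful of tile centers (x, y).
pose = PoseEstimate3D(100.0, 250.0, 0.0, roll=0.0, pitch=0.0, yaw=1.2)
tile_centers = np.array([[90.0, 240.0], [400.0, 400.0], [110.0, 260.0]])
print(tiles_within_range(tile_centers, pose, radius_m=50.0))  # -> [0 2]
```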
However, for every new query point cloud430collected from a new real-time lidar sweep410, on-board computing device220may need to recalculate an autonomous vehicle pose solution. However, as shown inFIG.6A, using the initial vehicle pose estimate530to align the query point cloud430(black coordinates) and reference point cloud510(white coordinates) may result in misalignment. On-board computing device may be unable to localize the pose of an autonomous vehicle until the position of the query point cloud430is aligned relative to high-definition map505. To align query point cloud430and reference point cloud510, vehicle on-board computing device220may compute a 3D rotation and 3D translation of query point cloud430and reference point cloud510. FIG.6Bdepicts a 3D rotation of autonomous vehicle102. On-board computing device220may rotate query point cloud430to correct a rotation misalignment between query point cloud430and reference point cloud510. The query point cloud430may be rotated about the z-axis at a yaw angle615. In some embodiments, query point cloud430may be transformed, through rotation and translation, or by applying a SE(3) transformation to align the query point cloud430to the reference point cloud510. Additionally, vehicle on-board computing device220may translate query point cloud430to correct a translation misalignment between the query point cloud430and reference point cloud510, as shown inFIG.6D. The coordinates of autonomous vehicle102may need to be shifted based on a translation differential630, which may be a translation vector representing the distance between initial pose estimate530and the pose estimate computed by the registration algorithm. Accordingly, vehicle on-board computing device220may translate the query point cloud430to reference point cloud510by shifting the query point cloud430in accordance with the translation differential630. Using the 3D translation and 3D rotation of query point cloud430, vehicle on-board computing device220may concatenate the translation and rotation into a 4×4 transformation matrix, which enables vehicle on-board computing device220to reduce misalignment between query point cloud430and reference point cloud510. On-board computing device220may use various registration algorithms (e.g., iterative closest point algorithm, robust point matching, etc.) to find an approximately accurate alignment between query point cloud430and reference point cloud510. The registration algorithms may be configured to determine the optimal rotation and translation of reference point cloud510. As shown inFIG.6Efor illustration purposes only, vehicle on-board computing device220may employ the iterative closest point algorithm to determine the optimal transformation matrix in order to minimize the distance between query point cloud430and reference point cloud510. On-board computing device220may associate the reference point cloud coordinates520with the nearest real-time lidar coordinates405and converge the query point cloud430into reference point cloud510. To converge the query point cloud430into reference point cloud510, vehicle on-board computing device220may solve for an optimal transformation matrix for query point cloud430. On-board computing device220may compute the centroids for query point cloud430and reference point cloud510and compute a rotation matrix reflecting the distance between the query point cloud430and the reference point cloud510. 
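The centroid and rotation computation described above is the core of the SVD-based (Kabsch) step used inside many iterative-closest-point implementations. The sketch below assumes the query and reference points have already been put into one-to-one correspondence (for example, by nearest-neighbor association), which the full registration algorithm would iterate; it is not presented as the exact registration routine of the disclosure.

```python
import numpy as np

def rigid_transform_4x4(query_pts: np.ndarray, reference_pts: np.ndarray) -> np.ndarray:
    """Estimate the 4x4 transform aligning corresponding query points (Nx3)
    to reference points (Nx3): centroid alignment plus SVD-based rotation."""
    q_centroid = query_pts.mean(axis=0)
    r_centroid = reference_pts.mean(axis=0)
    H = (query_pts - q_centroid).T @ (reference_pts - r_centroid)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = r_centroid - R @ q_centroid   # translation differential
    T = np.eye(4)                     # concatenate rotation and translation into 4x4
    T[:3, :3], T[:3, 3] = R, t
    return T

# Example: a query cloud that is a rotated and translated copy of the reference.
ref = np.random.default_rng(0).uniform(-5, 5, size=(100, 3))
yaw = np.deg2rad(15.0)
Rz = np.array([[np.cos(yaw), -np.sin(yaw), 0], [np.sin(yaw), np.cos(yaw), 0], [0, 0, 1]])
query = (ref - np.array([1.0, -2.0, 0.5])) @ Rz.T
T = rigid_transform_4x4(query, ref)
aligned = (np.c_[query, np.ones(len(query))] @ T.T)[:, :3]
print(np.abs(aligned - ref).max())    # close to 0 for exact correspondences
```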
Using the optimal rotation matrix, vehicle on-board computing device220may obtain the optimal translation vector by aligning the centroids of the query point cloud430to the reference point cloud510. On-board computing device220may repeat this process until query point cloud430and reference point cloud510converges. On-board computing device220may use any registration algorithm known to a person of ordinary skill in the art to align query point cloud430to reference point cloud510. As a result, vehicle on-board computing device220may derive an optimal transformation matrix aligning reference point cloud510to query point cloud430and generate a refined 3D point cloud reflecting alignment between the query point cloud430and reference point cloud510. The resulting refined 3D point cloud may be the localization point cloud solution735representing the registered autonomous vehicle pose. From a 2D perspective, the resulting localization point cloud solution735may represent alignment between query point cloud430and reference point cloud510. As shown inFIG.7, the real-time lidar coordinates405and reference point cloud coordinates520are approximately aligned, which may reflect a valid localization point cloud solution735. It is understood that errors in autonomous vehicle localization solutions may occur, in accordance with aspects of the disclosure. In an embodiment, high-definition map505may represent the surroundings of autonomous vehicle102with centimeter level accuracy. Moreover, legacy autonomous vehicle systems utilize various registration algorithms, such as the iterative closest point algorithm, to refine an initial pose estimate530retrieved from satellite data with real-time lidar coordinates405. However, legacy registration algorithms may produce misaligned pose solution for an autonomous vehicle pose depending on the configuration parameters of the registration algorithm. For example, a rotation error may result in a refined localization point cloud solution735. In this example, the autonomous vehicle system may utilize an iterative closest point algorithm to align a query point cloud430and reference point cloud510. However, the resulting localization point cloud solution735may contain a rotation error, in which the yaw angle615rotation of the query point cloud430was inaccurate by approximately 180 degrees. In another example, an error in the initial pose estimate530may occur due to an occlusion, such as a bush. This error may result in an inaccurate depiction of the surroundings of autonomous vehicle102in localization point cloud solution735. To provide an additional check to autonomous vehicle pose validation, human operators have reviewed a 2D or 3D representation of a localization point cloud solution735to determine the validity of an autonomous vehicle pose. In some embodiments, vehicle on-board computing device220may generate an autonomous vehicle pose solution from a 2D perspective based on query point cloud430and reference point cloud510. On-board computing device220may generate a lidar frame605representing a 2D frame of the query point cloud430at the time stamp when lidar system264collected the real-time lidar coordinates405and lidar information and a standardized global map frame610representing a 2D frame of reference point cloud510. On-board computing devices may overlay the 2D representation of the standardized global map frame610on lidar frame605. A human operator may view the two frames from a top-down 2D perspective to determine whether the autonomous vehicle pose is valid. 
However, this approach may result in human error to autonomous vehicle localization. A human operator may not always recognize false positives in localization point cloud solution735and may inaccurately validate an invalid autonomous vehicle pose. If a human operator does not recognize the false positive in the localization point cloud solution735and places the autonomous vehicle102in autonomous mode, this may result in dangerous ramifications, such as, placing the autonomous vehicle in the wrong lane. Therefore, a technical solution is needed to automate autonomous vehicle pose validation. FIG.8Ais an example of a range image used for automated autonomous vehicle pose validation, in accordance with aspects of the disclosure. Various autonomous vehicle localization algorithms have been developed to register an autonomous vehicle pose estimate. While these algorithms have improved autonomous vehicle localization, localization point cloud solution735may still result in alignment errors and still requires further validation from a human operator. On-board computing device220may automate validation of an autonomous vehicle pose estimate in localization point cloud solution735using statistical and machine learning methods. The use of these methods may significantly reduce the incorrect validation of false positive localization solutions produced from legacy registration algorithms and human error. On-board computing device220may automate validation of the vehicle pose by generating a range image805from the refined localization point cloud solution735. On-board computing device220may determine whether the localization point cloud solution735accurately estimates the autonomous vehicle pose by determining whether the data retrieved from range image805approximately represents observed values and data retrieved from real-time lidar sweep410. On-board computing device220may use 3D projection techniques known to a person of ordinary skill in the art to render range image805from localization point cloud solution735. On-board computing device220may categorize various features extracted from range image805with a predicted class label815. For example, predicted class label815may include labels for features, such as, but not limited to, ground, road, sidewalk, building, wall fence, bridge, tunnel, pole, traffic light, traffic sign, vegetation, terrain, etc. On-board computing device220may extract a predicted range820corresponding to a predicted class label815for each lidar beam810. For purposes of illustration, vehicle on-board computing device220may use a rasterization rendering technique to generate range image805. On-board computing device220may retrieve map tiles from localization point cloud solution735within a specified range of the vehicle pose estimate. Each map tile from localization point cloud solution735may include a data structure (e.g., k-d tree) for organizing coordinates into a point cloud. On-board computing device220may use the data stored in the data structure for each map tile into a lidar coordinate system. Lidar system264may project the coordinates from localization point cloud solution735into range image805. On-board computing device220may create a square surfel for the coordinates from localization point cloud solution735and may project each square surfel as two triangles into range image805. On-board computing device220may rasterize the triangles in range image805. 
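The surfel/triangle rasterization described above can be approximated, for illustration, by a much simpler point-splatting renderer: project each map point into an azimuth/elevation grid and keep the nearest (depth-buffered) range and class label per pixel. The field-of-view limits, image size, and function names below are assumptions and not the patented renderer.

```python
import numpy as np

def render_range_image(points_xyz: np.ndarray, labels: np.ndarray,
                       h: int = 64, w: int = 900):
    """Depth-buffered projection of labeled map points (sensor frame, Nx3) into a
    range image: per pixel, keep the nearest predicted range and its class label."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    dist = np.linalg.norm(points_xyz, axis=1)
    azimuth = np.arctan2(y, x)                                  # [-pi, pi]
    elevation = np.arcsin(np.clip(z / np.maximum(dist, 1e-9), -1.0, 1.0))
    fov_up, fov_down = np.deg2rad(15.0), np.deg2rad(-25.0)      # assumed sensor field of view
    cols = ((azimuth + np.pi) / (2 * np.pi) * w).astype(int) % w
    rows = ((fov_up - elevation) / (fov_up - fov_down) * h).clip(0, h - 1).astype(int)
    pred_range = np.full((h, w), np.inf)
    pred_label = np.full((h, w), -1, dtype=int)
    for i in np.argsort(-dist):       # far-to-near so the nearest point wins each pixel
        pred_range[rows[i], cols[i]] = dist[i]
        pred_label[rows[i], cols[i]] = labels[i]
    return pred_range, pred_label

# Example with a few labeled map points.
pts = np.array([[10.0, 0.0, 0.0], [10.0, 0.1, 0.0], [5.0, 0.0, 0.5]])
rimg, limg = render_range_image(pts, labels=np.array([2, 2, 7]))
print(int(np.isfinite(rimg).sum()))   # number of pixels that received a predicted range
```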
On-board computing device220may encode each rasterized triangle in range image805with values of the predicted range820for each lidar beam810projected from lidar system264to the coordinates in localization point cloud solution735and the corresponding predicted class label815in range image805using image space algorithms (e.g., depth buffer method). Therefore, vehicle on-board computing device220may retrieve the predicted range820and corresponding predicted class label815for each lidar beam810from range image805and the observed range415for each lidar beam from the real-time lidar sweep410. On-board computing device220may validate the localization point cloud solution735using the predicted range820, predicted class label815, and observed range415for each lidar beam. For each lidar beam810, vehicle on-board computing device220may identify localization spherical coordinates825corresponding to the ratio between the predicted range820and observed range415. The localization spherical coordinates825for the ratio between the predicted range820to the observed range415may be represented as follows P(azimuth angle425, pitch angle620, predicted range820/observed range415). These localization spherical coordinates825may represent a unit sphere. Ideally, the ratio between the predicted range820and observed range415for each of the localization spherical coordinates825should be approximately 1. Accordingly, ideal unit sphere830represents a unit sphere with an equal predicted range820and observed range415. On-board computing device220may determine the validity of localization point cloud solution735depending on the percentage of localization spherical coordinates825remaining in the ideal unit sphere830. When a certain percentage of localization spherical coordinates825fall outside the ideal unit sphere830, the localization point cloud solution735may likely contain a rotation or translation error. For purposes of illustration, as shown inFIGS.8B and8D, localization point cloud solution735contains a rotation error and translation error. The query point cloud430generated from the real-time lidar sweep410represents a ground truth measurement of the real-time lidar coordinates405, as depicted by the gray autonomous vehicle102and corresponding dotted lidar beams. In contrast, the black autonomous vehicle102and corresponding lidar beams represent the localization point cloud solution735. The query point cloud430and localization point cloud solution735contains a rotation error, as reflected by the misaligned orientation of the gray autonomous vehicle102and black autonomous vehicle102. As shown inFIGS.8C and8E, the ideal unit sphere830representing the predicted ranges obtained from range image805corresponding to the observed ranges obtained from the ground truth measurements of the real-time lidar sweep410. Ideally, the predicted range of the localization spherical coordinates825obtained from the range image805would equate to the observed range415obtained from the ground truth measurements, as depicted by ideal unit sphere830. With an approximately accurate localization point cloud solution735, the majority of the localization spherical coordinates825would remain within or on ideal unit sphere830. However, when a rotation error occurs, the localization spherical coordinates825fall outside the ideal unit sphere830and may form an ellipsoidal shape outside the ideal unit sphere830, as shown inFIG.8C. 
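A minimal sketch of the unit-sphere check just described: build per-beam spherical coordinates whose radius is the predicted/observed range ratio, then measure how many beams fall outside the ideal unit sphere. The tolerance value and the synthetic example data are assumptions.

```python
import numpy as np

def ratio_coordinates(azimuth, pitch, predicted_range, observed_range):
    """Per-beam spherical coordinates whose radius is predicted/observed range.
    For an accurate solution the radius is ~1, so points hug the ideal unit sphere."""
    ratio = predicted_range / np.maximum(observed_range, 1e-9)
    x = ratio * np.cos(azimuth) * np.cos(pitch)
    y = ratio * np.sin(azimuth) * np.cos(pitch)
    z = ratio * np.sin(pitch)
    return np.stack([x, y, z], axis=1), ratio

def fraction_outside_unit_sphere(ratio: np.ndarray, tolerance: float = 0.05) -> float:
    # A beam is "outside" if its predicted/observed ratio exceeds 1 by more than the tolerance.
    return float(np.mean(ratio > 1.0 + tolerance))

# Example: most beams agree; about 10% of predicted ranges are much longer than observed.
gen = np.random.default_rng(1)
observed = gen.uniform(5, 60, size=1000)
predicted = observed * np.where(gen.uniform(size=1000) < 0.1, 1.8, 1.0)
_, ratio = ratio_coordinates(gen.uniform(-np.pi, np.pi, 1000),
                             gen.uniform(-0.4, 0.4, 1000), predicted, observed)
print(fraction_outside_unit_sphere(ratio))   # approximately 0.10
```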
Similarly, when a translation error occurs, the localization spherical coordinates825fall outside the ideal unit sphere830and may form a cone shape outside the ideal unit sphere830, as shown inFIG.8E. If a majority of the localization spherical coordinates825fall outside the ideal unit sphere830, this may indicate the localization point cloud solution735contains a rotation error or translation error. Accordingly, vehicle on-board computing device220may classify the localization point cloud solution735as a false positive. To determine whether the localization point cloud solution735is valid, vehicle on-board computing device220may set a threshold representing the percentage of localization spherical coordinates835that should remain within and on the ideal unit sphere830for the localization point cloud solution735to be classified as valid. On-board computing device220may also use a binary classifier to determine whether a localization point cloud solution735is valid. On-board computing device220may identify the percentage of lidar beams810corresponding to each predicted class label815. For example, vehicle on-board computing device220may identify what percentage of lidar beams810belong to predicted class labels, such as, ground, road, sidewalk, building, wall fence, bridge, tunnel, pole, traffic light, traffic sign, vegetation, terrain, etc. For each predicted class label815, vehicle on-board computing device220may identify the percentage of observed ranges from real-time lidar sweep410that are shorter than, approximately equal to, and/or farther than the predicted range820for each lidar beam810projected into range image805. This may be useful in situations when a particular feature, such as a wall, is predicted in the localization point cloud solution735, but the observation range415from the autonomous vehicle102to the wall is farther than the predicted range820. Accordingly, vehicle on-board computing device220may create a probability distribution P(A, B) to determine a threshold for the percentage of coordinates from localization point cloud solution735with misaligned observed ranges and predicted ranges from particular features. The value A may represent the event whether the prediction range820is significantly longer than, shorter than, or approximately equal to the observation range415. The value B may represent the predicted class label815of lidar beam810. Using this probability distribution, vehicle on-board computing device220may use a binary classifier trainer (e.g., a random forest classifier or support vector machine) to determine whether localization point cloud solution735is valid. In some embodiments, on-board computing device220may set multiple thresholds using the binary classifier trainer to classify localization point cloud solution735as valid. For example, the threshold may be a percentage of prediction ranges in range image805that would need to be roughly longer than, shorter than or approximately equal to the observation range415based on the type of predicted class label815. FIG.9is a flowchart illustrating a method for automated autonomous vehicle pose validation, according to some embodiments.FIG.9is described with reference toFIGS.1-8. Method900can be performed by processing logic that can comprise hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (e.g., instructions executing on a processing device), or a combination thereof. It is to be appreciated that not all steps may be needed to perform the disclosure provided herein. 
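One possible way (not necessarily the disclosed way) to turn the per-class range comparisons into features for the binary classifier mentioned above is sketched below, using scikit-learn's RandomForestClassifier as the example trainer. The feature layout, class count, tolerance, and placeholder training data are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def solution_features(pred_range, obs_range, pred_label, n_classes=12, tol=0.05):
    """Per predicted class label: share of beams with that label, plus the fractions
    whose observed range is shorter than, approximately equal to, or farther than
    the predicted range."""
    feats = []
    for c in range(n_classes):
        mask = pred_label == c
        if not mask.any():
            feats += [0.0, 0.0, 0.0, 0.0]
            continue
        diff = (obs_range[mask] - pred_range[mask]) / np.maximum(pred_range[mask], 1e-9)
        feats += [float(mask.mean()),
                  float(np.mean(diff < -tol)),          # observed shorter than predicted
                  float(np.mean(np.abs(diff) <= tol)),  # approximately equal
                  float(np.mean(diff > tol))]           # observed farther than predicted
    return np.array(feats)

# Training sketch: placeholder feature vectors from previously labeled solutions
# (1 = valid solution, 0 = false positive); real training data would come from logs.
gen = np.random.default_rng(2)
X = gen.uniform(size=(200, 48))       # 12 classes x 4 features per class
y = gen.integers(0, 2, size=200)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:3]))
```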
Further, some of the steps may be performed simultaneously, or in a different order than shown inFIG.9, as will be understood by a person of ordinary skill in the art. At905, vehicle on-board computing device220retrieves observed range415from real-time lidar sweep410. On-board computing device220retrieves real-time lidar coordinates405from a real-time lidar sweep410. Real-time lidar sweep410may consist of 3D spherical coordinates obtained from sensing the surroundings of autonomous vehicle102, which may be represented as P(x, y, z). The 3D spherical coordinates may be comprised of an observed range (r)415, an elevation angle (φ)420, and an azimuth angle (θ)425. Observed range415represents the observed range between the light emitter system304and real-time lidar coordinates405. At910, vehicle on-board computing device220retrieves localization point cloud solution735. On-board computing device220generates a query point cloud430, which represents the real-time lidar coordinates405retrieved from lidar system264. On-board computing device220derives an initial pose estimate530in reference point cloud510using GPS information. On-board computing device220generates a localization point cloud solution735by aligning query point cloud430and the initial pose estimate530in reference point cloud510. To align query point cloud430and reference point cloud510, vehicle on-board computing device220computes a 3D rotation and 3D translation of query point cloud430and reference point cloud510. On-board computing device220derives an optimal transformation matrix aligning reference point cloud510to query point cloud430and generates a refined 3D point cloud reflecting alignment between the query point cloud430and reference point cloud510. The resulting refined 3D point cloud may be the localization point cloud solution735representing the registered autonomous vehicle pose. From a 2D perspective, the resulting localization point cloud solution735may represent alignment between query point cloud430and reference point cloud510. At915, vehicle on-board computing device220generates a range image805from localization point cloud solution735. On-board computing device220may use 3D projection techniques known to a person of ordinary skill in the art to render range image805from localization point cloud solution735. According to some embodiments, vehicle on-board computing device220may retrieve map tiles from localization point cloud solution735within a specified range of the vehicle pose estimate. Each map tile from localization point cloud solution735may include a data structure for organizing coordinates into a point cloud. On-board computing device220may load the data stored in the data structure for each map tile into a lidar coordinate system. Lidar system264may project the coordinates from localization point cloud solution735into range image805. On-board computing device220may create a square surfel for the coordinates from localization point cloud solution735and may project each square surfel as two triangles into range image805. On-board computing device220may rasterize the triangles in range image805. At920, vehicle on-board computing device220may retrieve the predicted range820and predicted class label815for the lidar beams810in range image805. On-board computing device220may categorize various features extracted from range image805with a predicted class label815.
For example, predicted class label815may include labels for features, such as, but not limited to, ground, road, sidewalk, building, wall fence, bridge, tunnel, pole, traffic light, traffic sign, vegetation, terrain, etc. On-board computing device220may extract a predicted range820corresponding to a predicted class label815for each lidar beam810. On-board computing device220encodes each rasterized triangle in range image805with values of the predicted range820for each lidar beam810projected from lidar system264to the coordinates in localization point cloud solution735and the corresponding predicted class label815in range image805using image space algorithms (e.g., depth buffer method). Therefore, vehicle on-board computing device220may retrieve the predicted range820and corresponding predicted class label815for each lidar beam810from range image805and the observed range415for each lidar beam from the real-time lidar sweep410. At925, vehicle on-board computing device220determines localization spherical coordinates825corresponding to the ratio between the predicted range820and observed range415. The localization spherical coordinates825for the ratio between the predicted range820to the observed range415may be represented as follows P(azimuth angle425, pitch angle620, predicted range820/observed range415). These localization spherical coordinates825may represent a unit sphere. At930, vehicle on-board computing device220determines a threshold for the percentage of localization spherical coordinates825that may fall outside the ideal unit sphere830. Ideally, the ratio between the predicted range820and observed range415for each of the localization spherical coordinates825may be approximately 1. Accordingly, ideal unit sphere830represents a unit sphere with an equal predicted range820and observed range415. With an approximately accurate localization point cloud solution735, the majority of the localization spherical coordinates825would remain within or on ideal unit sphere830. On-board computing device220establishes a threshold for the localization point cloud solution735to be classified as valid based on the localization spherical coordinates825. In some embodiments, the threshold may be the percentage of localization spherical coordinates825that can fall outside the ideal unit sphere830. On-board computing device220may use machine learning techniques known to a person of ordinary skill in the art to determine the percentage of localization spherical coordinates825that would approximately classify localization point cloud solution735as valid. At935, vehicle on-board computing device220determines whether each of the localization spherical coordinates825falls within, on, or outside ideal unit sphere830. On-board computing device220determines the percentage of localization spherical coordinates825that fall within, on, or outside ideal unit sphere830. On-board computing device220determines whether the percentage of localization spherical coordinates825falling outside the ideal unit sphere830exceeds the threshold. If the percentage of localization spherical coordinates825outside the ideal unit sphere830exceeds the threshold, method900proceeds to955. If the percentage of localization spherical coordinates825outside the ideal unit sphere830does not exceed the threshold, method900proceeds to940. At940, vehicle on-board computing device220generates and transmits features to a binary classifier. Vehicle on-board computing device220generates a vector of features based on features detected in range image805. 
Vehicle on-board computing device220may establish a threshold for the percentage of features in range image805that may be marked as outliers for the localization point cloud solution735to be classified as valid. On-board computing device220may create a probability distribution P(A, B) to determine a threshold for the percentage of coordinates from localization point cloud solution735with misaligned observed ranges and predicted ranges from particular features. The value A may represent the event whether the prediction range820is significantly longer than, shorter than, or approximately equal to the observation range415. The value B may represent the predicted class label815of the lidar beam810. Using this probability distribution, vehicle on-board computing device220may use a binary classifier trainer (e.g., a random forest classifier or support vector machine) to determine whether localization point cloud solution735is valid. In some embodiments, vehicle on-board computing device220may optionally establish a threshold for the binary classifier. The threshold may be a percentage of prediction ranges in range image805that would need to be roughly longer than, shorter than or approximately equal to the observation range415based on the type of predicted class label815. At945, vehicle on-board computing device220retrieves a solution from the binary classifier indicating whether localization point cloud solution735is rejected based on the features transmitted to the binary classifier. Vehicle on-board computing device220may identify the percentage of lidar beams810corresponding to each predicted class label815. For example, vehicle on-board computing device220identifies what percentage of lidar beams810belong to predicted class labels, such as, ground, road, sidewalk, building, wall fence, bridge, tunnel, pole, traffic light, traffic sign, vegetation, terrain, etc. For each predicted class label815, vehicle on-board computing device220determines whether the predicted range820from range image805is shorter than, approximately equal to, and/or farther than the observed range415. The binary classifier determines whether a localization point cloud solution should be rejected based on the retrieved vector of features. In some embodiments, vehicle on-board computing device220may use a single or multiple thresholds to determine whether the localization point cloud solution735should be rejected, though it is not required. For example, on-board computing device220may identify the percentage of observed ranges from real-time lidar sweep410that are shorter than, approximately equal to, and/or farther than the predicted range820for each lidar beam810projected into range image805. On-board computing device220determines which predicted ranges in range image805are outliers based on an established distance between predicted range820and observed range415. If the binary classifier rejected the solution based on the retrieved features, method900proceeds to955. If the binary classifier accepted the solution based on the retrieved features, method900proceeds to950. At950, vehicle on-board computing device220validates localization point cloud solution735. At955, vehicle on-board computing device220classifies localization point cloud solution735as invalid. Various embodiments can be implemented, for example, using one or more computer systems, such as computer system1000shown inFIG.10.FIG.10is described with reference toFIGS.1-9. Computer system1000can be used, for example, to implement method900ofFIG.9.
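Putting the branches of method900together, a validation pipeline skeleton might look like the following. It reuses the hypothetical solution_features helper from the earlier sketch, and both thresholds are assumed values rather than figures from the disclosure.

```python
import numpy as np

def validate_pose(observed_range, pred_range, pred_label,
                  classifier, sphere_threshold=0.2, tol=0.05):
    """Skeleton of method900: unit-sphere check first (steps 925-935), then the
    binary classifier on per-class features (steps 940-945)."""
    ratio = pred_range / np.maximum(observed_range, 1e-9)
    outside = float(np.mean(ratio > 1.0 + tol))
    if outside > sphere_threshold:            # too many beams outside the ideal unit sphere
        return "invalid"                      # step 955
    feats = solution_features(pred_range, observed_range, pred_label, tol=tol)
    if classifier.predict(feats.reshape(1, -1))[0] == 0:
        return "invalid"                      # step 955: classifier rejected the solution
    return "valid"                            # step 950
```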
For example, computer system1000can implement and execute a set of instructions comprising generating a range image from localization point cloud solution735, querying lidar beams810projected into range image805for a corresponding predicted class label815and predicted range820, comparing predicted range820and predicted class label815to the observed range415from real-time lidar sweep410, and validating the localization point cloud solution735. Computer system1000can be any computer capable of performing the functions described herein. Computer system1000includes one or more processors (also called central processing units, or CPUs), such as a processor1004. Processor1004is connected to a communication infrastructure or bus1006. One or more processors1004may each be a graphics processing unit (GPU). In an embodiment, a GPU is a processor that is a specialized electronic circuit designed to process mathematically intensive applications. The GPU may have a parallel structure that is efficient for parallel processing of large blocks of data, such as mathematically intensive data common to computer graphics applications, images, videos, etc. Computer system1000also includes user input/output device(s)1003, such as monitors, keyboards, pointing devices, etc., that communicate with communication infrastructure1006through user input/output interface(s)1002. Computer system1000also includes a main or primary memory1008, such as random access memory (RAM). Main memory1008may include one or more levels of cache. Main memory1008has stored therein control logic (i.e., computer software) and/or data. Computer system1000may also include one or more secondary storage devices or memory1010. Secondary memory1010may include, for example, a hard disk drive1012and/or a removable storage device or drive1014. Removable storage drive1014may be a floppy disk drive, a magnetic tape drive, a compact disk drive, an optical storage device, tape backup device, and/or any other storage device/drive. Removable storage drive1014may interact with a removable storage unit1018. Removable storage unit1018includes a computer usable or readable storage device having stored thereon computer software (control logic) and/or data. Removable storage unit1018may be a floppy disk, magnetic tape, compact disk, DVD, optical storage disk, and/or any other computer data storage device. Removable storage drive1014reads from and/or writes to removable storage unit1018in a well-known manner. According to an exemplary embodiment, secondary memory1010may include other means, instrumentalities or other approaches for allowing computer programs and/or other instructions and/or data to be accessed by computer system1000. Such means, instrumentalities or other approaches may include, for example, a removable storage unit1022and an interface1020. Examples of the removable storage unit1022and the interface1020may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM or PROM) and associated socket, a memory stick and USB port, a memory card and associated memory card slot, and/or any other removable storage unit and associated interface. Computer system1000may further include a communication or network interface1024.
Communication interface1024enables computer system1000to communicate and interact with any combination of remote devices, remote networks, remote entities, etc. (individually and collectively referenced by reference number1028). For example, communication interface1024may allow computer system1000to communicate with remote devices1028over communications path1026, which may be wired and/or wireless, and which may include any combination of LANs, WANs, the Internet, etc. Control logic and/or data may be transmitted to and from computer system1000via communication path1026. In an embodiment, a tangible, non-transitory apparatus or article of manufacture comprising a tangible, non-transitory computer usable or readable medium having control logic (software) stored thereon is also referred to herein as a computer program product or program storage device. This includes, but is not limited to, computer system1000, main memory1008, secondary memory1010, and removable storage units1018and1022, as well as tangible articles of manufacture embodying any combination of the foregoing. Such control logic, when executed by one or more data processing devices (such as computer system1000), causes such data processing devices to operate as described herein. Based on the teachings contained in this disclosure, it will be apparent to persons skilled in the relevant art(s) how to make and use embodiments of this disclosure using data processing devices, computer systems and/or computer architectures other than that shown inFIG.10. In particular, embodiments can operate with software, hardware, and/or operating system implementations other than those described herein. It is to be appreciated that the Detailed Description section, and not any other section, is intended to be used to interpret the claims. Other sections can set forth one or more but not all exemplary embodiments as contemplated by the inventor(s), and thus, are not intended to limit this disclosure or the appended claims in any way. While this disclosure describes exemplary embodiments for exemplary fields and applications, it should be understood that the disclosure is not limited thereto. Other embodiments and modifications thereto are possible, and are within the scope and spirit of this disclosure. For example, and without limiting the generality of this paragraph, embodiments are not limited to the software, hardware, firmware, and/or entities illustrated in the figures and/or described herein. Further, embodiments (whether or not explicitly described herein) have significant utility to fields and applications beyond the examples described herein. Embodiments have been described herein with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined as long as the specified functions and relationships (or equivalents thereof) are appropriately performed. Also, alternative embodiments can perform functional blocks, steps, operations, methods, etc. using orderings different than those described herein. References herein to “one embodiment,” “an embodiment,” “an example embodiment,” or similar phrases, indicate that the embodiment described can include a particular feature, structure, or characteristic, but every embodiment can not necessarily include the particular feature, structure, or characteristic. 
Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it would be within the knowledge of persons skilled in the relevant art(s) to incorporate such feature, structure, or characteristic into other embodiments whether or not explicitly mentioned or described herein. Additionally, some embodiments can be described using the expression “coupled” and “connected” along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, some embodiments can be described using the terms “connected” and/or “coupled” to indicate that two or more elements are in direct physical or electrical contact with each other. The term “coupled,” however, can also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. The breadth and scope of this disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
11861866 | DESCRIPTION OF THE PREFERRED EMBODIMENTS Reference will now be made in detail to the presently preferred embodiments of the invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. The present inventive concept relates to a method, system and computer readable storage which can identify regions of interest on a casino table for a camera or a plurality of cameras pointing on the table. A region of interest (ROI) is a particular subset (area) of the table in which relevant things might happen so this area should be observed and analyzed. For example, one type of region of interest are the betting area regions of interest (typically where the betting circles area) where players place their chips (betting area regions of interest). An automated casino table monitoring system would want to determine how much each player bets on each hand (e.g., on a hand of blackjack although it can be applied to any game) so the betting circles need to be observed. Another region of interest is the card areas where cards are dealt (each player has a respective card area and the dealer has a respective card area where the dealer's cards are dealt). Card areas should be observed so that cards dealt in the card area can be recognized for their value so the game can be monitored (card area regions of interest). In this way, an automated computer can follow along with the game and determine if the game is being dealt properly (e.g., if there are dealer errors) and if players are being paid properly. Note that side bets betting circles (areas on the table where the players make their side bets) are also considered a separate type of region of interest (side bet area regions of interest). The system can also determine the number of players at the table (based on observation/analysis of the betting regions of interest and/or the card area regions of interest) and the dealer performance which can be measured as basic hands per hour (based on observation/analysis of the betting regions of interest and/or the card regions of interest). Typically, for every betting area there is a side betting area (thus for every betting area region of interest there is a corresponding side bet region of interest). Regions of interest can also be used to simply detect the absence and presence of a chip (in a betting area) or a card (in a card area) to determine dealer efficiency (e.g., hands/hour) and utilization of the table. All of this information can be stored in any database. “Table” and “casino table” are used synonymously herein, unless another meaning for “table” is clearly evident. The general method operates as follows. A template image is generated from a table layout (which is an image of the table felt placed on the table itself in which the game is dealt on). The template image is then “painted” with different colors representing different regions of interest (the painting can be done manually using a paint program or automatically using region detection). These regions of interest represent where relevant activity will happen on the table (so these regions of interest can be analyzed to track play of the game). A computer file (“top-down meta-data”) is generated which quantitatively defines the regions of interest (e.g., as ellipses although other shapes can be used as well). A camera on a table which has the same layout the template image was generated from is used to detect some of the regions of interest. 
For example, chips can be placed on betting regions of interest and an image of the chips on the table can be subtracted from an image of an empty table thereby identifying where the chips (betting regions of interest) are on the camera image of the table. There are two frames of reference, the template image (which is top down) and the camera image (which is typically taken from the side). Now that real world objects (chips) are detected in regions of interest that we can identify on the template (stored in the top-down meta-data as the betting regions of interest), the homography has been determined. The homography is a mapping of one frame of reference to the other. Now all of the regions of interest in the top-down meta-data can be transformed to regions of interest in the camera image by applying the homography to the regions of interest in the top-down meta-data. A file referred to as the camera meta-data stores all of the transformed regions of interest from the top-down meta-data. Each camera pointed on the table would have its own homography and hence its own camera meta-data file. In this way, images from the cameras (“camera” as used herein refers to a video camera) can be analyzed at regions corresponding to the regions of interest defined in the camera meta-data file to be table to track progress of the game (e.g., to track how much in chips each player is betting etc.) The system can start with a layout. A layout is an image of the table felt that can be provided by a variety of sources (e.g., a distributor of the table game, a casino, a felt manufacturer, etc.) and can be an electronic (digital) file (e.g., a JPG or any other such format). The table felt (also referred to as table layout) has betting circles (where players place their bets in chips) and other indicia/markings on it. The layouts can be distributed electronically (e.g., emailed, web, etc.) to an administrator. The administrator is the party that is administering the system. The layout is typically photographed or drawn from the top down (e.g., an aerial view). “Layout” as used herein refers to a file which contains an image of a table's layout (felt) or it can also refer to the actual felt placed on a physical table (depending on the context how it is used). Each different casino table game (e.g., blackjack, baccarat, etc.) has its own table layout, although some games (e.g., blackjack) may have many different layouts. Note that each casino would typically have their own custom layouts that have their casino name printed on them (and hence their own custom templates). FIG.1is flowchart illustrating an exemplary method of identifying and utilizing regions of interest on a casino table. This is a very high level introduction to the system described herein. In operation100, template images and top-down meta-data are generated from the layout (file). Template images are images made from the layout (and show the respective table layout) and may be refined to meet particular standards (e.g., be a certain resolution, etc.) Top-down meta-data is data (e.g., a file) that describes regions of interest on a template image. For example, on the template image there may be seven betting circles, the top-down meta-data is data which identifies where on the template image the seven betting circles are located (defined geometrically such as by ellipses) and what type each region of interest is (e.g., a betting area region of interest). This can be done as described herein. 
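The chip-based detection described above (subtracting an image of the empty table from an image with chips placed in the betting circles) could be sketched with OpenCV as follows. The threshold, kernel size, and minimum blob area are illustrative assumptions, and the OpenCV 4 findContours signature is assumed.

```python
import cv2
import numpy as np

def detect_chip_regions(empty_table_bgr: np.ndarray, chips_table_bgr: np.ndarray,
                        min_area: float = 200.0):
    """Return centers (x, y) of blobs that appear when chips are placed on the table,
    i.e. candidate betting-area regions of interest in the camera frame."""
    diff = cv2.absdiff(chips_table_bgr, empty_table_bgr)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for contour in contours:
        if cv2.contourArea(contour) >= min_area:
            m = cv2.moments(contour)
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers
```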
“Top-down meta-data” can also be referred to as template meta-data. From operation100, the method proceeds to operation101, which distributes and updates the template image(s) and respective top-down meta-data to all tables across different casinos that are utilizing the current system. Periodically, updated (e.g., improved) versions of temple images and their respective top-down meta-data may be generated and these updated versions are electronically distributed to casinos and delivered to the individual hardware at each respective casino table which may need the updated versions. Note that while updated templates (and hence their top-down meta-data) are distributed automatically, only updated templates and meta-data that are used by a particular casino would need to be distributed to that casino. For example, casinos typically have their own templates for each game, so if casino's A template (and/or its top-down meta-data) for a particular game was updated by the administrator but not casino B's template (and/or its top-down meta-data), then the update would automatically only go to casino A but not casino B. This can be done as described herein. Note that as used herein, “template” is synonymous with “template image.” From operation101, the method proceeds to operation102, which determines the homography. At casino tables one or more cameras can be mounted at various locations on the table (or on the ceiling, on a pole, etc.) and these camera angles are different than the angle that the top-down templates were taken at. The camera meta-data identifies where the regions of interest are (the same regions of interest identified in the top-down meta-data) in the images taken by the cameras at the table. Since the cameras at the table use a different angle, the locations of the regions of interest taken therein will of course be different than the location (coordinates) of the regions of interest from the template. Thus, cameras at each casino table can be trained/initialized. Homography is a mapping of points defining the regions of interest from the top-down meta-data to the camera meta-data, although in an embodiment the points used in the homography (e.g., the file containing the homography) do not have to correspond to regions of interest but can be arbitrary (or selected) points. The regions of interest identified in the top-down meta-data can be identified from the cameras using techniques such as placing physical objects (e.g., a chip) on the table in the respective regions of interest so that it can be identified where the regions of interest from the top-down meta-data exist in the camera views (which would be stored in the camera meta-data). This can be done as described herein. From operation102, the method proceeds to operation103, which determines camera meta-data from the top-down meta-data using the homography. This transforms other regions of interest identified in the top-down meta-data to the respective camera view. For example, once the homography (mapping from the template image to the camera image) is known, then other regions of interest identified in the top-down meta-data can then be translated (using open source computer vision functions such as in OpenCV) to their camera image counterparts. Thus, any point and/or region of interest identified in the top-down meta-data can be located (transformed to) on each camera image using the homography. 
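The homography step described above can be illustrated with the OpenCV functions the text alludes to: estimate the mapping from corresponding points (for example, betting-circle centers found in both the top-down template image and the camera image), then transform any other top-down region-of-interest points into the camera frame. The coordinate values below are made up for illustration.

```python
import cv2
import numpy as np

# Corresponding points: betting-circle centers located in the top-down template
# image and the same circles as detected in the camera image (illustrative values).
template_pts = np.array([[150, 620], [290, 660], [430, 680], [570, 680], [710, 660], [850, 620]],
                        dtype=np.float32)
camera_pts = np.array([[260, 420], [370, 438], [480, 447], [590, 447], [700, 438], [810, 420]],
                      dtype=np.float32)

H, _ = cv2.findHomography(template_pts, camera_pts)

# Transform another region of interest from the top-down meta-data, for example
# points sampled on a card-area ellipse, into the camera frame.
card_area_pts = np.array([[[430, 520]], [[470, 520]], [[450, 545]]], dtype=np.float32)
card_area_in_camera = cv2.perspectiveTransform(card_area_pts, H)
print(card_area_in_camera.reshape(-1, 2))
```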
Each camera (there typically would be a plurality of cameras on each casino table) would have its own homography (mapping from its own image to the template image). So in other words, some regions of interest from the point of view from the cameras are determined physically which then determines the mapping (homography) between the template (top-down image) and the table camera views (camera images). Once the mapping is known, then any other regions of interest identified from the template (in the top-down meta-data) can then be converted to the camera meta-data mathematically (or in a more complex method, combining these mathematical transformations along with using visual images of the (empty) table layout as captured by the camera to better match the template image to the camera image in order to get a more accurate transformation). From operation103, the method proceeds to operation104, which captures video (or a still image or images) at the table using the cameras at the table. A still image (or multiple images) of the table can be processed as described herein. All images and video captured by each camera can be transmitted to any other server/computer on the system and can be stored by any such server/computer on the system. As such, the image(s) and video captured at each table can all be stored on a server (any server on the system) for later review/retrieval/use by the system for any purpose. The regions of interest (identified by the camera meta-data) are then analyzed and utilized to analyze the game-play. For example, the regions of interest which are betting circles are observed for the placement of chips so the computer system can identify how many chips are bet by each player on a hand/game. In this manner, a database can be maintained of each player and how much the player has bet on each hand (players can be identified when they sit down by presenting their player's card to the casino personnel as known in the art). Each player has his/her own betting circle (and hence their own betting area region of interest) and thus each the amount of bets (wagers) in each betting circle would be attributed to the player sitting at that particular spot. The regions of interest which are card areas are observed so that the cards dealt therein are analyzed so that the flow of the game is determined and any errors in the dealing of the game (or cheating) are flagged by the system. Note that in addition to card games, the system described herein can be applied to other types of games as well, such as dice (e.g., craps, Sic Bo, etc.), wheel (roulette, etc.), etc. Instead of a card region of interest, the region of interest would be a dice region of interest or a wheel region of interest, for a dice or wheel game, respectively. For example, the area of a table where the dice are rolled in the game of Sic Bo would be a dice region of interest, and the activity in this region of interest would be analyzed to determine (and store) the dice rolls, etc. On a roulette game, the wheel region of interest would be where the roulette wheel is so that this region can be analyzed so that the winning number on each spin can be determined and stored. As with the card regions of interest, any other regions of interest (e.g., wheel, dice, etc.) can be manually painted (identified) on the template image as to where the wheel actually will be located (or where the dice will be rolled). In all games (e.g., blackjack, dice, wheel, etc.) 
there can be all types of betting regions (for example in wheel games there would also be betting regions of interest (like as described herein with respect to blackjack) to track the player's bets in additional to the wheel region of interest). In this manner, activity on the casino table can be tracked and analyzed by the computer system to determine things like efficiency of the dealer (e.g., the dealer's error rate, dealer hands per hour, time required for shuffles, number of players at the table, etc.), the betting rate of individual players, the identification of cheating, etc. For example, players who bet above a certain amount may be entitled to complimentaries from the casino. The method would start with a layout that can be provided by a variety of sources, for example a distributor of a table game, the casino, etc. The layout can simply be an image (e.g., a photograph or a scan) of the table layout. From this layout, the method inFIG.2would result in template images and top-down meta-data. Template images are similar to the layout but modified to be in a standard format for the system (e.g., a particular resolution, size, etc.) Top-down meta-data is data which can be in any form (e.g., XML, etc.) which defines regions of interest on the respective template. Note that a vendor that provides the layout would not provide any of the meta-data (e.g., top-down meta-data), it would be up to the administrator to receive the layout and to identify (either electronically or manually) where the regions of interest are and produce color codes template images identifying these regions of interest. FIG.2is a flowchart illustrating an exemplary method of generating templates and top-down meta-data. The method illustrated inFIG.2is performed by the administrator of the method. The method begins by starting with a layout (an image of a table felt which has all of the indicia for a particular game). The layout can be emailed to the administrator or delivered via electronic means (e.g., internet, etc.) An example of a layout is illustrated inFIG.3. From operation200, the method proceeds to operation201, which creates a template image (or images) from the layout. Template images are the layout but transformed to a particular format. For example, all template images should typically be a particular size, resolution, etc., in order to be consistent and operate properly with the rest of the system. An example of a template image is illustrated inFIG.4. This is similar to the layout inFIG.3although an area is cut out on top where the dealer's chip tray would be. The dimensions of the template image may differ from the layout as well (the layout may need to be resized in order for it to become a template image). An overlay is also created. An overlay is another layer that fits on top of the template and identifies each region of interest. Different regions of interest can be denoted by different colors. For example, betting circles can be one color (e.g., red), the dealer's area can be a different color (e.g., green), etc. The overlay can be created manually by painting in the regions of interest with the respective color. For example, a card region of interest can be painted by a technician in an area where he/she expects cards to be dealt. Betting area regions are typically where the betting circles are. 
The overlay can also be created automatically by using pattern recognition to recognize the betting areas and the dealer areas and paint them different colors.FIG.6shows an example of an overlay which shows six betting areas (in one color) and the dealer's area on top (in another color). Note that the overlay (FIG.6) can fit onto the template image (FIG.4) in order to create a merged image (FIG.5). A merged image is a single image which has the template and the overlay merged together (note the white parts of the overlay can be considered a “transparent” color). In an embodiment, one unique color is used for each type of ROI (card, bet, side bet), and the system uses geometry to detect which is the first position and which is the last. In another embodiment, a different color could be used for each different region of interest (even of the same type), for example different betting circles would all be different colors. There are multiple regions of interest on the table of the same type (e.g., multiple betting areas, multiple card areas) because each casino table allows for multiple players to play simultaneously, although there may be only one dealer card area region of interest since there is only one dealer. Once the template image(s) and the overlay are created, the method can proceed to operation210. In another embodiment, the method can alternatively start at operation202instead of operation200. In operation202, the method already starts with a merged image (merging an overlay with the regions of interest with the template into a single image). A merged image (seeFIG.5) can be obtained by merging the template with the overlay, or by painting the regions of interest directly onto the template image. Thus, if the method is starting at operation202, it already has a merged image which (if the regions of interest are not directly painted onto the template) already has the overlay determined and merged with the template (the overlay that is merged can be determined as described in operation201). If a template image is not provided, the template image can be generated from the layout as described in operation201. From operation202, the method proceeds to operation203, which subtracts the template image from the merged image resulting in the overlay itself (seeFIG.6). If the system only has the layout but not a template image, then a template image can be created from the layout (as described in operation201). In this manner, an overlay and a template image are obtained to proceed to operation210. Thus, note that if the system starts without a merged image then it can begin from operation200and if the system already has a merged image than it can begin from operation202. Ultimately, in order to proceed to operation210the system should have a template image (also referred to as template) and an overlay showing the regions of interest. Thus, from operation201or operation203, the method proceeds to operation210, which can automatically determine the region types of the overlay based on their color. For example, betting circles for main bets can be a particular color (e.g., red), a dealer's area for the dealer's cards can be different color (e.g., green), side bet betting circles for side bets (also known as side bets) can be a different color (e.g., blue), etc. The different regions of interest can be automatically numbered (for example seeFIG.7which numbered all six betting areas (betting circles) as 1b, 2b, 3b, 4b, 5b and 6b, and the dealer's area is labeled 0c. 
This numbering can also be done manually. From operation210, the method proceeds to operation212which determines whether the template image (or images if there is more than one) and the overlay are both valid. This can be done according to a set of rules, as an example: 1) A template and overlay would only be valid if there are zero unknown regions. An unknown region is an area on the overlay with a color that does not match the color coding scheme. If there are any unknown regions, then the template and overlay would not be valid. 2) A template and overlay would only be valid if there are an appropriate number of card spots (card regions or card areas) and bet spots (e.g., for betting circles). If there is not an appropriate number of card spots and/or betting circles then the template and overlay are invalid. In some games, the number of card spots must equal the number of bet spots plus one for standard table games which deal cards to the dealer (this is because there must be a card spot for the dealer but the dealer does not make a bet). Depending on the game, template validation rules can be developed for that particular game (associated with the template) so that there are an appropriate number of betting regions and card regions. Alternatively, instead of game-specific validation, general validation rules can be predetermined for all templates. If in operation212there is a template and overlay error (not valid), then the method would return to either operation200or202to try again. In this case, it is up to the technician (working for the administrator) to try again (perhaps adjusting the template image) and retry. Only if the template and overlay are found valid (validated) in operation212would the method proceed to operation213. In operation213, top-down meta-data is generated. Top-down meta-data is meta-data which defines the regions of interest on the template. The top-down meta-data can be automatically generated from the overlay. For example, Table I below contains example top-down meta-data for a template. It defines the regions of interest (where to find the regions of interest on the template). For example, betting circle ‘1b’ (seeFIG.7) is defined as an ellipse with its center at coordinates x=709.600647, y=203.087341, a width of the ellipse of 71.76033, a height of the ellipse of 73.81597, and an angle of the ellipse of 266.7249. Thus, each region of interest on the template image is defined by a region in the template image described by the top-down meta-data. Each region of interest is stored along with its name. The top-down meta-data can be generated by analyzing the different colored regions (on the overlay or on the template image itself) and fitting an ellipse to each region of interest (circles are ellipses with width=height). If regions of interest do not exactly fit an ellipse, then the closest ellipse to that region of interest will be approximated. Ellipses can be fitted to the regions of interest defined in the template image using techniques known in the art, such as the OpenCV function fitEllipse (which inputs points on the perimeter of the region and outputs parameters of the closest fitted ellipse, the parameters being for example centerX, centerY, angle (rotation in degrees), width (in pixels), height (in pixels), and BoundingRect (points on a rectangle bounding the ellipse)).
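A minimal sketch of how an overlay could be turned into top-down meta-data (roughly combining operations203,210and213) is provided below, assuming OpenCV's Python bindings (cv2); the overlay file name, the color coding, and the output field names are illustrative assumptions rather than requirements of the system:

import cv2
import numpy as np

overlay = cv2.imread("overlay.png")        # hypothetical overlay image (see FIG. 6)
roi_colors = {                             # hypothetical BGR color coding per ROI type
    "Bet": (0, 0, 255),                    # e.g., red betting circles
    "Cards": (0, 255, 0),                  # e.g., green dealer card area
    "Side": (255, 0, 0),                   # e.g., blue side-bet circles
}

top_down_meta = []
for roi_type, bgr in roi_colors.items():
    # Mask the pixels painted in this ROI type's color and find each separate region.
    mask = cv2.inRange(overlay, np.array(bgr), np.array(bgr))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    for contour in contours:
        if len(contour) < 5:               # fitEllipse requires at least 5 perimeter points
            continue
        (cx, cy), (w, h), angle = cv2.fitEllipse(contour)
        top_down_meta.append({
            "Type": roi_type,
            "Ellipse": {"CenterX": cx, "CenterY": cy, "Width": w, "Height": h, "Angle": angle},
        })

# The regions could then be numbered/named (e.g., "1b" ... "6b" and "0c") based on geometry
# and the list serialized (e.g., to JSON) to form something like the DetectedROIsJson of Table I.
print(len(top_down_meta), "regions of interest detected")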
A top-down meta-data can be populated with all of the data defining each of the ellipses which define each region of interest along with a name for the respective region of interest, a type of the region of interest (e.g., betting, card, etc.) and any other data relating to that region of interest. In addition, the top-down meta-data file (like the camera meta-data file) can have other fields of well (not specific to each region of interest), such as the game name the file applies to, and any other fields that can be used by the system whether described herein or not. In Table I, other fields can include Id (identifier for the template), VLGameTypeld (global (meaning shared by all casinos) game Identifier), VLSideBetld (global side bet identifier), VLSideBetName (global side bet name), VLGameTypeName (global game name), TemplateEmptyData (top down image in png format, base 64 encoded), DetectedROIsJson (JSON text containing a list of the regions of interest (ROI) from the top down perspective), Id: ROI Identifier, Num (Position number (0 for dealer, 1-N for player positions)), Name (Mnemonic for region. includes position number and ROI type (c: Cards, b: Bets, s: SideBets)), Rect (bounding rectangle around the ellipse) Ellipse (geometric description of the ellipse from top-down perspective (center[x,y], angle, size[width,height])), Type: (ROI type (Unknown=0, Cards=1, Bet=2, Side=3, Control=4)), DetectedROIsChecksum (MD5 hash of the DetectedROIsJson used for determining when the template has changed), Seats (num positions/seats detected) TABLE I{“Id”: “6b018990-7152-43dc-a2ec-1c5503418a3d”,“VLGameTypeId”: “c28c8936-11c2-4089-9c7b-50297375977f”,“VLSideBetId”: null,“VLGameTypeName”: “Blackjack”,“VLSideBetName”: null,“Name”: “acme”,“TemplateEmptyData”: “data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA... 
ARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsMAA+==”,“DetectedROIsJson”: “[{“_Id”: “7025d4c3-ac8e-4f99-90e3-84f998708ea5”,“_Num”:0,“_Name”:“0c”,“_CamName”:“”,“_Ellipse”:{“_CenterX”:451.421722,“_CenterY”: 170.1532,“_Angle”: 180.289276,“_Width”:42.01747,“_Height”: 125.646286,“_BoundingRect”:“388, 149, 126, 42”},“_Type”:1},{“_Id”:“7caf1bc0-5e14-4575-a1b0-f7c29e62167e”,“_Num”:5,“_Name”:”5b”,“_CamName”:“”,“_Ellipse”:{“_CenterX”:282.9637,“_CenterY”:268.8289,“_Angle”:266.818359,“_Width”:71.4518661,“_Height”:71.70112,“_BoundingRect”:“245, 231, 76, 76”},“_Type”:2},{“_Id”: b3f619a6-c220-42fd-83ac-ae658f22d8fa”“_Num”:1,“_Name”:“1b”,“_CamName”:“”,“_Ellipse”:{“_CenterX”:709.600647,“_CenterY”:203.087341,“_Angle”:266.7249,“_Width”:71.76033,“_Height”:73.81597,“_BoundingRect”:“672, 164, 76, 78”},“_Type”:2},{“_Id”:“5c1e936b-2176-4175-88f2-71424a752763”,“_Num”:6,“_Name”:“6b”,“_CamName”:“”,“_Ellipse”:{“_CenterX”:191.684448,“_CenterY”:202.718628,“_Angle”: 184.044617,“_Width”:71.98778,“_Height”:72.3741455,“_BoundingRect”:“153, 164, 77, 77”},“_Type”:2},{“_Id”:“4be7b680-f975-40b2-af2d-b357564a24d0”,“_Num”:3,“_Name”:“3b”,“_CamName”:“”,“_Ellipse”:{“_CenterX”:507.2059,“_CenterY”:304.739624,“_Angle”: 174.947662,“_Width”:75.48305,“_Height”:76.14297,“_BoundingRect”:“466, 264, 82, 82”},“_Type”:2},{“_Id”:“b85600ce-7f07-4c97-a246-b098cfa3d05c”,“_Num”:4,“_Name”:“4b”,“_CamName”:“”,“_Ellipse”:{“_CenterX”:393.2059,“_CenterY”:304.739624,“_Angle”:174.947662,“_Width”:75.48305,“_Height”:76.14297,“_BoundingRect”:“352, 264, 82, 82”},“_Type”:2},{“_Id”:“b8c6ce0e-cd93-4b42-a20f-ac15f6b611b5”,“_Num”:2,“_Name”:“2b”,“_CamName”:“”,“_Ellipse”:{“_CenterX”:618.205933,“_CenterY”:269.739624,“_Angle”:174.947662,“_Width”:75.48305,“_Height”:76.14297,“_BoundingRect”:“577, 229, 82, 82”},“_Type”:2}]”,“DetectedROIsChecksum”: “8f8141294c88691009aa99aaa5759434”“NumSeats”: 6,} From operation213, the method proceeds to operation214. Once the top-down meta-data is generated, the template image(s) and respective top-down meta-data is then stored (of course the template image may have been stored earlier). It can be stored anywhere, on a local server, remote server, cloud storage, etc. Typically, the template images and the top-down meta-data are stored on a server operated by the administrator and accessible by remotely (e.g., the “cloud”) so that each casino utilizing the system can retrieve the template images and their respective top-down meta-data, but the template image(s) and top-down meta-data can be stored on any server/device described herein. SeeFIG.8as one example of an interface allowing the uploading of template images and respective meta-data to the “cloud” (a cloud-server or remote server), this is done by a technician working for the administrator. FIG.3is drawing illustrating a layout of a table felt, according to an embodiment. The layout is a clear top-down picture of a table felt. This can be taken by any party and uploaded (or emailed, etc.) so that it is transmitted to the administrator. This layout is started with in operation100. The image illustrated inFIG.3is an image file (e.g., JPG, etc.) FIG.4is a drawing illustrating a template generated from the layout fromFIG.3, according to an embodiment. The template is generated from the layout to meet particular standards (so all templates meet the same standards) and can also be cropped as illustrated inFIG.4. The template images should all be the same resolution, etc. This can be done in operations201and202/203. The image illustrated inFIG.4is an image file (e.g., JPG, etc.) 
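As a rough sketch of the standardization performed in operation201(converting a layout image into a template image of a standard format), the following illustrative Python snippet is offered; the target resolution, the crop, and the file names are assumptions and not requirements of the system:

import cv2

TEMPLATE_SIZE = (900, 400)   # hypothetical standard template width x height in pixels

layout = cv2.imread("acme_blackjack_layout.jpg")        # hypothetical layout image (see FIG. 3)
h, w = layout.shape[:2]
cropped = layout[int(h * 0.1):h, :]                     # e.g., cut out a strip at the top near the dealer's chip tray
template = cv2.resize(cropped, TEMPLATE_SIZE, interpolation=cv2.INTER_AREA)
cv2.imwrite("acme_blackjack_template.png", template)    # standardized template image (see FIG. 4)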
Note that all template images from different layouts should be standardized, so they all can be processed in the same way. For example, the template images can be cropped in the same manner so different points on different template images would correspond to the same point on the table. FIG.5is a drawing illustrating the template illustrated inFIG.4with regions of interest filled in, according to an embodiment. FIG.5can either be considered a merged image (with the template merged with the overlay) or the template with the overlay painted over it using a separate layer. Note that the different region types (card area region of interest (for the dealer)501, player betting circles502which are betting area regions of interest) are painted with different colors to designate the different types of betting areas. Note that inFIGS.5-6, the horizontally shaded areas represent one color and the vertically shaded areas represent a different color. The horizontally shaded regions are betting areas (betting area regions of interest) where the players place their bets (using chips) and the vertically shaded region is a card area region of interest (for the dealer's cards). The regions of interest can be filled in automatically (using optical recognition) or manually (by a technician working for the administrator). The image illustrated inFIG.5is an image file (e.g., JPG, etc.) FIG.6is a drawing illustrating an overlay (which can be considered a “layer”) with regions of interest used for the template illustrated inFIG.4, according to an embodiment. FIG.6can be considered an overlay, it designates the regions of interest (with different colors being used for different types of regions of interest) without associated template. Combining the overlay ofFIG.6with the template ofFIG.4results in the merged image ofFIG.5(or combined image ofFIG.5using the overlay as a separate layer). The overlay illustrated inFIG.6is an image file (e.g., JPG, etc.) FIG.7is a drawing illustrating an identification/labeling of the regions of interest of the template illustrated inFIG.4, according to an embodiment. The regions of interest are automatically (or manually) numbered in operation210, the numbers for the regions of interest (e.g., ‘ 1b’, ‘2b’, etc.) are also referred to as the name of the region of interest (the names/numbers are reflected in the top-down meta-data). These names are matched with the numbering/names generated in1005/1006so that the positions identified in operations1005/1006can be match to the corresponding positions in the top-down meta-data.FIG.7is outputted on a computer screen to a technician. FIG.8is a drawing illustrating an interface enabling templates and top-down metadata to be uploaded to another server (such as the cloud), according to an embodiment. The administrator can initiate an upload of the templates and respective top-down meta-data to the cloud (e.g., a server accessible remotely) so that casinos can retrieve the templates and their respective top-down meta-data. Alternative, the templates and respective meta-data would not get uploaded to the cloud until after they are validated and then can be automatically uploaded (for example in operation214).FIG.8illustrates such an interface enabling such uploads. The interface also enables administrators to update templates and their respective top-down meta-data at such times as the files are updated so that the updated files are distributed to the casinos (and the casino tables that need them). 
Templates are distributed electronically to the casinos and the casino tables at those casinos so each table has the proper template and top-down meta-data that it needs. In an embodiment, every device at each table would get all the templates for that casino (where the table is located) so that if the casino decides to put another game on that particular table the electronics at the table would already have the respective template(s) ready. FIG.9is a flowchart illustrating an exemplary method of distributing templates and top-down meta-data to different casinos and tables, according to an embodiment. In operation900, the casino server (also known as a “premise server”) checks in with an administrator server (e.g., database) (could also be considered a “cloud server”) operated by the administrator (e.g., a cloud server). This is the same server where template images and top-down meta-data are stored to in operation214. These files are made accessible to all casinos which are part of the system. The casino server can continuously (and automatically) check in the administrator server (e.g., cloud server) to see if there are any updated templates on the system (for the particular casino served by this casino server). This can be done, for example, by the administrator server using a checksum computed for all of the templates (needed by that casino) and the casino server can poll the administrator server to see if the checksum has changed periodically (if the checksum has changed then there are new templates on the administrator server). Alternatively, as new templates (template images) and their respective top-down meta-data are made available, then a notification can be sent to the casino severs that new templates/top-down meta-data are available. From operation900, the method proceeds to operation901, which determines if there are any changes (updates) to the current set of templates (and top-down meta-data) on the administrator server. If not, then there is no need to update the templates/top-down meta-data on the casino server and this method does not continue. If in operation901, there are any template/top-down meta-data changes (or new templates/top-down meta-data) then the method would proceed to operation902, which would receive (from the administrator server) and store the updated (or new) template images and/or respective top-down meta-data on the casino server (or associated storage devices). From operation902, the method proceeds to operation903, wherein the casino server then transmits the updated template images and/or updated top-down meta-data to all of the casino devices in the casino that are utilizing this technology (in other words the devices at the casino tables which are implementing the system described herein). This is so all of the casino devices at the tables have the latest versions of the templates and their respective top-down meta-data. Transmissions can be made using any kind of computer communications network (e.g., wired, wireless, etc.) In operation903, all of the updated templates and/or respective top-down meta-data are transmitted to all of the devices on the system so those devices can store local copies of the templates and top-down meta-data. 
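A minimal sketch of the checksum-based polling described with regard to operations900-903is shown below; the administrator URL, the query parameter, and the JSON handling are hypothetical, and the MD5 comparison simply mirrors the DetectedROIsChecksum idea described above:

import hashlib
import json
import time
import urllib.request

ADMIN_URL = "https://admin.example.com/templates"   # hypothetical administrator (cloud) endpoint

def checksum_of(templates):
    # MD5 over the serialized template set; any change to a template or its
    # top-down meta-data changes this value.
    return hashlib.md5(json.dumps(templates, sort_keys=True).encode("utf-8")).hexdigest()

def poll_for_updates(local_templates, casino_id, interval_s=300):
    local_checksum = checksum_of(local_templates)
    while True:
        with urllib.request.urlopen(f"{ADMIN_URL}?casino={casino_id}") as resp:
            remote_templates = json.loads(resp.read().decode("utf-8"))
        if checksum_of(remote_templates) != local_checksum:
            local_templates[:] = remote_templates     # operation 902: store updated copies
            local_checksum = checksum_of(remote_templates)
            # operation 903: push the updated templates/meta-data to the devices at each table
        time.sleep(interval_s)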
In addition to storing the latest versions of the files, these devices may need to update their camera meta-data if the template/top-down meta-data that was updated applies to the same game and template image/layout (as some games may have multiple templates/layouts) that is currently set up to be played on the table that the particular device is located. For example, if a particular table is currently playing “ACME Blackjack” (which has the ACME Blackjack table layout on the table of which the ACME Blackjack template image and top-down meta-data is based upon), the device would then re-adjust itself in order to utilize the updated ACME Blackjack template image(s) and top-down meta-data. Operation910determines whether a received updated template image(s) and top-down meta-data applies to the current game set up for play on the table (e.g., the layout on the table corresponds to any of the updated template image(s) and hence respective top-down meta-data). If updated template image(s) or respective top-down meta-data do not apply (correspond) to the current game set up for play on the table, then no further change is needed to be made by the device and the method can stop. A layout would correspond to a template image if the template image was generated from an image of that layout (subject to changes in the template image such as changes to resolution, cropping, etc., so that they are both really illustrating the same game.) If in operation910, the current game set up for play on a particular table (e.g., that table layout is currently present on the table) corresponds to the updated template image/top-down meta-data then the device at that table should be updated and so the method would proceed to operation911. In other words, if the template image and/or the top-down meta-data for the current game set up for play on this table has been updated, then the method would proceed to operation911as the table computer at the table needs to train for the updated template and/or top-down meta data (by adjusting the homography, etc.) It is possible that both the top-down meta-data file and its corresponding template image would be updated, or that only the template image but not the corresponding top-down meta-data file would be updated. However, it is more likely that the top-down meta-data would be updated but not the corresponding template image. This is because, after a period of time, the administrator may determine that the top-down meta-data needs to be adjusted to better define where the regions of interest actually are (for example, it might turn out that dealers don't generally deal to where the administrator initially anticipated requiring an adjustment of the regions of interest (and hence the top-down meta-data). If both the template image and the top-down meta-data for the current game set up for play on this table have not been updated, then the method would not proceed to operation911as no change to this table would be necessary. In operation911, the updated top-down meta-data is applied to the homography to determine revised camera meta-data. Camera meta-data is meta-data that identifies where the regions of interest are located on the views of (images captures by) the cameras trained (pointed) on the table (but these are typically not top-down cameras which would be directly above the table). Each table can have a number of cameras (e.g., 1 to 5 or more) placed at various places on the table pointed on the table to capture activity going on in the regions of interest on the table. 
The top-down meta-data identifies the regions of interest on the template (a top-down view) but with the cameras on the table the views are different, and so the top-down meta-data needs to be adjusted in order to accommodate the different view positions. Homography is a mapping of points on the template images to the camera views and is generated in operation1100. This mapping is used so regions of interest identified on the template image (the top-down view) can be transformed to identify where the regions of interest are located on the camera views on the table. The homography is based upon the positions and view directions of the cameras on the table. If the cameras change location or move the direction they are pointed in, then the homography would change. But if the cameras' locations and view directions remain the same, then the homography should typically stay the same even though the template image and/or top-down meta-data may have been updated. So in operation911, the previously generated homography (more on the homography is described herein) is applied to the updated top-down meta-data to determine the updated camera meta-data which identifies the regions of interest for each particular camera at the table. In this operation, if the top-down meta-data has not changed (i.e., only the template image has changed, which is unlikely), then the camera meta-data would not change either. In operation911, a pre-stored image (e.g., a “training image”) of an empty table (with no objects on it) from the particular camera is retrieved (which can be stored during operation1001) and used to pass to operation912. Note, however, that this operation (retrieving such an image) is typically only needed if in operation912the more complex embodiment is utilized. From operation911, the method proceeds to operation912, which generates the camera meta-data. The transformation in operation912can be performed as described in operation1102. This can be done using one of two methods. In a simpler embodiment, the camera meta-data can be generated from the top-down meta-data using the known homography by applying a transformation (e.g., by using library functions in OpenCV) to the top-down meta-data to translate the regions of interest therein into camera meta-data which contains the regions of interest from the point of view of the table cameras (see operation1102). Note that each camera would require its own transformation using the homography for that respective camera. Note that in OpenCV, the function CvInvoke.PerspectiveTransform converts coordinates from one perspective to another based on homography. A related OpenCV function is CvInvoke.WarpPerspective, which converts an entire image from one perspective to another based on homography. Thus, as described herein, the ellipses defining the regions of interest from the top-down meta-data are transformed to the camera meta-data (defining the same regions of interest in the camera image). In a more complex embodiment, in operation912camera meta-data can be generated by calling a more complex algorithm which is illustrated inFIG.20. The updated template, updated top-down meta-data, updated camera meta-data (from operation911), the homography, the pre-stored images retrieved in operation911, and the lens type (e.g., the focal length such as 1.8 mm, etc.) of the respective camera, are all passed to the algorithm illustrated inFIGS.20-22(beginning at operation2000) which will return camera meta-data of the regions of interest.
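For the simpler embodiment, the transformation of an elliptical region of interest can be sketched as follows; the approach of sampling the ellipse perimeter, projecting the samples through the homography, and refitting an ellipse is described in more detail below with regard to operation1102. OpenCV's Python bindings (cv2) are assumed, and the example ellipse values are merely illustrative:

import numpy as np
import cv2

def transform_ellipse(ellipse, H, samples=16):
    # ellipse = (cx, cy, width, height, angle_deg) in template (top-down) coordinates.
    cx, cy, w, h, angle = ellipse
    theta = np.radians(angle)
    t = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    # Points on the ellipse perimeter in template coordinates (basic trigonometry).
    x = cx + (w / 2) * np.cos(t) * np.cos(theta) - (h / 2) * np.sin(t) * np.sin(theta)
    y = cy + (w / 2) * np.cos(t) * np.sin(theta) + (h / 2) * np.sin(t) * np.cos(theta)
    pts = np.stack([x, y], axis=1).astype(np.float32).reshape(-1, 1, 2)
    # Project the perimeter points into the camera image using the homography.
    cam_pts = cv2.perspectiveTransform(pts, H).reshape(-1, 2).astype(np.float32)
    # Refit an ellipse to the projected points; this becomes the camera-side region of interest.
    (ccx, ccy), (cw, ch), cangle = cv2.fitEllipse(cam_pts)
    return {"CenterX": ccx, "CenterY": ccy, "Width": cw, "Height": ch, "Angle": cangle}

# Example (hypothetical values for betting circle '1b'), given a homography H for one camera:
# camera_roi = transform_ellipse((709.6, 203.1, 71.8, 73.8, 266.7), H)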
Note that operation2000is called for each camera being used. The camera meta-data returned by this more complex algorithm may be improved over the camera meta-data determined by the simpler method of simply transforming the top-down meta-data using the homography, because the more complex method can account for such things as lens distortion or a possible movement of the camera. This is also described in operation1102. From operation912, the method proceeds to operation913which fine tunes the result from the transformation. The updated camera meta-data is adjusted based on the actual locations of the bet positions that were determined when the technician was initially initializing the table (see operation1100which can store the bet positions of the betting circles at the table). For example, betting spot (also referred to as betting circle) 1b (the first bet) might return its center position to be at coordinates (50,500) but the previous run found the actual center of 1b to be (54,515). This would mean an adjustment of (4,15) pixels for all of the regions belonging to the first position (e.g., the first betting circle, the first card position, the first side betting circle, etc.). If the simple method (the simple transformation in operation912without calling operation2000) is implemented, then the fine-tuning step can be skipped but a lens distortion correction (as known in the art) should be applied. From operation913, the method proceeds to operation914, which saves the revised camera meta-data to the device. Each camera (e.g., left camera, right camera) at the table has its own camera meta-data which identifies where the regions of interest are located for the images taken by that camera. Note that the method illustrated inFIG.9can be performed while the table is not being played on or while the table is in the middle of conducting a game. Note that as long as the cameras have not been moved too much (since the training/initiation for these cameras has been implemented) the homography for the cameras would remain the same. A small amount of shifting of the camera (e.g., not more than 20 pixels) could be permitted without requiring re-training. When a new table is set up, homography needs to be generated. Homography is a set of points which map the points on the template image to points on the image taken by a particular camera at the table. For example, Table II illustrates a sample homography. In Table II, (TX1, TY1) represents a first point on a template image, (TX2, TY2) represents a second point on the template image, (TX3, TY3) represents a third point on the template image, etc. (RX1, RY1) represents where the first point is found on an image taken by a right camera at a casino table, (RX2, RY2) represents where the second point is found on the image taken by the right camera at the casino table, and (RX3, RY3) represents where the third point is found on the image taken by the right camera at the casino table. (LX1, LY1) represents where the first point is found on an image taken by a left camera at a casino table, (LX2, LY2) represents where the second point is found on the image taken by the left camera at the casino table, and (LX3, LY3) represents where the third point is found on the image taken by the left camera at the casino table. All of these points are given in standard (x,y) coordinates. There is no limit to the number of pairs of points used in the homography (and the more points the more accurate the mapping, but there should be at least four pairs of points). TABLE II
Template: {(TX1, TY1), (TX2, TY2), (TX3, TY3), . . .}
Right Camera: {(RX1, RY1), (RX2, RY2), (RX3, RY3), . . .}
Left Camera: {(LX1, LY1), (LX2, LY2), (LX3, LY3), . . .}
The initialization process would determine the homography for the table (which is dependent upon the position and angle of the cameras) so that the top-down meta-data can be transformed to identify the regions of interest on the side cameras (or any camera that is not at the top-down angle from which the template image was taken, that angle being the one which would capture the same image as the template image). Given a homography such as that in Table II (with at least four pairs of points), a point on one image can be mapped to the corresponding point on another image. This can be done as known in the art; for example, there exists a function in OpenCV (Open Source Computer Vision) called PerspectiveTransform which can accomplish this. Typically, at least four pairs of points would be needed to compute the perspective transform. Thus, once given the homography, regions of interest (defined by points) on the template image(s) (e.g., the top-down meta-data) can be mapped to regions of interest as viewed by a camera using the OpenCV PerspectiveTransform function (or any other computer code which accomplishes this result). Typically (whenever in the current method/system the transformation is made), at least four point pairs (four points on the template image and their four corresponding points on the camera image) would be needed to make the transformation. The transformation can be made with any number of points greater than four (actually pairs of points: a point on the image taken by the camera and a corresponding point on the template); e.g., if there are seven detected betting regions (see operations1003-1006) then all seven point pairs can be used. However, the transformation can also be done using four or more points (but fewer than the number of known points); the points used can be chosen randomly or can be predetermined. This transformation can take place in operation912(in the “simple” embodiment where operation2000is not called), operation1102(in the “simple” embodiment where operation2000is not called), or operation2001(in the more complex embodiment when operation2000is called by operation912or1102). In this manner, video (or still) images from cameras can be analyzed and the regions of interest (from the top-down meta-data defined by the template image/overlay) as mapped to the viewpoint of the camera (e.g., the camera meta-data) are then analyzed for relevant activity (e.g., chips placed in betting regions, etc.) Table III below represents one example of homography which can be used to map points from the top view (template) to points on the view from the right camera (“TABLER”). If there is another camera at the same table (e.g., a left camera), a different such file would exist for the homography for the left camera. The homography itself is two sets of points: a set of points on the template image which correspond to (map to) a set of points on an image taken by a camera at the table (but not by a camera which has the same point of view as that from which the template image would be taken, which would be directly top-down). The homography data can also include other parameters, such as an identifier of the camera being mapped to (e.g., right camera, left camera, etc.) Thus, for example, point 727,172 on the top view (template image) corresponds to point 26,511 on the camera image, etc.
TABLE III“_CamName”: “TABLER”,“AutoDetectHomographyTopView”:“[727.0,172.0],[645.0,243.0],[548.0,287.0],[444.0,301.0],[340.0,284.0],[247.0,238.0],[171.0,166.0]”,“AutoDetectHomographySideView”:“[26.0,511.0],[227.0,498.0],[451.0,504.0],[688.0,535.0],[926.0,604.0],[1164.0,719.0],[1398.0,935.0]”, In an embodiment, a casino may wish to change the game currently offered at a particular table. In order to effectuate this change using the system, a casino (or administrator) employee would go to the computer system at the table (e.g., which can be embedded inside a table sign or on a separate device) and indicate (by pressing buttons, etc.) that they want to change the game. Then, they can identify the new game (layout being used) at the table (e.g., seeFIG.12) and the respective pre-stored template image (assuming it is available) is retrieved. Then, the method can proceed to operation912in order to initialize this new (retrieved) template in the same manner as if a template was updated. FIG.10is a flowchart illustrating an exemplary method of training cameras at the table in order to identify regions of interest, according to an embodiment. In operation1000, a technician (working for the administrator or the casino) would be at an actual casino table in the casino. The location of the cameras and the direction they are pointed would be set by the technician and fixed so they would not move (or turn/change viewpoint). Change the location or direction the camera is pointed would change the homography (when the table is in play the camera(s) should be mounted to remain fixed in their location and orientation). The technician would select the proper template image from among the library of templates on the casino server which matches the game (layout) on the current table. Different templates may exist for the same game depending on the number of seats, so the appropriate template should be selected based on the number of seats at the table. This can be done using a computer (or table sign serving as a computer) at the table itself, which has in output device and an input device (e.g., touch-screen). From operation1000, the method proceeds to operation1001, in which the cameras at the table capture training images of an empty table (no objects placed on the table). Table cameras can also refer to “side view cameras” since cameras at the table would typically take side views but would not be able to take top-down views because that would require a camera on the ceiling. In an embodiment, cameras mounted on the ceiling can be used in the same manner as side cameras as described herein. From operation1001, the method proceeds to operation1002, wherein the technician places chips in the main betting locations and live images are captured on the table cameras. Note that each camera at the table would be “trained” individually (operations1001to1008would be performed for each camera). From operation1002, the method proceeds to operation1003which subtracts the training image (taking from operation1001) from the live image (taken in operation1002) in order to find the objects (e.g., chips) placed in operation1002. In other words after the subtraction, only the objects placed in operation1003are included in the image. From operation1003, the method proceeds to operation1004, which determines an anchor object (in the subtracted image determined in operation1003) based on the technician's selected camera view direction. In other words, the anchor object is typically the object closest to the camera. 
The anchor object can be manually selected by the technician or automatically identified by the system. From operation1004, the method proceeds to operation1005, which automatically moves through all of the detected objects (using the subtracted image from operation1003) and numbers (names) the objects in order of their distance along the “bet horizon.” The numbers/names would be used to match these identified positions to their corresponding positions in the top-down meta-data. Note that typically the automatic number is done so that number 1 is the betting circle (chip) closest to the camera, and then 2 is the next betting circle (chip) and so on (in another embodiment, the automatic numbering can number the betting circles (FIG.17) so each betting circle's number/name matches its corresponding betting circle/betting region of interest from the template (FIG.7), in this embodiment number 1 in the camera image would instead say ‘7b’, number 2 in the camera image would instead be ‘6b’, etc.) The technician using the system can indicate to the system where the camera is located (e.g., left camera (pointed right) or right camera (pointed left)). In this way, the system would know which of the identified positions matches the respective betting region of interest in the top-down meta-data so the camera meta-data would use the same name/identifier for that betting region of interest. For example, betting circle number 1 inFIG.17would correspond to what would be betting circle 7b inFIG.7(the layout/template inFIG.7is not the same as the one used inFIG.17but nevertheless for illustrative purposes assumeFIG.7had a betting circle 7b numbered in the manner), and betting circle number 7 inFIG.17would correspond to betting circle 1b inFIG.7(again, these are not the same layouts but this is mentioned for exemplary purposes). Each region of interest can have a unique name/identifier which would be used to map a region of interest in the top-down meta-data to the same region of interest in the camera meta-data. The bet horizon is the path from the anchor object to the farthest object on the table. Each successive object is detected by using the previous angle and distance as hints to predict where the next object (chip) might be. The polygons illustrated inFIG.17represent the predicted locations for the objects. From operation1005, the method proceeds to operation1006, which displays all of the detected objects and their respective number (e.g., the detected objects should all be numbered in order based on their position on the table). From operation1006, the method proceeds to operation1007, which prompts the technician to confirm whether the all of the objects are detected and are numbered correctly. The numbering of objects can be automatically displayed (e.g., seeFIG.17) and the numbering/naming used for each region of interest in the camera meta-data should match the corresponding region of interest (e.g., betting circles) used on the top-down meta-data for the template image (e.g., seeFIG.7although the template inFIG.7does not correspond to the layout used inFIG.17but nevertheless illustrates a numbering of regions of interest). The numbering/naming of betting regions of interest should match so that the detected chips (by the camera) can be mapped to their corresponding betting region of interest on the template image (for example seeFIG.7). 
If the technician responds with yes (that all of the objects are detected and numbered correctly), then the method proceeds to connector B which continues ontoFIG.11. If in operation1007, the technician responds to the system that the objects are not all numbered correctly (or all of the objects are not detected), then the method proceeds to operation1008. In operation1008(seeFIG.19), the technician can swipe (trace out) on the touch-screen the bet (object) horizon (the arc of the object locations on the table) so that the system can use this information (as a “hint”) to look for the objects and order them in the proper order. The technician can also adjust some of the settings (e.g., white balance, etc.) in order to get a better image of the table. The method then returns to operation1004, where another attempt is made to identify all of the objects in the proper order. Note that the method illustrated inFIGS.10(and11) are performed for each camera at the table. After training one camera at the table (e.g., the left camera) next another camera at the same table (e.g., the right camera) needs to be trained in the same manner. FIG.11is a flowchart illustrating an exemplary method of generating the camera meta-data, according to an embodiment. From connector B, the system now knows where all of the objects are (e.g., their coordinates) for the respective camera. In operation1100, the homography is stored to the device (e.g., the camera or device/computer at the table driving the camera). The point used (coordinate) is typically the point at the center of the object (e.g., the center of the chips detected). The homography can be stored for example in a format such as that illustrating in Table III. Thus, the points detected inFIG.10are now all stored in a homography file along with corresponding top-down meta-data points. From operation1100, the method proceeds to operation1101which captures the live image at the table using the respective camera. This operation can be optional, as a previously captured live image can be used, or if the simple method is applied in operation1102then another image is not needed to apply the transformation. From operation1101, the method proceeds to operation1102. In one embodiment, operation1102can generate the camera meta-data by using the homography and the top-down meta-data as described herein (by converting the regions of interest in the top-down meta-data to camera regions of interest in the camera using the homography by using known math and/or open source functions such as PerspectiveTransform in OpenCV (or others)). This simply transforms the regions of interest described in the top-down meta-data using the homography to obtain their locations in the camera's image. Note that each camera would require its own transformation using the homography for that respective camera. All regions of interest (including the betting regions of interest where the chips were detected) present in the top-down meta-data should be converted (transformed) into camera meta-data (so the regions of interest in the images taken by the camera(s) can be analyzed). Note that (in an embodiment) when points are transformed from the top-down meta-data to the camera meta-data (using the homography), an ellipse (defining the region of interest) is also transformed from the template image to the camera image. For example, inFIG.7, betting region of interest 1b (and all betting regions of interest) are circles (a circle is an ellipse where the width equals the height). 
Note, however, that when this circle is transformed into the image taken by the camera, the circle then becomes an ellipse (where the width does not equal the height). This elliptical region defines the betting region of interest on the image taken by the camera and is what should be analyzed by any software analyzing the regions of interest for activity. Thus, while points on the template image are transformed to the image (view) taken by the camera, there is also another similar transformation taking place regarding the boundaries of the betting region. This can be accomplished by computing points on the ellipse perimeter defining the region of interest on the template image (points on the ellipse can easily be computed using trigonometry). The parameters of the ellipse for each region of interest on the template image are defined in the top-down meta-data (e.g., center x, center y, angle, width, height). For a circle, the width=height and the angle (rotation) shouldn't matter. Any number of such points can be computed on the ellipse defining a region of interest on the template (top-down meta-data), e.g., 5 to 10 or more; the more points used, the more accurate the result will be. These elliptical points on the template image are then also transformed to the camera view (in the same manner as transforming points described herein using the homography) to elliptical points on the camera image. Then, an ellipse is determined that is defined by the elliptical points on the camera image, using an OpenCV library function such as fitEllipse which returns the parameters defining an ellipse, such as the center x (x coordinate of its center), center y (y coordinate of its center), width, height, and angle (rotation in degrees), from the points on the perimeter of the ellipse. These parameters are then used to define the ellipse in the camera meta-data. In this way, all of the ellipses defining all of the regions of interest in the top-down meta-data (from the template image) are transformed into corresponding ellipses defining the same regions of interest but in the camera image (stored as camera meta-data). Thus, all regions of interest on the template image are transformed to their corresponding (respective) regions of interest in the respective camera view (these transformations are done for each camera). In another embodiment, instead of simply generating the camera meta-data using a simple transformation using the homography as described above, in operation1102a more complex method can be used to determine the camera meta-data. In this more complex embodiment, operation1102would apply (e.g., by calling it as a subroutine) the transformation illustrated inFIG.20starting with operation2000(and continuing toFIGS.21-22). Note that operation2000is called for each camera being used. The end result is that the camera meta-data may be improved over the camera meta-data which can be determined in operation1101without the transformation illustrated inFIG.20. One reason the more complex method may return better results is due to camera lens distortion or a possible movement of the camera or of a device in which the camera is contained. From operation1102, the method proceeds to (optional) operation1103, which fine tunes the camera meta-data based on actual chip locations. This can be done as described with regard to operation913.
For example (and this applies to the fine tuning in operation913as well), the system now knows the centers of the regions shown when determining the homographies (which are the centers of the bet circles). The system also knows the centers of the betting circles as calculated by the method illustrated inFIGS.20-22). The different (but corresponding) regions are compared and the regions of interest would be shifted based on the deltas (change between the two regions). If the simple method (the simple transformation in operation1102without calling operation2000) is implemented, then the fine-tuning step can be skipped but a lens distortion correction (as known in the art) should be applied. From operation1103, the method proceeds to operation1104, which stores the camera meta-data along with the template used and any other relevant data. Table IV below illustrates one example (of a small portion for illustrative purposes but an entire set of camera meta-data would be much longer) of how the camera meta-data can be stored. The camera meta-data stores all of the regions of interest and where they are located for each camera (i.e., on each image taken by that camera). So when an image is taken by that camera, the camera meta-data identifies the location (which can be identified by coordinates, etc.) on that image where regions of interest are located so they can be further analyzed by the system (or another system). “TABLER” is the right camera on a table, while “TABLEL” is the left camera on the table. If there is one camera being used, it should be able to see all of the regions of interest on the table. If there are two cameras being used, between both cameras they should be able to see all of the regions of interest on the table. This is true for as many cameras as are used on the table. The data illustrated in Table IV is the camera meta-data can be considered the “final product” of the system described herein (before operation104is performed which actually uses this data to analyze activity on the table). Note that this data can be stored in numerous ways and this is only one example. 
TABLE IV{“_CamConfigs”: {“TABLER”: {“_ROIBets”: {“Id”: “c649873e-ba21-419d-9d09-7f9b59f8a10e”,“_Num”: 3,“_Name”: “3b”,“_CamName”: “TABLER”,“_Ellipse”: {“_CenterX”: 451.0,“_CenterY”: 504.0,“_Angle”: 265.117,“_Width”: 155.475891,“_Height”: 40.06923,“_BoundingRect”: “424, 425, 54, 158”},“_Type”: 2},“_ROISideBets”: {“_Id”: “4a00eaa5-a1da-4f98-857a-98d5ed018f20”,“_Num”: 3,“_Name”: “3s”,“_CamName”: “TABLER”,“_Ellipse”: {“_CenterX”: 334.45166,“_CenterY”: 520.465149,“_Angle”: 262.702881,“_Width”: 73.53546,“_Height”: 20.4066086,“_BoundingRect”: “320, 483, 29, 75”},“_Type”: 3},“_CamName”: “TABLER”,“_AutoDetectHomographyTopView”:“[727.0,172.0],[645.0,243.0],[548.0,287.0],[444.0,301.0],[340.0,284.0],[247.0,238.0],[171.0,166.0]”,“AutoDetectHomographySideView”:“[26.0,511.0],[227.0,498.0],[451.0,504.0],[688.0,535.0],[926.0,604.0],[1164.0,719.0],[1398.0,935.0]”,},“TABLEL”: {“_ROIBets”: {“_Id”: “aaaa8491-4f04-40c3-9784-50b1310b6b77”,“_Num”: 4,“_Name”: “4b”,“_CamName”: “TABLEL”,“_Ellipse”: {“_CenterX”: 687.0,“_CenterY”: 535.0,“_Angle”: 268.084351,“_Width”: 178.361664,“_Height”: 51.9952927,“_BoundingRect”: “658, 445, 58, 180”},“_Type”: 2},“_CamName”: “TABLEL”,“_AutoDetectHomographyTopView”:“[727.0,172.0],[645.0,243.0],[548.0,287.0],[444.0,301.0],[340.0,284.0],[247.0,238.0],[171.0,166.0]”,“_AutoDetectHomographySideView”:“[26.0,512.0],[227.0,498.0],[451.0,504.0],[687.0,535.0],[926.0,604.0],[1164.0,719.0],[1401.0,936.0]”,}},} Note that regions of interest are defined for each of two cameras (although there can be any other number of cameras as well such as 1 to 10 or more). The “name” tag in Table IV is a unique identifier for the particular region of interest being defined (seeFIG.7) where regions of interest are given unique names (e.g., “3b”). The “Type” tag identifies what type of region of interest is defined (e.g., 2 for a betting region, 3 for a side bet, etc.) The field descriptions used in Table IV for the camera meta-data can be the same as used in Table I for the top-down meta-data. Note that a field CamName indicated which camera captures the ROI. Note the correlation between the camera meta-data in Table IV and the top-down meta-data in Table I (if portions of Table IV were to represent a transformation of ROI defined in Table I). For example, in the top-down meta-data, betting region “3b” is located at an ellipse with X=507.2059, Y=304.739624, angle=174.947662, width=75.48305, height=76.14297. Note that X and Y are the (x,y) coordinates in the width and height are the height of the ellipse (if the width=height it would be a circle), and angle can be the rotation of the ellipse. In the camera meta-data, this same betting region “3b” is found on the image taken on the right camera (“TABLER”) as an ellipse with X=451.0, Y=504.0, angle=265,117, width=155.475891, height=40.06923. Thus, this same region maps to a different area on the image because of course the images from the table cameras are taken at a different angle than top-down (the template is a top-down image) as well as a different resolution. So given the homography, all of template regions of interest (defined in the top-down meta-data) can be translated to camera meta-data (where the same regions of interest can be found on images taken by the respective camera) using the homography. Note that the “Boundingrect” tag defines a rectangle which can bound the defined ellipse. Each region of interest in the top-down meta-data would map to a left, right (or both) camera image in the camera meta-data. 
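As a rough sketch of the fine tuning of operations913and1103(shifting the transformed regions of interest to match the actual chip locations observed during training), the following illustrative routine is provided; the data layout is hypothetical and the coordinates repeat the (50,500)/(54,515) example given above:

def fine_tune(camera_rois, predicted_centers, detected_centers):
    # camera_rois: {position_number: [roi_dict, ...]} -- the bet, card and side-bet ROIs
    # belonging to the same player position all share one delta.
    for pos, (px, py) in predicted_centers.items():
        dx = detected_centers[pos][0] - px
        dy = detected_centers[pos][1] - py
        for roi in camera_rois.get(pos, []):
            roi["Ellipse"]["CenterX"] += dx
            roi["Ellipse"]["CenterY"] += dy
    return camera_rois

# e.g., position 1 predicted at (50, 500) but the chip actually detected at (54, 515):
# every region of interest for position 1 is shifted by (+4, +15) pixels.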
A different homography would exist for each camera (since each camera has a different view). Note the homography is also present in the camera meta-data in Table IV for both the right camera (“TABLER”) and the left camera (“TABLEL”). The tag “CamName” designates the particular camera, the tag “AutoDetectHomographyTopView” represents the centers of the bet positions from the top view for the template assigned to this table, and the tag “AutoDetectHomographySideView” represents the centers of the bet positions from a camera view (e.g., right camera, left camera, etc.) Note that the points in the AutoDetectHomographyTopView and AutoDetectHomographySideView represent the same locations but for different frames of reference (e.g., the template image for the AutoDetectHomographyTopView and the camera image for the AutoDetectHomographySideView). Mathematically, each set of points defines a plane, and once both planes are defined, a known mathematical function (e.g., the OpenCV function PerspectiveTransform) can transform any point on one plane to the other (which is how regions of interest known on the template can be converted to a camera view). The camera meta-data in general is generated by either 1a) a simple mathematical transformation from the top-down regions of interest to the camera regions of interest via the homography, or 1b) the more complex method illustrated inFIGS.20-22, which performs the simple mathematical transformation from the top-down regions of interest to the camera regions of interest via the homography and then adjusts further by transforming the template image to match the camera image to improve the accuracy of the regions of interest; and then (after either 1a or 1b) 2) fine tuning (operation913or1103) to slightly shift the regions of interest to match where the actual objects (e.g., chips) were placed on the layout, and also applying a lens distortion correction. FIG.12is an example of an output of different templates that can be selected to match a particular game at a casino table, according to an embodiment. In operation1000, a technician can scroll through multiple template images on a computer at the casino table to pick out (e.g., by touching) the particular template that matches the felt installed at the particular casino table. FIG.13is an example of a camera image of an empty table, according to an embodiment. In operation1001, the camera takes an image of an empty (clear) table with no objects on it. This is used later to identify objects placed on the table. FIG.14is an example of a camera image of bets placed at the empty table, according to an embodiment. In operation1002, the technician places objects on the table (e.g., a chip on all of the betting circles) so the locations of the betting circles can be identified. While the locations of the betting circles are known in the top-down meta-data, they are not yet known on the table cameras (e.g., right camera, left camera, and others) which are located and pointed at the table. FIG.15is an example of the bets placed image subtracted from the empty table, according to an embodiment. The empty table image (FIG.13) is subtracted from the empty table with objects image (FIG.14) to produce the image illustrated inFIG.15which shows just the new objects placed in operation1002. This shows the betting circles for each of the players (betting circles are regions of interest). The subtraction can be done, for example, by using OpenCV functions (e.g., absdiff). This is done in operation1003.
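A minimal sketch of this subtraction and the subsequent object detection (operations1003-1004) follows, assuming OpenCV's Python bindings (cv2); the file names, threshold, and minimum blob area are illustrative assumptions:

import cv2

empty = cv2.imread("empty_table.png", cv2.IMREAD_GRAYSCALE)    # operation 1001 (see FIG. 13)
live = cv2.imread("chips_placed.png", cv2.IMREAD_GRAYSCALE)    # operation 1002 (see FIG. 14)

diff = cv2.absdiff(live, empty)                                 # operation 1003 (see FIG. 15)
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)       # hypothetical threshold
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)           # remove small noise specks

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
centers = []
for c in contours:
    if cv2.contourArea(c) < 200:                                # ignore blobs too small to be chips
        continue
    x, y, w, h = cv2.boundingRect(c)
    centers.append((x + w / 2, y + h / 2))                      # candidate chip (betting circle) centers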
FIG.16is an example of an anchor object identified in the subtracted image, according to an embodiment. In operation1004, an anchor object is identified out of all of the objects identified in operation1003. Usually, for a camera located on the left side (left as seen by the players, right as seen from the dealer's side of the table) of the table, the anchor object will be in the lowermost right position out of all of the objects. FIG.17is an example of identified objects and polygons used to predict locations for the objects, according to an embodiment. In operation1005, the system moves through all of the objects to number them consecutively. Typically, the next closest object after the anchor object will be considered the second object. Then after that, the previous angle and distance are used as hints to predict where the next object would be. Polygons can be used as illustrated inFIG.17in order to represent the predicted locations for the objects. Note that the numbering can also be in reverse order as well (e.g., ‘1’ would be ‘7’, ‘2’ would be ‘6’, ‘3’ would be ‘5’, ‘4’ would be ‘4’, ‘5’ would be ‘3’, ‘6’ would be ‘2’, ‘7’ would be ‘1’) to be consistent with the numbering direction illustrated inFIG.7. These numbers can be used as names to identify each region of interest and each number can be appended with a letter (e.g., ‘b’ to indicate it is a betting region of interest). The numbering used on the regions of interest where the chips are placed should correspond to the numbering used on the top-down meta-data (of the template) so that the corresponding regions of interest on one (e.g., the top-down meta-data) can be mapped to the corresponding regions of interest on the other (e.g., the camera meta-data). FIG.18is an example of the system incorrectly identifying all of the new objects, according to an embodiment. FIG.18illustrates a situation where the system incorrectly identified all of the new objects. This can be due to noise, background activity, extra objects on the table, or other imperfections in the system. In this situation, the technician would indicate to the system (in operation1007) that the objects are not numbered correctly so that the method would proceed to operation1008so that the technician can try to give the system additional information to improve its identification and numbering of the objects. FIG.19is an example of a technician tracing out an arc of bet locations on the table, according to an embodiment. In operation1008, the technician can trace out the object arc where the objects are located. For example, inFIG.19there are two arcs that are traced (using a touch-screen) by the technician, whereas the objects (chips/bets) are in between the two arcs. In another embodiment, the technician can trace one line which touches all of the objects (chips/bets). The numbering inFIG.19can correspond to the numbering used for top-down meta-data mapping, or if different names/numbers are used the system would know (e.g., by the direction the camera is pointed which can be inputted into the system) which numbered detected chip in a betting circle corresponds to which region of interest on the template (top-down meta-data) so the camera meta-data can be generated in which the regions of interest therein correspond to the counterpart regions of interest in the top-down meta-data. 
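A simplified Python/OpenCV sketch of operations1004-1005(illustrative only: it picks the lowermost-right blob in the subtraction mask as the anchor and then numbers the remaining blobs by greedy nearest-neighbor hops, omitting the polygon-based prediction and the reverse-numbering option described above) could look like this:

import cv2
import numpy as np

mask = cv2.imread("new_objects_mask.png", cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Represent each detected object by the center of its bounding rectangle.
centers = []
for c in contours:
    if cv2.contourArea(c) > 100:          # ignore small noise blobs
        x, y, w, h = cv2.boundingRect(c)
        centers.append((x + w / 2.0, y + h / 2.0))

# Anchor object: lowermost (largest y), then rightmost (largest x).
anchor = max(centers, key=lambda p: (p[1], p[0]))

# Greedy ordering: repeatedly hop to the nearest remaining object.
ordered, remaining = [anchor], [p for p in centers if p != anchor]
while remaining:
    last = ordered[-1]
    nxt = min(remaining, key=lambda p: (p[0] - last[0]) ** 2 + (p[1] - last[1]) ** 2)
    ordered.append(nxt)
    remaining.remove(nxt)

for num, pt in enumerate(ordered, start=1):
    print(f"object {num}b at {pt}")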
As described above, once the top-down meta-data and the homography are known, regions known on the template (i.e., the top-down meta-data) can be mathematically mapped to regions on the camera views (i.e., the camera meta-data). Instead of the simple mathematical transformation, an optional more complex algorithm (seeFIGS.20-22) can be used which takes into consideration the images on the table captured by the table cameras which can be correlated to the template image. Using this additional information, corrections for things such as lens distortion by the camera lens as well as a slight change in position of the camera can be accounted for and adjusted. For example, the graphics on the template image (e.g., the “ACME” inFIG.4or5, the betting circles, etc.) can be transformed and correlated to the graphics on the camera image (e.g., the “ACME” in the camera image (for example seeFIG.14)) in order to more accurately correlate the camera image to the template image and make adjustments to the regions of interest determined initially by the homography to more accurately locate the regions of interest in the camera image (and then stored as the camera meta-data). FIG.20is a flowchart illustrating an exemplary method of implementing a transformation to determine camera regions of interest, according to an embodiment. In operation2000, the method starts at least with the following inputs: the template image(s), the top-down meta-data, the camera image, the homography, and the camera lens type (e.g., the focal length, etc.) From operation2000, the method proceeds to operation2001, which uses the homography to rotate the template image to approximate the side view. Starting from a top-down view of a table the viewpoint is changed (using a so-called homography) to resemble what a camera on the side of the table would see. This can be done by using mathematical transformations known in the art (e.g., an OpenCV function called WarpPerspective). The template image(s) should be warped (rotated/twisted etc. to the same plane as the table from the camera's point of view) so that pattern matching can be done. Because the camera can be in a slightly different location than originally ‘guessed’, the approximate side view and the view from the camera will be very similar but not identical (although in another embodiment, guesses are not used). The rest of the workflow tries to correct this difference. From operation2001, the method proceeds to operation2002, which cleans up the images. From a camera's side-view point of view the table with chips and cards is visible in the bottom of the camera image whereas the top part of the image (i.e., everything above the edge of the table) will contain people, chairs, and other information that is irrelevant to the ROI positioning. In a later step the approximate side-view gets aligned with the camera side-view, and to make this work as well as possible the area above the edge of the table gets erased before trying to align them (since otherwise irrelevant information can throw off the alignment). In order to do this the table edge in the top-down template is stored and also calculated when going from the top-down to the approximate side-view template. Everything above the table's edge is discarded in the approximate side-view template. From operation2002, the method proceeds to operation2003, which applies lens distortion adjustments. Every lens causes an image to be distorted in certain ways.
For example, a wide-angle lens may cause straight lines to become curved (e.g., barrel distortion or pincushion distortion). In order to remove any artifacts caused by the lens these distortions are removed from the side-view camera image. The focal length of the lens is used to remove the distortion, as known in the art. The system can keep a stock set of homographies used only for lens distortion. The focal length is just a mnemonic/key for a lookup to get the lens distortion homography to be applied. From operation2003, the method proceeds to operation2004which cleans up the image. The area that was discarded in the approximate side-view is reused to discard the same area above the table's edge in the undistorted camera side view. The area discarded in the camera side-view will not be the exact area that would ideally need to be discarded, but because it's a close guess it works well enough. From operation2004, the method proceeds to operation2005, which uses the homography to rotate the top-down regions to approximate the side view. This can be done as described herein, for example using the OpenCV function PerspectiveTransform. This is described in operations912,1102with regard to the “simple” approach (which does not call operation2000), which uses an OpenCV function such as PerspectiveTransform to transform regions of interest in the top-down meta-data to regions of interest in the camera meta-data. This transformation is described in numerous places herein. The same calculations used to take the top-down template image to the approximate side-view template are used to calculate the regions in the approximate side-view template. From operation2005, the method proceeds to operation2006, which creates OpenCV feature matchers (ORB or AKAZE). In order to align the approximate side-view (generated from the top-down view) with the undistorted camera view they need to be matched together and the best way of doing this is to compare features that appear in both images. To do this feature matching, characteristic points in each of the two images are identified and compared to one another. ORB and AKAZE are two known functions in OpenCV to do this. From operation2006, the method proceeds to operation2007, which runs the feature match. The feature points extracted out of the camera image and the approximate side-view image are compared to one another and non-similar points will have to be filtered out. From operation2007, the method proceeds to operation2008which gets all angles between matched features. The camera view and the approximate side-view are put side-by-side, and then lines are drawn between the matching points. Each line will be at a certain angle. From operation2008, the method proceeds to operation2009, which removes invalid matched features based on angles and OpenCV methods. In most cases this will result in fairly parallel lines, and when this is not the case the matching points should get filtered out. In a similar fashion outliers can be filtered out as well as feature points that are too far apart from one another. From operation2009, the method proceeds to connector C which then continues onFIG.21. FIG.21is a flowchart continuing the exemplary method of implementing a transformation to determine camera regions of interest, according to an embodiment. From connector C, the method continues to operation2100, which gets the homography from matched features.
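The following Python/OpenCV sketch (a simplified illustration under assumed file names and thresholds, not the claimed implementation) strings together operations2001and2006-2009and the homography estimation of operation2100: the template is warped to an approximate side view with an initial homography, ORB features are matched against the camera image, matches are filtered by angle consistency, and a refined homography is fitted to the surviving matches with RANSAC.

import cv2
import numpy as np

# Inputs (assumed): the top-down template, the camera frame, and an initial
# top-down-to-side homography saved earlier.
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)
camera = cv2.imread("camera_side_view.png", cv2.IMREAD_GRAYSCALE)
H_initial = np.load("homography_top_to_side.npy")

# Operation 2001: warp the template into an approximate side view at the camera resolution.
approx = cv2.warpPerspective(template, H_initial, (camera.shape[1], camera.shape[0]))

# Operations 2006-2007: detect and match ORB features (AKAZE_create() would also work).
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(approx, None)
kp2, des2 = orb.detectAndCompute(camera, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

# Operations 2008-2009: angle of the line joining each matched pair when the two images
# sit side by side; keep only matches whose angle stays close to the median (roughly
# parallel lines).
def match_angle(m):
    x1, y1 = kp1[m.queryIdx].pt
    x2, y2 = kp2[m.trainIdx].pt
    return np.degrees(np.arctan2(y2 - y1, (x2 + approx.shape[1]) - x1))

angles = np.array([match_angle(m) for m in matches])
good = [m for m, a in zip(matches, angles) if abs(a - np.median(angles)) < 5.0]

# Operation 2100: fit a refined homography to the filtered matches (RANSAC drops outliers).
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H_refined, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)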
Based on the filtered matching feature points it's possible to align the approximate side-view with the camera view, thereby creating a corrected approximate camera side-view (i.e., another, better homography than the original one). From operation2100, the method proceeds to operation2101, which calculates a score based on a delta of the region areas between the original homography and the feature matched homography. When applying feature matching a scoring mechanism is used in order to determine if the matching was good or bad. Usually scores based on the feature matching are used (e.g., the number of matching feature points), but in this situation good scores were acquired by comparing the surface areas of regions in the camera side-view to the surface areas of the regions in the approximate side-view. From operation2101, the method proceeds to operation2102, which determines whether the score (computed in operation2101) is too low. If in operation2102it is determined that the score computed in operation2101is too low, then the method proceeds to operation2103, which returns rotated regions based on the original homography. There is too little confidence to continue. The feature matching process didn't succeed so stick with the best possible result, namely the regions derived from the approximate side-view template (from before the feature matching was applied). The method then proceeds to connector D (which continues inFIG.22). If in operation2102it is determined that the score (computed in operation2101) is not too low, then the method proceeds to operation2104, which uses rotated regions based on the feature matched homography. Feature matching resulted in the approximate side-view template being aligned with the camera side-view. The same calculation can then be used to calculate the regions in the now aligned approximate side-view. From operation2104, the method proceeds to operation2105which runs the feature match again. Because the aligned approximate side-view is usually better than the original approximate side-view, it means that the system can reapply feature matching again as well as perform some additional fine-tuning by matching each region inside the images as well. From operation2105, the method proceeds to operation2106, which performs image correction. The aligned approximate side-view is better than the original approximate side-view. Because of this the to-be-discarded zone above the table's edge also more closely matches the actual side-view camera's to-be-discarded zone. Hence this zone is used instead to discard the top area in both the camera side-view and in the aligned approximate side-view template. The approximate side-view is also quite sharp compared to the camera side-view and so it gets blurred in order to more closely match the images. From operation2106, the method proceeds to operation2107, which fine tunes the regions using local feature matching. Feature matching is used to align (an area around) each region in the camera side-view and the aligned side-view template. From operation2107, the method proceeds to operation2108, which calculates a score as well for each region of interest (ROI). A score is calculated based on the number of regions that have been reliably fine-tuned, where a region is considered reliably fine-tuned if the difference in surface area before and after local fine-tuning hasn't changed too much. As one example, start with a list of contours (a list of points defining an area) before and after some operation.
The OpenCV function CvInvoke.ContourArea calculates the area of a contour. The score for any given region/contour can be =Min(beforeArea,afterArea)/Max(beforeArea,afterArea). The overall score for the template and homography is the minimum score for all the regions detected. Any other method to determine a score rating a match/fit of a homography and template image to an image taken by a camera (camera view) can be used. Note the score computation in operation2108can be the same score computation as the one computed in operation2101. From operation2108, the method proceeds to connector E which continues onFIG.22. FIG.22is a flowchart completing the exemplary method of implementing a transformation to determine camera regions of interest, according to an embodiment. From connector E, the method proceeds to operation2200, which determines if the score (the overall score calculated in operation2108) is acceptable. If more than a certain percentage of the regions have been reliably fine-tuned then the method proceeds to operation2201, while otherwise the method proceeds to operation2202. In operation2201, only the fine-tuning of regions with a very good score is kept. The other regions need to be aligned again since the local feature matching failed for those regions. In this case calculate a global alignment of the images using the fine-tuned regions that were successfully aligned. For each region that wasn't fine-tuned this global correction is used instead. In operation2202, the fine-tuning isn't reliable enough so the results from just before the fine-tuning process, i.e., the aligned approximate side-view template, are returned. Note that in all cases, an optional lens distortion correction can be applied. In operation2203, the transformed regions are returned from this transformation process. When starting with the initial “guess” homography to go from the top down to the side view the system obtains a final score after having applied all steps, where the score is readjusted depending on the outcome of each step. Multiple initial guesses for the camera positions can be tested, each resulting in a different score, and finally the best score and match will be kept. Regions of interest can be defined in the top down image and the same homographies and image manipulations can be used to obtain the regions of interest in the final camera image. Note that all camera meta-data (and the top-down meta-data, template images, etc.) can be stored anywhere on the system, such as the respective table computer, the respective casino server, the administrator server, or any combination of these computers. Live video streams from the cameras at the table can be transmitted to any of the computers on the system (e.g., the respective table computer, respective casino server, administrator server, etc.) and such live video can be stored in a database for later retrieval. The live video from the cameras can be analyzed (using the camera meta-data so that the regions of interest are identified) by any computer (e.g., the table computer, the casino server, the administrator) so analysis of the video can be completed. The analysis of the camera video can then be reflected on any of the computers, for example, the casino server would store (or be in communication with another database that stores) information about player bets so the bets that each player makes can be tracked and stored in the system.
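As a small illustrative sketch (with hypothetical file names), the camera meta-data can drive this kind of per-bet analysis: the _BoundingRect string stored for a betting region (e.g., “424, 425, 54, 158” for region 3b in Table IV) is parsed and used to crop that region out of a live camera frame, and the crop is then handed to whatever chip-recognition routine the system uses.

import cv2

def crop_region(frame, bounding_rect):
    # bounding_rect is the meta-data string "x, y, width, height".
    x, y, w, h = [int(v) for v in bounding_rect.split(",")]
    return frame[y:y + h, x:x + w]

frame = cv2.imread("right_camera_frame.png")          # hypothetical live frame
bet_3b = crop_region(frame, "424, 425, 54, 158")      # bounding rect of region 3b from Table IV
cv2.imwrite("bet_3b_crop.png", bet_3b)                # hand off to chip recognition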
Information about player cheating, dealer cheating, dealer efficiency, etc., determined from analyzing the video streams as described herein can also be stored on any one (or any combination) of the computers (e.g., casino servers, administrator server, table computer, etc.) The system may know the identity of players sitting at particular locations on the table (by presenting their player's card to a casino employee), and so the system can track the betting activity of those respective players by examining the chips placed in the respective betting regions. For example, fromFIG.8, if a particular player is sitting at location 1b, then a portion of an image (still or video) taken by a camera at the casino table can be analyzed in the region of interest for 1b as defined in the camera meta-data to determine how much in chips is placed there for each hand. This amount can be stored in a casino database, and thus the casino would know the amount this particular player has wagered (and the times wagered, his/her average bet, etc.) Thus, all wagers placed in all betting regions of interest can be identified and associated with particular players (since each particular player can be associated with his/her own betting area region of interest), and all of this data can be stored in a database. FIG.23is a drawing of a sample casino table with cameras, according to an embodiment. A casino table2300has a table sign2301which has a video display (front and back) which displays advertisements on the front (and the back can be used for input/output with the system). A left video camera2302(which can be embedded inside the sign2301) and a right video camera2303are both at the casino table (mounted physically to the casino table or to a structure touching the casino table) pointed at the table and each camera can see all regions of interest. In another embodiment, not all cameras would see the entire table (and hence all of the regions of interest) but combining all cameras would yield views of all of the regions of interest. Regions of interest include betting circle 12310, betting circle 22311, betting circle 32312, betting circle 42313, betting circle 52314, card region 12320, card region 22321, card region 32322, card region 42323, card region 52324, dealer card region2325. The regions of interest can also be numbered in the reverse direction as well. This table can accommodate five players (although of course tables can accommodate other numbers of players) and each player has their own respective betting circle (which is a region of interest) and card region (which is a region of interest where that player's cards would be dealt). The dealer's card region2325would be where the dealer's cards would be dealt. For example, right camera2303could capture betting areas2314,2313, and2312(the other betting areas would not be in its field of view or too far out of the center of its view) while left camera2302could capture betting areas2310,2311, and2312(the other betting areas would not be in its field of view or too far out of the center of its view). Betting area2312could be processed by either camera (or both). The regions of interest can be divided up among more than two cameras (e.g., 3-10 or more), including a camera on the ceiling (not shown inFIG.23).
In another embodiment, different cameras pointed on the same table can each individually capture all regions of interest on the table (as opposed to the other embodiment described where different cameras pointed on the same table cannot individually capture all regions of interest on the table but combined can capture all regions of interest on the table). Note that while this table only has two cameras, any other number of cameras can be used on a table at the same time (e.g., one to five cameras or more). The more cameras, the better accuracy the system would have. Note that the associated hardware (such as a connected computer, etc.) can be implementing the methods described herein for each camera (e.g., would have its own training (e.g.,FIGS.10-11), its own homography, respective meta-data, etc.) Typically, the meta-data for the different cameras would still be stored in the same file (e.g., the camera meta-data) for simplicity. Note that (optional) ceiling camera2320is present on the ceiling (directly over the table or not directly over the table but off to the side). The ceiling camera can be a security camera that already exists in the casino ceiling (e.g., the “sky”) or a ceiling camera specifically installed to operate with the current system. The ceiling camera can be another camera that works with the system as any other camera (e.g., the right camera, the left camera, etc.) to provide another view to determine regions of interest. The ceiling camera is not at the casino table in that it is not physically connected (directly or indirectly) to the casino table but is mounted over the casino table. Note that all cameras (regardless of how many cameras are pointed at a table) can be present inside/on signs at the table (e.g., a betting limit sign or other type of sign), or some camera(s) can be inside/on signs at the table while other camera(s) at the table are not inside/on signs. The system can also use one or more cameras that are external to the table (e.g., on the ceiling, etc.) alone or in combination with one or more cameras that are at the table itself. The processing (e.g., the image processing, etc.) can be done on a computer at the table or can be remotely (e.g., on a casino server, administrator server, etc.) In an embodiment, the system can also be administered using only overhead cameras (such as the security cameras that typically already exist in a casino). FIG.24is a block diagram illustrating a computer system which can implement all of the methods described herein, according to an embodiment. The computer architecture illustrated inFIG.24can implement a computer running at the administrator's location, any server on the system, any computer/device of any kind operating at the casino table, etc. A processing unit2400(such as a microprocessor and any associated components) is connected to an output device2401(such as an LCD monitor, touch screen, CRT, etc.) which is used to display to the player any aspect of the method, and an input device2402(e.g., buttons, a touch screen, a keyboard, mouse, etc.) which can be used to input any input (whether described herein or not) from the user/technician to effectuate the method. The output device2401can output any information, status, etc., of any aspect of the system (whether described herein or not). The input device2402and output device2401can be for example embedded into a table sign using a touch screen, or can be on a separate device. 
There can also be multiple output devices2401and input devices2402connected to the processing unit2400. One example of a combined input device2402/output device2401can be a table sign located at the table itself. All methods, features, embodiments, etc., described herein can be performed by the processing unit2400(or multiple such processing units) by loading and executing respective instructions. Multiple such processing units can also work in collaboration with each other (in a same or different physical location). The processing unit2400can also be connected to a network connection2403, which can connect the device to a computer communications network such as the Internet, a LAN, WAN, etc. The processing unit2400can communicate with any other computer, device, server, etc., located in a same or different physical location via the network connection2403. The processing unit2400is also connected to a RAM2404and a ROM2405. The processing unit2400is also connected to a storage device2406which can be a disk drive, DVD-drive, CD-ROM drive, flash memory, etc. A non-transitory computer readable storage medium2407(e.g., hard disk, CD-ROM, etc.), can store a program which can control the electronic device to perform any of the methods described herein and can be read by the storage device2406. Also connected to the processing unit2400are one or more cameras2410which can view an image (still or moving), digitize the image, and transmit the data representing the digitized image to the processing unit (or any other component) so it can be stored and/or analyzed. In another embodiment, the cameras might not be directly connected to the processing unit2400but can be connected via a network stream (e.g., wireless or wired network). In fact, all components may be either directly connected or indirectly connected (e.g., via a wireless or wired network). While one processing unit is shown, it can be appreciated that one or more such processors can work together (either in a same physical location or in different locations) to combine and communicate to implement any of the methods described herein. Programs and/or data required to implement any of the methods/features described herein can all be stored on any one or more non-transitory computer readable storage medium (volatile or non-volatile, such as CD-ROM, RAM, ROM, EPROM, microprocessor cache, etc.) Processes can be split up among different processors, for example, some processing can be done by the table computer (e.g., table computer2507), some by the casino server (e.g., casino A server2501), some by the administrator server (e.g., administrator server2500), etc. All inputs (e.g., images and other inputs) can also be input (uploaded) at any processor on the system (e.g., table computer2507), casino server (e.g., casino A server2501), administrator server (e.g., administrator server2500), etc. The processing can be divided up among different processors in the system in any possible manner (e.g., image processing can be done by the table computer, casino server, administrator server, any other server/computer on the system, and using any combination of such processors). All processors on the system can communicate with each other (directly or indirectly) by using any type of computer communications network (e.g., internet).
Note that while “server” is used to refer to devices, these devices can be databases, personal computers, or any type of computer, which all are able to perform any computer functions needed. Each of these servers can exist as one machine or multiple machines in communication with each other. A miscellaneous server2503can be a source for original layouts; this can be a game developer, casino, game distributor, etc. The layouts can be distributed to an administrator server2500via email, web browser, etc. The administrator server2500is operated by the administrator of the entire system and performs operations such as receiving the layouts, generating and distributing the template images and the top-down meta-data, and any other method/feature described herein. Administrator server2500can also be considered a cloud server which distributes the templates to casino servers (e.g., casino A server2501and casino B server2502). Casino servers are operated by a particular casino (or casino group) and receive the template image(s) and top-down meta-data from the administrator server2500. The casino servers can also periodically check the administrator server2500for updates to template images and top-down meta-data. The casino servers distribute the template images and top-down meta-data to the game tables (also referred to as tables) at their casino. For example, casino A server (“casino A” is a particular casino or casino group) distributes template images and top-down meta-data it receives to computers at casino A's tables which utilize the system (i.e., table 1 computer2507at casino A table 12504and table 2 computer2513at casino A table 22510). These computers at the table are connected to the cameras at the respective table. For example, table 1 computer2507is connected to table 1 left camera2505and table 1 right camera2506. Table 2 computer2513is connected to table 2 left camera2511and table 2 right camera2512. The table computers (table 1 computer2507and table 2 computer2513) are the computers that perform all of the image processing (although alternatively this can be done on any of the other computers on the system such as the casino server or the administrator server). Casino B is a different casino (or casino group) than casino A and has its own casino B server2502. Casino B server2502communicates with all of the tables in casino B's casino which utilize the system. In this example, casino B server2502communicates with table 1 computer2523which communicates with casino B table 1 left camera2521and casino B table 1 right camera2522. In a further embodiment, instead of using operations1002-1008to identify objects on the table to map to the top-down meta-data which is then used to determine the homography, a more user-friendly (requiring fewer operations by the user) method can be implemented in which pre-stored homographies are used. A plurality of pre-stored homographies are processed to determine which one has the best fit to the current table and the camera, and then that homography is used to generate the camera meta-data from the top-down meta-data. In an embodiment, cameras can be mounted on the same mounts at the same positions on different tables and pointing at the same angles. If these parameters (camera type/lens, position of camera(s), direction camera is pointing, etc.) remain the same, then a homography determined for this set of parameters may work for a different table as long as these parameters remain the same.
Pre-stored templates are also used so that the system can also automatically detect which template image (and hence which layout) corresponds to the felt currently on the casino table (which then defines the particular game set up for play at the table). FIG.26is a flowchart illustrating an exemplary method of using pre-stored homographies to find a best match, according to an embodiment. In operation2600, template image(s), their respective top-down meta-data are stored on the system. These can be generated as described herein and can be stored anywhere on the system (e.g., the cloud, casino server, table computer, etc.) Also stored are a plurality of homographies. The plurality of homographies are ones that have been previously generated (as described herein) and are all stored. Typically, homographies that have worked very well should be included in the stored plurality of homographies. From operation2600, the method proceeds to operation2601, wherein a detect button is pressed. The detect button can be virtual or real or any type of trigger in which a user (e.g., casino employee, technician, administrator, etc.) can initiate. The button/trigger can be located anywhere, such as on the table itself (e.g., on a table sign or on a computer at the table etc., or on a casino server or administrator server, etc.) Note that operations2602-2612would be performed for each unique camera being used for the system at the casino table. From operation2601, the method proceeds to operation2602, which captures an image of the casino table by the camera. The same image can be used for all of the further operations inFIG.26(but operations2602-2612are repeated for each different camera at the casino table in order to generate the camera meta-data for that camera for that casino table). From operation2602, the method proceeds to operation2603, which initiates a loop for each of the template images stored from operation2600. For each of the template images, connector Z is called (continues on the same page). Basically connector Z goes to an operation which loops through all of the homographies (stored in operation2600) and processes them to find the homography (in conjunction with the current template image being passed in operation2603) that has the best score. In other words, all pairs of template images and homographies are processed to determine their score, and the best score is used in operation2604. So after operation2603is completed, which means that for all of the stored template images a score is computed (in operation2611) which represents the best (operation2612) homography for each template, the method proceeds to operation2604. In operation2604, the best universal score is determined (best typically means higher) out of all of the returned scores (operation2612). This best universal score represents the pair of homography and template image which has the best score (known as the winning template image and the winning homography). In other words, in operation2612the highest score is determined/return of all homographies for each template image, and operation2604determines the universal highest (winning) score out of each of those highest scores (from operation2612). Thus, the highest score after all instances operation2612is executed wins and becomes the highest universal score which was generated from the winning template and the winning homography. 
The pair of template image and homography that resulted in the best universal score is then used (known as the winning template image and the winning homography). The top-down meta-data for the winning template image (resulting in the best universal score) combined with the winning homography (resulting in the best universal score) are used to generate the camera meta-data. The camera meta-data is generated by using the top-down meta-data associated with the winning template image and using the winning homography as described herein. In this manner, by simply pressing a detect button (operation2601), the system can automatically identify the template (associated with the layout on the casino table), the homography, and hence automatically generate the camera meta-data. InFIG.26, from connector Z (on the right side ofFIG.26) the method continues to operation2610which initiates a loop for each of the homographies (stored in operation2600). The loop performs operation2611for each of the homographies (stored in operation2600). In operation2611, the method processes the template image (passed in operation2603), the camera image (captured in operation2602), and the current homography of the loop (one of the homographies stored in operation2600) to determine the score. The processing means calling operation2000(and hence executing the method illustrated inFIGS.20-22) for each different homography stored (operation2600) which is processed. When the method illustrated inFIGS.20-22is completed, a score is computed (e.g., in operation2108which is returned from the processing in operation2203back to operation2611) which is a measure of how good the transformation is (e.g., a measure of how good the match is between the camera image and the template image based on the homography). The score (the overall score computed in operation2108) which is used to determine the best match (the pair of winning homography and winning template image) can be computed by starting with a list of contours (a list of points defining an area) before and after some operation. The OpenCV function CvInvoke.ContourArea calculates the area of a contour. The score for any given region/contour is =Min(beforeArea,afterArea)/Max(beforeArea,afterArea). The overall score for the template and homography is the minimum score for all the regions detected. Any other method to determine a score rating a match/fit of a homography and template image to an image taken by a camera (camera view) can also be used. After all of the homographies (stored in operation2600) have been processed in operation2611(e.g., a score has been computed for each), then the method proceeds from operation2610to operation2612, which determines which out of all of the scores from operation2611for this particular template image is the best (typically highest). The homography with the best score is then identified and returned (along with the score) back to operation2603. Thus, for the individual template image being passed (when connector Z is called), operation2612returns the best homography (out of all of the homographies stored in operation2600) which works best for the current template image passed in operation2603. Thus, the general method inFIG.26tries all stored homographies with all stored template images to determine the score of each respective pair.
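A minimal sketch of this selection logic (with a placeholder score_match function standing in for the FIGS.20-22pipeline, and using cv2.contourArea as the Python counterpart of CvInvoke.ContourArea for the per-region score) might look as follows; names and structure are illustrative, not the patented implementation.

import cv2

def region_score(contour_before, contour_after):
    # Score for one region: min(areaBefore, areaAfter) / max(areaBefore, areaAfter).
    a, b = cv2.contourArea(contour_before), cv2.contourArea(contour_after)
    return 0.0 if max(a, b) == 0 else min(a, b) / max(a, b)

def overall_score(contours_before, contours_after):
    # Overall score for one template/homography pair: the minimum of the region scores.
    # This is one way score_match below could be computed internally.
    return min(region_score(b, a) for b, a in zip(contours_before, contours_after))

def pick_winning_pair(camera_image, templates, homographies, score_match):
    # score_match(template, camera_image, H) stands in for operations 2000-2203.
    best_template, best_H, best_score = None, None, float("-inf")
    for template in templates:                    # operation 2603
        for H in homographies:                    # operations 2610-2611
            score = score_match(template, camera_image, H)
            if score > best_score:                # operations 2612 and 2604
                best_template, best_H, best_score = template, H, score
    return best_template, best_H, best_score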
For example, if there are A stored template images and B stored homographies, then A*B scores would be computed, and the winning template image and the winning homography are the pair (of template image and homography) that has the best score out of all of the pairs scored. This winning template image and winning homography can then be used by the rest of the system (as described herein) as the current template image and the current homography without regard to how they were actually determined. Thus, by using pre-stored homographies the implementation of the process can be simpler because there is no need to identify to the system the template corresponding to the table (layout), nor is there a need to place chips on the table (operation1002). However, this method may be less accurate than the method illustrated inFIG.10. In another embodiment, instead of the system automatically determining the template (which corresponds to the layout on the casino table at the time), a casino employee can manually identify the layout/template image to the system by selecting the layout/template from a menu. In this embodiment, operation2603only needs to call connector Z for the selected template in order to find the winning homography. Once the camera meta-data has been determined, then it can be used as described herein, to analyze images within the regions of interest defined by the camera meta-data to automatically identify cards, bets made, etc., in order to track and analyze progress of the game being played at the casino table. Note that game rules can also be stored and associated with each template image. For example, blackjack games can have side bets with each side bet having a different layout (and hence corresponding template image). When an image template is identified using the method described herein, the rules associated with that game can be retrieved and displayed on a table sign (which can be an output device2401). For example, a blackjack game with a side bet such as “stupendous queens” has a paytable (rule set) such as “1 queen pays 2:1; 2 queens pays 3:1, and 3+ queens pays 10:1” and another blackjack game with a side bet such as “crazy 3's” has a paytable (rule set) such as “three 3's pays 10:1 and 4 3's pays 100:1.” The blackjack games can be identified by their unique layout (which is captured by the camera in operation2602) and the corresponding template image is then identified in operation2604. The associated rule set for that respective (identified) template image can then be displayed on an output device (such as a table sign) for the players to see. For example, if the system automatically recognizes the layout on the table as the “crazy 3's” game, then the particular rule set (e.g., “three 3's pays 10:1 and 4 3's pays 100:1”) can automatically be displayed on the table sign at the table (or another such output device). Thus, the benefits of the inventive concepts herein are numerous. For example, when a template image is generated from a layout (which can be used as a table felt on a casino table), the regions of interest (where relevant activity occurs on the table) can be identified on the template image. One problem the present inventive concept solves is identifying where these same regions of interest (identified on the template and referred to as top-down meta-data) can be found on images taken by cameras at the table itself (e.g., the side cameras).
The portions of the camera images which are the regions of interest can then be analyzed for relevant activity (e.g., to determine and track how much in chips players at different positions have bet, etc.) Software can be used to analyze an image of a stack of chips and determine how much in dollars the stack of chips is equivalent to (by recognizing each chip and its denomination by color). The many features and advantages of the invention are apparent from the detailed specification and, thus, it is intended by the appended claims to cover all such features and advantages of the invention that fall within the true spirit and scope of the invention. Further, since numerous modifications and changes will readily occur to those skilled in the art, it is not desired to limit the invention to the exact construction and operation illustrated and described, and accordingly all suitable modifications and equivalents may be resorted to, falling within the scope of the invention. | 115,648 |
11861867 | DETAILED DESCRIPTION OF THE INVENTION In the following description, various aspects of the invention will be described. For the purposes of explanation, specific details are set forth in order to provide a thorough understanding of the invention. It will be apparent to one skilled in the art that there are other embodiments of the invention that differ in details without affecting the essential nature thereof. Therefore the invention is not limited by that which is illustrated in the figure and described in the specification, but only as indicated in the accompanying claims, with the proper scope determined only by the broadest interpretation of said claims. The configurations disclosed herein can be combined in one or more of many ways to provide improved mass measuring methods, systems and devices of one or more occupying objects (e.g. driver or passengers) in a vehicle having an interior passenger compartment by analyzing one or more images of the occupying objects. One or more components of the configurations disclosed herein can be combined with each other in many ways. Systems and methods as described herein including obtaining one or more images of a vehicle interior passenger compartment including one or more objects such as one or more occupants (e.g. vehicle driver or passenger(s)), and at least one processor to extract visual data and depth data from the obtained images, combine the visual and depth data and analyze the combined data to estimate the mass of the one or more objects in the vehicle. In accordance with other embodiments, systems and methods as described herein including one or more imaging devices and one or more illumination sources can be used to capture one or more images of a vehicle interior passenger compartment including one or more objects such as one or more occupants (e.g. vehicle driver or passenger(s)), and at least one processor to extract visual data and depth data from the captured images, combine the visual and depth data and analyze the combined data to estimate the mass of the one or more objects in the vehicle. Specifically, in accordance with some embodiments there are provided methods for measuring the mass of one or more occupying objects (e.g. driver or passengers) in a vehicle having an interior passenger compartment, the method comprising using at least one processor to: obtain multiple images of said one or more occupants, wherein said multiple images comprising 2D (two dimensional) images and 3D (three dimensional) images such as a sequence of 2D images and 3D images of the vehicle cabin captured by an image sensor; apply a pose detection algorithm on each of the obtained sequences of 2D images to yield one or more skeleton representations of said one or more occupants; combine one or more 3D image of the sequence of 3D images with said one or more skeleton representations of the one or more occupants to yield at least one skeleton model for each one or more occupants wherein the skeleton model comprises information relating to the distance of one or more key-points of the skeleton model from a viewpoint; analyze the one or more skeleton models to extract one or more features of each of the one or more occupants; process the one or more extracted features of the skeleton models to estimate the mass or body mass classification of each said one or more occupants. 
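A minimal illustrative sketch of this flow (not the claimed implementation; the key-point names, depth-map format and regressor are assumptions) is the following Python fragment: 2D skeleton key-points from a pose detector are looked up in an aligned depth map to form a 3D skeleton model, a few geometric features are extracted, and a pre-trained regressor maps the features to a mass estimate or mass class.

import numpy as np

def skeleton_model(keypoints_2d, depth_map):
    # keypoints_2d: dict of name -> (x, y) pixel coordinates from a 2D pose detector.
    # depth_map: HxW array giving distance from the viewpoint, aligned with the 2D image.
    model = {}
    for name, (x, y) in keypoints_2d.items():
        z = float(depth_map[int(round(y)), int(round(x))])
        model[name] = np.array([float(x), float(y), z])
    return model

def extract_features(model):
    # Example features only: shoulder width and a few limb lengths on the 3D key-points.
    # A practical system would first back-project pixels to metric camera coordinates
    # using the camera intrinsics so that all axes share the same units.
    def dist(a, b):
        return float(np.linalg.norm(model[a] - model[b]))
    return np.array([
        dist("left_shoulder", "right_shoulder"),
        dist("left_shoulder", "left_hip"),
        dist("left_hip", "left_knee"),
        dist("right_hip", "right_knee"),
    ])

def estimate_mass(features, regressor):
    # regressor: any pre-trained model exposing predict(), e.g. a scikit-learn regressor
    # or classifier returning a mass value or a body-mass class label.
    return regressor.predict(features.reshape(1, -1))[0]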
According to some embodiments, the imaging device and the one or more illumination sources may be installed and/or embedded in a vehicle, specifically in a cabin of the vehicle (e.g. in proximity to the vehicle's front mirror or dashboard and/or integrated into the overhead console). According to another embodiment, there is provided an imaging system comprising one or more illumination sources configured to project one or more light beams in a predefined structured light pattern on vehicle cabin including one or more occupants and an imaging device comprising a sensor configured to capture a plurality of images comprising reflections of the structured light pattern for example from the one or more occupants in the vehicle cabin, and one or more processors configured to: obtain multiple images of said one or more occupants, wherein said multiple images comprising one or more 2D (two dimensional) images and 3D (three dimensional) images such as a sequence of 2D (two dimensional) images and 3D (three dimensional) images of the vehicle cabin captured by an image sensor; apply a pose detection algorithm on each of the obtained sequences of 2D images to yield one or more skeleton representations of said one or more occupants; combine one or more 3D image of said sequence of 3D images with said one or more skeleton representations of said one or more occupants to yield at least one skeleton model for each one or more occupants wherein said skeleton model comprises information relating to the distance of one or more key-points of the skeleton model from a viewpoint; analyze the one or more skeleton models to extract one or more features of each of the one or more occupants; process the one or more extracted features of the skeleton models to estimate the mass or body mass classification of each said one or more occupants. According to some embodiments, the systems and methods are configured to generate one or more outputs, such as output signals which may be associated with the operation of one or more devices, units, applications or systems of the vehicle based on the measured mass. For example, the output signals may include information configured to optimize the vehicles' units performances once activated. In some cases, the units or systems of the vehicle may include the vehicle's airbag, seats, and/or optimize vehicle's electronic stabilization control (ESC) according to the occupant's distribution and measured mass. Advantageously, the systems and methods in accordance with embodiments may include a sensing system comprising for example a single imaging device to capture one or more images of the scene and extract visual data, depth data and other data such as speckle pattern(s) from the captured images to detect vibrations (e.g. micro vibrations), for example, in real-time. For example, in accordance with embodiments, the vehicle's occupant's mass classification may be estimated using a stand-alone sensing system comprising, for example, a single imaging device and a single illumination source. In some cases, the imaging system may include more than one imaging device and illumination source. In some cases, two or more imaging devices may be used. As used herein, like characters refer to like elements. Prior to the detailed description of the invention being set forth, it may be helpful to set forth definitions of certain terms that will be used hereinafter. 
As used herein, the term “mass” encompasses the quantity of matter which a body contains, as measured by its acceleration under a given force or by the force exerted on it by a gravitational field. However, as in common usage, the present invention also refers to measuring the “weight” of an object where “weight” encompasses the force exerted on the mass of a body by a gravitational field. As used herein, the term “light” encompasses electromagnetic radiation having wavelengths in one or more of the ultraviolet, visible, or infrared portions of the electromagnetic spectrum. The term “structured light” as used herein is defined as the process of projecting a known pattern of pixels on to a scene. The way that these deform when striking surfaces allows vision systems to extract the depth and surface information of the objects in the scene. The terms “pattern” and “pattern feature(s)” as used in this application refer to the structured illumination discussed below. The term “pattern” is used to denote the forms and shapes produced by any non-uniform illumination, particularly structured illumination employed a plurality of pattern features, such as lines, stripes, dots, geometric shapes, etc., having uniform or differing characteristics such as shape, size, intensity, etc. As a non-limiting example, a structured light illumination pattern may comprise multiple parallel lines as pattern features. In some cases, the pattern is known and calibrated. The term “modulated structured light pattern” as used herein is defined as the process of projecting a modulated light in a known pattern of pixels on to a scene. The term “depth map” as used herein is defined as an image that contains information relating to the distance of the surfaces of scene objects from a viewpoint. A depth map may be in the form of a mesh connecting all dots with z-axis data. The term “object” or “occupying object” or “occupant” as used herein is defined as any target of sensing, including any number of particular elements and/or background, and including scenes with particular elements. The disclosed systems and methods may be applied to the whole target of imaging as the object and/or to specific elements as objects within an imaged scene. Nonlimiting examples of an “object” may include one or more persons such as vehicle passengers or driver. Referring now to the drawings,FIG.1Ais a side view of a vehicle110showing a passenger cabin105comprising the vehicle110units and a sensing system100configured and enabled to obtain visual (e.g. video images) and stereoscopic data (e.g. depth maps), for example 2D (two dimensional) images and 3D (three dimensional) images of areas and objects within the vehicle and analyze, for example in real-time or close to real-time, the visual and stereoscopic data to yield the mass (e.g. body mass classification) of the objects (e.g. occupants) in the vehicle, in accordance with embodiments. Specifically, the sensing system100is configured to monitor areas and objects within the vehicle110to obtain video images and depth maps of the areas and objects, and analyze the obtained video images and depth maps using one or more processors to estimate the mass of the objects. Nonlimiting examples of such objects may be one or more of the vehicle's occupants such as driver111or passenger(s)112, in accordance with embodiments. 
According to some embodiments the sensing system100may be installed, mounted, integrated and/or embedded in the vehicle110, specifically in a cabin of the vehicle, such that it can monitor the cabin interior and the object(s) present in the cabin, which may include, for example, one or more vehicle occupants (e.g. a driver, a passenger, a pet, etc.), one or more objects associated with the cabin (e.g. door, window, headrest, armrest, etc.), and/or the like. According to some embodiments, the systems and methods are configured to generate an output, such as one or more output signals106and107which may be associated with an operation of one or more of the vehicle's units to control one or more devices, applications or systems of the vehicle110based on the measured mass of the objects. For example, the output signals106and107which include the estimated mass of one or more occupants, such as driver111and passenger112, as measured by the sensing system100, may be transmitted to an ACU108and/or to Vehicle Computing System (VCS)109which are configured to activate, in case of an accident, one or more airbag systems such as variable intensity airbag system111′ of driver111and variable intensity airbag system112′ of passenger112. In accordance with embodiments, the variable intensity airbags111′ and112′ may have different activation levels (e.g. strong/med/weak) and the pressure within the variable intensity airbags is accordingly adjusted to match the estimated mass classification of the vehicle occupants. In other words, the signal may be sent to the ACU108or VCS109which activates one or more airbags according to the measured category of each occupant. Specifically, adaptive airbag systems may utilize multi-stage airbags to adjust the pressure within the airbag according to the received mass estimation. The greater the pressure within the airbag, the more force the airbag will exert on the occupants as they come in contact with it. For example, as illustrated inFIG.1B, in a scenario where driver111weighs around 100 kg and passenger112(child) weighs less than 30 kg, upon collision the airbag of each passenger and driver is deployed, in real-time, according to the occupant's estimated mass, e.g. forcefully (high pressure) for the ‘100 kg’ driver111and less forcefully (mid or low pressure) for the 30 kg passenger112. Alternatively or in combination, the output including the mass estimation result for each occupant may be transmitted to control the vehicle's seatbelt pre-tension. For example, upon collision seat belts (e.g. seat belts111″ and112″) are applied with pre-tension according to the mass estimation such that the passengers are optimally protected. In other embodiments, the output comprising the mass estimation data may be used to optimize a vehicle's electronic stabilization control (ESC) according to the occupants' distribution in the vehicle; and/or activate or deactivate any of the vehicle's units to which the mass estimation may be related. According to some embodiments, the system100may include one or more sensors, for example of different types, such as a 2D imaging device and/or a 3D imaging device and/or an RF imaging device and/or a vibration sensor (micro-vibration) and the like to capture sensory data of the vehicle cabin. Specifically, the 2D imaging device may capture images of the vehicle cabin, for example from different angles, and generate original visual images of the cabin.
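As a toy sketch of this adaptive-deployment idea (class boundaries, level names and occupant values are illustrative assumptions, not taken from the patent), the estimated mass of each occupant could be mapped to a deployment level before being sent to the airbag control unit:

def deployment_level(mass_kg):
    # Illustrative thresholds only; a real ACU would use its own calibrated classes.
    if mass_kg < 35:
        return "weak"
    if mass_kg < 75:
        return "medium"
    return "strong"

# Estimated masses (kg) per seat, e.g. produced by the mass-estimation pipeline.
occupants = {"driver": 100.0, "front_passenger": 28.0}
commands = {seat: deployment_level(m) for seat, m in occupants.items()}
print(commands)   # {'driver': 'strong', 'front_passenger': 'weak'}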
In an embodiment, the system100may include an imaging device configured to capture 2D and 3D images of the vehicle cabin and at least one processor to analyze the images to generate a depth map of the cabin. In another embodiment, the system100may detect vibrations (e.g. micro vibrations) of one or more objects in the cabin using one or more vibration sensor and/or analyzing the captured 2D or 3D images to identify vibrations (e.g. micro vibrations) of the objects. According to another embodiment, the system100may further include a face detector sensor and/or face detection and/or face recognition software module for analyzing the captured 2D and/or 3D images. In an embodiment, the system100may include or may be in communication with a computing unit comprising one or more processors configured to receive the sensory data captured by the system's100image sensors and analyze the data according to one or more of computer vision and/or machine learning algorithms to estimate the mass of one or more occupants in the vehicle cabin as will be illustrated herein below. Specifically, in accordance with embodiments, the one or more processors are configured to combine 2D data (e.g. captured 2D images) and 3D data (depth maps) of the vehicle cabin to yield mass classification of one or more objects in the vehicle cabin, for example, the vehicle occupants. Advantageously, system100provides merely the minimal hardware such as one or more sensors and imagers for capturing visual and depth images of the vehicle110interior. In some cases, an interface connecting to system100may supply the necessary power and transfer the data acquired to the vehicle's computing and/or processing units such as VCS109and/or ACU108, where all the processing is being carried out, taking advantage of its computing power. Thus, in accordance with some embodiments, installing system100becomes very easy and using off-the-shelf components. FIG.1Cshows a schematic diagram of a sensing system102, configured and enabled to capture images of a scene, for example a vehicle cabin, including one or more objects (e.g. driver111and/or passenger112) and analyze the captured images to estimate the mass of the one or more objects, in accordance with embodiments. In some cases, the sensing system102may be the system100ofFIGS.1A and1B. System102includes an imaging device120, configured and enabled to capture sensory data of one or more objects, such as objects111and112in scene105, and a control unit150configured to analyze the captured sensory data to determine the mass of the one or more objects, in accordance with embodiments. Optionally the imaging device120and the control unit150are integrated together in a single device. In some cases, the imaging device120and the control unit150are integrated separately in different devices. According to one embodiment, the imaging device120may be a ToF (Time-of-Flight) imaging device including one or more ToF sensors such as Continuous Wave Modulation (CWM) sensors or other types of ToF sensors for obtaining 3D data of the scene and one or more sensors for obtaining 2D of the scene. According to one embodiment, the imaging device120may be a stereoscopic imaging device including one or more stereoscopic imagers for obtaining 3D data of the scene and one or more imagers for obtaining 2D of the scene. 
According to one embodiment, the imaging device120may be a structured light imaging device including one or more imagers for obtaining 3D data of the scene and one or more imagers for obtaining 2D data of the scene, as illustrated herein below inFIG.1D. Specifically, in an embodiment, imaging device120comprises an illumination module130configured to illuminate scene105, and an imaging module123configured to capture 2D and/or 3D images of the scene. In some cases, imaging module123comprises one or more imagers such as cameras or video cameras of different types, such as cameras126and122. For example, camera126may capture 3D images or 3D video images of the scene (e.g. for measuring the depth of the scene and the depth of objects in the scene) while camera122may capture 2D images (e.g. original visual images) of the scene. For example, camera126may be a stereoscopic camera with two or more lenses having, for example, a separate image sensor for each lens, and camera122may be a 2D camera. Alternatively or in combination, camera126may be a 3D camera adapted to capture reflections of the diffused light elements of the structured light pattern reflected from objects present in the scene. In some cases, the imaging module123may include a single camera configured to capture 2D and 3D images of the scene. The illumination module130is configured to illuminate the scene105, using one or more illumination sources such as illumination sources132and134. In some embodiments, the illumination module130is configured to illuminate the scene with broad-beamed light such as high-intensity flood light to allow good visibility of the scene (e.g. vehicle interior) and accordingly to allow capturing standard images of the scene. In some embodiments, the illumination module is configured to illuminate the scene alternately with structured light and non-structured light (e.g. floodlight) and accordingly capture 2D images and 3D images of the scene. For example, the imaging module123may capture one or more 2D images in floodlight while continuously capturing 3D images in structured light to yield alternating depth frames and video frames of the vehicle interior. For example, the illumination source132may be a broad-beamed illumination source and illumination source134may be a structured light source. In some cases, the 2D and 3D images are captured by a single imager. In some cases, the 2D and 3D images are captured by multiple synchronized imagers. It is understood that embodiments of the present invention may use any other kind of illumination sources and imagers to obtain visual images (e.g. 2D images) and depth maps (e.g. 3D images) of the vehicle interior. In some embodiments, the 2D and 3D images are correctly aligned (e.g. synched) with each other so that each point (e.g. pixel) in one can be found respectively in the other. This can either happen automatically from the way the device is constructed, or require an additional alignment step between the two different modalities. According to one embodiment, the structured light pattern may be constructed of a plurality of diffused light elements, for example, a dot, a line, a shape and/or a combination thereof. According to some embodiments, the one or more light sources such as light source134may be a laser and/or the like configured to emit coherent or incoherent light such that the structured light pattern is a coherent or incoherent structured light pattern. According to some embodiments, the illumination module130is configured to illuminate selected parts of the scene.
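One way to picture the alternating flood/structured-light capture described above is the interleaving loop sketched below; the illuminator/camera call is a stand-in and the interleaving ratio is an assumption, not a disclosed device API:

    # Sketch of interleaved 2D (flood) and 3D (structured-light) frame capture.
    import itertools

    def capture_frame(mode):
        # Stand-in for triggering the illumination module and reading the imager.
        return {"mode": mode, "pixels": None}

    def frame_stream(n_frames, pattern=("structured", "structured", "flood")):
        # Capture depth (structured-light) frames continuously and insert a
        # visual (flood) frame every few frames, yielding alternating
        # depth frames and video frames of the cabin.
        modes = itertools.cycle(pattern)
        for _ in range(n_frames):
            mode = next(modes)
            frame = capture_frame(mode)
            yield ("3D" if mode == "structured" else "2D", frame)

    for kind, frame in frame_stream(6):
        print(kind, frame["mode"])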
In an embodiment, the light source134may include one or more optical elements for generating a pattern such as a pattern of spots that, for example, uniformly cover the field of view. This can be achieved by using one or more beam splitters including optical elements such as a diffractive optical element (DOE), split mirrors, one or more diffusers or any type of beam splitter configured to split the single laser spot into multiple spots. Other patterns such as a dot, a line, a shape and/or a combination thereof may be projected on the scene. In some cases, the illumination unit doesn't include a DOE. According to some embodiments, imager126may be a CMOS or CCD sensor. For example, the sensor may include a two-dimensional array of photo-sensitive or photo-responsive elements, for instance a two-dimensional array of photodiodes or a two-dimensional array of charge coupled devices (CCDs), wherein each pixel of the imager126measures the time the light has taken to travel from the illumination module130to the object and back to the focal plane array. In some cases, the imaging module123may further include one or more optical band-pass filters, for example for passing only the light with the same wavelength as the illumination unit. The imaging device120may optionally include a buffer communicatively coupled to the imager126to receive image data measured, captured or otherwise sensed or acquired by the imager126. The buffer may temporarily store image data until the image data is processed. In accordance with embodiments, the imaging device120is configured to obtain sensory data including, for example, visual images (e.g. 2D images) and depth parameters of the scene, e.g., the distance of the detected objects to the imaging device. The measured sensory data is analyzed, for example by the one or more processors such as the processor152, to extract 3D data including the distance of the detected objects to the imaging device (e.g. depth maps) based on the obtained 3D data, and the pose/orientation of the detected objects from the visual images, and to combine both types of data to determine the mass of the objects in the scene105as will be described in further detail herein. The control board150may comprise one or more of processors152, memory154and communication circuitry156. Components of the control board150can be configured to transmit, store, and/or analyze the captured sensory data. Specifically, one or more processors are configured to analyze the captured sensory data to extract visual data and depth data. FIG.1Dshows a schematic diagram of a sensing system103, configured and enabled to capture reflected structured light images of a vehicle cabin including one or more objects (e.g. driver111and/or passenger112) and analyze the captured images to estimate the mass of the one or more objects, in accordance with embodiments. In some cases, the sensing system103may be the system100ofFIGS.1A and1B. System103includes a structured light imaging device124, configured and enabled to capture sensory data of one or more objects, such as objects111and112in scene105, and a control unit150configured to analyze the captured sensory data to determine the mass of the one or more objects, in accordance with embodiments. Optionally the imaging device124and the control unit150are integrated together in a single device. In some cases, the imaging device124and the control unit150are integrated separately in different devices.
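Where a ToF pixel measures the round-trip travel time of the emitted light, the corresponding distance follows directly from the speed of light; a minimal sketch is given below (continuous-wave modulation sensors work analogously on phase shift rather than directly on time, which this simplified example does not model):

    C = 299_792_458.0  # speed of light in m/s

    def tof_distance(round_trip_time_s):
        # Distance = (speed of light x round-trip time) / 2
        return C * round_trip_time_s / 2.0

    # Example: a ~6.67 ns round trip corresponds to roughly 1 m
    print(round(tof_distance(6.67e-9), 3))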
In an embodiment, the structured light imaging device124comprises a structured light illumination module133configured to project a structured light pattern (e.g. modulated structured light) on scene105, for example in one or more light spectrums, and an imaging sensor125(e.g. a camera, an infrared camera and/or the like) to capture images of the scene. The imaging sensor125is adapted to capture reflections of the diffused light elements of the structured light pattern reflected from objects present in the scene. As such, the imaging sensor125may be adapted to operate in the light spectrum(s) applied by the illumination module133in order to capture the reflected structured light pattern. In accordance with embodiments, the imaging sensor125may include an imager127comprising one or more lenses for gathering the reflected light and images from the scene onto the imager127. In accordance with embodiments, the imaging sensor125can capture visual images of the scene (e.g. 2D images) and images comprising the reflected light pattern, which can be processed by one or more processors to extract 3D images. The depth of the scene and of the objects in the scene can then be measured by quantifying the changes that an emitted light signal encounters when it bounces back from the one or more objects in the scene, and by using the reflected light pattern characteristics in at least one pixel of the sensor to identify the distance of the objects and/or the scene from the imaging device. In an embodiment, the depth data and the visual data (e.g. 2D images) derived from the analyses of images captured by the imaging sensor125are time synchronized. In other words, as the mass classification is derived from analysis of common images captured by the same imaging sensor (of the imaging system) they may also be inherently time (temporally) synchronized, thus further simplifying correlation of the derived data with the object(s) in the scene. The illumination module133is configured to project a structured light pattern on scene105, for example in one or more light spectrums such as near-infrared light emitted by an illumination source135. The structured light pattern may be constructed of a plurality of diffused light elements. According to some embodiments, the illumination module133may comprise one or more light sources such as a single coherent or incoherent light source135, for example, a laser and/or the like configured to emit coherent light such that the structured light pattern is a coherent structured light pattern. According to some embodiments, the illumination module133is configured to illuminate selected parts of the scene. In an embodiment, the illumination module133may include one or more optical elements for generating a pattern such as a pattern of spots that, for example, uniformly cover the field of view. This can be achieved by using one or more beam splitters including optical elements such as a diffractive optical element (DOE), split mirrors, one or more diffusers or any type of beam splitter configured to split the single laser spot into multiple spots. Other patterns such as a dot, a line, a shape and/or a combination thereof may be projected on the scene. In some cases, the illumination unit doesn't include a DOE. In particular, the illumination source135may be controlled to produce or emit light such as modulated light in a number of spatial or two-dimensional patterns. Illumination may take the form of any of a large variety of wavelengths or ranges of wavelengths of electromagnetic energy.
For instance, illumination may include electromagnetic energy of wavelengths in an optical range or portion of the electromagnetic spectrum including wavelengths in a human-visible range or portion (e.g., approximately 390 nm-750 nm) and/or wavelengths in the near-infrared (NIR) (e.g., approximately 750 nm-1400 nm) or infrared (e.g., approximately 750 nm-1 mm) portions and/or the near-ultraviolet (NUV) (e.g., approximately 300 nm-400 nm) or ultraviolet (e.g., approximately 122 nm-400 nm) portions of the electromagnetic spectrum. The particular wavelengths are exemplary and not meant to be limiting. Other wavelengths of electromagnetic energy may be employed. In some cases, the illumination source135wavelength may be any one of 830 nm, 840 nm, 850 nm or 940 nm. According to some embodiments, the imager127may be a CMOS or CCD sensor. For example, the sensor may include a two-dimensional array of photo-sensitive or photo-responsive elements, for instance a two-dimensional array of photodiodes or a two-dimensional array of charge coupled devices (CCDs), wherein each pixel of the imager127measures the time the light has taken to travel from the illumination source135to the object and back to the focal plane array. In some cases, the imaging sensor125may further include one or more optical band-pass filters, for example for passing only the light with the same wavelength as the illumination module133. The imaging device124may optionally include a buffer communicatively coupled to the imager127to receive image data measured, captured or otherwise sensed or acquired by the imager127. The buffer may temporarily store image data until the image data is processed. In accordance with embodiments, the imaging device124is configured to obtain sensory data including, for example, visual images and depth parameters of the scene, e.g., the distance of the detected objects to the imaging device. The measured sensory data is analyzed, for example by the one or more processors such as the processor152, to extract 3D data including the distance of the detected objects to the imaging device (e.g. depth maps) from the pattern images, and the pose/orientation of the detected objects from the visual images, and to combine both types of data to determine the mass of the objects in the scene105as will be described in further detail herein. The control board150may comprise one or more of processors152, memory154and communication circuitry156. Components of the control board150can be configured to transmit, store, and/or analyze the captured sensory data. Specifically, one or more processors such as processors152are configured to analyze the captured sensory data to extract visual data and depth data. Optionally the imaging device124and the control unit150are integrated together in a single device or system such as system100. In some cases, the imaging device124and the control unit150are integrated separately in different devices. FIG.2Ais a block diagram of the processor152operating in one or more of systems100,101and102shown inFIGS.1A,1B,1C and1D, in accordance with embodiments.
In the example shown inFIG.2A, the processor152includes a capture module212, a depth map module214, an estimation module such as a pose estimation module216, an integration module218, a feature extraction module220, a filter module222, a mass prediction module224, a 3D image data store232, a 2D image data store234, a depth maps representation data store236, an annotation data store238, a skeleton model data store240and a measurements data store242. In alternative embodiments not shown, the processor152can include additional and/or different and/or fewer modules or data stores. Likewise, functions performed by various entities of the processor152may differ in different embodiments. In some aspects, the modules may be implemented in software (e.g., subroutines and code). In some aspects, some or all of the modules may be implemented in hardware (e.g., an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a state machine, gated logic, discrete hardware components, or any other suitable devices) and/or a combination of both. Additional features and functions of these modules according to various aspects of the subject technology are further described in the present disclosure. Optionally, the modules can be integrated into one or more cloud-based servers. The capture module212obtains images of a scene (e.g. a vehicle interior passenger compartment) including one or more objects in the scene (e.g. one or more passengers or a driver in the vehicle). In one embodiment, the processor152instructs one or more sensors (e.g., the imaging device120shown inFIG.1Cor imaging device124shown inFIG.1D) to capture images of the scene for further extracting 2D and 3D data (e.g. images) of an object in a scene or the scene itself. As one example, the capture module212may obtain, for example synchronically and/or sequentially, a sequence of visual images (e.g. original 2D images) and images including depth data (e.g. 3D images) of the one or more objects in the scene. In accordance with embodiments, the 2D images are analyzed to determine the pose and orientation of the objects while the 3D images are analyzed to create a depth map representation of the objects as will be explained hereinbelow in detail. In one embodiment, the capture module212obtains 3D images of the objects illuminated by an illuminator that projects structured light with a specific illumination pattern onto the object and/or images obtained by a stereoscopic sensor and/or a ToF sensor as illustrated hereinabove. The captured image of the object provides useful information for a future generation of a depth map. For example, the captured image of the object illuminated with the structured light includes specific pattern features that correspond to the illumination patterns projected onto the object. The pattern features can be stripes, lines, dots or other geometric shapes, and include uniform or non-uniform characteristics such as shape, size, and intensity. In some cases, where the images are captured by other sensors such as a stereoscopic sensor or a ToF sensor, the depth data is presented differently. An example captured image310illuminated with specific structured light (e.g. dots) is described inFIG.3B. The captured image310includes two occupants315(e.g. the driver) and325(a child passenger) seated at the vehicle front seats.
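The division of labor among the modules listed above can be pictured, under assumed interfaces, as a simple processing chain; the function names below are illustrative stand-ins injected as arguments and are not actual code of the disclosed processor:

    # Illustrative end-to-end chain mirroring the modules of processor 152.
    def process_frame_pair(image_2d, image_3d,
                           estimate_pose, build_depth_map, integrate,
                           is_valid_pose, extract_features, predict_mass):
        skeleton_2d = estimate_pose(image_2d)            # pose estimation module
        depth_map = build_depth_map(image_3d)            # depth map module
        skeleton_3d = integrate(skeleton_2d, depth_map)  # integration module
        if not is_valid_pose(skeleton_3d):               # filter module
            return None                                  # frame discarded
        features = extract_features(skeleton_3d)         # feature extraction module
        return predict_mass(features)                    # mass prediction module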
In some cases, the captured images310and related image data (e.g., intensity, depth and gradient of each pixel) are stored in the 3D image data store232and the captured visual images (e.g. 2D images) and related image data are stored in the 2D image data store234, as more fully described below. The depth map module214retrieves the captured 3D image of the illuminated objects from the 3D image data store232and generates a depth map representation of the objects from the captured image (e.g. pattern image) of the illuminated object. As described above, a depth map representation of an object refers to an image containing information about distances of different parts of the surface of the object and/or the scene from a designated viewpoint. The designated viewpoint can be the position of a sensor that captures the image of the object. In an embodiment, the depth map representations are stored at the depth maps representation data store236as more fully described below. An example depth map representation is further described below with reference toFIG.3A. In one embodiment, the depth map module214identifies and analyzes pattern features for deriving depth information of the captured image. Based on the identified and analyzed pattern features associated with the object, the depth map module214generates a depth map representation of the object. Examples of the depth information may be geometric deformation of the object due to differences of the depth of each pixel on the object in the captured image. The “depth” of a pixel on the object refers to the distance between the pixel on the actual object and the designated viewpoint (e.g., the position of the sensor). In some embodiments, the depth map module214generates a depth map representation of the object in the captured image based on the triangulation between the light pattern and the image sensor, whereby the depth of the object illuminated by the light pattern can be extracted. A detected pattern refers to a pattern that is projected onto the object and rendered in the captured image, and a reference pattern refers to the original illumination pattern provided by the illuminator. For structured light having an illumination pattern that is projected onto an object, the pattern that is detected in the captured image of the object is a distorted version of the original illumination pattern of the structured light. The distorted version of the original pattern includes shifts and other distortions due to the depth of the object. By comparing the detected pattern with the original illumination pattern, or parts of the detected pattern with the corresponding parts of the original illumination pattern, the depth map module214identifies the shifts or distortions and generates a depth map representation of the object. FIG.3Ashows an example of a captured image335including reflected light pattern spots, in accordance with embodiments. For illustration purposes, each spot of the reflected light pattern spots is colored using a grayscale color where each color represents the distance of the spot from a reference point (e.g. the camera). For example, the scale282includes a grayscale color for a distance of around 40 cm from the camera and the color representation gradually changes to a black scale for a distance of around 140 cm from the camera, and so on the color scale varies according to the distance.
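For a structured-light setup, the depth of each detected pattern element can be recovered from its shift relative to the reference pattern by triangulation; a minimal sketch is given below, assuming a rectified illuminator/camera geometry with a known focal length (in pixels) and baseline (in meters), neither of which is specified in the text:

    # Minimal triangulation sketch: depth from the horizontal shift (disparity)
    # between a detected pattern spot and its reference position.
    # focal_px and baseline_m are hypothetical calibration values.
    def spot_depth(detected_x, reference_x, focal_px=800.0, baseline_m=0.05):
        disparity = detected_x - reference_x
        if abs(disparity) < 1e-6:
            return float("inf")  # no measurable shift -> effectively very far
        return focal_px * baseline_m / abs(disparity)

    # Example: a spot shifted by 40 px corresponds to ~1 m with these parameters.
    print(round(spot_depth(440.0, 400.0), 2))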
Accordingly, the multiple pattern spots281of the captured image285shown inFIG.3Bare analyzed to yield a depth representation image287of the scene as illustrated inFIG.3A. For example, the cluster of reflected dots on the driver's legs (presented by ellipse345) is typically around 20-50 cm from the camera while the center mass of the driver (presented by ellipse355) is more remote from the camera (around 50-80 cm). In accordance with embodiments, the depth map module214receives and analyzes images of the vehicle cabin such as image335to extract a depth map representation including depth values for each reflected pattern location (e.g. pixel) in the captured image according to the distance of the detected pattern in the captured image from the sensor. The pose estimation module216retrieves the captured original images (2D images) of the vehicle's illuminated occupying object(s) (typically one or more persons) from the 2D image data store234and analyzes the original images to identify the one or more persons in the images and further estimate their pose. In some cases, the identification includes generating a graphical representation such as a skeleton of points superposed on each identified person in the captured image. In some embodiments, the images including the superposed skeletons are stored at an annotation data store238. In one embodiment, the pose estimation module216uses a DNN (Deep Neural Network) to identify in each retrieved image the one or more persons and superpose (e.g. mark) multiple annotations such as selected key-point locations on the identified objects. In case the objects are identified persons (e.g. passenger(s) or driver) the key-points represent body landmarks (e.g. joint body points) which are detected at the captured body image of the persons. In accordance with embodiments, the detected key-points may be graphically represented as a framework of key-points or a skeleton of the identified person's body. In accordance with embodiments, each key-point of the skeleton includes a coordinate (x, y) at the person(s) body image. In some cases, the skeleton is formed by linking every two key-points by marking a connection line between the two points as illustrated inFIGS.4A and4B. The integration module218obtains the formed skeleton (e.g. 2D skeleton) and the depth map representation of each object and combines them (e.g. mixes them) to yield a skeleton model, e.g. a 3D skeleton comprising 3D data for each object. In an embodiment, the integration process includes computationally combining the formed skeleton (2D skeleton) and the depth map representation to yield the skeleton model which includes data for each key-point in the skeleton model in an (x,y,z) coordinate system. In an embodiment, the skeleton model includes depth data related to each joint key-point of the formed skeleton model, for example the location of each point of the person in the scene (x,y) and the distance (z) of such point from a respective image sensor in the (x, y, z) coordinate system. In other words, each key-point of the formed skeleton has a coordinate in the 2D image. Since the captured 2D and 3D images are co-registered to each other, it is possible in accordance with embodiments to obtain the 3D value of the same coordinate in the 3D map. Hence, the Z value (e.g. distance) for some or for each key-point is obtained. An example of the combination process is illustrated inFIG.6. In some cases, the skeleton model data is stored at a skeleton model data store240.
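Because the 2D and 3D images are co-registered, turning a 2D skeleton into the skeleton model essentially amounts to sampling the depth map at each key-point coordinate; a sketch under that assumption follows (the array layout, joint names and units are illustrative, not taken from the disclosure):

    import numpy as np

    def to_skeleton_model(keypoints_2d, depth_map):
        """keypoints_2d: dict of joint name -> (x, y) pixel coordinates.
        depth_map: 2D array of distances (e.g. meters) co-registered with
        the 2D image. Returns joint name -> (x, y, z)."""
        model = {}
        for joint, (x, y) in keypoints_2d.items():
            z = float(depth_map[int(round(y)), int(round(x))])  # row, column indexing
            model[joint] = (x, y, z)
        return model

    # Toy example with a constant 0.8 m depth map
    depth = np.full((480, 640), 0.8)
    print(to_skeleton_model({"left_shoulder": (300, 200)}, depth))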
In an embodiment, the feature extraction module220is configured and enabled to analyze the skeleton model data and extract one or more data measurements for each related identified person in the scene. Generally, the extracted measurements include data related to the imaged persons and output derived values (e.g. features) intended to be informative and non-redundant information on the persons. Specifically, the extracted features of imaged occupants (e.g. persons) in the vehicle may include the measured length of body parts of the occupants, such as the length of the occupants' torso and shoulders, the width of the hips, the pelvis location, etc. Generally, estimating the mass of a seated person, even by human eyes, is much more difficult than estimating the mass of a standing person, as major body portions of the person (such as legs, knees or hands) are hidden and/or are not fully presented. Specifically, estimating the mass of an occupying object, such as a person, based on body part measurements (e.g. skeleton measurements) in a vehicle is accordingly challenging since the person's skeleton is seen in a highly non-standard pose, e.g. a seating position or a “crouching” position. There is a need, in accordance with an embodiment, to identify these non-standard poses (e.g. “crouching” positions) and avoid using them in the mass estimation process to yield an accurate mass estimation. In accordance with embodiments, the filter module222is configured to solve this matter by obtaining images including skeleton model data of the objects from the skeleton model data store240and filtering out one or more of the obtained images based on predefined filtering criteria. The remaining valid images including skeleton model data (e.g. valid skeleton model data) may be kept at the skeleton model data store240for further determining the mass of the objects. In some cases, the predefined filtering criteria include specific selection rules which define a valid pose, posture or orientation of the objects and further discard ‘abnormal’ poses. In accordance with an embodiment, an ‘abnormal’ pose may be defined as an object's body pose (e.g. marked by the skeleton model) or body portion which does not reflect or present the complete or almost complete major portions of the object. Nonlimiting examples of filtering criteria include: a defined spatial relation between skeleton features of the identified objects and/or identified abnormal poses; short imaged body portion lengths; an object image position located away from a high-density area. In accordance with embodiments, the defined spatial relation between skeleton features of the identified objects includes, for example, a predefined relation between the object portions. Specifically, in cases where the object is an occupant seated in a vehicle, the criteria include a defined spatial relation between the occupant's body parts, such as a relation between the occupant's shoulders and torso or hands, a relation between the torso and knees in a seating position, and the like. In some cases, the spatial relation between the measured skeleton occupant body organs (e.g. knees, shoulders, hands) is measured and compared to predefined body proportion parameters (e.g. in a seating position). For example, as illustrated inFIG.5Bthe spatial relation between the shoulders and torso of driver522does not match a predefined proportional parameter, and therefore image501will be discarded.
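A body-proportion check of the kind just described (torso length compared with shoulder width) could be sketched as follows; the joint names and the threshold value are assumptions for illustration only, not parameters taken from the disclosure:

    import math

    def length(p, q):
        return math.dist(p, q)  # Euclidean distance between two key-points

    def is_valid_pose(skeleton, min_torso_to_shoulder=0.8):
        """skeleton: dict with key-points for 'neck', 'pelvis',
        'left_shoulder', 'right_shoulder'. Discards e.g. a leaning-forward
        pose whose measured torso is short relative to the shoulder width."""
        torso = length(skeleton["neck"], skeleton["pelvis"])
        shoulders = length(skeleton["left_shoulder"], skeleton["right_shoulder"])
        return torso >= min_torso_to_shoulder * shoulders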
In some cases, data relating to the occupant's body proportions and features is stored at the annotation data store238and is retrieved by the filter module222to accordingly discard or confirm captured images and/or skeleton models of the occupant. In accordance with embodiments, the high-density filtering criterion includes generating a high-density model (e.g. a high-dimensional space vector such as an eight-dimensional vector) according to one or more measured vehicle parameters. The high-density model may include, for each key-point (body joint) of an identified person, an allowed region in the captured image in which the key-point may be located. If the key-point is identified by the high-density model to be out of this region, then this image is discarded. The allowed region for each joint is provided by analyzing images with good “standard” sitting positions. In some cases, the generated high-density parameters are stored at the sensing system such as systems100,102or103(e.g. at processor152or storage154or at a remote processor or database such as a cloud data store). Then, each of the generated skeletons is placed at and/or compared to the generated high-dimensional space to determine the location in space of the skeleton with respect to the high-density model. Accordingly, images that include skeletons that are not within a predetermined distance from the high-density area in this space are discarded. For example, a generated skeleton which is located far from the density center will be filtered out. The mass prediction module224obtains the valid images of the objects from the skeleton model data store240and analyzes the valid images to determine the mass of the objects, in accordance with embodiments. In some embodiments, the analysis includes inserting the extracted features of the valid skeleton data into a regression module, such as a pre-trained regression module configured and enabled to estimate the mass. In some cases, the pre-trained regression module may use “decision trees” trained according to, for example, XGBoost methods, where each decision tree represents the measured mass of an object in each captured image according to the measured features of the object. For example, each formed tree may include data on a captured occupant's features, such as the occupant's shoulder length, torso length and knee length, which were measured based on valid images. In accordance with embodiments, the occupant's mass estimation process is optimized using the pre-trained regression module to provide the most accurate mass prediction (e.g. estimation prediction) for each captured object (e.g. persons). It is understood that in accordance with embodiments other types of pre-trained methods may be used. In some embodiments, the measured mass of each object is stored at the mass measurements data store242as more fully described below. The 3D image data store232of the processor152stores captured 3D images of specific objects (e.g. persons) or scenes (e.g. a vehicle cabin) and image data related to the captured images. In an embodiment, the captured 3D images stored in the 3D image data store232can be images including specific pattern features that correspond to the illumination patterns projected onto the object. For example, the images may include one or more reflected spots as illustrated inFIG.3A. In other embodiments, the 3D images may be images obtained from a stereoscopic camera or a ToF sensor or any known 3D capturing devices or methods.
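The high-density (“standard sitting position”) criterion can be approximated by comparing a fixed-length vector of selected key-point coordinates against the average learned from training images; a sketch under that assumption follows (the vector layout and distance threshold are illustrative, not disclosed values):

    import numpy as np

    def density_distance(keypoint_vector, training_mean):
        """keypoint_vector, training_mean: e.g. 8-element arrays built from
        selected joint coordinates of the current frame / the training set."""
        a = np.asarray(keypoint_vector, dtype=float)
        b = np.asarray(training_mean, dtype=float)
        return float(np.linalg.norm(a - b))

    def passes_density_filter(keypoint_vector, training_mean, max_distance=2.0):
        # Frames whose skeleton lies far from the high-density region are discarded.
        return density_distance(keypoint_vector, training_mean) <= max_distance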
The depth map representation data store236of the processor152stores depth map representations and related data of the objects generated by the depth map module214. For example, the depth map representation data store236stores the original depth map representation and related depth data as well as an enhanced depth map representation and related depth data. As described above, the original depth map representation refers to the depth map representation that is derived from the original captured image. Specifically, in some cases, the related data may include an image representation of the light patterns of the image according to the measured distance of each image pixel from the image sensor. The annotation data store238of the processor152stores skeleton representations and related data of the objects generated by the pose estimation module216. For example, the annotation data store238stores the original 2D images and the related superposed skeleton for each object. According to one embodiment, the annotation data store238may further store related data for each pixel or key-point of the skeleton, such as one or more confidence grades. The confidence grade may be defined as the intensity level of a key-point's heat map, as identified for example by the pose estimation module216. For example, the pose estimation module216may include or use a DNN to provide a “probability heat map” for some or for each key-point in the captured image. In an embodiment, the “probability heat map” for each key-point may be stored, for example, at the annotation data store238. For each skeleton point, the DNN (e.g. the pose estimation module216) states how confident, relevant and accurate the location of the generated key-point of the skeleton is, by adjusting the intensity of the maximal point in the probability map. For example, as illustrated inFIG.4A, for each key-point of skeleton411(e.g. key points442,443,444,452,453,462,463,464,465,472,473and474) a probability heat map is generated relating to the confidence rating of the DNN. The probability heat map may be further used, for example together with the density criteria score, to determine the confidence grade of the skeleton key-points and accordingly to approve or discard captured images. In some cases, the original images are divided according to the identified objects in the image. For example, captured images of a vehicle cabin are separated into one or more images per each of the vehicle's seats (e.g. front or back seats). FIGS.4A and4Bshow, respectively, examples of captured images410and420of a vehicle interior passenger compartment412and422including two imaged occupants, a driver404and a passenger406in image410and a driver416and a passenger418in image420. In an embodiment, the images410and420are captured by an imaging device mounted on the front section of the vehicle, for example on or in proximity to the front centered mirror. In an embodiment, the pose estimation module216retrieves the captured original visual images of the illuminated objects (e.g. occupants404,406,416and418) from the 2D image data store234and generates skeletons on the persons in the images (e.g. skeletons411and421on the captured images410and420).
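Reading a key-point location and its confidence from a DNN probability heat map typically reduces to taking the peak of the map; a minimal sketch under that assumption (array shapes and the single-joint example are illustrative):

    import numpy as np

    def keypoint_from_heatmap(heatmap):
        """heatmap: 2D array of per-pixel probabilities for one joint.
        Returns ((x, y), confidence) where confidence is the peak intensity."""
        idx = np.unravel_index(np.argmax(heatmap), heatmap.shape)
        y, x = int(idx[0]), int(idx[1])
        return (x, y), float(heatmap[y, x])

    # Toy example: a single hot pixel at (x=30, y=20) with intensity 0.9
    hm = np.zeros((64, 64))
    hm[20, 30] = 0.9
    print(keypoint_from_heatmap(hm))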
For example, as illustrated inFIG.4A, the object on the left side of the captured image410is identified as a person (e.g. passenger406) seated on the vehicle front passenger seat with a skeleton411superposed on the passenger's center body. In an embodiment, the pose estimation module216is configured to identify and localize major parts/joints of the passenger's body (e.g. shoulders, ankle, knee, wrist, etc.) by detecting landmarks on the identified object (e.g. key-points) and linking the identified landmarks by connection lines. For example, as shown inFIG.4A, the skeleton generation process by the pose estimation module216includes identifying key-points442and443linked by line444for estimating the passenger's shoulders; identifying key-points452and453for estimating the passenger's torso; identifying key-points462,463,464and465for estimating the passenger's knees; identifying key-points472,473and474for estimating the passenger's right hand; and identifying key-points482and483for estimating the passenger's left hand. In accordance with embodiments, the key-points are obtained by a DNN trained to identify the specific key-points. In some cases, for each identified key-point a “probability map” is applied to yield a confidence grade which defines the accuracy of the identified key-point. In some cases where the vehicle includes a number of seats (e.g. back seats, front seats, driver seat, baby seat and the like) and the captured image includes a number of occupants seated on the different seats, the module may identify, for example separately, each seat and the occupant seated on the identified seat, for generating accordingly for each object (e.g., passenger and/or driver) a skeleton. For example, as shown inFIG.4B, the driver seat and driver may be identified and accordingly a second skeleton may be superposed on the identified driver. In accordance with embodiments, once a skeleton representation is generated for one or more objects, for example for each object, for example by one or more processors (e.g. processor152), one or more skeleton properties of the objects are analyzed to estimate the object's mass. For example, as illustrated inFIGS.4C-4G, an image480of the vehicle interior cabin back seat including two occupants482and484is captured. For example, image480may be one frame of a plurality of captured frames of a vehicle cabin. In accordance with embodiments, a skeleton486superposed on occupant484is analyzed to yield the occupant's body (e.g. skeleton) properties, such as the length of the shoulders of occupant484(FIG.4C), hips (FIG.4D), torso (FIG.4E), legs (FIG.4F) and center of mass (FIG.4G). In some cases, the occupant's mass is estimated based on these five measured skeleton portions. In other embodiments, different and/or additional body organs of the occupant or elements in the occupant's surroundings may be measured. FIGS.4H-4Kshow the data distribution of the estimated mass of one or more occupants in a vehicle as a function of various measured body characteristic features of the occupants, such as shoulders (FIG.4H), torso (FIG.4I), hips (FIG.4J) and legs (FIG.4K), in accordance with embodiments. The vertical lines in each graph ofFIGS.4H-4Krepresent one or more subjects moving around the identified object. The mass estimation for each body portion, e.g. the torso length estimation ofFIG.4I, includes noise and changes from image to image for the same object (e.g. the person's torso). Advantageously, combining different measurements of different body parts of the same object yields an accurate mass estimation.
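Given a skeleton model with (x, y, z) key-points, the body measurements used for mass estimation (shoulder width, hip width, torso length, leg length) reduce to Euclidean distances between selected pairs of joints; a sketch with assumed joint names follows:

    import math

    FEATURE_PAIRS = {
        "shoulder_width": ("left_shoulder", "right_shoulder"),
        "hip_width": ("left_hip", "right_hip"),
        "torso_length": ("neck", "pelvis"),
        "leg_length": ("hip_center", "knee_center"),
    }

    def extract_features(skeleton_3d):
        """skeleton_3d: dict of joint name -> (x, y, z) in a metric coordinate
        system. Missing joints are simply skipped."""
        features = {}
        for name, (a, b) in FEATURE_PAIRS.items():
            if a in skeleton_3d and b in skeleton_3d:
                features[name] = math.dist(skeleton_3d[a], skeleton_3d[b])
        return features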
It should be stressed that some of the graphs inFIGS.4H-4Kinclude less data as a result of less robust measurements, such as the measurements of the occupant's legs. In one embodiment, the pose estimation module216processes each of the images using one or more filters, obtained for example from the filter data store, to check and generate a confidence grade. The confidence grade is based on the reliability and/or accuracy of the formed skeleton, and is used specifically for examining the reliability and accuracy of each identified key-point. In some cases, the confidence grade may be determined based on the confidence grading rating as measured by the pose estimation module216(e.g. DNN) and the density criteria score as measured using a pose density model. In accordance with embodiments, the pose density model obtains the skeletons of each image from the pose estimation module216and places each of the objects' skeleton configurations in a high-dimensional space for discarding any configuration which is not within a predetermined distance from a high-density area in this space. In some cases, the distance is determined by the Euclidean distance between an 8-vector of the current frame's key points and the average points calculated from the complete training data. In one embodiment, the confidence rate is configured based on the skeleton's local density in the skeleton space. In some cases, temporal smoothing is performed on the obtained estimation, to reduce noise and fluctuations. FIG.2Bis a flow diagram250illustrating steps of capturing one or more images252of one or more objects, such as objects254and255in scene256, and estimating the mass of the objects, according to one embodiment. In some cases, the scene may be a vehicle interior passenger compartment having one or more seats and the objects are one or more passengers and/or a driver seated on these seats. As shown inFIG.2B, an imaging device262comprising one or more illuminators265provides structured light with a specific illumination pattern (e.g., spots) to the objects254and255in scene256and a sensor266captures one or more images of the objects254and255in scene256. In other embodiments, device262may be or may include a stereoscopic imager or a ToF imager. For example, the imaging device may be a ToF imaging device and the illuminator comprises an illumination source configured to project light on the scene and the sensor is a ToF sensor configured to capture a plurality of images comprising reflections of said modulated structured light pattern from one or more objects in the scene. In some cases, the imaging device262may be a stereoscopic imaging device including a stereoscopic imager as known in the art. In various embodiments, the projected light pattern may be a pattern of spots that, for example, uniformly cover the scene or selective portions of the scene. As the light is projected into the scene, spots from the light pattern fall onto one or more objects of interest. In some cases, the light is projected by the illuminator265using a diffractive optical element (DOE) to split a single laser spot into multiple spots as described inFIG.1B. Other patterns such as a dot, a line, a shape and/or a combination thereof may be projected on the scene. In some cases, the illumination unit doesn't include a DOE. In some cases, each reflected light pattern (e.g. spot) is covered by one or more pixels of the sensor266. For example, each spot may be covered by a 5×5 pixel window.
In one embodiment, a processor152may instruct the illuminator265to illuminate the objects254and255with specific modulated structured light. One or more reflected pattern images260and clean images270(e.g. visual images which do not include the reflected light pattern) are provided to the processor152to generate a depth map representation264and a skeleton model representation266of the objects254and255. To generate the skeleton model representation266for each of the objects254and255, the processor152first identifies the pose and/or orientation of the captured objects (272) in the original images (270) by correlating each point in the scene space256to a specific portion of an object. For example, in case the objects254and255are two vehicle passengers (e.g. persons), each point or selected points in the passenger image are linked to a specific body organ, such as legs, torso, etc. The processor152then filters the identified points by examining the reliability of each identified object based, for example, on the measured confidence grade (as described above) and applying a confidence grade to each identified point in space (274). Thereafter, in some cases, the processor152splits the captured images into one or more images (276) according to the identified pose and/or orientation and/or confidence grade of the identified objects to generate the skeleton representation (278) for each identified object. In some cases, the position and orientation of the object may be detected and measured by applying an OpenPose algorithm on the images and/or other DNN algorithms such as DensePose configured to extract body pose. In accordance with embodiments, to generate the depth map representation264of the objects254and255, the processor152analyzes the reflected pattern features rendered and/or ToF data and/or stereoscopic data in the captured images260to yield the depth, e.g. distance, of each reflected pattern from a reference point. In some cases, the pattern is a spot-shaped pattern and the generated depth map representation264comprises a grid of points superposed on the captured images252where each point indicates the depth of the surfaces in the images, as illustrated inFIG.3A. The processor152then integrates (e.g. combines) the depth map representation (264) with the skeleton annotation representation (278) to yield a skeleton model representation (266) for each of the objects. In accordance with embodiments, the skeleton model representation (266) of each object is then analyzed by the processor152to extract object features (268) such as the length or width of body portions in case the identified objects are persons. Nonlimiting examples of extracted features may include body portion lengths of the object, such as shoulder, torso and knee lengths. In some embodiments, the processor152filters the skeleton model presentation images according to predefined filtering criteria to yield one or more valid skeleton model presentations (269). In some cases, the filtering criteria are based on the measured confidence rating of each identified point and on one or more selection rules as described herein with respect toFIG.2A. In some cases, a grade is assigned to each analyzed frame reflecting the accuracy and reliability of the identified object's shape and position. In accordance with embodiments, based on the extracted features the processor152determines the mass of each object (280).
For example, the extracted features for each captured image are inserted into a massing model such as a pre-trained regression massing model which receives the extracted object features for each obtained image over time (t) to determine the mass of each object in the scene (280) or a mass classification (282). In an embodiment, the massing model considers previous mass predictions, as obtained from previous image processing steps, to select the most accurate mass prediction result. In some embodiments, the massing model also takes into account the measured grade for each skeleton model and optionally also the provided confidence grades to yield the most accurate mass prediction result. In some embodiments, a temporal filter is activated to stabilize the output and remove outliers, so that a single prediction is provided at each timestamp. For example, temporal filtering may include removing invalid images and determining a mass prediction based on previous valid frames. If the required output is a continuous mass value (e.g. any numeric value such as 5, 97.3, 42.160, etc.), then it is the temporal filter's output. Alternately or in combination, a mass classification (282) for each identified object in the scene, such as objects254and255, may be determined in accordance with a number of pre-determined mass categories, e.g. child, teenager, adult. For example, a vehicle passenger weighing 60 kg, and/or between 50-65 kg, will be classified as a “small adult” or “teenager”, while a child weighing 25 kg or in the range of 25 kg will be classified as a “child”. FIGS.5A and5Billustrate examples of captured images500and501of an interior passenger compartment of a vehicle which are filtered out based on the predefined filtering criteria, in accordance with embodiments. As shown inFIG.5AandFIG.5B, each of the obtained images500and501comprises a graphical skeleton presentation formed by a number of lines superposed on the occupant's major body portions. In accordance with embodiments, each of these skeletons is analyzed to discriminate images comprising ‘abnormal’ or ‘non-valid’ poses and keep selected images (e.g. valid images including valid poses) which will be used for further accurate occupant mass measurement classification. FIG.5Ashows a captured image500of a passenger512seated on a front passenger seat of a vehicle which will be filtered out based on the predefined filtering criteria, such as due to the short measured torso. Specifically, as illustrated in image500, the passenger512is leaning forward with respect to an imaging sensor, and accordingly, the measured passenger's torso length516(e.g., the length between the neck and the pelvis measured between the skeleton points511and513) of the skeleton518is short relative to the shoulders' width. As mass estimation based on such a position shown in image500is difficult and inaccurate (due to the short measured torso as defined in the predefined filtering criteria), this image will be filtered out from the captured images of the vehicle cabin. FIG.5Bshows a captured image501of a driver522seated on a driver seat of the vehicle and a formed skeleton528superposed on the driver's upper body. In accordance with embodiments, image501will be filtered out from the list of captured images of the vehicle cabin since the captured image of the driver body, as emphasized by skeleton528, is located far from the high-density area. Specifically, the body image of the driver is located at a low-density area in the “skeleton configuration space” (e.g.
the skeleton was fitted far from the density center), meaning the body of the identified person is leaning out of the “standard” sitting position, and hence the joints are farther than the allowed positions. FIG.6is a flow diagram600illustrating the generation of a skeleton model (3D skeleton model650) of the occupants shown inFIG.3Bby combining the depth map representation335of the occupants shown inFIG.3Band the 2D skeleton representation image422of the occupants shown inFIGS.4A and4B, according to one embodiment. As shown inFIG.6, the captured image325renders pattern features on the captured occupants (e.g., a person). The depth map representation335of the occupants is derived from the captured image325of the occupants, where the depth map representation335of the occupants provides depth information of the occupants while the 2D skeleton representation422provides pose, orientation and size information on the occupants. The skeleton model650is created by combining the depth map representation335of the occupants and the 2D skeleton representation422of the object. In accordance with embodiments, the skeleton model650is created by applying a depth value (calculated for example from the nearest depth points that surround that point) to each skeleton key-point. Alternatively or in combination, the average depth in the region of the skeleton can be provided as a single constant number. This number may be used as a physical “scale” for each provided skeleton as further explained with respect toFIG.8. FIG.7Ais a schematic high-level flowchart of method700for measuring the mass of one or more occupants in a vehicle, in accordance with embodiments. For example, the method may include determining the mass of one or more occupants sitting on vehicle seats, for example in real time, according to one or more mass classification categories and accordingly outputting one or more signals to activate and/or provide information associated with the activation of one or more of the vehicle's units or applications. Some stages of method700may be carried out at least partially by at least one computer processor, e.g., by processor152and/or a vehicle computing unit. Respective computer program products may be provided, which comprise a computer readable storage medium having a computer readable program embodied therewith and configured to carry out the relevant stages of method700. In other embodiments, the method includes different or additional steps than those described in conjunction withFIG.7A. Additionally, in various embodiments, steps of the method may be performed in different orders than the order described in conjunction withFIG.7A. In some embodiments, some of the steps of the method are optional, such as the filtering process. At step710multiple images including one or more visual images, for example, a sequence of 2D images, and a sequence of 3D images of the vehicle cabin are obtained, in accordance with embodiments. The obtained sequence of 2D and 3D images includes images of one or more occupants such as a driver and/or passenger(s) seated in the vehicle rear and/or back seats. In accordance with some embodiments, the 3D images are images including reflected light pattern and/or ToF data and/or any stereoscopic data while the 2D images are clean original visual images which do not include additional data such as a reflected light pattern. In some embodiments, the multiple images (e.g.
2D and 3D images) are captured synchronically and/or sequentially by an image sensor located in the vehicle cabin, for example at the front section of the vehicle as illustrated inFIG.1A. In some cases, the images are obtained and processed in real-time. At step720one or more pose detection algorithms are applied on the obtained sequence of 2D images to detect the pose and orientation of the occupants in the vehicle cabin. Specifically, the pose detection algorithms are configured to identify and/or measure features such as the position, orientation, body organs, and length and width of the occupants. For example, the position and orientation of the object may be detected and measured by applying an OpenPose and/or DensePose algorithm on the images. Specifically, in accordance with embodiments, a Neural Network such as a DNN (Deep Neural Network) is applied to each obtained 2D image over time (t) to generate (e.g. superpose) a skeleton layer on each identified occupant. The skeleton layer may comprise multiple key-point locations which describe the occupant's joints. In other words, the key-points represent body landmarks (e.g. joint body points) which are detected at the captured body image, forming the skeleton representation as shown inFIGS.4A and4B. In accordance with embodiments, each key-point of the skeleton representation includes an identified coordinate (x, y) at the occupant(s) body image to be used for extracting features of the identified occupants. In some embodiments, the pose estimation methods may be further used to identify the occupants and/or the occupant's seat in each of the obtained 2D images. In some embodiments, the pose estimation methods are configured to extract one or more features of the occupants and/or the occupant's surroundings, such as the occupant's body parts and the location of the occupant's seat. In some embodiments, the identified occupants are separated from one another to yield a separate image for each identified occupant. In some embodiments, each separated image includes the identified occupant and optionally the occupant's surroundings such as the occupant's seat. In some embodiments, each obtained 2D image of the sequence of 2D images is divided based on the number of identified occupants in the image so that a separate skeleton is generated for each identified occupant. In some embodiments, a confidence grade is assigned to each estimated key-point in space (e.g. the vehicle cabin). At step730the sequence of 3D images is analyzed to generate a depth map representation of the occupants, in accordance with embodiments. The captured 3D images of the object illuminated with the structured light include specific pattern features that correspond to the illumination patterns projected onto the object. The pattern features can be stripes, lines, dots or other geometric shapes, and include uniform or non-uniform characteristics such as shape, size, and intensity. An example captured image illuminated with specific structured light (e.g. dots) is described inFIG.3A. At step740, the 3D map representations and the skeleton annotation layers of each occupant for each image are combined to yield a skeleton model (3D skeleton model), for example for each occupant, in accordance with embodiments. Generally, the generated skeleton model is used to identify the orientation/pose/distance of the occupants in the obtained images from the imaging device.
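The “physical scale” idea behind this combination can be made concrete: once each key-point is given a depth value (for example from the nearest reflected spots around it), pixel coordinates can be converted to metric coordinates, so real body-part lengths become measurable. A sketch assuming a simple pinhole camera model is shown below; the intrinsics FX, FY, CX, CY, the nearest-spot averaging and the toy coordinates are all illustrative assumptions:

    import math

    FX = FY = 800.0        # assumed focal lengths in pixels (illustrative)
    CX, CY = 320.0, 240.0  # assumed principal point (illustrative)

    def back_project(u, v, z):
        # Pinhole model: pixel (u, v) at depth z (meters) -> metric (X, Y, Z)
        return ((u - CX) * z / FX, (v - CY) * z / FY, z)

    def key_point_depth(u, v, depth_spots, k=2):
        # Depth of a key-point taken as the average of the k nearest reflected
        # spots; each spot is (spot_u, spot_v, spot_z).
        nearest = sorted(depth_spots,
                         key=lambda s: (s[0] - u) ** 2 + (s[1] - v) ** 2)[:k]
        return sum(s[2] for s in nearest) / len(nearest)

    # Toy example: neck and pelvis key-points with a few surrounding spots
    spots = [(318, 178, 0.71), (322, 183, 0.69), (328, 298, 0.94), (332, 303, 0.96)]
    neck = back_project(320, 180, key_point_depth(320, 180, spots))
    pelvis = back_project(330, 300, key_point_depth(330, 300, spots))
    print("real torso length [m]:", round(math.dist(neck, pelvis), 3))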
Specifically, the skeleton model includes data such as a 3D key-point (x, y, z) representation of the occupants with respect to an X-Y-Z coordinate system, where the (x, y) point represents the location of the occupant's body joint surface in the obtained images and (z) represents the distance of the related (x, y) key-point surface from the image sensor. For example,FIG.7Bshows an image780including the combined 3D map layer and the skeleton layer of a vehicle interior passenger compartment782in accordance with embodiments. The image780shows a passenger785seated on the vehicle seat and multiple reflected light pattern elements (e.g. dots)788used for estimating the distance (e.g. depth) of each related body portion from the image sensor. The image further includes a skeleton790formed by connecting a number of selected pairs of key-points at the passenger body by connection lines. It should be stressed that while steps730and740ofFIG.7Ainclude obtaining reflected light pattern images to yield depth data for each image, the present invention may include obtaining 3D images and/or extracting depth data by any type of 3D systems, devices and methods, such as stereoscopic cameras or ToF sensors as known in the art. At step750the skeleton models are analyzed to extract one or more features of the occupants. In an embodiment, the extracted features may include data such as the measured pose and/or orientation of each occupant in the vehicle. In some embodiments, the features may further include the length of one or more body parts of the occupants, such as major body parts of the occupant, e.g., shoulders, hips, torso, legs, body, etc. Advantageously, the generated skeleton model provides the “real length” (e.g. actual length) of each body portion as opposed to the “projected length” that would be obtained if only 2D images of the persons were obtained. The analysis based on the 3D data improves the accuracy of the mass estimation, as the “projected length” is very limited in providing mass estimation (e.g. sensitive to angle, etc.). For example, as shown inFIG.7B, the obtained image780comprising the reflected light pattern of dots and the skeleton superposed on the image of the passenger seated at the vehicle front or back seats is analyzed to estimate the length of the person's body parts, e.g. shoulders, hips, torso, legs, body, etc. At step760the one or more skeleton models of each occupant are analyzed to filter out (e.g. remove or delete) one or more skeleton models, based on, for example, predefined filtering criteria, and yield valid skeleton models (e.g. suitable for mass estimation) of the occupants, in accordance with embodiments. The predefined filtering criteria include selection rules which define a required pose and orientation for estimating the occupant's mass. For example, the predefined filtering criteria include selection rules which define an ‘abnormal’ or ‘non-valid’ pose or orientation of the occupants. An ‘abnormal’ pose or orientation may be defined as an occupant's pose or orientation where a full or almost full skeleton representation is not presented or imaged due, for example, to a nonstandard sitting position of the occupant or as a result of imaging the occupant at an angle with respect to the image sensor where the occupant may not be completely seen. In some cases, the nonstandard pose may relate to a pose where the occupant is not sitting straight, for example in a bending position. Accordingly, the analysis of these ‘abnormal’ skeleton representations is used to discard poses defined as ‘abnormal’ (e.g.
Nonlimiting examples of filtering criteria include defined spatial relations between skeleton features of the identified objects and/or identified abnormal poses. Nonlimiting examples of discarded poses are illustrated inFIGS.5A and5B. In some cases, the analyzed images may be filtered using a pose density model method. In accordance with embodiments, the pose density model method includes placing each of the object's skeleton configurations in a high-dimensional space and discarding any configurations which are within a predetermined distance from a high-density area in this space. At step770the valid skeleton models of the occupants are analyzed to estimate the mass of the occupants, in accordance with embodiments. In some embodiments, the analysis process includes inserting the extracted features of the valid skeleton models into a measurement model such as a pre-trained regression model configured to estimate the mass at time (t) of the occupants based on current and previous (t−i) mass measurements. In some cases, the estimation model is a machine learning estimation model configured to determine the mass and/or mass classification of the occupants. In some cases, the measurement model is configured to provide a continuous value of a predicted mass, or to perform a coarser estimation and classify the occupant according to a mass class (e.g. child, small adult, normal, big adult). Alternatively or in combination, the valid skeleton models of the occupants are processed to classify each occupant according to a predetermined mass classification. For example, a passenger weighing around 60 kg, e.g. 50-65 kg, will be classified in a “small adult” subclass, while a child weighing around 25 kg, e.g. in the range of 10-30 kg, will be classified in a “child” subclass. FIG.7Cis a schematic flowchart of method705for estimating the mass of one or more occupants in a vehicle, in accordance with embodiments. Method705includes all steps of the aforementioned method700but further includes, at step781, classifying the identified occupants in accordance with one or more measured mass sub-categories (e.g. child, small adult, normal, big adult). At step782an output, such as an output signal, is generated based on the measured and determined mass or mass classification of each identified occupant. For example, the output signal including the estimated mass and/or mass classification may be transmitted to an airbag control unit (ACU) to determine whether airbags should be suppressed or deployed, and, if so, at which output levels. According to other embodiments, the output including the mass estimation may control the vehicle's HVAC (Heating, Ventilating, and Air Conditioning) systems, and/or optimize the vehicle's electronic stabilization control (ESC) according to the measured mass of each of the vehicle's occupants. FIG.8is a schematic flowchart of method800for measuring the mass of one or more occupants in a vehicle, in accordance with another embodiment. For example, the method may include determining the mass of one or more occupants sitting on vehicle seats, for example in real time, according to one or more mass classification categories, and accordingly outputting one or more signals to activate one or more of the vehicle's units. Some stages of method800may be carried out at least partially by at least one computer processor, e.g., by processor152and/or a vehicle computing unit.
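The classification and output-signal generation of steps781and782can be sketched in Python as follows. This is a non-authoritative illustration: the sub-class cut points, the airbag suppression policy, and the output record format are hypothetical and are not taken from the specification.

```python
# Minimal sketch: map a continuous mass estimate to a sub-category and build an
# output record for a vehicle unit such as an airbag control unit (ACU).
def classify_mass(mass_kg: float) -> str:
    # Hypothetical, non-calibrated cut points for illustration only.
    if mass_kg < 30:
        return "child"
    if mass_kg < 65:
        return "small adult"
    if mass_kg < 95:
        return "normal"
    return "big adult"

def build_acu_output(seat_id: int, mass_kg: float) -> dict:
    category = classify_mass(mass_kg)
    return {
        "seat": seat_id,
        "estimated_mass_kg": round(mass_kg, 1),
        "mass_class": category,
        # Hypothetical policy: suppress the airbag for the "child" class.
        "airbag_suppressed": category == "child",
    }

print(build_acu_output(seat_id=2, mass_kg=58.4))
```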
Respective computer program products may be provided, which comprise a computer readable storage medium having a computer readable program embodied therewith and configured to carry out the relevant stages of method800. In other embodiments, the method includes different or additional steps than those described in conjunction withFIG.8. Additionally, in various embodiments, steps of the method may be performed in different orders than the order described in conjunction withFIG.8. At step810multiple images including one or more visual images, for example, a sequence of 2D images, and a sequence of 3D images of the vehicle cabin are obtained, in accordance with embodiments. The obtained sequences of 2D and 3D images include images of one or more occupants such as a driver and/or passenger(s) seated in the vehicle front and/or back seats. In accordance with embodiments, the 3D images may be any type of stereoscopic images such as images captured by a stereoscopic camera. Alternatively or in combination, the 3D images may be captured by a ToF image sensor. Alternatively or in combination, the 3D images may include reflected light patterns. The 2D images may be clean visual images which, for example, do not include reflected light patterns. In some embodiments, the multiple images (e.g. 2D and 3D images) are captured synchronously and/or sequentially by an image sensor located in the vehicle cabin, for example at the front section of the vehicle as illustrated inFIG.1A. The 3D images may include depth map representations of the occupants, in accordance with embodiments. For example, the captured 3D images of the object illuminated with the structured light may include specific pattern features that correspond to the illumination patterns projected onto the object. The pattern features can be stripes, lines, dots or other geometric shapes, and include uniform or non-uniform characteristics such as shape, size, and intensity. An example captured image illuminated with specific structured light (e.g. dots) is described inFIG.3A. In other embodiments, different types of 3D images may be used to extract the depth maps. In some cases, the images are obtained and processed in real time. In some cases, the 2D images and 3D images may be captured by a single image sensor. In some cases, the 2D images and 3D images may be captured by different image sensors. At step820one or more detection algorithms such as pose detection and/or posture detection algorithms are applied to the obtained sequence of 2D images to detect the pose and orientation of the occupants in the vehicle cabin. Specifically, the pose detection algorithms are configured to generate a skeleton representation (e.g. a 2D skeleton representation) or 2D skeleton models for each occupant to identify and/or measure features such as the position, orientation, body parts, and length and width of the occupants. For example, the position and orientation of the object may be detected and measured by applying an OpenPose algorithm to the images. Specifically, in accordance with embodiments, a Neural Network such as a DNN (Deep Neural Network) is applied to each obtained 2D image over time (t) to generate (e.g. superpose) a skeleton layer (e.g. a 2D skeleton representation) on each identified occupant. The skeleton layer may comprise multiple key-point locations which describe the occupant's joints. In other words, the key-points represent body landmarks (e.g. joint body points) which are detected in the captured body image, forming the skeleton representation as shown inFIGS.4A and4B.
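To make the per-occupant 2D skeleton representation of step820concrete, the following Python sketch converts a pose detector's raw output into per-occupant skeleton models with confidence grades. The detector output format (one list of (x, y, confidence) triplets per person in a fixed joint order), the joint names, and the confidence threshold are assumptions for illustration; they are not taken from any specific library.

```python
# Minimal sketch: build per-occupant 2D skeleton models from pose-detector output.
JOINT_NAMES = ["nose", "neck", "right_shoulder", "left_shoulder",
               "right_hip", "left_hip"]
MIN_CONFIDENCE = 0.3  # hypothetical threshold

def to_skeleton(person_keypoints):
    """Convert one person's raw key-points into a 2D skeleton model,
    keeping only joints whose confidence grade passes the threshold."""
    skeleton = {}
    for name, (x, y, conf) in zip(JOINT_NAMES, person_keypoints):
        if conf >= MIN_CONFIDENCE:
            skeleton[name] = {"xy": (x, y), "confidence": conf}
    return skeleton

def skeletons_per_occupant(detections):
    """detections: list with one raw key-point list per identified occupant."""
    return [to_skeleton(person) for person in detections]

# Example with one detected occupant; the low-confidence right_hip is dropped.
detections = [[(310.0, 120.0, 0.91), (312.0, 160.0, 0.88), (280.0, 165.0, 0.74),
               (344.0, 162.0, 0.81), (285.0, 260.0, 0.22), (340.0, 258.0, 0.79)]]
print(skeletons_per_occupant(detections)[0].keys())
```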
In accordance with embodiments, each key-point of the skeleton representation includes an identified coordinate (x, y) in the occupant's body image to be used for extracting features of the identified occupants. In some embodiments, the pose estimation methods may be further used to identify the occupants and/or the occupant's seat in each of the obtained 2D images. In some embodiments, the pose estimation methods are configured to extract one or more features of the occupants and/or the occupant's surroundings, such as the occupant's body parts and the location of the occupant's seat. In some embodiments, the identified occupants are separated from one another to yield a separate image for each identified occupant. In some embodiments, each separated image includes the identified occupant and optionally the occupant's surroundings, such as the occupant's seat. In some embodiments, each obtained 2D image of the sequence of 2D images is divided based on the number of identified occupants in the image so that a separate skeleton is generated for each identified occupant. In some embodiments, a confidence grade is assigned to each estimated key-point in space (e.g. in the vehicle cabin). At step830, the 3D images (e.g. depth maps) are analyzed to extract one or more distance or depth values relating to the distance of the scene, objects in the scene (e.g. occupants), or the vehicle's seats from a reference point such as the image sensor, in accordance with embodiments. The extraction of these depth values is required because objects in a captured 2D image that are located at different distances from the sensor may mistakenly appear to have the same size. Therefore, to measure the actual size of the occupants in the vehicle, the one or more extracted depth values may be used as a reference scale, such as a scale factor or normalization factor, to adjust the absolute values of the skeleton model. In some cases, the one or more distance values may be extracted by measuring the average depth value of the occupant's features (e.g. skeleton values such as hips, width, shoulders, torso and/or other body parts), for example in pixels. In some cases, a single scale factor is extracted. In some cases, a scale factor is extracted for each occupant and/or for each obtained image. At step840the 2D skeleton models are analyzed to extract one or more features of the occupants. In an embodiment, the extracted features may include data such as the measured pose and/or orientation of each occupant in the vehicle. In some embodiments, the features may further include the length of one or more body parts of the occupants, such as major body parts of the occupant, e.g., shoulders, hips, torso, legs, body, etc. At step850the one or more 2D skeleton models of each occupant are analyzed to filter out (e.g. remove or delete) one or more 2D skeleton models, based on, for example, the extracted one or more features and predefined filtering criteria, to yield valid 2D skeleton models (e.g. models suitable for mass estimation) of the occupants, in accordance with embodiments. The predefined filtering criteria include selection rules which define a required pose and orientation for estimating the occupant's mass. For example, the predefined filtering criteria include selection rules which define an ‘abnormal’ pose or orientation of the occupants.
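A minimal sketch of the scale-factor extraction of step830is given below, assuming the depth map is indexed as depth_map[y][x] and that averaging the depth over the skeleton key-points is an acceptable estimate. The division by a reference depth is an illustrative normalization choice, not the specified one.

```python
# Minimal sketch: derive a scale (normalization) factor from the depth values
# sampled at an occupant's skeleton key-points.
def average_keypoint_depth(depth_map, keypoints_xy):
    """depth_map[y][x] holds the depth value of pixel (x, y)."""
    depths = [depth_map[int(y)][int(x)] for (x, y) in keypoints_xy]
    return sum(depths) / len(depths)

def scale_factor(depth_map, keypoints_xy, reference_depth=1.0):
    """Occupants seated farther from the sensor appear smaller in the 2D image,
    so skeleton lengths are later multiplied by depth / reference_depth."""
    return average_keypoint_depth(depth_map, keypoints_xy) / reference_depth

# Toy 3x3 depth map and three key-point locations.
depth_map = [[1.2, 1.2, 1.3], [1.1, 1.2, 1.4], [1.0, 1.1, 1.3]]
print(scale_factor(depth_map, [(0, 0), (2, 1), (1, 2)], reference_depth=1.0))
```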
An ‘abnormal’ pose or orientation may be defined as an occupant's pose or orientation in which a full or almost full skeleton representation is not presented or imaged due, for example, to a nonstandard sitting position of the occupant or as a result of imaging the occupant at an angle with respect to the image sensor at which the occupant may not be completely seen. In some cases, the nonstandard pose may relate to a pose where the occupant is not sitting straight, for example in a bending position. Accordingly, the analysis of these ‘abnormal’ skeleton model representations is used to discard poses defined as ‘abnormal’ (e.g. inaccurate or false measurements), and these skeletons are therefore deleted. Non-limiting examples of filtering criteria include defined spatial relations between skeleton features of the identified objects and/or identified abnormal poses. Non-limiting examples of discarded poses are illustrated inFIGS.5A and5B. In some cases, the analyzed images may be filtered using a pose density model method. In accordance with embodiments, the pose density model method includes placing each of the object's skeleton configurations in a high-dimensional space and discarding any configurations which are within a predetermined distance from a high-density area in this space. At step860the measured scale factor, for example for each occupant or for each image, is applied accordingly to the valid 2D skeleton models of the related occupants to yield scaled 2D skeleton models of the occupants (e.g. correctly scaled 2D skeleton models of the occupants). The scaled 2D skeleton models of the occupants include information relating to the distance of the skeleton model from a viewpoint (e.g. the image sensor). At step870the scaled skeleton models of the occupants are analyzed to estimate the mass of the occupants, in accordance with embodiments. In some embodiments, the analysis process includes inserting the extracted features of the scaled 2D skeleton models into a measurement model such as a pre-trained regression model configured to estimate the mass of the occupants. In some cases, the measurement model is a machine learning estimation model configured to determine the mass and/or mass classification of the occupants. In some cases, the measurement model is configured to provide a continuous value of a predicted mass, or to perform a coarser estimation and classify the occupant according to a mass class (e.g. child, small adult, normal, big adult). Alternatively or in combination, the valid skeleton models of the occupants are processed to classify each occupant according to a predetermined mass classification. For example, a passenger weighing around 60 kg, e.g. 50-65 kg, will be classified in a “small adult” subclass, while a child weighing around 25 kg, e.g. in the range of 10-30 kg, will be classified in a “child” subclass. FIG.9Ashows a graph901of mass prediction results (Y axis) of one or more occupants in a vehicle cabin as a function of the real measured mass (X axis) of these occupants, based on the analysis of captured images over time and the filtering out of non-valid images of the occupants in the vehicle, in accordance with embodiments. Specifically, each captured image is analyzed and a mass prediction is generated for the identified valid images, while non-valid images are discarded. In an embodiment, each point of the graph901represents a frame captured and analyzed in accordance with embodiments.
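Steps860and870can be sketched as follows: the scale factor is applied to the pixel-domain body-part lengths of a valid 2D skeleton, and the resulting feature vector is passed to a pre-trained regression model. The feature ordering and the assumption of a scikit-learn-style estimator exposing predict() are illustrative only; the specification does not prescribe a particular model interface.

```python
# Minimal sketch: apply the per-occupant scale factor to the 2D skeleton
# lengths and feed the scaled feature vector to a pre-trained regressor.
FEATURE_ORDER = ["shoulder_width", "torso", "hip_width", "upper_leg"]

def scaled_feature_vector(lengths_px, scale):
    """lengths_px: body-part lengths measured in pixels on the 2D skeleton;
    missing parts default to 0.0."""
    return [lengths_px.get(name, 0.0) * scale for name in FEATURE_ORDER]

def estimate_mass(regressor, lengths_px, scale):
    """regressor: any pre-trained model exposing predict(); returns mass in kg."""
    features = scaled_feature_vector(lengths_px, scale)
    return float(regressor.predict([features])[0])
```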
As can be clearly seen from the graph, the predicted mass of the occupants is in the range of the real measured mass of the occupants. For example, the mass of occupants weighing 100 kg is predicted, based on the present method and system, to be in the range of 80-120 kg (and the average is around 100 kg). FIG.9Bshows a mass prediction percentage presentation902of mass classifications, in accordance with embodiments. For example, the mass prediction for the mass classification of 0-35 kg is 100%, for 25-70 kg it is 95.9%, and for 60+ kg it is 94%. FIG.9Cshows another example, graph903, of mass prediction results (Y axis) of one or more occupants seated in a vehicle cabin as a function of the real measured mass (X axis) of these occupants, based on the analysis of captured images such as images910,920and930of the occupants in the vehicle, in accordance with embodiments. As shown inFIG.9C, in some cases some non-valid images such as image910are not filtered out and accordingly they affect the accuracy of the mass prediction. Usually, non-valid images such as 2D image910will be automatically filtered out, for example in real time, so that only images showing standard positions of the occupants (e.g. valid images) are analyzed and an accurate mass prediction is accordingly obtained. In some cases, the identification of a non-standard position of an occupant, such as the position shown in image910, may be used to activate or deactivate one or more of the vehicle units, such as airbags. For example, the identification of an occupant bending or moving his head away from the road based on the pose estimation model as described herein may be reported to the vehicle's computer and/or processor and, accordingly, the vehicle's airbag or hazard alert devices may be activated. It is understood that embodiments of the present invention may include mass estimation and/or mass determination of occupants in a vehicle. For example, systems and methods can provide a fast and accurate mass estimation of the occupants. In further embodiments, the processing unit may be a digital processing device including one or more hardware central processing units (CPUs) that carry out the device's functions. In still further embodiments, the digital processing device further comprises an operating system configured to perform executable instructions. In some embodiments, the digital processing device is optionally connected to a computer network. In further embodiments, the digital processing device is optionally connected to the Internet such that it accesses the World Wide Web. In still further embodiments, the digital processing device is optionally connected to a cloud computing infrastructure. In other embodiments, the digital processing device is optionally connected to an intranet. In other embodiments, the digital processing device is optionally connected to a data storage device. In accordance with the description herein, suitable digital processing devices include, by way of non-limiting examples, server computers, desktop computers, laptop computers, notebook computers, sub-notebook computers, netbook computers, netpad computers, set-top computers, handheld computers, Internet appliances, mobile smartphones, tablet computers, personal digital assistants, video game consoles, and vehicles. Those of skill in the art will recognize that many smartphones are suitable for use in the system described herein.
Those of skill in the art will also recognize that select televisions with optional computer network connectivity are suitable for use in the system described herein. Suitable tablet computers include those with booklet, slate, and convertible configurations, known to those of skill in the art. In some embodiments, the digital processing device includes an operating system configured to perform executable instructions. The operating system is, for example, software, including programs and data, which manages the device's hardware and provides services for execution of applications. Those of skill in the art will recognize that suitable server operating systems include, by way of non-limiting examples, FreeBSD, OpenBSD, NetBSD®, Linux, Apple® Mac OS X Server®, Oracle® Solaris®, Windows Server®, and Novell® NetWare®. Those of skill in the art will recognize that suitable personal computer operating systems include, by way of non-limiting examples, Microsoft® Windows®, Apple® Mac OS X®, UNIX®, and UNIX-like operating systems such as GNU/Linux®. In some embodiments, the operating system is provided by cloud computing. Those of skill in the art will also recognize that suitable mobile smart phone operating systems include, by way of non-limiting examples, Nokia® Symbian® OS, Apple® iOS®, Research In Motion® BlackBerry OS®, Google® Android®, Microsoft® Windows Phone® OS, Microsoft® Windows Mobile® OS, Linux®, and Palm® WebOS®. In some embodiments, the device includes a storage and/or memory device. The storage and/or memory device is one or more physical apparatuses used to store data or programs on a temporary or permanent basis. In some embodiments, the device is volatile memory and requires power to maintain stored information. In some embodiments, the device is non-volatile memory and retains stored information when the digital processing device is not powered. In further embodiments, the non-volatile memory comprises flash memory. In some embodiments, the non-volatile memory comprises dynamic random-access memory (DRAM). In some embodiments, the non-volatile memory comprises ferroelectric random access memory (FRAM). In some embodiments, the non-volatile memory comprises phase-change random access memory (PRAM). In other embodiments, the device is a storage device including, by way of non-limiting examples, CD-ROMs, DVDs, flash memory devices, magnetic disk drives, magnetic tapes drives, optical disk drives, and cloud computing based storage. In further embodiments, the storage and/or memory device is a combination of devices such as those disclosed herein. In some embodiments, the digital processing device includes a display to send visual information to a user. In some embodiments, the display is a cathode ray tube (CRT). In some embodiments, the display is a liquid crystal display (LCD). In further embodiments, the display is a thin film transistor liquid crystal display (TFT-LCD). In some embodiments, the display is an organic light emitting diode (OLED) display. In various further embodiments, an OLED display is a passive-matrix OLED (PMOLED) or active-matrix OLED (AMOLED) display. In some embodiments, the display is a plasma display. In other embodiments, the display is a video projector. In still further embodiments, the display is a combination of devices such as those disclosed herein. In some embodiments, the digital processing device includes an input device to receive information from a user. In some embodiments, the input device is a keyboard. 
In some embodiments, the input device is a pointing device including, by way of non-limiting examples, a mouse, trackball, track pad, joystick, game controller, or stylus. In some embodiments, the input device is a touch screen or a multi-touch screen. In other embodiments, the input device is a microphone to capture voice or other sound input. In other embodiments, the input device is a video camera to capture motion or visual input. In still further embodiments, the input device is a combination of devices such as those disclosed herein. In some embodiments, the system disclosed herein includes one or more non-transitory computer readable storage media encoded with a program including instructions executable by the operating system of an optionally networked digital processing device. In further embodiments, a computer readable storage medium is a tangible component of a digital processing device. In still further embodiments, a computer readable storage medium is optionally removable from a digital processing device. In some embodiments, a computer readable storage medium includes, by way of non-limiting examples, CD-ROMs, DVDs, flash memory devices, solid state memory, magnetic disk drives, magnetic tape drives, optical disk drives, cloud computing systems and services, and the like. In some cases, the program and instructions are permanently, substantially permanently, semi-permanently, or non-transitorily encoded on the media. In some embodiments, the system disclosed herein includes at least one computer program, or use of the same. A computer program includes a sequence of instructions, executable in the digital processing device's CPU, written to perform a specified task. Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), data structures, and the like, that perform particular tasks or implement particular abstract data types. In light of the disclosure provided herein, those of skill in the art will recognize that a computer program may be written in various versions of various languages. The functionality of the computer readable instructions may be combined or distributed as desired in various environments. In some embodiments, a computer program comprises one sequence of instructions. In some embodiments, a computer program comprises a plurality of sequences of instructions. In some embodiments, a computer program is provided from one location. In other embodiments, a computer program is provided from a plurality of locations. In various embodiments, a computer program includes one or more software modules. In various embodiments, a computer program includes, in part or in whole, one or more web applications, one or more mobile applications, one or more standalone applications, one or more web browser plug-ins, extensions, add-ins, or add-ons, or combinations thereof. In some embodiments, a computer program includes a mobile application provided to a mobile digital processing device. In some embodiments, the mobile application is provided to a mobile digital processing device at the time it is manufactured. In other embodiments, the mobile application is provided to a mobile digital processing device via the computer network described herein. In view of the disclosure provided herein, a mobile application is created by techniques known to those of skill in the art using hardware, languages, and development environments known to the art. 
Those of skill in the art will recognize that mobile applications are written in several languages. Suitable programming languages include, by way of non-limiting examples, C, C++, C#, Objective-C, Java™, Javascript, Pascal, Object Pascal, Python™, Ruby, VB.NET, WML, and XHTML/HTML with or without CSS, or combinations thereof. Suitable mobile application development environments are available from several sources. Commercially available development environments include, by way of non-limiting examples, AirplaySDK, alcheMo, Appcelerator®, Celsius, Bedrock, Flash Lite, .NET Compact Framework, Rhomobile, and WorkLight Mobile Platform. Other development environments are available without cost including, by way of non-limiting examples, Lazarus, MobiFlex, MoSync, and Phonegap. Also, mobile device manufacturers distribute software developer kits including, by way of non-limiting examples, iPhone and iPad (iOS) SDK, Android™ SDK, BlackBerry® SDK, BREW SDK, Palm® OS SDK, Symbian SDK, webOS SDK, and Windows® Mobile SDK. Those of skill in the art will recognize that several commercial forums are available for distribution of mobile applications including, by way of non-limiting examples, Apple® App Store, Android™ Market, BlackBerry® App World, App Store for Palm devices, App Catalog for webOS, Windows® Marketplace for Mobile, Ovi Store for Nokia® devices, Samsung® Apps, and Nintendo® DSi Shop. In some embodiments, the system disclosed herein includes software, server, and/or database modules, or use of the same. In view of the disclosure provided herein, software modules are created by techniques known to those of skill in the art using machines, software, and languages known to the art. The software modules disclosed herein are implemented in a multitude of ways. In various embodiments, a software module comprises a file, a section of code, a programming object, a programming structure, or combinations thereof. In further various embodiments, a software module comprises a plurality of files, a plurality of sections of code, a plurality of programming objects, a plurality of programming structures, or combinations thereof. In various embodiments, the one or more software modules comprise, by way of non-limiting examples, a web application, a mobile application, and a standalone application. In some embodiments, software modules are in one computer program or application. In other embodiments, software modules are in more than one computer program or application. In some embodiments, software modules are hosted on one machine. In other embodiments, software modules are hosted on more than one machine. In further embodiments, software modules are hosted on cloud computing platforms. In some embodiments, software modules are hosted on one or more machines in one location. In other embodiments, software modules are hosted on one or more machines in more than one location. In some embodiments, the system disclosed herein includes one or more databases, or use of the same. In view of the disclosure provided herein, those of skill in the art will recognize that many databases are suitable for storage and retrieval of information as described herein. In various embodiments, suitable databases include, by way of non-limiting examples, relational databases, non-relational databases, object oriented databases, object databases, entity-relationship model databases, associative databases, and XML databases. In some embodiments, a database is internet-based. In further embodiments, a database is web-based. 
In still further embodiments, a database is cloud computing-based. In other embodiments, a database is based on one or more local computer storage devices. In the above description, an embodiment is an example or implementation of the inventions. The various appearances of “one embodiment,” “an embodiment” or “some embodiments” do not necessarily all refer to the same embodiments. Although various features of the invention may be described in the context of a single embodiment, the features may also be provided separately or in any suitable combination. Conversely, although the invention may be described herein in the context of separate embodiments for clarity, the invention may also be implemented in a single embodiment. Reference in the specification to “some embodiments”, “an embodiment”, “one embodiment” or “other embodiments” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments, of the inventions. It is to be understood that the phraseology and terminology employed herein is not to be construed as limiting and are for descriptive purpose only. The principles and uses of the teachings of the present invention may be better understood with reference to the accompanying description, figures and examples. It is to be understood that the details set forth herein do not construe a limitation to an application of the invention. Furthermore, it is to be understood that the invention can be carried out or practiced in various ways and that the invention can be implemented in embodiments other than the ones outlined in the description above. It is to be understood that the terms “including”, “comprising”, “consisting” and grammatical variants thereof do not preclude the addition of one or more components, features, steps, or integers or groups thereof and that the terms are to be construed as specifying components, features, steps or integers. If the specification or claims refer to “an additional” element, that does not preclude there being more than one of the additional element. It is to be understood that where the claims or specification refer to “a” or “an” element, such reference is not be construed that there is only one of that element. It is to be understood that where the specification states that a component, feature, structure, or characteristic “may”, “might”, “can” or “could” be included, that particular component, feature, structure, or characteristic is not required to be included. Where applicable, although state diagrams, flow diagrams or both may be used to describe embodiments, the invention is not limited to those diagrams or to the corresponding descriptions. For example, flow need not move through each illustrated box or state, or in exactly the same order as illustrated and described. Methods of the present invention may be implemented by performing or completing manually, automatically, or a combination thereof, selected steps or tasks. The descriptions, examples, methods and materials presented in the claims and the specification are not to be construed as limiting but rather as illustrative only. Meanings of technical and scientific terms used herein are to be commonly understood as by one of ordinary skill in the art to which the invention belongs, unless otherwise defined. The present invention may be implemented in the testing or practice with methods and materials equivalent or similar to those described herein. 
While the invention has been described with respect to a limited number of embodiments, these should not be construed as limitations on the scope of the invention, but rather as exemplifications of some of the preferred embodiments. Other possible variations, modifications, and applications are also within the scope of the invention. Accordingly, the scope of the invention should not be limited by what has thus far been described, but by the appended claims and their legal equivalents. All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting. | 108,388 |
11861868 | DETAILED DESCRIPTION OF THE EMBODIMENTS A three-dimensional data encoding method according to one aspect of the present disclosure includes: determining whether a first valid node count is greater than or equal to a first threshold value predetermined, the first valid node count being a total number of valid nodes that are nodes each including a three-dimensional point, the valid nodes being included in first nodes belonging to a layer higher than a layer of a current node in an N-ary tree structure of three-dimensional points included in point cloud data, N being an integer greater than or equal to 2; when the first valid node count is greater than or equal to the first threshold value, performing first encoding on attribute information of the current node, the first encoding including a prediction process in which second nodes are used, the second nodes including a parent node of the current node and belonging to a same layer as the parent node; and when the first valid node count is less than the first threshold value, performing second encoding on attribute information of the current node, the second encoding not including the prediction process in which the second nodes are used. According to the three-dimensional data encoding method, whether to use the first encoding including a prediction process can be appropriately selected, and therefore, the encoding efficiency can be improved. For example, the first nodes may include the parent node and nodes belonging to the same layer as the parent node. For example, the first nodes may include a grandparent node of the current node and nodes belonging to a same layer as the grandparent node. For example, in the second encoding, a predicted value of attribute information of the current node may be set to zero. For example, the three-dimensional data encoding method may further include generating a bitstream including attribute information of the current node encoded and first information indicating whether the first encoding is applicable. For example, the three-dimensional data encoding method may further include generating a bitstream including attribute information of the current node encoded and second information indicating the first threshold value. For example, the three-dimensional data encoding method may further include: determining whether a second valid node count is greater than or equal to a second threshold value predetermined, the second valid node count being a total number of valid nodes included in second nodes including a grandparent node of the current node and nodes belonging to a same layer as the grandparent node; when the first valid node count is greater than the first threshold value, and the second valid node count is greater than or equal to the second threshold value, performing the first encoding on attribute information of the current node; and when the first valid node count is less than the first threshold value or the second valid node count is less than the second threshold value, performing the second encoding on attribute information of the current node. 
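The mode selection described above, in which the count of valid nodes among the first nodes of the higher layer is compared against a predetermined threshold to choose between the first encoding and the second encoding, can be sketched in Python as follows. The node representation, the averaging-based prediction placeholder, and the residual output are assumptions for illustration; only the threshold decision itself follows the text.

```python
# Minimal sketch of the threshold-based mode decision, assuming each node is a
# dict with a "has_point" flag and an "attribute" value.
def count_valid_nodes(nodes):
    """A valid node is a node that includes at least one three-dimensional point."""
    return sum(1 for node in nodes if node.get("has_point", False))

def predict_from_neighbours(second_nodes):
    """Placeholder prediction over the parent node and the nodes of its layer
    (the second nodes): average of the available attribute values."""
    values = [n["attribute"] for n in second_nodes if n.get("has_point", False)]
    return sum(values) / len(values) if values else 0

def encode_attribute(current_node, first_nodes, second_nodes, first_threshold):
    if count_valid_nodes(first_nodes) >= first_threshold:
        predicted = predict_from_neighbours(second_nodes)  # first encoding
    else:
        predicted = 0                                      # second encoding
    return current_node["attribute"] - predicted           # residual to encode
```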
A three-dimensional data decoding method according to one aspect of the present disclosure includes: determining whether a first valid node count is greater than or equal to a first threshold value predetermined, the first valid node count being a total number of valid nodes that are nodes each including a three-dimensional point, the valid nodes being included in first nodes belonging to a layer higher than a layer of a current node in an N-ary tree structure of three-dimensional points included in point cloud data, N being an integer greater than or equal to 2; when the first valid node count is greater than or equal to the first threshold value, performing first decoding on attribute information of the current node, the first decoding including a prediction process in which second nodes are used, the second nodes including a parent node of the current node and belonging to a same layer as the parent node; and when the first valid node count is less than the first threshold value, performing second decoding on attribute information of the current node, the second decoding not including the prediction process in which the second nodes are used. According to the three-dimensional data decoding method, whether to use the first decoding including a prediction process can be appropriately selected, and therefore, the encoding efficiency can be improved. For example, the first nodes may include the parent node and nodes belonging to the same layer as the parent node. For example, the first nodes may include a grandparent node of the current node and nodes belonging to a same layer as the grandparent node. For example, in the second decoding, a predicted value of attribute information of the current node may be set to zero. For example, the three-dimensional data decoding method may further include obtaining first information indicating whether the first decoding is applicable, from a bitstream including attribute information of the current node encoded. For example, the three-dimensional data decoding method may further include obtaining second information indicating the first threshold value, from a bitstream including attribute information of the current node encoded. For example, the three-dimensional data decoding method may further include: determining whether a second valid node count is greater than or equal to a second threshold value predetermined, the second valid node count being a total number of valid nodes included in second nodes including a grandparent node of the current node and nodes belonging to a same layer as the grandparent node; when the first valid node count is greater than the first threshold value, and the second valid node count is greater than or equal to the second threshold value, performing the first decoding on attribute information of the current node; and when the first valid node count is less than the first threshold value or the second valid node count is less than the second threshold value, performing the second decoding on attribute information of the current node. A three-dimensional data encoding device according to one aspect of the present disclosure includes a processor and memory. 
Using the memory, the processor: determines whether a first valid node count is greater than or equal to a first threshold value predetermined, the first valid node count being a total number of valid nodes that are nodes each including a three-dimensional point, the valid nodes being included in first nodes belonging to a layer higher than a layer of a current node in an N-ary tree structure of three-dimensional points included in point cloud data, N being an integer greater than or equal to 2; when the first valid node count is greater than or equal to the first threshold value, performs first encoding on attribute information of the current node, the first encoding including a prediction process in which second nodes are used, the second nodes including a parent node of the current node and belonging to a same layer as the parent node; and when the first valid node count is less than the first threshold value, performs second encoding on attribute information of the current node, the second encoding not including the prediction process in which the second nodes are used. According to this configuration, since the three-dimensional data encoding device can appropriately select whether to use the first encoding including a prediction process, the three-dimensional encoding device can improve the encoding efficiency. A three-dimensional data decoding device according to one aspect of the present disclosure includes a processor and memory. Using the memory, the processor: determines whether a first valid node count is greater than or equal to a first threshold value predetermined, the first valid node count being a total number of valid nodes that are nodes each including a three-dimensional point, the valid nodes being included in first nodes belonging to a layer higher than a layer of a current node in an N-ary tree structure of three-dimensional points included in point cloud data, N being an integer greater than or equal to 2; when the first valid node count is greater than or equal to the first threshold value, performs first decoding on attribute information of the current node, the first decoding including a prediction process in which second nodes are used, the second nodes including a parent node of the current node and belonging to a same layer as the parent node; and when the first valid node count is less than the first threshold value, performs second decoding on attribute information of the current node, the second decoding not including the prediction process in which the second nodes are used. According to this configuration, since the three-dimensional data decoding device can appropriately select whether to use the first decoding including a prediction process, the three-dimensional data decoding device can improve the encoding efficiency. It is to be noted that these general or specific aspects may be implemented as a system, a method, an integrated circuit, a computer program, or a computer-readable recording medium such as a CD-ROM, or may be implemented as any combination of a system, a method, an integrated circuit, a computer program, and a recording medium. The following describes embodiments with reference to the drawings. It is to be noted that the following embodiments indicate exemplary embodiments of the present disclosure. The numerical values, shapes, materials, constituent elements, the arrangement and connection of the constituent elements, steps, the processing order of the steps, etc. 
indicated in the following embodiments are mere examples, and thus are not intended to limit the present disclosure. Of the constituent elements described in the following embodiments, constituent elements not recited in any one of the independent claims that indicate the broadest concepts will be described as optional constituent elements. Embodiment 1 First, the data structure of encoded three-dimensional data (hereinafter also referred to as encoded data) according to the present embodiment will be described.FIG.1is a diagram showing the structure of encoded three-dimensional data according to the present embodiment. In the present embodiment, a three-dimensional space is divided into spaces (SPCs), which correspond to pictures in moving picture encoding, and the three-dimensional data is encoded on a SPC-by-SPC basis. Each SPC is further divided into volumes (VLMs), which correspond to macroblocks, etc. in moving picture encoding, and predictions and transforms are performed on a VLM-by-VLM basis. Each volume includes a plurality of voxels (VXLs), each being a minimum unit in which position coordinates are associated. Note that prediction is a process of generating predictive three-dimensional data analogous to a current processing unit by referring to another processing unit, and encoding a differential between the predictive three-dimensional data and the current processing unit, as in the case of predictions performed on two-dimensional images. Such prediction includes not only spatial prediction in which another prediction unit corresponding to the same time is referred to, but also temporal prediction in which a prediction unit corresponding to a different time is referred to. When encoding a three-dimensional space represented by point group data such as a point cloud, for example, the three-dimensional data encoding device (hereinafter also referred to as the encoding device) encodes the points in the point group or points included in the respective voxels in a collective manner, in accordance with a voxel size. Finer voxels enable a highly-precise representation of the three-dimensional shape of a point group, while larger voxels enable a rough representation of the three-dimensional shape of a point group. Note that the following describes the case where three-dimensional data is a point cloud, but three-dimensional data is not limited to a point cloud, and thus three-dimensional data of any format may be employed. Also note that voxels with a hierarchical structure may be used. In such a case, when the hierarchy includes n levels, whether a sampling point is included in the n−1th level or its lower levels (the lower levels of the n-th level) may be sequentially indicated. For example, when only the n-th level is decoded, and the n−1th level or its lower levels include a sampling point, the n-th level can be decoded on the assumption that a sampling point is included at the center of a voxel in the n-th level. Also, the encoding device obtains point group data, using, for example, a distance sensor, a stereo camera, a monocular camera, a gyroscope sensor, or an inertial sensor.
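The relationship between the voxel size and the precision of the point-group representation can be illustrated with a short Python sketch that groups points by the voxel containing them. This is an illustration only; the grouping by integer division and the choice of what to store per voxel are assumptions, not the encoding method of the embodiment.

```python
# Minimal sketch: assign each point of a point cloud to a voxel (VXL) index for
# a given voxel size; finer voxels give a more precise shape representation.
from collections import defaultdict

def voxelize(points, voxel_size):
    """points: iterable of (x, y, z); returns {voxel_index: [points]}."""
    voxels = defaultdict(list)
    for (x, y, z) in points:
        index = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        voxels[index].append((x, y, z))
    return voxels

cloud = [(0.1, 0.2, 0.3), (0.15, 0.22, 0.31), (2.0, 0.1, 0.9)]
print(len(voxelize(cloud, voxel_size=0.5)))  # 2 occupied voxels
```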
As in the case of moving picture encoding, each SPC is classified into one of at least the three prediction structures that include: intra SPC (I-SPC), which is individually decodable; predictive SPC (P-SPC) capable of only a unidirectional reference; and bidirectional SPC (B-SPC) capable of bidirectional references. Each SPC includes two types of time information: decoding time and display time. Furthermore, as shown inFIG.1, a processing unit that includes a plurality of SPCs is a group of spaces (GOS), which is a random access unit. Also, a processing unit that includes a plurality of GOSs is a world (WLD). The spatial region occupied by each world is associated with an absolute position on earth, by use of, for example, GPS, or latitude and longitude information. Such position information is stored as meta-information. Note that meta-information may be included in encoded data, or may be transmitted separately from the encoded data. Also, inside a GOS, all SPCs may be three-dimensionally adjacent to one another, or there may be a SPC that is not three-dimensionally adjacent to another SPC. Note that the following also describes processes such as encoding, decoding, and reference to be performed on three-dimensional data included in processing units such as GOS, SPC, and VLM, simply as performing encoding/to encode, decoding/to decode, referring to, etc. on a processing unit. Also note that three-dimensional data included in a processing unit includes, for example, at least one pair of a spatial position such as three-dimensional coordinates and an attribute value such as color information. Next, the prediction structures among SPCs in a GOS will be described. A plurality of SPCs in the same GOS or a plurality of VLMs in the same SPC occupy mutually different spaces, while having the same time information (the decoding time and the display time). A SPC in a GOS that comes first in the decoding order is an I-SPC. GOSs come in two types: closed GOS and open GOS. A closed GOS is a GOS in which all SPCs in the GOS are decodable when decoding starts from the first I-SPC. Meanwhile, an open GOS is a GOS in which a different GOS is referred to in one or more SPCs preceding the first I-SPC in the GOS in the display time, and thus cannot be singly decoded. Note that in the case of encoded data of map information, for example, a WLD is sometimes decoded in the backward direction, which is opposite to the encoding order, and thus backward reproduction is difficult when GOSs are interdependent. In such a case, a closed GOS is basically used. Each GOS has a layer structure in height direction, and SPCs are sequentially encoded or decoded from SPCs in the bottom layer. FIG.2is a diagram showing an example of prediction structures among SPCs that belong to the lowermost layer in a GOS.FIG.3is a diagram showing an example of prediction structures among layers. A GOS includes at least one I-SPC. Of the objects in a three-dimensional space, such as a person, an animal, a car, a bicycle, a signal, and a building serving as a landmark, a small-sized object is especially effective when encoded as an I-SPC. When decoding a GOS at a low throughput or at a high speed, for example, the three-dimensional data decoding device (hereinafter also referred to as the decoding device) decodes only I-SPC(s) in the GOS. The encoding device may also change the encoding interval or the appearance frequency of I-SPCs, depending on the degree of sparseness and denseness of the objects in a WLD. 
In the structure shown inFIG.3, the encoding device or the decoding device encodes or decodes a plurality of layers sequentially from the bottom layer (layer l). This increases the priority of data on the ground and its vicinity, which involve a larger amount of information, when, for example, a self-driving car is concerned. Regarding encoded data used for a drone, for example, encoding or decoding may be performed sequentially from SPCs in the top layer in a GOS in height direction. The encoding device or the decoding device may also encode or decode a plurality of layers in a manner that the decoding device can have a rough grasp of a GOS first, and then the resolution is gradually increased. The encoding device or the decoding device may perform encoding or decoding in the order of layers 3, 8, 1, 9 . . . , for example. Next, the handling of static objects and dynamic objects will be described. A three-dimensional space includes scenes or still objects such as a building and a road (hereinafter collectively referred to as static objects), and objects with motion such as a car and a person (hereinafter collectively referred to as dynamic objects). Object detection is separately performed by, for example, extracting keypoints from point cloud data, or from video of a camera such as a stereo camera. In this description, an example method of encoding a dynamic object will be described. A first method is a method in which a static object and a dynamic object are encoded without distinction. A second method is a method in which a distinction is made between a static object and a dynamic object on the basis of identification information. For example, a GOS is used as an identification unit. In such a case, a distinction is made between a GOS that includes SPCs constituting a static object and a GOS that includes SPCs constituting a dynamic object, on the basis of identification information stored in the encoded data or stored separately from the encoded data. Alternatively, a SPC may be used as an identification unit. In such a case, a distinction is made between a SPC that includes VLMs constituting a static object and a SPC that includes VLMs constituting a dynamic object, on the basis of the identification information thus described. Alternatively, a VLM or a VXL may be used as an identification unit. In such a case, a distinction is made between a VLM or a VXL that includes a static object and a VLM or a VXL that includes a dynamic object, on the basis of the identification information thus described. The encoding device may also encode a dynamic object as at least one VLM or SPC, and may encode a VLM or a SPC including a static object and a SPC including a dynamic object as mutually different GOSs. When the GOS size is variable depending on the size of a dynamic object, the encoding device separately stores the GOS size as meta-information. The encoding device may also encode a static object and a dynamic object separately from each other, and may superimpose the dynamic object onto a world constituted by static objects. In such a case, the dynamic object is constituted by at least one SPC, and each SPC is associated with at least one SPC constituting the static object onto which the each SPC is to be superimposed. Note that a dynamic object may be represented not by SPC(s) but by at least one VLM or VXL. The encoding device may also encode a static object and a dynamic object as mutually different streams. 
The encoding device may also generate a GOS that includes at least one SPC constituting a dynamic object. The encoding device may further set the size of a GOS including a dynamic object (GOS_M) and the size of a GOS including a static object corresponding to the spatial region of GOS_M at the same size (such that the same spatial region is occupied). This enables superimposition to be performed on a GOS-by-GOS basis. SPC(s) included in another encoded GOS may be referred to in a P-SPC or a B-SPC constituting a dynamic object. In the case where the position of a dynamic object temporally changes, and the same dynamic object is encoded as an object in a GOS corresponding to a different time, referring to SPC(s) across GOSs is effective in terms of compression rate. The first method and the second method may be selected in accordance with the intended use of encoded data. When encoded three-dimensional data is used as a map, for example, a dynamic object is desired to be separated, and thus the encoding device uses the second method. Meanwhile, the encoding device uses the first method when the separation of a dynamic object is not required such as in the case where three-dimensional data of an event such as a concert and a sports event is encoded. The decoding time and the display time of a GOS or a SPC are storable in encoded data or as meta-information. All static objects may have the same time information. In such a case, the decoding device may determine the actual decoding time and display time. Alternatively, a different value may be assigned to each GOS or SPC as the decoding time, and the same value may be assigned as the display time. Furthermore, as in the case of the decoder model in moving picture encoding such as Hypothetical Reference Decoder (HRD) compliant with HEVC, a model may be employed that ensures that a decoder can perform decoding without fail by having a buffer of a predetermined size and by reading a bitstream at a predetermined bit rate in accordance with the decoding times. Next, the topology of GOSs in a world will be described. The coordinates of the three-dimensional space in a world are represented by the three coordinate axes (x axis, y axis, and z axis) that are orthogonal to one another. A predetermined rule set for the encoding order of GOSs enables encoding to be performed such that spatially adjacent GOSs are contiguous in the encoded data. In an example shown inFIG.4, for example, GOSs in the x and z planes are successively encoded. After the completion of encoding all GOSs in certain x and z planes, the value of the y axis is updated. Stated differently, the world expands in the y axis direction as the encoding progresses. The GOS index numbers are set in accordance with the encoding order. Here, the three-dimensional spaces in the respective worlds are previously associated one-to-one with absolute geographical coordinates such as GPS coordinates or latitude/longitude coordinates. Alternatively, each three-dimensional space may be represented as a position relative to a previously set reference position. The directions of the x axis, the y axis, and the z axis in the three-dimensional space are represented by directional vectors that are determined on the basis of the latitudes and the longitudes, etc. Such directional vectors are stored together with the encoded data as meta-information. GOSs have a fixed size, and the encoding device stores such size as meta-information. 
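The encoding order of GOSs described above, in which all GOSs of an x-z plane are encoded before the value of the y axis is updated so that spatially adjacent GOSs stay contiguous in the encoded data, can be sketched as follows. The grid dimensions and the returned index records are illustrative assumptions.

```python
# Minimal sketch: enumerate GOS index numbers plane by plane (x-z first, then y).
def gos_encoding_order(size_x, size_y, size_z):
    order = []
    index = 0
    for y in range(size_y):          # y advances only after a full x-z plane
        for z in range(size_z):
            for x in range(size_x):
                order.append({"index": index, "position": (x, y, z)})
                index += 1
    return order

print([g["position"] for g in gos_encoding_order(2, 2, 2)])
```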
The GOS size may be changed depending on, for example, whether it is an urban area or not, or whether it is inside or outside of a room. Stated differently, the GOS size may be changed in accordance with the amount or the attributes of objects with information values. Alternatively, in the same world, the encoding device may adaptively change the GOS size or the interval between I-SPCs in GOSs in accordance with the object density, etc. For example, the encoding device sets the GOS size to smaller and the interval between I-SPCs in GOSs to shorter, as the object density is higher. In an example shown inFIG.5, to enable random access with a finer granularity, a GOS with a high object density is partitioned into the regions of the third to tenth GOSs. Note that the seventh to tenth GOSs are located behind the third to sixth GOSs. Embodiment 2 The present embodiment will describe a method of transmitting/receiving three-dimensional data between vehicles. FIG.7is a schematic diagram showing three-dimensional data607being transmitted/received between own vehicle600and nearby vehicle601. In three-dimensional data that is obtained by a sensor mounted on own vehicle600(e.g., a distance sensor such as a rangefinder, as well as a stereo camera and a combination of a plurality of monocular cameras), there appears a region, three-dimensional data of which cannot be created, due to an obstacle such as nearby vehicle601, despite that such region is included in sensor detection range602of own vehicle600(such region is hereinafter referred to as occlusion region604). Also, while the obtainment of three-dimensional data of a larger space enables a higher accuracy of autonomous operations, a range of sensor detection only by own vehicle600is limited. Sensor detection range602of own vehicle600includes region603, three-dimensional data of which is obtainable, and occlusion region604. A range, three-dimensional data of which own vehicle600wishes to obtain, includes sensor detection range602of own vehicle600and other regions. Sensor detection range605of nearby vehicle601includes occlusion region604and region606that is not included in sensor detection range602of own vehicle600. Nearby vehicle601transmits information detected by nearby vehicle601to own vehicle600. Own vehicle600obtains the information detected by nearby vehicle601, such as a preceding vehicle, thereby obtaining three-dimensional data607of occlusion region604and region606outside of sensor detection range602of own vehicle600. Own vehicle600uses the information obtained by nearby vehicle601to complement the three-dimensional data of occlusion region604and region606outside of the sensor detection range. The usage of three-dimensional data in autonomous operations of a vehicle or a robot includes self-location estimation, detection of surrounding conditions, or both. For example, for self-location estimation, three-dimensional data is used that is generated by own vehicle600on the basis of sensor information of own vehicle600. For detection of surrounding conditions, three-dimensional data obtained from nearby vehicle601is also used in addition to the three-dimensional data generated by own vehicle600. Nearby vehicle601that transmits three-dimensional data607to own vehicle600may be determined in accordance with the state of own vehicle600. 
For example, the current nearby vehicle601is a preceding vehicle when own vehicle600is running straight ahead, an oncoming vehicle when own vehicle600is turning right, and a following vehicle when own vehicle600is rolling backward. Alternatively, the driver of own vehicle600may directly specify nearby vehicle601that transmits three-dimensional data607to own vehicle600. Alternatively, own vehicle600may search for nearby vehicle601having three-dimensional data of a region that is included in a space, three-dimensional data of which own vehicle600wishes to obtain, and that own vehicle600cannot obtain. The region own vehicle600cannot obtain is occlusion region604, or region606outside of sensor detection range602, etc. Own vehicle600may identify occlusion region604on the basis of the sensor information of own vehicle600. For example, own vehicle600identifies, as occlusion region604, a region which is included in sensor detection range602of own vehicle600, and three-dimensional data of which cannot be created. The following describes example operations to be performed when a vehicle that transmits three-dimensional data607is a preceding vehicle.FIG.8is a diagram showing an example of three-dimensional data to be transmitted in such case. AsFIG.8shows, three-dimensional data607transmitted from the preceding vehicle is, for example, a sparse world (SWLD) of a point cloud. Stated differently, the preceding vehicle creates three-dimensional data (point cloud) of a WLD from information detected by a sensor of such preceding vehicle, and extracts data having an amount of features greater than or equal to the threshold from such three-dimensional data of the WLD, thereby creating three-dimensional data (point cloud) of the SWLD. Subsequently, the preceding vehicle transmits the created three-dimensional data of the SWLD to own vehicle600. Own vehicle600receives the SWLD, and merges the received SWLD with the point cloud created by own vehicle600. The SWLD to be transmitted includes information on the absolute coordinates (the position of the SWLD in the coordinates system of a three-dimensional map). The merge is achieved by own vehicle600overwriting the point cloud generated by own vehicle600on the basis of such absolute coordinates. The SWLD transmitted from nearby vehicle601may be: a SWLD of region606that is outside of sensor detection range602of own vehicle600and within sensor detection range605of nearby vehicle601; or a SWLD of occlusion region604of own vehicle600; or the SWLDs of the both. Of these SWLDs, a SWLD to be transmitted may also be a SWLD of a region used by nearby vehicle601to detect the surrounding conditions. Nearby vehicle601may change the density of a point cloud to transmit, in accordance with the communication available time, during which own vehicle600and nearby vehicle601can communicate, and which is based on the speed difference between these vehicles. For example, when the speed difference is large and the communication available time is short, nearby vehicle601may extract three-dimensional points having a large amount of features from the SWLD to decrease the density (data amount) of the point cloud. The detection of the surrounding conditions refers to judging the presence/absence of persons, vehicles, equipment for roadworks, etc., identifying their types, and detecting their positions, travelling directions, traveling speeds, etc. 
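The extraction of a SWLD from a WLD and the merge by overwriting on the basis of absolute coordinates, as described above, can be sketched as follows. The point format (x, y, z, feature amount), the voxel size used to decide which points are overwritten, and the function names are assumptions made only for illustration.

    def extract_swld(wld_points, feature_threshold):
        """Keep only the points whose feature amount is greater than or equal to
        the threshold, forming the SWLD from the WLD. Each point is a tuple
        (x, y, z, feature) in absolute coordinates."""
        return [p for p in wld_points if p[3] >= feature_threshold]

    def merge_swld(own_cloud, received_swld, voxel_m=0.1):
        """Merge the received SWLD into the own point cloud by overwriting points
        that fall in the same voxel, using the absolute coordinates as the key."""
        def key(point):
            x, y, z, _ = point
            return (round(x / voxel_m), round(y / voxel_m), round(z / voxel_m))
        merged = {key(p): p for p in own_cloud}
        for p in received_swld:
            merged[key(p)] = p        # the received data overwrites the own data
        return list(merged.values())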
Own vehicle600may obtain braking information of nearby vehicle601instead of or in addition to three-dimensional data607generated by nearby vehicle601. Here, the braking information of nearby vehicle601is, for example, information indicating that the accelerator or the brake of nearby vehicle601has been pressed, or the degree of such pressing. In the point clouds generated by the vehicles, the three-dimensional spaces are segmented on a random access unit, in consideration of low-latency communication between the vehicles. Meanwhile, in a three-dimensional map, etc., which is map data downloaded from the server, a three-dimensional space is segmented in a larger random access unit than in the case of inter-vehicle communication. Data on a region that is likely to be an occlusion region, such as a region in front of the preceding vehicle and a region behind the following vehicle, is segmented on a finer random access unit as low-latency data. Data on a region in front of a vehicle has an increased importance when on an expressway, and thus each vehicle creates a SWLD of a range with a narrowed viewing angle on a finer random access unit when running on an expressway. When the SWLD created by the preceding vehicle for transmission includes a region, the point cloud of which own vehicle600can obtain, the preceding vehicle may remove the point cloud of such region to reduce the amount of data to transmit. Embodiment 3 The present embodiment describes a method, etc. of transmitting three-dimensional data to a following vehicle.FIG.9is a diagram showing an exemplary space, three-dimensional data of which is to be transmitted to a following vehicle, etc. Vehicle801transmits, at the time interval of Δt, three-dimensional data, such as a point cloud (a point group) included in a rectangular solid space802, having width W, height H, and depth D, located ahead of vehicle801and distanced by distance L from vehicle801, to a cloud-based traffic monitoring system that monitors road situations or a following vehicle. When a change has occurred in the three-dimensional data of a space that is included in space802already transmitted in the past, due to a vehicle or a person entering space802from outside, for example, vehicle801also transmits three-dimensional data of the space in which such change has occurred. AlthoughFIG.9illustrates an example in which space802has a rectangular solid shape, space802is not necessarily a rectangular solid so long as space802includes a space on the forward road that is hidden from view of a following vehicle. Distance L may be set to a distance that allows the following vehicle having received the three-dimensional data to stop safely. For example, set as distance L is the sum of; a distance traveled by the following vehicle while receiving the three-dimensional data; a distance traveled by the following vehicle until the following vehicle starts speed reduction in accordance with the received data; and a distance required by the following vehicle to stop safely after starting speed reduction. These distances vary in accordance with the speed, and thus distance L may vary in accordance with speed V of the vehicle, just like L=a×V+b (a and b are constants). Width W is set to a value that is at least greater than the width of the lane on which vehicle801is traveling. Width W may also be set to a size that includes an adjacent space such as right and left lanes and a side strip. 
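The adjustment of the point-cloud density to the communication available time described above can be pictured with the following sketch; the relation between the speed difference and the available time, the point budget per second, and the point format (x, y, z, feature amount) are hypothetical.

    def thin_swld_for_transmission(swld_points, speed_difference_mps,
                                   points_per_second=50000):
        """Illustrative thinning: a larger speed difference means a shorter
        communication available time, so only the points with the largest
        feature amounts are kept within the resulting point budget."""
        available_time_s = max(0.5, 10.0 / max(speed_difference_mps, 1.0))
        budget = int(available_time_s * points_per_second)
        return sorted(swld_points, key=lambda p: p[3], reverse=True)[:budget]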
Depth D may have a fixed value, but may vary in accordance with speed V of the vehicle, just like D=c×V+d (c and d are constants). Also, D that is set to satisfy D>V×Δt enables the overlap of a space to be transmitted and a space transmitted in the past. This enables vehicle801to transmit a space on the traveling road to the following vehicle, etc. completely and more reliably. As described above, vehicle801transmits three-dimensional data of a limited space that is useful to the following vehicle, thereby effectively reducing the amount of the three-dimensional data to be transmitted and achieving low-latency, low-cost communication. Embodiment 4 In Embodiment 3, an example is described in which a client device of a vehicle or the like transmits three-dimensional data to another vehicle or a server such as a cloud-based traffic monitoring system. In the present embodiment, a client device transmits sensor information obtained through a sensor to a server or a client device. A structure of a system according to the present embodiment will first be described.FIG.10is a diagram showing the structure of a transmission/reception system of a three-dimensional map and sensor information according to the present embodiment. This system includes server901, and client devices902A and902B. Note that client devices902A and902B are also referred to as client device902when no particular distinction is made therebetween. Client device902is, for example, a vehicle-mounted device equipped in a mobile object such as a vehicle. Server901is, for example, a cloud-based traffic monitoring system, and is capable of communicating with the plurality of client devices902. Server901transmits the three-dimensional map formed by a point cloud to client device902. Note that a structure of the three-dimensional map is not limited to a point cloud, and may also be another structure expressing three-dimensional data such as a mesh structure. Client device902transmits the sensor information obtained by client device902to server901. The sensor information includes, for example, at least one of information obtained by LIDAR, a visible light image, an infrared image, a depth image, sensor position information, or sensor speed information. The data to be transmitted and received between server901and client device902may be compressed in order to reduce data volume, and may also be transmitted uncompressed in order to maintain data precision. When compressing the data, it is possible to use a three-dimensional compression method on the point cloud based on, for example, an octree structure. It is possible to use a two-dimensional image compression method on the visible light image, the infrared image, and the depth image. The two-dimensional image compression method is, for example, MPEG-4 AVC or HEVC standardized by MPEG. Server901transmits the three-dimensional map managed by server901to client device902in response to a transmission request for the three-dimensional map from client device902. Note that server901may also transmit the three-dimensional map without waiting for the transmission request for the three-dimensional map from client device902. For example, server901may broadcast the three-dimensional map to at least one client device902located in a predetermined space. Server901may also transmit the three-dimensional map suited to a position of client device902at fixed time intervals to client device902that has received the transmission request once. 
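The relations L=a×V+b and D=c×V+d, together with the overlap condition D>V×Δt, can be computed as in the following sketch; the constants a, b, c, and d are hypothetical example values.

    def transmission_space(speed_v, delta_t, a=0.8, b=5.0, c=0.5, d=10.0):
        """Illustrative computation of the space transmitted by vehicle 801:
        distance L = a*V + b and depth D = c*V + d, where D is raised if needed
        so that D > V * delta_t and consecutive transmitted spaces overlap."""
        distance_l = a * speed_v + b
        depth_d = c * speed_v + d
        if depth_d <= speed_v * delta_t:
            depth_d = speed_v * delta_t + 1.0
        return distance_l, depth_d

    print(transmission_space(speed_v=20.0, delta_t=1.0))  # (21.0, 21.0)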
Server901may also transmit the three-dimensional map managed by server901to client device902every time the three-dimensional map is updated. Client device902sends the transmission request for the three-dimensional map to server901. For example, when client device902wants to perform the self-location estimation during traveling, client device902transmits the transmission request for the three-dimensional map to server901. Note that in the following cases, client device902may send the transmission request for the three-dimensional map to server901. Client device902may send the transmission request for the three-dimensional map to server901when the three-dimensional map stored by client device902is old. For example, client device902may send the transmission request for the three-dimensional map to server901when a fixed period has passed since the three-dimensional map is obtained by client device902. Client device902may also send the transmission request for the three-dimensional map to server901before a fixed time when client device902exits a space shown in the three-dimensional map stored by client device902. For example, client device902may send the transmission request for the three-dimensional map to server901when client device902is located within a predetermined distance from a boundary of the space shown in the three-dimensional map stored by client device902. When a movement path and a movement speed of client device902are understood, a time when client device902exits the space shown in the three-dimensional map stored by client device902may be predicted based on the movement path and the movement speed of client device902. Client device902may also send the transmission request for the three-dimensional map to server901when an error during alignment of the three-dimensional data and the three-dimensional map created from the sensor information by client device902is at least at a fixed level. Client device902transmits the sensor information to server901in response to a transmission request for the sensor information from server901. Note that client device902may transmit the sensor information to server901without waiting for the transmission request for the sensor information from server901. For example, client device902may periodically transmit the sensor information during a fixed period when client device902has received the transmission request for the sensor information from server901once. Client device902may determine that there is a possibility of a change in the three-dimensional map of a surrounding area of client device902having occurred, and transmit this information and the sensor information to server901, when the error during alignment of the three-dimensional data created by client device902based on the sensor information and the three-dimensional map obtained from server901is at least at the fixed level. Server901sends a transmission request for the sensor information to client device902. For example, server901receives position information, such as GPS information, about client device902from client device902. Server901sends the transmission request for the sensor information to client device902in order to generate a new three-dimensional map, when it is determined that client device902is approaching a space in which the three-dimensional map managed by server901contains little information, based on the position information about client device902. 
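The conditions described above under which client device902sends the transmission request for the three-dimensional map can be summarized by the following sketch; the threshold values are hypothetical and only illustrate the kind of decision involved.

    def should_request_map(map_age_s, distance_to_boundary_m, alignment_error,
                           max_age_s=3600.0, boundary_margin_m=100.0,
                           error_threshold=0.5):
        """Request the three-dimensional map when the stored map is old, when the
        device approaches the boundary of the space shown in the stored map, or
        when the alignment error between the sensor-based three-dimensional data
        and the map is at least at a fixed level."""
        return (map_age_s > max_age_s
                or distance_to_boundary_m < boundary_margin_m
                or alignment_error >= error_threshold)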
Server901may also send the transmission request for the sensor information, when wanting to (i) update the three-dimensional map, (ii) check road conditions during snowfall, a disaster, or the like, or (iii) check traffic congestion conditions, accident/incident conditions, or the like. Client device902may set an amount of data of the sensor information to be transmitted to server901in accordance with communication conditions or bandwidth during reception of the transmission request for the sensor information to be received from server901. Setting the amount of data of the sensor information to be transmitted to server901is, for example, increasing/reducing the data itself or appropriately selecting a compression method. FIG.11is a block diagram showing an example structure of client device902. Client device902receives the three-dimensional map formed by a point cloud and the like from server901, and estimates a self-location of client device902using the three-dimensional map created based on the sensor information of client device902. Client device902transmits the obtained sensor information to server901. Client device902includes data receiver1011, communication unit1012, reception controller1013, format converter1014, sensors1015, three-dimensional data creator1016, three-dimensional image processor1017, three-dimensional data storage1018, format converter1019, communication unit1020, transmission controller1021, and data transmitter1022. Data receiver1011receives three-dimensional map1031from server901. Three-dimensional map1031is data that includes a point cloud such as a WLD or a SWLD. Three-dimensional map1031may include compressed data or uncompressed data. Communication unit1012communicates with server901and transmits a data transmission request (e.g. transmission request for three-dimensional map) to server901. Reception controller1013exchanges information, such as information on supported formats, with a communications partner via communication unit1012to establish communication with the communications partner. Format converter1014performs a format conversion and the like on three-dimensional map1031received by data receiver1011to generate three-dimensional map1032. Format converter1014also performs a decompression or decoding process when three-dimensional map1031is compressed or encoded. Note that format converter1014does not perform the decompression or decoding process when three-dimensional map1031is uncompressed data. Sensors1015are a group of sensors, such as LIDARs, visible light cameras, infrared cameras, or depth sensors that obtain information about the outside of a vehicle equipped with client device902, and generate sensor information1033. Sensor information1033is, for example, three-dimensional data such as a point cloud (point group data) when sensors1015are laser sensors such as LIDARs. Note that a single sensor may serve as sensors1015. Three-dimensional data creator1016generates three-dimensional data1034of a surrounding area of the own vehicle based on sensor information1033. For example, three-dimensional data creator1016generates point cloud data with color information on the surrounding area of the own vehicle using information obtained by LIDAR and visible light video obtained by a visible light camera. 
Three-dimensional image processor1017performs a self-location estimation process and the like of the own vehicle, using (i) the received three-dimensional map1032such as a point cloud, and (ii) three-dimensional data1034of the surrounding area of the own vehicle generated using sensor information1033. Note that three-dimensional image processor1017may generate three-dimensional data1035about the surroundings of the own vehicle by merging three-dimensional map1032and three-dimensional data1034, and may perform the self-location estimation process using the created three-dimensional data1035. Three-dimensional data storage1018stores three-dimensional map1032, three-dimensional data1034, three-dimensional data1035, and the like. Format converter1019generates sensor information1037by converting sensor information1033to a format supported by a receiver end. Note that format converter1019may reduce the amount of data by compressing or encoding sensor information1037. Format converter1019may omit this process when format conversion is not necessary. Format converter1019may also control the amount of data to be transmitted in accordance with a specified transmission range. Communication unit1020communicates with server901and receives a data transmission request (transmission request for sensor information) and the like from server901. Transmission controller1021exchanges information, such as information on supported formats, with a communications partner via communication unit1020to establish communication with the communications partner. Data transmitter1022transmits sensor information1037to server901. Sensor information1037includes, for example, information obtained through sensors1015, such as information obtained by LIDAR, a luminance image obtained by a visible light camera, an infrared image obtained by an infrared camera, a depth image obtained by a depth sensor, sensor position information, and sensor speed information. A structure of server901will be described next.FIG.12is a block diagram showing an example structure of server901. Server901receives sensor information from client device902and creates three-dimensional data based on the received sensor information. Server901updates the three-dimensional map managed by server901using the created three-dimensional data. Server901transmits the updated three-dimensional map to client device902in response to a transmission request for the three-dimensional map from client device902. Server901includes data receiver1111, communication unit1112, reception controller1113, format converter1114, three-dimensional data creator1116, three-dimensional data merger1117, three-dimensional data storage1118, format converter1119, communication unit1120, transmission controller1121, and data transmitter1122. Data receiver1111receives sensor information1037from client device902. Sensor information1037includes, for example, information obtained by LIDAR, a luminance image obtained by a visible light camera, an infrared image obtained by an infrared camera, a depth image obtained by a depth sensor, sensor position information, sensor speed information, and the like. Communication unit1112communicates with client device902and transmits a data transmission request (e.g. transmission request for sensor information) and the like to client device902. Reception controller1113exchanges information, such as information on supported formats, with a communications partner via communication unit1112to establish communication with the communications partner. 
Format converter1114generates sensor information1132by performing a decompression or decoding process when the received sensor information1037is compressed or encoded. Note that format converter1114does not perform the decompression or decoding process when sensor information1037is uncompressed data. Three-dimensional data creator1116generates three-dimensional data1134of a surrounding area of client device902based on sensor information1132. For example, three-dimensional data creator1116generates point cloud data with color information on the surrounding area of client device902using information obtained by LIDAR and visible light video obtained by a visible light camera. Three-dimensional data merger1117updates three-dimensional map1135by merging three-dimensional data1134created based on sensor information1132with three-dimensional map1135managed by server901. Three-dimensional data storage1118stores three-dimensional map1135and the like. Format converter1119generates three-dimensional map1031by converting three-dimensional map1135to a format supported by the receiver end. Note that format converter1119may reduce the amount of data by compressing or encoding three-dimensional map1135. Format converter1119may omit this process when format conversion is not necessary. Format converter1119may also control the amount of data to be transmitted in accordance with a specified transmission range. Communication unit1120communicates with client device902and receives a data transmission request (transmission request for three-dimensional map) and the like from client device902. Transmission controller1121exchanges information, such as information on supported formats, with a communications partner via communication unit1120to establish communication with the communications partner. Data transmitter1122transmits three-dimensional map1031to client device902. Three-dimensional map1031is data that includes a point cloud such as a WLD or a SWLD. Three-dimensional map1031may include one of compressed data and uncompressed data. An operational flow of client device902will be described next.FIG.13is a flowchart of an operation when client device902obtains the three-dimensional map. Client device902first requests server901to transmit the three-dimensional map (point cloud, etc.) (S1001). At this point, by also transmitting the position information about client device902obtained through GPS and the like, client device902may also request server901to transmit a three-dimensional map relating to this position information. Client device902next receives the three-dimensional map from server901(S1002). When the received three-dimensional map is compressed data, client device902decodes the received three-dimensional map and generates an uncompressed three-dimensional map (S1003). Client device902next creates three-dimensional data1034of the surrounding area of client device902using sensor information1033obtained by sensors1015(S1004). Client device902next estimates the self-location of client device902using three-dimensional map1032received from server901and three-dimensional data1034created using sensor information1033(S1005). FIG.14is a flowchart of an operation when client device902transmits the sensor information. Client device902first receives a transmission request for the sensor information from server901(S1011). Client device902that has received the transmission request transmits sensor information1037to server901(S1012). 
Note that client device902may generate sensor information1037by compressing each piece of information using a compression method suited to each piece of information, when sensor information1033includes a plurality of pieces of information obtained by sensors1015. An operational flow of server901will be described next.FIG.15is a flowchart of an operation when server901obtains the sensor information. Server901first requests client device902to transmit the sensor information (S1021). Server901next receives sensor information1037transmitted from client device902in accordance with the request (S1022). Server901next creates three-dimensional data1134using the received sensor information1037(S1023). Server901next reflects the created three-dimensional data1134in three-dimensional map1135(S1024). FIG.16is a flowchart of an operation when server901transmits the three-dimensional map. Server901first receives a transmission request for the three-dimensional map from client device902(S1031). Server901that has received the transmission request for the three-dimensional map transmits the three-dimensional map to client device902(S1032). At this point, server901may extract a three-dimensional map of a vicinity of client device902along with the position information about client device902, and transmit the extracted three-dimensional map. Server901may compress the three-dimensional map formed by a point cloud using, for example, an octree structure compression method, and transmit the compressed three-dimensional map. The following describes variations of the present embodiment. Server901creates three-dimensional data1134of a vicinity of a position of client device902using sensor information1037received from client device902. Server901next calculates a difference between three-dimensional data1134and three-dimensional map1135, by matching the created three-dimensional data1134with three-dimensional map1135of the same area managed by server901. Server901determines that a type of anomaly has occurred in the surrounding area of client device902, when the difference is greater than or equal to a predetermined threshold. For example, it is conceivable that a large difference occurs between three-dimensional map1135managed by server901and three-dimensional data1134created based on sensor information1037, when land subsidence and the like occurs due to a natural disaster such as an earthquake. Sensor information1037may include information indicating at least one of a sensor type, a sensor performance, and a sensor model number. Sensor information1037may also be appended with a class ID and the like in accordance with the sensor performance. For example, when sensor information1037is obtained by LIDAR, it is conceivable to assign identifiers to the sensor performance. A sensor capable of obtaining information with precision in units of several millimeters is class1, a sensor capable of obtaining information with precision in units of several centimeters is class2, and a sensor capable of obtaining information with precision in units of several meters is class3. Server901may estimate sensor performance information and the like from a model number of client device902. For example, when client device902is equipped in a vehicle, server901may determine sensor specification information from a type of the vehicle. In this case, server901may obtain information on the type of the vehicle in advance, and the information may also be included in the sensor information. 
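The class IDs assigned in accordance with the sensor performance can be illustrated with the following sketch; the precision boundaries used here are assumptions chosen only to mirror the millimeter/centimeter/meter example above.

    def sensor_class(precision_m):
        """Illustrative class ID assignment based on sensor precision in meters:
        class 1 for millimeter-order, class 2 for centimeter-order, and
        class 3 for meter-order precision."""
        if precision_m < 0.01:
            return 1
        if precision_m < 0.1:
            return 2
        return 3

    print(sensor_class(0.003), sensor_class(0.05), sensor_class(2.0))  # 1 2 3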
Server901may change a degree of correction with respect to three-dimensional data1134created using sensor information1037, using the obtained sensor information1037. For example, when the sensor performance is high in precision (class1), server901does not correct three-dimensional data1134. When the sensor performance is low in precision (class3), server901corrects three-dimensional data1134in accordance with the precision of the sensor. For example, server901increases the degree (intensity) of correction with a decrease in the precision of the sensor. Server901may simultaneously send the transmission request for the sensor information to the plurality of client devices902in a certain space. Server901does not need to use all of the sensor information for creating three-dimensional data1134and may, for example, select sensor information to be used in accordance with the sensor performance, when having received a plurality of pieces of sensor information from the plurality of client devices902. For example, when updating three-dimensional map1135, server901may select high-precision sensor information (class1) from among the received plurality of pieces of sensor information, and create three-dimensional data1134using the selected sensor information. Server901is not limited to only being a server such as a cloud-based traffic monitoring system, and may also be another (vehicle-mounted) client device.FIG.17is a diagram of a system structure in this case. For example, client device902C sends a transmission request for sensor information to client device902A located nearby, and obtains the sensor information from client device902A. Client device902C then creates three-dimensional data using the obtained sensor information of client device902A, and updates a three-dimensional map of client device902C. This enables client device902C to generate a three-dimensional map of a space that can be obtained from client device902A, and fully utilize the performance of client device902C. For example, such a case is conceivable when client device902C has high performance. In this case, client device902A that has provided the sensor information is given rights to obtain the high-precision three-dimensional map generated by client device902C. Client device902A receives the high-precision three-dimensional map from client device902C in accordance with these rights. Server901may send the transmission request for the sensor information to the plurality of client devices902(client device902A and client device902B) located nearby client device902C. When a sensor of client device902A or client device902B has high performance, client device902C is capable of creating the three-dimensional data using the sensor information obtained by this high-performance sensor. FIG.18is a block diagram showing a functionality structure of server901and client device902. Server901includes, for example, three-dimensional map compression/decoding processor1201that compresses and decodes the three-dimensional map and sensor information compression/decoding processor1202that compresses and decodes the sensor information. Client device902includes three-dimensional map decoding processor1211and sensor information compression processor1212. Three-dimensional map decoding processor1211receives encoded data of the compressed three-dimensional map, decodes the encoded data, and obtains the three-dimensional map. 
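The selection of sensor information by class and the precision-dependent degree of correction described above can be pictured as follows; the report format (class ID, data) and the correction strengths per class are hypothetical.

    def select_sensor_information(reports):
        """From reports of the form (class_id, data) received from a plurality of
        client devices 902, keep only those with the best (smallest) class ID."""
        best_class = min(class_id for class_id, _ in reports)
        return [data for class_id, data in reports if class_id == best_class]

    def correction_strength(class_id):
        """Illustrative rule: the lower the precision of the sensor, the stronger
        the correction applied to the created three-dimensional data."""
        return {1: 0.0, 2: 0.5, 3: 1.0}.get(class_id, 1.0)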
Sensor information compression processor1212compresses the sensor information itself instead of the three-dimensional data created using the obtained sensor information, and transmits the encoded data of the compressed sensor information to server901. With this structure, client device902does not need to internally store a processor that performs a process for compressing the three-dimensional data of the three-dimensional map (point cloud, etc.), as long as client device902internally stores a processor that performs a process for decoding the three-dimensional map (point cloud, etc.). This makes it possible to limit costs, power consumption, and the like of client device902. As stated above, client device902according to the present embodiment is equipped in the mobile object, and creates three-dimensional data1034of a surrounding area of the mobile object using sensor information1033that is obtained through sensor1015equipped in the mobile object and indicates a surrounding condition of the mobile object. Client device902estimates a self-location of the mobile object using the created three-dimensional data1034. Client device902transmits the obtained sensor information1033to server901or another mobile object. This enables client device902to transmit sensor information1033to server901or the like. This makes it possible to further reduce the amount of transmission data compared to when transmitting the three-dimensional data. Since there is no need for client device902to perform processes such as compressing or encoding the three-dimensional data, it is possible to reduce the processing amount of client device902. As such, client device902is capable of reducing the amount of data to be transmitted or simplifying the structure of the device. Client device902further transmits the transmission request for the three-dimensional map to server901and receives three-dimensional map1031from server901. In the estimating of the self-location, client device902estimates the self-location using three-dimensional data1034and three-dimensional map1032. Sensor information1033includes at least one of information obtained by a laser sensor, a luminance image, an infrared image, a depth image, sensor position information, or sensor speed information. Sensor information1033includes information that indicates a performance of the sensor. Client device902encodes or compresses sensor information1033, and in the transmitting of the sensor information, transmits sensor information1037that has been encoded or compressed to server901or another mobile object902. This enables client device902to reduce the amount of data to be transmitted. For example, client device902includes a processor and memory. The processor performs the above processes using the memory. Server901according to the present embodiment is capable of communicating with client device902equipped in the mobile object, and receives sensor information1037that is obtained through sensor1015equipped in the mobile object and indicates a surrounding condition of the mobile object. Server901creates three-dimensional data1134of a surrounding area of the mobile object using the received sensor information1037. With this, server901creates three-dimensional data1134using sensor information1037transmitted from client device902. This makes it possible to further reduce the amount of transmission data compared to when client device902transmits the three-dimensional data. 
Since there is no need for client device902to perform processes such as compressing or encoding the three-dimensional data, it is possible to reduce the processing amount of client device902. As such, server901is capable of reducing the amount of data to be transmitted or simplifying the structure of the device. Server901further transmits a transmission request for the sensor information to client device902. Server901further updates three-dimensional map1135using the created three-dimensional data1134, and transmits three-dimensional map1135to client device902in response to the transmission request for three-dimensional map1135from client device902. Sensor information1037includes at least one of information obtained by a laser sensor, a luminance image, an infrared image, a depth image, sensor position information, or sensor speed information. Sensor information1037includes information that indicates a performance of the sensor. Server901further corrects the three-dimensional data in accordance with the performance of the sensor. This enables the three-dimensional data creation method to improve the quality of the three-dimensional data. In the receiving of the sensor information, server901receives a plurality of pieces of sensor information1037received from a plurality of client devices902, and selects sensor information1037to be used in the creating of three-dimensional data1134, based on a plurality of pieces of information that each indicates the performance of the sensor included in the plurality of pieces of sensor information1037. This enables server901to improve the quality of three-dimensional data1134. Server901decodes or decompresses the received sensor information1037, and creates three-dimensional data1134using sensor information1132that has been decoded or decompressed. This enables server901to reduce the amount of data to be transmitted. For example, server901includes a processor and memory. The processor performs the above processes using the memory. Embodiment 5 In the present embodiment, a variation of Embodiment 4 will be described.FIG.19is a diagram illustrating a configuration of a system according to the present embodiment. The system illustrated inFIG.19includes server2001, client device2002A, and client device2002B. Client device2002A and client device2002B are each provided in a mobile object such as a vehicle, and transmit sensor information to server2001. Server2001transmits a three-dimensional map (a point cloud) to client device2002A and client device2002B. Client device2002A includes sensor information obtainer2011, storage2012, and data transmission possibility determiner2013. It should be noted that client device2002B has the same configuration. Additionally, when client device2002A and client device2002B are not particularly distinguished below, client device2002A and client device2002B are also referred to as client device2002. FIG.20is a flowchart illustrating operation of client device2002according to the present embodiment. Sensor information obtainer2011obtains a variety of sensor information using sensors (a group of sensors) provided in a mobile object. In other words, sensor information obtainer2011obtains sensor information obtained by the sensors (the group of sensors) provided in the mobile object and indicating a surrounding state of the mobile object. Sensor information obtainer2011also stores the obtained sensor information into storage2012. 
This sensor information includes at least one of information obtained by LiDAR, a visible light image, an infrared image, or a depth image. Additionally, the sensor information may include at least one of sensor position information, speed information, obtainment time information, or obtainment location information. Sensor position information indicates a position of a sensor that has obtained sensor information. Speed information indicates a speed of the mobile object when a sensor obtained sensor information. Obtainment time information indicates a time when a sensor obtained sensor information. Obtainment location information indicates a position of the mobile object or a sensor when the sensor obtained sensor information. Next, data transmission possibility determiner2013determines whether the mobile object (client device2002) is in an environment in which the mobile object can transmit sensor information to server2001(S2002). For example, data transmission possibility determiner2013may specify a location and a time at which client device2002is present using GPS information etc., and may determine whether data can be transmitted. Additionally, data transmission possibility determiner2013may determine whether data can be transmitted, depending on whether it is possible to connect to a specific access point. When client device2002determines that the mobile object is in the environment in which the mobile object can transmit the sensor information to server2001(YES in S2002), client device2002transmits the sensor information to server2001(S2003). In other words, when client device2002becomes capable of transmitting sensor information to server2001, client device2002transmits the sensor information held by client device2002to server2001. For example, an access point that enables high-speed communication using millimeter waves is provided in an intersection or the like. When client device2002enters the intersection, client device2002transmits the sensor information held by client device2002to server2001at high speed using the millimeter-wave communication. Next, client device2002deletes from storage2012the sensor information that has been transmitted to server2001(S2004). It should be noted that when sensor information that has not been transmitted to server2001meets predetermined conditions, client device2002may delete the sensor information. For example, when an obtainment time of sensor information held by client device2002precedes a current time by a certain time, client device2002may delete the sensor information from storage2012. In other words, when a difference between the current time and a time when a sensor obtained sensor information exceeds a predetermined time, client device2002may delete the sensor information from storage2012. Besides, when an obtainment location of sensor information held by client device2002is separated from a current location by a certain distance, client device2002may delete the sensor information from storage2012. In other words, when a difference between a current position of the mobile object or a sensor and a position of the mobile object or the sensor when the sensor obtained sensor information exceeds a predetermined distance, client device2002may delete the sensor information from storage2012. Accordingly, it is possible to reduce the capacity of storage2012of client device2002. When client device2002does not finish obtaining sensor information (NO in S2005), client device2002performs step S2001and the subsequent steps again. 
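The deletion of transmitted or stale sensor information from storage2012, based on the elapsed time and the distance from the current position as described above, can be sketched as follows; the record format and the thresholds are assumptions.

    import math
    import time

    def prune_sensor_information(records, current_position, now=None,
                                 max_age_s=600.0, max_distance_m=500.0):
        """Keep only the records whose obtainment time is recent enough and whose
        obtainment location is close enough to the current position. Each record
        is a tuple (timestamp_s, (x, y), payload)."""
        now = time.time() if now is None else now
        kept = []
        for timestamp, (x, y), payload in records:
            too_old = (now - timestamp) > max_age_s
            too_far = math.hypot(x - current_position[0],
                                 y - current_position[1]) > max_distance_m
            if not (too_old or too_far):
                kept.append((timestamp, (x, y), payload))
        return kept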
Further, when client device2002finishes obtaining sensor information (YES in S2005), client device2002completes the process. Client device2002may select sensor information to be transmitted to server2001, in accordance with communication conditions. For example, when high-speed communication is available, client device2002preferentially transmits sensor information (e.g., information obtained by LiDAR) of which the data size held in storage2012is large. Additionally, when high-speed communication is not readily available, client device2002transmits sensor information (e.g., a visible light image) which has high priority and of which the data size held in storage2012is small. Accordingly, client device2002can efficiently transmit sensor information held in storage2012, in accordance with network conditions. Client device2002may obtain, from server2001, time information indicating a current time and location information indicating a current location. Moreover, client device2002may determine an obtainment time and an obtainment location of sensor information based on the obtained time information and location information. In other words, client device2002may obtain time information from server2001and generate obtainment time information using the obtained time information. Client device2002may also obtain location information from server2001and generate obtainment location information using the obtained location information. For example, regarding time information, server2001and client device2002perform clock synchronization using a means such as the Network Time Protocol (NTP) or the Precision Time Protocol (PTP). This enables client device2002to obtain accurate time information. What's more, since it is possible to synchronize clocks between server2001and client devices2002, it is possible to synchronize times included in pieces of sensor information obtained by separate client devices2002. As a result, server2001can handle sensor information indicating a synchronized time. It should be noted that a means of synchronizing clocks may be any means other than the NTP or PTP. In addition, GPS information may be used as the time information and the location information. Server2001may specify a time or a location and obtain pieces of sensor information from client devices2002. For example, when an accident occurs, in order to search for a client device in the vicinity of the accident, server2001specifies an accident occurrence time and an accident occurrence location and broadcasts sensor information transmission requests to client devices2002. Then, client device2002having sensor information obtained at the corresponding time and location transmits the sensor information to server2001. In other words, client device2002receives, from server2001, a sensor information transmission request including specification information specifying a location and a time. When sensor information obtained at a location and a time indicated by the specification information is stored in storage2012, and client device2002determines that the mobile object is present in the environment in which the mobile object can transmit the sensor information to server2001, client device2002transmits, to server2001, the sensor information obtained at the location and the time indicated by the specification information. Consequently, server2001can obtain the pieces of sensor information pertaining to the occurrence of the accident from client devices2002, and use the pieces of sensor information for accident analysis etc. 
It should be noted that when client device2002receives a sensor information transmission request from server2001, client device2002may refuse to transmit sensor information. Additionally, client device2002may set in advance which pieces of sensor information can be transmitted. Alternatively, server2001may inquire of client device2002each time whether sensor information can be transmitted. A point may be given to client device2002that has transmitted sensor information to server2001. This point can be used in payment for, for example, gasoline expenses, electric vehicle (EV) charging expenses, a highway toll, or rental car expenses. After obtaining sensor information, server2001may delete information for specifying client device2002that has transmitted the sensor information. For example, this information is a network address of client device2002. Since this enables the anonymization of sensor information, a user of client device2002can securely transmit sensor information from client device2002to server2001. Server2001may include a plurality of servers. For example, by servers sharing sensor information, even when one of the servers breaks down, the other servers can communicate with client device2002. Accordingly, it is possible to avoid service outage due to a server breakdown. A specified location specified by a sensor information transmission request indicates an accident occurrence location etc., and may be different from a position of client device2002at a specified time specified by the sensor information transmission request. For this reason, for example, by specifying, as a specified location, a range such as within XX meters of a surrounding area, server2001can request information from client device2002within the range. Similarly, server2001may also specify, as a specified time, a range such as within N seconds before and after a certain time. As a result, server2001can obtain sensor information from client device2002present for a time from t-N to t+N and in a location within XX meters from absolute position S. When client device2002transmits three-dimensional data such as LiDAR data, client device2002may transmit data created immediately after time t. Server2001may separately specify information indicating, as a specified location, a location of client device2002from which sensor information is to be obtained, and a location at which sensor information is desirably obtained. For example, server2001specifies that sensor information including at least a range within YY meters from absolute position S is to be obtained from client device2002present within XX meters from absolute position S. When client device2002selects three-dimensional data to be transmitted, client device2002selects one or more pieces of three-dimensional data so that the one or more pieces of three-dimensional data include at least the sensor information including the specified range. Each of the one or more pieces of three-dimensional data is a random-accessible unit of data. In addition, when client device2002transmits a visible light image, client device2002may transmit pieces of temporally continuous image data including at least a frame immediately before or immediately after time t. When client device2002can use physical networks such as 5G, Wi-Fi, or modes in 5G for transmitting sensor information, client device2002may select a network to be used according to the order of priority notified by server2001. 
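Whether a stored piece of sensor information satisfies a transmission request that specifies absolute position S with a surrounding range of XX meters and time t with a margin of N seconds can be checked as in the following sketch; the dictionary keys used for the record and the request are hypothetical.

    import math

    def matches_request(record, request):
        """Return True when the obtainment location of the record lies within the
        requested radius of the specified position and its obtainment time lies
        within the requested margin of the specified time."""
        sx, sy = request["position"]
        within_area = math.hypot(record["x"] - sx, record["y"] - sy) <= request["radius_m"]
        within_time = abs(record["time"] - request["time"]) <= request["margin_s"]
        return within_area and within_time

    request = {"position": (100.0, 200.0), "radius_m": 50.0, "time": 1000.0, "margin_s": 5.0}
    record = {"x": 120.0, "y": 210.0, "time": 1003.0}
    print(matches_request(record, request))  # True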
Alternatively, client device2002may select a network that enables client device2002to ensure an appropriate bandwidth based on the size of transmit data. Alternatively, client device2002may select a network to be used, based on data transmission expenses etc. A transmission request from server2001may include information indicating a transmission deadline, for example, performing transmission when client device2002can start transmission by time t. When server2001cannot obtain sufficient sensor information within a time limit, server2001may issue a transmission request again. Sensor information may include header information indicating characteristics of sensor data along with compressed or uncompressed sensor data. Client device2002may transmit header information to server2001via a physical network or a communication protocol that is different from a physical network or a communication protocol used for sensor data. For example, client device2002transmits header information to server2001prior to transmitting sensor data. Server2001determines whether to obtain the sensor data of client device2002, based on a result of analysis of the header information. For example, header information may include information indicating a point cloud obtainment density, an elevation angle, or a frame rate of LiDAR, or information indicating, for example, a resolution, an SN ratio, or a frame rate of a visible light image. Accordingly, server2001can obtain the sensor information from client device2002having the sensor data of determined quality. As stated above, client device2002is provided in the mobile object, obtains sensor information that has been obtained by a sensor provided in the mobile object and indicates a surrounding state of the mobile object, and stores the sensor information into storage2012. Client device2002determines whether the mobile object is present in an environment in which the mobile object is capable of transmitting the sensor information to server2001, and transmits the sensor information to server2001when the mobile object is determined to be present in the environment in which the mobile object is capable of transmitting the sensor information to server2001. Additionally, client device2002further creates, from the sensor information, three-dimensional data of a surrounding area of the mobile object, and estimates a self-location of the mobile object using the three-dimensional data created. Besides, client device2002further transmits a transmission request for a three-dimensional map to server2001, and receives the three-dimensional map from server2001. In the estimating, client device2002estimates the self-location using the three-dimensional data and the three-dimensional map. It should be noted that the above process performed by client device2002may be realized as an information transmission method for use in client device2002. In addition, client device2002may include a processor and memory. Using the memory, the processor may perform the above process. Next, a sensor information collection system according to the present embodiment will be described.FIG.21is a diagram illustrating a configuration of the sensor information collection system according to the present embodiment. As illustrated inFIG.21, the sensor information collection system according to the present embodiment includes terminal2021A, terminal2021B, communication device2022A, communication device2022B, network2023, data collection server2024, map server2025, and client device2026. 
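The server-side use of header information transmitted ahead of the sensor data can be pictured with the following sketch; the header keys and the minimum quality values are hypothetical and only illustrate deciding whether to obtain the sensor data.

    def accept_sensor_data(header, min_point_density=10000, min_frame_rate=5.0):
        """Decide, from header information such as the point cloud obtainment
        density and the frame rate, whether the sensor data is worth obtaining."""
        return (header.get("point_density", 0) >= min_point_density
                and header.get("frame_rate", 0.0) >= min_frame_rate)

    print(accept_sensor_data({"point_density": 50000, "frame_rate": 10.0}))  # True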
It should be noted that when terminal2021A and terminal2021B are not particularly distinguished, terminal2021A and terminal2021B are also referred to as terminal2021. Additionally, when communication device2022A and communication device2022B are not particularly distinguished, communication device2022A and communication device2022B are also referred to as communication device2022. Data collection server2024collects data such as sensor data obtained by a sensor included in terminal2021as position-related data in which the data is associated with a position in a three-dimensional space. Sensor data is data obtained by, for example, detecting a surrounding state of terminal2021or an internal state of terminal2021using a sensor included in terminal2021. Terminal2021transmits, to data collection server2024, one or more pieces of sensor data collected from one or more sensor devices in locations at which direct communication with terminal2021is possible or at which communication with terminal2021is possible by the same communication system or via one or more relay devices. Data included in position-related data may include, for example, information indicating an operating state, an operating log, a service use state, etc. of a terminal or a device included in the terminal. In addition, the data included in the position-related data may include, for example, information in which an identifier of terminal2021is associated with a position or a movement path etc. of terminal2021. Information indicating a position included in position-related data is associated with, for example, information indicating a position in three-dimensional data such as three-dimensional map data. The details of information indicating a position will be described later. Position-related data may include at least one of the above-described time information or information indicating an attribute of data included in the position-related data or a type (e.g., a model number) of a sensor that has created the data, in addition to position information that is information indicating a position. The position information and the time information may be stored in a header area of the position-related data or a header area of a frame that stores the position-related data. Further, the position information and the time information may be transmitted and/or stored as metadata associated with the position-related data, separately from the position-related data. Map server2025is connected to, for example, network2023, and transmits three-dimensional data such as three-dimensional map data in response to a request from another device such as terminal2021. Besides, as described in the aforementioned embodiments, map server2025may have, for example, a function of updating three-dimensional data using sensor information transmitted from terminal2021. Data collection server2024is connected to, for example, network2023, collects position-related data from another device such as terminal2021, and stores the collected position-related data into a storage of data collection server2024or a storage of another server. In addition, data collection server2024transmits, for example, metadata of collected position-related data or three-dimensional data generated based on the position-related data, to terminal2021in response to a request from terminal2021. Network2023is, for example, a communication network such as the Internet. Terminal2021is connected to network2023via communication device2022. 
Communication device2022communicates with terminal2021using one communication system or switching between communication systems. Communication device2022is, for example, (1) a base station compliant with Long-Term Evolution (LTE) etc., (2) an access point (AP) for Wi-Fi or millimeter-wave communication etc., (3) a low-power wide-area (LPWA) network gateway such as SIGFOX, LoRaWAN, or Wi-SUN, or (4) a communication satellite that performs communication using a satellite communication system such as DVB-S2. It should be noted that a base station may communicate with terminal2021using a system classified as an LPWA network such as Narrowband Internet of Things (NB-IoT) or LTE-M, or switching between these systems. Here, although, in the example given, terminal2021has a function of communicating with communication device2022that uses two types of communication systems, and communicates with map server2025or data collection server2024using one of the communication systems or switching between the communication systems and between communication devices2022to be a direct communication partner; a configuration of the sensor information collection system and terminal2021is not limited to this. For example, terminal2021need not have a function of performing communication using a plurality of communication systems, and may have a function of performing communication using one of the communication systems. Terminal2021may also support three or more communication systems. Additionally, each terminal2021may support a different communication system. Terminal2021includes, for example, the configuration of client device902illustrated inFIG.11. Terminal2021estimates a self-location etc. using received three-dimensional data. Besides, terminal2021associates sensor data obtained from a sensor and position information obtained by self-location estimation to generate position-related data. Position information appended to position-related data indicates, for example, a position in a coordinate system used for three-dimensional data. For example, the position information is coordinate values represented using a value of a latitude and a value of a longitude. Here, terminal2021may include, in the position information, a coordinate system serving as a reference for the coordinate values and information indicating three-dimensional data used for location estimation, along with the coordinate values. Coordinate values may also include altitude information. The position information may be associated with a data unit or a space unit usable for encoding the above three-dimensional data. Such a unit is, for example, WLD, GOS, SPC, VLM, or VXL. Here, the position information is represented by, for example, an identifier for identifying a data unit such as the SPC corresponding to position-related data. It should be noted that the position information may include, for example, information indicating three-dimensional data obtained by encoding a three-dimensional space including a data unit such as the SPC or information indicating a detailed position within the SPC, in addition to the identifier for identifying the data unit such as the SPC. The information indicating the three-dimensional data is, for example, a file name of the three-dimensional data. 
As stated above, by generating position-related data associated with position information based on location estimation using three-dimensional data, the system can give more accurate position information to sensor information than when the system appends position information based on a self-location of a client device (terminal2021) obtained using a GPS to sensor information. As a result, even when another device uses the position-related data in another service, there is a possibility of more accurately determining a position corresponding to the position-related data in an actual space, by performing location estimation based on the same three-dimensional data. It should be noted that although the data transmitted from terminal2021is the position-related data in the example given in the present embodiment, the data transmitted from terminal2021may be data unassociated with position information. In other words, the transmission and reception of three-dimensional data or sensor data described in the other embodiments may be performed via network2023described in the present embodiment. Next, a different example of position information indicating a position in a three-dimensional or two-dimensional actual space or in a map space will be described. The position information appended to position-related data may be information indicating a relative position relative to a keypoint in three-dimensional data. Here, the keypoint serving as a reference for the position information is encoded as, for example, SWLD, and notified to terminal2021as three-dimensional data. The information indicating the relative position relative to the keypoint may be, for example, information that is represented by a vector from the keypoint to the point indicated by the position information, and indicates a direction and a distance from the keypoint to the point indicated by the position information. Alternatively, the information indicating the relative position relative to the keypoint may be information indicating an amount of displacement from the keypoint to the point indicated by the position information along each of the x axis, the y axis, and the z axis. Additionally, the information indicating the relative position relative to the keypoint may be information indicating a distance from each of three or more keypoints to the point indicated by the position information. It should be noted that the relative position need not be a relative position of the point indicated by the position information represented using each keypoint as a reference, and may be a relative position of each keypoint represented with respect to the point indicated by the position information. Examples of position information based on a relative position relative to a keypoint include information for identifying a keypoint to be a reference, and information indicating the relative position of the point indicated by the position information and relative to the keypoint. When the information indicating the relative position relative to the keypoint is provided separately from three-dimensional data, the information indicating the relative position relative to the keypoint may include, for example, coordinate axes used in deriving the relative position, information indicating a type of the three-dimensional data, and/or information indicating a magnitude per unit amount (e.g., a scale) of a value of the information indicating the relative position. 
The position information may include, for each keypoint, information indicating a relative position relative to the keypoint. When the position information is represented by relative positions relative to keypoints, terminal2021that intends to identify a position in an actual space indicated by the position information may calculate candidate points of the position indicated by the position information from positions of the keypoints each estimated from sensor data, and may determine that a point obtained by averaging the calculated candidate points is the point indicated by the position information. Since this configuration reduces the effects of errors when the positions of the keypoints are estimated from the sensor data, it is possible to improve the estimation accuracy for the point in the actual space indicated by the position information. Besides, when the position information includes information indicating relative positions relative to keypoints, even if some keypoints are undetectable due to a limitation such as the type or performance of a sensor included in terminal2021, it is possible to estimate the position of the point indicated by the position information as long as any one of the keypoints can be detected. A point identifiable from sensor data can be used as a keypoint. Examples of the point identifiable from the sensor data include a point or a point within a region that satisfies a predetermined keypoint detection condition, such as a point whose above-described three-dimensional feature or feature of visible light data is greater than or equal to a threshold value. Moreover, a marker etc. placed in an actual space may be used as a keypoint. In this case, the marker may be detected and located from data obtained using a sensor such as LiDAR or a camera. For example, the marker may be represented by a change in color or luminance value (degree of reflection), or a three-dimensional shape (e.g., unevenness). Coordinate values indicating a position of the marker, or a two-dimensional bar code or a bar code etc. generated from an identifier of the marker may also be used. Furthermore, a light source that transmits an optical signal may be used as a marker. When a light source of an optical signal is used as a marker, not only information for obtaining a position such as coordinate values or an identifier but also other data may be transmitted using an optical signal. For example, an optical signal may include contents of service corresponding to the position of the marker, an address for obtaining contents such as a URL, or an identifier of a wireless communication device for receiving service, and information indicating a wireless communication system etc. for connecting to the wireless communication device. The use of an optical communication device (a light source) as a marker not only facilitates the transmission of data other than information indicating a position but also makes it possible to dynamically change the data. Terminal2021finds out a correspondence relationship of keypoints between mutually different data using, for example, a common identifier used for the data, or information or a table indicating the correspondence relationship of the keypoints between the data.
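For illustration only, the following is a minimal sketch of the candidate-point averaging described earlier in this passage: each keypoint position estimated from sensor data, combined with the stored relative vector, yields one candidate for the point indicated by the position information, and the candidates are averaged. The function and variable names are assumptions introduced here, and the relative positions are assumed to be given as displacement vectors.

```python
# Each estimated keypoint position plus the stored relative vector gives one candidate
# for the point indicated by the position information; averaging the candidates reduces
# the effect of per-keypoint estimation errors.
def estimate_point(estimated_keypoints, relative_vectors):
    """estimated_keypoints: list of (x, y, z) keypoint positions estimated from sensor data.
    relative_vectors: list of (dx, dy, dz) offsets stored in the position information."""
    candidates = [
        (kx + dx, ky + dy, kz + dz)
        for (kx, ky, kz), (dx, dy, dz) in zip(estimated_keypoints, relative_vectors)
    ]
    n = len(candidates)
    return tuple(sum(c[i] for c in candidates) / n for i in range(3))

# Example: two keypoints whose estimates disagree slightly.
keypoints = [(10.0, 0.0, 0.0), (0.0, 10.2, 0.0)]
offsets = [(-10.0, 5.0, 1.0), (0.0, -5.1, 1.0)]
print(estimate_point(keypoints, offsets))  # roughly (0.0, 5.05, 1.0)
```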
When there is no information indicating a correspondence relationship between keypoints, terminal2021may also determine that when coordinates of a keypoint in three-dimensional data are converted into a position in a space of another three-dimensional data, a keypoint closest to the position is a corresponding keypoint. When the position information based on the relative position described above is used, terminals2021that use mutually different three-dimensional data or services can identify or estimate a position indicated by the position information with respect to a common keypoint included in or associated with each three-dimensional data. As a result, terminals2021that use the mutually different three-dimensional data or the services can identify or estimate the same position with higher accuracy. Even when map data or three-dimensional data represented using mutually different coordinate systems are used, since it is possible to reduce the effects of errors caused by the conversion of a coordinate system, it is possible to coordinate services based on more accurate position information. Hereinafter, an example of functions provided by data collection server2024will be described. Data collection server2024may transfer received position-related data to another data server. When there are a plurality of data servers, data collection server2024determines to which data server received position-related data is to be transferred, and transfers the position-related data to a data server determined as a transfer destination. Data collection server2024determines a transfer destination based on, for example, transfer destination server determination rules preset to data collection server2024. The transfer destination server determination rules are set by, for example, a transfer destination table in which identifiers respectively associated with terminals2021are associated with transfer destination data servers. Terminal2021appends an identifier associated with terminal2021to position-related data to be transmitted, and transmits the position-related data to data collection server2024. Data collection server2024determines a transfer destination data server corresponding to the identifier appended to the position-related data, based on the transfer destination server determination rules set out using the transfer destination table etc.; and transmits the position-related data to the determined data server. The transfer destination server determination rules may be specified based on a determination condition set using a time, a place, etc. at which position-related data is obtained. Here, examples of the identifier associated with transmission source terminal2021include an identifier unique to each terminal2021or an identifier indicating a group to which terminal2021belongs. The transfer destination table need not be a table in which identifiers associated with transmission source terminals are directly associated with transfer destination data servers. For example, data collection server2024holds a management table that stores tag information assigned to each identifier unique to terminal2021, and a transfer destination table in which the pieces of tag information are associated with transfer destination data servers. Data collection server2024may determine a transfer destination data server based on tag information, using the management table and the transfer destination table.
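For illustration only, the following is a minimal sketch of the two-table lookup just described, in which a management table maps the identifier of terminal2021to tag information and a transfer destination table maps tag information to a transfer destination data server. The table contents and server addresses are assumptions introduced here.

```python
# Hypothetical routing tables; contents are illustrative.
management_table = {           # identifier unique to terminal 2021 -> tag information
    "terminal-0001": "fleet-trucks",
    "terminal-0002": "drones",
}
transfer_destination_table = { # tag information -> transfer destination data server
    "fleet-trucks": "https://logistics-data.example.com",
    "drones": "https://aerial-data.example.com",
}

def resolve_transfer_destination(terminal_id, default_server=None):
    """Return the data server to which position-related data from this terminal
    should be forwarded, or a default when no rule matches."""
    tag = management_table.get(terminal_id)
    return transfer_destination_table.get(tag, default_server)

print(resolve_transfer_destination("terminal-0001"))
# -> https://logistics-data.example.com
```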
Here, the tag information is, for example, control information for management or control information for providing a service, assigned to a type, a model number, an owner of terminal2021corresponding to the identifier, a group to which terminal2021belongs, or another identifier. Moreover, in the transfer destination table, identifiers unique to respective sensors may be used instead of the identifiers associated with transmission source terminals2021. Furthermore, the transfer destination server determination rules may be set by client device2026. Data collection server2024may determine a plurality of data servers as transfer destinations, and transfer received position-related data to the data servers. According to this configuration, for example, when position-related data is automatically backed up or when, in order that position-related data is commonly used by different services, there is a need to transmit the position-related data to a data server for providing each service, it is possible to achieve data transfer as intended by changing a setting of data collection server2024. As a result, it is possible to reduce the number of steps necessary for building and changing a system, compared to when a transmission destination of position-related data is set for each terminal2021. Data collection server2024may register, as a new transfer destination, a data server specified by a transfer request signal received from a data server; and transmit position-related data subsequently received to the data server, in response to the transfer request signal. Data collection server2024may store position-related data received from terminal2021into a recording device, and transmit position-related data specified by a transmission request signal received from terminal2021or a data server to request source terminal2021or the data server in response to the transmission request signal. Data collection server2024may determine whether position-related data is suppliable to a request source data server or terminal2021, and transfer or transmit the position-related data to the request source data server or terminal2021when determining that the position-related data is suppliable. When data collection server2024receives a request for current position-related data from client device2026, even if it is not a timing at which terminal2021transmits position-related data, data collection server2024may send a transmission request for the current position-related data to terminal2021, and terminal2021may transmit the current position-related data in response to the transmission request. Although terminal2021transmits position information data to data collection server2024in the above description, data collection server2024may have a function of managing terminal2021such as a function necessary for collecting position-related data from terminal2021or a function used when collecting position-related data from terminal2021. Data collection server2024may have a function of transmitting, to terminal2021, a data request signal for requesting transmission of position information data, and collecting position-related data. Management information such as an address for communicating with terminal2021from which data is to be collected or an identifier unique to terminal2021is registered in advance in data collection server2024. Data collection server2024collects position-related data from terminal2021based on the registered management information.
Management information may include information such as types of sensors included in terminal2021, the number of sensors included in terminal2021, and communication systems supported by terminal2021. Data collection server2024may collect information such as an operating state or a current position of terminal2021from terminal2021. Registration of management information may be instructed by client device2026, or a process for the registration may be started by terminal2021transmitting a registration request to data collection server2024. Data collection server2024may have a function of controlling communication between data collection server2024and terminal2021. Communication between data collection server2024and terminal2021may be established using a dedicated line provided by a service provider such as a mobile network operator (MNO) or a mobile virtual network operator (MVNO), or a virtual dedicated line based on a virtual private network (VPN). According to this configuration, it is possible to perform secure communication between terminal2021and data collection server2024. Data collection server2024may have a function of authenticating terminal2021or a function of encrypting data to be transmitted and received between data collection server2024and terminal2021. Here, the authentication of terminal2021or the encryption of data is performed using, for example, an identifier unique to terminal2021or an identifier unique to a terminal group including terminals2021, which is shared in advance between data collection server2024and terminal2021. Examples of the identifier include an international mobile subscriber identity (IMSI) that is a unique number stored in a subscriber identity module (SIM) card. An identifier for use in authentication and an identifier for use in encryption of data may be identical or different. The authentication or the encryption of data between data collection server2024and terminal2021is feasible when both data collection server2024and terminal2021have a function of performing the process. The process does not depend on a communication system used by communication device2022that performs relay. Accordingly, since it is possible to perform the common authentication or encryption without considering which communication system terminal2021uses, the convenience for the user in building the system is increased. However, the expression "does not depend on a communication system used by communication device2022that performs relay" means that a change according to a communication system is not essential. In other words, in order to improve the transfer efficiency or ensure security, the authentication or the encryption of data between data collection server2024and terminal2021may be changed according to a communication system used by a relay device. Data collection server2024may provide client device2026with a User Interface (UI) that manages data collection rules such as types of position-related data collected from terminal2021and data collection schedules. Accordingly, a user can specify, for example, terminal2021from which data is to be collected using client device2026, a data collection time, and a data collection frequency. Additionally, data collection server2024may specify, for example, a region on a map from which data is to be desirably collected, and collect position-related data from terminal2021included in the region.
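For illustration only, the following is a minimal sketch of how the data collection rules managed through such a UI might be represented on the server side. The keys and values are assumptions introduced here, not a format defined in the embodiment.

```python
# Illustrative data collection rules, managed per terminal through the UI.
collection_rules = {
    "terminal-0001": {
        "collect": True,
        "data_types": ["lidar", "camera"],   # types of position-related data to collect
        "schedule": "hourly",                # data collection schedule
        "frequency_per_hour": 4,             # data collection frequency
    },
    "terminal-0002": {
        "collect": False,                    # collection disabled for this terminal
    },
}

def should_collect(terminal_id):
    """Return whether data collection is currently enabled for the given terminal."""
    rule = collection_rules.get(terminal_id, {})
    return bool(rule.get("collect", False))

print(should_collect("terminal-0001"))  # True
```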
When the data collection rules are managed on a per terminal2021basis, client device2026presents, on a screen, a list of terminals2021or sensors to be managed. The user sets, for example, a necessity for data collection or a collection schedule for each item in the list. When a region on a map from which data is to be desirably collected is specified, client device2026presents, on a screen, a two-dimensional or three-dimensional map of a region to be managed. The user selects the region from which data is to be collected on the displayed map. Examples of the region selected on the map include a circular or rectangular region having a point specified on the map as the center, or a circular or rectangular region specifiable by a drag operation. Client device2026may also select a region in a preset unit such as a city, an area or a block in a city, or a main road, etc. Instead of specifying a region using a map, a region may be set by inputting values of a latitude and a longitude, or a region may be selected from a list of candidate regions derived based on inputted text information. Text information is, for example, a name of a region, a city, or a landmark. Moreover, data may be collected while the user dynamically changes a specified region by specifying one or more terminals2021and setting a condition such as within 100 meters of one or more terminals2021. When client device2026includes a sensor such as a camera, a region on a map may be specified based on a position of client device2026in an actual space obtained from sensor data. For example, client device2026may estimate a self-location using sensor data, and specify, as a region from which data is to be collected, a region within a predetermined distance from a point on a map corresponding to the estimated location or a region within a distance specified by the user. Client device2026may also specify, as the region from which the data is to be collected, a sensing region of the sensor, that is, a region corresponding to obtained sensor data. Alternatively, client device2026may specify, as the region from which the data is to be collected, a region based on a location corresponding to sensor data specified by the user. Either client device2026or data collection server2024may estimate a region on a map or a location corresponding to sensor data. When a region on a map is specified, data collection server2024may specify terminal2021within the specified region by collecting current position information of each terminal2021, and may send a transmission request for position-related data to specified terminal2021. Alternatively, rather than data collection server2024specifying terminal2021within the region, data collection server2024may transmit information indicating the specified region to terminal2021, terminal2021may determine whether terminal2021is present within the specified region, and terminal2021may transmit position-related data when determining that terminal2021is present within the specified region. Data collection server2024transmits, to client device2026, data such as a list or a map for providing the above-described User Interface (UI) in an application executed by client device2026. Data collection server2024may transmit, to client device2026, not only the data such as the list or the map but also an application program. Additionally, the above UI may be provided as contents created using HTML displayable by a browser. It should be noted that part of data such as map data may be supplied from a server, such as map server2025, other than data collection server2024.
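For illustration only, the following is a minimal sketch of checking whether a terminal's current position falls inside a circular region specified on the map (a center point plus a radius). Planar coordinates are assumed for simplicity; a real system would account for the map projection, and either the server or the terminal could perform this check as described above.

```python
import math

def in_circular_region(position, center, radius_m):
    """Return True when a 2D position lies within radius_m of the region center."""
    dx = position[0] - center[0]
    dy = position[1] - center[1]
    return math.hypot(dx, dy) <= radius_m

# Example: select terminals within 100 meters of a specified point.
terminals = {"terminal-0001": (10.0, 20.0), "terminal-0002": (500.0, 20.0)}
center, radius = (0.0, 0.0), 100.0
selected = [tid for tid, pos in terminals.items() if in_circular_region(pos, center, radius)]
print(selected)  # ['terminal-0001']
```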
When client device2026receives an input for notifying the completion of an input such as pressing of a setup key by the user, client device2026transmits the inputted information as configuration information to data collection server2024. Data collection server2024transmits, to each terminal2021, a signal for requesting position-related data or notifying position-related data collection rules, based on the configuration information received from client device2026, and collects the position-related data. Next, an example of controlling operation of terminal2021based on additional information added to three-dimensional or two-dimensional map data will be described. In the present configuration, object information that indicates a position of a power feeding part such as a feeder antenna or a feeder coil for wireless power feeding buried under a road or a parking lot is included in or associated with three-dimensional data, and such object information is provided to terminal2021that is a vehicle or a drone. In order to get charged, a vehicle or a drone that has obtained the object information automatically moves so that a position of a charging part such as a charging antenna or a charging coil included in the vehicle or the drone becomes opposite to a region indicated by the object information, and the vehicle or the drone then starts charging itself. It should be noted that when a vehicle or a drone has no automatic driving function, a direction to move in or an operation to perform is presented to a driver or an operator by using an image displayed on a screen, audio, etc. When a position of a charging part calculated based on an estimated self-location is determined to fall within the region indicated by the object information or a predetermined distance from the region, an image or audio to be presented is changed to a content that puts a stop to driving or operating, and the charging is started. Object information need not be information indicating a position of a power feeding part, and may be information indicating a region within which placement of a charging part results in a charging efficiency greater than or equal to a predetermined threshold value. A position indicated by object information may be represented by, for example, the central point of a region indicated by the object information, a region or a line within a two-dimensional plane, or a region, a line, or a plane within a three-dimensional space. According to this configuration, since it is possible to identify the position of the power feeding antenna that cannot be identified from sensing data of LiDAR or an image captured by a camera, it is possible to highly accurately align a wireless charging antenna included in terminal2021such as a vehicle with a wireless power feeding antenna buried under a road. As a result, it is possible to increase a charging speed at the time of wireless charging and improve the charging efficiency. Object information may indicate an object other than a power feeding antenna. For example, three-dimensional data may include a position of an AP for millimeter-wave wireless communication as object information. Accordingly, since terminal2021can identify the position of the AP in advance, terminal2021can steer the directivity of a beam in the direction indicated by the object information and start communication. As a result, it is possible to improve communication quality such as increasing transmission rates, reducing the duration of time before starting communication, and extending a communicable period.
Object information may include information indicating a type of an object corresponding to the object information. In addition, when terminal2021is present within a region in an actual space corresponding to a position in three-dimensional data of the object information or within a predetermined distance from the region, the object information may include information indicating a process to be performed by terminal2021. Object information may be provided by a server different from a server that provides three-dimensional data. When object information is provided separately from three-dimensional data, object groups in which object information used by the same service is stored may be each provided as separate data according to a type of a target service or a target device. Three-dimensional data used in combination with object information may be point cloud data of WLD or keypoint data of SWLD.

Embodiment 6

Hereinafter, a scan order of an octree representation and voxels will be described. A volume is encoded after being converted into an octree structure (made into an octree). The octree structure includes nodes and leaves. Each node has eight nodes or leaves, and each leaf has voxel (VXL) information.FIG.22is a diagram showing an example structure of a volume including voxels.FIG.23is a diagram showing an example of the volume shown inFIG.22having been converted into the octree structure. Among the leaves shown inFIG.23, leaves 1, 2, and 3 respectively represent VXL 1, VXL 2, and VXL 3, and represent VXLs including a point group (hereinafter, active VXLs). An octree is represented by, for example, binary sequences of 1s and 0s. For example, when giving the nodes or the active VXLs a value of 1 and everything else a value of 0, each node and leaf is assigned with the binary sequence shown inFIG.23. This binary sequence is scanned in accordance with a breadth-first or a depth-first scan order. For example, when scanning breadth-first, the binary sequence shown in A ofFIG.24is obtained. When scanning depth-first, the binary sequence shown in B ofFIG.24is obtained. The binary sequences obtained through this scanning are encoded through entropy encoding, which reduces an amount of information. Depth information in the octree representation will be described next. The depth in the octree representation is used in order to control the granularity down to which the point cloud information included in a volume is stored. When a greater depth is set, it is possible to reproduce the point cloud information to a more precise level, but the amount of data for representing the nodes and leaves increases. When a smaller depth is set, however, the amount of data decreases, but some information that the point cloud information originally held is lost, since pieces of point cloud information including different positions and different colors are now considered as pieces of point cloud information including the same position and the same color. For example,FIG.25is a diagram showing an example in which the octree with a depth of 2 shown inFIG.23is represented with a depth of 1. The octree shown inFIG.25has a lower amount of data than the octree shown inFIG.23. In other words, the binarized octree shown inFIG.25has a lower bit count than the octree shown inFIG.23. Leaf 1 and leaf 2 shown inFIG.23are represented by leaf 1 shown inFIG.25. In other words, the information on leaf 1 and leaf 2 being in different positions is lost. FIG.26is a diagram showing a volume corresponding to the octree shown inFIG.25.
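For illustration only, and before the voxel-merging example continues in the next paragraph, the following is a minimal sketch of serializing the occupancy bits of a small octree in breadth-first and depth-first order, following the convention above that nodes and active VXLs contribute a 1 and everything else a 0. The nested-list node representation is an assumption introduced here, not the encoding format of the embodiment.

```python
from collections import deque

# An internal node is a list of 8 children; a leaf is 1 (occupied voxel) or 0 (empty).
def breadth_first_bits(root):
    bits, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        for child in node:
            if isinstance(child, list):      # internal node -> 1, visit its children later
                bits.append(1)
                queue.append(child)
            else:                            # leaf: 1 if occupied, else 0
                bits.append(child)
    return bits

def depth_first_bits(root):
    bits = []
    for child in root:
        if isinstance(child, list):
            bits.append(1)
            bits.extend(depth_first_bits(child))
        else:
            bits.append(child)
    return bits

# Toy depth-2 octree with three occupied voxels (analogous to VXL 1, 2, and 3).
octree = [[1, 1, 0, 0, 0, 0, 0, 0], 0, 0, 0, [0, 0, 0, 0, 0, 0, 1, 0], 0, 0, 0]
print(breadth_first_bits(octree))  # occupancy bits level by level
print(depth_first_bits(octree))    # occupancy bits branch by branch
```

Either bit sequence would then be entropy encoded, as described above.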
VXL 1 and VXL 2 shown inFIG.22correspond to VXL 12 shown inFIG.26. In this case, the three-dimensional data encoding device generates color information of VXL 12 shown inFIG.26using color information of VXL 1 and VXL 2 shown inFIG.22. For example, the three-dimensional data encoding device calculates an average value, a median, a weighted average value, or the like of the color information of VXL 1 and VXL 2 as the color information of VXL 12. In this manner, the three-dimensional data encoding device may control a reduction of the amount of data by changing the depth of the octree. The three-dimensional data encoding device may set the depth information of the octree to units of worlds, units of spaces, or units of volumes. In this case, the three-dimensional data encoding device may append the depth information to header information of the world, header information of the space, or header information of the volume. In all worlds, spaces, and volumes associated with different times, the same value may be used as the depth information. In this case, the three-dimensional data encoding device may append the depth information to header information managing the worlds associated with all times.

Embodiment 7

Hereinafter, a method using a RAHT (Region Adaptive Hierarchical Transform) will be described as another method of encoding the attribute information of a three-dimensional point.FIG.27is a diagram for describing the encoding of the attribute information by using a RAHT. First, the three-dimensional data encoding device generates Morton codes based on the geometry information of three-dimensional points, and sorts the attribute information of the three-dimensional points in the order of the Morton codes. For example, the three-dimensional data encoding device may perform sorting in the ascending order of the Morton codes. Note that the sorting order is not limited to the order of the Morton codes, and other orders may be used. Next, the three-dimensional data encoding device generates a high-frequency component and a low-frequency component of the layer L by applying the Haar conversion to the attribute information of two adjacent three-dimensional points in the order of the Morton codes. For example, the three-dimensional data encoding device may use the Haar conversion of 2×2 matrices. The generated high-frequency component is included in a coding coefficient as the high-frequency component of the layer L, and the generated low-frequency component is used as the input value for the higher layer L+1 of the layer L. After generating the high-frequency component of the layer L by using the attribute information of the layer L, the three-dimensional data encoding device subsequently performs processing of the layer L+1. In the processing of the layer L+1, the three-dimensional data encoding device generates a high-frequency component and a low-frequency component of the layer L+1 by applying the Haar conversion to two low-frequency components obtained by the Haar conversion of the attribute information of the layer L. The generated high-frequency component is included in a coding coefficient as the high-frequency component of the layer L+1, and the generated low-frequency component is used as the input value for the higher layer L+2 of the layer L+1. The three-dimensional data encoding device repeats such layer processing, and determines that the highest layer Lmax has been reached at the time when a low-frequency component that is input to a layer becomes one.
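For illustration only, the following is a minimal sketch of this layer-by-layer procedure under two simplifying assumptions: the weights are equal (so, as in the worked example that follows, the low-frequency component of a pair is the average and the high-frequency component is the difference), and pairs are formed by simple adjacency in the sorted order rather than by shared parent nodes. The Morton code helper is likewise a simplified bit-interleaving sketch.

```python
def morton_code(x, y, z, bits=10):
    """Interleave the bits of three non-negative integer coordinates."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

def haar_layers(attributes):
    """Return (high-frequency coefficients per layer, final low-frequency value)
    for attribute values already sorted in Morton order. A value with no pair
    is forwarded to the next layer unchanged."""
    layers, values = [], list(attributes)
    while len(values) > 1:
        lows, highs = [], []
        for i in range(0, len(values) - 1, 2):
            a, b = values[i], values[i + 1]
            lows.append((a + b) / 2)   # low-frequency component (equal weights)
            highs.append(a - b)        # high-frequency component
        if len(values) % 2 == 1:
            lows.append(values[-1])    # unpaired value passed to the higher layer
        layers.append(highs)
        values = lows
    return layers, values[0]

# Sort geometry in Morton order, then transform six attribute values a0..a5.
points = [(3, 1, 0), (0, 0, 0), (1, 2, 1)]
points.sort(key=lambda p: morton_code(*p))
highs, dc = haar_layers([10, 12, 7, 9, 4, 6])
print(highs, dc)  # [[-2, -2, -2], [3.0], [4.5]] 7.25
```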
The three-dimensional data encoding device includes the low-frequency component of the layer Lmax−1 that is input to the layer Lmax in a coding coefficient. Then, the value of the low-frequency component or high-frequency component included in the coding coefficient is quantized, and is encoded by using entropy encoding or the like. Note that, when only one three-dimensional point exists as two adjacent three-dimensional points at the time of application of the Haar conversion, the three-dimensional data encoding device may use the value of the attribute information of the existing one three-dimensional point as the input value for a higher layer. In this manner, the three-dimensional data encoding device hierarchically applies the Haar conversion to the input attribute information, generates a high-frequency component and a low-frequency component of the attribute information, and performs encoding by applying quantization described later or the like. Accordingly, the coding efficiency can be improved. When the attribute information is N dimensional, the three-dimensional data encoding device may independently apply the Haar conversion for each dimension, and may calculate each coding coefficient. For example, when the attribute information is color information (RGB, YUV, or the like), the three-dimensional data encoding device applies the Haar conversion for each component, and calculates each coding coefficient. The three-dimensional data encoding device may apply the Haar conversion in the order of the layers L, L+1, . . . , Lmax. The closer the layer is to the layer Lmax, the more low-frequency components of the input attribute information are included in the generated coding coefficient. w0 and w1 shown inFIG.27are the weights assigned to each three-dimensional point. For example, the three-dimensional data encoding device may calculate the weight based on the distance information between two adjacent three-dimensional points to which the Haar conversion is applied, or the like. For example, the three-dimensional data encoding device may improve the coding efficiency such that the closer the distance, the greater the weight. Note that the three-dimensional data encoding device may calculate this weight with another technique, or need not use the weight. In the example shown inFIG.27, the pieces of the input attribute information are a0, a1, a2, a3, a4, and a5. Additionally, Ta1, Ta5, Tb1, Tb3, Tc1, and d0 are encoded among the coding coefficients after the Haar conversion. The other coding coefficients (b0, b2, c0, and the like) are intermediate values, and are not encoded. Specifically, in the example shown inFIG.27, the high-frequency component Ta1 and the low-frequency component b0 are generated by performing the Haar conversion on a0 and a1. Here, when the weights w0 and w1 are equal, the low-frequency component b0 is the average value of a0 and a1, and the high-frequency component Ta1 is the difference between a0 and a1. Since there is no attribute information to be paired with a2, a2 is used as b1 as is. Similarly, since there is no attribute information to be paired with a3, a3 is used as b2 as is. Additionally, the high-frequency component Ta5 and the low-frequency component b3 are generated by performing the Haar conversion on a4 and a5. In the layer L+1, the high-frequency component Tb1 and the low-frequency component c0 are generated by performing the Haar conversion on b0 and b1.
Similarly, the high-frequency component Tb3 and the low-frequency component c1 are generated by performing the Haar conversion on b2 and b3. In the layer Lmax−1, the high-frequency component Tc1 and the low-frequency component d0 are generated by performing the Haar conversion on c0 and c1. The three-dimensional data encoding device may encode the coding coefficients to which the Haar conversion has been applied, after quantizing the coding coefficients. For example, the three-dimensional data encoding device performs quantization by dividing the coding coefficient by the quantization scale (also called the quantization step (QS)). In this case, the smaller the quantization scale, the smaller the error (quantization error) that may occur due to quantization. Conversely, the larger the quantization scale, the larger the quantization error. Note that the three-dimensional data encoding device may change the value of the quantization scale for each layer.FIG.28is a diagram showing an example of setting the quantization scale for each layer. For example, the three-dimensional data encoding device sets smaller quantization scales to the higher layers, and larger quantization scales to the lower layers. Since the coding coefficients of the three-dimensional points belonging to the higher layers include more low-frequency components than the lower layers, there is a high possibility that the coding coefficients are important components in human visual characteristics and the like. Therefore, by suppressing the quantization error that may occur in the higher layers by making the quantization scales for the higher layers small, visual deterioration can be suppressed, and the coding efficiency can be improved. Note that the three-dimensional data encoding device may add the quantization scale for each layer to a header or the like. Accordingly, the three-dimensional decoding device can correctly decode the quantization scale, and can appropriately decode a bitstream. Additionally, the three-dimensional data encoding device may adaptively switch the value of the quantization scale according to the importance of a current three-dimensional point to be encoded. For example, the three-dimensional data encoding device uses a small quantization scale for a three-dimensional point with high importance, and uses a large quantization scale for a three-dimensional point with low importance. For example, the three-dimensional data encoding device may calculate the importance from the weight at the time of the Haar conversion, or the like. For example, the three-dimensional data encoding device may calculate the quantization scale by using the sum of w0 and w1. In this manner, by making the quantization scale of a three-dimensional point with high importance small, the quantization error becomes small, and the coding efficiency can be improved. Additionally, the value of the QS may be made smaller for the higher layers. Here, a coding coefficient Ta1q after quantization of the coding coefficient Ta1 of the attribute information a1 is represented by Ta1/QS_L. Note that QS may be the same value in all the layers or a part of the layers. The QW (Quantization Weight) is the value that represents the importance of a current three-dimensional point to be encoded. For example, the above-described sum of w0 and w1 may be used as the QW.
Accordingly, the higher the layer, the larger the value of the QW, and the prediction efficiency can be improved by suppressing the quantization error of the three-dimensional point. For example, the three-dimensional data encoding device may first initialize the values of the QWs of all the three-dimensional points with a value of 1, and may update the QW of each three-dimensional point by using the values of w0 and w1 at the time of the Haar conversion. Alternatively, the three-dimensional data encoding device may change the initial value according to the layers, without initializing the values of the QWs of all the three-dimensional points with a value of 1. For example, the quantization scales for the higher layers become smaller by setting larger QW initial values for the higher layers. Accordingly, since the prediction error in the higher layers can be suppressed, the prediction accuracy of the lower layers can be increased, and the coding efficiency can be improved. Note that the three-dimensional data encoding device need not necessarily use the QW. When using the QW, the quantized value Ta1q of Ta1 is calculated by (Equation K1) and (Equation K2).

[Math. 1]
Ta1q = ((Ta1 + QS_L/2)/QS_L) × QW_Ta1 (Equation K1)
QW_Ta1 = 1 + Σ_{i=0}^{1} w_i (Equation K2)

Additionally, the three-dimensional data encoding device scans and encodes the coding coefficients (unsigned integer values) after quantization in a certain order. For example, the three-dimensional data encoding device encodes a plurality of three-dimensional points from the three-dimensional points included in the higher layers toward the lower layers in order. For example, in the example shown inFIG.27, the three-dimensional data encoding device encodes a plurality of three-dimensional points in the order of Tc1q, Tb1q, Tb3q, Ta1q, and Ta5q, starting from d0q included in the highest layer Lmax. Here, there is a tendency that the lower the layer L, the more likely it is that the coding coefficient after quantization becomes 0. This is due to factors such as the following. Since the coding coefficient of the lower layer L shows a higher frequency component than the higher layers, there is a tendency that the coding coefficient becomes 0 depending on a current three-dimensional point. Additionally, by switching the quantization scale according to the above-described importance or the like, the lower the layer, the larger the quantization scale, and the more likely it is that the coding coefficient after quantization becomes 0. In this manner, the lower the layer, the more likely it is that the coding coefficient after quantization becomes 0, and the value 0 consecutively occurs in the first code sequence.FIG.29is a diagram showing an example of the first code sequence and the second code sequence. The three-dimensional data encoding device counts the number of times that the value 0 occurs in the first code sequence, and encodes the number of times that the value 0 consecutively occurs, instead of the consecutive values 0. That is, the three-dimensional data encoding device generates a second code sequence by replacing the coding coefficient of the consecutive values 0 in the first code sequence with the number of consecutive times (ZeroCnt) of 0. Accordingly, when there are consecutive values 0 of the coding coefficients after quantization, the coding efficiency can be improved by encoding the number of consecutive times of 0, rather than encoding a lot of 0s. Additionally, the three-dimensional data encoding device may entropy encode the value of ZeroCnt.
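Before the binarization of ZeroCnt is discussed, the following is a minimal sketch, for illustration only, of turning a first code sequence of quantized coefficients into a second code sequence in which each run of zeros is replaced by its length (ZeroCnt). The tagged-tuple output format is an assumption introduced here; entropy coding of the symbols is not shown.

```python
def to_second_code_sequence(coefficients):
    """Replace runs of zero coefficients with their run length (ZeroCnt)."""
    out, zero_run = [], 0
    for c in coefficients:
        if c == 0:
            zero_run += 1
        else:
            out.append(("ZeroCnt", zero_run))  # number of consecutive zeros before c
            out.append(("value", c))
            zero_run = 0
    if zero_run:
        out.append(("ZeroCnt", zero_run))      # trailing zeros, if any
    return out

# Example: quantized coefficients ordered from the higher layers toward the lower ones.
print(to_second_code_sequence([5, 0, 0, 0, 3, 0, 1, 0, 0]))
# [('ZeroCnt', 0), ('value', 5), ('ZeroCnt', 3), ('value', 3),
#  ('ZeroCnt', 1), ('value', 1), ('ZeroCnt', 2)]
```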
For example, the three-dimensional data encoding device binarizes the value of ZeroCnt with the truncated unary code of the total number T of the encoded three-dimensional points, and arithmetically encodes each bit after the binarization.FIG.30is a diagram showing an example of the truncated unary code in the case where the total number of encoded three-dimensional points is T. At this time, the three-dimensional data encoding device may improve the coding efficiency by using a different coding table for each bit. For example, the three-dimensional data encoding device uses coding table 1 for the first bit, uses coding table 2 for the second bit, and uses coding table 3 for the subsequent bits. In this manner, the three-dimensional data encoding device can improve the coding efficiency by switching the coding table for each bit. Additionally, the three-dimensional data encoding device may arithmetically encode ZeroCnt after binarizing ZeroCnt with an Exponential-Golomb code. Accordingly, when the value of ZeroCnt tends to become large, the efficiency can be improved more than with the binarized arithmetic encoding using the truncated unary code. Note that the three-dimensional data encoding device may add a flag for switching between using the truncated unary code and using the Exponential-Golomb code to a header. Accordingly, the three-dimensional data encoding device can improve the coding efficiency by selecting the optimum binarization method. Additionally, the three-dimensional data decoding device can correctly decode a bitstream by referring to the flag included in the header to switch the binarization method. The three-dimensional decoding device may convert the decoded coding coefficient after the quantization from an unsigned integer value to a signed integer value with a method that is the inverse of the method performed by the three-dimensional data encoding device. Accordingly, when the coding coefficient is entropy encoded, the three-dimensional decoding device can appropriately decode a bitstream generated without the occurrence of a negative integer being taken into account. Note that the three-dimensional decoding device does not necessarily need to convert the coding coefficient from an unsigned integer value to a signed integer value. For example, when decoding a bitstream including an encoded bit that has been separately entropy encoded, the three-dimensional decoding device may decode the sign bit. The three-dimensional decoding device decodes the coding coefficient after the quantization converted to the signed integer value, by the inverse quantization and the inverse Haar conversion. Additionally, the three-dimensional decoding device utilizes the coding coefficient after the decoding for the prediction of the three-dimensional points that are decoded after the current three-dimensional point. Specifically, the three-dimensional decoding device calculates the inverse quantized value by multiplying the coding coefficient after the quantization by the decoded quantization scale. Next, the three-dimensional decoding device obtains the decoded value by applying the inverse Haar conversion described later to the inverse quantized value. For example, the three-dimensional decoding device converts the decoded unsigned integer value to a signed integer value with the following method. When the LSB (least significant bit) of the decoded unsigned integer value a2u is 1, the signed integer value Ta1q is set to −((a2u+1)>>1). When the LSB of the decoded unsigned integer value a2u is not 1 (when it is 0), the signed integer value Ta1q is set to (a2u>>1).
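For illustration only, the following is a minimal sketch of the two conversions just described: a common form of truncated unary binarization with maximum value T, and the mapping between signed coefficients and the unsigned values that are entropy coded. The unsigned-to-signed rule follows the LSB-based formulas stated above; the function names are assumptions introduced here.

```python
def truncated_unary(value, t):
    """Binarize value in [0, t] as `value` ones followed by a terminating zero;
    the maximum value t needs no terminating zero (a common definition)."""
    return [1] * value + ([0] if value < t else [])

def signed_to_unsigned(v):
    """Encoder-side mapping: negative values to odd codes, non-negative to even codes."""
    return -1 - 2 * v if v < 0 else 2 * v

def unsigned_to_signed(u):
    """Decoder-side mapping stated above, based on the LSB of the decoded value."""
    return -((u + 1) >> 1) if (u & 1) == 1 else u >> 1

print(truncated_unary(3, 5))   # [1, 1, 1, 0]
print(truncated_unary(5, 5))   # [1, 1, 1, 1, 1]
for v in (-3, -1, 0, 2):
    assert unsigned_to_signed(signed_to_unsigned(v)) == v  # round-trips correctly
```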
Additionally, the inverse quantized value of Ta1 is represented by Ta1q×QS_L. Here, Ta1q is the quantized value of Ta1. In addition, QS_L is the quantization step for the layer L. Additionally, the QS may be the same value for all the layers or a part of the layers. In addition, the three-dimensional data encoding device may add the information indicating the QS to a header or the like. Accordingly, the three-dimensional decoding device can correctly perform inverse quantization by using the same QS as the QS used by the three-dimensional data encoding device. Next, the inverse Haar conversion will be described.FIG.31is a diagram for describing the inverse Haar conversion. The three-dimensional decoding device decodes the attribute value of a three-dimensional point by applying the inverse Haar conversion to the coding coefficient after the inverse quantization. First, the three-dimensional decoding device generates the Morton codes based on the geometry information of three-dimensional points, and sorts the three-dimensional points in the order of the Morton codes. For example, the three-dimensional decoding device may perform the sorting in ascending order of the Morton codes. Note that the sorting order is not limited to the order of the Morton codes, and other orders may be used. Next, the three-dimensional decoding device restores the attribute information of three-dimensional points that are adjacent to each other in the order of the Morton codes in the layer L, by applying the inverse Haar conversion to the coding coefficient including the low-frequency component of the layer L+1, and the coding coefficient including the high-frequency component of the layer L. For example, the three-dimensional decoding device may use the inverse Haar conversion of a 2×2 matrix. The attribute information of the restored layer L is used as the input value for the lower layer L−1. The three-dimensional decoding device repeats such layer processing, and ends the processing when all the attribute information of the bottom layer is decoded. Note that, when only one three-dimensional point exists as two three-dimensional points that are adjacent to each other in the layer L−1 at the time of application of the inverse Haar conversion, the three-dimensional decoding device may assign the value of the encoding component of the layer L to the attribute value of the one existing three-dimensional point. Accordingly, the three-dimensional decoding device can correctly decode a bitstream whose coding efficiency has been improved by applying the Haar conversion to all the values of the input attribute information. When the attribute information is N dimensional, the three-dimensional decoding device may independently apply the inverse Haar conversion for each dimension, and may decode each coding coefficient. For example, when the attribute information is color information (RGB, YUV, or the like), the three-dimensional data decoding device applies the inverse Haar conversion to the coding coefficient for each component, and decodes each attribute value. The three-dimensional decoding device may apply the inverse Haar conversion in the order of the layers Lmax, . . . , L+1, L. Additionally, w0 and w1 shown inFIG.31are the weights assigned to each three-dimensional point. For example, the three-dimensional data decoding device may calculate the weight based on the distance information between two adjacent three-dimensional points to which the inverse Haar conversion is applied, or the like.
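For illustration only, the following is a minimal sketch of the inverse Haar conversion under the same simplifying assumptions as the encoding sketch given earlier (equal weights, so low = average and high = difference, hence a = low + high/2 and b = low − high/2, with an unpaired value passed through unchanged). Inverse quantization itself is simply the multiplication by QS_L stated above and is not repeated here; the function name is an assumption introduced for illustration.

```python
def inverse_haar(dc, highs_per_layer):
    """Reconstruct attribute values from the final low-frequency value (dc) and the
    high-frequency coefficients of each layer, from the highest layer downward."""
    values = [dc]
    for highs in reversed(highs_per_layer):
        rebuilt = []
        for i, low in enumerate(values):
            if i < len(highs):
                rebuilt.append(low + highs[i] / 2)   # first point of the pair
                rebuilt.append(low - highs[i] / 2)   # second point of the pair
            else:
                rebuilt.append(low)                  # value that had no pair
        values = rebuilt
    return values

# Round-trips the example used for the encoding sketch earlier.
print(inverse_haar(7.25, [[-2, -2, -2], [3.0], [4.5]]))
# [10.0, 12.0, 7.0, 9.0, 4.0, 6.0]
```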
For example, the three-dimensional data decoding device may decode a bitstream whose coding efficiency has been improved such that the closer the distance, the greater the weight. In the example shown inFIG.31, the coding coefficients after the inverse quantization are Ta1, Ta5, Tb1, Tb3, Tc1, and d0, and a0, a1, a2, a3, a4, and a5 are obtained as the decoded values. FIG.32is a diagram showing a syntax example of the attribute information (attribute_data). The attribute information (attribute_data) includes the number of consecutive zeros (ZeroCnt), the number of attribute dimensions (attribute_dimension), and the coding coefficient (value [j] [i]). The number of consecutive zeros (ZeroCnt) indicates the number of times that the value 0 continues in the coding coefficient after quantization. Note that the three-dimensional data encoding device may arithmetically encode ZeroCnt after binarizing ZeroCnt. Additionally, as shown inFIG.32, the three-dimensional data encoding device may determine whether or not the layer L (layerL) to which the coding coefficient belongs is equal to or more than a predefined threshold value TH_layer, and may switch the information added to a bitstream according to the determination result. For example, when the determination result is true, the three-dimensional data encoding device adds all the coding coefficients of the attribute information to a bitstream. In addition, when the determination result is false, the three-dimensional data encoding device may add a part of the coding coefficients to a bitstream. Specifically, when the determination result is true, the three-dimensional data encoding device adds the encoded result of the three dimensions of the color information RGB or YUV to a bitstream. When the determination result is false, the three-dimensional data encoding device may add a part of information such as G or Y of the color information to a bitstream, and need not add the other components to the bitstream. In this manner, the three-dimensional data encoding device can improve the coding efficiency by not adding to a bitstream a part of the coding coefficients of the layers (the layers smaller than TH_layer) that include the coding coefficients indicating the high-frequency components, for which degradation is less visually noticeable. The number of attribute dimensions (attribute_dimension) indicates the number of dimensions of the attribute information. For example, when the attribute information is the color information (RGB, YUV, or the like) of a three-dimensional point, since the color information is three-dimensional, the number of attribute dimensions is set to a value 3. When the attribute information is the reflectance, since the reflectance is one-dimensional, the number of attribute dimensions is set to a value 1. Note that the number of attribute dimensions may be added to the header of the attribute information of a bitstream or the like. The coding coefficient (value [j] [i]) indicates the coding coefficient after quantization of the attribute information of the j-th dimension of the i-th three-dimensional point. For example, when the attribute information is color information, value [99] [1] indicates the coding coefficient of the second dimension (for example, the G value) of the 100th three-dimensional point. Additionally, when the attribute information is reflectance information, value [119] [0] indicates the coding coefficient of the first dimension (for example, the reflectance) of the 120th three-dimensional point.
Note that, when the following conditions are satisfied, the three-dimensional data encoding device may subtract the value 1 from value [j] [i] and may entropy encode the obtained value. In this case, the three-dimensional data decoding device restores the coding coefficient by adding the value 1 to value [j] [i] after entropy decoding. The above-described conditions are (1) when attribute_dimension=1, or (2) when attribute_dimension is 2 or more, and when the values of all the dimensions are equal. For example, when the attribute information is the reflectance, since attribute_dimension=1, the three-dimensional data encoding device subtracts the value 1 from the coding coefficient to calculate value, and encodes the calculated value. The three-dimensional decoding device calculates the coding coefficient by adding the value 1 to the value after decoding. More specifically, for example, when the coding coefficient of the reflectance is 10, the three-dimensional data encoding device encodes the value 9 obtained by subtracting the value 1 from the value 10 of the coding coefficient. The three-dimensional data decoding device adds the value 1 to the decoded value 9 to calculate the value 10 of the coding coefficient. Additionally, since attribute_dimension=3 when the attribute information is the color, for example, when the coding coefficient after quantization of each of the components R, G, and B is the same, the three-dimensional data encoding device subtracts the value 1 from each coding coefficient, and encodes the obtained value. The three-dimensional data decoding device adds the value 1 to the value after decoding. More specifically, for example, when the coding coefficient of R, G, and B=(1, 1, 1), the three-dimensional data encoding device encodes (0, 0, 0). The three-dimensional data decoding device adds 1 to each component of (0, 0, 0) to calculate (1, 1, 1). Additionally, when the coding coefficients of R, G, and B=(2, 1, 2), the three-dimensional data encoding device encodes (2, 1, 2) as is. The three-dimensional data decoding device uses the decoded (2, 1, 2) as is as the coding coefficients. In this manner, since providing ZeroCnt ensures that the pattern in which the values of all the dimensions are 0 does not occur as value, the value obtained by subtracting 1 from the value indicated by value can be encoded. Therefore, the coding efficiency can be improved. Additionally, value [0] [i] shown inFIG.32indicates the coding coefficient after quantization of the attribute information of the first dimension of the i-th three-dimensional point. As shown inFIG.32, when the layer L (layerL) to which the coding coefficient belongs is smaller than the threshold value TH_layer, the code amount may be reduced by adding only the attribute information of the first dimension to a bitstream (not adding the attribute information of the second and following dimensions to the bitstream). The three-dimensional data encoding device may switch the calculation method of the value of ZeroCnt depending on the value of attribute_dimension. For example, when attribute_dimension=3, the three-dimensional data encoding device may count the number of times that the values of the coding coefficients of all the components (dimensions) become 0.FIG.33is a diagram showing an example of the coding coefficient and ZeroCnt in this case.
For example, in the case of the color information shown inFIG.33, the three-dimensional data encoding device counts the number of consecutive coding coefficients in which all of the R, G, and B components are 0, and adds the counted number to a bitstream as ZeroCnt. Accordingly, it becomes unnecessary to encode ZeroCnt for each component, and the overhead can be reduced. Therefore, the coding efficiency can be improved. Note that the three-dimensional data encoding device may calculate ZeroCnt for each dimension even when attribute_dimension is two or more, and may add the calculated ZeroCnt to a bitstream. FIG.34is a flowchart of the three-dimensional data encoding processing according to the present embodiment. First, the three-dimensional data encoding device encodes geometry information (geometry) (S6601). For example, the three-dimensional data encoding device performs encoding by using an octree representation. Next, the three-dimensional data encoding device converts the attribute information (S6602). For example, after the encoding of the geometry information, when the position of a three-dimensional point is changed due to quantization or the like, the three-dimensional data encoding device reassigns the attribute information of the original three-dimensional point to the three-dimensional point after the change. Note that the three-dimensional data encoding device may interpolate the value of the attribute information according to the amount of change of the position to perform the reassignment. For example, the three-dimensional data encoding device detects N three-dimensional points before the change near the three-dimensional position after the change, performs the weighted averaging of the value of the attribute information of the N three-dimensional points based on the distance from the three-dimensional position after the change to each of the N three-dimensional points, and sets the obtained value as the value of the attribute information of the three-dimensional point after the change. Additionally, when two or more three-dimensional points are changed to the same three-dimensional position due to quantization or the like, the three-dimensional data encoding device may assign the average value of the attribute information in the two or more three-dimensional points before the change as the value of the attribute information after the change. Next, the three-dimensional data encoding device encodes the attribute information (S6603). For example, when encoding a plurality of pieces of attribute information, the three-dimensional data encoding device may encode the plurality of pieces of attribute information in order. For example, when encoding the color and the reflectance as the attribute information, the three-dimensional data encoding device generates a bitstream to which the encoding result of the reflectance is added after the encoding result of the color. Note that a plurality of encoding results of the attribute information added to a bitstream may be in any order. Additionally, the three-dimensional data encoding device may add the information indicating the start location of the encoded data of each attribute information in a bitstream to a header or the like. Accordingly, since the three-dimensional data decoding device can selectively decode the attribute information that needs to be decoded, the decoding processing of the attribute information that does not need to be decoded can be omitted.
Therefore, the processing amount for the three-dimensional data decoding device can be reduced. Additionally, the three-dimensional data encoding device may encode a plurality of pieces of attribute information in parallel, and may integrate the encoding results into one bitstream. Accordingly, the three-dimensional data encoding device can encode a plurality of pieces of attribute information at high speed. FIG.35is a flowchart of the attribute information encoding processing (S6603). First, the three-dimensional data encoding device generates a coding coefficient from attribute information by the Haar conversion (S6611). Next, the three-dimensional data encoding device applies quantization to the coding coefficient (S6612). Next, the three-dimensional data encoding device generates encoded attribute information (bitstream) by encoding the coding coefficient after the quantization (S6613). Additionally, the three-dimensional data encoding device applies inverse quantization to the coding coefficient after the quantization (S6614). Next, the three-dimensional decoding device decodes the attribute information by applying the inverse Haar conversion to the coding coefficient after the inverse quantization (S6615). For example, the decoded attribute information is referred to in the following encoding. FIG.36is a flowchart of the coding coefficient encoding processing (S6613). First, the three-dimensional data encoding device converts a coding coefficient from a signed integer value to an unsigned integer value (S6621). For example, the three-dimensional data encoding device converts a signed integer value to an unsigned integer value as follows. When signed integer value Ta1q is smaller than 0, the unsigned integer value is set to −1−(2×Ta1q). When the signed integer value Ta1q is equal to or more than 0, the unsigned integer value is set to 2×Ta1q. Note that, when the coding coefficient does not become a negative value, the three-dimensional data encoding device may encode the coding coefficient as the unsigned integer value as is. When not all coding coefficients have been processed (No in S6622), the three-dimensional data encoding device determines whether the value of the coding coefficient to be processed is zero (S6623). When the value of the coding coefficient to be processed is zero (Yes in S6623), the three-dimensional data encoding device increments ZeroCnt by 1 (S6624), and returns to step S6622. When the value of the coding coefficient to be processed is not zero (No in S6623), the three-dimensional data encoding device encodes ZeroCnt, and resets ZeroCnt to zero (S6625). Additionally, the three-dimensional data encoding device arithmetically encodes the coding coefficient to be processed (S6626), and returns to step S6622. For example, the three-dimensional data encoding device performs binary arithmetic encoding. In addition, the three-dimensional data encoding device may subtract the value 1 from the coding coefficient, and may encode the obtained value. Additionally, the processing of steps S6623to S6626is repeatedly performed for each coding coefficient. In addition, when all the coding coefficients have been processed (Yes in S6622), the three-dimensional data encoding device ends the processing. FIG.37is a flowchart of the three-dimensional data decoding processing according to the present embodiment. First, the three-dimensional decoding device decodes geometry information (geometry) from a bitstream (S6631). 
For example, the three-dimensional data decoding device performs decoding by using an octree representation. Next, the three-dimensional decoding device decodes the attribute information from the bitstream (S6632). For example, when decoding a plurality of pieces of attribute information, the three-dimensional decoding device may decode the plurality of pieces of attribute information in order. For example, when decoding the color and the reflectance as the attribute information, the three-dimensional data decoding device decodes the encoding result of the color and the encoding result of the reflectance according to the order in which they are added to the bitstream. For example, when the encoding result of the reflectance is added after the encoding result of the color in a bitstream, the three-dimensional data decoding device decodes the encoding result of the color, and thereafter decodes the encoding result of the reflectance. Note that the three-dimensional data decoding device may decode the encoding results of the attribute information added to a bitstream in any order. Additionally, the three-dimensional decoding device may obtain the information indicating the start location of the encoded data of each attribute information in a bitstream by decoding a header or the like. Accordingly, since the three-dimensional data decoding device can selectively decode the attribute information that needs to be decoded, the decoding processing of the attribute information that does not need to be decoded can be omitted. Therefore, the processing amount of the three-dimensional decoding device can be reduced. Additionally, the three-dimensional data decoding device may decode a plurality of pieces of attribute information in parallel, and may integrate the decoding results into one three-dimensional point cloud. Accordingly, the three-dimensional data decoding device can decode a plurality of pieces of attribute information at high speed. FIG.38is a flowchart of the attribute information decoding processing (S6632). First, the three-dimensional decoding device decodes a coding coefficient from a bitstream (S6641). Next, the three-dimensional decoding device applies the inverse quantization to the coding coefficient (S6642). Next, the three-dimensional decoding device decodes the attribute information by applying the inverse Haar conversion to the coding coefficient after the inverse quantization (S6643). FIG.39is a flowchart of the coding coefficient decoding processing (S6641). First, the three-dimensional decoding device decodes ZeroCnt from a bitstream (S6651). When not all coding coefficients have been processed (No in S6652), the three-dimensional decoding device determines whether ZeroCnt is larger than 0 (S6653). When ZeroCnt is larger than zero (Yes in S6653), the three-dimensional decoding device sets the coding coefficient to be processed to 0 (S6654). Next, the three-dimensional decoding device subtracts 1 from ZeroCnt (S6655), and returns to step S6652. When ZeroCnt is zero (No in S6653), the three-dimensional decoding device decodes the coding coefficient to be processed (S6656). For example, the three-dimensional decoding device uses binary arithmetic decoding. Additionally, the three-dimensional decoding device may add the value 1 to the decoded coding coefficient. Next, the three-dimensional decoding device decodes ZeroCnt, sets the obtained value to ZeroCnt (S6657), and returns to step S6652. 
Additionally, the processing of steps S6653to S6657is repeatedly performed for each coding coefficient. In addition, when all the coding coefficients have been processed (Yes in S6652), the three-dimensional data decoding device converts a plurality of decoded coding coefficients from unsigned integer values to signed integer values (S6658). For example, the three-dimensional data decoding device may convert the decoded coding coefficients from unsigned integer values to signed integer values as follows. When the LSB (least significant bit) of the decoded unsigned integer value Ta1u is 1, the signed integer value Ta1q is set to −((Ta1u+1)>>1). When the LSB of the decoded unsigned integer value Ta1u is not 1 (when it is 0), the signed integer value Ta1q is set to (Ta1u>>1). Note that, when the coding coefficient does not become a negative value, the three-dimensional data decoding device may use the decoded coding coefficient as is as the signed integer value. FIG.40is a block diagram of attribute information encoder6600included in the three-dimensional data encoding device. Attribute information encoder6600includes sorter6601, Haar transformer6602, quantizer6603, inverse quantizer6604, inverse Haar converter6605, memory6606, and arithmetic encoder6607. Sorter6601generates the Morton codes by using the geometry information of three-dimensional points, and sorts the plurality of three-dimensional points in the order of the Morton codes. Haar transformer6602generates the coding coefficient by applying the Haar conversion to the attribute information. Quantizer6603quantizes the coding coefficient of the attribute information. Inverse quantizer6604inverse quantizes the coding coefficient after the quantization. Inverse Haar converter6605applies the inverse Haar conversion to the coding coefficient. Memory6606stores the values of pieces of attribute information of a plurality of decoded three-dimensional points. For example, the attribute information of the decoded three-dimensional points stored in memory6606may be utilized for prediction and the like of an unencoded three-dimensional point. Arithmetic encoder6607calculates ZeroCnt from the coding coefficient after the quantization, and arithmetically encodes ZeroCnt. Additionally, arithmetic encoder6607arithmetically encodes the non-zero coding coefficient after the quantization. Arithmetic encoder6607may binarize the coding coefficient before the arithmetic encoding. In addition, arithmetic encoder6607may generate and encode various kinds of header information. FIG.41is a block diagram of attribute information decoder6610included in the three-dimensional data decoding device. Attribute information decoder6610includes arithmetic decoder6611, inverse quantizer6612, inverse Haar converter6613, and memory6614. Arithmetic decoder6611arithmetically decodes ZeroCnt and the coding coefficient included in a bitstream. Note that arithmetic decoder6611may decode various kinds of header information. Inverse quantizer6612inverse quantizes the arithmetically decoded coding coefficient. Inverse Haar converter6613applies the inverse Haar conversion to the coding coefficient after the inverse quantization. Memory6614stores the values of pieces of attribute information of a plurality of decoded three-dimensional points. For example, the attribute information of the decoded three-dimensional points stored in memory6614may be utilized for prediction of an undecoded three-dimensional point.
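The encoding loop of FIG.36 and the decoding loop of FIG.39, together with the signed/unsigned mapping of steps S6621 and S6658, can be summarized with the following sketch. This is an illustration only: the arithmetic coder is replaced by a plain list of symbols, the names are not from the specification, and a trailing run of zeros is flushed with a final ZeroCnt so that the simplified round trip closes.

```python
def to_unsigned(t):
    # S6621: negative values map to odd codes, non-negative values to even codes
    return -1 - 2 * t if t < 0 else 2 * t

def to_signed(u):
    # S6658: invert the mapping by looking at the LSB of the decoded value
    return -((u + 1) >> 1) if (u & 1) else (u >> 1)

def encode_coefficients(signed_coeffs):
    symbols, zero_cnt = [], 0                # "symbols" stands in for the coded bitstream
    for u in (to_unsigned(t) for t in signed_coeffs):
        if u == 0:
            zero_cnt += 1                    # S6624: count consecutive zero coefficients
        else:
            symbols.append(("ZeroCnt", zero_cnt))   # S6625
            symbols.append(("value", u))            # S6626
            zero_cnt = 0
    if zero_cnt:
        symbols.append(("ZeroCnt", zero_cnt))       # flush a trailing run (sketch only)
    return symbols

def decode_coefficients(symbols, total):
    coeffs, it = [], iter(symbols)
    while len(coeffs) < total:
        _, zero_cnt = next(it)               # S6651/S6657: read ZeroCnt
        coeffs.extend([0] * zero_cnt)        # S6654: output that many zero coefficients
        if len(coeffs) < total:
            _, u = next(it)                  # S6656: one non-zero coefficient
            coeffs.append(to_signed(u))
    return coeffs

coeffs = [0, 3, 0, 0, -1, 0]
assert decode_coefficients(encode_coefficients(coeffs), len(coeffs)) == coeffs
```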
Note that, in the above-described embodiment, although the example has been shown in which the three-dimensional points are encoded in the order of the lower layers to the higher layers as the encoding order, the encoding order is not necessarily limited to this. For example, a method may be used that scans the coding coefficients after the Haar conversion in the order of the higher layers to the lower layers. Note that, also in this case, the three-dimensional data encoding device may encode the number of consecutive times of the value 0 as ZeroCnt. Additionally, the three-dimensional data encoding device may switch whether or not to use the encoding method using ZeroCnt described in the present embodiment per WLD, SPC, or volume. In this case, the three-dimensional data encoding device may add the information indicating whether or not the encoding method using ZeroCnt has been applied to the header information. Accordingly, the three-dimensional data decoding device can appropriately perform decoding. As an example of the switching method, the three-dimensional data encoding device counts the number of times of occurrence of the coding coefficient having a value of 0 with respect to one volume. When the count value exceeds a predefined threshold value, the three-dimensional data encoding device applies the method using ZeroCnt to the next volume, and when the count value is equal to or less than the threshold value, the three-dimensional data encoding device does not apply the method using ZeroCnt to the next volume. Accordingly, since the three-dimensional data encoding device can appropriately switch whether or not to apply the encoding method using ZeroCnt according to the characteristic of a current three-dimensional point to be encoded, the coding efficiency can be improved. Hereinafter, another technique (modification) of the present embodiment will be described. The three-dimensional data encoding device scans and encodes the coding coefficients (unsigned integer values) after the quantization according to a certain order. For example, the three-dimensional data encoding device encodes a plurality of three-dimensional points from the three-dimensional points included in the lower layers toward the higher layers in order. FIG.42is a diagram showing an example of the first code sequence and the second code sequence in the case where this technique is used for the attribute information shown inFIG.27. In the case of this example, the three-dimensional data encoding device encodes the plurality of coding coefficients starting from Ta1q included in the lower layer L, in the order of Ta5q, Tb1q, Tb3q, Tc1q, and d0q. Here, there is a tendency that the lower the layer, the more likely it is that the coding coefficient after quantization becomes 0. This can be attributed to the following reasons, among others. Since the coding coefficients of the lower layers L represent higher frequency components than those of the higher layers, the coding coefficients tend to be 0 depending on the current three-dimensional point to be encoded. Additionally, the quantization scale may be switched according to the above-described importance or the like; the lower the layer, the larger the quantization scale, and the more easily the coding coefficient after the quantization becomes 0. In this manner, the lower the layer, the more likely it is that the coding coefficient after the quantization becomes 0, and the value 0 is likely to be consecutively generated in the first code sequence.
The three-dimensional data encoding device counts the number of times that the value 0 occurs in the first code sequence, and encodes the number of times (ZeroCnt) that the value 0 consecutively occurs, instead of the consecutive values 0. Accordingly, when there are consecutive values 0 of the coding coefficients after the quantization, the coding efficiency can be improved by encoding the number of consecutive times of 0, rather than encoding a lot of 0s. Additionally, the three-dimensional data encoding device may encode the information indicating the total number of times of occurrence of the value 0. Accordingly, the overhead of encoding ZeroCnt can be reduced, and the coding efficiency can be improved. For example, the three-dimensional data encoding device encodes the total number of the coding coefficients having a value of 0 as TotalZeroCnt. Accordingly, in the example shown inFIG.42, at the time when the three-dimensional data decoding device decodes the second ZeroCnt (value 1) included in the second code sequence, the total number of decoded ZeroCnts will be N+1 (=TotalZeroCnt). Therefore, the three-dimensional data decoding device can identify that 0 does not occur after this. Therefore, subsequently, it becomes unnecessary for the three-dimensional data encoding device to encode ZeroCnt for each value, and the code amount can be reduced. Additionally, the three-dimensional data encoding device may entropy encode TotalZeroCnt. For example, the three-dimensional data encoding device binarizes the value of TotalZeroCnt with the truncated unary code of the total number T of the encoded three-dimensional points, and arithmetically encodes each bit after binarization. At this time, the three-dimensional data encoding device may improve the coding efficiency by using a different coding table for each bit. For example, the three-dimensional data encoding device uses coding table 1 for the first bit, uses coding table 2 for the second bit, and coding table 3 for the subsequent bits. In this manner, the three-dimensional data encoding device can improve the coding efficiency by switching the coding table for each bit. Additionally, the three-dimensional data encoding device may arithmetically encode TotalZeroCnt after binarizing TotalZeroCnt with an Exponential-Golomb. Accordingly, when the value of TotalZeroCnt easily becomes large, the efficiency can be more improved than the binarized arithmetic encoding with the truncated unary code. Note that the three-dimensional data encoding device may add a flag for switching between using the truncated unary code and using the Exponential-Golomb to a header. Accordingly, the three-dimensional data encoding device can improve the coding efficiency by selecting the optimum binarization method. Additionally, the three-dimensional data decoding device can correctly decode a bitstream by referring to the flag included in the header to switch the binarization method. FIG.43is a diagram showing a syntax example of the attribute information (attribute_data) in the present modification. The attribute information (attribute_data) shown inFIG.43further includes the total number of zeros (TotalZeroCnt) in addition to the attribute information shown inFIG.32. Note that the other information is the same as that inFIG.32. The total number of zeros (TotalZeroCnt) indicates the total number of the coding coefficients having a value of 0 after quantization. 
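As a small, hedged sketch of the truncated unary binarization and the per-bit coding-table switching mentioned above (the exact bit convention is an assumption here, and the arithmetic coding itself is not shown):

```python
def truncated_unary_bits(v, max_val):
    # Value v in [0, max_val] written as v "1" bits plus a terminating "0";
    # the terminating bit is omitted when v == max_val (assumed convention).
    bits = [1] * v
    if v < max_val:
        bits.append(0)
    return bits

def coding_table_index(bit_position):
    # Table 1 for the first bit, table 2 for the second, table 3 afterwards
    # (returned here as 0-based indices 0, 1, 2).
    return min(bit_position, 2)

# TotalZeroCnt = 3 with max_val = T = 5 -> bits [1, 1, 1, 0],
# coded with tables 1, 2, 3, 3 respectively.
```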
Additionally, the three-dimensional data encoding device may switch the calculation method of the values of TotalZeroCnt and ZeroCnt depending on the value of attribute_dimension. For example, when attribute_dimension=3, the three-dimensional data encoding device may count the number of times that the values of the coding coefficients of all the components (dimensions) become 0.FIG.44is a diagram showing an example of the coding coefficient, ZeroCnt, and TotalZeroCnt in this case. For example, in the case of the color information shown inFIG.44, the three-dimensional data encoding device counts the number of the consecutive coding coefficients having 0 for all of the R, G, and B components, and adds the counted number to a bitstream as TotalZeroCnt and ZeroCnt. Accordingly, it becomes unnecessary to encode TotalZeroCnt and ZeroCnt for each component, and the overhead can be reduced. Therefore, the coding efficiency can be improved. Note that the three-dimensional data encoding device may calculate TotalZeroCnt and ZeroCnt for each dimension even when attribute_dimension is two or more, and may add the calculated TotalZeroCnt and ZeroCnt to a bitstream. FIG.45is a flowchart of the coding coefficient encoding processing (S6613) in the present modification. First, the three-dimensional data encoding device converts the coding coefficient from a signed integer value to an unsigned integer value (S6661). Next, the three-dimensional data encoding device encodes TotalZeroCnt (S6662). When not all coding coefficients have been processed (No in S6663), the three-dimensional data encoding device determines whether the value of the coding coefficient to be processed is zero (S6664). When the value of the coding coefficient to be processed is zero (Yes in S6664), the three-dimensional data encoding device increments ZeroCnt by 1 (S6665), and returns to step S6663. When the value of the coding coefficient to be processed is not zero (No in S6664), the three-dimensional data encoding device determines whether TotalZeroCnt is larger than 0 (S6666). When TotalZeroCnt is larger than 0 (Yes in S6666), the three-dimensional data encoding device encodes ZeroCnt, and sets TotalZeroCnt to TotalZeroCnt−ZeroCnt (S6667). After step S6667, or when TotalZeroCnt is 0 (No in S6666), the three-dimensional data encoding device encodes the coding coefficient, resets ZeroCnt to 0 (S6668), and returns to step S6663. For example, the three-dimensional data encoding device performs binary arithmetic encoding. Additionally, the three-dimensional data encoding device may subtract the value 1 from the coding coefficient, and encode the obtained value. Additionally, the processing of steps S6664to S6668is repeatedly performed for each coding coefficient. In addition, when all the coding coefficients have been processed (Yes in S6663), the three-dimensional data encoding device ends the processing. FIG.46is a flowchart of the coding coefficient decoding processing (S6641) in the present modification. First, the three-dimensional data decoding device decodes TotalZeroCnt from a bitstream (S6671). Next, the three-dimensional data decoding device decodes ZeroCnt from the bitstream, and sets TotalZeroCnt to TotalZeroCnt−ZeroCnt (S6672). When not all coding coefficients have been processed (No in S6673), the three-dimensional data decoding device determines whether ZeroCnt is larger than 0 (S6674). When ZeroCnt is larger than zero (Yes in S6674), the three-dimensional data decoding device sets the coding coefficient to be processed to 0 (S6675).
Next, the three-dimensional data decoding device subtracts 1 from ZeroCnt (S6676), and returns to step S6673. When ZeroCnt is zero (No in S6674), the three-dimensional data decoding device decodes the coding coefficient to be processed (S6677). For example, the three-dimensional data decoding device uses binary arithmetic decoding. Additionally, the three-dimensional data decoding device may add the value 1 to the decoded coding coefficient. Next, the three-dimensional data decoding device determines whether TotalZeroCnt is larger than 0 (S6678). When TotalZeroCnt is larger than 0 (Yes in S6678), the three-dimensional data decoding device decodes ZeroCnt, sets the obtained value to ZeroCnt, sets TotalZeroCnt to TotalZeroCnt−ZeroCnt (S6679), and returns to step S6673. Additionally, when TotalZeroCnt is 0 (No in S6678), the three-dimensional data decoding device returns to step S6673. Additionally, the processing of steps S6674to S6679is repeatedly performed for each coding coefficient. In addition, when all the coding coefficients have been processed (Yes in S6673), the three-dimensional data decoding device converts the decoded coding coefficient from an unsigned integer value to a signed integer value (S6680). FIG.47is a diagram showing another syntax example of the attribute information (attribute_data). The attribute information (attribute_data) shown inFIG.47includes value [j] [i]_greater_zero_flag, value [j] [i]_greater_one_flag, and value [j] [i], instead of the coding coefficient (value [j] [i]) shown inFIG.32. Note that the other information is the same as that inFIG.32. Value [j] [i]_greater_zero_flag indicates whether or not the value of the coding coefficient (value [j] [i]) is larger than 0. In other words, value [j] [i]_greater_zero_flag indicates whether or not the value of the coding coefficient (value [j] [i]) is 0. For example, when the value of the coding coefficient is larger than 0, value [j] [i]_greater_zero_flag is set to the value 1, and when the value of the coding coefficient is 0, value [j] [i]_greater_zero_flag is set to the value 0. When the value of value [j] [i]_greater_zero_flag is 0, the three-dimensional data encoding device need not add value [j] [i] to a bitstream. In this case, the three-dimensional data decoding device may determine that the value of value [j] [i] is the value 0. Accordingly, the code amount can be reduced. Value [j] [i]_greater_one_flag indicates whether or not the value of the coding coefficient (value [j] [i]) is larger than 1 (is equal to or larger than 2). In other words, value [j] [i]_greater_one_flag indicates whether or not the value of the coding coefficient (value [j] [i]) is 1. For example, when the value of the coding coefficient is larger than 1, value [j] [i]_greater_one_flag is set to the value 1. Otherwise (when the value of the coding coefficient is equal to or less than 1), value [j] [i]_greater_one_flag is set to the value 0. When the value of value [j] [i]_greater_one_flag is 0, the three-dimensional data encoding device need not add value [j] [i] to a bitstream. In this case, the three-dimensional data decoding device may determine that the value of value [j] [i] is the value 1. Value [j] [i] indicates the coding coefficient after quantization of the attribute information of the j-th dimension of the i-th three-dimensional point. For example, when the attribute information is color information, value [1] [99] indicates the coding coefficient of the second dimension (for example, the G value) of the 100th three-dimensional point.
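Before the reflectance example below, the flag-based signaling of FIG.47 just described can be sketched as follows; the element names are illustrative, and the remainder uses the subtract-2 variant mentioned in the continuation, so this is one possible reading rather than the normative syntax.

```python
def encode_value(v):
    """v: non-negative coding coefficient after quantization; returns syntax elements."""
    elems = [("greater_zero_flag", 1 if v > 0 else 0)]
    if v > 0:
        elems.append(("greater_one_flag", 1 if v > 1 else 0))
        if v > 1:
            elems.append(("remainder", v - 2))   # sent only when both flags are 1
    return elems

def decode_value(elems):
    it = iter(elems)
    if next(it)[1] == 0:
        return 0                                 # greater_zero_flag = 0 -> value is 0
    if next(it)[1] == 0:
        return 1                                 # greater_one_flag = 0 -> value is 1
    return next(it)[1] + 2                       # add 2 back to the remainder

assert [decode_value(encode_value(v)) for v in (0, 1, 2, 7)] == [0, 1, 2, 7]
```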
Additionally, when the attribute information is reflectance information, value [0] [119] indicates the coding coefficient of the first dimension (for example, the reflectance) of the 120th three-dimensional point. When value [j] [i]_greater_zero_flag=1, and value [j] [i]_greater_one_flag=1, the three-dimensional data encoding device may add value [j] [i] to a bitstream. Additionally, the three-dimensional data encoding device may add the value obtained by subtracting 2 from value [j] [i] to the bitstream. In this case, the three-dimensional data decoding device calculates the coding coefficient by adding the value 2 to the decoded value [j] [i]. The three-dimensional data encoding device may entropy encode value [j] [i]_greater_zero_flag and value [j] [i]_greater_one_flag. For example, binary arithmetic encoding and binary arithmetic decoding may be used. Accordingly, the coding efficiency can be improved. Embodiment 8 In this embodiment, a reversible (Lossless) attribute encoding will be described. To achieve high compression, attribute information included in Point Cloud Compression (PCC) data is transformed by a plurality of methods, such as Lifting, Region Adaptive Hierarchical Transform (RAHT), and other transform methods. Here, Lifting is one of the transform methods using Level of Detail (LoD). Important signal information tends to be included in a low-frequency component, and therefore the code amount is reduced by quantizing a high-frequency component. That is, the transform process has strong energy compression characteristics. On the other hand, in order to maintain the original information while reducing the number of bits, reversible compression is needed. Existing transforms, such as Lifting and RAHT, involve a division and a square-root operation and therefore cannot achieve reversible compression. In order to achieve an efficient and effective reversible compression, an integer-to-integer transform that is not complicated is needed. FIG.48is a diagram showing a configuration of a three-dimensional data encoding device. As shown inFIG.48, the three-dimensional data encoding device includes integer transformer8301and entropy encoder8302. Integer transformer8301generates a coefficient value by integer-transforming input point cloud data. Entropy encoder8302generates a bitstream by entropy-encoding the coefficient value. FIG.49is a diagram showing a configuration of a three-dimensional data decoding device. As shown inFIG.49, the three-dimensional data decoding device includes entropy decoder8303and inverse integer transformer8304. Entropy decoder8303obtains a coefficient value by decoding a bitstream. Inverse integer transformer8304generates output point cloud data by inverse integer-transforming the coefficient value. In the following, RAHT will be described. RAHT is an example of the transform processing applied to three-dimensional points.FIG.50is a diagram for illustrating RAHT. The m-th low-frequency component L_{l,m} and the m-th high-frequency component H_{l,m} in layer l are expressed by the two low-frequency components C_{l+1,2m} and C_{l+1,2m+1} in layer l+1 according to the following (Equation O1). That is, low-frequency component L_{l,m} is expressed by (Equation O2), and high-frequency component H_{l,m} is expressed by (Equation O3). A high-frequency component is encoded by quantization and entropy encoding. A low-frequency component is used in the subsequent layer as shown by (Equation O4). Coefficients α and β are updated for each upper layer.
Coefficients α and β are expressed by (Equation O5) and (Equation O6), respectively. Weight w_{l,m} is expressed by (Equation O7).

[Math. 2]
\begin{bmatrix} L_{l,m} \\ H_{l,m} \end{bmatrix} = \begin{bmatrix} \alpha & \beta \\ -\beta & \alpha \end{bmatrix} \begin{bmatrix} C_{l+1,2m} \\ C_{l+1,2m+1} \end{bmatrix}  (Equation O1)
L_{l,m} = \alpha C_{l+1,2m} + \beta C_{l+1,2m+1}  (Equation O2)
H_{l,m} = \alpha C_{l+1,2m+1} - \beta C_{l+1,2m}  (Equation O3)
C_{l,m} = L_{l,m}  (Equation O4)
\alpha = \sqrt{\frac{w_{l+1,2m}}{w_{l+1,2m} + w_{l+1,2m+1}}}  (Equation O5)
\beta = \sqrt{\frac{w_{l+1,2m+1}}{w_{l+1,2m} + w_{l+1,2m+1}}}  (Equation O6)
w_{l,m} = w_{l+1,2m} + w_{l+1,2m+1}  (Equation O7)

Next, an integer-to-integer transform will be described. The RAHT process involves a square-root operation and a division. This means that information is lost in RAHT, and RAHT cannot achieve reversible compression. On the other hand, the integer-to-integer transform can achieve reversible compression. FIG.51is a diagram for illustrating an integer-to-integer transform. In the integer-to-integer transform, fixed values are used as the RAHT coefficients. For example, an unnormalized Haar transform expressed by the following (Equation O8) is used. That is, low-frequency component L_{l,m} is expressed by (Equation O9), and high-frequency component H_{l,m} is expressed by (Equation O10). A high-frequency component is encoded by quantization and entropy encoding. A low-frequency component is used in the subsequent layer as shown by (Equation O11).

[Math. 3]
\begin{bmatrix} L_{l,m} \\ H_{l,m} \end{bmatrix} = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{2} \\ -1 & 1 \end{bmatrix} \begin{bmatrix} C_{l+1,2m} \\ C_{l+1,2m+1} \end{bmatrix}  (Equation O8)
L_{l,m} = \frac{C_{l+1,2m} + C_{l+1,2m+1}}{2}  (Equation O9)
H_{l,m} = C_{l+1,2m+1} - C_{l+1,2m}  (Equation O10)
C_{l,m} = L_{l,m}  (Equation O11)

The unnormalized Haar transform can be rewritten as (Equation O12) and (Equation O13).

[Math. 4]
H_{l,m} = C_{l+1,2m+1} - C_{l+1,2m}
L_{l,m} = \frac{C_{l+1,2m} + C_{l+1,2m+1}}{2}  (Equation O12)
        = C_{l+1,2m} + \frac{H_{l,m}}{2}  (Equation O13)

An integer Haar transform is achieved according to (Equation O14) and (Equation O15), and an inverse integer Haar transform is achieved according to (Equation O16) and (Equation O17). Here, \lfloor \rfloor represents the floor function. Since both (Equation O15) and (Equation O16) include \lfloor H_{l,m}/2 \rfloor, the loss caused by \lfloor H_{l,m}/2 \rfloor is cancelled between the integer Haar transform and the inverse integer Haar transform. In this way, reversible compression is achieved. Here, C_{i,j} is defined as an integer, and therefore, H_{i,j} and L_{i,j} are also integers.

[Math. 5]
H_{l,m} = C_{l+1,2m+1} - C_{l+1,2m}  (Equation O14)
L_{l,m} = C_{l+1,2m} + \lfloor H_{l,m}/2 \rfloor  (Equation O15)
C_{l+1,2m} = L_{l,m} - \lfloor H_{l,m}/2 \rfloor  (Equation O16)
C_{l+1,2m+1} = H_{l,m} + C_{l+1,2m}  (Equation O17)

Therefore, an efficient implementation can be achieved by the following (Equation O18) to (Equation O21). That is, a transform can be achieved by one addition, one subtraction, and one right shifting (down shifting).

[Math. 6]
H_{l,m} = C_{l+1,2m+1} - C_{l+1,2m}  (Equation O18)
L_{l,m} = C_{l+1,2m} + (H_{l,m} >> 1)  (Equation O19)
C_{l+1,2m} = L_{l,m} - (H_{l,m} >> 1)  (Equation O20)
C_{l+1,2m+1} = H_{l,m} + C_{l+1,2m}  (Equation O21)

A recursive integer-to-integer transform will be described.FIG.52is a diagram for illustrating a hierarchical transform processing. When the Haar transform is applied to an image, a pair of pieces of data is required in order to perform a transform suitable for the pixels. In the Haar transform for a three-dimensional point cloud, an integer Haar is applied when a three-dimensional point pair, which is a pair of point clouds, can be formed, and data on three-dimensional points is moved to the subsequent layer (level) when a three-dimensional point pair is not available. Then, this process is recursively performed. Next, a configuration of a three-dimensional data encoding device will be described.FIG.53is a block diagram of three-dimensional data encoding device8310. Three-dimensional data encoding device8310generates encoded data (encoded stream) by encoding point cloud data (point cloud).
Three-dimensional data encoding device8310includes geometry information encoder8311, lossless attribute information encoder8312, additional information encoder8313, and multiplexer8314. Geometry information encoder8311generates encoded geometry information by encoding geometry information. For example, geometry information encoder8311encodes geometry information using an N-ary tree structure, such as an octree. Specifically, in the case of an octree, a current space is divided into eight nodes (subspaces), and 8-bit information (occupancy code) that indicates whether each node includes a point cloud or not is generated. A node including a point cloud is further divided into eight nodes, and 8-bit information that indicates whether each of the eight nodes includes a point cloud or not is generated. This process is repeated until a predetermined layer is reached or the number of the point clouds included in each node becomes smaller than or equal to a threshold. Lossless attribute information encoder8312generates encoded attribute information, which is encoded data, by encoding attribute information using configuration information generated by geometry information encoder8311. Additional information encoder8313generates encoded additional information by encoding additional information included in point cloud data. Multiplexer8314generates encoded data (encoded stream) by multiplexing the encoded geometry information, the encoded attribute information, and the encoded additional information, and transmits the generated encoded data. The encoded additional information is used in the decoding. FIG.54is a block diagram of lossless attribute information encoder8312. Lossless attribute information encoder8312includes integer transformer8321and entropy encoder8322. Integer transformer8321generates a coefficient value by performing an integer transform (such as an integer Haar transform) on attribute information. Entropy encoder8322generates encoded attribute information by entropy-encoding the coefficient value. FIG.55is a block diagram of integer transformer8321. Integer transformer8321includes re-ordering unit8323and integer Haar transformer8324. Re-ordering unit8323re-orders attribute information based on geometry information. For example, re-ordering unit8323re-orders attribute information in Morton order. Integer Haar transformer8324generates a coefficient value by performing an integer Haar transform on the re-ordered attribute information. Next, a configuration of a three-dimensional data decoding device according to this embodiment will be described.FIG.56is a block diagram showing a configuration of three-dimensional data decoding device8330. Three-dimensional data decoding device8330reproduces point cloud data by decoding encoded data (encoded stream) generated by encoding the point cloud data. Three-dimensional data decoding device8330includes demultiplexer8331, a plurality of geometry information decoders8332, a plurality of lossless attribute information decoders8333, and additional information decoder8334. Demultiplexer8331generates encoded geometry information, encoded attribute information, and encoded additional information by demultiplexing encoded data (encoded stream). Geometry information decoder8332generates geometry information by decoding encoded geometry information. Lossless attribute information decoder8333generates attribute information by decoding encoded attribute information. 
For example, lossless attribute information decoder8333generates attribute information by performing an inverse integer transform (such as an inverse integer Haar transform) on encoded attribute information.FIG.57is a block diagram of lossless attribute information decoder8333. Lossless attribute information decoder8333includes entropy decoder8341and inverse integer transformer8342. Entropy decoder8341generates a coefficient value by entropy-decoding encoded attribute information. Inverse integer transformer8342generates attribute information by performing an inverse integer transform (such as an inverse integer Haar transform) on the coefficient value. FIG.58is a block diagram of inverse integer transformer8342. Inverse integer transformer8342includes re-ordering unit8343and inverse integer Haar transformer8344. Re-ordering unit8343re-orders coefficient values based on geometry information. For example, re-ordering unit8343re-orders coefficient values in Morton order. Inverse integer Haar transformer8344generates attribute information by performing an inverse integer Haar transform on the re-ordered coefficient values. The three-dimensional data encoding device may add, to the header of the bitstream or the like, information that indicates which of the reversible (Lossless) encoding and the irreversible (lossy) encoding has been used. For example, the three-dimensional data encoding device may add lossless_enable_flag to the header. When lossless_enable_flag=1, the three-dimensional data decoding device decodes the reversibly encoded bitstream by applying the inverse integer Haar transform. When lossless_enable_flag=0, the three-dimensional data decoding device decodes the irreversibly encoded bitstream by applying the inverse RAHT. In this way, the three-dimensional data decoding device can properly decode the bitstream by changing the inverse transform processing in accordance with the value of lossless_enable_flag. Note that the information that indicates which of the reversible encoding and the irreversible encoding has been used for encoding is not necessarily limited thereto, and the value of quantization parameter QP or quantization step Qstep can also be used, for example. For example, when the value of the quantization parameter or quantization step is a particular value (QP=4 or Qstep=1, for example), the three-dimensional data decoding device may determine that the bitstream has been encoded by reversible encoding, and decode the reversibly encoded bitstream by applying the inverse integer Haar transform. When the value of the quantization parameter or quantization step is greater than the particular value (QP=4 or Qstep=1, for example), the three-dimensional data decoding device may determine that the bitstream has been encoded by irreversible encoding, and decode the irreversibly encoded bitstream by applying the inverse RAHT. Next, a lossless attribute information encoding processing will be described.FIG.59is a flowchart of a lossless attribute information encoding processing. First, the three-dimensional data encoding device re-orders attribute information on a three-dimensional point cloud (S8301). For example, the three-dimensional data encoding device re-orders attribute information on a three-dimensional point cloud in Morton order. The three-dimensional data encoding device then selects a current point to be processed from the three-dimensional point cloud (S8302). 
Specifically, the three-dimensional data encoding device selects the leading three-dimensional point in the three-dimensional point cloud re-ordered in Morton order. The three-dimensional data encoding device then determines whether or not there is a three-dimensional point pair (Point Pair), which is a three-dimensional point located next to the current three-dimensional point in Morton order (S8303). When there is a three-dimensional point pair (if Yes in S8304), the three-dimensional data encoding device generates a coefficient value including a high-frequency component and a low-frequency component by performing the integer Haar transform using the three-dimensional point pair (S8305). The three-dimensional data encoding device then encodes (entropy-encodes, for example) the generated high-frequency component, and stores the encoded high-frequency component in the bitstream (S8306). The three-dimensional data encoding device also stores the low-frequency component in a memory or the like for the processing for the subsequent layer (S8307). On the other hand, when there is no three-dimensional point pair (if No in S8304), the three-dimensional data encoding device stores the attribute information on the current three-dimensional point in the memory or the like for the subsequent layer (S8307). When the current three-dimensional point is not the last three-dimensional point in the current layer, which is the layer to be processed (if No in S8308), the three-dimensional data encoding device selects the subsequent three-dimensional point in Morton order as a current three-dimensional point (S8302), and performs step S8303and the following process on the selected current three-dimensional point. The "subsequent three-dimensional point in Morton order" means the three-dimensional point subsequent to the three-dimensional point pair when there is a three-dimensional point pair, and refers to the three-dimensional point subsequent to the current three-dimensional point when there is no three-dimensional point pair. When the current three-dimensional point is the last three-dimensional point in the current layer (if Yes in S8308), the three-dimensional data encoding device starts the processing for the subsequent layer (the layer directly above the current layer) (S8309). When the former current layer is not the last layer (top layer) (if No in S8310), the three-dimensional data encoding device selects the first three-dimensional point in Morton order in the subsequent layer as a current three-dimensional point (S8302), and performs step S8303and the following process on the selected current three-dimensional point. When the former current layer is the last layer (if Yes in S8310), the three-dimensional data encoding device encodes (entropy-encodes, for example) the low-frequency component generated in the last layer (top layer), and stores the encoded low-frequency component in the bitstream (S8311). By the process described above, encoded attribute information is generated which includes the encoded high-frequency component for the three-dimensional point pair included in each layer and the encoded low-frequency component for the top layer. Next, a lossless attribute information decoding processing will be described.FIG.60is a flowchart of a lossless attribute information decoding processing. First, the three-dimensional data decoding device decodes coefficient values from the bitstream (S8321).
The coefficient value includes the high-frequency component for the three-dimensional point pair included in each layer and the low-frequency component for the top layer. The three-dimensional data decoding device then re-orders the obtained coefficient values (S8322). For example, the three-dimensional data decoding device re-orders a plurality of high-frequency components in Morton order. The three-dimensional data decoding device then obtains a low-frequency component to be processed and a high-frequency component to be processed, which are the low-frequency component and the high-frequency component of the three-dimensional point pair to be processed (S8323and S8324). Specifically, the low-frequency component to be processed for the top layer is the low-frequency component decoded from the bitstream, and the low-frequency component to be processed for the layers other than the top layer is the low-frequency component obtained by the inverse transform processing in the layer directly above the layer. The high-frequency component to be processed for the top layer is the leading high-frequency component of the high-frequency components re-ordered in Morton order. Note that, when there is no three-dimensional point pair, there is no high-frequency component to be processed. When there is a three-dimensional point pair (if Yes in S8325), that is, when there is a high-frequency component to be processed, the three-dimensional data decoding device generates a low-frequency component for the layer directly below the layer by performing the inverse integer Haar transform using the low-frequency component to be processed and the high-frequency component to be processed (S8326). Note that, when the current layer is the bottom layer, attribute information is generated by the inverse integer Haar transform. The three-dimensional data decoding device stores the generated low-frequency component in a memory or the like for the processing for the subsequent layer (S8327). On the other hand, when there is no three-dimensional point pair (if No in S8325), the three-dimensional data decoding device stores the low-frequency component to be processed in the memory or the like for the subsequent layer (S8327). When the coefficient value (three-dimensional point pair) to be processed is not the last coefficient value in the current layer (if No in S8328), the three-dimensional data decoding device selects the subsequent three-dimensional point pair in Morton order as a three-dimensional point to be processed, and performs step S8323and the following process on the selected three-dimensional point pair. When the coefficient value to be processed is the last coefficient value in the current layer (if Yes in S8328), the three-dimensional data decoding device starts the processing for the subsequent layer (the layer directly below the layer) (S8329). When the former current layer is not the last layer (bottom layer) (if No in S8330), the three-dimensional data decoding device selects the first three-dimensional point pair in Morton order in the subsequent layer as a three-dimensional point pair to be processed, and performs step S8323and the following process on the selected three-dimensional point pair. When the former current layer is the last layer (if Yes in S8330), the three-dimensional data decoding device ends the processing. By the processing described above, the attribute information on all the three-dimensional points is obtained.
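The layered processing of FIG.59 and FIG.60 can be sketched as follows. This is a simplification under stated assumptions: the real pairing is decided from the geometry information (Morton codes), whereas here adjacent entries of the re-ordered list are simply paired and an unpaired trailing entry is carried up unchanged; entropy coding is omitted.

```python
def haar_fwd(c1, c2):
    h = c2 - c1
    return c1 + (h >> 1), h          # (low, high) per (Equation O18)/(Equation O19)

def haar_inv(l, h):
    c1 = l - (h >> 1)
    return c1, h + c1                # per (Equation O20)/(Equation O21)

def encode_layers(attrs):
    """attrs: attribute values already re-ordered (e.g., in Morton order).
    Returns (high-frequency components per layer, final low-frequency value)."""
    highs_per_layer, layer = [], list(attrs)
    while len(layer) > 1:
        highs, nxt = [], []
        for i in range(0, len(layer) - 1, 2):
            l, h = haar_fwd(layer[i], layer[i + 1])
            highs.append(h)          # high-frequency component goes to the bitstream
            nxt.append(l)            # low-frequency component feeds the next layer
        if len(layer) % 2:
            nxt.append(layer[-1])    # unpaired point: carried up unchanged
        highs_per_layer.append(highs)
        layer = nxt
    return highs_per_layer, layer[0]

def decode_layers(highs_per_layer, dc, counts):
    """counts: number of values per layer, known from the geometry information."""
    layer = [dc]
    for highs, n in zip(reversed(highs_per_layer), reversed(counts)):
        out, it = [], iter(highs)
        for l in layer[: len(highs)]:
            c1, c2 = haar_inv(l, next(it))
            out.extend([c1, c2])
        out.extend(layer[len(highs):])   # unpaired entries pass through
        layer = out
        assert len(layer) == n
    return layer

vals = [10, 12, 7, 7, 3]
hs, dc = encode_layers(vals)
assert decode_layers(hs, dc, counts=[5, 3, 2]) == vals   # exact reconstruction
```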
Next, example configurations of integer Haar transformer8324and inverse integer Haar transformer8344will be described.FIG.61is a diagram showing an example configuration of integer Haar transformer8324. As shown inFIG.61, integer Haar transformer8324includes subtractor8351, right shifter8352, and adder8353. Here, C1and C2represent attribute information on a three-dimensional point pair for the bottom layer, and represent low-frequency components of a three-dimensional point pair obtained in the layer directly below the layer for a layer other than the bottom layer. H represents a high-frequency component of a three-dimensional point pair, and L represents a low-frequency component of a three-dimensional point pair. With the configuration in the drawing, the calculations expressed by (Equation O18) and (Equation O19) are implemented. FIG.62is a diagram showing an example configuration of inverse integer Haar transformer8344. As shown inFIG.62, inverse integer Haar transformer8344includes right shifter8354, subtractor8355, and adder8356. With the configuration in the drawing, the calculations expressed by (Equation O20) and (Equation O21) are implemented. Note that, in at least any one of the forward transform and the inverse transform, input data may be divided into a plurality of pieces of data on a predetermined unit basis, and the resulting divisional data may be processed in parallel. In this way, the processing speed can be increased. Next, an example where the encoding is switched between the reversible encoding (integer Haar transform) and the irreversible encoding (RAHT) will be described.FIG.63is a diagram showing a configuration of a three-dimensional data encoding device in this case. The three-dimensional data encoding device selectively performs the reversible encoding (reversible compression) or the irreversible encoding (irreversible compression). The three-dimensional data encoding device may indicate the encoding mode with a flag or QP. The three-dimensional data encoding device shown inFIG.63includes re-ordering unit8361, switcher8362, RAHT unit8363, quantizer8364, integer transformer8365, and entropy encoder8366. Re-ordering unit8361re-orders attribute information in Morton order, for example, based on geometry information. Switcher8362outputs the re-ordered attribute information to RAHT unit8363or integer transformer8365. For example, switcher8362switches between using the RAHT and using the integer Haar transform based on LOSSLESS_FLAG. Here, LOSSLESS_FLAG is a flag that indicates which of RAHT (irreversible encoding) and the integer Haar transform (reversible encoding) is to be used. The integer Haar transform (reversible encoding) is used when LOSSLESS_FLAG is on (a value of 1, for example), and RAHT (irreversible encoding) is used when LOSSLESS_FLAG is off (a value of 0, for example). Alternatively, the three-dimensional data encoding device may determine to use the reversible encoding when the value of quantization parameter QP is particular value a. Here, value α is a value with which the quantized value or the value of quantization step Qstep calculated from the QP value is 1. For example, if Qstep=1 when QP=4, α=4. The switching between RAHT and the integer Haar transform is not exclusively performed based on LOSSLESS_FLAG or QP value, but can be performed in any manner. 
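One possible way to express the LOSSLESS_FLAG / QP based switching described above is sketched below; ALPHA = 4 is an assumed mapping in which QP=4 corresponds to Qstep=1, and combining the two conditions with a logical OR is one choice among the options mentioned.

```python
ALPHA = 4  # assumption: QP = 4 corresponds to Qstep = 1

def select_transform(lossless_flag: bool, qp: int) -> str:
    # Integer Haar (reversible) when LOSSLESS_FLAG is on or QP equals ALPHA,
    # otherwise RAHT (irreversible) followed by quantization.
    if lossless_flag or qp == ALPHA:
        return "integer_haar"
    return "raht"

# select_transform(True, 22) -> "integer_haar"; select_transform(False, 22) -> "raht"
```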
For example, the three-dimensional data encoding device may add an Enable_Integer_Haar_Transform flag to the header or the like, and applies the integer Haar transform when Enable_Integer_Haar_Transform=1, and apply RAHT when Enable_Integer_Haar_Transform=0. RAHT unit8363generates a coefficient value by applying RAHT to the attribute information. Quantizer8364generates a quantized coefficient by quantizing the coefficient value. integer transformer8365generates a coefficient value by applying the integer Haar transform on the attribute information. Entropy encoder8366generates encoded attribute information by entropy-encoding the quantized value generated by quantizer8364or the coefficient value generated by integer transformer8365. FIG.64is a diagram showing a configuration of a three-dimensional data decoding device corresponding to the three-dimensional data encoding device shown inFIG.63. The three-dimensional data decoding device shown inFIG.64includes entropy decoder8371, re-ordering unit8372, switcher8373, inverse quantizer8374, inverse RAHT unit8375, and inverse integer transformer8376. Entropy decoder8371generates coefficient values (or quantized coefficients) by entropy-decoding encoded attribute information. Re-ordering unit8372re-orders the coefficient values in Morton order, for example, based on geometry information. Switcher8373outputs the re-ordered coefficient values to inverse quantizer8374or inverse integer transformer8376. For example, switcher8373switches between using RAHT and using the integer Haar transform based on LOSSLESS_FLAG. Note that the way of switching by switcher8373is the same as the way of switching by switcher8362described above. Note that the three-dimensional data decoding device obtains LOSSLESS_FLAG, the QP value, or the Enable_Integer_Haar_Transform flag from the bitstream. Inverse quantizer8374generates a coefficient value by inverse-quantizing the quantized coefficient. Inverse RAHT unit8375generates attribute information by applying inverse RAHT on the coefficient value. Inverse integer transformer8376generates attribute information by applying the inverse integer Haar transform on the coefficient value. Note that, although no quantization processing is performed when the integer Haar transform is applied in the example shown inFIG.63andFIG.64, a quantization processing may be performed when the integer Haar transform is applied.FIG.65is a diagram showing a configuration of a three-dimensional data encoding device in that case.FIG.66is a diagram showing a configuration of a three-dimensional data decoding device in that case. As shown inFIG.65, quantizer8364A generates quantized coefficients by quantizing the coefficient value generated by RAHT unit8363and the coefficient value generated by integer transformer8365. As shown inFIG.66, inverse quantizer8374A generates coefficient values by inverse-quantizing the quantized coefficients. FIG.67andFIG.68are diagrams showing example configurations of the bitstream (encoded attribute information) generated by the three-dimensional data encoding device. For example, as shown inFIG.67, LOSSLESS_FLAG is stored in the header of the bitstream. Alternatively, as shown inFIG.68, the QP value is included in the header of the bitstream. The reversible encoding is applied when the QP value is predetermined value α. Embodiment 9 In this embodiment, an integer RAHT, which is an irreversible transform closer to the reversible transform than the normal RAHT, will be described. 
In order to facilitate the implementation of the hardware, a fixed point RAHT can be introduced. The fixed point RAHT can be implemented according to the following (Equation O22) and (Equation O23). Here, l represents a low-frequency component, and h represents a high-frequency component. c1 and c2 represent attribute information on a three-dimensional point pair for the bottom layer, and represent low-frequency components of a three-dimensional point pair obtained in the layer directly below the layer for a layer other than the bottom layer. The transform is orthonormal, and (Equation O24) holds.

[Math. 7]
\begin{bmatrix} l \\ h \end{bmatrix} = \begin{bmatrix} a & b \\ -b & a \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}  (Equation O22)
a = \sqrt{\frac{w_1}{w_1 + w_2}}, \quad b = \sqrt{\frac{w_2}{w_1 + w_2}}  (Equation O23)
a^2 + b^2 = \frac{w_1}{w_1 + w_2} + \frac{w_2}{w_1 + w_2} = 1  (Equation O24)

Weight w after the update is expressed as w=w1+w2 when c1 and c2 are a three-dimensional point pair, and is expressed as w=w1 when c1 and c2 are not a pair. The above (Equation O22) is transformed into the following (Equation O25) to (Equation O29), where c_1' = c_1/\sqrt{w_1} and c_2' = c_2/\sqrt{w_2}.

[Math. 8]
\begin{bmatrix} l \\ h \end{bmatrix} = \begin{bmatrix} \sqrt{\frac{w_1}{w_1+w_2}} & \sqrt{\frac{w_2}{w_1+w_2}} \\ -\sqrt{\frac{w_2}{w_1+w_2}} & \sqrt{\frac{w_1}{w_1+w_2}} \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}  (Equation O25)
\begin{bmatrix} \frac{l}{\sqrt{w_1+w_2}} \\ h \end{bmatrix} = \begin{bmatrix} \frac{w_1}{w_1+w_2} & \frac{w_2}{w_1+w_2} \\ -\sqrt{\frac{w_1 w_2}{w_1+w_2}} & \sqrt{\frac{w_1 w_2}{w_1+w_2}} \end{bmatrix} \begin{bmatrix} \frac{c_1}{\sqrt{w_1}} \\ \frac{c_2}{\sqrt{w_2}} \end{bmatrix}  (Equation O26)
\begin{bmatrix} \frac{l}{\sqrt{w_1+w_2}} \\ h\sqrt{\frac{w_1+w_2}{w_1 w_2}} \end{bmatrix} = \begin{bmatrix} \frac{w_1}{w_1+w_2} & \frac{w_2}{w_1+w_2} \\ -1 & 1 \end{bmatrix} \begin{bmatrix} \frac{c_1}{\sqrt{w_1}} \\ \frac{c_2}{\sqrt{w_2}} \end{bmatrix}  (Equation O27)
\begin{bmatrix} l' \\ h' \end{bmatrix} = \begin{bmatrix} a^2 & b^2 \\ -1 & 1 \end{bmatrix} \begin{bmatrix} c_1' \\ c_2' \end{bmatrix}  (Equation O28)
l' = \frac{l}{\sqrt{w_1+w_2}}, \quad h' = h\sqrt{\frac{w_1+w_2}{w_1 w_2}}  (Equation O29)

Therefore, the forward transform is expressed by (Equation O30) to (Equation O32).

[Math. 9]
\begin{bmatrix} l' \\ h' \end{bmatrix} = \begin{bmatrix} a^2 & b^2 \\ -1 & 1 \end{bmatrix} \begin{bmatrix} c_1' \\ c_2' \end{bmatrix}  (Equation O30)
h' = c_2' - c_1'  (Equation O31)
l' = a^2 c_1' + b^2 c_2' = c_1' + b^2 h'  (Equation O32)

The inverse transform is expressed by (Equation O33) to (Equation O34).

[Math. 10]
c_1' = l' - b^2 h'  (Equation O33)
c_2' = h' + c_1'  (Equation O34)

An adjusted quantization step (Aqs: Adjusted Quantization Step) is expressed by (Equation O36) based on (Equation O35). Therefore, (Equation O37) holds. In this way, the integer RAHT can be implemented by a fixed point implementation of b^2.

[Math. 11]
h' = h\sqrt{\frac{w_1+w_2}{w_1 w_2}}  (Equation O35)
Aqs = QS\sqrt{\frac{w_1+w_2}{w_1 w_2}}  (Equation O36)
\frac{h'}{Aqs} = \frac{h}{QS}  (Equation O37)

In the following, a relationship between the integer RAHT and the integer Haar transform will be described. The integer RAHT and the integer Haar transform can be implemented by a common processing. Specifically, the integer Haar transform can be implemented by setting the weights for all the layers in RAHT so that w1=w2=1. That is, the forward transform in the integer RAHT is expressed by (Equation O38) to (Equation O40), and the inverse transform is expressed by (Equation O41) to (Equation O42). In addition, (Equation O43) holds.

[Math. 12]
\begin{bmatrix} l' \\ h' \end{bmatrix} = \begin{bmatrix} a^2 & b^2 \\ -1 & 1 \end{bmatrix} \begin{bmatrix} c_1' \\ c_2' \end{bmatrix}  (Equation O38)
h' = c_2' - c_1'  (Equation O39)
l' = c_1' + b^2 h'  (Equation O40)
c_1' = l' - b^2 h'  (Equation O41)
c_2' = h' + c_1'  (Equation O42)
a^2 = \frac{w_1}{w_1+w_2}, \quad b^2 = \frac{w_2}{w_1+w_2}  (Equation O43)

In (Equation O38) to (Equation O43), if the weight is set so that w1=w2=1, the forward transform is expressed by (Equation O44) to (Equation O45), and the inverse transform is expressed by (Equation O46) to (Equation O47). That is, the integer Haar transform is implemented.

[Math. 13]
h' = c_2' - c_1'  (Equation O44)
l' = c_1' + h'/2  (Equation O45)
c_1' = l' - h'/2  (Equation O46)
c_2' = h' + c_1'  (Equation O47)

Next, an example will be described in which switching occurs between the irreversible encoding (RAHT), the irreversible encoding (integer RAHT) close to the reversible encoding, and the reversible encoding (integer Haar transform).FIG.69is a diagram showing a configuration of a three-dimensional data encoding device in this case. The three-dimensional data encoding device selectively performs the irreversible encoding (RAHT), the irreversible encoding (integer RAHT) close to the reversible encoding, or the reversible encoding (integer Haar transform).
The switching is performed based on a flag or a QP value. The three-dimensional data encoding device shown inFIG.69includes re-ordering unit8401, integer RAHT/Haar transformer8402, quantizer8403, and entropy encoder8404. Re-ordering unit8401re-orders attribute information in Morton order, for example, based on geometry information. Integer RAHT/Haar transformer8402generates a coefficient value by transforming the attribute information by selectively using the irreversible encoding (RAHT), the irreversible encoding (integer RAHT) close to the reversible encoding, or the reversible encoding (integer Haar transform). Specifically, the three-dimensional data encoding device uses the reversible encoding (integer Haar transform) when the value of quantization parameter QP is particular value α, and RAHT-HAAR_FLAG=HAAR. Here, value α is a value with which the quantized value or the value of quantization step Qstep calculated from the QP value is 1. For example, if Qstep=1 when QP=4, α=4. Alternatively, the value of α may be different between RAHT and Haar. For example, when RAHT-HAAR_FLAG=RAHT, and QP is greater than α, the irreversible encoding (RAHT) is used. When RAHT-HAAR_FLAG=RAHT, and QP=α, the irreversible encoding (integer RAHT) close to the reversible encoding is used. When RAHT-HAAR_FLAG=HAAR, and QP=α, the reversible encoding (integer Haar transform) is used. When RAHT-HAAR_FLAG=HAAR, and QP is greater than a, the irreversible encoding (RAHT) may be used. When RAHT-HAAR_FLAG=HAAR, integer RAHT/Haar transformer8402performs the Haar transform by setting w1=w2=1. Quantizer8403generates a quantized coefficient by quantizing the coefficient value using QP. Entropy encoder8404generates encoded attribute information by entropy-encoding the quantized coefficient. FIG.70is a diagram showing a configuration of a three-dimensional data decoding device corresponding to the three-dimensional data encoding device shown inFIG.69. The three-dimensional data decoding device shown inFIG.70includes entropy decoder8411, inverse quantizer8412, re-ordering unit8413, and inverse integer RAHT/Haar transformer8414. Entropy decoder8411generates quantized coefficients by entropy-decoding encoded attribute information. Inverse quantizer8412generates coefficient values by inverse-quantizing the quantized coefficients using QP. Re-ordering unit8413re-orders the coefficient values in Morton order, for example, based on geometry information. Inverse integer RAHT/Haar transformer8414generates attribute information by inverse-transforming the coefficient values by selectively using the irreversible encoding (RAHT), the irreversible encoding (integer RAHT) close to the reversible encoding, or the reversible encoding (integer Haar transform). Note that the way of switching is the same as the way of switching by integer RAHT/Haar transformer8402described above. Note that the three-dimensional data decoding device obtains LOSSLESS_FLAG and the QP value from the bitstream. FIG.71is a diagram showing an example configuration of the bitstream (encoded attribute information) generated by the three-dimensional data encoding device. For example, as shown inFIG.71, RAHT-HAAR_FLAG and the QP value are stored in the header of the bitstream. RAHT-HAAR_FLAG is a flag that indicates which of the irreversible encoding (RAHT), the irreversible encoding (integer RAHT) close to the reversible encoding, and the reversible encoding (integer Haar transform) is to be used. 
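Analogously to the two-way switching sketched earlier, the three-way selection described above can be summarized as follows; α = 4 is again an assumed value at which Qstep = 1, and treating HAAR with QP greater than α as RAHT follows the "may be used" option in the text, so this is one possible rule rather than the normative behavior.

```python
ALPHA = 4  # assumed QP value at which Qstep = 1

def select_mode(raht_haar_flag: str, qp: int) -> str:
    if raht_haar_flag == "HAAR" and qp == ALPHA:
        return "integer_haar"   # reversible encoding
    if raht_haar_flag == "RAHT" and qp == ALPHA:
        return "integer_raht"   # irreversible encoding close to reversible
    return "raht"               # irreversible encoding

# select_mode("HAAR", 4) -> "integer_haar"; select_mode("RAHT", 22) -> "raht"
```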
Note that RAHT-HAAR_FLAG may indicate which of the reversible encoding (integer Haar transform) and the irreversible encoding (RAHT) or the irreversible encoding (integer RAHT) close to the reversible encoding is to be used. In the following, an example implementation of a configuration for performing the integer RAHT will be described. The integer RAHT can be implemented as described below. B represents the integer accuracy of b2, and is expressed by (Equation O48).
[Math. 14] B=(w2<<kBit)/(w1+w2) (Equation O48)
kBit represents the accuracy of B. For example, in the case of 8-bit accuracy, kBit=8. kHalf=(1<<(kBit−1)) represents the accuracy that supports a rounding (such as rounding down or rounding off). The adjusted quantization step (Aqs) can be implemented by (Equation O49).
[Math. 15] Aqs=sqrt_integer((QS*QS*(w1+w2))/(w1*w2)) (Equation O49)
Here, QS represents a quantization step (Quantization Step). The forward transform is expressed by (Equation O50) to (Equation O51).
[Math. 16] h′=c2′−c1′ (Equation O50) l′=c1′+((B*h′+kHalf)>>kBit) (Equation O51)
The quantized high-frequency component is expressed by (Equation O52). The inverse quantization of the high-frequency component is expressed by (Equation O53).
[Math. 17] quantized_h′=((Aqs>>1)+(h′<<kBit))/Aqs (Equation O52) h′=((quantized_h′*Aqs)+kHalf)>>kBit (Equation O53)
The inverse transform is expressed by (Equation O54) to (Equation O55).
[Math. 18] c1′=l′−((B*h′+kHalf)>>kBit) (Equation O54) c2′=h′+c1′ (Equation O55)
In the following, an example will be described in which the integer Haar is implemented by RAHT by using a conditional flag, setting the bit accuracy of the rounding to be 0, and setting Aqs. When the integer Haar is applied, the three-dimensional data encoding device sets the weight to be 1 (w1=w2=1). The three-dimensional data encoding device also sets kHalf to be 0 (kHalf=0). The three-dimensional data encoding device also switches Aqs as follows. The three-dimensional data encoding device sets Aqs=QS when using the integer Haar transform. The three-dimensional data encoding device sets Aqs=sqrt_integer(((QS*QS)*(w1+w2))/(w1*w2)) when using the integer RAHT. Here, sqrt_integer(n) represents the integer part of the square root of n. Therefore, (Equation O56) holds.
[Math. 19] B=(1<<kBit)/2=1<<(kBit−1) (Equation O56)
The forward transform of the integer Haar in RAHT is expressed by (Equation O57) to (Equation O59).
[Math. 20] h′=c2′−c1′ (Equation O57) l′=c1′+((B*h′+0)>>kBit)=c1′+(h′>>1) (Equation O58) quantized_h′=((Aqs>>1)+(h′<<kBit))/Aqs (Equation O59)
When the reversible encoding is used, QS is set to be 1, and therefore, (Equation O60) holds. When the reversible encoding is used, the quantization and the inverse quantization may be skipped.
[Math. 21] quantized_h′=((1>>1)+(h′<<kBit))/1=h′<<kBit (Equation O60)
The inverse quantization of the high-frequency component is expressed by (Equation O61). The inverse integer transform is expressed by (Equation O62) to (Equation O63).
[Math. 22] h′=(quantized_h′*Aqs)>>kBit (Equation O61) c1′=l′−((B*h′+0)>>kBit)=l′−(h′>>1) (Equation O62) c2′=h′+c1′ (Equation O63)
As another example of the implementation of the reversible encoding, the bit shift calculations described below may be used. These calculations are performed for the integer data type (fixed point calculation). The attribute information is shifted up with the kBit accuracy before the transform processing. When applying the integer Haar, the three-dimensional data encoding device sets the weight to be 1 (w1=w2=1).
The three-dimensional data encoding device also sets kHalf to be 0 (kHalf=0). The three-dimensional data encoding device also switches Aqs as follows. The three-dimensional data encoding device sets Aqs=QS when using the integer Haar transform. The three-dimensional data encoding device sets Aqs=sqrt_integer(((QS*QS)*(w1+w2))/(w1*w2)) when using the integer RAHT. Therefore, (Equation O64) holds.
[Math. 23] B=(1<<kBit)/2=1<<(kBit−1) (Equation O64)
As shown by (Equation O65), the attribute information is shifted up with the kBit accuracy before the transform processing.
[Math. 24] ci=ci<<kBit (Equation O65)
The forward transform of the integer Haar in RAHT is expressed by (Equation O66) to (Equation O67), and is performed with the kBit accuracy.
[Math. 25] h′=c2′−c1′ (Equation O66) t_l′=((B*h′+kHalf)>>kBit) (Equation O67)
In order to remove the floating-point accuracy of B for the floor function, the kBit accuracy for the low-frequency component is removed. Therefore, the low-frequency component is expressed by (Equation O68).
[Math. 26] l′=c1′+((t_l′>>kBit)<<kBit) (Equation O68)
The inverse integer transform is expressed by (Equation O69) to (Equation O71). In this way, the changes required for switching from the integer RAHT can be reduced.
[Math. 27] t_l′=((B*h′+kHalf)>>kBit) (Equation O69) c1′=l′−((t_l′>>kBit)<<kBit) (Equation O70) c2′=h′+c1′ (Equation O71)
In the following, an example configuration for the forward transform will be described.FIG.72is a diagram showing an example configuration of integer RAHT/Haar transformer8402. Integer RAHT/Haar transformer8402includes left shifters8421,8422, and8430, subtractor8423, divider8424, right shifters8425,8427, and8429, multiplier8426, switcher8428, and adder8431. Left shifters8421and8422shift up (shift to the left) c1 and c2 by kBit when c1 and c2 are original signals (signals located in the bottom layer of RAHT) of attribute information. As a result, the bit accuracy of the original signals increases, and therefore, the calculation accuracy of the transform processing can be improved. Therefore, the encoding efficiency can be improved. When c1 and c2 are signals in a layer higher than the bottom layer of RAHT, the kBit shift-up need not be applied. Note that, when the integer Haar transform is applied, and QS=1 (which means the reversible encoding), the three-dimensional data encoding device need not apply the kBit shift-up to the original signals of the attribute information located in the bottom layer of RAHT. In this way, the reversible encoding can be achieved while reducing the processing amount. Subtractor8423subtracts shifted-up c1 from shifted-up c2. Divider8424divides the value obtained by subtractor8423by Aqs. Here, Aqs is expressed by (Equation O72). integer_square_root(n) represents the integer part of the square root of n. That is, Aqs depends on QS (quantization step) and the weight. When the integer Haar transform is used, Aqs is set so that Aqs=QS.
[Math. 28] Aqs=integer_square_root((QS*QS*(w1+w2))/(w1*w2)) (Equation O72)
Right shifter8425generates high-frequency component h by shifting down the value obtained by divider8424. Multiplier8426multiplies the value obtained by subtractor8423by B. B is expressed by (Equation O73). When the integer Haar transform is used, the weight is set so that w1=w2=1.
[Math. 29] B=(w2<<kBit)/(w1+w2) (Equation O73)
Right shifter8427shifts down the value obtained by multiplier8426.
Switcher8428outputs the value obtained by right shifter8427to right shifter8429when the integer Haar transform is used, and outputs the value obtained by right shifter8427to adder8431when the integer Haar transform is not used. When the integer Haar transform is applied, right shifter8429shifts down the value obtained by right shifter8427by kBit, and left shifter8430shifts up the value obtained by right shifter8429by kBit. In this way, the values of lower-order kBit bits are set to be 0. That is, the digits after the decimal point resulting from the division by a value of 2 when the integer Haar transform is applied can be deleted, and thus, the processing of rounding a value down (floor processing) can be achieved. Note that any method that can achieve the process of rounding a value down can be applied. Note that, when QS>1 (which means the irreversible encoding) in the integer Haar transform, the kBit shift down and the kBit shift up need not be applied to the value obtained by right shifter8427. In this way, the accuracy after the decimal point resulting from the division by a value of 2 can be maintained when the integer Haar transform is applied. Therefore, the calculation accuracy and the encoding efficiency are improved. When the original signals (located in the bottom layer of RAHT) of the attribute information are not shifted up by kBit, the kBit shift down and the kBit shift up need not be applied to the value obtained by right shifter8427. In this way, the processing amount can be reduced. Adder8431generates low-frequency component l by adding the value obtained by left shifter8430or right shifter8427to the value obtained by left shifter8421. Note that, in the calculation for the top layer, resulting low-frequency component l is subjected to the kBit shift down. Next, an example configuration for the inverse transform will be described.FIG.73is a diagram showing an example configuration of inverse integer RAHT/Haar transformer8414. Inverse integer RAHT/Haar transformer8414includes left shifters8441and8447, multipliers8442,8443, and8449, right shifters8444,8446,8450, and8452, switcher8445, and subtractors8448and8451. Left shifter8441shifts up (shifts to the left) high-frequency component h by kBit. Multiplier8442multiplies the value obtained by left shifter8441by Aqs. Note that Aqs is the same as Aqs described above with reference toFIG.72. Multiplier8443multiplies the value obtained by multiplier8442by B. Note that B is the same as B described above with reference toFIG.72. Right shifter8444shifts down the value obtained by multiplier8443. Switcher8445outputs the value obtained by right shifter8444to right shifter8446when the integer Haar transform is used, and outputs the value obtained by right shifter8444to subtractor8448when the integer Haar transform is not used. When the integer Haar transform is applied, right shifter8446shifts down the value obtained by right shifter8444by kBit, and left shifter8447shifts up the value obtained by right shifter8446by kBit. In this way, the values of lower-order kBit bits are set to be 0. That is, the digits after the decimal point resulting from the division by a value of 2 when the integer Haar transform is applied can be deleted, and thus, the processing of rounding a value down (floor processing) can be achieved. Note that any method that can achieve the process of rounding a value down can be applied.
Note that, when QS>1 (which means the irreversible encoding) in the integer Haar transform, the kBit shift down and the kBit shift up need not be applied to the value obtained by right shifter8444. In this way, the accuracy after the decimal point resulting from the division by a value of 2 can be maintained when the integer Haar transform is applied. Therefore, a bitstream improved in the calculation accuracy and the encoding efficiency can be properly decoded. When the decoded signals (located in the bottom layer of RAHT) of the attribute information are not shifted down by kBit, the kBit shift down and the kBit shift up need not be applied to the value obtained by right shifter8444. In this way, the processing amount can be reduced. Subtractor8448subtracts the value obtained by left shifter8447or right shifter8444from low-frequency component l. Note that, in the calculation for the top layer, low-frequency component l is shifted up by kBit, and subtractor8448subtracts the value obtained by left shifter8447or right shifter8444from shifted-up low-frequency component l. Multiplier8449multiplies the value obtained by subtractor8448by −1. Right shifter8450shifts down the value obtained by multiplier8449by kBit. Subtractor8451subtracts the value obtained by subtractor8448from the value obtained by multiplier8442. Right shifter8452shifts down the value obtained by subtractor8451by kBit. In this way, the original bit accuracy of c1 and c2 is recovered. By this processing, the decoding result can be obtained with the original bit accuracy while improving the calculation accuracy of the transform processing. The three-dimensional data decoding device need not apply the kBit shift down to any signal in a layer higher than the bottom layer of RAHT. Note that, when the integer Haar transform is applied, and QS=1 (which means the reversible encoding), the three-dimensional data decoding device need not apply the kBit shift-up to the decoded signals of the attribute information located in the bottom layer of RAHT. In this way, a reversibly encoded bitstream can be properly decoded while reducing the processing amount. As stated above, the three-dimensional data encoding device according to the present embodiment performs the process shown byFIG.74. The three-dimensional data encoding device converts pieces of attribute information of three-dimensional points included in point cloud data into coefficient values (S8401), and encodes the coefficient values to generate a bitstream (S8402). In the converting (S8401), the three-dimensional data encoding device performs weighting calculation hierarchically to generate the coefficient values belonging to one of layers, the weighting calculation separating each of the pieces of attribute information into a high-frequency component and a low-frequency component. In the weighting calculation, the three-dimensional data encoding device performs the weighting calculation using weights fixed or not fixed in the layers. The bitstream includes first information (at least one of RAHT-HAAR_FLAG or QP) indicating whether to fix the weights in the layers. With this, since the three-dimensional data encoding device is capable of reducing the loss caused by the transform, by fixing the weight for a plurality of layers, the three-dimensional data encoding device is capable of improving the accuracy. For example, when the weights are fixed in the layers, the weights are fixed to 1.
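The reversible path through FIG.72 and FIG.73, that is, the integer Haar in RAHT of (Equation O64) to (Equation O71) with w1=w2=1, kHalf=0, and Aqs=QS=1, can be sketched for a single bottom-layer pair as follows. This is an illustrative Python fragment and not the embodiment itself: the function names are arbitrary, the quantization is skipped as permitted for the reversible encoding, the layer-by-layer recursion is omitted, and Python's floor semantics for shifts of negative values are assumed (a hardware implementation may behave differently for negative inputs).

def haar_in_raht_forward(c1, c2, k_bit=8):
    # (Equation O64): B = 1 << (kBit - 1); (Equation O65): shift the original signals
    # up by kBit (left shifters 8421 and 8422 in FIG.72).
    B = 1 << (k_bit - 1)
    c1p, c2p = c1 << k_bit, c2 << k_bit
    h = c2p - c1p                                  # (Equation O66)
    t_l = (B * h + 0) >> k_bit                     # (Equation O67), kHalf = 0
    l = c1p + ((t_l >> k_bit) << k_bit)            # (Equation O68): clear the lower kBit bits
    return l, h

def haar_in_raht_inverse(l, h, k_bit=8):
    # (Equation O69) to (Equation O71), mirroring FIG.73; the final kBit down-shift
    # recovers the original bit accuracy of c1 and c2 at the bottom layer.
    B = 1 << (k_bit - 1)
    t_l = (B * h + 0) >> k_bit
    c1p = l - ((t_l >> k_bit) << k_bit)
    c2p = h + c1p
    return c1p >> k_bit, c2p >> k_bit

# Exact recovery for arbitrary integers, e.g. c1 = 37, c2 = -5:
assert haar_in_raht_inverse(*haar_in_raht_forward(37, -5)) == (37, -5)

For the integer RAHT path, the same structure is used with kHalf=1<<(kBit−1), Aqs=sqrt_integer((QS*QS*(w1+w2))/(w1*w2)), and the floor step bypassed, as described for switcher8428 and switcher8445.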
For example, as shown byFIG.72, in the weighting calculation, the three-dimensional data encoding device subtracts first attribute information (e.g., c1) from second attribute information (e.g., c2) to calculate a first value, the first attribute information and the second attribute information being included in the pieces of attribute information, and divides the first value by a first coefficient (e.g., Aqs) to calculate the high-frequency component (e.g., h). The first coefficient (e.g., Aqs) depends on a quantization step (e.g., QS) and the weights (e.g., w1, w2). For example, as shown byFIG.72, in the weighting calculation, the three-dimensional data encoding device multiplies the first value by a second coefficient (e.g., B) depending on the weights to calculate a second value, shifts down the second value by a predetermined bit count and shifts up the second value by the predetermined bit count to calculate a third value, and adds the third value to the first attribute information to calculate the low-frequency component (e.g., 1). For example, the three-dimensional data encoding device includes a processor and memory, and the processor performs the above process using the memory. The three-dimensional data decoding device according to the present embodiment performs the process shown byFIG.75. The three-dimensional data decoding device obtains, from a bitstream, first information (at least one of RAHT-HAAR_FLAG or QP) indicating whether to fix weights in layers (S8411), decodes coefficient values from the bitstream (S8412), and reverse converts the coefficient values to generate pieces of attribute information of three-dimensional points included in point cloud data (S8413). The coefficient values belong to one of the layers. In the reverse converting, the three-dimensional data decoding device performs inverse weighting calculation to generate the pieces of attribute information, the inverse weighting calculation combining the coefficient values with a high-frequency component and a low-frequency component. In the inverse weighting calculation, the three-dimensional data decoding device performs the inverse weighting calculation using the weights fixed or not fixed in the layers, according to the first information. With this, since the three-dimensional data decoding device is capable of reducing the loss caused by the transform, by fixing the weight for a plurality of layers, the three-dimensional data decoding device is capable of improving the accuracy. For example, when the weights are fixed in the layers, the weights are fixed to 1. For example, as shown byFIG.73, in the inverse weighting calculation, the three-dimensional data decoding device multiplies the high-frequency component by a first coefficient (e.g., Aqs) to calculate a first value, calculates first attribute information (e.g., c1) included in the pieces of attribute information from a second value based on the low-frequency component (e.g.,1), and subtracts the second value from the first value to calculate second attribute information (e.g., c2) included in the pieces of attribute information. The first coefficient (e.g., Aqs) depends on a quantization step (e.g., QS) and the weights (e.g., w1, w2). 
For example, as shown byFIG.73, in the inverse weighting calculation, the three-dimensional data decoding device multiplies the first value by a second coefficient (e.g., B) depending on the weights to calculate a third value, shifts down the third value by a predetermined bit count and shifts up the third value by the predetermined bit count to calculate a fourth value, and subtracts the low-frequency component from the fourth value to calculate the second value. For example, the three-dimensional data decoding device includes a processor and memory, and the processor performs the above process using the memory. Embodiment 10 In the present embodiment, a region adaptive hierarchical transform (RAHT) process or a Haar transform process using prediction will be described.FIG.76is a diagram for describing a prediction process and illustrates a hierarchical structure in the RAHT process or the Haar transform process. At the time of hierarchical encoding using the RAHT or the Haar transform, a three-dimensional data encoding device may predict attribute values (items of attribute information) of a low layer from attribute values of a high layer and may encode difference values between the attribute values of the low layer and their predicted values obtained by the prediction. For example, when hierarchically repeating the encoding from a high layer toward a low layer to encode attribute values of each layer, the three-dimensional data encoding device uses attribute values calculated at the high layer (attribute values of a parent node group) to predict attribute values of the low layer (attribute values of a child node group). The three-dimensional data encoding device may encode, instead of an attribute value of each child node, a difference value resulting from subtracting a predicted value from the attribute value of each child node. According to the above, the difference values of attribute values of a low layer can be reduced by generating appropriate predicted values from attribute values of a high layer, thereby improving the coding efficiency. Note that the same prediction process may be performed by the three-dimensional data decoding device. FIG.77is a diagram illustrating an example of the relation among nodes in an octree structure based on items of geometry information on three-dimensional points. As illustrated inFIG.77, the three-dimensional data encoding device predicts an attribute value of a child node from attribute values of parent nodes or neighboring parent nodes. Here, a plurality of nodes belonging to the same layer as a parent node will be referred to as a parent node group. A plurality of child nodes of the parent node will be referred to as a child node group. Neighboring parent nodes are nodes that are included in the parent node group, are different from the parent node, and neighbor the parent node. Note that a parent node group may include some of the plurality of nodes belonging to the same layer as the parent node. For example, a parent node group may include a parent node and its plurality of neighboring parent nodes. Alternatively, a parent node group may include nodes within a predetermined distance from a parent node (or a current node). An item of attribute information of a node belonging to a high layer such as a parent node is calculated from, for example, items of attribute information of a low layer of the node. 
For example, an item of attribute information of a parent node is an average value of a plurality of items of attribute information of a plurality of child nodes of the parent node. Note that the method for calculating an item of attribute information of a node belonging to a high layer is not limited to calculating an arithmetic average and may be another method such as calculating a weighted average. For example, the three-dimensional data encoding device may calculate distances d between a current node that is a child node to be subjected to the encoding and its parent nodes or neighboring parent nodes in a three-dimensional space and use a weighted average value calculated with the distances d as a predicted value. For example, a predicted value cp of a child node c may be calculated by the following (Equation P1) and (Equation P2). Ai denotes a value of an item of attribute information of a parent node pi, and d(c,pi) is a distance, which is a Euclidean distance for example, between the child node c and the parent node pi. The symbol n denotes the total number of parent nodes and neighboring parent nodes used for generating the predicted value.
[Math. 30] cp=Σi=0n wi×Ai (Equation P1) wi=d(c,pi)/Σj=0n d(c,pj) (Equation P2)
Alternatively, the three-dimensional data encoding device may use an attribute value of a parent node or a neighboring parent node as it is as the predicted value. For example, the three-dimensional data encoding device may use an attribute value Ap of a parent node as a predicted value of a child node or may use an attribute value Anp of a neighboring parent node as the predicted value of the child node. Alternatively, the three-dimensional data encoding device may select whether to use a calculated value (e.g., a weighted average value) calculated from items of attribute information of a plurality of nodes included in a parent node group or to use an attribute value of a parent node or a neighboring parent node as it is. In this case, the three-dimensional data encoding device may add information indicating which of the calculated value and the attribute value of a parent node or a neighboring parent node is used as the predicted value (prediction mode) to a bitstream for each child node group, for example. This allows the three-dimensional data encoding device to select an appropriate prediction mode for each child node group, so that the coding efficiency can be improved. In addition, by adding a prediction mode to a bitstream, the three-dimensional data decoding device can generate a predicted value using the prediction mode selected by the three-dimensional data encoding device. The three-dimensional data decoding device thus can decode the bitstream appropriately. Note that, rather than being added for each child node group, a prediction mode may be added for each unit larger or smaller than the child node group. For example, the three-dimensional data encoding device adds a prediction mode for every N child node group, N being an integer greater than or equal to 1, so that the coding efficiency can be improved while the overhead for encoding the prediction mode is reduced. The three-dimensional data encoding device may add a prediction mode to a header or the like such as APS. Here, APS is a parameter set of items of attribute information for each frame. Next, a first example of the encoding method with prediction will be described.FIG.78is a diagram illustrating the first example of the encoding method.
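Before walking through FIG.78, the weighted prediction of (Equation P1) and (Equation P2) can be sketched as follows, taking the weights literally as written above. The function name, the parents list of (position, attribute) pairs, and the fallback for coincident positions are illustrative assumptions, not part of this embodiment.

import math

def predict_child_attribute(child_pos, parents):
    # parents: (position, attribute) pairs of the parent node and the neighboring
    # parent nodes used for prediction; d(c, pi) is the Euclidean distance.
    dists = [math.dist(child_pos, pos) for pos, _ in parents]
    total = sum(dists)
    if total == 0:
        # Assumed fallback when all distances are zero: plain arithmetic average.
        return sum(attr for _, attr in parents) / len(parents)
    # (Equation P2): wi = d(c, pi) / sum_j d(c, pj); (Equation P1): cp = sum_i wi * Ai.
    return sum((d / total) * attr for d, (_, attr) in zip(dists, parents))

Whether this calculated value or the attribute value of the parent node or a neighboring parent node is used directly is what the prediction mode described above signals.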
When calculating difference values between attribute values and predicted values, the three-dimensional data encoding device applies the RAHT or the Haar transform to both the attribute values and the predicted values to calculate transform coefficients of the attribute values and transform coefficients of the predicted values. The three-dimensional data encoding device determines difference values between the transform coefficients of the attribute values and the transform coefficients of the predicted values. The difference values to be encoded thus can be decreased, so that the coding efficiency can be improved. In a case where the three-dimensional data encoding device selects predicted values from between the calculated values and the attribute values of parent nodes or neighboring parent nodes (in a case where a prediction mode is added to a bitstream), for example, the three-dimensional data encoding device may use the predicted values in each prediction mode to calculate the difference values of the transform coefficients, determine a cost value from a sum of absolute values of the difference values, and select a prediction mode that minimizes the cost. The prediction mode minimizing the difference values thus can be selected appropriately, so that the coding efficiency can be improved. For example, the three-dimensional data encoding device may use the following (Equation P3) to calculate a cost value cost. [Math. 31] cost=Σi=0m|Ti−PTi|+λ*Predbit (Equation P3) Here, m denotes the number of child nodes included in a child node group. The symbol λ denotes an adjustment parameter. Predbit denotes the number of bits necessary for encoding a prediction mode. Ti denotes a transform coefficient of an attribute value, and PTi denotes a transform coefficient of a predicted value. Note that this does not limit the method for selecting the prediction mode, and the prediction mode may be selected under other conditions or based on instructions or the like from the outside. As illustrated inFIG.78, the three-dimensional data encoding device applies the RAHT or the Haar transform to attribute values of a child node group to calculate transform coefficients Ti of the attribute values (S9101). In addition, the three-dimensional data encoding device predicts the attribute values of the child node group from attribute values of a parent node group to generate predicted values of the child node group (S9102). The three-dimensional data encoding device next applies the RAHT or the Haar transform to the predicted values to calculate transform coefficients PTi of the predicted values (S9103). The three-dimensional data encoding device next calculates difference values that are differences between the transform coefficients Ti of the attribute values and the transform coefficients PTi of the predicted values (S9104). The three-dimensional data encoding device next quantizes the difference values (S9105) and performs arithmetic encoding on the quantized difference values (S9106) to generate encoded data (a bitstream). Note that, when using a lossless coding, the three-dimensional data encoding device may skip the quantization (S9105). Next, a first example of a decoding method for decoding the encoded data (bitstream) generated by the encoding method in the first example will be described.FIG.79is a diagram illustrating the first example of a decoding method. 
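The flows of FIG.78 and FIG.79 can be sketched together as follows. This is only an outline: transform, inverse_transform, quantize, and dequantize are assumed callables standing for the RAHT (or Haar) processing and the quantization described above, lam and predbit correspond to λ and Predbit in (Equation P3), predictions_per_mode maps each candidate prediction mode to its predicted values, and the arithmetic encoding and decoding steps are omitted.

def encode_first_example(attrs, predictions_per_mode, transform, quantize, lam, predbit):
    t_attr = transform(attrs)                              # S9101: Ti
    best_mode, best_diffs, best_cost = None, None, None
    for mode, preds in predictions_per_mode.items():
        t_pred = transform(preds)                          # S9103: PTi
        diffs = [t - pt for t, pt in zip(t_attr, t_pred)]  # S9104: Ti - PTi
        cost = sum(abs(d) for d in diffs) + lam * predbit  # (Equation P3)
        if best_cost is None or cost < best_cost:
            best_mode, best_diffs, best_cost = mode, diffs, cost
    return best_mode, [quantize(d) for d in best_diffs]    # S9105

def decode_first_example(quant_diffs, preds, transform, inverse_transform, dequantize):
    diffs = [dequantize(q) for q in quant_diffs]           # S9112
    t_pred = transform(preds)                              # S9114: PTi
    t_attr = [d + pt for d, pt in zip(diffs, t_pred)]      # S9115: Ti
    return inverse_transform(t_attr)                       # S9116: decoded attribute values

When the lossless coding is used, quantize and dequantize can be identity functions, matching the note that S9105 and S9112 may be skipped.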
First, the three-dimensional data decoding device performs arithmetic decoding on the encoded data (bitstream) (S9111) and performs inverse quantization on the resulting signal (S9112) to generate the difference values of the transform coefficients of the child node group. Note that, when using a lossless decoding (when a lossless coding is used), the three-dimensional data decoding device may skip the inverse quantization (S9112). In addition, the three-dimensional data decoding device predicts the attribute values of the child node group from the attribute values of the parent node group to generate the predicted values (S9113). Note that in a case where the three-dimensional data encoding device has made a selection as to whether to use calculated values or to use attribute values of parent nodes or neighboring parent nodes (in a case where a prediction mode is added to a bitstream) to generate predicted values, the three-dimensional data decoding device uses the decoded prediction mode to generate the predicted values. The three-dimensional data decoding device next applies the RAHT or the Haar transform to the predicted values to calculate transform coefficients PTi of the predicted values (S9114). The three-dimensional data decoding device next adds the transform coefficients PTi of the predicted values to the difference values of the transform coefficients of the child node group to calculate transform coefficients Ti of the child node group (S9115). The three-dimensional data decoding device next applies inverse RAHT or inverse Haar transform to the transform coefficients Ti of the child node group to generate decoded values of the attribute values of the child node group (S9116). The three-dimensional data decoding device thus can decode the bitstream appropriately. Next, a second example of the encoding method will be described. In this second example, integer Haar transform is used in place of the RATH or the Haar transform.FIG.80is a diagram illustrating the second example of the encoding method. When calculating difference values between attribute values and predicted values, the three-dimensional data encoding device may apply the integer Haar transform to both the attribute values and the predicted values to calculate transform coefficients of the attribute values and transform coefficients of the predicted values and determine difference values between the transform coefficients of the attribute values and the transform coefficients of the predicted values. The three-dimensional data encoding device thus can decrease the difference values to be encoded, so that the coding efficiency can be improved. The three-dimensional data encoding device may be configured such that, when selecting predicted values from between the calculated values and the attribute values of parent nodes or neighboring parent nodes (in a case where a prediction mode is added to a bitstream), for example, the three-dimensional data encoding device uses the predicted values in each prediction mode to calculate the difference values of the transform coefficients, determines a cost value from a sum of absolute values of the difference values, and selects a prediction mode that minimizes the cost. The three-dimensional data encoding device thus can appropriately select the prediction mode minimizing the difference values, so that the coding efficiency can be improved. For example, the three-dimensional data encoding device may use the above (Equation P3) to calculate a cost value cost. 
As illustrated inFIG.80, the three-dimensional data encoding device applies the integer Haar transform to attribute values of a child node group to calculate transform coefficients Ti of the attribute values (S9101A). In addition, the three-dimensional data encoding device predicts the attribute values of the child node group from attribute values of a parent node group to generate predicted values of the child node group (S9102). When applying the integer Haar transform, the three-dimensional data encoding device discards values of fractional portions of the predicted values (S9107). Alternatively, the three-dimensional data encoding device may apply rounding or the like to the predicted values to set 0 to the values of the fractional portions of the predicted values. For example, when kBit bits of an attribute value represent a fractional portion of the attribute value, the three-dimensional data encoding device sets 0 to a value of the fractional portion being the kBit bits by subjecting the value to discarding (flooring process) in which a right shift by kBit and a left shift by kBit is applied to the predicted values. By converting the predicted values into integral values before the integer Haar transform is applied, transform coefficients of the predicted values after the integer Haar transform are also integral values. Thus, difference values between the transform coefficients Ti of the attribute values of the child node group and transform coefficients PTi of the predicted value of the child node group are also integral values. A lossless coding with the integer Haar transform thus can be implemented. Note that any method that sets 0 to the values of the fractional portions of the predicted values is applicable. Alternatively, the three-dimensional data encoding device may store a flag indicating whether to apply the integer Haar transform in a bitstream and may set 0 to the fractional portions of the predicted values when applying the integer Haar transform or need not set 0 to the fractional portions when not applying the integer Haar transform. Note that, in a case of QS>1 (lossy coding) in the integer Haar transform, the three-dimensional data encoding device need not apply the process of setting 0 to the fractional portions of the predicted values. This maintains the precision of the fractional portions of the predicted values when the integer Haar transform is applied, so that the precision in the calculation is improved. The coding efficiency thus can be improved. The three-dimensional data encoding device next applies the integer Haar transform to the predicted values after the discarding to calculate the transform coefficients PTi of the predicted values (S9103A). The three-dimensional data encoding device next calculates difference values that are differences between the transform coefficients Ti of the attribute values and the transform coefficients PTi of the predicted values (S9104). Here, the difference values of the transform coefficients may be converted into integral values and then subjected to arithmetic encoding. In this case, information on their fractional portions is lost. For that reason, the three-dimensional data encoding device sets a value of 0 beforehand to values of the fractional portions of the difference values of the transform coefficients, for supporting a lossless coding. 
This prevents the information from being lost when the difference values are converted into the integral values before arithmetic encoding, enabling the implementation of lossless coding. The three-dimensional data encoding device next quantizes the difference values (S9105) and performs arithmetic encoding on the quantized difference values (S9106) to generate encoded data (a bitstream). Note that, when using a lossless coding, the three-dimensional data encoding device may skip the quantization (S9105). Next, a second example of a decoding method for decoding the encoded data (bitstream) generated by the encoding method in the second example will be described.FIG.81is a diagram illustrating the second example of a decoding method. First, the three-dimensional data decoding device performs arithmetic decoding on the encoded data (bitstream) (S9111) and performs inverse quantization on the resulting signal (S9112) to generate the difference values of the transform coefficients of the child node group. Note that, when using a lossless decoding (when a lossless coding is used), the three-dimensional data decoding device may skip the inverse quantization (S9112). In addition, the three-dimensional data decoding device predicts the attribute values of the child node group from the attribute values of the parent node group to generate the predicted values (S9113). Note that in a case where the three-dimensional data encoding device has made a selection as to whether to use calculated values or to use attribute values of parent nodes or neighboring parent nodes (in a case where a prediction mode is added to a bitstream) to generate predicted values, the three-dimensional data decoding device uses the decoded prediction mode to generate the predicted values. When applying the integer Haar transform, the three-dimensional data decoding device discards values of fractional portions of the predicted values (S9117). Alternatively, the three-dimensional data decoding device may apply rounding or the like to the predicted values to set 0 to the values of the fractional portions of the predicted values. For example, when kBit bits of an attribute value represent a fractional portion of the attribute value, the three-dimensional data decoding device may set 0 to a value of a fractional portion of a predicted value being kBit bits of the predicted value by subjecting the value to discarding (flooring process) in which a right shift by kBit and a left shift by kBit is applied to the predicted value. By converting the predicted values into integral values before the integer Haar transform is applied, transform coefficients of the predicted values after the integer Haar transform are also integral values. Thus, added values of the transform coefficients Ti of the attribute values of the child node group and transform coefficients PTi of the predicted value of the child node group are also integral values. The three-dimensional data decoding device thus can appropriately decode the bitstream resulting from the lossless coding with the integer Haar transform. Note that any method that sets 0 to the values of the fractional portions of the predicted values is applicable. Alternatively, the three-dimensional data decoding device may obtain the flag indicating whether to apply the integer Haar transform in a bitstream and may set 0 to the fractional portions of the predicted values when applying the integer Haar transform or need not set 0 to the fractional portions when not applying the integer Haar transform. 
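The discarding of S9107 and S9117 amounts to clearing the kBit fractional bits, as in the following sketch. The function names are illustrative assumptions, and whether the flooring is skipped for QS>1 follows the notes in the surrounding description.

def floor_fractional(value, k_bit):
    # Clear the lower kBit bits (the fractional portion) by a right shift
    # followed by a left shift, i.e., the discarding (flooring) described above.
    return (value >> k_bit) << k_bit

def prepare_predicted_values(preds, k_bit, qs):
    # Applied when the integer Haar transform is used; for QS > 1 (lossy coding)
    # the fractional precision of the predicted values may be kept.
    if qs > 1:
        return list(preds)
    return [floor_fractional(p, k_bit) for p in preds]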
Note that, in a case of QS>1 (lossy decoding) in the integer Haar transform, the three-dimensional data decoding device need not apply the process of setting 0 to the fractional portions of the predicted values. The three-dimensional data decoding device thus maintains the precision of the fractional portions of the predicted values when the integer Haar transform is applied, so that the precision in the calculation is improved, and the three-dimensional data decoding device can appropriately decode the bitstream with the improved coding efficiency. The three-dimensional data decoding device next applies the integer Haar transform to the predicted values after the discarding to calculate the transform coefficients PTi of the predicted values (S9114A). The three-dimensional data decoding device next adds the transform coefficients PTi of the predicted values to the difference values of the transform coefficients of the child node group to calculate transform coefficients Ti of the child node group (S9115). The three-dimensional data decoding device next applies inverse integer Haar transform to the transform coefficients Ti of the child node group to generate decoded values of the attribute values of the child node group (S9116A). The three-dimensional data decoding device thus can decode the bitstream appropriately. Next, a third example of the encoding method will be described. In this third example, the integer Haar transform is used as in the second example. In addition, the third example differs from the second example in that a timing for performing discarding (S9108) is after the integer Haar transform (S9103A) is applied. FIG.82is a diagram illustrating the third example of the encoding method. The following description will be given mainly of differences from the second example, and redundant description will be omitted. As illustrated inFIG.82, the three-dimensional data encoding device applies the integer Haar transform to attribute values of a child node group to calculate transform coefficients Ti of the attribute values (S9101A). In addition, the three-dimensional data encoding device predicts the attribute values of the child node group from attribute values of a parent node group to generate predicted values of the child node group (S9102). The three-dimensional data encoding device next applies the integer Haar transform to the predicted values to calculate the transform coefficients PTi of the predicted values (S9103A). The three-dimensional data encoding device next performs the discarding on the transform coefficients PTi to set 0 to fractional portions of the transform coefficients (S9108). Note that the detail of the discarding is the same as that of step S9107described above except that a signal to be subjected to the process is not the predicted values but the transform coefficients PTi. The three-dimensional data encoding device next calculates difference values that are differences between the transform coefficients Ti of the attribute values and the transform coefficients PTi of the predicted values after the discarding (S9104). Next, a third example of a decoding method for decoding the encoded data (bitstream) generated by the encoding method in the third example will be described.FIG.83is a diagram illustrating the third example of a decoding method. 
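The only difference between the second and third examples is where the flooring is applied, which the following sketch contrasts. Here integer_haar is an assumed callable standing for the integer Haar transform of a child node group, and the function names are illustrative.

def predicted_coefficients_second_example(preds, integer_haar, k_bit):
    # Second example (FIG.80, FIG.81): floor the predicted values (S9107/S9117),
    # then apply the integer Haar transform (S9103A/S9114A).
    return integer_haar([(p >> k_bit) << k_bit for p in preds])

def predicted_coefficients_third_example(preds, integer_haar, k_bit):
    # Third example (FIG.82, FIG.83): apply the integer Haar transform first,
    # then floor the resulting transform coefficients PTi (S9108/S9118).
    return [(t >> k_bit) << k_bit for t in integer_haar(preds)]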
First, the three-dimensional data decoding device performs arithmetic decoding on the encoded data (bitstream) (S9111) and performs inverse quantization on the resulting signal (S9112) to generate the difference values of the transform coefficients of the child node group. Note that, when using a lossless decoding (when a lossless coding is used), the three-dimensional data decoding device may skip the inverse quantization (S9112). In addition, the three-dimensional data decoding device predicts the attribute values of the child node group from the attribute values of the parent node group to generate the predicted values (S9113). The three-dimensional data decoding device next applies the integer Haar transform to the predicted values to calculate the transform coefficients PTi of the predicted values (S9114A). The three-dimensional data decoding device next performs discarding on the transform coefficients PTi to set 0 to fractional portions of the transform coefficients (S9118). Note that the detail of the discarding is the same as that of step S9117described above except that a signal to be subjected to the process is not the predicted values but the transform coefficients PTi. The three-dimensional data decoding device next adds the transform coefficients PTi of the predicted values after the discarding to the difference values of the transform coefficients of the child node group to calculate transform coefficients Ti of the child node group (S9115). The three-dimensional data decoding device next applies the inverse integer Haar transform to the transform coefficients Ti of the child node group to generate decoded values of the attribute values of the child node group (S9116A). The three-dimensional data decoding device thus can decode the bitstream appropriately. Next, a fourth example of the encoding method will be described. Through the first to third examples, the examples in which one of the transform processes are applied to the attribute values and the predicted values, and the difference values between the generated transform coefficients of the attribute values and the generated transform coefficients of the predicted values have been described. In the fourth example, the three-dimensional data encoding device calculates difference values between attribute values and predicted values and applies the RAHT or the Haar transform to the difference values to calculate transform coefficients of the difference values. The three-dimensional data encoding device thus can decrease the transform coefficients of the difference values to be encoded while suppressing an increase in a processing load, so that the coding efficiency can be improved. The three-dimensional data encoding device may be configured such that, when selecting predicted values from between the calculated values and the attribute values of parent nodes or neighboring parent nodes (in a case where a prediction mode is added to a bitstream), for example, the three-dimensional data encoding device uses the predicted values in each prediction mode to calculate the difference values of the attribute values, determines a cost value described below from a sum of absolute values of the difference values, and selects a prediction mode that minimizes the cost. The three-dimensional data encoding device thus can appropriately select the prediction mode minimizing the difference values, so that the coding efficiency can be improved. 
For example, the three-dimensional data encoding device may use the following (Equation P4) to calculate a cost value cost.
[Math. 32] cost=Σi=0m|Di|+λ*Predbit (Equation P4)
Here, m denotes the number of child nodes included in a child node group. The symbol λ denotes an adjustment parameter. Predbit denotes the number of bits necessary for encoding a prediction mode. Di denotes a difference value between an attribute value and a predicted value. Note that the three-dimensional data encoding device may use the following (Equation P5) to calculate a cost value cost. The three-dimensional data encoding device thus can select the prediction mode decreasing values of the transform coefficients of the difference values, so that the coding efficiency can be improved.
[Math. 33] cost=Σi=0m|Ti|+λ*Predbit (Equation P5)
Here, Ti denotes a transform coefficient of a difference value. Note that this does not limit the method for selecting the prediction mode, and the prediction mode may be selected under other conditions or based on instructions or the like from the outside. FIG.84is a diagram illustrating the fourth example of the encoding method. As illustrated inFIG.84, the three-dimensional data encoding device predicts the attribute values of the child node group from the attribute values of the parent node group to generate the predicted values Pi (S9121). The three-dimensional data encoding device next calculates difference values Di that are differences between attribute values Ai of the child node group and the predicted values Pi (S9122). The three-dimensional data encoding device next applies the RAHT or the Haar transform to the difference values Di to calculate transform coefficients Ti of the difference values (S9123). The three-dimensional data encoding device next quantizes the transform coefficients Ti of the difference values (S9124) and performs arithmetic encoding on the quantized transform coefficients Ti (S9125) to generate encoded data (a bitstream). Note that, when using a lossless coding, the three-dimensional data encoding device may skip the quantization (S9124). Next, a fourth example of a decoding method for decoding the encoded data (bitstream) generated by the encoding method in the fourth example will be described.FIG.85is a diagram illustrating the fourth example of a decoding method. First, the three-dimensional data decoding device performs arithmetic decoding on the encoded data (bitstream) (S9131) and performs inverse quantization on the resulting signal (S9132) to generate the transform coefficients Ti of the difference values of the child node group. Note that, when using a lossless decoding (when a lossless coding is used), the three-dimensional data decoding device may skip the inverse quantization (S9132). The three-dimensional data decoding device next applies inverse RAHT or inverse Haar transform to the transform coefficients Ti to generate the difference values Di of the attribute values (S9133). In addition, the three-dimensional data decoding device predicts the attribute values of the child node group from the attribute values of the parent node group to generate the predicted values (S9134).
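Putting FIG.84 and FIG.85 together, the sample-domain prediction of the fourth example can be sketched as follows. Again, transform, inverse_transform, quantize, and dequantize are assumed callables standing for the RAHT or Haar processing and the quantization, the function names are illustrative, and the arithmetic coding is omitted. The fifth example follows the same flow with the integer Haar transform and with the predicted values floored beforehand (S9126/S9136).

def encode_fourth_example(attrs, preds, transform, quantize):
    diffs = [a - p for a, p in zip(attrs, preds)]   # S9122: Di = Ai - Pi
    coeffs = transform(diffs)                       # S9123: Ti
    return [quantize(t) for t in coeffs]            # S9124 (skipped for lossless coding)

def decode_fourth_example(quant_coeffs, preds, inverse_transform, dequantize):
    coeffs = [dequantize(q) for q in quant_coeffs]  # S9132
    diffs = inverse_transform(coeffs)               # S9133: Di
    return [d + p for d, p in zip(diffs, preds)]    # S9135: decoded attribute values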
Note that in a case where the three-dimensional data encoding device has made a selection as to whether to use calculated values or to use attribute values of parent nodes or neighboring parent nodes (in a case where a prediction mode is added to a bitstream) to generate predicted values, the three-dimensional data decoding device uses the decoded prediction mode to generate the predicted values. The three-dimensional data decoding device next adds the difference values Di and the predicted values Pi to generate decoded values of the attribute values of the child node group (S9135). The three-dimensional data decoding device thus can decode the bitstream appropriately. Next, a fifth example of the encoding method will be described. This fifth example differs from the fourth example in that the integer Haar transform is used in place of the RATH or the Haar transform.FIG.86is a diagram illustrating the fifth example of the encoding method. The three-dimensional data encoding device predicts the attribute values of the child node group from the attribute values of the parent node group to generate the predicted values Pi (S9121). The three-dimensional data encoding device next performs discarding on the predicted values Pi to set 0 to fractional portions of the predicted values (S9126). Note that the detail of this process is the same as, for example, that of step S9107illustrated inFIG.80. The three-dimensional data encoding device next calculates difference values Di that are differences between attribute values Ai of the child node group and the predicted values Pi after the discarding (S9122). The three-dimensional data encoding device next applies the integer Haar transform to the difference values Di to calculate transform coefficients Ti of the difference values (S9123A). The three-dimensional data encoding device next quantizes the transform coefficients Ti of the difference values (S9124) and performs arithmetic encoding on the quantized transform coefficients Ti (S9125) to generate encoded data (a bitstream). Note that, when using a lossless coding, the three-dimensional data encoding device may skip the quantization (S9124). Here, the transform coefficients of the difference values may be converted into integral values and then subjected to arithmetic encoding, when information on fractional portions of the transform coefficients is lost. In contrast, the three-dimensional data encoding device sets a value of 0 beforehand to values of the fractional portions of the transform coefficients of the difference values. This prevents the information from being lost when the transform coefficients Ti of the difference values are converted into the integral values before arithmetic encoding, enabling the implementation of lossless coding. Next, a fifth example of a decoding method for decoding the encoded data (bitstream) generated by the encoding method in the fifth example will be described.FIG.87is a diagram illustrating the fifth example of a decoding method. First, the three-dimensional data decoding device performs arithmetic decoding on the encoded data (bitstream) (S9131) and performs inverse quantization on the resulting signal (S9132) to generate the transform coefficients Ti of the difference values of the child node group. Note that, when using a lossless decoding (when a lossless coding is used), the three-dimensional data decoding device may skip the inverse quantization (S9132). 
The three-dimensional data decoding device next applies the inverse integer Haar transform to the transform coefficients Ti to generate the difference values Di of the attribute values (S9133A). In addition, the three-dimensional data decoding device predicts the attribute values of the child node group from the attribute values of the parent node group to generate the predicted values (S9134). The three-dimensional data decoding device next performs discarding on the predicted values Pi to set 0 to fractional portions of the predicted values (S9136). Note that the detail of this process is the same as, for example, that of step S9117illustrated inFIG.81. The three-dimensional data decoding device next adds the difference values Di and the predicted values Pi after the discarding to generate decoded values of the attribute values of the child node group (S9135). The three-dimensional data decoding device thus can decode the bitstream appropriately. As described above, the three-dimensional data encoding device according to the present embodiment performs the process shown byFIG.88. First, the three-dimensional data encoding device generates a predicted value of an item of attribute information of a current node in an N-ary tree structure (e.g., an octree structure) of three-dimensional points included in point cloud data, N being an integer greater than or equal to 2 (S9141). Next, the three-dimensional data encoding device encodes the item of attribute information of the current node using the predicted value and a transform process (e.g., RAHT, Haar transform, or integer Haar transform) that hierarchically repeats an operation for separating each of input signals into a high-frequency component and a low-frequency component (S9142). In the generating of the predicted value (S9141), the three-dimensional data encoding device selects, from items of attribute information of first nodes (e.g., a parent node group), an item of attribute information of a node among the first nodes which is to be used in generating the predicted value of the current node, the first nodes including a parent node of the current node and belonging to a same layer as the parent node. The three-dimensional data encoding device thus can appropriately select an item of attribute information which is to be used in generating the predicted value, so that coding efficiency can be improved. For example, in the selecting, the three-dimensional data encoding device selects whether to use an item of attribute information of a second node directly as the predicted value or to calculate the predicted value from the items of attribute information of the first nodes, the second node being included in the first nodes. For example, the second node is the parent node. For example, the three-dimensional data encoding device generates predicted values of third nodes (e.g., a child node group) that include the current node and belong to a same layer as the current node. In the encoding (S9142), the three-dimensional data encoding device performs the transform process on items of attribute information of the third nodes to generate first transform coefficients (e.g., S9101inFIG.78); performs the transform process on the predicted values of the third nodes to generate second transform coefficients (e.g., S9103); calculates difference values between corresponding ones of the first transform coefficients and the second transform coefficients (e.g., S9104); and encodes the difference values calculated (e.g., S9106). 
For example, the transform process is an integer-to-integer transform (e.g., integer Haar transform). In the generating of the second transform coefficients, the three-dimensional data encoding device discards fractional portions of the predicted values of the third nodes (e.g., S9107inFIG.80), and performs the transform process on the predicted values after the discarding to generate the second transform coefficients (e.g., S9103A). For example, the transform process is an integer-to-integer transform (e.g., integer Haar transform). In the calculating of the difference values, the three-dimensional data encoding device discards fractional portions of the second transform coefficients (e.g., S9108inFIG.82), and calculates the difference values using the second transform coefficients after the discarding (S9104). For example, the three-dimensional data encoding device includes a processor and memory. Using the memory, the processor performs the above-described process. The three-dimensional data decoding device according to the present embodiment performs the process shown byFIG.89. First, the three-dimensional data decoding device obtains, from a bitstream, a difference value between an item of attribute information and a predicted value of a current node in an N-ary tree structure (e.g., an octree structure) of three-dimensional points included in point cloud data, N being an integer greater than or equal to 2 (S9151). Next, the three-dimensional data decoding device generates the predicted value (S9152). Finally, the three-dimensional data decoding device decodes the item of attribute information of the current node using the difference value, the predicted value, and an inverse transform process of a transform process (e.g., RAHT, Haar transform, or integer Haar transform) that hierarchically repeats an operation for separating each of input signals into a high-frequency component and a low-frequency component (S9153). In the generating of the predicted value (S9152), the three-dimensional data decoding device selects, from items of attribute information of first nodes (e.g., a parent node group), an item of attribute information of a node among the first nodes which is to be used in generating the predicted value of the current node, the first nodes including a parent node of the current node and belonging to a same layer as the parent node. The three-dimensional data decoding device thus can appropriately select an item of attribute information which is to be used in generating the predicted value, so that coding efficiency can be improved. For example, in the selecting, the three-dimensional data decoding device selects whether to use an item of attribute information of a second node directly as the predicted value or to calculate the predicted value from the items of attribute information of the first nodes, the second node being included in the first nodes. For example, the second node is the parent node. For example, the three-dimensional data decoding device obtains, from the bitstream, difference values of third nodes (e.g., child node group) that include the current node and belong to a same layer as the current node (e.g., S9111inFIG.79), and generates predicted values of the third nodes (S9113). 
In the decoding (S9153), the three-dimensional data decoding device performs the transform process on the predicted values of the third nodes to generate second transform coefficients (e.g., S9114); adds a difference value among the difference values and a second transform coefficient among the second transform coefficients to generate each of first transform coefficients, the difference value corresponding to the second transform coefficient (e.g., S9115); and performs the inverse transform process on the first transform coefficients to generate items of attribute information of the third nodes (e.g., S9116). For example, the transform process is an integer-to-integer transform (e.g., integer Haar transform). In the generating of the second transform coefficients, the three-dimensional data decoding device discards fractional portions of the predicted values of the third nodes (e.g., S9117inFIG.81), and performs the transform process on the predicted values after the discarding to generate the second transform coefficients (e.g., S9114A). For example, the transform process is an integer-to-integer transform (e.g., integer Haar transform). In the generating of the first transform coefficients, the three-dimensional data decoding device discards fractional portions of the second transform coefficients (e.g., S9118inFIG.83), and generates the first transform coefficients using the second transform coefficients after the discarding (e.g., S9115). For example, the three-dimensional data decoding device includes a processor and memory. Using the memory, the processor performs the above-described process. Embodiment 11 In this embodiment, a region adaptive hierarchical transform (RAHT) process or Haar process based on prediction will be described.FIG.90is a diagram for describing a prediction process, which illustrates a hierarchical structure in a RAHT or Haar process. In a hierarchical encoding based on RAHT or Haar transform, the three-dimensional data encoding device performs a hierarchical predictive encoding, which predicts an attribute value (attribute information) for a lower layer from an attribute value for a higher layer and encodes a difference value between the attribute information and a predicted value obtained by prediction. The three-dimensional data encoding device adaptively switches, based on a certain condition, whether to perform the hierarchical predictive encoding or not. The certain condition may be the condition described below. Condition 1 is that the number of valid nodes is greater than a previously determined threshold (THnode). The three-dimensional data encoding device applies the hierarchical predictive encoding when the number of valid nodes is greater than the threshold, and does not apply the hierarchical predictive encoding when the number of valid nodes is smaller than or equal to the threshold. Here, a valid node is a node having an attribute value used for prediction among a plurality of nodes (parent nodes and neighboring parent nodes) included in a parent node group for a child node group that is to be encoded. In other words, a valid node is a node that includes a three-dimensional point or a node a descendant node (a child node, a grandchild node or the like) of which includes a three-dimensional point. Note that a child node group includes a plurality of nodes (child nodes) to be encoded. A parent node group includes a parent node and a plurality of neighboring parent nodes. 
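As an illustrative sketch of the "valid node" definition given above (a node that includes a three-dimensional point, or whose descendant includes one), counting the valid nodes of a parent node group might look as follows; the `Node` structure is a hypothetical minimal tree representation, not part of the embodiment.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Minimal tree node: occupancy flag plus child nodes (hypothetical structure)."""
    has_point: bool = False
    children: list = field(default_factory=list)

def is_valid_node(node):
    # A valid node contains a three-dimensional point itself or in some descendant.
    return node.has_point or any(is_valid_node(c) for c in node.children)

def valid_node_count(parent_node_group):
    # Total number of valid nodes among the parent node and the neighboring parent nodes.
    return sum(1 for n in parent_node_group if is_valid_node(n))
```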
A neighboring parent node is a node that belongs to the same layer as the parent node and is neighboring to the parent node. For example, in the example shown inFIG.90, the valid node count, which is the total number of valid nodes included in the parent node group, is 11. For example, when THnode=5, the valid node count (=11)>THnode, and therefore, the three-dimensional data encoding device encodes the child node group using the hierarchical predictive encoding. When THnode=12, the valid node count (=11)<=THnode, and therefore, the three-dimensional data encoding device does not encode the child node group using the hierarchical predictive encoding. In this way, when the valid node count is greater than the threshold, the three-dimensional data encoding device can generate a predicted value of high precision using the attribute value of the parent node or a neighboring parent node and therefore can improve the encoding efficiency by applying the hierarchical predictive encoding. When the valid node count is smaller than or equal to the threshold, the three-dimensional data encoding device does not apply the hierarchical predictive encoding and therefore can reduce the processing amount. Note that when the three-dimensional data encoding device does not apply the hierarchical predictive encoding, the three-dimensional data encoding device applies the RAHT or Haar transform to the attribute value of the child node group and entropy-encodes the resulting transform coefficient. FIG.91is a diagram illustrating a first example of the encoding process. When calculating the difference value between the attribute value and the predicted value, the three-dimensional data encoding device applies the RAHT or Haar transform to each of the attribute value and the predicted value to calculate a transform coefficient for the attribute value and a transform coefficient for the predicted value. The three-dimensional data encoding device determines the difference value between the transform coefficient for the attribute value and the transform coefficient for the predicted value. In this way, the difference value to be encoded can be made smaller, and therefore, the encoding efficiency can be improved. Note that when the predicted value is to be selected from among a calculated value and the attribute value for the parent node or a neighboring parent node (when a prediction mode is to be added to the bitstream), the three-dimensional data encoding device may calculate difference values of the transform coefficients from predicted values for each prediction mode, determine a cost value from the absolute sum of the difference values, and select a prediction mode for which the cost value is the smallest, for example. In this way, the prediction mode for which the difference value is the smallest can be appropriately selected, and the encoding efficiency can be improved. For example, the three-dimensional data encoding device can calculate cost value cost according to the following equation (Equation R1). [Math. 34] $\mathrm{cost} = \sum_{i=0}^{m} \lvert T_i - PT_i \rvert + \lambda \times \mathrm{Predbit}$ (Equation R1) Here, m represents the number of child nodes included in a child node group. λ represents an adjustment parameter. Predbit represents the amount of bits for encoding a prediction mode. Ti represents a transform coefficient for an attribute value, and PTi represents a transform coefficient for a predicted value.
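As a hedged sketch of the mode selection described above, the cost of Equation R1 can be evaluated per candidate prediction mode and the cheapest mode chosen; the list-based interface is an assumption made only for illustration.

```python
def prediction_mode_cost(T, PT, lam, predbit):
    """Equation R1: sum of absolute coefficient differences plus a rate term
    for signalling the prediction mode."""
    return sum(abs(t - pt) for t, pt in zip(T, PT)) + lam * predbit

def select_prediction_mode(T, candidate_PTs, lam, predbits):
    """Return the index of the prediction mode with the smallest cost.

    candidate_PTs[k] holds the transform coefficients PTi of the predicted
    values for mode k; predbits[k] is the bit amount for signalling mode k.
    """
    costs = [prediction_mode_cost(T, PT, lam, pb)
             for PT, pb in zip(candidate_PTs, predbits)]
    return min(range(len(costs)), key=costs.__getitem__)
```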
Note that the method of selecting a prediction mode is not limited to this, and a prediction mode can be selected based on other conditions or instructions from the outside, for example. As shown inFIG.91, the three-dimensional data encoding device calculates transform coefficients Ti for the attribute values for the child node group by applying the RAHT or Haar transform to the attribute values (S9501). In the example shown inFIG.91, the valid node count is 2, and the valid node count (=2)<=THnode (11, for example). In this case, the three-dimensional data encoding device does not perform generation of a predicted value. For example, the three-dimensional data encoding device uses a predicted value=0. The three-dimensional data encoding device then calculates difference values that are the differences between transform coefficients Ti for the attribute values and the predicted value=0 (S9502). That is, the three-dimensional data encoding device directly outputs transform coefficients Ti as the difference values. The three-dimensional data encoding device then quantizes the difference values (transform coefficients Ti) (S9503), and arithmetically encodes the quantized difference values (S9504), thereby generating encoded data (bitstream). Note that when using a lossless encoding, the three-dimensional data encoding device may skip the quantization (S9503). Note that when the valid node count>THnode, the three-dimensional data encoding device generates a predicted value for the child node group by predicting an attribute value for the child node group from an attribute value for the parent node group. The three-dimensional data encoding device then calculates transform coefficient PTi for the predicted value by applying the RAHT or Haar transform to the predicted value. In step S9502, the three-dimensional data encoding device also calculates difference values that are the differences between the transform coefficients Ti for the attribute values and transform coefficient PTi for the predicted value. Next, an example of a decoding process of decoding the encoded data (bitstream) generated in the first example of the encoding process described above will be described.FIG.92is a diagram illustrating a first example of the decoding process. First, the three-dimensional data decoding device arithmetically decodes the encoded data (bitstream) (S9511), and inverse-quantizes the resulting signal (S9512) to generate difference values of the transform coefficients for the child node group. Note that, when using a lossless decoding (when a lossless encoding is used), the inverse quantization (S9512) may be skipped. In the example shown inFIG.92, the valid node count is 2, and the valid node count (=2)<=THnode (11, for example). In this case, the three-dimensional data decoding device does not perform generation of a predicted value. For example, the three-dimensional data decoding device uses a predicted value=0. The three-dimensional data decoding device then calculates transform coefficients Ti for the child node group by adding the predicted value=0 to the difference values of the transform coefficients for the child node group (S9513). That is, the three-dimensional data decoding device directly outputs the difference values as transform coefficients Ti. The three-dimensional data decoding device then applies inverse RAHT or inverse Haar transform to transform coefficients Ti for the child node group to generate decoded values of the attribute values for the child node group (S9514).
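By way of illustration only, the encoder path of the first example (FIG.91) might be sketched as follows; the callables `raht`, `count_valid`, `predict`, `quantize`, and `arith_encode` are hypothetical placeholders for the primitives described above, not an actual implementation of the embodiment.

```python
def encode_child_group(attrs, parent_group, th_node, raht, count_valid,
                       predict, quantize, arith_encode, lossless=False):
    """Sketch of the first encoding example (FIG. 91)."""
    # S9501: transform the child attribute values into coefficients Ti.
    T = raht(attrs)

    if count_valid(parent_group) > th_node:
        # Hierarchical predictive encoding: coefficients PTi of the predicted values.
        PT = raht(predict(parent_group))
    else:
        # No prediction is generated; the predicted value is treated as 0.
        PT = [0] * len(T)

    # S9502: differences between attribute coefficients and prediction coefficients.
    diffs = [t - pt for t, pt in zip(T, PT)]

    # S9503: quantization (may be skipped for lossless encoding).
    if not lossless:
        diffs = [quantize(d) for d in diffs]

    # S9504: arithmetic encoding of the (quantized) difference values.
    return arith_encode(diffs)
```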
In this way, the three-dimensional data decoding device can appropriately decode the bitstream. Note that when the valid node count>THnode, the three-dimensional data decoding device generates a predicted value by predicting an attribute value for the child node group from an attribute value for the parent node group. Note that when the three-dimensional data encoding device has selected whether to use a calculated value or an attribute value for the parent node or a neighboring parent node for generation of the predicted value (when a prediction mode is added to the bitstream), the three-dimensional data decoding device generates predicted values using the decoded prediction mode. The three-dimensional data decoding device then calculates transform coefficient PTi for the predicted value by applying the RAHT or Haar transform to the predicted value. In step S9513, the three-dimensional data decoding device also calculates transform coefficients Ti for the child node group by adding transform coefficient PTi for the predicted value to the difference values of the transform coefficients for the child node group. FIG.93is a diagram illustrating an example syntax of an attribute information header (attribute header). The attribute information header is a header of attribute information included in a bitstream, and is a header for a frame or for a plurality of frames. As shown inFIG.93, the attribute information header includes RAHTPredictionFlag (hierarchical predictive encoding flag) and THnode (first threshold information). RAHTPredictionFlag is information for switching whether or not to apply the hierarchical predictive encoding, which predicts an attribute value for a lower layer from an attribute value for a higher layer, in the hierarchical encoding based on RAHT or Haar. RAHTPredictionFlag=1 indicates that the hierarchical predictive encoding is to be applied. RAHTPredictionFlag=0 indicates that the hierarchical predictive encoding is not to be applied. THnode is information for switching whether or not to apply the hierarchical encoding for each child node group. THnode is added to the attribute information header when RAHTPredictionFlag=1, and is not added to the attribute information header when RAHTPredictionFlag=0. The hierarchical predictive encoding is applied when the valid node count of the parent node group is greater than THnode, and is not applied when the valid node count is smaller than or equal to THnode. FIG.94is a diagram illustrating another example syntax of the attribute information header. The attribute information header shown in FIG.94does not include RAHTPredictionFlag, although the attribute information header includes THnode. When the minimum value of the valid node count of the parent node is 1, the three-dimensional data encoding device can always apply the hierarchical predictive encoding for each child node group by setting THnode=0. Therefore, in this case, RAHTPredictionFlag can be omitted. Therefore, the data size of the header can be reduced. Note that the three-dimensional data encoding device may add RAHTPredictionFlag or THnode to the header after entropy-encoding RAHTPredictionFlag or THnode. For example, the three-dimensional data encoding device may binarize and arithmetically encode each value. Alternatively, the three-dimensional data encoding device may encode each value with a fixed length in order to reduce the processing amount. RAHTPredictionFlag or THnode does not always have to be added to the header. 
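As a hedged sketch of the attribute information header ofFIG.93, the conditional presence of THnode behind RAHTPredictionFlag can be serialized and parsed as below. The `write_uint`/`read_uint` interface and the bit widths are assumptions made for illustration; the actual syntax element coding (including the fixed-length or arithmetic coding options mentioned above) is not reproduced here.

```python
def write_attribute_header(bitwriter, raht_prediction_flag, th_node):
    """Serialize the FIG. 93 style attribute information header (sketch)."""
    bitwriter.write_uint(raht_prediction_flag, 1)
    if raht_prediction_flag == 1:
        # THnode is present only when the hierarchical predictive encoding may be applied.
        bitwriter.write_uint(th_node, 8)

def read_attribute_header(bitreader):
    flag = bitreader.read_uint(1)
    th_node = bitreader.read_uint(8) if flag == 1 else None
    return flag, th_node
```

In the FIG.94 variant, the flag is omitted entirely and only THnode is coded, with THnode=0 forcing the predictive encoding on for every child node group.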
For example, the value of RAHTPredictionFlag or THnode may be defined by profile or level of a standard or the like. In this way, the bit amount of the header can be reduced. FIG.95is a flowchart of a three-dimensional data encoding process (a switching process for the hierarchical predictive encoding). First, the three-dimensional data encoding device calculates the valid node count of the parent node group (S9521). The three-dimensional data encoding device then determines whether the valid node count is greater than THnode (S9522). When the valid node count is greater than THnode (if Yes in S9522), the three-dimensional data encoding device performs the hierarchical predictive encoding of the attribute values of the child node group (S9523). On the other hand, when the valid node count is smaller than or equal to THnode (if No in S9522), the three-dimensional data encoding device performs a hierarchical non-predictive encoding of the attribute values of the child node group (S9524). The hierarchical non-predictive encoding is an encoding that does not use the hierarchical predictive encoding, and is an encoding that includes no prediction process, for example. FIG.96is a flowchart of a three-dimensional data decoding process (a switching process for the hierarchical predictive decoding). First, the three-dimensional data decoding device calculates the valid node count of the parent node group (S9531). The three-dimensional data decoding device then determines whether the valid node count is greater than THnode (S9532). When the valid node count is greater than THnode (if Yes in S9532), the three-dimensional data decoding device performs a hierarchical predictive decoding of the attribute values of the child node group (S9533). Here, the hierarchical predictive decoding is a process of decoding a signal generated by the hierarchical predictive encoding described above. That is, in the hierarchical predictive decoding, a decoded value (attribute value) is generated by adding a predicted value obtained by prediction to a decoded difference value. On the other hand, when the valid node count is smaller than or equal to THnode (if No in S9532), the three-dimensional data decoding device performs a hierarchical non-predictive decoding of the attribute values of the child node group (S9534). Here, the hierarchical non-predictive decoding is a process of decoding a signal generated by the hierarchical non-predictive encoding described above. The hierarchical non-predictive decoding is a decoding that does not use the hierarchical predictive decoding, and is a decoding that includes no prediction process, for example. Next, a second example of the encoding process will be described. As a condition for switching whether to use the hierarchical predictive encoding or not, condition 2 described below may be used. Condition 2 is that the valid node count of a grandparent node group is greater than a previously determined threshold (THpnode). The three-dimensional data encoding device applies the hierarchical predictive encoding when the valid node count of the grandparent node group is greater than the threshold, and does not apply the hierarchical predictive encoding when the valid node count of the grandparent node group is smaller than or equal to the threshold. Here, the grandparent node group includes a grandparent node and a neighboring node of the grandparent node. That is, the grandparent node group includes a parent node of a parent node and a parent node of a neighboring parent node. 
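The condition-1 switching of FIG.95 and FIG.96 described above can be sketched as a pair of mirrored decisions; because both sides derive the same valid node count from already-reconstructed data, no per-group signalling is needed. The encode/decode callables are placeholders for the hierarchical predictive and non-predictive processes.

```python
def encode_child_group_cond1(child_group, parent_group, th_node, count_valid,
                             predictive_encode, non_predictive_encode):
    """Switching process of FIG. 95 (condition 1) on the encoder side."""
    # S9521-S9522: count valid nodes of the parent node group and compare with THnode.
    if count_valid(parent_group) > th_node:
        return predictive_encode(child_group, parent_group)   # S9523
    return non_predictive_encode(child_group)                 # S9524

def decode_child_group_cond1(bitstream, parent_group, th_node, count_valid,
                             predictive_decode, non_predictive_decode):
    """Mirror of FIG. 96 on the decoder side; the same test keeps both in sync."""
    if count_valid(parent_group) > th_node:
        return predictive_decode(bitstream, parent_group)      # S9533
    return non_predictive_decode(bitstream)                    # S9534
```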
Here, the grandparent node is a parent node of a parent node of a current node. That is, the valid node count of the grandparent node is the valid node count of an encoded parent node. Condition 1 and condition 2 may be combined. That is, it is possible that the three-dimensional data encoding device applies the hierarchical predictive encoding when the valid node count of the grandparent node group is greater than threshold THpnode and the valid node count of the parent node is greater than threshold THnode, and does not apply the hierarchical predictive encoding otherwise. FIG.97is a diagram illustrating a second example of the encoding process. For example, in the example shown inFIG.97, the valid node count of the grandparent node is 3. For example, when THpnode=1, the valid node count of the grandparent node (=3)>THpnode, and therefore, the three-dimensional data encoding device encodes the child node group using the hierarchical predictive encoding. When THpnode=5, the valid node count (=3)<=THpnode, and therefore, the three-dimensional data encoding device does not encode the child node group using the hierarchical predictive encoding. In this way, when the valid node count of the grandparent node group is greater than the threshold, the three-dimensional data encoding device can generate a predicted value of high precision using the attribute value of the parent node or a neighboring parent node and therefore can improve the encoding efficiency by applying the hierarchical predictive encoding. When the valid node count of the grandparent node group is smaller than or equal to the threshold, the three-dimensional data encoding device does not apply the hierarchical predictive encoding and therefore can reduce the processing amount. Note that when the three-dimensional data encoding device does not apply the hierarchical predictive encoding, the three-dimensional data encoding device applies the RAHT or Haar transform to the attribute value of the child node group and entropy-encodes the resulting transform coefficient, for example. Note that when the valid node count of the grandparent node group is smaller than or equal to the threshold, the three-dimensional data encoding device does not apply the hierarchical predictive encoding and therefore does not have to perform the process of determining a parent node group and calculating the valid node count of the parent node group. In this way, the processing amount can be reduced. Note that when the three-dimensional data encoding device does not calculate the valid node count of the parent node group, the three-dimensional data encoding device may set the valid node count of the parent node group at 0 so that the hierarchical predictive encoding is not applied to the child nodes of the child node group to be encoded. In this way, the processing amount can be reduced. The three-dimensional data encoding device may refer to the valid node count of a higher layer than the grandparent node. FIG.98is a diagram illustrating an example syntax of the attribute information header in the second example. The attribute information header shown inFIG.98includes THpnode (second threshold information) in addition to the components of the attribute information header shown inFIG.93. THpnode is information for switching whether or not to apply the hierarchical encoding for each child node group. THpnode is added to the attribute information header when RAHTPredictionFlag=1, and is not added to the attribute information header when RAHTPredictionFlag=0.
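The early exit described above, in which the parent node group need not even be determined when the grandparent test fails, might be sketched as follows; `get_parent_group` and the other callables are hypothetical placeholders.

```python
def encode_child_group_cond1_and_2(child_group, grandparent_group, th_pnode, th_node,
                                   get_parent_group, count_valid,
                                   predictive_encode, non_predictive_encode):
    """Combination of conditions 1 and 2 with the early exit (sketch)."""
    # S9525: check the grandparent node group first.
    if count_valid(grandparent_group) <= th_pnode:
        # Early exit: the parent node group is not determined and its valid node
        # count is not calculated, which saves processing.
        return non_predictive_encode(child_group)

    # S9521-S9522: condition 1 is evaluated only when condition 2 passes.
    parent_group = get_parent_group(child_group)
    if count_valid(parent_group) > th_node:
        return predictive_encode(child_group, parent_group)
    return non_predictive_encode(child_group)
```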
The hierarchical predictive encoding is applied when the valid node count of the grandparent node group is greater than THpnode, and is not applied when the valid node count of the grandparent node group is smaller than or equal to THpnode. FIG.99is a diagram illustrating another example syntax of the attribute information header. The attribute information header shown inFIG.99includes THpnode (second threshold information) in addition to the components of the attribute information header shown inFIG.94. When the minimum value of the valid node count of the grandparent node is 1, the hierarchical predictive encoding can be always applied for each child node group by setting THpnode=0. Therefore, in this case, RAHTPredictionFlag can be omitted. Therefore, the data size of the header can be reduced. Note that the three-dimensional data encoding device may add RAHTPredictionFlag, THpnode, or THnode to the header after entropy-encoding RAHTPredictionFlag, THpnode, or THnode. For example, the three-dimensional data encoding device may binarize and arithmetically encode each value. Alternatively, the three-dimensional data encoding device may encode each value with a fixed length in order to reduce the processing amount. RAHTPredictionFlag, THpnode, or THnode does not always have to be added to the header. For example, the value of RAHTPredictionFlag, THpnode, or THnode may be defined by profile or level of a standard or the like. In this way, the bit amount of the header can be reduced. FIG.100is a flowchart of a three-dimensional data encoding process (a switching process for the hierarchical predictive encoding) in the second example. The process shown inFIG.100is the process shown inFIG.95additionally including step S9525. First, the three-dimensional data encoding device determines whether the valid node count of the grandparent node group is greater than THpnode (S9525). When the valid node count of the grandparent node group is greater than THpnode (if Yes in S9525), the same processings as those inFIG.95(S9521to S9524) are performed. On the other hand, when the valid node count of the grandparent node group is smaller than or equal to THpnode (if No in S9525), the three-dimensional data encoding device performs the hierarchical non-predictive encoding of the attribute values of the child node group (S9524). Note that when the valid node count of the grandparent node group is smaller than or equal to THpnode (if No in S9525), the three-dimensional data encoding device may set the valid node count of the parent node group at 0. When the valid node count of the grandparent node group is greater than THpnode (if Yes in S9525), the three-dimensional data encoding device may perform the hierarchical predictive encoding of the attribute values of the child node group (S9523) without performing steps S9521to S9522. FIG.101is a flowchart of a three-dimensional data decoding process (a switching process for the hierarchical predictive decoding) in the second example. The process shown inFIG.101is the process shown inFIG.96additionally including step S9535. First, the three-dimensional data decoding device determines whether the valid node count of the grandparent node group is greater than THpnode (S9535). When the valid node count of the grandparent node group is greater than THpnode (if Yes in S9535), the same processings as those inFIG.96(S9531to S9534) are performed. 
On the other hand, when the valid node count of the grandparent node group is smaller than or equal to THpnode (if No in S9535), the three-dimensional data decoding device performs the hierarchical non-predictive decoding of the attribute values of the child node group (S9534). Note that when the valid node count of the grandparent node group is smaller than or equal to THpnode (if No in S9535), the three-dimensional data decoding device may set the valid node count of the parent node group at 0. When the valid node count of the grandparent node group is greater than THpnode (if Yes in S9535), the three-dimensional data decoding device performs the hierarchical predictive decoding of the attribute values of the child node group (S9533) without performing steps S9531to S9532. Next, a third example of the encoding process will be described. As a condition for switching whether to use the hierarchical predictive encoding or not, condition 3 described below may be used. Condition 3 is that the layer to which the child node group belongs is greater than a threshold (THlayer). The three-dimensional data encoding device applies the hierarchical predictive encoding when the layer to which the child node group belongs is greater than the threshold, and does not apply the hierarchical predictive encoding when the layer to which the child node group belongs is smaller than or equal to the threshold. Here, the layer is a layer for which the hierarchical encoding of the child node group to be encoded is performed with the RAHT or Haar transform, and corresponds to a value assigned to each layer. FIG.102is a diagram illustrating a third example of the encoding process. For example, in the example shown inFIG.102, the layer to which the parent node group belongs is 4, and the layer to which the child node group belongs is 1. For example, when THlayer=0, layer (=1)>THlayer, and therefore, the three-dimensional data encoding device encodes the child node group using the hierarchical predictive encoding. When THlayer=4, layer (=1)<=THlayer, and therefore, the three-dimensional data encoding device does not encode the child node group using the hierarchical predictive encoding. In this way, for child nodes belonging to a higher layer, for which the valid node count tends to be greater, the three-dimensional data encoding device can generate a predicted value of high precision using the attribute value of the parent node or a neighboring parent node and therefore can improve the encoding efficiency by applying the hierarchical predictive encoding. On the other hand, when the layer is smaller than or equal to the threshold, the three-dimensional data encoding device does not apply the hierarchical predictive encoding and therefore can reduce the processing amount. Note that when the three-dimensional data encoding device does not apply the hierarchical predictive encoding, the three-dimensional data encoding device applies the RAHT or Haar transform to the attribute value of the child node group and entropy-encodes the resulting transform coefficient, for example. FIG.103is a diagram illustrating an example syntax of the attribute information header in the third example. The attribute information header shown inFIG.103includes THlayer (third threshold information) in addition to the components of the attribute information header shown inFIG.98. THlayer is information for switching whether or not to apply the hierarchical predictive encoding for each layer to which the child nodes belong. 
THlayer is added to the attribute information header when RAHTPredictionFlag=1, and is not added to the attribute information header when RAHTPredictionFlag=0. The hierarchical predictive encoding is applied when the layer to which the child node group belongs is greater than THlayer, and is not applied when the layer to which the child node group belongs is smaller than or equal to THlayer. FIG.104is a diagram illustrating another example syntax of the attribute information header. The attribute information header shown inFIG.104includes THlayer (third threshold information) in addition to the components of the attribute information header shown inFIG.99. By setting THlayer at the minimum value (−1, for example) of the layers, the hierarchical encoding can be always applied for each layer. Therefore, in this case, RAHTPredictionFlag can be omitted. Therefore, the data size of the header can be reduced. Note that THlayer [i] may be provided for each layer i for the RAHT or Haar transform, thereby indicating, for each layer, whether the hierarchical predictive encoding has been applied or not. In this way, an optimal threshold can be selected for each layer, and therefore, the encoding efficiency can be improved. Note that the three-dimensional data encoding device may add RAHTPredictionFlag, THlayer, THpnode, or THnode to the header after entropy-encoding RAHTPredictionFlag, THlayer, THpnode, or THnode. For example, the three-dimensional data encoding device may binarize and arithmetically encode each value. Alternatively, the three-dimensional data encoding device may encode each value with a fixed length in order to reduce the processing amount. RAHTPredictionFlag, THlayer, THpnode, or THnode does not always have to be added to the header. For example, the value of RAHTPredictionFlag, THlayer, THpnode, or THnode may be defined by profile or level of a standard or the like. In this way, the bit amount of the header can be reduced. FIG.105is a flowchart of a three-dimensional data encoding process (a switching process for the hierarchical predictive encoding) in the third example. The process shown inFIG.105is the process shown inFIG.100additionally including step S9526. First, the three-dimensional data encoding device determines whether the layer of the child node group is greater than THlayer (S9526). When the layer of the child node group is greater than THlayer (if Yes in S9526), the same processings as those inFIG.100(S9525and the following steps) are performed. On the other hand, when the layer of the child node group is smaller than or equal to THlayer (if No in S9526), the three-dimensional data encoding device performs the hierarchical non-predictive encoding of the attribute values of the child node group (S9524). When the layer of the child node group is greater than THlayer (if Yes in S9526), the three-dimensional data encoding device may perform the hierarchical predictive encoding of the attribute values of the child node group (S9523) without performing steps S9525and S9521to S9522. Alternatively, the three-dimensional data encoding device may perform one of step S9525and steps S9521to S9522, and need not perform the other. FIG.106is a flowchart of a three-dimensional data decoding process (a switching process for the hierarchical predictive decoding) in the third example. The process shown inFIG.106is the process shown inFIG.101additionally including step S9536. 
First, the three-dimensional data decoding device determines whether the layer of the child node group is greater than THlayer (S9536). When the layer of the child node group is greater than THlayer (if Yes in S9536), the same processings as those inFIG.101(S9535and the following steps) are performed. On the other hand, when the layer of the child node group is smaller than or equal to THlayer (if No in S9536), the three-dimensional data decoding device performs the hierarchical non-predictive decoding of the attribute values of the child node group (S9534). When the layer of the child node group is greater than THlayer (if Yes in S9536), the three-dimensional data decoding device may perform the hierarchical predictive decoding of the attribute values of the child node group (S9533) without performing steps S9535and S9531to S9532. Alternatively, the three-dimensional data decoding device may perform one of step S9535and steps S9531to S9532, and need not perform the other. In the following, modifications of this embodiment will be described. In this embodiment, examples have been shown above in which when performing the hierarchical encoding using the RAHT or Haar transform, whether to apply the hierarchical predictive encoding or not is switched based on any one or a combination of conditions 1, 2, and 3. However, the present invention is not necessarily limited to this, and whether to apply the hierarchical predictive encoding or not can be switched in any manner. For example, it is possible that the three-dimensional data encoding device applies the hierarchical predictive encoding when the absolute sum of the difference values of the transform coefficients of the child node group or the cost value is smaller than a threshold, and does not apply the hierarchical predictive encoding otherwise. In that case, the three-dimensional data encoding device may generate, for each child node group, information (PredFlag, for example) that indicates whether the hierarchical predictive encoding has been applied or not, and add the generated information to the bitstream. For example, PredFlag=1 when the hierarchical predictive encoding has been applied, and PredFlag=0 when the hierarchical predictive encoding has not been applied. In this way, the three-dimensional data decoding device can determine whether or not to apply the hierarchical encoding for each child node by decoding PredFlag in the bitstream, and therefore can appropriately decode the bitstream. Note that the three-dimensional data encoding device may arithmetically encode PredFlag after binarizing PredFlag. Alternatively, the three-dimensional data encoding device may change the encoding table for arithmetically encoding binarized data of PredFlag according to condition 1, 2, or 3. That is, the three-dimensional data encoding device may change the encoding table according to the valid node count of the parent node group, the valid node count of the grandparent node, or the layer to which the child node group belongs. In this way, the encoding efficiency can be improved by the hierarchical predictive encoding, while reducing the bit amount of PredFlag. Note that the three-dimensional data encoding device may combine the method that switches whether or not to apply the hierarchical predictive encoding based on any one or a combination of conditions 1, 2, and 3 and the method that uses PredFlag. In this way, the encoding efficiency can be improved.
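As a purely illustrative sketch of the PredFlag-based modification, an encoding table (context) for the binarized flag could be derived from conditions 1 to 3. The text only states that the table may be changed according to these quantities; the particular 3-bit mapping below is an assumption made for illustration, and `arith_encoder.encode_bit` is a hypothetical interface.

```python
def choose_predflag_context(parent_valid, grandparent_valid, layer,
                            th_node, th_pnode, th_layer):
    """Pick one of eight contexts for coding PredFlag from conditions 1-3
    (the mapping itself is an illustrative assumption)."""
    return ((parent_valid > th_node) << 2) | \
           ((grandparent_valid > th_pnode) << 1) | \
           int(layer > th_layer)

def encode_predflag(arith_encoder, pred_flag, context):
    # PredFlag is a single binary symbol coded with the selected context.
    arith_encoder.encode_bit(pred_flag, context)
```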
Note that the three-dimensional data decoding device may use the same approach as described above for selecting the encoding table used for arithmetic decoding of PredFlag. In this embodiment, the three-dimensional data encoding device switches whether or not to apply the hierarchical predictive encoding for each child node group or for each layer to which the child node group belongs. However, the present invention is not necessarily limited to this. For example, the three-dimensional data encoding device may switch whether or not to apply the hierarchical predictive encoding for every N child node groups or for every M layers. Alternatively, the three-dimensional data encoding device may switch whether or not to apply the hierarchical predictive encoding on a basis of a larger unit than layer, such as slice or tile. In this way, the encoding efficiency can be improved while reducing the processing amount. In this embodiment, examples have been shown above in which the three-dimensional data encoding device switches whether to apply the hierarchical encoding or not. However, the present invention is not necessarily limited to this. For example, the three-dimensional data encoding device may change the range or number of the nodes to be referred to and included in the parent node group according to the conditions described above, for example.FIG.107is a diagram illustrating an example in which the range and number of the parent nodes to be referred to are changed. For example, as shown inFIG.107, the three-dimensional data encoding device may reduce the range or number of parent node groups to be referred to when the layer to which the child node group belongs is smaller than or equal to threshold THlayer. In this way, the amount of processing, such as the calculation of the valid nodes of the parent node group, can be reduced. As stated above, the three-dimensional data encoding device according to the present embodiment performs the process shown byFIG.108. The three-dimensional data encoding device: determines whether a first valid node count is greater than a first threshold value predetermined (S9541), the first valid node count being a total number of valid nodes that are nodes each including a three-dimensional point, the valid nodes being included in first nodes belonging to a layer higher than a layer of a current node in an N-ary tree structure of three-dimensional points included in point cloud data, N being an integer greater than or equal to 2; when the first valid node count is greater than the first threshold value (YES in S9541), performs first encoding (e.g., layer predictive encoding) on attribute information of the current node (S9542), the first encoding including a prediction process in which second nodes are used, the second nodes including a parent node of the current node and belonging to a same layer as the parent node; and when the first valid node count is less than or equal to the first threshold value (NO in S9541), performs second encoding (e.g., layer non-predictive encoding) on attribute information of the current node (S9543), the second encoding not including the prediction process in which the second nodes are used. According to this configuration, since the three-dimensional data encoding device can appropriately select whether to use the first encoding including a prediction process, the three-dimensional encoding device can improve the encoding efficiency. 
For example, the first nodes (e.g., a parent node group) include the parent node and nodes belonging to the same layer as the parent node. For example, the first nodes (e.g., a grandparent node group) include a grandparent node of the current node and nodes belonging to a same layer as the grandparent node. For example, in the second encoding, a predicted value of attribute information of the current node is set to zero. For example, the three-dimensional data encoding device further generates a bitstream including attribute information of the current node encoded and first information (e.g., RAHTPredictionFlag) indicating whether the first encoding is applicable. For example, the three-dimensional data encoding device performs the process shown byFIG.108when the first information indicates that the first encoding is applicable; and performs the second encoding (e.g., layer non-predictive encoding) on the attribute information of the current node when the first information fails to indicate that the first encoding is applicable (when the first information indicates that the first encoding is not applicable). For example, the three-dimensional data encoding device further generates a bitstream including attribute information of the current node encoded and second information (e.g., THpnode or THnode) indicating the first threshold value. For example, the three-dimensional data encoding device: determines whether a second valid node count is greater than a second threshold value predetermined, the second valid node count being a total number of valid nodes included in second nodes (e.g., a grandparent node group) including a grandparent node of the current node and nodes belonging to a same layer as the grandparent node; when the first valid node count is greater than the first threshold value, and the second valid node count is greater than the second threshold value, performs the first encoding on attribute information of the current node; and when the first valid node count is less than or equal to the first threshold value or the second valid node count is less than or equal to the second threshold value, performs the second encoding on attribute information of the current node. For example, the three-dimensional data encoding device includes a processor and memory, and the processor performs the above process using the memory. The three-dimensional data decoding device according to the present embodiment performs the process shown byFIG.109.
The three-dimensional data decoding device: determines whether a first valid node count is greater than a first threshold value predetermined (S9551), the first valid node count being a total number of valid nodes that are nodes each including a three-dimensional point, the valid nodes being included in first nodes belonging to a layer higher than a layer of a current node in an N-ary tree structure of three-dimensional points included in point cloud data, N being an integer greater than or equal to 2; when the first valid node count is greater than the first threshold value (YES in S9551), performs first decoding on attribute information of the current node (S9552), the first decoding including a prediction process in which second nodes are used, the second nodes including a parent node of the current node and belonging to a same layer as the parent node; and when the first valid node count is less than or equal to the first threshold value (NO in S9551), performs second decoding on attribute information of the current node (S9553), the second decoding not including the prediction process in which the second nodes are used. According to this configuration, since the three-dimensional data decoding device can appropriately select whether to use the first decoding including a prediction process, the three-dimensional decoding device can improve the encoding efficiency. For example, the first nodes (e.g., a parent node group) include the parent node and nodes belonging to the same layer as the parent node. For example, the first nodes (e.g., a grandparent node group) include a grandparent node of the current node and nodes belonging to a same layer as the grandparent node. For example, in the second encoding, a predicted value of attribute information of the current node is set to zero. For example, the three-dimensional data decoding device further obtains first information (e.g., RAHTPredictionFlag) indicating whether the first decoding is applicable, from a bitstream including attribute information of the current node encoded. For example, the three-dimensional data decoding device performs the process shown byFIG.109when the first information indicates that the first decoding is applicable; and performs the second decoding (e.g., layer non-predictive encoding) on the attribute information of the current node when the first information fails to indicate that the first decoding is applicable (when the first information indicates that the first decoding is not applicable). For example, the three-dimensional data decoding device obtains second information (e.g., THpnode or THnode) indicating the first threshold value, from a bitstream including attribute information of the current node encoded. 
For example, the three-dimensional data decoding device: determines whether a second valid node count is greater than a second threshold value predetermined, the second valid node count being a total number of valid nodes included in second nodes (e.g., a grandparent node group) including a grandparent node of the current node and nodes belonging to a same layer as the grandparent node; when the first valid node count is greater than the first threshold value, and the second valid node count is greater than the second threshold value, performs the first decoding on attribute information of the current node; and when the first valid node count is less than or equal to the first threshold value or the second valid node count is less than or equal to the second threshold value, performs the second decoding on attribute information of the current node. For example, the three-dimensional data decoding device includes a processor and memory, and the processor performs the above process using the memory. Note that although it has been described here that when the first valid node count is greater than the first threshold value, the three-dimensional data encoding device performs the first encoding on the attribute information of the current node, the first encoding including a prediction process in which a plurality of second nodes are used, and the second nodes including a parent node of the current node and belonging to the same layer as the parent node, the same approach can be taken when the first valid node count is greater than or equal to the first threshold. That is, when the encoding including the prediction process is performed when the first valid node count of the parent node group is greater than or equal to the first threshold, the three-dimensional data encoding device performs the second encoding that does not include the prediction process on the attribute information of the current node when the first valid node count of the parent node group is smaller than the first threshold. Similarly, although it has been described that when the first valid node count is greater than the first threshold value, the three-dimensional data decoding device performs the first decoding on the attribute information of the current node, the first decoding including a prediction process in which a plurality of second nodes are used, and the second nodes including a parent node of the current node and belonging to the same layer as the parent node, the same approach can be taken when the first valid node count is greater than or equal to the first threshold. That is, when the decoding including the prediction process is performed when the first valid node count of the parent node group is greater than or equal to the first threshold, the three-dimensional data decoding device performs the second decoding that does not include the prediction process on the attribute information of the current node when the first valid node count of the parent node group is smaller than the first threshold. A three-dimensional data encoding device, a three-dimensional data decoding device, and the like according to the embodiments of the present disclosure have been described above, but the present disclosure is not limited to these embodiments.
Note that each of the processors included in the three-dimensional data encoding device, the three-dimensional data decoding device, and the like according to the above embodiments is typically implemented as a large-scale integrated (LSI) circuit, which is an integrated circuit (IC). These may take the form of individual chips, or may be partially or entirely packaged into a single chip. Such IC is not limited to an LSI, and thus may be implemented as a dedicated circuit or a general-purpose processor. Alternatively, a field programmable gate array (FPGA) that allows for programming after the manufacture of an LSI, or a reconfigurable processor that allows for reconfiguration of the connection and the setting of circuit cells inside an LSI may be employed. Moreover, in the above embodiments, the structural components may be implemented as dedicated hardware or may be realized by executing a software program suited to such structural components. Alternatively, the structural components may be implemented by a program executor such as a CPU or a processor reading out and executing the software program recorded in a recording medium such as a hard disk or a semiconductor memory. The present disclosure may also be implemented as a three-dimensional data encoding method, a three-dimensional data decoding method, or the like executed by the three-dimensional data encoding device, the three-dimensional data decoding device, and the like. Also, the divisions of the functional blocks shown in the block diagrams are mere examples, and thus a plurality of functional blocks may be implemented as a single functional block, or a single functional block may be divided into a plurality of functional blocks, or one or more functions may be moved to another functional block. Also, the functions of a plurality of functional blocks having similar functions may be processed by single hardware or software in a parallelized or time-divided manner. Also, the processing order of executing the steps shown in the flowcharts is a mere illustration for specifically describing the present disclosure, and thus may be an order other than the shown order. Also, one or more of the steps may be executed simultaneously (in parallel) with another step. A three-dimensional data encoding device, a three-dimensional data decoding device, and the like according to one or more aspects have been described above based on the embodiments, but the present disclosure is not limited to these embodiments. The one or more aspects may thus include forms achieved by making various modifications to the above embodiments that can be conceived by those skilled in the art, as well as forms achieved by combining structural components in different embodiments, without materially departing from the spirit of the present disclosure. Although only some exemplary embodiments of the present disclosure have been described in detail above, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of the present disclosure. Accordingly, all such modifications are intended to be included within the scope of the present disclosure. INDUSTRIAL APPLICABILITY The present disclosure is applicable to a three-dimensional data encoding device and a three-dimensional data decoding device.
11861869 | Similar reference numerals may have been used in different figures to denote similar components. DESCRIPTION OF EXAMPLE EMBODIMENTS The present application describes methods of encoding and decoding point clouds, and encoders and decoders for encoding and decoding point clouds. A bit sequence signaling an occupancy pattern for sub-volumes of a volume may be coded using binary entropy coding. Contexts may be based on neighbour configuration and a partial sequence of previously-coded bits of the bit sequence. A determination is made as to whether to apply a context reduction operation and, if so, the operation reduces the number of available contexts. Example context reduction operations include reducing neighbour configurations based on shielding by sub-volumes associated with previously-coded bits, special handling for empty neighbour configurations, and statistics-based context consolidation. The reduction may be applied in advance of coding and a determination may be made during coding as to whether the circumstances for using a reduced context set are met. In one aspect, the present application provides a method of encoding a point cloud to generate a bitstream of compressed point cloud data, the point cloud being defined in a tree structure having a plurality of nodes having parent-child relationships and that represent the geometry of a volumetric space recursively split into sub-volumes and containing the points of the point cloud, wherein occupancy of sub-volumes of a volume is indicated using a bit sequence with each bit of the bit sequence indicating occupancy of a respective sub-volume in a scan order within the volume, and wherein a volume has a plurality of neighbouring volumes, a pattern of occupancy of the neighbouring volumes being a neighbour configuration. The method includes, for at least one bit in the bit sequence of the volume, determining that a context reduction condition is met and, on that basis, selecting a reduced context set that contains fewer contexts than the product of a count of neighbour configurations and a number of previously-coded bits in the sequence; selecting, for coding the at least one bit, a context from the reduced context set based on an occupancy status of at least some of the neighbouring volumes and at least one previously-coded bit of the bit sequence; entropy encoding the at least one bit based on the selected context using a binary entropy encoder to produce encoded data for the bitstream; and updating the selected context. In another aspect, the present application provides a method of decoding a bitstream of compressed point cloud data to produce a reconstructed point cloud, the point cloud being defined in a tree structure having a plurality of nodes having parent-child relationships and that represent the geometry of a volumetric space recursively split into sub-volumes and containing the points of the point cloud, wherein occupancy of sub-volumes of a volume is indicated using a bit sequence with each bit of the bit sequence indicating occupancy of a respective sub-volume in a scan order within the volume, and wherein a volume has a plurality of neighbouring volumes, a pattern of occupancy of the neighbouring volumes being a neighbour configuration. 
The method of decoding includes, for at least one bit in the bit sequence of the volume, determining that a context reduction condition is met and, on that basis, selecting a reduced context set that contains fewer contexts than the product of a count of neighbour configurations and a number of previously-coded bits in the sequence; selecting, for coding the at least one bit, a context from the reduced context set based on an occupancy status of at least some of the neighbouring volumes and at least one previously-coded bit of the bit sequence; entropy decoding the at least one bit based on the selected context using a binary entropy decoder to produce a reconstructed bit from the bitstream; and updating the selected context. In some implementations, the context reduction condition may include determining that one or more previously-coded occupancy bits is associated with one or more respective sub-volumes positioned between the sub-volume associated with the at least one bit and the one or more of the neighbouring volumes. In some cases this may include determining that four sub-volumes associated with previously-encoded bits share a face with a particular neighbour volume. In some implementations, the context reduction condition may include determining that at least four bit of the bit sequence have been previously coded. In some implementations, determining that the context reduction condition is met may include determining that the pattern of occupancy of the neighbouring volumes indicates that the plurality of neighbouring volumes is unoccupied. In some of those cases, the selected reduced context set may include a number of contexts corresponding to the number of previously-coded bits in the bit sequence and, optionally, selecting the context may include selecting the context based on a sum of previously-coded bits in the bit sequence. In some implementations, the context reduction condition may include determining that at least a threshold number of bits in the bit sequence have been previously-coded, and the reduced context set may include a look-up table mapping each possible combination of neighbour configuration and pattern of previously-coded bits in the bit sequence to the fewer contexts. In some examples, the look-up table may be generated based on an iterative grouping of available contexts into a plurality of classes on the basis of determining that a distance measurement between respective pairs of available contexts is less than a threshold value, and each class in the plurality of classes may include a respective context in the smaller set, and there may be an available contexts for each the possible combination of neighbour configuration and pattern of previously-coded bits in the bit sequence. In some implementations, at least some of the neighbouring volumes are neighbouring volumes that share at least one face with the volume. In a further aspect, the present application describes encoders and decoders configured to implement such methods of encoding and decoding. In yet a further aspect, the present application describes non-transitory computer-readable media storing computer-executable program instructions which, when executed, cause one or more processors to perform the described methods of encoding and/or decoding. In yet another aspect, the present application describes a computer-readable signal containing program instructions which, when executed by a computer, cause the computer to perform the described methods of encoding and/or decoding. 
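As an illustrative sketch of one context-reduction case described above, when the neighbour configuration indicates that all neighbouring volumes are unoccupied, the context may be chosen solely from the sum of the previously-coded occupancy bits, yielding a much smaller context set than the full product of neighbour configurations and partial bit patterns. The full-context indexing in the else-branch below is an assumption made only for contrast; the actual context tables are not specified here.

```python
def select_occupancy_context(neighbour_config, prev_bits):
    """Sketch: reduced context selection for one occupancy bit.

    neighbour_config: integer pattern of occupancy of the neighbouring volumes
                      (0 means every neighbour is unoccupied).
    prev_bits:        previously-coded bits of the current occupancy bit sequence.
    """
    if neighbour_config == 0:
        # Reduced set: one context per possible count of previously-coded 1s.
        return ("reduced", sum(prev_bits))
    # Otherwise a context keyed on both the neighbour configuration and the
    # partial pattern of previously-coded bits may be used.
    partial_pattern = sum(b << i for i, b in enumerate(prev_bits))
    return ("full", neighbour_config, len(prev_bits), partial_pattern)
```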
Other aspects and features of the present application will be understood by those of ordinary skill in the art from a review of the following description of examples in conjunction with the accompanying figures. Any feature described in relation to one aspect or embodiment of the invention may also be used in respect of one or more other aspects/embodiments. These and other aspects of the present invention will be apparent from, and elucidated with reference to, the embodiments described herein. At times in the description below, the terms “node”, “volume” and “sub-volume” may be used interchangeably. It will be appreciated that a node is associated with a volume or sub-volume. The node is a particular point on the tree that may be an internal node or a leaf node. The volume or sub-volume is the bounded physical space that the node represents. The term “volume” may, in some cases, be used to refer to the largest bounded space defined for containing the point cloud. A volume may be recursively divided into sub-volumes for the purpose of building out a tree-structure of interconnected nodes for coding the point cloud data. In the present application, the term “and/or” is intended to cover all possible combinations and sub-combinations of the listed elements, including any one of the listed elements alone, any sub-combination, or all of the elements, and without necessarily excluding additional elements. In the present application, the phrase “at least one of . . . or . . . ” is intended to cover any one or more of the listed elements, including any one of the listed elements alone, any sub-combination, or all of the elements, without necessarily excluding any additional elements, and without necessarily requiring all of the elements. A point cloud is a set of points in a three-dimensional coordinate system. The points are often intended to represent the external surface of one or more objects. Each point has a location (position) in the three-dimensional coordinate system. The position may be represented by three coordinates (X, Y, Z), which can be Cartesian or any other coordinate system. The points may have other associated attributes, such as colour, which may also be a three component value in some cases, such as R, G, B or Y, Cb, Cr. Other associated attributes may include transparency, reflectance, a normal vector, etc., depending on the desired application for the point cloud data. Point clouds can be static or dynamic. For example, a detailed scan or mapping of an object or topography may be static point cloud data. The LiDAR-based scanning of an environment for machine-vision purposes may be dynamic in that the point cloud (at least potentially) changes over time, e.g. with each successive scan of a volume. The dynamic point cloud is therefore a time-ordered sequence of point clouds. Point cloud data may be used in a number of applications, including conservation (scanning of historical or cultural objects), mapping, machine vision (such as autonomous or semi-autonomous cars), and virtual reality systems, to give some examples. Dynamic point cloud data for applications like machine vision can be quite different from static point cloud data like that for conservation purposes. Automotive vision, for example, typically involves relatively small resolution, non-coloured, highly dynamic point clouds obtained through LiDAR (or similar) sensors with a high frequency of capture. 
The objective of such point clouds is not for human consumption or viewing but rather for machine object detection/classification in a decision process. As an example, typical LiDAR frames contain on the order of tens of thousands of points, whereas high quality virtual reality applications require several millions of points. It may be expected that there will be a demand for higher resolution data over time as computational speed increases and new applications are found. While point cloud data is useful, a lack of effective and efficient compression, i.e. encoding and decoding processes, may hamper adoption and deployment. A particular challenge in coding point clouds that does not arise in the case of other data compression, like audio or video, is the coding of the geometry of the point cloud. Point clouds tend to be sparsely populated, which makes efficiently coding the location of the points that much more challenging. One of the more common mechanisms for coding point cloud data is through using tree-based structures. In a tree-based structure, the bounding three-dimensional volume for the point cloud is recursively divided into sub-volumes. Nodes of the tree correspond to sub-volumes. The decision of whether or not to further divide a sub-volume may be based on resolution of the tree and/or whether there are any points contained in the sub-volume. A leaf node may have an occupancy flag that indicates whether its associated sub-volume contains a point or not. Splitting flags may signal whether a node has child nodes (i.e. whether a current volume has been further split into sub-volumes). These flags may be entropy coded in some cases and in some cases predictive coding may be used. A commonly-used tree structure is an octree. In this structure, the volumes/sub-volumes are all cubes and each split of a sub-volume results in eight further sub-volumes/sub-cubes. Another commonly-used tree structure is a KD-tree, in which a volume (cube or rectangular cuboid) is recursively divided in two by a plane orthogonal to one of the axes. Octrees are a special case of KD-trees, where the volume is divided by three planes, each being orthogonal to one of the three axes. Both these examples relate to cubes or rectangular cuboids; however, the present application is not restricted to such tree structures and the volumes and sub-volumes may have other shapes in some applications. The partitioning of a volume is not necessarily into two sub-volumes (KD-tree) or eight sub-volumes (octree), but could involve other partitions, including division into non-rectangular shapes or involving non-adjacent sub-volumes. The present application may refer to octrees for ease of explanation and because they are a popular candidate tree structure for automotive applications, but it will be understood that the methods and devices described herein may be implemented using other tree structures. Reference is now made toFIG.1, which shows a simplified block diagram of a point cloud encoder10in accordance with aspects of the present application. The point cloud encoder10includes a tree building module12for receiving point cloud data and producing a tree (in this example, an octree) representing the geometry of the volumetric space containing the point cloud and indicating the location or position of points from the point cloud in that geometry. The basic process for creating an octree to code a point cloud may include:
1. Start with a bounding volume (cube) containing the point cloud in a coordinate system.
2. Split the volume into 8 sub-volumes (eight sub-cubes).
3. For each sub-volume, mark the sub-volume with 0 if the sub-volume is empty, or with 1 if there is at least one point in it.
4. For all sub-volumes marked with 1, repeat (2) to split those sub-volumes, until a maximum depth of splitting is reached.
5. For all leaf sub-volumes (sub-cubes) of maximum depth, mark the leaf cube with 1 if it is non-empty, 0 otherwise.
The above process might be described as an occupancy-equals-splitting process, where splitting implies occupancy, with the constraint that there is a maximum depth or resolution beyond which no further splitting will occur. In this case, a single flag signals whether a node is split and hence whether it is occupied by at least one point, and vice versa. At the maximum depth, the flag signals occupancy, with no further splitting possible. In some implementations, splitting and occupancy are independent such that a node may be occupied and may or may not be split. There are two variations of this implementation:
1. Split-then-occupied. A single flag indicates whether a node is split. If split, then the node must contain a point—that is, splitting implies occupancy. Otherwise, if the node is not to be split then a further occupancy flag signals whether the node contains at least one point. Accordingly, when a node is not further split, i.e. it is a leaf node, the leaf node must have an associated occupancy flag to indicate whether it contains any points.
2. Occupied-then-split. A single flag indicates whether the node is occupied. If not occupied, then no splitting occurs. If it is occupied, then a splitting flag is coded to indicate whether the node is further split or not.
Irrespective of which of the above-described processes is used to build the tree, it may be traversed in a pre-defined order (breadth-first or depth-first, and in accordance with a scan pattern/order within each divided sub-volume) to produce a sequence of bits from the flags (occupancy and/or splitting flags). This may be termed the serialization or binarization of the tree. As shown inFIG.1, in this example, the point cloud encoder10includes a binarizer14for binarizing the octree to produce a bitstream of binarized data representing the tree. This sequence of bits may then be encoded using an entropy encoder16to produce a compressed bitstream. The entropy encoder16may encode the sequence of bits using a context model18that specifies probabilities for coding bits based on a context determination by the entropy encoder16. The context model18may be adaptively updated after coding of each bit or defined set of bits. The entropy encoder16may, in some cases, be a binary arithmetic encoder. The binary arithmetic encoder may, in some implementations, employ context-adaptive binary arithmetic coding (CABAC). In some implementations, coders other than arithmetic coders may be used. In some cases, the entropy encoder16may not be a binary coder, but instead may operate on non-binary data. The output octree data from the tree building module12may not be evaluated in binary form but instead may be encoded as non-binary data. For example, in the case of an octree, the eight flags within a sub-volume (e.g. occupancy flags) in their scan order may be considered an eight-bit number taking one of 2^8−1 possible values (e.g. an integer having a value between 1 and 255 since the value 0 is not possible for a split sub-volume, i.e. it would not have been split if it was entirely unoccupied).
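By way of illustration only, the occupancy-equals-splitting recursion and the per-node pattern integer described above might be sketched in Python as follows (a minimal sketch; the function name, the point format and the particular scan order are assumptions for illustration and are not part of the described embodiments; leaf-level occupancy flags are omitted for brevity):

```python
from collections import deque

def octree_occupancy_patterns(points, origin, size, max_depth):
    """Breadth-first split of a bounding cube; yields one occupancy pattern
    (an integer in [1, 255]) for each occupied node that is further split."""
    queue = deque([(origin, size, points, 0)])            # FIFO of occupied nodes
    while queue:
        (ox, oy, oz), s, pts, depth = queue.popleft()
        if depth == max_depth or not pts:
            continue
        half = s / 2.0
        children = [[] for _ in range(8)]
        for (x, y, z) in pts:                             # bin points into the 8 sub-cubes
            idx = (int(x >= ox + half)
                   | (int(y >= oy + half) << 1)
                   | (int(z >= oz + half) << 2))
            children[idx].append((x, y, z))
        pattern = 0
        for i, child_pts in enumerate(children):          # the scan order fixes bit positions
            if child_pts:
                pattern |= 1 << i
                queue.append(((ox + half * (i & 1),
                               oy + half * ((i >> 1) & 1),
                               oz + half * ((i >> 2) & 1)),
                              half, child_pts, depth + 1))
        yield pattern                                      # 0 is impossible for an occupied node
```

Under the scan order assumed here, a node whose four occupied sub-cubes all lie on the low side of the X axis would, for instance, yield pattern 1+4+16+64=85.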
This number may be encoded by the entropy encoder using a multi-symbol arithmetic coder in some implementations. Within a sub-volume, e.g. a cube, the sequence of flags that defines this integer may be termed a "pattern". As with video or image coding, point cloud coding can include predictive operations in which efforts are made to predict the pattern for a sub-volume. Predictions may be spatial (dependent on previously coded sub-volumes in the same point cloud) or temporal (dependent on previously coded point clouds in a time-ordered sequence of point clouds). A block diagram of an example point cloud decoder50that corresponds to the encoder10is shown inFIG.2. The point cloud decoder50includes an entropy decoder52using the same context model54used by the encoder10. The entropy decoder52receives the input bitstream of compressed data and entropy decodes the data to produce an output sequence of decompressed bits. The sequence is then converted into reconstructed point cloud data by a tree reconstructor56. The tree reconstructor56rebuilds the tree structure from the decompressed data and knowledge of the scanning order in which the tree data was binarized. The tree reconstructor56is thus able to reconstruct the location of the points from the point cloud (subject to the resolution of the tree coding). An example partial sub-volume100is shown inFIG.3. In this example, a sub-volume100is shown in two dimensions for ease of illustration, and the size of the sub-volume100is 16×16. It will be noted that the sub-volume has been divided into four 8×8 sub-squares, and two of those have been further subdivided into 4×4 sub-squares, three of which are further divided into 2×2 sub-squares, and one of the 2×2 sub-squares is then divided into 1×1 squares. The 1×1 squares are the maximum depth of the tree and represent the finest resolution for positional point data. The points from the point cloud are shown as dots in the figure. The structure of the tree102is shown to the right of the sub-volume100. The sequence of splitting flags104and the corresponding sequence of occupancy flags106, obtained in a pre-defined breadth-first scan order, is shown to the right of the tree102. It will be observed that in this illustrative example, there is an occupancy flag for each sub-volume (node) that is not split, i.e. that has an associated splitting flag set to zero. These sequences may be entropy encoded. Another example, which employs an occupied ≡ splitting condition, is shown inFIG.4.FIG.4illustrates the recursive splitting and coding of an octree150. Only a portion of the octree150is shown in the figure. A FIFO152is shown as processing the nodes for splitting to illustrate the breadth-first nature of the present process. The FIFO152outputs an occupied node154that was queued in the FIFO152for further splitting after processing of its parent node156. The tree builder splits the sub-volume associated with the occupied node154into eight sub-volumes (cubes) and determines their occupancy. The occupancy may be indicated by an occupancy flag for each sub-volume. In a prescribed scan order, the flags may be referred to as the occupancy pattern for the node154. The pattern may be specified by the integer representing the sequence of occupancy flags associated with the sub-volumes in the pre-defined scan order. In the case of an octree, the pattern is an integer in the range [1, 255]. The entropy encoder then encodes that pattern using a non-binary arithmetic encoder based on probabilities specified by the context model.
In this example, the probabilities may be a pattern distribution based on an initial distribution model and adaptively updated. In one implementation, the pattern distribution is effectively a counter of the number of times each pattern (integer from 1 to 255) has been encountered during coding. The pattern distribution may be updated after each sub-volume is coded. The pattern distribution may be normalized, as needed, since the relative frequency of the patterns is germane to the probability assessment and not the absolute count. Based on the pattern, those child nodes that are occupied (e.g. have a flag=1) are then pushed into the FIFO152for further splitting in turn (provided the nodes are not a maximum depth of the tree). Reference is now made toFIG.5, which shows an example cube180from an octree. The cube180is subdivided into eight sub-cubes. The scan order for reading the flags results in an eight bit string, which can be read as an integer [1, 255] in binary. Based on the scan order and the resulting bit position of each sub-cube's flag in the string, the sub-cubes have the values shown inFIG.5. The scan order may be any sequence of the sub-cubes, provided both the encoder and decoder use the same scan order. As an example,FIG.6shows the cube180in which the four “front” sub-cubes are occupied. This would correspond to pattern85, on the basis that the sub-cubes occupied are cubes 1+4+16+64. The integer pattern number specifies the pattern of occupancy in the sub-cubes. An octree representation, or more generally any tree representation, is efficient at representing points with a spatial correlation because trees tend to factorize the higher order bits of the point coordinates. For an octree, each level of depth refines the coordinates of points within a sub-volume by one bit for each component at a cost of eight bits per refinement. Further compression is obtained by entropy coding the split information, i.e. pattern, associated with each tree node. This further compression is possible because the pattern distribution is not uniform—non-uniformity being another consequence of the correlation. One potential inefficiency in current systems is that the pattern distribution (e.g. the histogram of pattern numbers seen in previously-coded nodes of the tree) is developed over the course of coding the point cloud. In some cases, the pattern distribution may be initialized as equiprobable, or may be initialized to some other pre-determined distribution; but the use of one pattern distribution means that the context model does not account for, or exploit, local geometric correlation. In European patent application no. 18305037.6, the present applicants described methods and devices for selecting among available pattern distributions to be used in coding a particular node's pattern of occupancy based on some occupancy information from previously-coded nodes near the particular node. In one example implementation, the occupancy information is obtained from the pattern of occupancy of the parent to the particular node. In another example implementation, the occupancy information is obtained from one or more nodes neighbouring the particular node. The contents of European patent application no. 18305037.6 are incorporated herein by reference. Reference is now made toFIG.7, which shows, in flowchart form, one example method200of encoding a point cloud. The method200in this example involves recursive splitting of occupied nodes (sub-volumes) and a breadth-first traversal of the tree for coding. 
In operation202, the encoder determines the pattern of occupancy for the current node. The current node is an occupied node that has been split into eight child nodes, each corresponding to a respective sub-cube. The pattern of occupancy for the current node specifies the occupancy of the eight child nodes in scan order. As described above, this pattern of occupancy may be indicated using an integer between 1 and 255, e.g. an eight-bit binary string. In operation204, the encoder selects a probability distribution from among a set of probability distributions. The selection of the probability distribution is based upon some occupancy information from nearby previously-coded nodes, i.e. at least one node that is a neighbour to the current node. Two nodes are neighbouring, in some embodiments, if they are associated with respective sub-volumes that share at least one face. In a broader definition, nodes are neighboring if they share at least one edge. In yet a broader definition, two nodes are neighboring if they share at least one vertex. The parent pattern within which the current node is a child node, provides occupancy data for the current node and the seven sibling nodes to the current node. In some implementations, the occupancy information is the parent pattern. In some implementations, the occupancy information is occupancy data for a set of neighbour nodes that include nodes at the same depth level of the tree as the current node, but having a different parent node. In some cases, combinations of these are possible. For example, a set of neighbour nodes may include some sibling nodes and some non-sibling nodes. Once the probability distribution has been selected, the encoder then entropy encodes the occupancy pattern for the current node using the selected probability distribution, as indicated by operation206. It then updates the selected probability distribution in operation208based on the occupancy pattern, e.g. it may increment the count corresponding to that occupancy pattern. In operation210, the encoder evaluates whether there are further nodes to code and, if so, returns to operation202to code the next node. The probability distribution selection in operation204is to be based on occupancy data for nearby previously-coded nodes. This allows both the encoder and decoder to independently make the same selection. For the below discussion of probability distribution selection, reference will be made toFIG.8, which diagrammatically illustrates a partial octree300, including a current node302. The current node302is an occupied node and is being evaluated for coding. The current node302is one of eight children of a parent node306, which in turn is a child to a grand-parent node (not shown). The current node302is divided into eight child nodes304. The occupancy pattern for the current node302is based on the occupancy of the child nodes304. For example, as illustrated, using the convention that a black dot is an occupied node, the occupancy pattern may be 00110010, i.e. pattern50. The current node302has sibling nodes308that have the same parent node306. The parent pattern is the occupancy pattern for the parent node306, which as illustrated would be 00110000, i.e. pattern48. The parent pattern may serve as the basis for selecting a suitable probability distribution for entropy encoding the occupancy pattern for the current node. FIG.9illustrates a set of neighbors surrounding a current node, where neighbour is defined as nodes sharing a face. 
In this example, the nodes/sub-volumes are cubes and the cube at the center of the image has six neighbours, one for each face. In an octree, it will be appreciated that neighbours to the current node will include three sibling nodes. They will also include three nodes that do not have the same parent node. Accordingly, occupancy data for some of the neighboring nodes will be available because they are siblings, but occupancy data for some neighbouring nodes may or may not be available, depending on whether those nodes were previously coded. Special handling may be applied to deal with missing neighbours. In some implementations, the missing neighbour may be presumed to be occupied or may be presumed to be unoccupied. It will be appreciated that the neighbour definition may be broadened to include neighbouring nodes based on a shared edge or based on a shared vertex to include additional adjacent sub-volumes in the assessment. It will be appreciated that the foregoing processes look at the occupancy of nearby nodes in an attempt to determine the likelihood of occupancy of the current node302so as to select more suitable context(s) and use more accurate probabilities for entropy coding the occupancy data of the current node302. It will be understood that the occupancy status of neighbouring nodes that share a face with the current node302may be a more accurate assessment of whether the current node302is likely to be isolated or not than basing that assessment on the occupancy status of sibling nodes, three of which will only share an edge and one of which will only share a vertex (in the case of an octree). However, the assessment of occupancy status of siblings has the advantage of being modular in that all the relevant data for the assessment is part of the parent node, meaning it has a smaller memory footprint for implementation, whereas assessment of neighbour occupancy status involves buffering tree occupancy data in case it is needed when determining neighbour occupancy status in connection with coding a future nearby node. The occupancy of the neighbours may be read in a scan order that effectively assigns a value to each neighbour, much as is described above with respect to occupancy patterns. As illustrated, the neighbouring nodes effectively take values of 1, 2, 4, 8, 16 or 32, and there are therefore 64 (0 to 63) possible neighbour occupancy configurations. This value may be termed the "neighbour configuration" herein. As an example,FIG.10illustrates an example of neighbour configuration 15, in which neighbours 1, 2, 4 and 8 are occupied and neighbours 16 and 32 are empty. In some cases, the two above criteria (parent pattern and neighbour configuration) may be both applied or may be selected between. For example, if neighbours are available then the probability distribution selection may be made based on the neighbouring nodes; however, if one or more of the neighbours are unavailable because they are from nodes not-yet coded, then the probability distribution selection may revert to an analysis based on sibling nodes (parent pattern). In yet another embodiment, the probability distribution selection may alternatively, or additionally, be based on the grandparent pattern. In other words, the probability distribution selection may be based on the occupancy status of the uncle nodes that are siblings to the parent node306. In yet a further implementation, additional or alternative assessments may be factored into the probability distribution selection.
For example, the probability distribution selection may look at the occupancy status of neighbour nodes to the parent node, or neighbour nodes to the grand-parent node. Any two or more of the above criteria for assessing local occupancy status may be used in combination in some implementations. In the case of a non-binary entropy coder, the occupancy data for the current node may be coded by selecting a probability distribution. The probability distribution contains a number of probabilities corresponding to the number of possible occupancy patterns for the current node. For example, in the case of coding the occupancy pattern of an octree, there are 2^8−1=255 possible patterns, meaning each probability distribution includes 255 probabilities. In some embodiments, the number of probability distributions may equal the number of possible occupancy outcomes in the selection criteria, i.e. using neighbour, sibling, and/or parent occupancy data. For example, in a case where a parent pattern for an octree is used as the selection criteria for determining the probability distribution to use, there would be 255 probability distributions involving 255 probabilities each. In the case of neighbour configuration, if neighbour is defined as sharing a face, there would be 64 probability distributions with each distribution containing 255 probabilities. It will be understood that too many distributions may result in slow adaptation due to scarcity of data, i.e. context dilution. Accordingly, in some embodiments, similar patterns may be grouped so as to use the same probability distribution. For example, separate distributions may be used for patterns corresponding to fully occupied, vertically-oriented, horizontally-oriented, mostly empty, and then all other cases. This could reduce the number of probability distributions to about five. It will be appreciated that different groupings of patterns could be formed to result in a different number of probability distributions. Reference is now made toFIG.11, which diagrammatically shows one illustrative embodiment of a process400of point cloud entropy encoding using parent-pattern-dependent context. In this example, a current node402has been split into eight child nodes and its occupancy pattern404is to be encoded using a non-binary entropy encoder406. The non-binary entropy encoder406uses a probability distribution selected from one of six possible probability distributions408. The selection is based on the parent pattern—that is, the selection is based on occupancy information from the parent node to the current node402. The parent pattern is identified by an integer between 1 and 255. The selection of the probability distribution may be a decision tree that assesses whether the pattern corresponds to a full node (e.g. pattern=255), a horizontal structure (e.g. pattern=170 or 85; assuming the Z axis is vertical), a vertical structure (e.g. pattern=3, 12, 48, 192), a sparsely populated distribution (e.g. pattern=1, 2, 4, 8, 16, 32, 64, or 128; i.e. none of the sibling nodes are occupied), a semi-sparsely populated distribution (total number of occupied nodes among current node and sibling nodes ≤3), and all other cases. The example patterns indicated for the different categories are merely examples. For example, the "horizontal" category may include patterns involving two or three occupied cubes on the same horizontal level. The "vertical" category may include patterns involving three or four occupied cubes in a wall-like arrangement.
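As a non-limiting sketch of such a decision tree (Python; the category labels and the exact pattern tests simply restate the illustrative values given above, with the Z axis assumed vertical):

```python
def parent_pattern_category(pattern):
    """Map an 8-bit parent occupancy pattern (1..255) to one of six
    illustrative probability-distribution categories."""
    if pattern == 255:
        return "full"
    if pattern in (170, 85):                      # occupied cubes on one horizontal level
        return "horizontal"
    if pattern in (3, 12, 48, 192):               # occupied cubes stacked vertically
        return "vertical"
    if pattern in (1, 2, 4, 8, 16, 32, 64, 128):  # a single occupied child: sparse
        return "sparse"
    if bin(pattern).count("1") <= 3:              # at most three occupied children
        return "semi-sparse"
    return "other"
```

An encoder might then maintain one adaptive pattern distribution per category and select it with such a function before coding the occupancy pattern of the current node.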
It will also be appreciated that finer gradations may be used. For example, the “horizontal” category may be further subdivided into horizontal in the upper part of the cube and horizontal in the bottom part of the cube with different probability distributions for each. Other groupings of occupancy patterns having some correlation may be made and allocated to a corresponding probability distribution. Further discussion regarding grouping of patterns in the context of neighbour configurations, and invariance between neighbour configurations is set out further below. FIG.12shows an illustrative embodiment of a process500of point cloud entropy encoding using neighbour-configuration-dependent context. This example assumes the definition of neighbour and neighbour configuration numbering used above in connection withFIG.9. This example also presumes that each neighbour configuration has a dedicated probability distribution, meaning there are 64 different probability distributions. A current node502has an occupancy pattern504to be encoded. The probability distribution is selected based on the neighbouring nodes to the current node502. That is, the neighbour configuration NC in [0, 63] is found and used to select the associated probability distribution. It will be appreciated that in some embodiments, neighbour configurations may be grouped such that more than one neighbour configuration uses the same probability distribution based on similarities in the patterns. In some embodiments, the process may use a different arrangement of neighbours for contextualisation (selection) of the distributions. Additional neighbours may be added such as the eight neighbours diagonally adjacent on all three axes, or the twelve diagonally adjacent on two axes. Embodiments that avoid particular neighbours may also be used, for example to avoid using neighbours that introduce additional dependencies in a depth-first scan, or only introduce dependencies on particular axes so as to reduce codec state for large trees. In this example, the case of NC=0 is handled in a specific manner. If there are no neighbours that are occupied, it may indicate that the current node502is isolated. Accordingly, the process500further checks how many of the child nodes to the current node502are occupied. If only one child node is occupied, i.e. NumberOccupied (NO) is equal to 1, then a flag is encoded indicating that a single child node is occupied and the index to the node is coded using 3 bits. If more than one child node is occupied, then the process500uses the NC=0 probability distribution for coding the occupancy pattern. Reference is now made toFIG.13, which shows, in flowchart form, one example method600for decoding a bitstream of encoded point cloud data. In operation602, the decoder selects one of the probability distributions based on occupancy information from one or more nodes near the current node. As described above, the occupancy information may be a parent pattern from the parent node to the current node, i.e. occupancy of the current node and its siblings, or it may be occupancy of neighbouring nodes to the current node, which may include some of the sibling nodes. Other or additional occupancy information may be used in some implementations. Once the probability distribution has been selected then in operation604the decoder entropy decodes a portion of the bitstream using the selected probability distribution to reconstruct the occupancy pattern for the current node. 
The occupancy pattern is used by the decoder in reconstructing the tree so as to reconstruct the encoded point cloud data. Once the point cloud data is decoded, it may be output from the decoder for use, such as for rendering a view, segmentation/classification, or other applications. In operation606, the decoder updates the probability distribution based on the reconstructed occupancy pattern, and then if there are further nodes to decode, then it moves to the next node in the buffer and returns to operation602. Example implementations of the above-described methods have proven to provide a compression improvement with a negligible increase in coding complexity. The neighbour-based selection shows a better compression performance than the parent-pattern based selection, although it has a greater computational complexity and memory usage. In some testing the relative improvement in bits-per-point over the MPEG Point Cloud Test Model is between 4 and 20%. It has been noted that initializing the probability distributions based on a distribution arrived at with test data leads to improved performance as compared to initializing with a uniform distribution. Some of the above examples are based on a tree coding process that uses a non-binary coder for signaling occupancy pattern. New developments to employ binary entropy coders are presented further below. In one variation to the neighbour-based probability distribution selection, the number of distributions may be reduced by exploiting the symmetry of the neighbourhood. By permuting the neighbourhood or permuting the pattern distribution, structurally similar configurations having a line of symmetry can re-use the same distribution. In other words, neighbour configurations that can use the same pattern distribution may be grouped into a class. A class containing more than one neighbour configuration may be referred to herein as a “neighbour configuration” in that one of the neighbour configurations effectively subsumes other neighbour configurations by way of reflection or permutation of those other configurations. Consider, as an example, the eight corner patterns NC∈[21, 22, 25, 26, 37, 38, 41, 42], each representing a symmetry of a corner neighbour pattern. It is likely that these values of NC are well correlated with particular but different patterns of a node. It is further likely that these correlated patterns follow the same symmetries as the neighbour pattern. By way of example, a method may be implemented that re-uses a single distribution to represent multiple cases of NC by way of permuting the probabilities of that distribution. An encoder derives the pattern number of a node based on the occupancy of the child nodes. The encoder selects a distribution and a permutation function according to the neighbour configuration. The encoder reorders the probabilities contained within the distribution according to the permutation function, and subsequently uses the permuted distribution to arithmetically encode the pattern number. Updates to the probabilities of the permuted distribution by the arithmetic encoder are mapped back to the original distribution by way of an inverse permutation function. A corresponding decoder first selects the same distribution and permutation function according to the neighbour configuration. A permuted distribution is produced in an identical manner to the encoder, with the permuted distribution being used by the arithmetic decoder to entropy decode the pattern number. 
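A rough sketch of this shared-distribution idea is given below (Python; the representation of the distribution as adaptive counts, the coder interface and the permutation argument are assumptions for illustration only):

```python
def permuted_view(distribution, perm):
    """View a stored distribution through a permutation of pattern indices so
    that one distribution can serve structurally symmetric neighbour configurations."""
    return [distribution[perm[p]] for p in range(len(distribution))]

def code_pattern(pattern, distribution, perm, encoder):
    """Encode `pattern` using the permuted view, then map the adaptive
    update back onto the stored (unpermuted) distribution."""
    view = permuted_view(distribution, perm)
    encoder.encode_symbol(pattern, view)   # hypothetical multi-symbol coder call
    distribution[perm[pattern]] += 1       # update lands on the original entry
```

Here perm would itself be induced by the permutation of the eight child positions associated with the symmetry relating the two neighbour configurations.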
The bits comprising the pattern number are then each assigned to the corresponding child. It should be noted that the same permutation may be achieved without reordering the data of the distribution itself, but rather by introducing a level of indirection and using the permutation function to permute the lookup of a given index in the distribution. An alternative embodiment considers permutations of the pattern itself rather than the distribution, allowing for a shuffling prior to or after entropy encoding/decoding, respectively. Such a method is likely to be more amenable to efficient implementation through bit-wise shuffle operations. In this case, no reordering of the distribution is performed by either the encoder or decoder; rather, the computation of the encoded pattern number is modified to be pn = Σi=0..7 2^i cσ(i), where ci is the i-th child's occupancy state, and σ(i) is a permutation function. One such example permutation function σ = (0 1 2 3 4 5 6 7 → 4 7 6 5 0 3 2 1) allows the distribution for NC=22 to be used for that of NC=41. The permutation function may be used by a decoder to derive a child node's occupancy state from the encoded pattern number using ci = ⌊pn/2^σ(i)⌋ mod 2. Methods to derive the required permutation may be based on rotational symmetries of the neighbour configurations, or may be based on reflections along particular axes. Furthermore, it is not necessary for the permutation to permute all positions according to, for instance, the symmetry; a partial permutation may be used instead. For example, when permuting NC=22 to NC=41, the positions in the axis of symmetry may not be permuted, leading to the mapping σ = (0 1 2 3 4 5 6 7 → 0 7 2 5 4 3 6 1), where positions 0, 2, 4, 6 are not permuted. In other embodiments, only the pair 1 and 7 are swapped. Examples of embodiments based on rotational symmetries and reflections are provided hereafter for the specific case of an octree with six neighbors sharing a common face with the current cube. Without loss of generality, as shown inFIG.16, the Z axis extends vertically relative to the direction of viewing the figure. Relative positions of neighbours such as "above" (resp. "below") should then be understood as along the Z axis in increasing (resp. decreasing) Z direction. The same remark applies for left/right along the X axis, and front/back along the Y axis. FIG.16shows three rotations2102,2104and2106along the Z, Y and X axes, respectively. The angle of these three rotations is 90 degrees, i.e. they perform a rotation along their respective axis by a quarter of a turn. FIG.17shows classes of invariance of neighbour configuration under one or several iterations of the rotation2102along the Z axis. This invariance is representative of the same statistical behavior of the point cloud geometry along any direction belonging to the XY plane. This is particularly true for the use-case of a car moving on the Earth's surface that is locally approximated by the XY plane. A horizontal configuration is the given occupancy of the four neighbors (located at the left, right, front and back of the current cube) independently of the occupancy of the above neighbour (2202) and the below neighbour (2204). The four horizontal configurations2206,2208,2210and2212belong to the same class of invariance under the rotation2102. Similarly, the two configurations2214and2216belong to the same class of invariance. There are only six classes (grouped under the set of classes2218) of invariance under the rotation2102.
A vertical configuration is the given occupancy of the two neighbors2202and2204independently of the occupancy of the four neighbours located at the left, right, front and back of the current cube. There are four possible vertical configurations as shown onFIG.18. Consequently, if one considers invariance relative to the rotation2102along the Z axis, there are 6×4=24 possible configurations. The reflection2108along the Z axis is shown onFIG.16. The vertical configurations2302and2304depicted onFIG.18belong to the same class of invariance under the reflection2108. There are three classes (grouped under the set of classes2306) of invariance under the reflection2108. The invariance under reflection2108means that upward and downward directions behave essentially the same in terms of point cloud geometry statistics. It is an accurate assumption for a moving car on a road. If one assumes invariance under both the rotation2102and the reflection2108, there are 18 classes of invariance, resulting from the product of the two sets2218and2306. These 18 classes are represented inFIG.19. Applying further invariance under the two other rotations2104and2106, the two configurations2401and2402belong to the same class of invariance. Furthermore, the two configurations2411and2412, the two configurations2421and2422, the three configurations2431,2432and2433, the two configurations2441and2442, the two configurations2451and2452, and finally the two configurations2461and2462belong to the same classes. Consequently, invariance under the three rotations (2102,2104and2106) and the reflection2108leads to 10 classes of invariance as shown onFIG.20. From the examples provided hereinabove, assuming or not invariance under three rotations and the reflection, the number of effective neighbour configurations, i.e. classes into which the 64 neighbour configurations may be grouped, is either 64, 24, 18 or 10. Prior to entropy coding, the pattern undergoes the same transformation, i.e. rotations and reflection, as the neighbour configuration does to belong to one of the invariance classes. This preserves the statistical consistency between the invariant neighbour configuration and the coded pattern. It will also be understood that during the traversal of a tree, a child node will have certain neighbouring nodes at the same tree depth that have been previously visited and may be causally used as dependencies. For these same-level neighbours, instead of consulting the parent's collocated neighbour, the same-level neighbours may be used. Since the same-level neighbours have halved dimensions of the parent, one configuration considers the neighbour occupied if any of the four directly adjacent neighbouring child nodes (i.e., the four sharing a face with the current node) are occupied. Entropy Coding Tree Occupancy Patterns Using Binary Coding The above-described techniques of using neighbour occupancy information for coding tree occupancy were detailed in European patent application no. 18305037.6. The described embodiments focus on using non-binary entropy coding of the occupancy pattern, where a pattern distribution is selected based on neighbour occupancy information. However, in some instances, the use of binary coders can be more efficient in terms of hardware implementation. Moreover, on-the-fly updates to many probabilities may require fast-access memory and computation within the heart of the arithmetic coder.
Accordingly, it may be advantageous to find methods and devices for entropy encoding the occupancy pattern using binary arithmetic coders. It would be advantageous to use binary coders if it can be done without significantly degrading compression performance and while guarding against having an overwhelming number of contexts to track. The use of binary coders in place of a non-binary coder is reflected in the entropy formula: H(X1,X2|Y)=H(X1|Y)+H(X2|Y,X1) where X=(X1, X2) is the non-binary information to be coded, and Y is the context for coding, i.e. the neighbour configuration or selected pattern distribution. To convert non-binary coding of X into binary coding, the information (X1, X2) is split into information X1and X2that can be coded separately without increasing the entropy. To do so, one must code one of the two depending on the other, here X2depending on X1. This can be extended to n bits of information in X. For example, for n=3: H(X1,X2,X3|Y)=H(X1|Y)+H(X2|Y,X1)+H(X3|Y,X1,X2) It will be understood that as the occupancy pattern, i.e. bit sequence X, gets longer there are more conditions for coding later bits in the sequence. For a binary coder (e.g. CABAC) this means a large increase in the number of contexts to track and manage. Using an octree as an example, where the occupancy pattern is an eight-bit sequence b=b0. . . b7, the bit sequence may be split into the eight binary information bits b0. . . b7. The coding may use the neighbour configuration N (or NC) for determining context. Assuming that we can reduce the neighbour configurations to 10 effective neighbour configurations through grouping of neighbour configurations into classes of invariance, as described above, then N is an integer belonging to {0, 1, 2, . . . , 9}. For shorthand, the "classes of invariant neighbour configurations" may be referred to herein, at times, simply as the "neighbour configurations", although it will be appreciated that this reduced number of neighbour configurations may be realized based on the class-based grouping of neighbour configurations based on invariance. FIG.21illustrates the splitting of an eight-bit pattern or sequence into eight individual bits for binary entropy coding. It will be noted that the first bit of the sequence is encoded based on the neighbour configuration, so there are ten total contexts available. The next bit of the sequence is encoded based on the neighbour configuration and any previously-encoded bits, i.e. bit b0. This involves 20 total available contexts: obtained as the product of 10 from N and 2 from b0. The final bit, b7, is entropy encoded using a context selected from 1280 available contexts: obtained as the product of 10 from N and 128 from the partial pattern given by the previously-encoded bits b0, . . . , b6. That is, for each bit the number of contexts (i.e. possible combinations of conditions/dependencies) is the product of the number of neighbour configurations defined (10, in this example, based on grouping of the 64 neighbour configurations into classes), and the number of partial patterns possible from the ordered sequence of n−1 previously-encoded bits (given by 2^(n−1)). As a result, there are a total of 2550 contexts to maintain in connection with binary coding of the occupancy pattern. This is an excessively large number of contexts to track, and the relative scarcity may cause poor performance because of context dilution, particularly for later bits in the sequence.
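For concreteness, the context counts just described can be reproduced with a short calculation (a sketch only):

```python
NEIGHBOUR_CLASSES = 10   # effective neighbour configurations after grouping

# contexts for bit b_i: one per (neighbour class, partial pattern of b_0..b_{i-1})
per_bit = [NEIGHBOUR_CLASSES * (2 ** i) for i in range(8)]
print(per_bit)        # [10, 20, 40, 80, 160, 320, 640, 1280]
print(sum(per_bit))   # 2550 contexts in total
```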
Accordingly, in one aspect, the present application discloses encoders and decoders that determine whether the set of contexts can be reduced and, if so, apply a context reduction operation to realize a smaller set of available contexts for entropy coding at least part of an occupancy pattern using a binary coder. In another aspect, the present application further discloses encoders and decoders that apply one or more rounds of state reduction using the same context reduction operations in order to perform effective context selection from a fixed number of contexts. In some implementations, the context reduction is applied a priori in generating look-up tables of contexts and/or algorithmic conditionals that are then used by the encoder or decoder in selecting a suitable context. The reduction is based on a testable condition that the encoder and decoder evaluate to determine which look-up table to select from or how to index/select from that look-up table to obtain a selected context. Reference is now made toFIG.22, which shows, in flowchart form, one example method3000for coding occupancy patterns in a tree-based point cloud coder using binary coding. The method3000may be implemented by an encoder or a decoder. In the case of an encoder, the coding operations are encoding, and in the case of a decoder the coding operations are decoding. The encoding and decoding are context-based entropy encoding and decoding. The example method3000is for entropy coding an occupancy pattern, i.e. a bit sequence, for a particular node/volume. The occupancy pattern signals the occupancy status of the child nodes (sub-volumes) of the node/volume. In the case of an octree, there are eight child nodes/sub-volumes. In operation3002, the neighbour configuration is determined. The neighbour configuration is the occupancy status of one or more volumes neighbouring the volume for which an occupancy pattern is to be coded. As discussed above, there are various possible implementations for determining neighbour configuration. In some examples, there are 10 neighbour configurations, and the neighbour configuration for a current volume is identified based on the occupancy of the six volumes that share a face with the current volume. In operation3004, an index i to the child nodes of the current volume is set to 0. Then in operation3006an assessment is made as to whether context reduction is possible. Different possible context reduction operations are discussed in more detail below. The assessment of whether context reduction is possible may be based, for example, on which bit in the bit sequence is being coded (e.g. the index value). In some cases, context reduction may be possible for later bits in the sequence but not for the first few bits. The assessment of whether context reduction is possible may be based, for example, on the neighbour configuration as certain neighbour configurations may allow for simplifications. Additional factors may be used in assessing whether context reduction is possible in some implementations. For example, an upper bound Bo may be provided as the maximum number of contexts a binary coder can use to code a bit, and if the initial number of contexts to code a bit is higher than Bo then context reduction is applied (otherwise it is not) such that the number of contexts after reduction is at most Bo.
Such a bound Bo may be defined in an encoder and/or decoder specification in order to ensure that a software or hardware implementation capable of dealing with Bo contexts will always be able to encode and/or decode a point cloud without generating an overflow in terms of the number of contexts. Knowing the bound Bo beforehand also allows for anticipating the complexity and the memory footprint induced by the binary entropy coder, thus facilitating the design of hardware. Typical values for Bo are from ten to a few hundred. If context reduction is determined to be available, then in operation3008a context reduction operation is applied. The context reduction operation reduces the number of available contexts in a set of available contexts to a smaller set containing fewer total contexts. It will be recalled that the number of available contexts may depend, in part, on the bit position in the sequence, i.e. the index, since the context may depend on a partial pattern of previously-coded bits from the bit sequence. In some implementations, the number of contexts available in the set, before reduction, may be based on the number of neighbour configurations multiplied by the number of partial patterns possible with the previously-coded bits. For a bit at index i, where i ranges from 0 to n, the number of partial patterns may be given by 2^i. As noted above, in some implementations the context reduction operations are carried out prior to the coding, and the resulting reduced context sets are the context sets available for use by the encoder and decoder during the coding operation. Use and/or selection of the reduced context set during coding may be based on evaluation of one or more conditions precedent to use of those reduced sets that correspond to the conditions evaluated in operation3006for determining that the number of contexts can be reduced. For example, in the case of a specific neighbour configuration that permits use of a reduced context set, the encoder and/or decoder may first determine whether that neighbour configuration condition is met and then, if so, use the corresponding reduced context set. In operation3010, the context for bit biis determined, i.e. selected from the set (or reduced set, if any) of available contexts based on the neighbour configuration and the partial pattern of previously-coded bits in the bit sequence. The current bit is then entropy encoded by a binary coder using the selected context in operation3012. If, in operation3014, the index i indicates that the currently coded bit is the last bit in the sequence, i.e. that i equals imax, then the coding process advances to the next node. Otherwise, the index i is incremented in operation3016and the process returns to operation3006. It will be appreciated that in some implementations, context selection may not depend on neighbour configuration. In some cases, it may only depend on the partial pattern of previously-coded bits in the sequence, if any. A simplified block diagram of part of an example encoder3100is illustrated inFIG.23. In this illustration it will be understood that the occupancy pattern3102is obtained as the corresponding volume is partitioned into child nodes and cycled through a FIFO buffer3104that holds the geometry of the point cloud. Coding of the occupancy pattern3102is illustrated as involving a cascade of binary coders3106, one for each bit of the pattern.
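A skeletal view of this per-bit loop is sketched below (Python; the context-set and coder interfaces, and the reduction callbacks, are placeholders for whichever reduction operations and binary coder an implementation adopts):

```python
def code_occupancy_pattern(bits, neighbour_config, contexts, coder,
                           reduction_applies=lambda i, nc: False,
                           reduce_contexts=lambda ctx_set, i, nc: ctx_set):
    """Code the eight occupancy bits of one node, optionally switching to a
    reduced context set for individual bits (skeleton of the per-bit loop)."""
    partial = 0                                          # previously-coded bits b0..b(i-1)
    for i, b in enumerate(bits):
        ctx_set = contexts[i]                            # full context set for bit i
        if reduction_applies(i, neighbour_config):       # context reduction condition met?
            ctx_set = reduce_contexts(ctx_set, i, neighbour_config)
        ctx = ctx_set.select(neighbour_config, partial)  # (N, partial pattern) -> context
        coder.code_bit(b, ctx)                           # binary entropy coder; context adapts
        partial = (partial << 1) | b
    return partial                                       # the coded occupancy pattern
```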
Between at least some of the binary coders3106is a context reduction operation3108that operates to reduce the available contexts to a smaller set of available contexts. AlthoughFIG.23illustrates a series of binary coders3106, in some implementations only one binary coder is used. In the case where more than one coder is used, the coding may be (partly) parallelized. Given the context dependence of one bit on preceding bits in the bit sequence, the coding of the pattern cannot necessarily be fully parallelized, but it may be possible to improve pipelining through using cascading binary coders for a pattern to achieve some degree of parallelization and speed improvement. Context Reduction Operations The above examples propose that the coding process include a context reduction operation with respect to at least one bit of the occupancy pattern so as to reduce the set of available contexts to a smaller set of available contexts. In this sense, the “context reduction operation” may be understood as identifying and consolidating contexts that may be deemed duplicative or redundant in the circumstances of a particular bit bi. As noted above, the reduced context set may be determined in advance of coding and may be provided to the encoder and decoder, and the encoder and decoder determine whether to use the reduced context set based on the same conditions described below for reducing the context set. Neighbour Configuration Reduction Through Screening/Shielding A first example context reduction operation involves reducing the number of neighbour configurations based on screening/shielding. In principle, the neighbour configuration factors occupancy status of neighbouring volumes into the context selection process on the basis that the neighbouring volumes help indicate whether the current volume or sub-volume is likely to be occupied or not. As the bits associated with sub-volumes in the current volume are decoded, then they are also factored into the context selection; however, the information from nearby sub-volumes may be more significant and more informative than the occupancy information of a neighbouring volume that is located on the other side of the sub-volumes from the current sub-volume. In this sense, the previously-decoded bits are associated with sub-volumes that “screen” or “shield” the neighbouring volume. This may mean that in such circumstances, the occupancy of the neighbouring volume can be ignored since the relevance of its occupancy status is subsumed by the occupancy status of the sub-volumes between the current sub-volume and the neighbouring volume, thereby permitting reduction of the number of neighbour configurations. Reference is now made toFIG.24, which diagrammatically shows an example context reduction operation based on neighbour screening. The example involves coding the occupancy pattern for a volume3200. The occupancy pattern signals the occupancy status of the eight sub-volumes within the volume3200. In this example, the four sub-volumes in the top half of the volume3200have been coded, so their occupancy status is known. The bit of the occupancy pattern being coded is associated with a fifth sub-volume3204that is located in the bottom half of the volume3200, below the four previously-coded sub-volumes. The coding in this example includes determining context based on neighbour configuration. The 10 neighbour configurations3202are shown. The volume3200containing the fifth sub-volume3204to be coded is shown in light grey and indicated by reference numeral3200. 
The neighbour configurations3202are based on the occupancy status of the volumes adjacent to the volume3200and sharing a face with it. The neighbouring volumes include a top neighbouring volume3206. In this example, the number of neighbour configurations can be reduced from 10 to 7 by ignoring the top neighbouring volume3206in at least some of the configurations. As shown inFIG.24, three of the four configurations in which the top neighbouring volume3206is shown can be subsumed under equivalent configurations that do not factor in the top neighbouring volume3206, thereby reducing the number of neighbour configurations to 7 total. It may still be advantageous to keep the configuration showing all six neighbouring volumes since there is no existing 5-volume neighbour configuration that the 6-volume configuration can be consolidated with (having eliminated the 5-element one), meaning that even if the top neighbouring volume is removed, a new 5-element neighbour configuration results and no overall reduction in contexts occurs. The top neighbouring volume3206can be eliminated from the neighbour configurations in this example because the context determination for coding of an occupancy bit associated with the fifth sub-volume3204will already take into account the occupancy status of the four previously-coded sub-volumes directly above it, which are a better indication of likelihood and directionality of occupancy for the fifth sub-volume than the occupancy status of the more-distant top neighbouring volume3206. The above example in which the top neighbouring volume3206is screened or shielded by the previously-coded sub-volumes when coding the occupancy bit corresponding to the fifth sub-volume3204is only one example. Depending on coding order within the volume3200, a number of other possible screening/shielding situations may be realized and exploited to reduce the available neighbour configurations. Reference is now made toFIG.25, which shows a second example of screening/shielding. In this example, the occupancy pattern for the volume3200is nearly completely coded. The sub-volume to be coded is the eighth sub-volume and is hidden in the figure at the back bottom corner (not visible). In this case, the occupancy status of all seven other sub-volumes has been coded. In particular, these include the sub-volumes along the top (hence the reduction in neighbour configurations to seven total) and along the right side and front side. Accordingly, in addition to screening the top neighbouring volume, the sub-volumes with previously-coded occupancy bits shield a front neighbouring volume3210and a right-side neighbouring volume3212. This may permit the reduction of neighbour configurations from seven total to five total, as illustrated. It will be appreciated that the two foregoing examples of shielding are illustrative and that in some cases different configurations may be consolidated to account for different shielding situations. The context reduction operation based on shielding/screening by previously-coded sub-volumes is general and not limited to these two examples, although it will be appreciated that it cannot be applied in the case of the first sub-volume to be coded since it requires that there be at least one previously-coded occupancy bit associated with a nearby sub-volume in order for there to be any shielding/screening. It will also be appreciated that the degree of shielding/screening to justify neighbour configuration reduction may be different in different implementations.
Context Reduction Through Special Case Handling There are certain cases in which context reduction may occur without loss of useful information. In the example context determination process described above, the context for coding an occupancy bit is based on the neighbour configuration, i.e. the pattern of occupancy of volumes neighbouring the current volume, and on the partial pattern attributable to the occupancy of sub-volumes in the current volume that were previously coded. The latter condition results in 2^7 = 128 contexts to track with respect to the eighth bit in the occupancy pattern bit sequence. Even if neighbour configurations are reduced to five total, this means 640 contexts to track. The number of contexts is large because the previously-coded bits of the bit sequence have an order, and the order is relevant in assessing context. However, in some cases, the order may not contain useful information. For example, in the case where the neighbour configuration is empty, i.e. N10=0, any points within the volume may be presumed to be sparsely populated, meaning they do not have a strong enough directionality to justify tracking separate contexts for different patterns of occupancy in the sibling sub-volumes. In the case of an empty neighbourhood, there is no local orientation or topology to the point cloud, meaning the 2^j conditions based on previously-coded bits of the bit sequence can be reduced to j+1 conditions. That is, the context for coding one of the bits of the bit sequence is based on the previously-coded bits, but not on their ordered pattern, just on their sum. In other words, the entropy expression in this special case may be expressed as: H(b|n) ≈ H(b0|0) H(b1|0, b0) H(b2|0, b0+b1) . . . H(b7|0, b0+b1+ . . . +b6) In some implementations, a similar observation may be made with respect to a full neighbour configuration. In some examples, a full neighbour configuration lacks directionality, meaning the order of previously-coded bits need not be taken into account in determining context. In some examples, this context reduction operation may be applied to only some of the bits in the bit sequence, such as some of the later bits in the sequence. In some cases, the application of this context reduction operation to later bits may be conditional on determining that the earlier bits associated with previously-coded sub-volumes were also all occupied.
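As a rough illustration of this special case, the following sketch (not the patent's normative scheme; the context keys and per-bit tables are assumptions for illustration) selects a context for bit j from the ordered partial pattern in the general case, but only from the count of occupied previously-coded sub-volumes when the neighbourhood is empty, reducing 2^j possibilities to j+1:

def context_for_bit(n10, prev_bits):
    """Select a context key for the next occupancy bit, given the neighbour
    configuration n10 (0..9) and the previously-coded bits of the pattern."""
    j = len(prev_bits)
    if n10 == 0:
        # empty neighbourhood: no local orientation, keep only the sum (j+1 contexts)
        return ("empty", j, sum(prev_bits))
    partial = 0
    for b in prev_bits:              # ordered partial pattern (2**j contexts)
        partial = (partial << 1) | b
    return ("general", n10, j, partial)

Each distinct key would index its own adaptive probability model in the binary coder.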
Statistical-Based Context Reduction A statistical analysis may be used to reduce contexts through determining which ones lead to roughly the same statistical behavior and then combining them. This analysis may be performed a priori using test data to develop a reduced context set that is then provided to both the encoder and decoder. In some cases, the analysis may be performed on a current point cloud using two-pass coding to develop a custom reduced context set for the specific point cloud data. In some such cases, the mapping from the non-reduced context set to the custom reduced context set may be signaled to the decoder by using a dedicated syntax coded into the bitstream. Two contexts may be compared through a concept of “distance”. A first context c has a probability p of a bit b being equal to zero, and a second context c′ has a probability p′ of a bit b′ being equal to zero. The distance between c and c′ is given by: d(c, c′) = |p log2 p − p′ log2 p′| + |(1−p) log2(1−p) − (1−p′) log2(1−p′)| Using this measure of similarity (distance), the contexts may then be grouped in a process such as the following:
1. Start with M1 contexts and fix a threshold level.
2. For a given context, regroup into a class all contexts that have a distance from the given context lower than the threshold level.
3. Repeat step 2 for all non-regrouped contexts until all are put into a class.
4. Label the M2 classes from 1 to M2: this results in a brute force reduction function that maps {1, 2, . . . , M1} → {1, 2, . . . , M2}, where M1 ≥ M2.
The brute force reduction function for mapping a set of contexts to a smaller set of contexts may be stored in memory to be applied by the encoder/decoder as a context reduction operation during coding. The mapping may be stored as a look-up table or other data structure. The brute force reduction function may be applied only for later bits in the bit sequence (pattern), for example. Combinations and Sub-Combinations of Context Reduction Operations Three example context reduction operations are described above. Each of them may be applied individually and independently in some implementations. Any two or more of them may be combined in some implementations. Additional context reduction operations may be implemented alone or in combination with any one or more of the context reduction operations described above. FIG.26shows one example, in flowchart form, of a method3300of occupancy pattern binary coding involving combined context reduction. The method3300codes the 8-bit binary pattern b0, b1, . . . , b7, given a 10-element neighbour configuration N10in {0, 1, 2, . . . , 9}. The first condition evaluated is whether the neighbour configuration is empty, i.e. N10=0. If so, then the bits are coded without reference to their order, as indicated by reference numeral3302. Otherwise, the bits are coded as per normal until bit b4, at which point the encoder and decoder begin applying brute force context reduction functions, BRi, to reduce the number of contexts by mapping the set of contexts defined by the neighbour configuration and the partial pattern of previously-coded bits to a smaller set of contexts having substantially similar statistical outcomes. In this example, the last two bits, b6and b7, are coded using reduced neighbour configurations, based on shielding/screening. All functions may be implemented as look-up tables (LUTs) for reducing the size of the set of contexts. In one practical implementation, all the reductions are factorised into reduction functions, i.e. simply LUTs, that take the contexts as input and provide reduced contexts as output. In this example embodiment, the total number of contexts has been reduced from 2550 to 576, with the output sizes of the reduction functions BRi being 70, 106, 110 and 119, respectively. Context Selection in Systems with Fixed Numbers of Contexts Each of the previously described context reduction operations may be further used in a compression system with a static (fixed) minimal number of contexts. In such a design, for a given symbol in the 8-bit binary pattern, one or more reduction operations are applied to determine the context probability model with which to encode or decode the symbol.
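A minimal sketch of the a-priori statistical grouping described above follows, assuming each context is summarised by an estimated probability p of its bit being zero (gathered from test data). A simple greedy variant of steps 1 to 4 is used, assigning each context to the first class whose representative lies within the threshold; the resulting LUT plays the role of a brute force reduction function:

import math

def _plogp(p):
    return p * math.log2(p) if p > 0.0 else 0.0

def distance(p, q):
    # d(c, c') as defined above, with p and q the zero-probabilities of the two contexts
    return abs(_plogp(p) - _plogp(q)) + abs(_plogp(1.0 - p) - _plogp(1.0 - q))

def build_reduction_lut(probs, threshold):
    """Group M1 contexts into M2 <= M1 classes; returns context index -> class index."""
    lut = [0] * len(probs)
    reps = []                                # representative probability of each class
    for i, p in enumerate(probs):
        for cls, rep in enumerate(reps):
            if distance(p, rep) < threshold:
                lut[i] = cls                 # close enough: join this class
                break
        else:
            lut[i] = len(reps)               # start a new class with p as representative
            reps.append(p)
    return lut

At coding time, the LUT is simply applied to the full context index to obtain the reduced context index.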
Impact on Compression Performance The use of 10 neighbour configurations and non-binary coding provides a compression gain over current implementations of the MPEG test model for point cloud coding. However, the above-proposed use of 10 neighbour configurations with cascaded binary coding using 2550 contexts results in an even better improvement in compression efficiency. Even when context reduction is used, such as using the three techniques detailed above, to reduce the contexts to 576 total, the binary coding compression is still marginally better than implementation using non-binary coding, and much better than the test model. This observation has been shown to be consistent across different test point cloud data. Reference is now made toFIG.14, which shows a simplified block diagram of an example embodiment of an encoder1100. The encoder1100includes a processor1102, memory1104, and an encoding application1106. The encoding application1106may include a computer program or application stored in memory1104and containing instructions that, when executed, cause the processor1102to perform operations such as those described herein. For example, the encoding application1106may encode and output bitstreams encoded in accordance with the processes described herein. It will be understood that the encoding application1106may be stored on a non-transitory computer-readable medium, such as a compact disc, flash memory device, random access memory, hard drive, etc. When the instructions are executed, the processor1102carries out the operations and functions specified in the instructions so as to operate as a special-purpose processor that implements the described process(es). Such a processor may be referred to as a “processor circuit” or “processor circuitry” in some examples. Reference is now also made toFIG.15, which shows a simplified block diagram of an example embodiment of a decoder1200. The decoder1200includes a processor1202, a memory1204, and a decoding application1206. The decoding application1206may include a computer program or application stored in memory1204and containing instructions that, when executed, cause the processor1202to perform operations such as those described herein. It will be understood that the decoding application1206may be stored on a computer-readable medium, such as a compact disc, flash memory device, random access memory, hard drive, etc. When the instructions are executed, the processor1202carries out the operations and functions specified in the instructions so as to operate as a special-purpose processor that implements the described process(es). Such a processor may be referred to as a “processor circuit” or “processor circuitry” in some examples. It will be appreciated that the decoder and/or encoder according to the present application may be implemented in a number of computing devices, including, without limitation, servers, suitably-programmed general purpose computers, machine vision systems, and mobile devices. The decoder or encoder may be implemented by way of software containing instructions for configuring a processor or processors to carry out the functions described herein. The software instructions may be stored on any suitable non-transitory computer-readable memory, including CDs, RAM, ROM, Flash memory, etc. 
It will be understood that the decoder and/or encoder described herein and the module, routine, process, thread, or other software component implementing the described method/process for configuring the encoder or decoder may be realized using standard computer programming techniques and languages. The present application is not limited to particular processors, computer languages, computer programming conventions, data structures, or other such implementation details. Those skilled in the art will recognize that the described processes may be implemented as a part of computer-executable code stored in volatile or non-volatile memory, as part of an application-specific integrated circuit (ASIC), etc. The present application also provides for a computer-readable signal encoding the data produced through application of an encoding process in accordance with the present application. Certain adaptations and modifications of the described embodiments can be made. Therefore, the above-discussed embodiments are considered to be illustrative and not restrictive.
11861870 | DETAILED DESCRIPTION Some implementations of the present disclosure will now be described more fully hereinafter with reference to the accompanying figures, in which some, but not all implementations of the disclosure are shown. Indeed, various implementations of the disclosure may be embodied in many different forms and should not be construed as limited to the implementations set forth herein; rather, these example implementations are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. Like reference numerals refer to like elements throughout. Unless specified otherwise or clear from context, references to first, second or the like should not be construed to imply a particular order. A feature described as being above another feature (unless specified otherwise or clear from context) may instead be below, and vice versa; and similarly, features described as being to the left of another feature may instead be to the right, and vice versa. Also, while reference may be made herein to quantitative measures, values, geometric relationships or the like, unless otherwise stated, any one or more if not all of these may be absolute or approximate to account for acceptable variations that may occur, such as those due to engineering tolerances or the like. As used herein, unless specified otherwise or clear from context, the “or” of a set of operands is the “inclusive or” and thereby true if and only if one or more of the operands is true, as opposed to the “exclusive or” which is false when all of the operands are true. Thus, for example, “[A] or [B]” is true if [A] is true, or if [B] is true, or if both [A] and [B] are true. Further, the articles “a” and “an” mean “one or more,” unless specified otherwise or clear from context to be directed to a singular form. Furthermore, it should be understood that unless otherwise specified, the terms “data,” “content,” “digital content,” “information,” and similar terms may be at times used interchangeably. Example implementations of the present disclosure relate generally to robotics and, in particular, to one or more of the design, construction, operation or use of robots. As used herein, a robot is a machine designed and configurable to execute maneuvers in its ambient environment. The robot may be manned or unmanned. The robot may be fully human-controlled, or the robot may be semi-autonomous or autonomous in which at least some of the maneuvers are executed independent of or with minimal human intervention. In some examples, the robot is operable in various modes with various amounts of human control. A robot may be implemented as a vehicle in that the robot is a machine designed as an instrument of conveyance by land, water or air. A robot designed and configurable to fly may at times be referred to as an aerial robot, an aerial vehicle, an aircraft or the like. A robot designed and configurable to operate with at least some level of autonomy may at times be referred to as an autonomous robot, or an autonomous aerial robot, autonomous aerial vehicle or autonomous aircraft in the case of an autonomous robot that is also designed and configurable to fly. Examples of suitable robots include aerobots, androids, automatons, autonomous vehicles, explosive ordnance disposal robots, hexapods, industrial robots, insect robots, microbots, nanobots, military robots, mobile robots, rovers, service robots, surgical robots, walking robots and the like.
Other examples include a variety of unmanned vehicles, including unmanned ground vehicles (UGVs), unmanned aerial vehicles (UAVs), unmanned surface vehicles (USVs), unmanned underwater vehicles (UUVs), unmanned spacecraft and the like. These may include autonomous cars, planes, trains, industrial vehicles, fulfillment center robots, supply-chain robots, robotic vehicles, mine sweepers, and the like. A robot designed as a vehicle generally includes a basic structure, and a propulsion system coupled to the basic structure. The basic structure is the main supporting structure of the vehicle to which other components are attached. The basic structure is the load-bearing framework of the vehicle that structurally supports the vehicle in its construction and function. In various contexts, the basic structure may be referred to as a chassis, an airframe or the like. The propulsion system includes one or more engines or motors configured to power one or more propulsors to generate propulsive forces that cause the vehicle to move. A propulsor is any of a number of different means of converting power into a propulsive force. Examples of suitable propulsors include rotors, propellers, wheels and the like. In some examples, the propulsion system includes a drivetrain configured to deliver power from the engines/motors to the propulsors. The engines/motors and drivetrain may in some contexts be referred to as the powertrain of the vehicle. The vehicle may also include any of a number of different subsystems (each an individual system) for performing one or more functions or operations. In particular, for example, the vehicle may include a mission management system (MMS) configured to manage missions of the vehicle. A mission is a deployment of the vehicle to achieve one or more mission objectives. A mission may be decomposed into maneuvers of the vehicle with optional sensor and/or effector scheduling, and the MMS may execute tasks to manage the vehicle to execute maneuvers with specific parameters and capabilities. The MMS may include subsystems to process sensor data to situational awareness, plan tasks for the vehicle, and execute the tasks. The MMS may also be configured to interface with a vehicle management system, and in some examples a control station located remote from the vehicle. In various examples, the MMS may be located onboard the vehicle, at the control station, or distributed between the vehicle and the control station. FIG.1illustrates one type of vehicle100, namely, an aircraft, that may benefit from example implementations of the present disclosure. As shown, the vehicle generally includes a basic structure102with an airframe including a fuselage104, and one or more pairs of wings106that extend from opposing sides of the fuselage. The airframe also includes an empennage or tail assembly108at a rear end of the fuselage, and the tail assembly includes stabilizers110. The vehicle further includes a propulsion system112with an engine114configured to power a propulsor116to generate propulsive forces that cause the vehicle to move. On the vehicle as shown, the propulsor is a propeller. Depending on the vehicle, in various examples, the propulsors include one or more of rotors, propellers, jet engines or wheels. As described below, the vehicle100includes a number of other systems and sensors. 
As shown inFIG.1, for example, the vehicle includes one or more imagers122configured to convey (or in some examples detect and convey) information from which an image of an ambient environment124of the robot is generated. In some examples, this information includes points of the image that are spatially arranged to represent objects depicted in the image, the points corresponding to pixels of a digital image, or data points of a point cloud. The imager122may employ any of a number of different technologies such as acoustics, radio, optics and the like. More particular examples of suitable imagers include those employing radar, lidar122A, infrared sensors, cameras122B and the like. FIG.2is a functional block diagram of a vehicle200that in some examples corresponds to the vehicle100ofFIG.1. According to example implementations, the vehicle may include any of a number of different subsystems (each an individual system) for performing one or more functions or operations. In particular, for example, the vehicle may include a propulsion system202configured to power one or more propulsors204to generate propulsive forces that cause the vehicle to move and thereby execute maneuvers in an ambient environment124. The vehicle200includes at least one imager206(e.g., imager122) and a MMS208. The imager206is configured to convey (or in some examples detect and convey) information from which an image210,212of an ambient environment (e.g., ambient environment124) of the vehicle is generated. The information includes points214,216of the image that are spatially arranged to represent objects218depicted in the image, the points corresponding to pixels214of a digital image210, or data points216of a point cloud212. The MMS is configured to detect the objects depicted in the image, and manage the vehicle to execute the maneuvers in the ambient environment based on the objects as depicted. In some examples, the MMS208is configured to receive the points214,216of the image210,212, and perform a greedy nearest-neighbor (GNN) cluster analysis of the image to group the points of the image. The GNN cluster analysis includes the MMS configured to group the points into a plurality of local GNN clusters, from a greedy analysis of the points using a k-d tree in which the points are organized in a k-dimensional space. As is known in the art, a k-d tree (e.g., a k-dimensional tree) is a space-partitioning data structure for organizing points in a k-dimensional space. The analysis is greedy in that it produces a locally optimal solution at each stage; and while the locally optimal solutions may approximate a globally optimal solution in a shorter amount of time, the locally optimal solutions may not be globally optimal. The MMS208is configured to extend the plurality of local GNN clusters into a plurality of global GNN clusters. This includes for a pair of local GNN clusters, applying the pair to nested conditions by which similarity of the local GNN clusters is evaluated according to defined similarity criteria. The local GNN clusters are merged into a global GNN cluster when each of the defined similarity criteria is evaluated to true, and passed as global GNN clusters when any of the defined similarity criteria is evaluated to false. The MMS is then configured to detect the objects218depicted in the image210,212based on the global GNN clusters. 
In some examples, the MMS208configured to group the points214,216into the plurality of local GNN clusters includes the MMS configured to at least generate the k-d tree to organize the points in a k-dimensional space. The MMS is configured to add the points to a data queue, and iterate through the points in the data queue to group the points into the plurality of local GNN clusters. In some further examples, the MMS208configured to iterate through the points in the data queue includes for an iteration of a plurality of iterations, the MMS configured to select a point from the data queue as a test point. The MMS is configured to perform a radius nearest neighbor search of the test point in the k-d tree to generate a cluster of points. The MMS is configured to test the cluster against a set of clusters to identify any existing cluster in the set with at least one point in common with the cluster; and add the cluster to the set as a new cluster, or merged with the existing cluster when identified. The MMS is configured to then remove the points of the cluster from the data queue. In some examples, the nested conditions include a first condition by which circumscribing hyperspheres of the local GNN clusters of the pair are evaluated to determine if a distance between the circumscribing hyperspheres is less than or equal to a distance threshold. In some further examples, the nested conditions include a second condition by which a point in a first of the local GNN clusters that is closest to a centroid of a second of the local GNN clusters is evaluated to determine if the point is within a threshold distance of the centroid. The nested conditions of these examples also include a third condition by which a minimum distance between the point in the first of the local GNN clusters and respective points in the second of the local GNN clusters is evaluated to determine if the minimum distance is less than the threshold distance. In some examples, the MMS208configured to extend the plurality of local GNN clusters includes the MMS configured to merge pairs of the local GNN clusters when each of the defined similarity criteria is evaluated true, and thereby produce a set of merged pairs of local GNN clusters. The MMS is configured to then iterate through the set of merged pairs to further merge any intersecting pair of the merged pairs into a respective global GNN cluster. As indicated above, the image210,212in some examples depicts the objects218in the ambient environment of the vehicle200. In some of these examples, the MMS208is further configured to perform situational awareness of the vehicle based on the objects as detected, and cause the vehicle to execute maneuvers in the ambient environment based on the situational awareness. This may include the MMS configured to send one or more maneuver commands to a vehicle management system (VMS) to control the vehicle to follow the maneuver commands. To further illustrate example implementations of the present disclosure, reference is now made toFIGS.3-7that illustrate GNN cluster analysis of points of an image, according to various example implementations of the present disclosure. The analysis is greedy in the sense that it consumes, within a single cluster, all points within a radius, Rnn, of any point that is already a member of the cluster. The GNN cluster analysis may include generating a k-d tree on the points to cluster, Pkd, using some distance measure, fD. This allows for extremely fast nearest neighbor searches.
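One way to realize this greedy consumption within Rnn is sketched below; the queue-based procedure it follows is detailed in the next paragraphs. The sketch is illustrative only and assumes SciPy's cKDTree with the Euclidean distance as fD, an (N, k) NumPy array of points, and an arbitrary clustering radius (the patent does not prescribe a particular library or parameter values):

from collections import deque
import numpy as np
from scipy.spatial import cKDTree

def local_gnn_clusters(points, r_nn):
    """Greedy radius-based clustering of an (N, k) array of points."""
    tree = cKDTree(points)                       # k-d tree over the points (Pkd)
    queue = deque(range(len(points)))            # processing queue Qp
    assigned = np.zeros(len(points), dtype=bool)
    clusters = []                                # C: list of sets of point indices
    while queue:
        tp = queue.popleft()                     # test point Tp
        if assigned[tp]:
            continue                             # already consumed by a cluster
        c_new = set(tree.query_ball_point(points[tp], r_nn))   # radius search, includes tp
        for c in clusters:
            if c & c_new:                        # shares a point: merge into first match
                c |= c_new
                break
        else:
            clusters.append(c_new)               # otherwise start a new local cluster
        for idx in c_new:
            assigned[idx] = True                 # mark as used / effectively removed from Qp
    return clusters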
A set of empty clusters, C={ }, is initialized, and all of the points are placed into a processing queue, Qp. A first point in the queue (Qp1) is selected as a test point, Tp, and a radius nearest neighbor search is performed on all points in the k-d tree (including Tp) to generate a new cluster Cnew = {P ϵ Pkd : fD(Tp, P) ≤ Rnn}. Then, Cnew is tested against any existing clusters in C to see if it shares any points (and can therefore be merged). If Cnew ∩ Ci = Ø ∀ Ci ϵ C, a new cluster is created and added to C, C = {C, Cnew}; otherwise, Cnew is combined with the first cluster with which Cnew shares a common point. All of the points in Cnew are then marked as used and removed from Qp. The above steps are repeated until Qp is empty (all points assigned to a cluster). At this point, C is a set of local, greedy nearest neighbor clusters.FIG.3illustrates a number of points300grouped into four clusters302,304,306and308that are shown with their respective circumscribing hyperspheres310,312,314and316. This approach allows the MMS208to quickly form a base set of clusters while only processing a fraction of the points. For small clouds of less than 1,000 points, medium clouds of less than 10,000 points, and even large clouds of over 10,000 points, the MMS may be used to form a base set of less than 300 clusters, while processing less than 10% of the points. As the GNN cluster analysis here only utilizes previously unassigned points to generate new clusters, it is quite likely that one or more of these local clusters may still be merged to obtain a global list of greedy nearest neighbor clusters. This is described in greater detail below with respect to clusters306and308. The MMS208may efficiently extend the local GNN clusters into global GNN clusters in the following manner. The MMS may compute the centroid of each cluster, Aci; and using the centroid as the center, determine the radius of the circumscribing hypersphere, Rci. For each pair of clusters, Cij = {Ci, Cj}, Ci, Cj ϵ C, Dij = fD(Aci, Acj) − Rci − Rcj may be computed, and Ipairs = {Cij : Dij ≤ 0} may be found. As shown inFIG.4, for example, cluster302is distinct from the other local clusters304,306and308in that the distance d between its circumscribing hypersphere310and the other circumscribing hyperspheres312,314and316is greater than a distance threshold (e.g., 0.25). This cluster302can therefore be immediately promoted to a global GNN cluster. The other local clusters may need further processing to extend them into global GNN clusters. To further process the local clusters, for each pair of clusters Ckl = {Ck, Cl} ϵ Ipairs, find the point in cluster Cl which is closest to the center of cluster Ck, Plk = Clj such that Dlj = minj(fD(Ack, Clj) − Rck). This is shown inFIG.5; and for this case, both the cluster pairs304,306and306,308with merge potential pass the filtering step. A third condition may then be applied to determine which pairs to actually merge. In this regard, the minimum distance between Plk and each of the points in cluster Ck may be determined, Dkl = mini(fD(Plk, Cki)). If Dkl < Rnn, Ckl may be merged into a single cluster and added to the set of merged pairs, Mpairs; and Ck and Cl may be removed from C. An example of this is shown inFIG.6; and the global GNN clusters302,304,306in which local clusters306,308are merged are shown inFIG.7.
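The pair filtering and merge test just described can be sketched as follows. This is illustrative only, assuming NumPy arrays of cluster points, Euclidean distance for fD, and parameters rather than values taken from the patent:

import numpy as np

def merge_candidate(pts_k, pts_l, r_nn):
    """Evaluate one pair of local clusters (Ck, Cl) against the nested merge conditions."""
    a_k, a_l = pts_k.mean(axis=0), pts_l.mean(axis=0)          # centroids Ack, Acl
    r_k = np.linalg.norm(pts_k - a_k, axis=1).max()            # circumscribing radii Rck, Rcl
    r_l = np.linalg.norm(pts_l - a_l, axis=1).max()
    # condition 1: circumscribing hyperspheres touch or overlap (Dkl <= 0)
    if np.linalg.norm(a_k - a_l) - r_k - r_l > 0.0:
        return False
    # condition 2: point of Cl closest to the centre of Ck (Plk)
    d_to_ak = np.linalg.norm(pts_l - a_k, axis=1)
    p_lk = pts_l[int(d_to_ak.argmin())]
    # (an intermediate check that Plk lies within a threshold distance of Ack may be applied here)
    # condition 3: Plk within Rnn of at least one point of Ck
    return np.linalg.norm(pts_k - p_lk, axis=1).min() < r_nn

Pairs that pass would be merged and appended to Mpairs, with Ck and Cl removed from C.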
Once every pair in Ipairs has been tested, Mpairs may be iterated over using the following pseudo-code:

While Mpairs not empty:
    Ci = Pop first cluster in Mpairs
    For each cluster remaining in Mpairs:
        Cj = Pop first cluster in Mpairs
        If Ci ∩ Cj = ∅, Then Append Cj to the end of Mpairs
        Else Ci = Ci ∪ Cj
    Add Ci to C

The set of clusters, C, now contains the set of global GNN clusters and the algorithm is complete.FIG.8illustrates GNN cluster analysis of points of an image800of an airport, according to some example implementations. FIGS.9A-9Eare flowcharts illustrating various steps in a method900according to various example implementations. The method includes receiving points of an image that are spatially arranged to represent objects depicted in the image, the points corresponding to pixels of a digital image, or data points of a point cloud, as shown at block902ofFIG.9A. The method includes performing a greedy nearest-neighbor (GNN) cluster analysis of the image to group the points of the image, as shown at block904. In some examples, the GNN cluster analysis includes grouping the points into a plurality of local GNN clusters, from a greedy analysis of the points using a k-d tree in which the points are organized in a k-dimensional space, as shown at block906. The GNN cluster analysis also includes extending the plurality of local GNN clusters into a plurality of global GNN clusters, as shown at block908. This includes for a pair of local GNN clusters, applying the pair to nested conditions by which similarity of the local GNN clusters is evaluated according to defined similarity criteria. The local GNN clusters are merged into a global GNN cluster when each of the defined similarity criteria is evaluated to true, and passed as global GNN clusters when any of the defined similarity criteria is evaluated to false. The method900also includes detecting the objects depicted in the image based on the global GNN clusters, as shown at block910. In some examples, grouping the points into the plurality of local GNN clusters at block906includes at least generating the k-d tree to organize the points in a k-dimensional space, as shown at block912ofFIG.9B. In some of these examples, the points are added to a data queue, and the points in the data queue are iterated through to group the points into the plurality of local GNN clusters, as shown at blocks914and916. In some examples, iterating through the points in the data queue at block916includes for an iteration of a plurality of iterations, selecting a point from the data queue as a test point, as shown at block918ofFIG.9C. A radius nearest neighbor search of the test point is performed in the k-d tree to generate a cluster of points, as shown at block920. The cluster is tested against a set of clusters to identify any existing cluster in the set with at least one point in common with the cluster, as shown at block922. The cluster is added to the set as a new cluster, or merged with the existing cluster when identified, as shown at block924. And the points of the cluster are removed from the data queue, as shown at block926. In some examples, the nested conditions include a first condition by which circumscribing hyperspheres of the local GNN clusters of the pair are evaluated to determine if a distance between the circumscribing hyperspheres is less than or equal to a distance threshold.
In some further examples, the nested conditions include a second condition by which a point in a first of the local GNN clusters that is closest to a centroid of a second of the local GNN clusters is evaluated to determine if the point is within a threshold distance of the centroid. In some of these examples the nested conditions also include a third condition by which a minimum distance between the point in the first of the local GNN clusters and respective points in the second of the local GNN clusters is evaluated to determine if the minimum distance is less than the threshold distance. In some examples, extending the plurality of local GNN clusters at block908includes merging pairs of the local GNN clusters when each of the defined similarity criteria is evaluated true, and thereby producing a set of merged pairs of local GNN clusters, as shown at block928ofFIG.9D. In some of these examples, the set of merged pairs is iterated through to further merge any intersecting pair of the merged pairs into a respective global GNN cluster, as shown at block930. In some examples, the image depicts the objects in an ambient environment of a robot. In some of these examples, the method900further includes performing situational awareness of the robot based on the objects as detected, as shown at block932ofFIG.9E. The robot is then caused to execute maneuvers in the ambient environment based on the situational awareness, as shown at block934. According to example implementations of the present disclosure, the MMS208may be implemented by various means. Means for implementing the MMS may include hardware, alone or under direction of one or more computer programs from a computer-readable storage medium. In some examples, one or more apparatuses may be configured to function as or otherwise implement as shown and described herein. In examples involving more than one apparatus, the respective apparatuses may be connected to or otherwise in communication with one another in a number of different manners, such as directly or indirectly via a wired or wireless network or the like. FIG.10illustrates an apparatus1000according to some example implementations of the present disclosure. Generally, an apparatus of exemplary implementations of the present disclosure may comprise, include or be embodied in one or more fixed or portable electronic devices. The apparatus may include one or more of each of a number of components such as, for example, processing circuitry1002(e.g., processor unit) connected to a memory1004(e.g., storage device). The processing circuitry1002may be composed of one or more processors alone or in combination with one or more memories. The processing circuitry is generally any piece of computer hardware that is capable of processing information such as, for example, data, computer programs and/or other suitable electronic information. The processing circuitry is composed of a collection of electronic circuits some of which may be packaged as an integrated circuit or multiple interconnected integrated circuits (an integrated circuit at times more commonly referred to as a “chip”). The processing circuitry may be configured to execute computer programs, which may be stored onboard the processing circuitry or otherwise stored in the memory1004(of the same or another apparatus). The processing circuitry1002may be a number of processors, a multi-core processor or some other type of processor, depending on the particular implementation. 
Further, the processing circuitry may be implemented using a number of heterogeneous processor systems in which a main processor is present with one or more secondary processors on a single chip. As another illustrative example, the processing circuitry may be a symmetric multi-processor system containing multiple processors of the same type. In yet another example, the processing circuitry may be embodied as or otherwise include one or more ASICs, FPGAs or the like. Thus, although the processing circuitry may be capable of executing a computer program to perform one or more functions, the processing circuitry of various examples may be capable of performing one or more functions without the aid of a computer program. In either instance, the processing circuitry may be appropriately programmed to perform functions or operations according to example implementations of the present disclosure. The memory1004is generally any piece of computer hardware that is capable of storing information such as, for example, data, computer programs (e.g., computer-readable program code1006) and/or other suitable information either on a temporary basis and/or a permanent basis. The memory may include volatile and/or non-volatile memory, and may be fixed or removable. Examples of suitable memory include random access memory (RAM), read-only memory (ROM), a hard drive, a flash memory, a thumb drive, a removable computer diskette, an optical disk, a magnetic tape or some combination of the above. Optical disks may include compact disk—read only memory (CD-ROM), compact disk—read/write (CD-R/W), DVD or the like. In various instances, the memory may be referred to as a computer-readable storage medium. The computer-readable storage medium is a non-transitory device capable of storing information, and is distinguishable from computer-readable transmission media such as electronic transitory signals capable of carrying information from one location to another. Computer-readable medium as described herein may generally refer to a computer-readable storage medium or computer-readable transmission medium. In addition to the memory1004, the processing circuitry1002may also be connected to one or more interfaces for displaying, transmitting and/or receiving information. The interfaces may include a communications interface1008(e.g., communications unit) and/or one or more user interfaces. The communications interface may be configured to transmit and/or receive information, such as to and/or from other apparatus(es), network(s) or the like. The communications interface may be configured to transmit and/or receive information by physical (wired) and/or wireless communications links. Examples of suitable communication interfaces include a network interface controller (NIC), wireless NIC (WNIC) or the like. The user interfaces may include a display1010and/or one or more user input interfaces1012(e.g., input/output unit). The display may be configured to present or otherwise display information to a user, suitable examples of which include a liquid crystal display (LCD), light-emitting diode display (LED), plasma display panel (PDP) or the like. The user input interfaces may be wired or wireless, and may be configured to receive information from a user into the apparatus, such as for processing, storage and/or display. 
Suitable examples of user input interfaces include a microphone, image or video capture device, keyboard or keypad, joystick, touch-sensitive surface (separate from or integrated into a touchscreen), biometric sensor or the like. The user interfaces may further include one or more interfaces for communicating with peripherals such as printers, scanners or the like. As indicated above, program code instructions may be stored in memory, and executed by processing circuitry that is thereby programmed, to implement functions of the systems, subsystems, tools and their respective elements described herein. As will be appreciated, any suitable program code instructions may be loaded onto a computer or other programmable apparatus from a computer-readable storage medium to produce a particular machine, such that the particular machine becomes a means for implementing the functions specified herein. These program code instructions may also be stored in a computer-readable storage medium that can direct a computer, a processing circuitry or other programmable apparatus to function in a particular manner to thereby generate a particular machine or particular article of manufacture. The instructions stored in the computer-readable storage medium may produce an article of manufacture, where the article of manufacture becomes a means for implementing functions described herein. The program code instructions may be retrieved from a computer-readable storage medium and loaded into a computer, processing circuitry or other programmable apparatus to configure the computer, processing circuitry or other programmable apparatus to execute operations to be performed on or by the computer, processing circuitry or other programmable apparatus. Retrieval, loading and execution of the program code instructions may be performed sequentially such that one instruction is retrieved, loaded and executed at a time. In some example implementations, retrieval, loading and/or execution may be performed in parallel such that multiple instructions are retrieved, loaded, and/or executed together. Execution of the program code instructions may produce a computer-implemented process such that the instructions executed by the computer, processing circuitry or other programmable apparatus provide operations for implementing functions described herein. Execution of instructions by a processing circuitry, or storage of instructions in a computer-readable storage medium, supports combinations of operations for performing the specified functions. In this manner, an apparatus1000may include a processing circuitry1002and a computer-readable storage medium or memory1004coupled to the processing circuitry, where the processing circuitry is configured to execute computer-readable program code1006stored in the memory. It will also be understood that one or more functions, and combinations of functions, may be implemented by special purpose hardware-based computer systems and/or processing circuitry which perform the specified functions, or combinations of special purpose hardware and program code instructions. As explained above and reiterated below, the subject disclosure includes, without limitation, the following example implementations. Clause 1. 
A vehicle comprising: an imager configured to convey information from which an image of an ambient environment is generated, the information including points of the image that are spatially arranged to represent objects depicted in the image, the points corresponding to pixels of a digital image, or data points of a point cloud; and a mission management system (MMS) configured to at least: receive the points of the image; perform a greedy nearest-neighbor (GNN) cluster analysis of the image to group the points of the image, the GNN cluster analysis including the MMS configured to: group the points into a plurality of local GNN clusters, from a greedy analysis of the points using a k-d tree in which the points are organized in a k-dimensional space; and extend the plurality of local GNN clusters into a plurality of global GNN clusters, and including for a pair of local GNN clusters, applying the pair to nested conditions by which similarity of the local GNN clusters is evaluated according to defined similarity criteria, and the local GNN clusters are merged into a global GNN cluster when each of the defined similarity criteria is evaluated to true, and passed as global GNN clusters when any of the defined similarity criteria is evaluated to false; and detect the objects depicted in the image based on the global GNN clusters. Clause 2. The vehicle of clause 1, wherein the MMS configured to group the points into the plurality of local GNN clusters includes the MMS configured to at least: generate the k-d tree to organize the points in a k-dimensional space; and add the points to a data queue; and iterate through the points in the data queue to group the points into the plurality of local GNN clusters. Clause 3. The vehicle of clause 2, wherein the MMS configured to iterate through the points in the data queue includes for an iteration of a plurality of iterations, the MMS configured to: select a point from the data queue as a test point; perform a radius nearest neighbor search of the test point in the k-d tree to generate a cluster of points; test the cluster against a set of clusters to identify any existing cluster in the set with at least one point in common with the cluster; add the cluster to the set as a new cluster, or merged with the existing cluster when identified; and remove the points of the cluster from the data queue. Clause 4. The vehicle of any of clauses 1 to 3, wherein the nested conditions include a first condition by which circumscribing hyperspheres of the local GNN clusters of the pair are evaluated to determine if a distance between the circumscribing hyperspheres is less than or equal to a distance threshold. Clause 5. The vehicle of clause 4, wherein the nested conditions include a second condition by which a point in a first of the local GNN clusters that is closest to a centroid of a second of the local GNN clusters is evaluated to determine if the point is within a threshold distance of the centroid, and a third condition by which a minimum distance between the point in the first of the local GNN clusters and respective points in the second of the local GNN clusters is evaluated to determine if the minimum distance is less than the threshold distance. Clause 6. 
The vehicle of any of clauses 1 to 5, wherein the MMS configured to extend the plurality of local GNN clusters includes the MMS configured to: merge pairs of the local GNN clusters when each of the defined similarity criteria is evaluated true, and thereby produce a set of merged pairs of local GNN clusters; and iterate through the set of merged pairs to further merge any intersecting pair of the merged pairs into a respective global GNN cluster. Clause 7. The vehicle of any of clauses 1 to 6, wherein the image depicts the objects in the ambient environment of the vehicle, and the MMS is further configured to at least: perform situational awareness of the vehicle based on the objects as detected; and cause the vehicle to execute maneuvers in the ambient environment based on the situational awareness. Clause 8. An apparatus comprising: a memory configured to store computer-readable program code; and processing circuitry configured to access the memory, and execute the computer-readable program code to cause the apparatus to at least: receive points of an image that are spatially arranged to represent objects depicted in the image, the points corresponding to pixels of a digital image, or data points of a point cloud; perform a greedy nearest-neighbor (GNN) cluster analysis of the image to group the points of the image, the GNN cluster analysis including the apparatus caused to: group the points into a plurality of local GNN clusters, from a greedy analysis of the points using a k-d tree in which the points are organized in a k-dimensional space; and extend the plurality of local GNN clusters into a plurality of global GNN clusters, and including for a pair of local GNN clusters, applying the pair to nested conditions by which similarity of the local GNN clusters is evaluated according to defined similarity criteria, and the local GNN clusters are merged into a global GNN cluster when each of the defined similarity criteria is evaluated to true, and passed as global GNN clusters when any of the defined similarity criteria is evaluated to false; and detect the objects depicted in the image based on the global GNN clusters. Clause 9. The apparatus of clause 8, wherein the apparatus caused to group the points into the plurality of local GNN clusters includes the apparatus caused to at least: generate the k-d tree to organize the points in a k-dimensional space; and add the points to a data queue; and iterate through the points in the data queue to group the points into the plurality of local GNN clusters. Clause 10. The apparatus of clause 9, wherein the apparatus caused to iterate through the points in the data queue includes for an iteration of a plurality of iterations, the apparatus caused to: select a point from the data queue as a test point; perform a radius nearest neighbor search of the test point in the k-d tree to generate a cluster of points; test the cluster against a set of clusters to identify any existing cluster in the set with at least one point in common with the cluster; add the cluster to the set as a new cluster, or merged with the existing cluster when identified; and remove the points of the cluster from the data queue. Clause 11. The apparatus of any of clauses 8 to 10, wherein the nested conditions include a first condition by which circumscribing hyperspheres of the local GNN clusters of the pair are evaluated to determine if a distance between the circumscribing hyperspheres is less than or equal to a distance threshold. Clause 12. 
The apparatus of clause 11, wherein the nested conditions include a second condition by which a point in a first of the local GNN clusters that is closest to a centroid of a second of the local GNN clusters is evaluated to determine if the point is within a threshold distance of the centroid, and a third condition by which a minimum distance between the point in the first of the local GNN clusters and respective points in the second of the local GNN clusters is evaluated to determine if the minimum distance is less than the threshold distance. Clause 13. The apparatus of any of clauses 8 to 12, wherein the apparatus caused to extend the plurality of local GNN clusters includes the apparatus caused to: merge pairs of the local GNN clusters when each of the defined similarity criteria is evaluated true, and thereby produce a set of merged pairs of local GNN clusters; and iterate through the set of merged pairs to further merge any intersecting pair of the merged pairs into a respective global GNN cluster. Clause 14. The apparatus of any of clauses 8 to 13, wherein the image depicts the objects in an ambient environment of a vehicle, and the processing circuitry is configured to execute the computer-readable program code to cause the apparatus to further at least: perform situational awareness of the vehicle based on the objects as detected; and cause the vehicle to execute maneuvers in the ambient environment based on the situational awareness. Clause 15. A method comprising: receiving points of an image that are spatially arranged to represent objects depicted in the image, the points corresponding to pixels of a digital image, or data points of a point cloud; performing a greedy nearest-neighbor (GNN) cluster analysis of the image to group the points of the image, the GNN cluster analysis including: grouping the points into a plurality of local GNN clusters, from a greedy analysis of the points using a k-d tree in which the points are organized in a k-dimensional space; and extending the plurality of local GNN clusters into a plurality of global GNN clusters, and including for a pair of local GNN clusters, applying the pair to nested conditions by which similarity of the local GNN clusters is evaluated according to defined similarity criteria, and the local GNN clusters are merged into a global GNN cluster when each of the defined similarity criteria is evaluated to true, and passed as global GNN clusters when any of the defined similarity criteria is evaluated to false; and detecting the objects depicted in the image based on the global GNN clusters. Clause 16. The method of clause 15, wherein grouping the points into the plurality of local GNN clusters includes at least: generating the k-d tree to organize the points in a k-dimensional space; and adding the points to a data queue; and iterating through the points in the data queue to group the points into the plurality of local GNN clusters. Clause 17.
The method of clause 16, wherein iterating through the points in the data queue includes for an iteration of a plurality of iterations: selecting a point from the data queue as a test point; performing a radius nearest neighbor search of the test point in the k-d tree to generate a cluster of points; testing the cluster against a set of clusters to identify any existing cluster in the set with at least one point in common with the cluster; adding the cluster to the set as a new cluster, or merged with the existing cluster when identified; and removing the points of the cluster from the data queue. Clause 18. The method of any of clauses 15 to 17, wherein the nested conditions include a first condition by which circumscribing hyperspheres of the local GNN clusters of the pair are evaluated to determine if a distance between the circumscribing hyperspheres is less than or equal to a distance threshold. Clause 19. The method of clause 18, wherein the nested conditions include a second condition by which a point in a first of the local GNN clusters that is closest to a centroid of a second of the local GNN clusters is evaluated to determine if the point is within a threshold distance of the centroid, and a third condition by which a minimum distance between the point in the first of the local GNN clusters and respective points in the second of the local GNN clusters is evaluated to determine if the minimum distance is less than the threshold distance. Clause 20. The method of any of clauses 15 to 19, wherein extending the plurality of local GNN clusters includes: merging pairs of the local GNN clusters when each of the defined similarity criteria is evaluated true, and thereby producing a set of merged pairs of local GNN clusters; and iterating through the set of merged pairs to further merge any intersecting pair of the merged pairs into a respective global GNN cluster. Clause 21. The method of any of clauses 15 to 20, wherein the image depicts the objects in an ambient environment of a vehicle, and the method further comprises: performing situational awareness of the vehicle based on the objects as detected; and causing the vehicle to execute maneuvers in the ambient environment based on the situational awareness. Clause 22. 
A computer-readable storage medium that is non-transitory and has computer-readable program code stored therein that, in response to execution by processing circuitry, causes an apparatus to at least: receive points of an image that are spatially arranged to represent objects depicted in the image, the points corresponding to pixels of a digital image, or data points of a point cloud; perform a greedy nearest-neighbor (GNN) cluster analysis of the image to group the points of the image, the GNN cluster analysis including the apparatus caused to: group the points into a plurality of local GNN clusters, from a greedy analysis of the points using a k-d tree in which the points are organized in a k-dimensional space; and extend the plurality of local GNN clusters into a plurality of global GNN clusters, and including for a pair of local GNN clusters, applying the pair to nested conditions by which similarity of the local GNN clusters is evaluated according to defined similarity criteria, and the local GNN clusters are merged into a global GNN cluster when each of the defined similarity criteria is evaluated to true, and passed as global GNN clusters when any of the defined similarity criteria is evaluated to false; and detect the objects depicted in the image based on the global GNN clusters. Clause 23. The computer-readable storage medium of clause 22, wherein the apparatus caused to group the points into the plurality of local GNN clusters includes the apparatus caused to at least: generate the k-d tree to organize the points in a k-dimensional space; and add the points to a data queue; and iterate through the points in the data queue to group the points into the plurality of local GNN clusters. Clause 24. The computer-readable storage medium of clause 23, wherein the apparatus caused to iterate through the points in the data queue includes for an iteration of a plurality of iterations, the apparatus caused to: select a point from the data queue as a test point; perform a radius nearest neighbor search of the test point in the k-d tree to generate a cluster of points; test the cluster against a set of clusters to identify any existing cluster in the set with at least one point in common with the cluster; add the cluster to the set as a new cluster, or merged with the existing cluster when identified; and remove the points of the cluster from the data queue. Clause 25. The computer-readable storage medium of any of clauses 22 to 24, wherein the nested conditions include a first condition by which circumscribing hyperspheres of the local GNN clusters of the pair are evaluated to determine if a distance between the circumscribing hyperspheres is less than or equal to a distance threshold. Clause 26. The computer-readable storage medium of clause 25, wherein the nested conditions include a second condition by which a point in a first of the local GNN clusters that is closest to a centroid of a second of the local GNN clusters is evaluated to determine if the point is within a threshold distance of the centroid, and a third condition by which a minimum distance between the point in the first of the local GNN clusters and respective points in the second of the local GNN clusters is evaluated to determine if the minimum distance is less than the threshold distance. Clause 27. 
The computer-readable storage medium of clause 226, wherein the apparatus caused to extend the plurality of local GNN clusters includes the apparatus caused to: merge pairs of the local GNN clusters when each of the defined similarity criteria is evaluated true, and thereby produce a set of merged pairs of local GNN clusters; and iterate through the set of merged pairs to further merge any intersecting pair of the merged pairs into a respective global GNN cluster. Clause 28. The computer-readable storage medium of any of clauses 22 to 27, wherein the image depicts the objects in an ambient environment of a vehicle, and the computer-readable storage medium has further computer-readable program code stored therein that, in response to execution by the processing circuitry, causes the apparatus to further at least: perform situational awareness of the vehicle based on the objects as detected; and cause the vehicle to execute maneuvers in the ambient environment based on the situational awareness. Many modifications and other implementations of the disclosure set forth herein will come to mind to one skilled in the art to which the disclosure pertains having the benefit of the teachings presented in the foregoing description and the associated figures. Therefore, it is to be understood that the disclosure is not to be limited to the specific implementations disclosed and that modifications and other implementations are intended to be included within the scope of the appended claims. Moreover, although the foregoing description and the associated figures describe example implementations in the context of certain example combinations of elements and/or functions, it should be appreciated that different combinations of elements and/or functions may be provided by alternative implementations without departing from the scope of the appended claims. In this regard, for example, different combinations of elements and/or functions than those explicitly described above are also contemplated as may be set forth in some of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation. | 48,138 |
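As an informal aid to reading clauses 16, 17, 23 and 24 above, the following Python sketch illustrates the local GNN clustering loop they recite: the points are organized in a k-d tree, a test point is drawn from a data queue, a radius nearest-neighbor search produces a candidate cluster, and the candidate is either merged with an existing cluster sharing a point or added as a new cluster, after which its points are removed from the queue. The use of SciPy for the k-d tree, the 2-D point format and the radius value are assumptions of this sketch, not part of the claims; extension into global clusters would then apply the nested distance conditions of clauses 18, 19, 25 and 26 to pairs of these local clusters.

```python
# Minimal sketch of the local GNN clustering loop (cf. clauses 16-17, 23-24).
# Assumptions: points are 2-D pixel coordinates and `radius` is an arbitrary
# illustrative threshold; the claims fix neither choice.
from collections import deque
import numpy as np
from scipy.spatial import cKDTree

def local_gnn_clusters(points, radius=5.0):
    points = np.asarray(points, dtype=float)
    tree = cKDTree(points)              # organize the points in a k-d tree
    queue = deque(range(len(points)))   # data queue of point indices
    clusters = []                       # set of local clusters (index sets)
    while queue:
        test = queue.popleft()          # select a test point from the queue
        # radius nearest-neighbor search around the test point
        neighborhood = set(tree.query_ball_point(points[test], r=radius))
        merged = False
        for cluster in clusters:        # test against existing clusters
            if cluster & neighborhood:  # at least one point in common
                cluster |= neighborhood # merge with the existing cluster
                merged = True
                break
        if not merged:
            clusters.append(neighborhood)  # add as a new cluster
        # remove the cluster's points from the data queue
        queue = deque(i for i in queue if i not in neighborhood)
    return clusters
```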
11861871 | DETAILED DESCRIPTION OF THE INVENTION The following description related to the inventive method also applies to the inventive system, where applicable, and vice versa. The present invention provides a method for evaluating the crack intensity on a polymeric sheet based on a predetermined scale of crack intensity grades. The polymeric sheet is a roofing membrane, in particular a single ply roofing membrane, or a sealing membrane, in particular a waterproofing membrane, wherein the polymeric sheet is preferably made of a thermoplastic or thermoset material. A preferred polymeric sheet, in particular single ply roofing membrane, made of a thermoplastic material is a polyvinyl chloride (PVC) membrane, a thermoplastic olefin (TPO) membrane or a ketone ethylene ester (KEE) membrane. A preferred polymeric sheet, in particular single ply roofing membrane, made of a thermoset material is an ethylene propylene diene monomer (EPDM) membrane. The thickness of the polymeric sheet may depend on the desired application and quality. For instance, the thickness of the polymeric sheet may be in the range of 0.75 mm to 2.5 mm, preferably 1.2 to 2.0 mm, in particular for a single ply roofing membrane. The evaluation of the crack intensity on a polymeric sheet is based on a predetermined scale of crack intensity grades. The grades of the scale refer to a graded classification of crack intensities from absence of cracks or low crack intensity to very high crack intensity. Any scale of crack intensity grades that has been established in advance can be used. The predetermined scale is preferably a standardized scale. While the predetermined scale includes at least two grades, the scale preferably includes three, four, five or more grades. In a preferred embodiment, the scale defined in DIN EN 1297:2004-12, Annex B, Table B.1, with grades of 0 to 3, or a selection from these grades, is used as the predetermined scale for the crack intensity grades. Polymeric sheets exhibiting a grade of 3 are considered damaged, and usually a repair or replacement is necessary. The scale given in standard DIN EN 1297:2004-12 actually refers to thin coatings, but is also suitable for polymeric sheets, which usually have a higher thickness and to which the present invention refers. The scale given in DIN EN 1297 is related to the judgement of an experienced person and is typically not quantified. For this scale, the crack intensity and a rough estimation of the dimensions of the cracks observed for polymeric sheets are as follows:
Grade 0: essentially no cracks
Grade 1: flat and faint cracks (typically cracks of less than 10 μm in width and depth are present)
Grade 2: moderate to pronounced cracks (typically cracks of more than 10 μm and less than 100 μm in width and depth are present)
Grade 3: broad and deep cracks (typically cracks of more than 100 μm in width and depth are present)
While the predetermined scale may preferably include all the grades as defined in DIN EN 1297:2004-12, Annex B, Table B.1, it is also possible to use a selection from these grades as the predetermined scale for the crack intensity grades, for instance a scale including only grades 0, 2 and 3 or grades 1, 2 and 3 as defined in DIN EN 1297:2004-12. A limited selection of grades may be suitable if, e.g., only the selected grades are relevant for the inspected polymeric sheet. Other rating scales are technically possible as well.
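A minimal sketch of how the rough dimensional estimates quoted above could be turned into a grade lookup is given below. Since the standard relies on visual judgement by an experienced person, the numeric cut-offs used here are only the indicative values listed above, not normative thresholds.

```python
# Illustrative only: maps a measured crack width (micrometres) to the rough
# DIN EN 1297-style grade estimates quoted above. The standard itself relies
# on visual judgement, so these numeric cut-offs are an assumption of the sketch.
def crack_grade_from_width(width_um: float) -> int:
    if width_um <= 0:
        return 0   # essentially no cracks
    if width_um < 10:
        return 1   # flat and faint cracks
    if width_um < 100:
        return 2   # moderate to pronounced cracks
    return 3       # broad and deep cracks

assert crack_grade_from_width(50) == 2
```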
The inventive method includes the step of a) recording a digital image of at least a portion of a surface of the polymeric sheet using an apparatus for recording digital images. The digital image may be of the entire surface of the polymeric sheet or preferably of a portion of the surface of the polymeric sheet. It should be self-evident that the digital image is generally a top view image of the polymeric sheet surface. In a preferred embodiment, the polymeric sheet is an installed polymeric sheet and the digital image is recorded on the installation site. Since the program for pattern recognition may be implemented on a portable data processing device, the automatic classification of the crack intensity can be also carried out on the installation site. Alternatively, the automatic classification of the crack intensity may be effected at a different place where the data processing device is located. According to the prior art, a portion of the polymeric sheet is cut out from the installation site and is delivered to another location for analysis. The removed portion must be replaced. A benefit of the present invention is that the method is non-destructive. In a preferred embodiment, the digital image is a magnified image of the portion of the polymeric sheet surface. The magnification is preferably in the range of from 5 to 50, more preferably from 8 to 30 or 10 to 30, most preferably approximately 10. The most preferred embodiment is given by taking pictures of cracked surface with a magnification of 10-fold, in order to conform to DIN EN 13956 and DIN EN 1297. However, any other magnification is technically possible as well. The area of the surface of the polymeric sheet or of the portion of the surface of the polymeric sheet, respectively, from which the digital image is taken can vary, but is preferably an area of at least 100 cm2, e.g. an area of at least 10 cm×10 cm, of the surface of the polymeric sheet. A subarea of the digital image used for evaluation as discussed below, preferably represents an area of at least 100 mm2, e.g. an area of at least 10 mm×10 mm, of the surface of the polymeric sheet. The term area refers to an actual area of the polymeric sheet, i.e. not magnified. The digital image recorded may be used as input data for the following step of automatic classification. Alternatively, one or more subareas of the digital image recorded may be used as input data for the following step of automatic classification. The use of a plurality of subareas instead of the entire digital image may provide statistically more reliable results, since cracks of varying broadness and deepness may be distributed unevenly on the digital picture. Moreover, digital images often include image regions of different image quality so that it is possible to select subareas of higher image quality which may improve the evaluation results. For instance, it is known that in general the focus frame and the image margins of a digital image represent regions of lower image quality compared to other regions of the digital image. In a preferred embodiment a plurality of subareas of the digital image is generated by decomposing an image region located between the focus frame of the digital image and the image margins of the digital image into an array of subareas. The plurality of subareas is preferably generated by tiling, i.e. an array of subareas in form of tiles spaced from each other is generated. 
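A short sketch of the tiling step described above is given below, assuming the digital image is available as a NumPy array; the margin width, the excluded centre region around the focus frame and the tile size are illustrative values only and are not prescribed by the method.

```python
# Sketch of subarea (tile) generation by tiling. The margin, excluded centre
# square and tile size are illustrative assumptions.
import numpy as np

def make_tiles(image, tile=64, margin=32, centre_half=48):
    h, w = image.shape[:2]
    cy, cx = h // 2, w // 2
    tiles = []
    for y in range(margin, h - margin - tile + 1, tile):
        for x in range(margin, w - margin - tile + 1, tile):
            # skip tiles overlapping the focus frame at the image centre
            if abs(y + tile // 2 - cy) < centre_half and abs(x + tile // 2 - cx) < centre_half:
                continue
            tiles.append(image[y:y + tile, x:x + tile])
    return tiles

# Example: a 400 x 400 single-channel image yields an array of tiles located
# between the centre region and the image margins.
tiles = make_tiles(np.zeros((400, 400), dtype=np.uint8))
```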
When a plurality of subareas of the digital image is used, the plurality preferably comprises at least 20, more preferably at least 75, subareas of the digital image, and preferably not more than 500, more preferably not more than 200, subareas. The generation of subareas of a digital image in a desired pattern is conventional for a skilled person. Commercial programs for generating such subareas, or arrays of subareas, according to a desired pattern are available. If one or more subareas are used according to the method, the subareas are generated from the image data received from step a), and the subareas obtained are used as input data for the automatic classification step b). The inventive method further includes the step of b) automatic classification of the crack intensity by a computer-implemented program for pattern recognition by means of a trained artificial neural network, comprising
1) inputting the digital image or one or more subareas of the digital image to the trained artificial neural network as input data,
2) classification by the artificial neural network by assigning a grade from the predetermined scale of crack intensity grades to the digital image or the one or more subareas, and
3) outputting the assigned grade or grades for the digital image and/or the one or more subareas as output data.
An artificial neural network is a self-adaptive computer program that can be trained for certain tasks such as pattern recognition. The program contains the option to build up and change connections between an input and an output. In the end, after suitable training, the program can connect inputs (optical patterns) to outputs (ratings). An artificial neural network is based on a collection of connected units or nodes called artificial neurons. Each connection between artificial neurons can transmit a signal from one to another. The artificial neuron that receives the signal can process it and then signal artificial neurons connected to it. In common artificial neural network implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is calculated by a non-linear function of the sum of its inputs. Artificial neurons and connections typically have a weight that adjusts as learning proceeds. Thus, schematically, an artificial neural network includes an input layer, an output layer and one or more hidden layers in which a self-adaptive analysis is carried out. FIG. 9 illustrates the principle schematically. The principles of artificial neural networks are known to the skilled person. Details of this technology and its applications can be found, e.g., in S. C. Wang, 2003, Artificial Neural Network, Interdisciplinary Computing in Java Programming, The Springer International Series in Engineering and Computer Science, Vol. 743, and N. Gupta, 2013, Artificial Neural Network, Network and Complex Systems, Vol. 3, No. 1, pp. 24-28. Artificial neural networks are commercially available; an example is the program Watson developed by IBM. Up to now, artificial neural networks have typically been used for input that can be given in a mathematical form. For instance, an artificial neural network has been used for electric load forecasting, see D. C. Park et al., 1991, IEEE Transactions on Power Engineering, vol. 6, pages 442-449, or for the evaluation of the design of roof shapes, given as a geometrical input, see K. Sasaki, K. Tsutsumi, Kansei Engineering International, 2006, vol. 6, no. 3, pages 13-18.
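For illustration, a classifier with an input layer, hidden layers and one output per grade, as described above, can be sketched as follows; the concrete topology (a small convolutional network written with PyTorch) is an assumption of this sketch and is not prescribed by the method.

```python
# Minimal sketch of a grade classifier with input, hidden and output layers.
# The architecture is an illustrative assumption, not part of the method.
import torch
from torch import nn

class CrackGradeNet(nn.Module):
    def __init__(self, num_grades=4):        # e.g. grades 0 to 3
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),   # hidden layer
            nn.Linear(64, num_grades),                # one output per grade
        )

    def forward(self, x):                     # x: (batch, 1, 64, 64) grayscale tiles
        return self.classifier(self.features(x))

logits = CrackGradeNet()(torch.zeros(1, 1, 64, 64))   # -> shape (1, 4)
```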
A specific feature of the present invention is that the artificial neural network is trained by direct comparison of digital images or subareas, which are not parametrized, in particular the input data for the artificial neural network are data which are not parametrized. As cracks have random and widely varying shapes, it was surprising that reliable results are achieved for crack evaluation. The artificial neural network is trained in advance in a learning phase with a plurality of digital images or subareas thereof of polymeric sheet surface portions, whose grades in the predetermined scale are known and cover all grades of the predetermined scale. As for all applications using an artificial neural network, a learning phase is required to train the artificial neural networks before the artificial neural network is suitable for practical application. In the application phase, the trained artificial neural network automatically accomplishes the classification of crack intensity based on the predetermined scale given and the input of digital data provided. Learning paradigms used for artificial neural network are supervised learning, unsupervised learning or reinforcement learning, wherein supervised learning is preferred. The learning phase is preferably based on a direct feedback from a trained person. In a preferred embodiment, the learning phase comprisesa training phase where a plurality of digital images or subareas thereof are input to the artificial neural network and the artificial neural network is provided with the known grade of each of the digital images as feedback, anda test phase where the artificial neural networks classifies a plurality of digital images or subareas thereof and the grades assigned by the artificial neural networks are compared with the known grades of the digital images to determine a matching probability, andoptionally repeating the training phase and the test phase until the matching probability desired is reached. It is preferred to use at least 100 digital images for the learning phase, in particular each from a different polymeric sheet. A proportion of the digital images used, for instance a proportion of about 85 to 65%, is used for the training phase, and a proportion of the digital images used, for instance a proportion of about 15 to 35%, is used for the test phase, based on the total number of images used in the learning phase. An appropriate ratio for said proportions is e.g. about 75% images for the training phase and about 25% images for the test phase. If the matching probability desired is not achieved after a first learning cycle, the training phase and the test phase may be repeated one or more times until the matching probability desired is reached. It is evident that for a second and any further learning cycles digital images and/or subareas of digital images are to be used which are different from those used in the previous learning cycles in order to avoid redundancy. For a further learning cycle, the number of digital images and/or subareas may be appropriately lower than or the same as the number of digital images and/or subareas used in the first learning cycle. The matching probability achieved after the learning phase is preferably at least 70%, more preferably at least 80%. Hence, the artificial neural network can be trained in a learning phase comprising the following steps:1) Provide at least 100 representative samples of membrane with different crack intensity, covering all relevant crack grades, e.g. 
grades 0, 1, 2, 3 according to DIN EN 1297;2) Use a microscope with 10-fold magnification to take pictures of these samples and feed the pictures into the artificial neural network;3) Train the artificial neural network by feeding the assigned crack rating of the trained person for each picture into the artificial neural network;4) Check for matching probability by evaluating a certain amount of samples by the trained person and the artificial neural network in parallel and comparing the outcome5) Extend this procedure and training sample size, until a pre-defined matching probability (e.g >80% agreement between trained person and artificial neural network) is reached. In particular, the apparatus for recording digital images is an optical device for recording digital images. Accordingly, digital images recorded by the optical device are digitalized optical images, i.e. digital representations of optical images. The term “optical” refers to applications in the range of visible light. Preferably, the apparatus for recording digital images is portable. The apparatus for recording digital images is preferably a digital camera or an optical magnifying device including an image sensor, wherein an optical magnifying device including an image sensor is preferred. In principle, a smartphone or a tablet computer may be also used as the apparatus for recording digital images. The optical magnifying device is preferably a microscope including an image sensor. The apparatus for recording digital images usually includes a processing unit to transform the signals of the image sensor into electronic data of the digital image, and optionally a storage unit to store digital images recorded. The apparatus for recording digital images is generally provided with means for transferring the recorded digital image or recorded magnified digital image to one or more processing units wherein the program for pattern recognition and optionally a program for generating subareas of the digital image is implemented. The means for transferring the data may be a common wire connection, e.g. an USB cable, or a wireless connection. Hence, the transfer of data between the apparatus for recording digital images and the data processing device may be effected by wire connection or wireless connection. It is also possible that the apparatus for recording digital images and the data processing device are located in the same device, e.g. a smartphone or a tablet computer. The program for pattern recognition is computer-implemented, e.g. on a smartphone, a tablet computer, a laptop computer or a personal computer. In a preferred embodiment, the apparatus for recording digital images is an optical magnifying device including an image sensor, preferably a microscope including an image sensor, and is provided with means for transferring the recorded digital image or magnified digital image to a smartphone, a tablet computer, a laptop computer or a personal computer. If one or more subareas of the digital image are used for the inventive method, a program is used for generating the one or more subareas. The program for generating subareas of the digital image is computer-implemented, e.g. on a smartphone, a tablet computer, a laptop computer or a personal computer, which may be the same on which the program for pattern recognition is implemented or a different one. It is preferred that the digital image is recorded with a standardized recording distance and standardised light conditions. 
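A compact sketch of the learning-phase bookkeeping described above, i.e. a roughly 75%/25% split into training and test images and the matching probability computed as the fraction of test images on which the network's grade agrees with the trained person's grade, is given below; the function names and the exact split ratio are illustrative assumptions.

```python
# Sketch of the train/test split and matching-probability check of the
# learning phase. Names and the 0.75 ratio are illustrative only.
import random

def split_learning_set(image_ids, train_fraction=0.75, seed=0):
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    cut = int(len(ids) * train_fraction)
    return ids[:cut], ids[cut:]          # training images, test images

def matching_probability(person_grades, network_grades):
    pairs = list(zip(person_grades, network_grades))
    return sum(p == n for p, n in pairs) / len(pairs)

train_ids, test_ids = split_learning_set(range(119))
print(matching_probability([0, 1, 2, 2], [0, 1, 2, 1]))   # -> 0.75
```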
In a particular preferred embodiment, the apparatus for recording digital images, preferably the optical magnifying device including an image sensor, in particular the microscope including an image sensor, is therefore configured to record the digital image with a standardized recording distance and standardised light conditions. This standardized recording distance and standardised light conditions may be predetermined in connection with the predetermined scale used. Such predefined conditions are advantageous in order record pictures under comparable conditions so that the appearance of the cracks in the digital image is not affected by differing recording conditions. This allows for an improved crack evaluation. Further, many roofing membranes are highly reflective, which may affect the image quality too. Therefore, in a preferred embodiment the apparatus for recording digital images is configured to shield the portion of the polymeric sheet to be recorded from external light sources such as sun light, and includes an internal light source to illuminate the portion of the polymeric sheet to be recorded with standardized light conditions. An example for a suitable apparatus for recording digital images is W5 Wifi mini microscope obtainable from Ostec Electronics. W5 Wifi mini microscope is portable, can record magnified digital pictures, and transmit the data to any desired data processing device by wire connection or wireless connection. Moreover, W5 Wifi mini microscope is configured to ensure defined light conditions and a defined distance to the sheet. Said microscope includes means to shield external light sources and is provided with internal light sources. If the embodiment of the invention is used, where a plurality of subareas of the digital image are input to the artificial neural network, the artificial neural network assigns a grade from the predetermined scale to each of the subareas by parallel or subsequent processing. The grade or overall grade of the digital image is preferably determined from the grades assigned to each subarea by assigning the grade to the digital image which corresponds to that grade to which the highest number of subareas are assigned. Typically, a large majority of subareas of a digital image are assigned to one certain grade. The method may be repeated one or more times with one or more further digital images recorded from one or more portions of the surface of the polymeric sheet at different locations of the polymeric sheet. This may be suitable to verify the result achieved or to obtain more detailed information on the crack intensity of the polymeric sheet. The invention also concerns a system for evaluating the crack intensity on a polymeric sheet based on a predetermined scale of crack intensity grades, wherein the system comprisesA) an apparatus for recording digital images; andB) a data processing device comprising means for carrying out step b) of the inventive method. Details on the apparatus for recording digital images and on the data processing device have been given above with respect to the inventive method so that reference is made thereto. In particular, the apparatus for recording digital images is an optical device for recording digital images. 
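The determination of the overall grade described above, i.e. assigning to the digital image that grade to which the highest number of subareas is assigned, amounts to a simple majority vote over the tile grades, as in the following sketch.

```python
# Overall grade of a digital image: the grade assigned to the highest number
# of subareas (tiles), as described above.
from collections import Counter

def overall_grade(tile_grades):
    return Counter(tile_grades).most_common(1)[0][0]

# Example: 68% of tiles graded 0 and 32% graded 1 -> overall grade 0
print(overall_grade([0] * 68 + [1] * 32))   # -> 0
```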
In a preferred embodiment, the apparatus for recording digital images is a digital camera or an optical magnifying device including an image sensor, preferably a microscope including an image sensor, wherein the apparatus for recording digital images is provided with means for transferring the recorded digital image or magnified digital image to the data processing device. In a preferred embodiment, the apparatus for recording digital images is configured to record the digital image with a standardized recording distance and standardised light conditions. As discussed above, the data processing device is preferably a smartphone, a tablet computer, a laptop computer or a personal computer. The drawings enclosed and the followings examples are given for further explanation of the invention, which is however not limited to the embodiments of the examples. EXAMPLES A proof of concept for the inventive method was carried out by evaluating the crack intensity on a number of single ply roofing membrane samples. As the predetermined scale of crack intensity grades a selection from the grades as defined in DIN EN 1297:2004-12, Annex B, Table B.1, was used. The scale used includes grades of 0, 1 and 2 which corresponds to grades 0, 2 and 3, respectively, of the scale as defined in DIN EN 1297:2004-12, Annex B, Table B.1. With respect to the scale for the crack intensity used,FIG.1represents an image of a single ply roofing membrane having a grade of 0,FIG.2represents an image of a single ply roofing membrane having a grade of 1, andFIG.3represents an image of a single ply roofing membrane having a grade of 2. The digital images of single ply roofing membrane samples were recorded using a portable W5 Wifi mini microscope from Ostec Electronics. A conventional personal computer was used as data processing device on which a common program for pattern recognition by means of an artificial neural network and a conventional program for generating subareas of a digital image are implemented. The transfer of the data of a digital image recorded from the microscope to the personal computer is done via a wireless connection. A total of 119 single ply roofing membrane samples were used in the test, all of which were graded by a trained person based on the used scale. Hence, the grade of all samples used is known. A digital image was taken for each sample with the W5 Wifi mini microscope with a predetermined recording distance and predetermined light conditions and a 10-fold magnification. Each digital image recorded was transferred to the personal computer and subjected to a generation of an array of subareas by the program for generating subareas of a digital image by tiling. The array of subareas is located in an image region between the focus frame at the centre of the digital image and the image margins of the digital image and is composed of 93 subareas in form of tiles per digital image. The pattern of the array of tiles is shown inFIG.4. InFIG.4the region at the focus frame in the centre of the digital picture and the regions of the image margins exhibit lower image quality, whereas the region in which the subareas are located exhibits a good image quality. Each digital image recorded represents an area of at least 100 cm2of the polymeric sheet. Each subarea represents an area of at least 100 mm2of the polymeric sheet. 
The following Table 1 summarises the number of digital images, which corresponds to the number of single ply roofing membrane samples used, and the number of tiles (subareas) obtained therefrom, together with the grade of each sample on the predetermined scale as assigned by the trained person.

TABLE 1
Grade     Number of digital images     Number of tiles
0         31                           2883
1         48                           4464
2         39                           3627
Total     119                          11067

The general procedure for processing is as follows. Each tile of a digital image is input to the artificial neural network as input data, where each tile is processed subsequently. For the classification, the artificial neural network assigns a grade of 0, 1, or 2 to each tile as output data. The grade of the digital image, and thus of the polymeric sheet, is then determined from the grades assigned to the tiles as that grade to which the highest number of subareas is assigned. Since the artificial neural network used is not yet a trained artificial neural network, the tests carried out actually represent the learning phase for the artificial neural network. Accordingly, the set of data given in Table 1 above is divided into data for a training phase and data for a test phase. The first 10 digital images of each grade were used for the test phase, i.e. a total of 30 images (about 25% of the total images). The other digital images were used in the training phase of the artificial neural network (89 images, about 75% of the total images). The learning phase starts with the training phase, in which the subareas of the digital images provided for the training phase are input to the artificial neural network. During the training phase, the artificial neural network is also provided with the known grade of each digital image as feedback. As indicated above, the grade is known because it has been assigned by the trained person. After the training phase, a test phase was conducted, in which the subareas of the digital images provided for the test phase are input to the artificial neural network. For each digital image, the grade assigned by the trained person and the grade assigned by the artificial neural network were compared, and the presence or absence of a match was noted. The matching for each grade in the test phase is shown in Table 2.

TABLE 2
Assigned grade (1)     0 (*)     1 (*)     2 (*)
0                      100%      0%        0%
1                      0%        100%      0%
2                      0%        10%       90%
(1) assigned by artificial neural network; (*) assigned by trained person

The resulting matching probability was 96.7%, which is a very good matching probability, so that the learning phase can be finished. A repetition of the learning phase is not required; the artificial neural network is trained in a sufficient manner. FIGS. 5 to 8 are examples of digital images of polymeric sheet samples and grade assignments of subareas by the artificial neural network obtained during the test phase. The omission of the upper left corner in these images is an artefact; said corners have been omitted because they include information written on the digital images. FIG. 5 is a digital image of a polymeric sheet which has been assigned a grade of 0 by the trained person. The artificial neural network assigned about 68% of the tiles a grade of 0, and about 32% of the tiles a grade of 1. Accordingly, the artificial neural network assigned the overall digital image a grade of 0 (i.e. that grade to which the highest number of subareas is assigned by the artificial neural network), which matches the grade determined by the trained person. FIG. 6 is a digital image of a polymeric sheet which has been assigned a grade of 2 by the trained person.
The artificial neural network assigned all tiles with a grade of 2. Accordingly, the artificial neural network assigned the digital image with a grade of 2 which matches with the grade determined by the trained person. FIG.7is a digital image of a polymeric sheet which has been assigned a grade of 2 by the trained person. The artificial neural network assigned about 96% of the tiles with a grade of 2, and about 4% of the tiles with a grade of 1. Accordingly, the artificial neural network assigned the digital image with a grade of 2 which matches with the grade determined by the trained person. FIG.8is a digital image of a polymeric sheet which has been assigned a grade of 1 by the trained person. The artificial neural network assigned all tiles with a grade of 1. Accordingly, the artificial neural network assigned the digital image with a grade of 1 which matches with the grade determined by the trained person. | 28,557 |
11861872 | DESCRIPTION OF EMBODIMENTS A login method based on fingerprint recognition in the embodiments of this application is applicable to a terminal device such as a computer or a tablet computer (Portable Android Device, PAD). Certainly, the method may also be applied to another device that has a power button and to which login can be performed by using fingerprint data. The login method based on fingerprint recognition in the embodiments of this application mainly focuses on how to prolong a battery life while improving user experience when using a fingerprint for verification on terminal device login. In a terminal device login process, there may be two implementations of using a fingerprint for authentication. In one implementation, a power button and a fingerprint sensor are separated, to be specific, a user needs to operate the power button and input a fingerprint once again during login. Consequently, the user needs to perform an operation twice, and has poor experience. In the other implementation, power needs to be continuously supplied to a fingerprint sensor when a system is in a shutdown state, and after a user inputs a fingerprint, components other than the sensor are powered on. If fingerprint authentication succeeds, a login operation is performed. However, in this manner, power needs to be supplied to the fingerprint sensor in the shutdown state, and therefore, a battery life is reduced. Therefore, the login method based on fingerprint recognition and a device in the embodiments of this application are intended to resolve a problem that power-on and fingerprint authentication operations need to be performed a plurality of times and a technical problem that a battery life is relatively short when a fingerprint is used for authentication and login. Specific embodiments are used below to describe in detail the technical solutions of this application. The following several specific embodiments may be combined with each other, and a same or similar concept or process may not be described repeatedly in some embodiments. FIG.2is a schematic flowchart of Embodiment 1 of a login method based on fingerprint recognition according to this application. This embodiment of this application provides a login method based on fingerprint recognition. The method may be performed by any apparatus that performs the login method based on fingerprint recognition, and the apparatus may be implemented by using software and/or hardware. In this embodiment, the apparatus may be integrated into a terminal device. As shown inFIG.2, the method in this embodiment may include the following steps. Step201: Collect fingerprint data when a power-on signal is detected. In this embodiment, when a power button is pressed by a user, the terminal device detects the power-on signal. In this case, a power controller in the terminal device controls a DC/DC converter to be powered on to output each path of power supply, including a fingerprint sensor power supply. In a process in which the user presses the power button, a fingerprint sensor is powered on, and starts to collect fingerprint data before a finger of the user leaves the power button. Step202: Match the fingerprint data with preset fingerprint data that is prestored. 
In this embodiment, fingerprint data of one or more users who are allowed to log in to the terminal device is prestored in the terminal device, and after collecting the fingerprint data, the terminal device matches the collected fingerprint data with the preset fingerprint data that is prestored. In a possible implementation, the preset fingerprint data may be fingerprint data previously input by the user, or may be fingerprint data obtained through synchronization with a server, and may be a fingerprint image, or may be an eigenvalue extracted based on a fingerprint image. The foregoing data may be encrypted or unencrypted for storage. Step203: Log in to the terminal device if the matching succeeds. In this embodiment, after the terminal device matches the collected fingerprint data with the preset fingerprint data that is prestored, and if the matching succeeds, the user logs in to the terminal device. According to the login method based on fingerprint recognition provided in this embodiment of this application, the fingerprint data is collected when the power-on signal is detected, and the fingerprint data is matched with the preset fingerprint data that is prestored. If the matching succeeds, the user logs in to the terminal device. Because the terminal device collects the fingerprint data when detecting the power-on signal, the fingerprint data may be input in a process of completing power-on. Therefore, the following phenomenon can be avoided: Power-on and fingerprint authentication operations need to be performed twice, or power needs to be continuously supplied to the fingerprint sensor in a shutdown state. In this way, not only terminal device login efficiency can be improved, but also a battery life of the terminal device can be increased. Optionally, if the fingerprint data fails to match the preset fingerprint data that is prestored, prompt information is output, where the prompt information includes at least one of text information, acoustic information, or light information. The prompt information may instruct the user to move a finger or press the power button again. Specifically, if the collected fingerprint data does not match the preset fingerprint data that is prestored, in other words, when the matching fails, prompt information is output to remind the user that an input fingerprint is incorrect.FIG.3is a schematic diagram of an interface for outputting prompt information. As shown inFIG.3, if matching fails, “Fingerprint input error” information is shown on the interface of the terminal device to remind the user. In actual application, in addition to the text information, the prompt information may include the acoustic information, the light information, or the like. For example, the terminal device may remind the user through voice broadcast or light blinking or by using acoustic information. In addition, the terminal device may remind the user in only one of the foregoing manners in the prompt information, or remind the user in a combination of two or more of the foregoing manners in the prompt information. A specific manner of the prompt information is not limited in this embodiment. FIG.4is a schematic structural diagram of Embodiment 1 of a terminal device according to the embodiments of this application. 
As shown inFIG.4, the terminal device includes a power button, a fingerprint collection module, a fingerprint recognition module, and a power control module, and the fingerprint collection module is disposed on the power button, wherethe power control module is configured to: when detecting that the power button is pressed, control power output to supply power to the fingerprint collection module;the fingerprint collection module is configured to collect fingerprint data, and send the fingerprint data to the fingerprint recognition module;the fingerprint recognition module is configured to match the fingerprint data with preset fingerprint data that is prestored; andthe login module is configured to log in to the terminal device when the matching performed by the fingerprint recognition module succeeds. In this embodiment, the system includes a fingerprint and button module, the fingerprint recognition module, a memory, an EC, the power control module, a DC/DC converter, a battery, an AC/DC adapter, and a platform power supply, where the fingerprint and button module is configured to provide a power-on/shutdown function and fingerprint input, the battery is an apparatus that supplies power to a board, the AC/DC adapter is an apparatus that charges the battery, the DC/DC converter is a board power converter configured to provide a board chip power supply, the power controller is configured to control the DC/DC converter to output each path of power supply, the fingerprint recognition module is configured to recognize a collected fingerprint, the EC is a board peripheral and I/O control device, the memory is an external memory chip of a processor, and the platform power supply is a power supply for components other than the fingerprint collection module, for example, a CPU power supply or an EC power supply. The fingerprint collection module is disposed on the power button to form the fingerprint and button module. After a user presses the power button, the power control module detects a power-on signal. In this case, the power control module in the terminal device controls the DC/DC converter to be powered on to output each path of power supply, including a fingerprint collection module power supply. In a process in which the user presses the power button, the fingerprint collection module is powered on, and starts to collect fingerprint data before a finger of the user leaves the power button. The power control module may be a power controller, the fingerprint collection module may be a fingerprint sensor, and the fingerprint recognition module may be a central processing unit (Central Processing Unit, CPU) or a micro control unit (Microcontroller Unit, MCU) of the terminal device. When the fingerprint recognition module is a CPU, after collecting the fingerprint data, the fingerprint collection module caches the fingerprint data. After the CPU is powered on, the CPU sends a notification message to the fingerprint collection module, and after receiving the notification message, the fingerprint collection module sends the collected fingerprint data to the fingerprint recognition module. The fingerprint collection module may collect only one piece of fingerprint data, and cache the collected fingerprint data. 
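A compact sketch of the interaction just described between the power control module, the fingerprint collection module, the fingerprint recognition module and the login module is given below. All names are hypothetical placeholders, and the set-membership test stands in for real fingerprint matching, which compares fingerprint features rather than exact values.

```python
# Sketch of the power-on / collect / match / login flow (cf. steps 201-203
# and the modules of FIG. 4). All names are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class FingerprintLogin:
    enrolled: set = field(default_factory=set)   # prestored preset fingerprint data

    def on_power_button(self, power_on, collect_fingerprint, login, show_prompt):
        power_on()                                 # power output, incl. the fingerprint sensor
        sample = collect_fingerprint()             # collected while the finger is on the button
        if sample in self.enrolled:                # stand-in for feature-based matching
            login()                                # log in to the terminal device
        else:
            show_prompt("Fingerprint input error") # text/acoustic/light prompt

# Example wiring with trivial stand-ins:
ctrl = FingerprintLogin(enrolled={"user-1-template"})
ctrl.on_power_button(lambda: None, lambda: "user-1-template",
                     lambda: print("logged in"), print)
```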
In addition, to increase fingerprint definition and improve fingerprint data recognition efficiency, the fingerprint collection module may collect a plurality of pieces of fingerprint data, select one optimal piece of fingerprint data from the plurality of pieces of fingerprint data through comparison, and cache the data. Optionally, the fingerprint collection module may further determine, by determining whether the fingerprint collection module and the CPU are synchronously powered on, whether to cache the collected fingerprint data. In actual application, the CPU may output a high level or a low level to the fingerprint collection module by using a GPIO pin. The fingerprint collection module determines, based on the input level, whether the CPU is initialized. If the fingerprint collection module determines that the CPU is initialized, but fingerprint data collection is not completed, the fingerprint collection module may directly send subsequently collected fingerprint data to the fingerprint recognition module instead of caching the collected fingerprint data, thereby improving fingerprint data recognition efficiency. After receiving the fingerprint data sent by the fingerprint collection module, the fingerprint recognition module matches the fingerprint data with the preset fingerprint data that is prestored. If the matching succeeds, the user logs in to the terminal device. In the fingerprint recognition module, fingerprint data of one user may be prestored, or fingerprint data of a plurality of users who are allowed to log in to the terminal device may be prestored. In a possible implementation, the Windows system is used as an example. The fingerprint recognition module sends a matching success message to the login module, and the login module may log in to the system. The login module may be a software module for identity recognition in the Windows system, or may be software and/or hardware for sending a login instruction to the Windows system. Alternatively, after the verification succeeds, the fingerprint recognition module may send the prestored fingerprint data to the login module. This implementation does not change a fingerprint login authentication manner of the system, and may be adapted to various existing systems. In a possible implementation, after completing fingerprint recognition, the fingerprint recognition module may send the matching success message or the prestored fingerprint data to the login module. Alternatively, there may be a time difference between a fingerprint matching success and system login, in other words, when fingerprint matching succeeds, the system has not entered a login stage. In this case, the fingerprint recognition module may cache a matching success result, and send the matching success message or the preset fingerprint data to the login module when the system enters the login stage. When the fingerprint recognition module is an MCU, the MCU prestores one or more pieces of fingerprint data, and the MCU is exclusively configured to perform fingerprint data matching. In addition, the fingerprint collection module may further determine, by determining whether the fingerprint collection module and the MCU are synchronously powered on, whether to cache the collected fingerprint data. In actual application, the MCU may output a high-level IO signal or a low-level IO signal to the fingerprint collection module. The fingerprint collection module determines, based on the input level, whether the MCU is powered on. 
If the fingerprint collection module and the MCU are not synchronously powered on, in other words, the fingerprint collection module is powered on, but the MCU is not powered on, the fingerprint collection module may cache the collected fingerprint data, and after the MCU is powered on, the fingerprint collection module sends the cached fingerprint data to the MCU, so that the MCU performs fingerprint data matching. If the fingerprint collection module determines that the fingerprint collection module and the MCU are synchronously powered on, in other words, the fingerprint collection module determines that the MCU is powered on, but fingerprint data collection is not completed, the fingerprint collection module directly sends subsequently collected fingerprint data to the MCU instead of caching the collected fingerprint data, thereby improving fingerprint recognition efficiency. In addition, if the fingerprint recognition module is an MCU, because the MCU is exclusively configured to perform fingerprint data matching, if the collected fingerprint data fails to match the preset fingerprint data that is prestored in the MCU, before the terminal device enters a login interface, the terminal device may send prompt information to the user to remind the user of a fingerprint input error, thereby improving user experience. When the fingerprint recognition module is an MCU, after matching the fingerprint data collected by the fingerprint collection module with the preset fingerprint data that is prestored, the MCU may directly send a matching result to the CPU, so that the CPU controls, based on the matching result, whether to log in to the terminal device. If the matching succeeds, the user logs in to the terminal device; or if the matching fails, prompt information is output to remind the user of a fingerprint input error. After performing fingerprint data matching, the MCU directly sends the matching result to the CPU, so that CPU processing efficiency can be improved. Moreover, after performing fingerprint data matching, if the matching succeeds, the MCU may directly send successfully matched fingerprint data to the CPU. The CPU stores the fingerprint data, and controls the user to log in to the terminal device. If the CPU does not receive fingerprint data sent by the MCU, it indicates that the matching fails, and the CPU outputs prompt information to remind the user of a fingerprint data input error. Optionally, the terminal device further includes a contact, and the power button interworks with the contact. When the contact is closed, a stroke of the power button is less than or equal to a maximum stroke of the power button. Specifically,FIG.5is a schematic structural diagram 1 of a power button. As shown inFIG.5, because a power button11interworks with a contact12, when a user presses the power button11, the contact12also moves accordingly, so that the power control module supplies power to another component of the terminal device. When the user presses the power button11, the contact12is closed, in other words, a circuit is closed, and a stroke of the power button11is equal to a maximum stroke of the power button. In this case, because the contact is closed, in other words, the fingerprint collection module can be powered on when the power button is pressed to the maximum stroke, after the user presses the power button, a finger needs to stay on the power button for a preset period of time, to ensure completeness and accuracy of fingerprint data collection. 
The preset period of time may be set based on experience or an actual situation. For example, the preset period of time may be set to 1 s or 1.5 s. A specific value of the preset period of time is not limited in this embodiment. The terminal device may further output prompt information, including text information, acoustic information, or light information, to remind the user to press for a period of time. For example, a progress bar is shown on a screen, or an indicator light changes from red to green, so that the user can know whether fingerprint recognition succeeds. The Windows system is used as an example. It takes several seconds to dozens of seconds from power-on to login interface display. If fingerprint recognition does not succeed when the user presses the power button, but this is found after a login interface is displayed on the screen, fingerprint recognition has to be performed once again, causing an inconvenient secondary operation. Further, because the fingerprint collection module is disposed on the power button, the power button may be unintentionally pressed in a second time of fingerprint recognition, making the terminal device to shut down, and causing a misoperation. FIG.6is a schematic structural diagram 2 of a power button. As shown inFIG.6, on the basis of the foregoing case, when a user presses the power button, and a contact is closed, in other words, a circuit is closed, and a stroke of the power button is less than a maximum stroke of the power button. In this case, because the contact is closed, in other words, the fingerprint collection module is powered on when the power button has not been pressed to the maximum stroke, the user only needs to press the power button to the maximum stroke, so that the fingerprint collection module collects fingerprint data, thereby improving fingerprint data collection efficiency. In addition, to ensure completeness and accuracy of fingerprint data collection, a finger may stay on the power button for a preset period of time when the user presses the power button. The terminal device may also notify a fingerprint recognition result in the foregoing listed manners. The terminal device provided in this embodiment of this application includes the power button, the fingerprint collection module, the fingerprint recognition module, the power control module, and the login module, and the fingerprint collection module is disposed on the power button. The power control module is configured to: when a power-on signal is detected, control power output to supply power to the fingerprint collection module; the fingerprint collection module is configured to collect the fingerprint data, and send the fingerprint data to the fingerprint recognition module; the fingerprint recognition module is configured to match the fingerprint data with the preset fingerprint data that is prestored; and the login module is configured to log in to the terminal device when the matching performed by the fingerprint recognition module succeeds. Because the fingerprint collection module is disposed on the power button, after the user presses the power button, and power is supplied to the fingerprint collection module, the fingerprint collection module starts to collect the fingerprint data. In this way, the fingerprint data may be input in a process of completing power-on. 
Therefore, the following phenomenon can be avoided: Power-on and fingerprint authentication operations need to be performed a plurality of times, and power needs to be continuously supplied to a fingerprint sensor in a shutdown state. In this way, not only terminal device login efficiency can be improved, but also a battery life of the terminal device can be increased. It may be clearly understood by a person skilled in the art that, for the purpose of convenient and brief description, division of the foregoing function modules is used only as an example for illustration. In actual application, the foregoing functions can be allocated to different function modules and implemented based on a requirement, in other words, an inner structure of an apparatus is divided into different function modules to implement all or some of the functions described above. For a detailed working process of the foregoing system, apparatus, and unit, refer to a corresponding process in the foregoing method embodiments. Details are not described herein again. In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiment is merely an example. For example, the module or unit division is merely logical function division and there may be other division in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented by using some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, and may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected based on actual requirements to achieve the objectives of the solutions of the embodiments. In addition, functional units in the embodiments of this application may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in a form of hardware, or may be implemented in a form of a software functional unit. When the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, the integrated unit may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or all or some of the technical solutions may be implemented in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to perform all or some of the steps of the methods described in the embodiments of this application. 
The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc. | 23,838 |
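Before leaving this embodiment, the cache-or-forward behaviour described above for the fingerprint collection module, i.e. checking a level signal to decide whether the recognition module (CPU or MCU) is already powered on, caching samples otherwise and flushing the cache afterwards, can be sketched as follows; all names are hypothetical.

```python
# Sketch of the cache-or-forward decision of the fingerprint collection module.
# The "GPIO level" is modelled as a callable; all names are hypothetical.
class FingerprintCollector:
    def __init__(self):
        self.cache = []

    def on_sample(self, sample, recognizer_ready, send):
        if recognizer_ready():            # e.g. GPIO pin driven high by the CPU/MCU
            for cached in self.cache:     # flush anything collected before power-up
                send(cached)
            self.cache.clear()
            send(sample)                  # forward directly, no caching needed
        else:
            self.cache.append(sample)     # recognition module not yet powered on

collector = FingerprintCollector()
collector.on_sample(b"print-1", lambda: False, print)   # cached
collector.on_sample(b"print-2", lambda: True, print)    # flushes b"print-1", then sends b"print-2"
```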
11861873 | In accordance with common practice the various features illustrated in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may not depict all of the components of a given system, method or device. Finally, like reference numerals may be used to denote like features throughout the specification and figures. DESCRIPTION Numerous details are described in order to provide a thorough understanding of the example implementations shown in the drawings. However, the drawings merely show some example aspects of the present disclosure and are therefore not to be considered limiting. Those of ordinary skill in the art will appreciate that other effective aspects and/or variants do not include all of the specific details described herein. Moreover, well-known systems, methods, components, devices and circuits have not been described in exhaustive detail so as not to obscure more pertinent aspects of the example implementations described herein. In various implementations, gaze tracking is used to enable user interaction, provide foveated rendering, or reduce geometric distortion. A gaze tracking system includes a camera and a processor that performs gaze tracking on data received from the camera regarding light from a light source reflected off the eye of a user. In various implementations, the camera includes an event camera with a plurality of light sensors at a plurality of respective locations that, in response to a particular light sensor detecting a change in intensity of light, generates an event message indicating a particular location of the particular light sensor. An event camera may include or be referred to as a dynamic vision sensor (DVS), a silicon retina, an event-based camera, or a frame-less camera. Thus, the event camera generates (and transmits) data regarding changes in light intensity as opposed to a larger amount of data regarding absolute intensity at each light sensor. Further, because data is generated when intensity changes, in various implementations, the light source is configured to emit light with modulating intensity. In various implementations, the asynchronous pixel event data from one or more event cameras is accumulated to produce one or more inputs to a neural network configured to determine one or more gaze characteristics, e.g., pupil center, pupil contour, glint locations, gaze direction, etc. The accumulated event data can be accumulated over time to produce one or more input images for the neural network. A first input image can be created by accumulating event data over time to produce an intensity reconstruction image that reconstructs the intensity of the image at the various pixel locations using the event data. A second input image can be created by accumulating event data over time to produce a timestamp image that encodes the age of (e.g., time since) recent event camera events at each of the event camera pixels. A third input image can be created by accumulating glint-specific event camera data over time to produce a glint image. These input images are used individually or in combination with one another and/or other inputs to the neural network to generate the gaze characteristic(s). In other implementations, event camera data is uses as input to a neural network in other forms, e.g., individual events, events within a predetermined time window, e.g., 10 milliseconds. 
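A brief sketch of accumulating asynchronous event-camera events into two of the inputs described above, an intensity-reconstruction image and a timestamp image encoding the age of the most recent event at each pixel, is given below; the event tuple format, the exponential age encoding and the image size are assumptions of this sketch.

```python
# Sketch of accumulating (x, y, t, polarity) events into an intensity
# reconstruction image and a timestamp (event-age) image. The exact event
# format and encodings are illustrative assumptions.
import numpy as np

def accumulate_events(events, height, width, now):
    intensity_img = np.zeros((height, width), dtype=np.float32)
    last_t = np.full((height, width), -np.inf, dtype=np.float32)
    for x, y, t, polarity in events:           # polarity: +1 brighter, -1 darker
        intensity_img[y, x] += polarity        # simple integration of intensity changes
        last_t[y, x] = max(last_t[y, x], t)    # time of the most recent event per pixel
    age = now - last_t                         # older events -> larger age
    timestamp_img = np.where(np.isfinite(age), np.exp(-age), 0.0)  # decay encoding
    return intensity_img, timestamp_img

ev = [(10, 12, 0.001, +1), (10, 12, 0.004, -1), (40, 7, 0.008, +1)]
intensity, ts = accumulate_events(ev, height=64, width=64, now=0.010)
```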
In various implementations, a neural network that is used to determine gaze characteristics is configured to do so efficiently. Efficiency is achieved, for example, by using a multi-stage neural network. The first stage of the neural network is configured to determine an initial gaze characteristic, e.g., an initial pupil center, using reduced resolution inputs. For example, rather than using a 400×400 pixel input image, the resolution of the input image at the first stage can be reduced down to 50×50 pixels. The second stage of the neural network is configured to determine adjustments to the initial gaze characteristic using location-focused input, e.g., using only a small input image centered around the initial pupil center. For example, rather than using the 400×400 pixel input image, a selected portion of this input image (e.g., 80×80 pixels centered around the pupil center) at the same resolution can be used as input at the second stage. The determinations at each stage are thus made using relatively compact neural network configurations. The respective neural network configurations are relatively small and efficient due to the respective inputs (e.g., a 50×50 pixel image and an 80×80 pixel image) being smaller than the full resolution (e.g., 400×400 pixel image) of the entire image of data received from the event camera(s). FIG.1is a block diagram of an example operating environment100in accordance with some implementations. While pertinent features are shown, those of ordinary skill in the art will appreciate from the present disclosure that various other features have not been illustrated for the sake of brevity and so as not to obscure more pertinent aspects of the example implementations disclosed herein. To that end, as a non-limiting example, the operating environment100includes a controller110and a head-mounted device (HMD)120. In some implementations, the controller110is configured to manage and coordinate an augmented reality/virtual reality (AR/VR) experience for the user. In some implementations, the controller110includes a suitable combination of software, firmware, and/or hardware. The controller110is described in greater detail below with respect toFIG.2. In some implementations, the controller110is a computing device that is local or remote relative to the scene105. In one example, the controller110is a local server located within the scene105. In another example, the controller110is a remote server located outside of the scene105(e.g., a cloud server, central server, etc.). In some implementations, the controller110is communicatively coupled with the HMD120via one or more wired or wireless communication channels144(e.g., BLUETOOTH, IEEE 802.11x, IEEE 802.16x, IEEE 802.3x, etc.). In some implementations, the HMD120is configured to present the AR/VR experience to the user. In some implementations, the HMD120includes a suitable combination of software, firmware, and/or hardware. The HMD120is described in greater detail below with respect toFIG.3. In some implementations, the functionalities of the controller110are provided by and/or combined with the HMD120. According to some implementations, the HMD120presents an augmented reality/virtual reality (AR/VR) experience to the user while the user is virtually and/or physically present within the scene105. In some implementations, while presenting an augmented reality (AR) experience, the HMD120is configured to present AR content and to enable optical see-through of the scene105. 
In some implementations, while presenting a virtual reality (VR) experience, the HMD120is configured to present VR content and to enable video pass-through of the scene105. In some implementations, the user wears the HMD120on his/her head. As such, the HMD120includes one or more AR/VR displays provided to display the AR/VR content. For example, the HMD120encloses the field-of-view of the user. In some implementations, the HMD120is replaced with a handheld electronic device (e.g., a smartphone or a tablet) configured to present AR/VR content to the user. In some implementations, the HMD120is replaced with an AR/VR chamber, enclosure, or room configured to present AR/VR content in which the user does not wear or hold the HMD120. FIG.2is a block diagram of an example of the controller110in accordance with some implementations. While certain specific features are illustrated, those skilled in the art will appreciate from the present disclosure that various other features have not been illustrated for the sake of brevity, and so as not to obscure more pertinent aspects of the implementations disclosed herein. To that end, as a non-limiting example, in some implementations the controller110includes one or more processing units202(e.g., microprocessors, application-specific integrated-circuits (ASICs), field-programmable gate arrays (FPGAs), graphics processing units (GPUs), central processing units (CPUs), processing cores, and/or the like), one or more input/output (I/O) devices206, one or more communication interfaces208(e.g., universal serial bus (USB), FIREWIRE, THUNDERBOLT, IEEE 802.3x, IEEE 802.11x, IEEE 802.16x, global system for mobile communications (GSM), code division multiple access (CDMA), time division multiple access (TDMA), global positioning system (GPS), infrared (IR), BLUETOOTH, ZIGBEE, and/or the like type interface), one or more programming (e.g., I/O) interfaces210, a memory220, and one or more communication buses204for interconnecting these and various other components. In some implementations, the one or more communication buses204include circuitry that interconnects and controls communications between system components. In some implementations, the one or more I/O devices206include at least one of a keyboard, a mouse, a touchpad, a joystick, one or more microphones, one or more speakers, one or more image sensors, one or more displays, and/or the like. The memory220includes high-speed random-access memory, such as dynamic random-access memory (DRAM), static random-access memory (SRAM), double-data-rate random-access memory (DDR RAM), or other random-access solid-state memory devices. In some implementations, the memory220includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory220optionally includes one or more storage devices remotely located from the one or more processing units202. The memory220comprises a non-transitory computer readable storage medium. In some implementations, the memory220or the non-transitory computer readable storage medium of the memory220stores the following programs, modules and data structures, or a subset thereof including an optional operating system230and an augmented reality/virtual reality (AR/VR) experience module240. The operating system230includes procedures for handling various basic system services and for performing hardware dependent tasks. 
In some implementations, the AR/VR experience module240is configured to manage and coordinate one or more AR/VR experiences for one or more users (e.g., a single AR/VR experience for one or more users, or multiple AR/VR experiences for respective groups of one or more users). To that end, in various implementations, the AR/VR experience module240includes a data obtaining unit242, a tracking unit244, a coordination unit246, and a rendering unit248. In some implementations, the data obtaining unit242is configured to obtain data (e.g., presentation data, interaction data, sensor data, location data, etc.) from at least the HMD120. To that end, in various implementations, the data obtaining unit242includes instructions and/or logic therefor, and heuristics and metadata therefor. In some implementations, the tracking unit244is configured to map the scene105and to track the position/location of at least the HMD120with respect to the scene105. To that end, in various implementations, the tracking unit244includes instructions and/or logic therefor, and heuristics and metadata therefor. In some implementations, the coordination unit246is configured to manage and coordinate the AR/VR experience presented to the user by the HMD120. To that end, in various implementations, the coordination unit246includes instructions and/or logic therefor, and heuristics and metadata therefor. In some implementations, the rendering unit248is configured to render content for display on the HMD120. To that end, in various implementations, the rendering unit248includes instructions and/or logic therefor, and heuristics and metadata therefor. Although the data obtaining unit242, the tracking unit244, the coordination unit246, and the rendering unit248are shown as residing on a single device (e.g., the controller110), it should be understood that in other implementations, any combination of the data obtaining unit242, the tracking unit244, the coordination unit246, and the rendering unit248may be located in separate computing devices. Moreover,FIG.2is intended more as functional description of the various features which are present in a particular implementation as opposed to a structural schematic of the implementations described herein. As recognized by those of ordinary skill in the art, items shown separately could be combined and some items could be separated. For example, some functional modules shown separately inFIG.2could be implemented in a single module and the various functions of single functional blocks could be implemented by one or more functional blocks in various implementations. The actual number of modules and the division of particular functions and how features are allocated among them will vary from one implementation to another and, in some implementations, depends in part on the particular combination of hardware, software, and/or firmware chosen for a particular implementation. FIG.3is a block diagram of an example of the head-mounted device (HMD)120in accordance with some implementations. While certain specific features are illustrated, those skilled in the art will appreciate from the present disclosure that various other features have not been illustrated for the sake of brevity, and so as not to obscure more pertinent aspects of the implementations disclosed herein. 
To that end, as a non-limiting example, in some implementations the HMD120includes one or more processing units302(e.g., microprocessors, ASICs, FPGAs, GPUs, CPUs, processing cores, and/or the like), one or more input/output (I/O) devices and sensors306, one or more communication interfaces308(e.g., USB, FIREWIRE, THUNDERBOLT, IEEE 802.3x, IEEE 802.11x, IEEE 802.16x, GSM, CDMA, TDMA, GPS, IR, BLUETOOTH, ZIGBEE, SPI, I2C, and/or the like type interface), one or more programming (e.g., I/O) interfaces310, one or more AR/VR displays312, one or more interior and/or exterior facing image sensor systems314, a memory320, and one or more communication buses304for interconnecting these and various other components. In some implementations, the one or more communication buses304include circuitry that interconnects and controls communications between system components. In some implementations, the one or more I/O devices and sensors306include at least one of an inertial measurement unit (IMU), an accelerometer, a magnetometer, a gyroscope, a thermometer, one or more physiological sensors (e.g., blood pressure monitor, heart rate monitor, blood oxygen sensor, blood glucose sensor, etc.), one or more microphones, one or more speakers, a haptics engine, one or more depth sensors (e.g., a structured light, a time-of-flight, or the like), and/or the like. In some implementations, the one or more AR/VR displays312are configured to present the AR/VR experience to the user. In some implementations, the one or more AR/VR displays312correspond to holographic, digital light processing (DLP), liquid-crystal display (LCD), liquid-crystal on silicon (LCoS), organic light-emitting field-effect transistor (OLET), organic light-emitting diode (OLED), surface-conduction electron-emitter display (SED), field-emission display (FED), quantum-dot light-emitting diode (QD-LED), micro-electromechanical system (MEMS), and/or the like display types. In some implementations, the one or more AR/VR displays312correspond to diffractive, reflective, polarized, holographic, etc. waveguide displays. For example, the HMD120includes a single AR/VR display. In another example, the HMD120includes an AR/VR display for each eye of the user. In some implementations, the one or more AR/VR displays312are capable of presenting AR and VR content. In some implementations, the one or more AR/VR displays312are capable of presenting AR or VR content. In some implementations, the one or more image sensor systems314are configured to obtain image data that corresponds to at least a portion of the face of the user that includes the eyes of the user. For example, the one or more image sensor systems314include one or more RGB cameras (e.g., with a complementary metal-oxide-semiconductor (CMOS) image sensor or a charge-coupled device (CCD) image sensor), monochrome cameras, IR cameras, event-based cameras, and/or the like. In various implementations, the one or more image sensor systems314further include illumination sources that emit light upon the portion of the face of the user, such as a flash or a glint source. The memory320includes high-speed random-access memory, such as DRAM, SRAM, DDR RAM, or other random-access solid-state memory devices. In some implementations, the memory320includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
The memory320optionally includes one or more storage devices remotely located from the one or more processing units302. The memory320comprises a non-transitory computer readable storage medium. In some implementations, the memory320or the non-transitory computer readable storage medium of the memory320stores the following programs, modules and data structures, or a subset thereof including an optional operating system330, an AR/VR presentation module340, and a user data store360. The operating system330includes procedures for handling various basic system services and for performing hardware dependent tasks. In some implementations, the AR/VR presentation module340is configured to present AR/VR content to the user via the one or more AR/VR displays312. To that end, in various implementations, the AR/VR presentation module340includes a data obtaining unit342, an AR/VR presenting unit344, a gaze tracking unit346, and a data transmitting unit348. In some implementations, the data obtaining unit342is configured to obtain data (e.g., presentation data, interaction data, sensor data, location data, etc.) from at least the controller110. To that end, in various implementations, the data obtaining unit342includes instructions and/or logic therefor, and heuristics and metadata therefor. In some implementations, the AR/VR presenting unit344is configured to present AR/VR content via the one or more AR/VR displays312. To that end, in various implementations, the AR/VR presenting unit344includes instructions and/or logic therefor, and heuristics and metadata therefor. In some implementations, the gaze tracking unit346is configured to determine a gaze tracking characteristic of a user based on event messages received from an event camera. To that end, in various implementations, the gaze tracking unit346includes instructions and/or logic therefor, configured neural networks, and heuristics and metadata therefor. In some implementations, the data transmitting unit348is configured to transmit data (e.g., presentation data, location data, etc.) to at least the controller110. To that end, in various implementations, the data transmitting unit348includes instructions and/or logic therefor, and heuristics and metadata therefor. Although the data obtaining unit342, the AR/VR presenting unit344, the gaze tracking unit346, and the data transmitting unit348are shown as residing on a single device (e.g., the HMD120), it should be understood that in other implementations, any combination of the data obtaining unit342, the AR/VR presenting unit344, the gaze tracking unit346, and the data transmitting unit348may be located in separate computing devices. Moreover,FIG.3is intended more as functional description of the various features which are present in a particular implementation as opposed to a structural schematic of the implementations described herein. As recognized by those of ordinary skill in the art, items shown separately could be combined and some items could be separated. For example, some functional modules shown separately inFIG.3could be implemented in a single module and the various functions of single functional blocks could be implemented by one or more functional blocks in various implementations. 
The actual number of modules and the division of particular functions and how features are allocated among them will vary from one implementation to another and, in some implementations, depends in part on the particular combination of hardware, software, and/or firmware chosen for a particular implementation. FIG.4illustrates a block diagram of a head-mounted device400in accordance with some implementations. The head-mounted device400includes a housing401(or enclosure) that houses various components of the head-mounted device400. The housing401includes (or is coupled to) an eye pad405disposed at a proximal (to the user10) end of the housing401. In various implementations, the eye pad405is a plastic or rubber piece that comfortably and snugly keeps the head-mounted device400in the proper position on the face of the user10(e.g., surrounding the eye of the user10). The housing401houses a display410that displays an image, emitting light onto the eye of a user10. In various implementations, the display410emits the light through an eyepiece (not shown) that refracts the light emitted by the display410, making the display appear to the user10to be at a virtual distance farther than the actual distance from the eye to the display410. For the user to be able to focus on the display410, in various implementations, the virtual distance is at least greater than a minimum focal distance of the eye (e.g., 7 cm). Further, in order to provide a better user experience, in various implementations, the virtual distance is greater than 1 meter. AlthoughFIG.4illustrates a head-mounted device400including a display410and an eye pad405, in various implementations, the head-mounted device400does not include a display410or includes an optical see-through display without including an eye pad405. The housing401also houses a gaze tracking system including one or more light sources422, a camera424, and a controller480. The one or more light sources422emit light onto the eye of the user10that reflects as a light pattern (e.g., a circle of glints) that can be detected by the camera424. Based on the light pattern, the controller480can determine a gaze tracking characteristic of the user10. For example, the controller480can determine a gaze direction and/or a blinking state (eyes open or eyes closed) of the user10. As another example, the controller480can determine a pupil center, a pupil size, or a point of regard. Thus, in various implementations, the light is emitted by the one or more light sources422, reflects off the eye of the user10, and is detected by the camera424. In various implementations, the light from the eye of the user10is reflected off a hot mirror or passed through an eyepiece before reaching the camera424. The display410emits light in a first wavelength range and the one or more light sources422emit light in a second wavelength range. Similarly, the camera424detects light in the second wavelength range. In various implementations, the first wavelength range is a visible wavelength range (e.g., a wavelength range within the visible spectrum of approximately 400-700 nm) and the second wavelength range is a near-infrared wavelength range (e.g., a wavelength range within the near-infrared spectrum of approximately 700-1400 nm).
In various implementations, gaze tracking (or, in particular, a determined gaze direction) is used to enable user interaction (e.g., the user10selects an option on the display410by looking at it), provide foveated rendering (e.g., present a higher resolution in an area of the display410the user10is looking at and a lower resolution elsewhere on the display410), or reduce geometric distortion (e.g., in 3D rendering of objects on the display410). In various implementations, the one or more light sources422emit light towards the eye of the user which reflects in the form of a plurality of glints. In various implementations, the one or more light sources422emit light with modulating intensity towards the eye of the user. Accordingly, at a first time, a first light source of the plurality of light sources is projected onto the eye of the user with a first intensity and, at a second time, the first light source of the plurality of light sources is projected onto the eye of the user with a second intensity different than the first intensity (which may be zero, e.g., off). A plurality of glints can result from light emitted towards the eye of a user (and reflected by the cornea) with modulating intensity. For example, at a first time, a first glint and a fifth glint of a plurality of glints are reflected by the eye with a first intensity. At a second time later than the first time, the intensity of the first glint and the fifth glint is modulated to a second intensity (e.g., zero). Also at the second time, a second glint and a sixth glint of the plurality of glints are reflected from the eye of the user with the first intensity. At a third time later than the second time, a third glint and a seventh glint of the plurality of glints are reflected by the eye of the user with the first intensity. At a fourth time later than the third time, a fourth glint and an eighth glint of the plurality of glints are reflected from the eye of the user with the first intensity. At a fifth time later than the fourth time, the intensity of the first glint and the fifth glint is modulated back to the first intensity. Thus, in various implementations, each of the plurality of glints blinks on and off at a modulation frequency (e.g., 600 Hz). However, the phase of the second glint is offset from the phase of the first glint, the phase of the third glint is offset from the phase of the second glint, etc. The glints can be configured in this way to appear to be rotating about the cornea. Accordingly, in various implementations, the intensity of different light sources in the plurality of light sources is modulated in different ways. Thus, when a glint, reflected by the eye and detected by the camera424, is analyzed, the identity of the glint and the corresponding light source (e.g., which light source produced the glint that has been detected) can be determined. In various implementations, the one or more light sources422are differentially modulated in various ways. In various implementations, a first light source of the plurality of light sources is modulated at a first frequency with a first phase offset (e.g., first glint) and a second light source of the plurality of light sources is modulated at the first frequency with a second phase offset (e.g., second glint). In various implementations, the one or more light sources422modulate the intensity of emitted light with different modulation frequencies. 
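As an illustration of the phase-offset modulation described above, the sketch below models eight light sources that share a 600 Hz modulation frequency but fall into four phase groups of opposite pairs, so exactly one pair of glints is lit at any instant and the lit pair steps around the cornea. The 25% duty cycle and the pairing scheme are assumptions chosen to match the example sequence of glints, not requirements stated in this disclosure.

def source_is_on(t_s, source_index, num_sources=8, freq_hz=600.0):
    """Square-wave modulation with a 25% duty cycle and four phase groups:
    sources i and i + num_sources//2 share a phase, so one opposite pair of
    glints is lit at any instant and the lit pair rotates about the cornea."""
    period = 1.0 / freq_hz
    num_groups = num_sources // 2            # pairs (i, i + num_sources//2)
    group = source_index % num_groups        # phase group of this source
    cycle_pos = (t_s % period) / period      # position within one cycle, in [0, 1)
    active_group = int(cycle_pos * num_groups)
    return group == active_group

# Example: shortly after t = 0, the first and fifth sources (indices 0 and 4) are lit.
lit_now = [i for i in range(8) if source_is_on(0.0001, i)]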
For example, in various implementations, a first light source of the plurality of light sources is modulated at a first frequency (e.g., 600 Hz) and a second light source of the plurality of light sources is modulated at a second frequency (e.g., 500 Hz). In various implementations, the one or more light sources422modulate the intensity of emitted light according to different orthogonal codes, such as those which may be used in CDMA (code-division multiple access) communications. For example, the rows or columns of a Walsh matrix can be used as the orthogonal codes. Accordingly, in various implementations, a first light source of the plurality of light sources is modulated according to a first orthogonal code and a second light source of the plurality of light sources is modulated according to a second orthogonal code. In various implementations, the one or more light sources422modulate the intensity of emitted light between a high intensity value and a low intensity value. Thus, at various times, the intensity of the light emitted by the light source is either the high intensity value or the low intensity value. In various implementations, the low intensity value is zero. Thus, in various implementations, the one or more light sources422modulate the intensity of emitted light between an on state (at the high intensity value) and an off state (at the low intensity value). In various implementations, the number of light sources of the plurality of light sources in the on state is constant. In various implementations, the one or more light sources422modulate the intensity of emitted light within an intensity range (e.g., between 10% maximum intensity and 40% maximum intensity). Thus, at various times, the intensity of the light source is either a low intensity value, a high intensity value, or some value in between. In various implementations, the one or more light sources422are differentially modulated such that a first light source of the plurality of light sources is modulated within a first intensity range and a second light source of the plurality of light sources is modulated within a second intensity range different than the first intensity range. In various implementations, the one or more light sources422modulate the intensity of emitted light according to a gaze direction. For example, if a user is gazing in a direction in which a particular light source would be reflected by the pupil, the one or more light sources422change the intensity of the emitted light based on this knowledge. In various implementations, the one or more light sources422decrease the intensity of the emitted light to decrease the amount of near-infrared light entering the pupil as a safety precaution. In various implementations, the one or more light sources422modulate the intensity of emitted light according to user biometrics. For example, if the user is blinking more than normal, has an elevated heart rate, or is registered as a child, the one or more light sources422decrease the intensity of the emitted light (or the total intensity of all light emitted by the plurality of light sources) to reduce stress upon the eye. As another example, the one or more light sources422modulate the intensity of emitted light based on an eye color of the user, as spectral reflectivity may differ for blue eyes as compared to brown eyes.
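The orthogonal-code modulation mentioned above can be illustrated with Walsh codes generated by the Sylvester construction; the helper names below and the choice of code length are illustrative assumptions, and each +1/-1 chip would correspond to the high or low intensity state of a light source.

import numpy as np

def walsh_matrix(order: int) -> np.ndarray:
    """Sylvester construction of a Hadamard/Walsh matrix whose rows are
    mutually orthogonal; order must be a power of two."""
    h = np.array([[1]])
    while h.shape[0] < order:
        h = np.block([[h, h], [h, -h]])
    return h

def source_codes(num_sources: int, code_length: int = 8) -> np.ndarray:
    """Assign each light source one Walsh row as an on/off chip sequence
    (+1 -> high intensity, -1 -> low intensity). Row 0 (all ones) is skipped
    so that every assigned code actually modulates."""
    w = walsh_matrix(code_length)
    return w[1:num_sources + 1]

codes = source_codes(num_sources=4)
# Orthogonality check: distinct rows have zero dot product.
assert np.allclose(codes @ codes.T, np.eye(4) * codes.shape[1])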
In various implementations, the one or more light sources422modulate the intensity of emitted light according to a presented user interface (e.g., what is displayed on the display410). For example, if the display410is unusually bright (e.g., a video of an explosion is being displayed), the one or more light sources422increase the intensity of the emitted light to compensate for potential interference from the display410. In various implementations, the camera424is a frame/shutter-based camera that, at a particular point in time or multiple points in time at a frame rate, generates an image of the eye of the user10. Each image includes a matrix of pixel values corresponding to pixels of the image which correspond to locations of a matrix of light sensors of the camera. In various implementations, the camera424is an event camera comprising a plurality of light sensors (e.g., a matrix of light sensors) at a plurality of respective locations that, in response to a particular light sensor detecting a change in intensity of light, generates an event message indicating a particular location of the particular light sensor. FIG.5illustrates a functional block diagram of an event camera500in accordance with some implementations. The event camera500includes a plurality of light sensors515respectively coupled to a message generator532. In various implementations, the plurality of light sensors515are arranged in a matrix510of rows and columns and, thus, each of the plurality of light sensors515is associated with a row value and a column value. Each of the plurality of light sensors515includes a light sensor520illustrated in detail inFIG.5. The light sensor520includes a photodiode521in series with a resistor523between a source voltage and a ground voltage. The voltage across the photodiode521is proportional to the intensity of light impinging on the light sensor520. The light sensor520includes a first capacitor525in parallel with the photodiode521. Accordingly, the voltage across the first capacitor525is the same as the voltage across the photodiode521(e.g., proportional to the intensity of light detected by the light sensor520). The light sensor520includes a switch529coupled between the first capacitor525and a second capacitor527. The second capacitor527is coupled between the switch and the ground voltage. Accordingly, when the switch529is closed, the voltage across the second capacitor527is the same as the voltage across the first capacitor525(e.g., proportional to the intensity of light detected by the light sensor520). When the switch529is open, the voltage across the second capacitor527is fixed at the voltage across the second capacitor527when the switch529was last closed. The voltage across the first capacitor525and the voltage across the second capacitor527are fed to a comparator531. When the difference552between the voltage across the first capacitor525and the voltage across the second capacitor527is less than a threshold amount, the comparator531outputs a ‘0’ voltage. When the voltage across the first capacitor525is higher than the voltage across the second capacitor527by at least the threshold amount, the comparator531outputs a ‘1’ voltage. When the voltage across the first capacitor525is less than the voltage across the second capacitor527by at least the threshold amount, the comparator531outputs a ‘−1’ voltage. When the comparator531outputs a ‘1’ voltage or a ‘−1’ voltage, the switch529is closed and the message generator532receives this digital signal and generates a pixel event message. 
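The per-pixel event generation just described can be summarized with a small behavioral model: one value follows the photodiode (the first capacitor), a second value holds the level latched at the last event (the second capacitor), and an event fires whenever the two differ by at least a threshold. The class below is a simplified sketch of that behavior under assumed voltage units, not a circuit-accurate simulation, and its name is illustrative.

class LightSensorSim:
    """Behavioral sketch of the per-pixel circuit described above."""
    def __init__(self, initial_voltage=0.0, threshold=0.1):
        self.v_first = initial_voltage    # follows the photodiode / light intensity
        self.v_second = initial_voltage   # latched when the switch last closed
        self.threshold = threshold

    def observe(self, new_voltage):
        """Update with a new photodiode voltage; return +1, -1, or 0."""
        self.v_first = new_voltage
        diff = self.v_first - self.v_second
        if diff >= self.threshold:
            self.v_second = self.v_first  # switch closes to re-latch, then reopens
            return 1
        if diff <= -self.threshold:
            self.v_second = self.v_first
            return -1
        return 0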
As an example, at a first time, the intensity of light impinging on the light sensor520is a first light value. Accordingly, the voltage across the photodiode521is a first voltage value. Likewise, the voltage across the first capacitor525is the first voltage value. For this example, the voltage across the second capacitor527is also the first voltage value. Accordingly, the comparator531outputs a ‘0’ voltage, the switch529remains open, and the message generator532does nothing. At a second time, the intensity of light impinging on the light sensor520increases to a second light value. Accordingly, the voltage across the photodiode521is a second voltage value (higher than the first voltage value). Likewise, the voltage across the first capacitor525is the second voltage value. Because the switch529is open, the voltage across the second capacitor527is still the first voltage value. Assuming that the second voltage value is at least the threshold value greater than the first voltage value, the comparator531outputs a ‘1’ voltage, closing the switch529, and the message generator532generates an event message based on the received digital signal. With the switch529closed by the ‘1’ voltage from the comparator531, the voltage across the second capacitor527is changed from the first voltage value to the second voltage value. Thus, the comparator531outputs a ‘0’ voltage, opening the switch529. At a third time, the intensity of light impinging on the light sensor520increases (again) to a third light value. Accordingly, the voltage across the photodiode521is a third voltage value (higher than the second voltage value). Likewise, the voltage across the first capacitor525is the third voltage value. Because the switch529is open, the voltage across the second capacitor527is still the second voltage value. Assuming that the third voltage value is at least the threshold value greater than the second voltage value, the comparator531outputs a ‘1’ voltage, closing the switch529, and the message generator532generates an event message based on the received digital signal. With the switch529closed by the ‘1’ voltage from the comparator531, the voltage across the second capacitor527is changed from the second voltage value to the third voltage value. Thus, the comparator531outputs a ‘0’ voltage, opening the switch529. At a fourth time, the intensity of light impinging on the light sensor520decreases back to the second light value. Accordingly, the voltage across the photodiode521is the second voltage value (less than the third voltage value). Likewise, the voltage across the first capacitor525is the second voltage value. Because the switch529is open, the voltage across the second capacitor527is still the third voltage value. Thus, the comparator531outputs a ‘−1’ voltage, closing the switch529, and the message generator532generates an event message based on the received digital signal. With the switch529closed by the ‘−1’ voltage from the comparator531, the voltage across the second capacitor527is changed from the third voltage value to the second voltage value. Thus, the comparator531outputs a ‘0’ voltage, opening the switch529. The message generator532receives, at various times, digital signals from each of the plurality of light sensors510indicating an increase in the intensity of light (‘1’ voltage) or a decrease in the intensity of light (‘−1’ voltage). In response to receiving a digital signal from a particular light sensor of the plurality of light sensors510, the message generator532generates a pixel event message.
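For reference, the four-step example above can be replayed against the behavioral sketch from the previous listing; the voltage values are arbitrary and the listing's LightSensorSim class is assumed to be in scope.

sensor = LightSensorSim(initial_voltage=1.0, threshold=0.1)

# First time: intensity unchanged -> no event.
assert sensor.observe(1.0) == 0
# Second time: intensity rises past the threshold -> positive event.
assert sensor.observe(1.2) == 1
# Third time: intensity rises again -> another positive event.
assert sensor.observe(1.4) == 1
# Fourth time: intensity falls back -> negative event.
assert sensor.observe(1.2) == -1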
In various implementations, each pixel event message indicates, in a location field, the particular location of the particular light sensor. In various implementations, the event message indicates the particular location with a pixel coordinate, such as a row value (e.g., in a row field) and a column value (e.g., in a column field). In various implementations, the event message further indicates, in a polarity field, the polarity of the change in intensity of light. For example, the event message may include a ‘1’ in the polarity field to indicate an increase in the intensity of light and a ‘0’ in the polarity field to indicate a decrease in the intensity of light. In various implementations, the event message further indicates, in a time field, a time the change in intensity of light was detected (e.g., a time the digital signal was received). In various implementations, the event message indicates, in an absolute intensity field (not shown), as an alternative to or in addition to the polarity, a value indicative of the intensity of detected light. FIG.6is a flowchart representation of a method600of event camera-based gaze tracking in accordance with some implementations. In some implementations, the method600is performed by a device (e.g., controller110ofFIGS.1and2), such as a mobile device, desktop, laptop, or server device. The method600can be performed on a device (e.g., HMD120ofFIGS.1and3) that has a screen for displaying 2D images and/or a screen for viewing stereoscopic images such as a virtual reality (VR) display (e.g., a head-mounted display (HMD)) or an augmented reality (AR) display. In some implementations, the method600is performed by processing logic, including hardware, firmware, software, or a combination thereof. In some implementations, the method600is performed by a processor executing code stored in a non-transitory computer-readable medium (e.g., a memory). At block610, the method600receives a stream of pixel events output by an event camera. The pixel event data can be in various forms. The stream of pixel events can be received as a series of messages identifying pixel events at one or more pixels of the event camera. In various implementations, pixel event messages are received that each include a location field for the particular location of a particular light sensor, a polarity field, a time field, and/or an absolute intensity field. At block620, the method600derives one or more images from the stream of pixel events. The one or more images are derived to provide a combined input for a neural network. In alternative implementations, pixel event data is fed directly into a neural network as individual input items (e.g., one input per event), batches of inputs (e.g., 10 events per input), or otherwise adjusted into an appropriate form for input into the neural network. In the implementations in which input images are derived, the information in an input image represents event camera data at a corresponding location in the event camera grid (e.g., grid510ofFIG.5). Thus, a value for a pixel in an upper right corner of the input image corresponds to event camera data for an upper right event camera sensor in the event camera's pixel grid. Pixel event data for multiple events is accumulated and used to produce an image that compiles event data for multiple pixels. In one example, an input image is created that represents pixel events occurring within a particular time window, e.g., over the last 10 ms.
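Block620can be illustrated with a minimal routine that turns the events of the last 10 ms into a single image whose pixels hold the polarity of their most recent event. It assumes the PixelEvent representation from the earlier sketch (any object with row, col, polarity, and timestamp_us attributes would do) and a hypothetical 400×400 sensor grid; the function name is illustrative.

import numpy as np

def events_to_window_image(events, height=400, width=400, window_us=10_000, now_us=None):
    """Build one input image from the pixel events of the last window_us
    microseconds: each pixel holds the polarity of its most recent event in
    the window, and 0 if it had none."""
    if now_us is None:
        now_us = max(ev.timestamp_us for ev in events)
    image = np.zeros((height, width), dtype=np.int8)
    latest = np.full((height, width), -1, dtype=np.int64)
    for ev in events:
        in_window = now_us - ev.timestamp_us <= window_us
        if in_window and ev.timestamp_us > latest[ev.row, ev.col]:
            latest[ev.row, ev.col] = ev.timestamp_us
            image[ev.row, ev.col] = ev.polarity
    return image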
In another example, an input image is created that represents pixel events occurring up to a particular point in time, e.g., identifying a most recent pixel event to occur at each pixel. In another example, pixel events are accumulated over time to track or estimate an absolute intensity value for each pixel. The location field associated with pixel events included in the stream of pixel events can be used to identify the location of corresponding event camera pixel. For example, pixel event data in the stream of pixel events may identify a pixel event occurring in the upper right corner pixel and this information can be used to assign a value in the input image's corresponding upper right pixel. Examples of deriving input images such as intensity reconstruction images, timestamp images, and glint images are described herein with respect toFIG.7. At block630, the method600generates a gaze characteristic using a neural network. The one or more input images that are derived from the event camera stream of pixel events are used as input to the neural network. The neural network is trained to determine the gaze characteristic using a training dataset of training images that identify the gaze characteristic. For example, a training set can be created using shutter-based images or event camera-derived images of the eyes of a number of subjects (e.g., 25 subjects, 50 subjects, 100 subjects, 1,000 subjects, 10,000 subjects, etc.). The training data set can include a number of images of the eyes of each subject (e.g., 25 images, 50 images, 100 images, 1,000 images, 10,000 images, etc.). The training data can include ground truth gaze characteristic identifications, e.g., location or direction information identifying the locations of pupil centers, the contour shapes of pupils, pupil dilation, glint locations, gaze directions, etc. For pupil contour shape, a neural network can be trained with data indicating a set of points around the perimeter of the pupil, e.g., five points that are sampled in a repeatable fashion around the pupil and fit with an ellipse. The training data can additionally or alternatively be labelled with emotional characteristics (e.g., “interested” as indicated by relatively larger pupil size and “disinterested” as indicated by relatively smaller pupil size). The ground truth data can be manually identified, semi-automatically determined, or automatically determined. The neural network can be configured to use event camera data, shutter-based camera data, or a combination of the two types of data. The neural network can be configured to be indifferent as to whether the data is coming from a shutter-based camera or an event camera. At block640, the method600tracks the gaze based on the gaze characteristics. In some implementations, the gaze is tracked based on the pupil center location, the glint locations, or a combination of these features. In some implementations, the gaze is tracked by tracking pupil center and glint locations to determine and update a geometric model of the eye that is then used to reconstruct gaze direction. In some implementations, gaze is tracked by comparing current gaze characteristics with prior gaze characteristics. For example, gaze can be tracked using an algorithm to compare the pupil center position as it changes over time. In some implementations, the gaze is tracked based on additional information. 
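As a minimal sketch of block640, the class below tracks the pupil center by comparing each new neural-network estimate with the previous smoothed state. Exponential smoothing is an assumed stand-in here, since the disclosure leaves the particular comparison algorithm open, and the class name is illustrative.

class PupilCenterTracker:
    """Tracks the pupil center over time by blending each new estimate with
    the previous smoothed state (simple exponential smoothing)."""
    def __init__(self, alpha=0.5):
        self.alpha = alpha      # weight given to the newest estimate
        self.center = None      # (x, y) smoothed pupil center

    def update(self, new_center):
        if self.center is None:
            self.center = new_center
        else:
            x = self.alpha * new_center[0] + (1 - self.alpha) * self.center[0]
            y = self.alpha * new_center[1] + (1 - self.alpha) * self.center[1]
            self.center = (x, y)
        return self.center

tracker = PupilCenterTracker(alpha=0.5)
tracker.update((10.0, 10.0))
smoothed = tracker.update((12.0, 10.0))   # -> (11.0, 10.0)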
For example, a correspondence between a selection of a UI item displayed on a screen of an HMD and a pupil center location can be determined when the user selects the UI item. This assumes that the user is looking at the UI item as he or she selects it. Based on the location of the UI element on the display, the location of the display relative to the user, and the current pupil location, the gaze direction associated with the direction from the eye to the UI element can be determined. Such information can be used to adjust or calibrate gaze tracking performed based on the event camera data. In some implementations, tracking the gaze of the eye involves updating the gaze characteristic in real time as subsequent pixel events in the event stream are received. The pixel events are used to derive additional images and the additional images are used as input to the neural network to generate updated gaze characteristics. The gaze characteristic can be used for numerous purposes. In one example, the gaze characteristic that is determined or updated is used to identify an item displayed on a display, e.g., to identify what button, image, text, or other user interface item a user is looking at. In another example, the gaze characteristic that is determined or updated is used to display a movement of a graphical indicator (e.g., a cursor or other user controlled icon) on a display. In another example, the gaze characteristic that is determined or updated is used to select an item (e.g., via a cursor selection command) displayed on a display. For example, a particular gaze movement pattern can be recognized and interpreted as a particular command. Event camera-based gaze tracking techniques, such as the method600illustrated inFIG.6, provide numerous advantages over techniques that rely solely on shutter-based camera data. Event cameras may capture data at a very high sample rate and thus allow the creation of input images at a faster rate than using a shutter-based camera. The input images (e.g., the intensity reconstruction images) that are created can emulate data from an extremely fast shutter-based camera without the high energy and data requirements of such a camera. The event camera produces relatively sparse data since it does not collect/send an entire frame for every event. However, the sparse data is accumulated over time to provide dense input images that are used as inputs in the gaze characteristic determinations. The result is faster gaze tracking enabled using less data and computing resources. FIG.7illustrates a functional block diagram illustrating an event camera-based gaze tracking process700in accordance with some implementations. The gaze tracking process700outputs a gaze direction of a user based on event messages received from the event camera710. The event camera710comprises a plurality of light sensors at a plurality of respective locations. In response to a particular light sensor detecting a change in intensity of light, the event camera710generates an event message indicating a particular location of the particular light sensor. As described above with respect toFIG.6, in various implementations, the particular location is indicated by a pixel coordinate. In various implementations, the event message further indicates a polarity of the change in intensity of light. In various implementations, the event message further indicates a time at which the change in intensity of light was detected.
In various implementations, the event message further indicates a value indicative of the intensity of detected light. The event messages from the event camera710are received by a separator720. The separator720separates the event messages into target-frequency event messages (associated with a frequency band centered around a frequency of modulation of one or more light sources) and off-target-frequency event messages (associated with other frequencies). The off-target-frequency event messages are pupil events730and are fed to an intensity reconstruction image generator750and a timestamp image generator760. The target-frequency event messages are glint events740and are fed to a glint image generator770. In various implementations, the separator720determines that an event message is a target-frequency event message (or an off-target frequency event message) based on a timestamp, in a time field, indicating a time at which the change in intensity of light was detected. For example, in various implementations, the separator720determines that an event message is a target-frequency event message if it is one of a set including a number of event messages within a set range indicating a particular location within a set amount of time. Otherwise, the separator720determines that the event message is an off-target-frequency event message. In various implementations, the set range and/or the set amount of time are proportional to a modulation frequency of modulated light emitted towards the eye of the user. As another example, in various implementations, the separator720determines that an event message is a target-frequency event message if the time between successive events with similar or opposite polarity is within a set range of times. The intensity reconstruction image generator750accumulates pupil events730over time to reconstruct/estimate absolute intensity values for each pixel. As additional pupil events730are accumulated, the intensity reconstruction image generator750changes the corresponding values in the reconstruction image. In this way, it generates and maintains an updated image of values for all pixels of an image even though only some of the pixels may have received events recently. The intensity reconstruction image generator750can adjust pixel values based on additional information, e.g., information about nearby pixels, to improve the clarity, smoothness, or other aspect of the reconstructed image. In various implementations, the intensity reconstruction image includes an image having a plurality of pixel values at a respective plurality of pixels corresponding to the respective locations of the light sensors. Upon receiving an event message indicating a particular location and a positive polarity (indicating that the intensity of light has increased), an amount (e.g., 1) is added to the pixel value at the pixel corresponding to the particular location. Similarly, upon receiving an event message indicating a particular location and a negative polarity (indicating that the intensity of light has decreased), the amount is subtracted from the pixel value at the pixel corresponding to the particular location. In various implementations, the intensity reconstruction image is filtered, e.g., blurred. The timestamp image generator760encodes information about the timing of events. In one example, the timestamp image generator760creates an image with values that represent a length of time since a respective pixel event was received for each pixel.
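The add-or-subtract update performed by the intensity reconstruction image generator750, together with the optional blur, can be sketched as follows. The step size, the box filter, and the function names are illustrative assumptions, and the events are assumed to follow the event representation used in the earlier sketches.

import numpy as np

def update_intensity_reconstruction(image, events, step=1.0):
    """Update the running intensity-reconstruction image in place: add `step`
    at a pixel for each positive-polarity event and subtract it for each
    negative-polarity event. `image` is a 2-D float array."""
    for ev in events:
        image[ev.row, ev.col] += step if ev.polarity > 0 else -step
    return image

def box_blur(image, radius=1):
    """Crude separable box filter standing in for the optional blur step."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)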
In such an image, pixels having more recent events can have higher intensity values than pixels having less recent events. In one implementation, the timestamp image is a positive timestamp image having a plurality of pixel values indicating when the corresponding light sensors triggered the last corresponding events with positive polarity. In one implementation, the timestamp image is a negative timestamp image having a plurality of pixel values indicating when the corresponding light sensor triggered the last corresponding events with negative polarity. The glint image generator770determines events that are associated with particular glints. In one example, the glint image generator770identifies a glint based on the associated frequency. In some implementations, the glint image generator770accumulates glint events740for a period of time and produces a glint image identifying the locations of all of the glint events received within that time period, e.g., within the last 10 ms, etc. In some implementations, the glint image generator770modulates the intensity of each pixel depending on how well the glint frequency or time since last event at that pixel matches an expected value (derived from the target frequency), e.g., by evaluating a Gaussian function with a given standard deviation centered at the expected value. The intensity reconstruction image generator750, the timestamp image generator760, and the glint image generator770provide images that are input to the neural network780, which is configured to generate the gaze characteristic. In various implementations, the neural network780involves a convolutional neural network, a recurrent neural network, and/or a long short-term memory (LSTM) network. FIG.8illustrates a functional block diagram illustrating a system800using a convolutional neural network830for gaze tracking in accordance with some implementations. The system800uses input image(s)810such as an intensity reconstruction image, a timestamp image, and/or a glint image as discussed above with respect toFIG.7. The input image(s)810are resized to resized input image(s)820. Resizing the input image(s)810can include down-sampling to reduce the resolution of the images and/or cropping portions of the images. The resized input image(s)820are input to the convolutional neural network830. The convolutional neural network830includes one or more convolutional layer(s)840and one or more fully connected layer(s)850and produces outputs860. The convolutional layer(s)840are configured to apply a convolution operation to their respective inputs and pass their results to the next layer. Each convolution neuron in each of the convolution layer(s)840can be configured to process data for a receptive field, e.g., a portion of the resized input image(s)820. The fully connected layer(s)850connect every neuron of one layer to every neuron of another layer.
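A minimal PyTorch-style sketch of a network with the general shape described for the convolutional neural network830(convolutional layers feeding fully connected layers, with outputs for a pupil center and several glints) is given below. The channel counts, pooling, glint count, and 50×50 input size are illustrative assumptions and are not the exact architecture of the figures.

import torch
from torch import nn

class GazeCNN(nn.Module):
    """Convolutional layers feeding fully connected layers; the output holds
    a pupil center (x, y) plus, for each of num_glints glints, an (x, y)
    location and a visibility logit. Layer sizes are illustrative only."""
    def __init__(self, num_glints=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.AvgPool2d(2),                        # 2x2 average pool
            nn.Conv2d(1, 16, kernel_size=5, padding=2),
            nn.BatchNorm2d(16),
            nn.ReLU(),
            nn.AvgPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.BatchNorm2d(32),
            nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 12 * 12, 128),
            nn.ReLU(),
            nn.Linear(128, 2 + 3 * num_glints),     # pupil x,y + glint x,y,visibility
        )

    def forward(self, x):                           # x: (batch, 1, 50, 50)
        return self.head(self.features(x))

outputs = GazeCNN()(torch.zeros(1, 1, 50, 50))      # shape (1, 26) for 8 glints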
The convolutional neural network920includes three 5×5 convolutional layers940a,940b,940cand two fully connected layers950a,950b. The convolutional neural network920is configured to produce output960identifying a pupil center x coordinate and y coordinate. The output960also includes, for each glint, x coordinate, y coordinate, and visible/invisible indications. The convolutional neural network920includes a regression for pupil performance (e.g., mean-squared error (MSE)) on x coordinate, y coordinate, and radius and a regression for glint performance (e.g., MSE) on x coordinate and y coordinate. The convolutional neural network920also includes a classification for glint visibility/invisibility (e.g., glint softmax cross entropy on visible/invisible logits). The data used by the convolutional neural network920(during training) can be augmented with random translation, rotation, and/or scale. FIG.10illustrates a functional block diagram illustrating a convolutional layer940a-cof the convolutional neural network ofFIG.9. The convolutional layer940a-cincludes a 2×2 average pool1010, a convolution W×W×N1020, a batch normalization layer1030, and a rectified linear unit (ReLu)1040. FIG.11illustrates a functional block diagram illustrating a system1100using an initialization network1115and a refinement network1145for gaze tracking in accordance with some implementations. The use of two sub-networks or stages can enhance the efficiency of the overall neural network1110. The initialization network1115processes a low resolution image of the input image1105and thus does not need to have as many convolutions as it otherwise would. The pupil location that is output from the initialization network1115is used to make a crop of the original input image1105that is input to the refinement network1145. The refinement network1145thus also does not need to use the entire input image1105and can accordingly include fewer convolutions than it otherwise would. The refinement network1145, however, refines the location of the pupil to a more accurate location. Together, the initialization network1105and refinement network1145produce a pupil location result more efficiently and more accurately than if a single neural network were used. The initialization network1115receives input image(s)1105, which are 400×400 pixels in this example. The input image(s)1105are resized to resized input image(s)1120(which are 50×50 pixels in this example). The initialization network1115includes five 3×3 convolutional layers1130a,1130b,1130c,1130d,1130eand a fully connected layer1135at output. The initialization network1115is configured to produce output1140identifying a pupil center x coordinate and y coordinate. The output1140also includes, for each glint, x coordinate, y coordinate, and visible/invisible indications. The pupil x, y1142are input to the refinement network1145along with the input image(s)1105. The pupil x, y1142is used to make a 96×96 pixel crop around pupil location at full resolution, i.e., with no down-sampling, to produce cropped input image(s)1150. The refinement network1145includes five 3×3 convolutional layers1155a,1155b,1155c,1155d,1155e, a concat layer1160, and a fully connected layer1165at output. The concat layer1160concatenates features after the convolutions with features from the initialization network1115. In some implementations, the features from the initialization network1115encode global information (e.g. 
eye geometry/layout/eyelid position, etc.) and the features from the refinement network1145encode local state only, i.e., what can be derived from the cropped image. By concatenating the features from the initialization network1115and refinement network1145, the final fully-connected layer(s) in the refinement network1145can combine both global and local information and thereby generate a better estimate of the error than would be possible with local information only. The refinement network1145is configured to produce output1165identifying estimated error1175that can be used to determine a pupil center x coordinate and y coordinate. The refinement network1145thus acts as an error estimator of the initialization network1115and produces estimated error1175. This estimated error1175is used to adjust the initial pupil estimate1180from the initialization network1115to produce a refined pupil estimate1185. For example, the initial pupil estimate1180may identify a pupil center at x, y: 10, 10 and the estimated error1175from the refinement network1145may indicate that the x of the pupil center should be 2 greater, resulting in a refined pupil estimate of x, y: 12, 10. In some implementations, the neural network1110is trained to avoid overfitting by introducing random noise. In the training phase, random noise, e.g., normally distributed noise with zero mean and a small sigma, is added. In some implementations, the random noise is added to the initial pupil location estimate. Without such random noise, the refinement network1145could learn to be too optimistic. To avoid this, the initialization network1115output is artificially made worse during training by adding the random noise to simulate the situation where the initialization network1115is facing data that it has not seen before. In some implementations, a stateful machine learning/neural network architecture is used. For example, the neural network that is used could be an LSTM or other recurrent network. In such implementations, the event data that is used as input may be provided as a labeled stream of sequential events. In some implementations, a recurrent neural network is configured to remember prior events and learn dynamic motions of the eye based on the history of events. Such a stateful architecture could learn which eye motions are natural (e.g., eye flutter, blinks, etc.) and suppress these fluctuations. In some implementations, the gaze tracking is performed on two eyes of the same individual concurrently. A single neural network receives input images of event camera data for both of the eyes and determines gaze characteristics for the eyes collectively. In some implementations, one or more event cameras capture one or more images of a portion of the face that includes both eyes. In implementations in which images of both eyes are captured or derived, the network could determine or produce output useful in determining a convergence point of gaze directions from the two eyes. The network could additionally or alternatively be trained to account for extraordinary circumstances such as optical axes that do not align. In some implementations, an ensemble of multiple networks is used. By combining the results of multiple neural networks (convolutional, recurrent, and/or other types), variance in the output can be reduced. Each neural network can be trained with different hyper-parameters (learning rate, batch size, architecture, etc.) and/or different datasets (for example, using random sub-sampling).
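The two-stage initialization and refinement flow can be outlined as follows, with the trained networks passed in as placeholder callables. The nearest-neighbour down-sampling, the clamping of the crop at image borders, and the (dx, dy) error format are assumptions made for illustration, and the function name is hypothetical.

import numpy as np

def coarse_to_fine_pupil(image, init_net, refine_net, crop=96, coarse=50):
    """Two-stage estimate: init_net sees a down-sampled copy of the full image
    and returns an initial pupil center in full-resolution coordinates;
    refine_net sees a full-resolution crop around that estimate and returns an
    (dx, dy) error estimate that is added back to the initial estimate."""
    h, w = image.shape
    # Nearest-neighbour down-sampling to the coarse resolution (e.g., 400 -> 50).
    ys = np.arange(coarse) * h // coarse
    xs = np.arange(coarse) * w // coarse
    small = image[np.ix_(ys, xs)]
    x0, y0 = init_net(small)                  # initial pupil estimate

    # Full-resolution crop centered on the initial estimate, clamped to the image.
    half = crop // 2
    cx = int(np.clip(x0, half, w - half))
    cy = int(np.clip(y0, half, h - half))
    patch = image[cy - half:cy + half, cx - half:cx + half]

    dx, dy = refine_net(patch)                # estimated error of the initial guess
    return x0 + dx, y0 + dy

# Dummy stand-ins for trained networks, just to exercise the plumbing.
estimate = coarse_to_fine_pupil(np.zeros((400, 400)),
                                init_net=lambda img: (200.0, 200.0),
                                refine_net=lambda patch: (2.0, -1.0))
# -> (202.0, 199.0)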
In some implementations, post-processing of the gaze characteristic is employed. Noise in the tracked points can be reduced using filtering and prediction methods, for example, using a Kalman filter. These methods can also be used for interpolation/extrapolation of the gaze characteristic over time. For example, the methods can be used if the state of the gaze characteristic is required at a timestamp different from the recorded states. Numerous specific details are set forth herein to provide a thorough understanding of the claimed subject matter. However, those skilled in the art will understand that the claimed subject matter may be practiced without these specific details. In other instances, methods, apparatuses, or systems that would be known by one of ordinary skill have not been described in detail so as not to obscure claimed subject matter. Unless specifically stated otherwise, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” and “identifying” or the like refer to actions or processes of a computing device, such as one or more computers or a similar electronic computing device or devices, that manipulate or transform data represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the computing platform. The system or systems discussed herein are not limited to any particular hardware architecture or configuration. A computing device can include any suitable arrangement of components that provides a result conditioned on one or more inputs. Suitable computing devices include multipurpose microprocessor-based computer systems accessing stored software that programs or configures the computing system from a general purpose computing apparatus to a specialized computing apparatus implementing one or more implementations of the present subject matter. Any suitable programming, scripting, or other type of language or combinations of languages may be used to implement the teachings contained herein in software to be used in programming or configuring a computing device. Implementations of the methods disclosed herein may be performed in the operation of such computing devices. The order of the blocks presented in the examples above can be varied; for example, blocks can be re-ordered, combined, and/or broken into sub-blocks. Certain blocks or processes can be performed in parallel. The use of “adapted to” or “configured to” herein is meant as open and inclusive language that does not foreclose devices adapted to or configured to perform additional tasks or steps. Additionally, the use of “based on” is meant to be open and inclusive, in that a process, step, calculation, or other action “based on” one or more recited conditions or values may, in practice, be based on additional conditions or values beyond those recited. Headings, lists, and numbering included herein are for ease of explanation only and are not meant to be limiting. It will also be understood that, although the terms “first,” “second,” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
For example, a first node could be termed a second node, and, similarly, a second node could be termed a first node, without changing the meaning of the description, so long as all occurrences of the “first node” are renamed consistently and all occurrences of the “second node” are renamed consistently. The first node and the second node are both nodes, but they are not the same node. The terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the claims. As used in the description of the implementations and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting,” that a stated condition precedent is true, depending on the context. Similarly, the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context. The foregoing description and summary of the invention are to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined only from the detailed description of illustrative implementations but according to the full breadth permitted by patent laws. It is to be understood that the implementations shown and described herein are only illustrative of the principles of the present invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention. | 68,056 |
11861874 | DESCRIPTION OF THE PREFERRED EMBODIMENTS FIG.1is a diagram illustrating a schematic configuration of a digital camera100that is one embodiment of an imaging apparatus according to the present invention. The digital camera100illustrated inFIG.1comprises a lens device40that includes an imaging lens1, a stop2, a lens control unit4, a lens drive unit8, and a stop drive unit9; and a main body unit100A. The main body unit100A comprises an imaging unit50, a system control unit11, an operation unit14, a display device22, a memory16including a random access memory (RAM), a read only memory (ROM), and the like, a memory control unit15that controls data recording in the memory16and data readout from the memory16, a digital signal processing unit17, and an external memory control unit20that controls data recording on a recording medium21and data readout from the recording medium21. The lens device40may be attachable and detachable with respect to the main body unit100A or may be integrated with the main body unit100A. The imaging lens1includes a focus lens or the like that can be moved in an optical axis direction. This focus lens is a lens for adjusting a focal point of an imaging optical system including the imaging lens1and the stop2and is composed of a single lens or a plurality of lenses. Moving the focus lens in the optical axis direction changes a position of a principal point of the focus lens along the optical axis direction, and a focal position on a subject side is changed. A liquid lens that can change the position of the principal point in the optical axis direction under electric control may be used as the focus lens. The lens control unit4of the lens device40is configured to be capable of communicating with the system control unit11of the main body unit100A in a wired or wireless manner. In accordance with an instruction from the system control unit11, the lens control unit4changes the position of the principal point of the focus lens by controlling the focus lens included in the imaging lens1through the lens drive unit8or controls an F number of the stop2through the stop drive unit9. The imaging unit50comprises an imaging element5that images a subject through the imaging optical system including the imaging lens1and the stop2, and an imaging element drive unit10that drives the imaging element5. The imaging element5includes an imaging surface60(refer toFIG.2) on which a plurality of pixels61are two-dimensionally arranged, converts a subject image formed on the imaging surface60by the imaging optical system into pixel signals by the plurality of pixels61, and outputs the pixel signals. A complementary metal-oxide semiconductor (CMOS) image sensor is suitably used as the imaging element5. Hereinafter, the imaging element5will be described as a CMOS image sensor. The system control unit11that manages and controls the entire electric control system of the digital camera100drives the imaging element5through the imaging element drive unit10and outputs the subject image captured through the imaging optical system of the lens device40as an image signal. The imaging element drive unit10drives the imaging element5by generating a drive signal based on an instruction from the system control unit11and supplying the drive signal to the imaging element5. A hardware configuration of the imaging element drive unit10is an electric circuit configured by combining circuit elements such as semiconductor elements. 
A command signal from a user is input into the system control unit11through the operation unit14. The operation unit14includes a touch panel integrated with a display surface22b, described later, and various buttons and the like. The system control unit11manages and controls the entire digital camera100. A hardware structure of the system control unit11corresponds to various processors that perform processing by executing programs including an imaging control program. The programs executed by the system control unit11are stored in the ROM of the memory16. The various processors include a central processing unit (CPU) that is a general-purpose processor performing various types of processing by executing a program, a programmable logic device (PLD) that is a processor of which a circuit configuration can be changed after manufacturing like a field programmable gate array (FPGA), or a dedicated electric circuit or the like that is a processor having a circuit configuration dedicatedly designed to execute a specific type of processing like an application specific integrated circuit (ASIC). More specifically, a structure of the various processors is an electric circuit in which circuit elements such as semiconductor elements are combined. The system control unit11may be configured with one of the various processors or may be configured with a combination of two or more processors of the same type or different types (for example, a combination of a plurality of FPGAs or a combination of a CPU and an FPGA). The display device22comprises the display surface22bconfigured with an organic electroluminescence (EL) panel, a liquid crystal panel, or the like and a display controller22athat controls display on the display surface22b. The memory control unit15, the digital signal processing unit17, the external memory control unit20, and the display controller22aare connected to each other through a control bus24and a data bus25and are controlled by instructions from the system control unit11. FIG.2is a schematic plan view illustrating a schematic configuration of the imaging element5illustrated inFIG.1. The imaging element5includes the imaging surface60on which the plurality of pixels61are two-dimensionally arranged in a row direction X and a column direction Y orthogonal to the row direction X. The plurality of pixels61include a focal point detection pixel61bfor detecting a signal corresponding to a quantity of received light by receiving one of a pair of luminous fluxes passing through two different parts arranged in the row direction X in a pupil region of the imaging optical system, a focal point detection pixel61cfor detecting a signal corresponding to a quantity of received light by receiving the other of the pair of luminous fluxes, and a normal pixel61afor detecting a signal corresponding to a quantity of received light by receiving both of the pair of luminous fluxes. In the example inFIG.2, a pixel line62obtained by arranging a plurality of the normal pixels61ain the row direction X and a pixel line63obtained by alternately arranging the focal point detection pixel61band the focal point detection pixel61cin the row direction X are alternately arranged in the column direction Y on the imaging surface60. The pixel line63may include a plurality of pairs of the focal point detection pixel61band the focal point detection pixel61cand may also include the normal pixel61ain addition to these pairs. 
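For illustration only, the following Python/NumPy sketch, which is not part of the disclosed embodiment, models the FIG.2 layout: rows standing in for pixel lines 63 alternate between pixels 61b and 61c, while the remaining rows stand in for pixel lines 62 of normal pixels 61a. The array size, the choice of odd rows for pixel lines 63, and the function name signal_groups are assumptions.

import numpy as np

ROWS, COLS = 8, 8                      # toy imaging surface
NORMAL, PHASE_B, PHASE_C = 0, 1, 2     # pixels 61a, 61b, 61c

layout = np.full((ROWS, COLS), NORMAL, dtype=int)
layout[1::2, 0::2] = PHASE_B           # odd rows stand in for pixel lines 63
layout[1::2, 1::2] = PHASE_C

def signal_groups(frame, layout):
    # First pixel signal group (61b) and second pixel signal group (61c)
    # that feed the correlation calculation for the focal point detection.
    return frame[layout == PHASE_B], frame[layout == PHASE_C]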
Hereinafter, the pixel line62and the pixel line63will be simply referred to as a pixel line unless otherwise distinguished. The imaging element5further comprises a drive circuit64that drives the pixels61arranged on the imaging surface60, and a signal processing circuit65that processes a pixel signal read out to a signal line from each pixel61of each pixel line arranged on the imaging surface60. Hereinafter, inFIG.2, an end part of the imaging surface60on one end side (an upper side inFIG.2) of the column direction Y will be referred to as an upper end, and an end part of the imaging surface60on the other end side (a lower side inFIG.2) of the column direction Y will be referred to as a lower end. A region in which each pixel line63is arranged in the imaging surface60constitutes a first light-receiving region. A region in which each pixel line62is arranged in the imaging surface60constitutes a second light-receiving region. The drive circuit64performs resetting (discharge of charges accumulated in a photoelectric conversion element) of each pixel61included in each pixel line, reading out of a pixel signal corresponding to the charges accumulated in the photoelectric conversion element of each pixel61to a signal line, and the like by independently driving each pixel line based on a signal from the imaging element drive unit10. The signal processing circuit65performs correlative double sampling processing on the pixel signal read out to the signal line from each pixel61of the pixel line, converts the pixel signal after the correlative double sampling processing into a digital signal, and outputs the digital signal to the data bus25(refer toFIG.1). The signal processing circuit65is controlled by the imaging element drive unit10. The digital signal processing unit17generates captured image data by performing signal processing such as demosaicing and gamma-correction processing on a pixel signal group output to the data bus25from the imaging element5. The digital camera100is equipped with a continuous shooting mode in which a plurality of pieces of captured image data are continuously generated and recorded on the recording medium21in accordance with one imaging instruction. In the continuous shooting mode, the system control unit11drives the imaging element5to image the subject by the imaging element drive unit10based on a rolling shutter system. Driving based on the rolling shutter system includes rolling reset driving and rolling readout driving. The rolling reset driving is driving of sequentially performing processing of starting exposure of each pixel61by resetting each pixel61of the pixel line while changing the pixel line. The rolling readout driving is driving of sequentially performing processing of reading out a signal from each pixel61of an exposed pixel line and finishing the exposure of the pixel line while changing the pixel line. In the continuous shooting mode, in a case where the imaging instruction is received, the system control unit11continuously performs recording of the captured image data, displaying of a live view image on the display surface22b, and a recording imaging control. The recording imaging control is the control for outputting a pixel signal to be used for a focal point detection from the imaging element5. In addition, the system control unit11performs a display imaging control at least once between each of a plurality of the recording imaging controls. 
The display imaging control is a control for outputting a pixel signal to be used for the display of the live view image on the display surface22band the focal point detection from the imaging element5. The recording imaging control constitutes a first control, and the display imaging control constitutes a second control. That is, the system control unit11performs the recording imaging control (the first control) a plurality of times. The plurality of the recording imaging controls includes a preceding recording imaging control (a preceding first control) and a subsequent recording imaging control (a subsequent first control). The subsequent recording imaging control is the recording imaging control performed next to the preceding first control. In other words, the preceding recording imaging control is the recording imaging control performed prior to the subsequent first control. For example, in a case where the recording imaging controls are performed three times, the first recording imaging control is the preceding recording imaging control with respect to the second recording imaging control, and the second recording imaging control is the subsequent recording imaging control with respect to the first recording imaging control. Furthermore, the second recording imaging control is the preceding recording imaging control with respect to the third recording imaging control, and the third recording imaging control is the subsequent recording imaging control with respect to the second recording imaging control. The focal point detection refers to processing of performing correlation calculation between a first pixel signal group output from each focal point detection pixel61band a second pixel signal group output from each focal point detection pixel61cand deriving a drive amount of the focus lens necessary for focusing on a target subject based on a result of the correlation calculation. Here, the focal point detection pixel61band the focal point detection pixel61cused for the correlation calculation are pixels included in the same pixel line63. The correlation calculation is processing of calculating an area S[d] surrounded by two data waveforms of a data waveform consisting of the first pixel signal group and a data waveform consisting of the second pixel signal group. Specifically, the correlation calculation is the processing of calculating the area S[d] in a case where the two data waveforms are shifted while changing the shift amount d over a plurality of values. FIG.3is a diagram illustrating an example of a curve showing a result of the correlation calculation. A horizontal axis inFIG.3denotes the shift amount d, and a vertical axis inFIG.3denotes the area S[d]. The drive amount of the focus lens can be derived by deriving a defocus amount based on the shift amount d=d1 in a case where the area S[d] is a minimum value min. A maximum value max of the area S[d] depends on the target subject. For example, in a case of a subject having high contrast (bright condition), the maximum value max is increased. In a case of a subject having low contrast (dark condition), the maximum value max is decreased. In a case where a difference between the maximum value max and the minimum value min is decreased, it is difficult to decide the minimum value min. Thus, a magnitude of the maximum value max affects derivation accuracy (in other words, accuracy of the focal point detection) of the drive amount of the focus lens.
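For illustration only, the following Python/NumPy sketch, which is not part of the disclosed embodiment, approximates the correlation calculation: for each shift amount d it evaluates the area S[d] between the waveform of the first pixel signal group and the shifted waveform of the second pixel signal group, then takes the shift d1 at the minimum value min. The use of a sum of absolute differences, the wrap-around handling of np.roll, and the function names are assumptions; the conversion from d1 to a defocus amount and a lens drive amount is outside this sketch.

import numpy as np

def correlation_curve(first_group, second_group, max_shift=10):
    # Area S[d] between the 61b waveform and the 61c waveform shifted by d.
    # np.roll wraps at the array ends, a simplification of real windowing.
    shifts = np.arange(-max_shift, max_shift + 1)
    areas = np.array([np.abs(first_group - np.roll(second_group, d)).sum()
                      for d in shifts])
    return shifts, areas

def best_shift(first_group, second_group):
    shifts, areas = correlation_curve(first_group, second_group)
    d1 = int(shifts[np.argmin(areas)])      # shift at the minimum value min
    contrast = areas.max() - areas.min()    # small max - min: unreliable min
    return d1, contrast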
In the recording imaging control, the system control unit11causes the imaging element5to perform imaging by setting exposure time periods of the pixel line62and the pixel line63to a first exposure time period (exposure time period EX1described later). The first exposure time period is a value decided based on exposure set by the user or exposure automatically set based on brightness and the like of the subject. In the display imaging control, the system control unit11causes the imaging element5to perform imaging by setting the exposure time period of the pixel line62to the first exposure time period and setting the exposure time period of the pixel line63to a second exposure time period (exposure time period EX2described later). The second exposure time period is a value decided by an exposure condition under which detection accuracy of the minimum value min can be sufficiently secured. The second exposure time period may be the same value as the first exposure time period or may be a value greater than the first exposure time period. However, since quantities of received light of the focal point detection pixel61band the focal point detection pixel61care smaller than that of the normal pixel61a, the second exposure time period is generally longer than the first exposure time period. FIG.4is a diagram illustrating an example of a timing chart for describing an operation of the digital camera100illustrated inFIG.1at a time of the continuous shooting mode. A drive timing of each pixel line of the imaging element5is illustrated in an upper part ofFIG.4. In the upper part ofFIG.4, a vertical axis denotes a position of the pixel line in the column direction Y. Diagonal line RR (only a part thereof is designated by a reference sign) of a solid line illustrated in the drive timing illustrates a timing at which the exposure of each pixel line is finished by the rolling readout driving. Diagonal line RS1(only a part thereof is designated by a reference sign) of a broken line illustrated in the drive timing illustrates a timing at which the resetting (start of exposure) of each pixel line is performed. Diagonal line RS2(only a part thereof is designated by a reference sign) of a dot-dashed line illustrated in the drive timing illustrates a timing at which the resetting (start of exposure) of each pixel line63is performed. Diagonal line RS3(only a part thereof is designated by a reference sign) of a broken line illustrated in the drive timing illustrates a timing at which the resetting (start of exposure) of each pixel line62is performed. A period surrounded by diagonal line RR and adjacent diagonal line RR on a right side constitutes one frame period as a control period. InFIG.4, a frame period in which the recording imaging control is executed is illustrated as a frame period71, and a frame period in which the display imaging control is executed is illustrated as a frame period72. In the example inFIG.4, in a case where the imaging instruction is received, the system control unit11repeatedly executes processing of performing the recording imaging control (frame period71), the display imaging control (frame period72), and the display imaging control (frame period72) in this order. In the frame period71in which the recording imaging control is performed, a period surrounded by diagonal line RS1and adjacent diagonal line RR on a right side illustrates the exposure time period EX1of all pixel lines. 
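For illustration only, the following Python sketch, which is not part of the disclosed embodiment, models the per-line exposure assignment and the rolling reset/readout timing described above. The line interval, the concrete values of EX1 and EX2, the mapping of odd rows to pixel lines 63, and the function names are assumptions.

LINE_INTERVAL = 10e-6          # hypothetical readout interval between lines, s
EX1, EX2 = 1e-3, 4e-3          # hypothetical exposure time periods, s

def line_exposure(row, frame_type):
    # Recording imaging control: every line is exposed for EX1.
    # Display imaging control: pixel lines 62 keep EX1, pixel lines 63 use EX2.
    if frame_type == "display" and row % 2 == 1:   # odd rows = pixel lines 63
        return EX2
    return EX1

def rolling_times(n_rows, frame_type, readout_start=0.0):
    # Rolling readout: line readouts are staggered by LINE_INTERVAL.
    # Rolling reset: each line starts exposing its exposure time before readout.
    times = []
    for row in range(n_rows):
        readout = readout_start + row * LINE_INTERVAL
        reset = readout - line_exposure(row, frame_type)
        times.append((reset, readout))
    return times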
A pixel signal group output from each pixel61of the imaging element5by imaging in the exposure time period EX1in the frame period71is used for generation of the captured image data, generation of the live view image, and the focal point detection. In the frame period72in which the display imaging control is performed, a period surrounded by diagonal line RS2and adjacent diagonal line RR on a right side illustrates the exposure time period EX2of the pixel line63, and a period surrounded by diagonal line RS3and adjacent diagonal line RR on a right side illustrates the exposure time period EX1of the pixel line62. As illustrated inFIG.4, the exposure time period EX2is longer than the exposure time period EX1. A pixel signal group output from each pixel line63by imaging in the exposure time period EX2in the frame period72is used for the focal point detection. A pixel signal group output from each pixel line62by imaging in the exposure time period EX1in the frame period72is used for the generation of the live view image. A rectangle illustrated in item “display update” inFIG.4illustrates a period in which the live view image displayed on the display surface22bis updated. A rectangle illustrated in item “auto focus (AF) calculation” inFIG.4illustrates a period in which the focal point detection is performed. A rectangle illustrated in item “prediction calculation” inFIG.4illustrates a period in which processing of predicting a subsequent drive amount of the focus lens at an exposure start timing of the recording imaging control based on the drive amount of the focus lens obtained by the focal point detection performed in the past up to the present is performed. A rectangle illustrated in item “focus lens driving” inFIG.4illustrates a period in which the focus lens is driven based on a result of the prediction calculation. The system control unit11continues executing the recording imaging control (the exposure time period may not be EX1) until receiving the imaging instruction and repeats the display of the live view image, the focal point detection, and the prediction calculation based on the pixel signal group obtained by executing the recording imaging control. In a case where the imaging instruction is received, the system control unit11performs a focal point control by driving the focus lens in accordance with the drive amount at a time of a start of the first frame period71predicted at a time of reception of the imaging instruction (step S0inFIG.4). After the imaging instruction is received, the system control unit11executes the recording imaging control in the first frame period71. In a case where the recording imaging control is finished, the system control unit11subsequently performs the display imaging control twice. In addition, the system control unit11generates the live view image based on the pixel signal group of the pixel line62obtained by the recording imaging control and updates display by displaying the live view image on the display surface22b(step S2inFIG.4). In addition, the system control unit11executes the focal point detection based on the pixel signal group of the pixel line63obtained by the recording imaging control (step S3inFIG.4) and predicts the drive amount of the focus lens at a time of the start of the exposure in the second frame period71based on the drive amount of the focus lens obtained by executing the focal point detection and the drive amount of the focus lens obtained by the focal point detection performed in the past (step S4inFIG.4). 
In a case where the driving of the focus lens by the drive amount predicted in step S4can be completed until the start of the exposure in the second frame period71, the system control unit11performs a focusing control by driving the focus lens in accordance with the predicted drive amount until the start of the exposure in the second frame period71.FIG.4illustrates a case where the driving of the focus lens by the drive amount predicted in step S4cannot be completed until the start of the exposure in the second frame period71. In such a case, in a case where the first frame period71is finished, the system control unit11drives the focus lens in accordance with the drive amount at the time of the start of the exposure in the second frame period71predicted in step S1during the first frame period71(step S5inFIG.4). In a case where the first frame period72after the imaging instruction is finished, the system control unit11generates the live view image based on the pixel signal group of the pixel line62obtained by imaging in the frame period72and updates display by displaying the live view image on the display surface22b(step S6inFIG.4). In addition, in a case where the first frame period72is finished, the system control unit11executes the focal point detection based on the pixel signal group of the pixel line63obtained by imaging in the frame period72(step S7inFIG.4) and predicts the drive amount of the focus lens at the time of the start of the exposure in the third frame period71based on the drive amount of the focus lens obtained by executing the focal point detection and the drive amount (at least the drive amount obtained in step S3) of the focus lens obtained by the focal point detection performed in the past (step S8inFIG.4). In a case where the second frame period72after the imaging instruction is finished, the system control unit11generates the live view image based on the pixel signal group of the pixel line62obtained by imaging in the frame period72and updates display by displaying the live view image on the display surface22b(step S9inFIG.4). In addition, in a case where the second frame period72is finished, the system control unit11executes the focal point detection based on the pixel signal group of the pixel line63obtained by imaging in the frame period72(step S10inFIG.4) and predicts the drive amount of the focus lens at the start of the exposure in the third frame period71based on the drive amount of the focus lens obtained by executing the focal point detection and the drive amount (including the drive amount obtained in step S3and step S7) of the focus lens obtained by the focal point detection performed in the past (step S11inFIG.4). The system control unit11drives the focus lens in accordance with the drive amount predicted in step S11(step S12inFIG.4). Then, the same operation as step S2to step S12is repeated. As described above, according to the digital camera100, since imaging is performed with exposure appropriate for recording in the frame period71, the captured image data of high quality can be recorded. In addition, in the frame period72, imaging is performed in the same exposure time period EX1as the frame period71for the pixel line62, and the live view image is updated based on the pixel signal group output from the pixel line62. Thus, constant brightness of the live view image can be achieved, and quality of the live view image during continuous shooting can be improved. 
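For illustration only, the following Python sketch, which is not part of the disclosed embodiment, summarizes the FIG.4 sequence just described: each recording frame period (71) is followed by two display frame periods (72), every frame updates the live view and contributes a drive amount to the prediction, and the focus lens is driven before the next recording frame. The callable parameters (capture, update_live_view, focal_point_detection, predict_drive, drive_lens) are hypothetical placeholders.

def continuous_shooting(n_bursts, capture, update_live_view,
                        focal_point_detection, predict_drive, drive_lens):
    history = []                     # past drive amounts used for prediction
    for _ in range(n_bursts):
        predicted = None
        for frame_type in ("recording", "display", "display"):
            line62_signals, line63_signals = capture(frame_type)
            update_live_view(line62_signals)                 # steps S2, S6, S9
            history.append(focal_point_detection(line63_signals))  # S3, S7, S10
            predicted = predict_drive(history)               # steps S4, S8, S11
        drive_lens(predicted)                                # step S12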
In addition, in the frame period72, imaging is performed in the exposure time period EX2appropriate for the focal point detection for the pixel line63, and the focal point detection is performed based on the pixel signal group output from the pixel line63. Thus, accuracy of the focusing control using a result of the focal point detection can be improved. First Modification Example of Operation of Digital Camera100 FIG.5is a timing chart for describing a first modification example of the operation of the digital camera100. In the first modification example, an operation example in a case of setting the exposure time period EX2set in the frame period72to be longer than an upper limit value (hereinafter, referred to as a longest exposure time period) per frame period is illustrated. InFIG.5, diagonal line RR that constitutes a boundary between the frame period72and the subsequent frame period72among diagonal lines RR illustrated inFIG.4is changed to a thick solid line. This thick diagonal line RR illustrates a timing at which the rolling readout driving of reading out the pixel signal from only the pixel line62and not reading out the pixel signal from the pixel line63is performed. In addition, compared toFIG.4,FIG.5omits the reading out (that is, diagonal line RS2immediately after thick diagonal line RR) of the pixel signal from the pixel line63in the third, sixth, and ninth frame periods72out of all frame periods71,72. In the operation of the modification example illustrated inFIG.5, the reading out of the pixel signal from the pixel line63is not performed at an end timing of the frame period72immediately after the frame period71. Thus, step S7and step S8illustrated inFIG.4are removed. The reading out of the pixel signal from the pixel line63is performed at an end timing of the second frame period after the frame period71. In the modification example inFIG.5, the exposure time period EX2is set to be longer than the longest exposure time period by continuing the exposure of the pixel line63over two frame periods. According to the first modification example, since the exposure time period EX2can be set to be longer than the longest exposure time period, the focusing control can be performed with high accuracy even for a dark subject. In addition, since the reading out of the pixel signal from the pixel line62is performed for each frame period, update frequency of the live view image is not decreased even in a case where the exposure time period EX2is increased. Consequently, the quality of the live view image during the continuous shooting can be increased. Second Modification Example of Operation of Digital Camera100 FIG.6is a timing chart for describing a second modification example of the operation of the digital camera100. In the second modification example, an operation example in a case of further increasing the exposure time period EX2set in the frame period72from the example inFIG.5is illustrated. FIG.6is significantly different fromFIG.4in that three frame periods72are present between the frame period71and the subsequent frame period71. In addition, inFIG.6, step S5is changed to processing of driving the focus lens in accordance with the drive amount predicted in step S4. In addition, inFIG.6, diagonal line RR constituting boundaries among the three frame periods72is a thick line. 
This thick diagonal line RR illustrates a timing at which the rolling readout driving of reading out the pixel signal from only the pixel line62and not reading out the pixel signal from the pixel line63is performed. In addition, inFIG.6, the rolling resetting driving illustrated by diagonal line RS2is performed in only the first frame period72of the three frame periods72, and the rolling resetting driving illustrated by diagonal line RS2is not performed in the frame periods72other than the first frame period72of the three frame periods72. That is, the exposure of the pixel line63continues over three frame periods72after the frame period71. In the operation of the modification example illustrated inFIG.6, the reading out of the pixel signal from the pixel line63is not performed at an end timing of two frame periods72immediately after the frame period71. Thus, step S7, step S8, step S10, and step S11illustrated inFIG.4are removed. The reading out of the pixel signal from the pixel line62and the pixel line63is performed at an end timing of the third frame period after the frame period71. The update (step S21) of the live view image based on this pixel signal, the focal point detection (step S22) based on this pixel signal, and the prediction (step S23) of the drive amount of the focus lens at the time of the start of the exposure in the third frame period71based on this focal point detection result and a focal point detection result in the past are performed. Then, in a case where the second frame period71is finished, step S24corresponding to step S2, step S25corresponding to step S3, and step S26corresponding to step S4are performed. The system control unit11drives the focus lens in accordance with the drive amount predicted in step S26(step S27). Then, the same processing as step S6, step S9, step S21, step S22, and step S23is repeated. In the modification example inFIG.6, the exposure time period EX2is set to be longer than the longest exposure time period by continuing the exposure of the pixel line63over three frame periods. According to this modification example, since the exposure time period EX2can be further set to be longer than the example inFIG.5, the focusing control for a darker subject can be performed with high accuracy. In addition, since the reading out of the pixel signal from the pixel line62is performed for each frame period, the update frequency of the live view image is not decreased even in a case where the exposure time period EX2is increased. Consequently, the quality of the live view image during the continuous shooting can be increased. Preferred Example of Digital Camera100 Here, a preferred example of a method of predicting the drive amount of the focus lens will be described. For example, in step S11inFIG.4, the system control unit11predicts the drive amount of the focus lens based on the drive amount derived in step S3, the drive amount derived in step S7, and the drive amount derived in step S10. The system control unit11obtains a linear function showing a change in time of the drive amount from these three drive amounts using weighted least squares and derives the drive amount at a timing of the prediction from this linear function as a prediction value. The system control unit11determines a weighting coefficient that is set for each of the three drive amounts using the weighted least squares. 
Specifically, the system control unit11sets the weighting coefficient to be a relatively large value for a first drive amount (in the example inFIG.4, the drive amount obtained in each of step S7and step S10) derived based on the pixel signal obtained by imaging in the exposure time period appropriate for the focal point detection. On the other hand, the system control unit11sets the weighting coefficient to be a smaller value than the weighting coefficient of the first drive amount for a second drive amount (in the example inFIG.4, the drive amount obtained in step S3) derived based on the pixel signal obtained by imaging in the exposure time period appropriate for recording. Doing so can increase prediction accuracy of the drive amount even in a case where the exposure of imaging performed in the frame period71is not appropriate for the focal point detection. The weighting coefficient of the second drive amount may be variable instead of being a fixed value. For example, the system control unit11changes the weighting coefficient based on an imaging condition (imaging sensitivity or a difference in exposure with respect to the exposure appropriate for the focal point detection) of imaging performed in the frame period71. The system control unit11sets the weighting coefficient of the second drive amount to be a large value in a state where the accuracy of the focal point detection can be secured with the imaging condition, and sets the weighting coefficient of the second drive amount to be a small value in a state where the accuracy of the focal point detection cannot be secured with the imaging condition. For example, in a state where the imaging sensitivity is high, the accuracy of the focal point detection tends to be decreased because noise is increased. Therefore, the system control unit11sets the weighting coefficient of the first drive amount to be the same as the weighting coefficient of the second drive amount in a case where the imaging sensitivity is less than a threshold value, and decreases the weighting coefficient of the first drive amount in inverse proportion to a magnitude of the imaging sensitivity in a case where the imaging sensitivity is greater than or equal to the threshold value. In addition, the system control unit11sets the weighting coefficient of the first drive amount to be the same value as the weighting coefficient of the second drive amount in a case where the difference in exposure is less than a threshold value, and decreases the weighting coefficient of the first drive amount in inverse proportion to a magnitude of the difference in exposure in a case where the difference in exposure is greater than or equal to the threshold value. Doing so can increase the prediction accuracy of the drive amount. Another Preferred Example of Digital Camera100 In the first modification example and the second modification example, while the exposure time period EX2can be set to be longer than the longest exposure time period, frequency of the focal point detection is decreased, or a frame rate of the continuous shooting is decreased, compared to a case where the exposure time period EX2is less than or equal to the longest exposure time period. Thus, it is preferable to increase the prediction accuracy of the drive amount of the focus lens while maintaining the exposure time period EX2to be less than or equal to the longest exposure time period. 
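For illustration only, the following Python/NumPy sketch, which is not part of the disclosed embodiment, shows the weighted least squares prediction described in the preceding paragraphs: a linear function of time is fitted to past drive amounts, with drive amounts obtained under the exposure appropriate for the focal point detection weighted more heavily than those obtained under the recording exposure. The timestamps, drive amounts, and weight values are illustrative only; np.polyfit minimizes the sum of squared weighted residuals, so the square roots of the weights are passed.

import numpy as np

def predict_drive_amount(times, drives, weights, t_next):
    # Weighted least squares fit of drive = a*t + b, evaluated at t_next.
    a, b = np.polyfit(np.asarray(times, float), np.asarray(drives, float),
                      deg=1, w=np.sqrt(np.asarray(weights, float)))
    return a * t_next + b

# e.g., the recording-frame result (step S3) gets a smaller weight than the
# display-frame results (steps S7 and S10); numbers are made up.
print(predict_drive_amount([0.0, 1.0, 2.0], [12.0, 10.5, 9.2],
                           [0.3, 1.0, 1.0], 3.0))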
For example, in a case where appropriate exposure (exposure appropriate for the focal point detection) of the pixel line63that is decided in accordance with brightness of the target subject is not obtained even by setting the exposure time period EX2to the longest exposure time period, the system control unit11increases the imaging sensitivity of the imaging element5in imaging in the frame period72. FIG.7illustrates an example of a program line diagram in which a relationship between an exposure value (EV) and a time value (TV) corresponding to the exposure time period in a case where the imaging sensitivity and the F number are predetermined values is defined. InFIG.7, a program line diagram in a case where the longest exposure time period is 5.6 TV is illustrated. As illustrated inFIG.7, the appropriate exposure of the pixel line63is implemented by adjusting the TV (exposure time period), in a case where the EV is within a range from 23 EV to near 8 EV. In a case where the EV is less than or equal to 7.5 EV, the appropriate exposure of the pixel line63is implemented by changing the imaging sensitivity between the predetermined value and a predetermined upper limit value while maintaining 5.6 TV. A thick broken line inFIG.7illustrates a change in the program line diagram by increasing the imaging sensitivity. In a case where the EV is further decreased below 5.5 EV, the appropriate exposure of the pixel line63cannot be implemented even by setting the imaging sensitivity to the predetermined upper limit value unless the TV is decreased. Thus, the appropriate exposure of the pixel line63is secured by changing the TV to a smaller value than 5.6 TV (a thick solid line on a left side inFIG.7). That is, changing the TV to a smaller value than 5.6 TV means switching from the operation (an operation of finishing the exposure of the pixel line63within one frame period72) illustrated inFIG.4to the operation (an operation of continuing the exposure of the pixel line63over a plurality of frame periods72) illustrated inFIG.5orFIG.6. In a case where the appropriate exposure for the focal point detection cannot be implemented even by setting the exposure time period EX2to the longest exposure time period, a possibility of the exposure time period EX2exceeding the longest exposure time period can be reduced by increasing the imaging sensitivity within a range in which the appropriate exposure can be implemented without extending the longest exposure time period. Accordingly, it is possible to secure the accuracy of the focusing control while preventing a decrease in the frequency of the focal point detection and a decrease in the frame rate of the continuous shooting as much as possible. In the above description, in a case where the appropriate exposure cannot be implemented even by increasing the imaging sensitivity to the predetermined upper limit value, the exposure time period EX2is changed to a larger value than the longest exposure time period. However, even in a case where the appropriate exposure cannot be implemented by increasing the imaging sensitivity to the predetermined upper limit value, the accuracy of the focal point detection may be secured depending on a situation. 
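For illustration only, the following Python sketch, which is not part of the disclosed embodiment, mirrors the FIG.7 program line diagram: the exposure time (TV) is adjusted first, then the imaging sensitivity is raised at the longest TV, and only when both are exhausted is the exposure allowed to span several frame periods. The EV-to-TV offset, the sensitivity headroom, and the function name are hypothetical; only the 5.6 TV longest exposure value is taken from the description.

import math

LONGEST_TV = 5.6        # TV of the longest per-frame exposure (FIG. 7)
TV_OFFSET = 2.0         # hypothetical EV-to-TV offset at the base sensitivity
MAX_GAIN_STOPS = 2.0    # hypothetical headroom up to the sensitivity upper limit

def display_exposure_settings(ev):
    # Returns (tv, gain_stops, frame_periods) for the pixel line 63 exposure.
    tv = ev - TV_OFFSET                     # appropriate exposure at base gain
    gain = 0.0
    if tv < LONGEST_TV:                     # too dark: raise sensitivity first
        gain = min(LONGEST_TV - tv, MAX_GAIN_STOPS)
        tv += gain
    frames = 1
    if tv < LONGEST_TV:                     # still too dark: exposure spans
        frames = math.ceil(2 ** (LONGEST_TV - tv))   # several frame periods
    return tv, gain, frames

print(display_exposure_settings(10.0))   # bright scene: (8.0, 0.0, 1)
print(display_exposure_settings(7.0))    # dim: sensitivity raised, one frame
print(display_exposure_settings(4.0))    # dark: exposure over multiple frames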
For example, even in imaging performed based on the pixel line63in a state where the appropriate exposure cannot be implemented, in a case where the maximum value max of the curve that is illustrated inFIG.3and is obtained by this imaging is large, the minimum value min can be decided with high accuracy, and the accuracy of the focal point detection can be secured. Therefore, in a case where the appropriate exposure of the pixel line63is not obtained even by setting the exposure time period EX2to the longest exposure time period and setting the imaging sensitivity to the predetermined upper limit value, the system control unit11performs the display imaging control by setting the exposure time period EX2to the longest exposure time period and setting the imaging sensitivity to the predetermined upper limit value as long as information based on the pixel signal output from the pixel line63in immediately previous imaging (imaging in the frame period71) satisfies a predetermined condition. Examples of the predetermined condition include a condition that a ratio (max/min) of the maximum value max and the minimum value min of the curve showing the result of the correlation calculation based on the pixel signal output from the pixel line63exceeds a threshold value, or a condition that a difference (max−min) between the maximum value max and the minimum value min exceeds a threshold value. FIG.8illustrates an example of a program line diagram in which a relationship between the exposure value and the TV in a case where the imaging sensitivity and the F number are predetermined values is defined. InFIG.8, a program line diagram in a case where the longest exposure time period is 5.6 TV is illustrated. As illustrated inFIG.8, the appropriate exposure of the pixel line63is implemented by adjusting the TV, in a case where the EV is within a range from 23 EV to near 8 EV. In a case where the EV is less than or equal to 7.5 EV, the appropriate exposure of the pixel line63is implemented by changing the imaging sensitivity between the predetermined value and the predetermined upper limit value while maintaining 5.6 TV (a thick broken line inFIG.8). In a case where the EV is further decreased below 5.5 EV and a predetermined condition is satisfied, the display imaging control is performed in a state where the imaging sensitivity is set to the predetermined upper limit value and the exposure time period EX2is set to the longest exposure time period (a thin broken line inFIG.8). That is, the display imaging control is performed in a state of underexposure in which the exposure is lower than the appropriate exposure. On the other hand, in a case where the EV is below 5.5 EV and the predetermined condition is not satisfied, the accuracy of the focal point detection cannot be secured unless the TV is decreased, and the appropriate exposure cannot be implemented. Thus, the appropriate exposure is secured by changing the TV to a smaller value than 5.6 TV (a thick solid line on a left side inFIG.8). In a case where it is difficult to secure the appropriate exposure even by setting the exposure time period EX2to the longest exposure time period and increasing the imaging sensitivity to the predetermined upper limit value, the possibility of the exposure time period EX2exceeding the longest exposure time period can be reduced by performing the display imaging control in the underexposure as long as the predetermined condition is satisfied. 
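For illustration only, the following Python sketch, which is not part of the disclosed embodiment, expresses the predetermined condition described above as a simple decision rule on the previous correlation curve: underexposed display imaging (longest exposure time period, sensitivity at its upper limit) is permitted only when the minimum is sufficiently distinct. Both threshold values and the function name are hypothetical.

RATIO_THRESHOLD = 3.0     # hypothetical threshold on max/min of the curve
DIFF_THRESHOLD = 500.0    # hypothetical threshold on max - min of the curve

def underexposure_allowed(curve_max, curve_min, use_ratio=True):
    # Condition on the correlation curve from the immediately previous
    # imaging (frame period 71) that permits underexposed focal point
    # detection in the display imaging control.
    if use_ratio:
        return curve_min > 0 and (curve_max / curve_min) > RATIO_THRESHOLD
    return (curve_max - curve_min) > DIFF_THRESHOLD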
Accordingly, it is possible to secure the accuracy of the focusing control while preventing a decrease in the frequency of the focal point detection and a decrease in the frame rate of the continuous shooting as much as possible. Next, a configuration of a smartphone that is another embodiment of the imaging apparatus according to the present invention will be described. FIG.9illustrates an exterior of a smartphone200. The smartphone200illustrated inFIG.9includes a casing201having a flat plate shape and comprises a display and input unit204in which a display panel202as a display unit and an operation panel203as an input unit are integrated on one surface of the casing201. The casing201comprises a speaker205, a microphone206, an operation unit207, and a camera unit208. The configuration of the casing201is not for limitation and may employ, for example, a configuration in which the display unit and the input unit are independently disposed, or a configuration that has a folded structure or a sliding mechanism. FIG.10is a block diagram illustrating a configuration of the smartphone200illustrated inFIG.9. As illustrated inFIG.10, a wireless communication unit210, the display and input unit204, a call unit211, the operation unit207, the camera unit208, a storage unit212, an external input-output unit213, a global navigation satellite system (GNSS) reception unit214, a motion sensor unit215, a power supply unit216, and a main control unit220are comprised as main constituents of the smartphone. In addition, a wireless communication function of performing mobile wireless communication with a base station apparatus, not illustrated, through a mobile communication network, not illustrated, is provided as a main function of the smartphone200. The wireless communication unit210performs wireless communication with the base station apparatus accommodated in the mobile communication network in accordance with an instruction from the main control unit220. By using the wireless communication, transmission and reception of various file data such as voice data and image data, electronic mail data, or the like and reception of web data, streaming data, or the like are performed. The display and input unit204is a so-called touch panel that visually delivers information to the user by displaying images (still images and motion images), text information, or the like and detects a user operation with respect to the displayed information under control of the main control unit220. The display and input unit204comprises the display panel202and the operation panel203. A liquid crystal display (LCD), an organic electro-luminescence display (OELD), or the like is used as a display device in the display panel202. The operation panel203is a device that is placed such that an image displayed on the display surface of the display panel202can be visually recognized, is operated by a finger of the user or a stylus, and detects one or a plurality of coordinates. In a case where the device is operated by the finger of the user or the stylus, a detection signal generated by the operation is output to the main control unit220. Next, the main control unit220detects an operation position (coordinates) on the display panel202based on the received detection signal. As illustrated inFIG.10, the display panel202and the operation panel203of the smartphone200illustrated as the imaging apparatus according to one embodiment of the present invention are integrated and constitute the display and input unit204. 
The operation panel203is arranged to completely cover the display panel202. In a case where such arrangement is employed, the operation panel203may have a function of detecting the user operation even in a region outside the display panel202. In other words, the operation panel203may comprise a detection region (hereinafter, referred to as a display region) for an overlapping part in overlap with the display panel202and a detection region (hereinafter, referred to as a non-display region) for an edge part other than the overlapping part that is not in overlap with the display panel202. The size of the display region and the size of the display panel202may completely match, but both sizes do not need to match. In addition, the operation panel203may comprise two sensitive regions of the edge part and an inner part other than the edge part. Furthermore, the width of the edge part is appropriately designed depending on the size and the like of the casing201. Furthermore, as a position detection method employed in the operation panel203, a matrix switch method, a resistive film method, a surface acoustic wave method, an infrared method, an electromagnetic induction method, an electrostatic capacitive method, and the like are exemplified, and any of the methods can be employed. The call unit211comprises the speaker205or the microphone206and converts voice of the user input through the microphone206into voice data processable in the main control unit220and outputs the voice data to the main control unit220, or decodes voice data received by the wireless communication unit210or the external input-output unit213and outputs the decoded voice data from the speaker205. In addition, as illustrated inFIG.9, for example, the speaker205can be mounted on the same surface as a surface on which the display and input unit204is disposed, and the microphone206can be mounted on a side surface of the casing201. The operation unit207is a hardware key that uses a key switch or the like, and receives an instruction from the user. For example, as illustrated inFIG.9, the operation unit207is a push-button type switch that is mounted on a side surface of the casing201of the smartphone200and enters an ON state in a case where the switch is pressed by the finger or the like, and enters an OFF state by restoring force of a spring or the like in a case where the finger is released. The storage unit212stores a control program and control data of the main control unit220, application software, address data in which a name, a telephone number, or the like of a communication counterpart is associated, transmitted and received electronic mail data, web data downloaded by web browsing, and downloaded contents data, and temporarily stores streaming data or the like. In addition, the storage unit212is configured with an internal storage unit217incorporated in the smartphone and an external storage unit218that includes a slot for an attachable and detachable external memory. Each of the internal storage unit217and the external storage unit218constituting the storage unit212is implemented using a storage medium such as a memory (for example, a MicroSD (registered trademark) memory) of a flash memory type, a hard disk type, a multimedia card micro type, or a card type, a random access memory (RAM), or a read only memory (ROM). 
The external input-output unit213is an interface with all external apparatuses connected to the smartphone200and is directly or indirectly connected to other external apparatuses by communication or the like (for example, Universal Serial Bus (USB), IEEE1394, Bluetooth (registered trademark), radio frequency identification (RFID), infrared communication (Infrared Data Association (IrDA) (registered trademark)), Ultra Wideband (UWB) (registered trademark), or ZigBee (registered trademark)) or through a network (for example, the Ethernet (registered trademark) or a wireless local area network (LAN)). For example, the external apparatuses connected to the smartphone200include a wired/wireless headset, a wired/wireless external charger, a wired/wireless data port, a memory card and a subscriber identity module (SIM)/user identity module (UIM) card connected through a card socket, an external audio and video apparatus connected through an audio and video input/output (I/O) terminal, a wirelessly connected external audio and video apparatus, a smartphone connected in a wired/wireless manner, a personal computer connected in a wired/wireless manner, and an earphone. The external input-output unit213can deliver data transferred from the external apparatuses to each constituent in the smartphone200or transfer data in the smartphone200to the external apparatuses. The GNSS reception unit214receives GNSS signals transmitted from GNSS satellites ST1to STn, executes a position measurement calculation process based on the received plurality of GNSS signals, and detects a position that includes a latitude, a longitude, and an altitude of the smartphone200in accordance with an instruction from the main control unit220. In a case where positional information can be acquired from the wireless communication unit210or the external input-output unit213(for example, a wireless LAN), the GNSS reception unit214can detect the position using the positional information. The motion sensor unit215comprises, for example, a three-axis acceleration sensor and detects a physical motion of the smartphone200in accordance with an instruction from the main control unit220. By detecting the physical motion of the smartphone200, a movement direction or an acceleration of the smartphone200is detected. The detection result is output to the main control unit220. The power supply unit216supplies power stored in a battery (not illustrated) to each unit of the smartphone200in accordance with an instruction from the main control unit220. The main control unit220comprises a microprocessor, operates in accordance with the control program and the control data stored in the storage unit212, and manages and controls each unit of the smartphone200. The microprocessor of the main control unit220has the same function as the system control unit11. In addition, the main control unit220has a mobile communication control function of controlling each unit of a communication system and an application processing function for performing voice communication or data communication through the wireless communication unit210. The application processing function is implemented by operating the main control unit220in accordance with the application software stored in the storage unit212. 
For example, the application processing function is an infrared communication function of performing data communication with an opposing apparatus by controlling the external input-output unit213, an electronic mail function of transmitting and receiving electronic mails, or a web browsing function of browsing a web page. In addition, the main control unit220has an image processing function such as displaying a video on the display and input unit204based on image data (data of a still image or a motion image) such as reception data or downloaded streaming data. The image processing function refers to a function of causing the main control unit220to decode the image data, perform image processing on the decoding result, and display an image on the display and input unit204. Furthermore, the main control unit220executes display control for the display panel202and operation detection control for detecting the user operation through the operation unit207and the operation panel203. By executing the display control, the main control unit220displays an icon for starting the application software or a software key such as a scroll bar or displays a window for creating an electronic mail. The scroll bar refers to a software key for receiving an instruction to move a display part of a large image or the like that does not fit in the display region of the display panel202. In addition, by executing the operation detection control, the main control unit220detects the user operation through the operation unit207, receives an operation with respect to the icon and an input of a text string in an input field of the window through the operation panel203, or receives a request for scrolling the display image through the scroll bar. Furthermore, by executing the operation detection control, the main control unit220has a touch panel control function of determining whether the operation position on the operation panel203is in the overlapping part (display region) in overlap with the display panel202or the other edge part (non-display region) not in overlap with the display panel202and controlling the sensitive region of the operation panel203or a display position of the software key. In addition, the main control unit220can detect a gesture operation with respect to the operation panel203and execute a preset function depending on the detected gesture operation. The gesture operation is not a simple touch operation in the related art and means an operation of drawing a trajectory by the finger or the like, designating a plurality of positions at the same time, or drawing a trajectory for at least one of the plurality of positions as a combination thereof. The camera unit208includes the imaging unit50in the digital camera illustrated inFIG.1. Captured image data generated by the camera unit208can be stored in the storage unit212or be output through the external input-output unit213or the wireless communication unit210. In the smartphone200illustrated inFIG.9, the camera unit208is mounted on the same surface as the display and input unit204. However, the mount position of the camera unit208is not for limitation purposes. The camera unit208may be mounted on a rear surface of the display and input unit204. In addition, the camera unit208can be used in various functions of the smartphone200. For example, an image acquired by the camera unit208can be displayed on the display panel202, or the image of the camera unit208can be used as one of operation inputs of the operation panel203. 
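Returning to the touch panel control function described above, the determination of whether an operation position falls in the display region or the non-display (edge) region is, at its core, a rectangle hit test. The following is a minimal Python sketch of that determination only; the rectangular geometry, the names Rect and classify_touch, and the example dimensions are illustrative assumptions and not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def classify_touch(panel: Rect, display: Rect, px: int, py: int) -> str:
    """Classify an operation position on the operation panel.

    Returns "display" for the overlapping part in overlap with the display
    panel, "edge" for the non-display region, and "outside" if the point does
    not fall on the operation panel at all.
    """
    if not panel.contains(px, py):
        return "outside"
    return "display" if display.contains(px, py) else "edge"

# Example: an operation panel slightly larger than the display panel it covers.
panel = Rect(0, 0, 1080, 2400)
display = Rect(20, 20, 1040, 2360)
print(classify_touch(panel, display, 5, 5))      # -> "edge"
print(classify_touch(panel, display, 500, 900))  # -> "display"
```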
In addition, in a case where the GNSS reception unit214detects the position, the position can be detected by referring to the image from the camera unit208. Furthermore, by referring to the image from the camera unit208, an optical axis direction of the camera unit208of the smartphone200can be determined, or the current usage environment can be determined without using the three-axis acceleration sensor or using the three-axis acceleration sensor. The image from the camera unit208can also be used in the application software. Besides, image data of a still image or a motion image to which the positional information acquired by the GNSS reception unit214, voice information (may be text information acquired by performing voice to text conversion by the main control unit or the like) acquired by the microphone206, attitude information acquired by the motion sensor unit215, or the like is added can be stored in the storage unit212or be output through the external input-output unit213or the wireless communication unit210. Even in the smartphone200having the above configuration, the focusing control can be performed with high frequency. As described above, at least the following matters are disclosed in the present specification. While corresponding constituents and the like in the embodiment are shown in parentheses, the present invention is not limited thereto. (1) An imaging apparatus comprising an imaging element (imaging element5) including a first light-receiving region (pixel line63) in which a pixel group including focal point detection pixels (the focal point detection pixel61band the focal point detection pixel61c) is arranged and a second light-receiving region (pixel line62) in which a pixel group not including the focal point detection pixels is arranged, and a processor (system control unit11), in which the processor is configured to perform a first control (recording imaging control) of causing the imaging element to perform imaging in a first exposure time period (exposure time period EX1), and perform a second control (display imaging control) of causing the imaging element to perform imaging by setting an exposure time period of the second light-receiving region as the first exposure time period and setting an exposure time period of the first light-receiving region as a second exposure time period (exposure time period EX2). (2) The imaging apparatus according to (1), in which the second exposure time period and the first exposure time period are different. (3) The imaging apparatus according to (2), in which the second exposure time period is longer than the first exposure time period. (4) The imaging apparatus according to any one of (1) to (3), in which each of the first light-receiving region and the second light-receiving region is configured with a plurality of pixel lines. (5) The imaging apparatus according to any one of (1) to (4), in which the processor is configured to perform the first control a plurality of times, the first control including a preceding first control and a subsequent first control that is the first control performed next to the preceding first control, and perform the second control at least once between the first control and a subsequent first control performed next to the preceding first control. 
(6) The imaging apparatus according to (5), in which the processor is configured to continuously perform the second control a plurality of times between the preceding first control and the subsequent first control performed next to the preceding first control, and continue exposure of the first light-receiving region during the plurality of times of the second control. (7) The imaging apparatus according to (6), in which the processor is configured to output a signal from the pixel group of the second light-receiving region after a start of the plurality of times of the second control for each of the plurality of times of the second control. (8) The imaging apparatus according to any one of (1) to (7), in which the processor is configured to use image data based on a signal output from the imaging element by the first control for recording and displaying, and use image data based on a signal output from the pixel group of the second light-receiving region by the second control for only displaying out of recording and displaying. (9) The imaging apparatus according to (8), in which the processor is configured to use a first signal output from the pixel group of the first light-receiving region by the first control for driving a focus lens arranged between the imaging element and a subject, and use a second signal output from the pixel group of the first light-receiving region by the second control for driving the focus lens. (10) The imaging apparatus according to (9), in which the processor is configured to, based on a first drive amount of the focus lens decided based on the first signal output from the pixel group of the first light-receiving region by the first control and a second drive amount of the focus lens decided based on the second signal output from the pixel group of the first light-receiving region by the second control after the first control, predict a drive amount of the focus lens in a subsequent first control, and set a weighting coefficient of the first drive amount used for predicting the drive amount to be smaller than a weighting coefficient of the second drive amount. (11) The imaging apparatus according to (10), in which the processor is configured to control the weighting coefficient of the first drive amount based on an imaging condition in the first control. (12) The imaging apparatus according to any one of (1) to (11), in which the processor is configured to set the second exposure time period in accordance with brightness of a subject imaged by the imaging element, perform the first control a plurality of times, the first control including a preceding first control and a subsequent first control that is the first control performed next to the preceding first control, perform the second control at least once between the first control and the subsequent first control performed next to the preceding first control, and increase imaging sensitivity of the imaging element in the second control in a case where appropriate exposure of the first light-receiving region decided in accordance with the brightness of the subject is not obtained in the second control performed once even by setting the second exposure time period to an upper limit value (longest exposure time period) per second control. 
(13) The imaging apparatus according to (12), in which the processor is configured to, in a case where the appropriate exposure is not obtained even by setting the second exposure time period to the upper limit value and setting the imaging sensitivity to a predetermined upper limit value, continuously perform the second control a plurality of times between the preceding first control and the subsequent first control performed next to the preceding first control and continue exposure of the first light-receiving region during the plurality of times of the second control. (14) The imaging apparatus according to (12), in which the processor is configured to, in a case where the appropriate exposure is not obtained even by setting the second exposure time period to the upper limit value and setting the imaging sensitivity to a predetermined upper limit value, perform the second control by setting the second exposure time period to the upper limit value and setting the imaging sensitivity to the predetermined upper limit value as long as information based on signals output from the focal point detection pixels satisfies a predetermined condition. (15) The imaging apparatus according to (14), in which the processor is configured to, in a case where the appropriate exposure is not obtained and the predetermined condition is not satisfied even by setting the second exposure time period to the upper limit value and setting the imaging sensitivity to the predetermined upper limit value, continuously perform the second control a plurality of times between the first control and the subsequent first control and continue exposure of the first light-receiving region during the plurality of times of the second control. (16) An imaging control method of controlling an imaging element including a first light-receiving region in which a pixel group including focal point detection pixels is arranged and a second light-receiving region in which a pixel group not including the focal point detection pixels is arranged, the imaging control method comprising performing a first control of causing the imaging element to perform imaging in a first exposure time period, and performing a second control of causing the imaging element to perform imaging by setting an exposure time period of the second light-receiving region as the first exposure time period and setting an exposure time period of the first light-receiving region as a second exposure time period. (17) A non-transitory computer recording medium storing a n imaging control program for controlling an imaging element including a first light-receiving region in which a pixel group including focal point detection pixels is arranged and a second light-receiving region in which a pixel group not including the focal point detection pixels is arranged, the imaging control program causing a processor to perform a first control of causing the imaging element to perform imaging in a first exposure time period, and perform a second control of causing the imaging element to perform imaging by setting an exposure time period of the second light-receiving region as the first exposure time period and setting an exposure time period of the first light-receiving region as a second exposure time period. 
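Items (12) to (15) above describe a decision sequence for the second exposure time period and the imaging sensitivity when appropriate exposure of the first light-receiving region cannot be reached. The following Python sketch illustrates one possible reading of that sequence; the function name, the numeric example and the representation of the required exposure as a time-gain product are assumptions made for illustration only.

```python
def plan_second_control(required_exposure: float,
                        ex2_limit: float,
                        gain_limit: float,
                        phase_signal_ok: bool):
    """Plan the first-light-receiving-region exposure for one second control.

    required_exposure : exposure (time x gain) giving appropriate exposure for
                        the current subject brightness (assumed known).
    ex2_limit         : upper limit of the second exposure time period EX2 per
                        second control.
    gain_limit        : predetermined upper limit of imaging sensitivity.
    phase_signal_ok   : whether information based on the focal point detection
                        pixel signals satisfies the predetermined condition.

    Returns (ex2, gain, span_multiple_controls).
    """
    if required_exposure <= ex2_limit:
        return required_exposure, 1.0, False   # EX2 alone reaches appropriate exposure
    gain = required_exposure / ex2_limit
    if gain <= gain_limit:
        return ex2_limit, gain, False          # item (12): raise imaging sensitivity
    if phase_signal_ok:
        return ex2_limit, gain_limit, False    # item (14): accept the limits
    return ex2_limit, gain_limit, True         # items (13)/(15): keep exposing across
                                               # several consecutive second controls

print(plan_second_control(1 / 30, 1 / 60, 8.0, phase_signal_ok=False))
```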
EXPLANATION OF REFERENCES
1: imaging lens
2: stop
4: lens control unit
5: imaging element
8: lens drive unit
9: stop drive unit
10: imaging element drive unit
11: system control unit
14, 207: operation unit
15: memory control unit
16: memory
17: digital signal processing unit
20: external memory control unit
21: recording medium
22a: display controller
22b: display surface
22: display device
24: control bus
25: data bus
40: lens device
50: imaging unit
60: imaging surface
61a: normal pixel
61b, 61c: focal point detection pixel
61: pixel
62, 63: pixel line
64: drive circuit
65: signal processing circuit
71, 72: frame period
100A: main body unit
100: digital camera
200: smartphone
201: casing
202: display panel
203: operation panel
204: display and input unit
205: speaker
206: microphone
208: camera unit
210: wireless communication unit
211: call unit
212: storage unit
213: external input-output unit
214: GNSS reception unit
215: motion sensor unit
216: power supply unit
217: internal storage unit
218: external storage unit
220: main control unit | 65,788 |
11861875 | DETAILED DESCRIPTION OF EMBODIMENTS Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the spirit and scope of the disclosed embodiments. Referring now to the drawings, and more particularly toFIG.1through8, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method. FIG.1illustrates a block diagram of a system (100) for one or more adaptive image transformations with respect to a given context, in accordance with an example embodiment. Although the present disclosure is explained considering that the system (100) is implemented on a server, it may be understood that the system (100) may comprises one or more computing devices, such as a laptop computer, a desktop computer, a notebook, a workstation, a cloud-based computing environment and the like. It will be understood that the system100may be accessed through one or more input/output interfaces104-1,104-2. . .104-N, collectively referred to as I/O interface (104). Examples of the I/O interface (104) may include, but are not limited to, a user interface, a portable computer, a personal digital assistant, a handheld device, a smartphone, a tablet computer, a workstation, and the like. The I/O interface (104) are communicatively coupled to the system (100) through a network (106). In an embodiment, the network (106) may be a wireless or a wired network, or a combination thereof. In an example, the network (106) can be implemented as a computer network, as one of the different types of networks, such as virtual private network (VPN), intranet, local area network (LAN), wide area network (WAN), the internet, and such. The network (106) may either be a dedicated network or a shared network, which represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), and Wireless Application Protocol (WAP), to communicate with each other. Further, the network (106) may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices. The network devices within the network (106) may interact with the system (100) through communication links. The system (100) supports various connectivity options such as BLUETOOTH®, USB, ZigBee and other cellular services. The network environment enables connection of various components of the system (100) using any communication link including Internet, WAN, MAN, and so on. In an exemplary embodiment, the system (100) is implemented to operate as a stand-alone device. In another embodiment, the system (100) may be implemented to work as a loosely coupled device to a smart computing environment. The components and functionalities of the system (100) are described further in detail. 
In the preferred embodiment, the system (100) is configured for one or more adaptive image transformations with respect to a given context and maintaining aesthetic sense of the transformed image. The system automatically learns the content and aesthetics required for a business context from already available domain samples and perform one or more required transformations on the input image to produce one or more output images, maintaining, preserving and composing the content and aesthetics demands in each of the output images. Herein, the system is configured to convert user defined transformation and context requirements into context-metadata to perform context image transformation with hybrid machine learning (ML) models using the context-metadata. Further, the system is configured to create context-aware automated transformation workflow using hybrid models based on the requirements. The system (100) comprises at one or more databases (112) and one or more hardware processors (108) which are communicatively coupled with the at least one memory (102) to execute a plurality of modules (110) therein. Herein, the input/output interface (104) is configured to receive at least one image to perform one or more context-adaptive image transformations, and one or more preference of a user to specify one or more content factors and one or more aesthetic factors of a transformed image. It is to be noted that the adaptive image transformation refers to learning the context required for the business and performing one or more image transformations to obtain the results aligned to the context automatically. It would be appreciated that the user can choose one of the contexts already learnt by the system to create a transformation workflow. Further, the user can also create a new context, if not already available, by providing samples for the context/domain. This context specifies the content and aesthetic demands to be considered while performing image transformations. Herein, the content factors represent one or more objects of importance for the business context in low dimensional embedding space for example representations of cars, dogs, and children etc. Further, the aesthetic factors represent photography style and design composition of image demanded by the business context in low dimensional embedding space. For example, photographic styles such as portrait/close-up/candid, location of text in images, light exposure of objects, presence, and location of salient objects in images etc. ReferringFIG.2, a functional flow diagram (200) to illustrate network training, wherein the system (100) is configured to train a content learning network based on a predefined set of sample images to extract one or more content factors from the received at least one image. Further, the system (100) is configured to train an aesthetics learning network based on the predefined set of sample images to extract aesthetic factors from the received at least one image, and a translation network based on the extracted content factors and aesthetic factors of the predefined set of sample images to derive context factors. The content learning network is a deep neural network based on ConvNet architecture. The content learning network includes a series of convolutional layers, each accompanied with normalization and pooling layers, and are followed by one or more linear transformation layers to produce a final low dimensional vector. 
The content learning network layer takes one or more images as input and produces a corresponding n-dimensional vector representation. ReferringFIG.3, a schematic diagram (300), wherein the system (100) is configured to identify at least one region of interest (RoI) from the received at least one image based on the received one or more preferences of the user. Herein, a RoI proposal module of the system (100) takes in the input image, one or more user preferences such as RoI method, max-RoI size etc. and extracts RoI's based on the preferences chosen. The RoI proposal module uses either selective search or sliding window to extract RoIs. ReferringFIGS.4(a)&4(b), a schematic diagram (400), wherein the system (100) is configured to extract one or more content factors and one or more aesthetic factors from the at least one identified RoI using the trained content learning network and aesthetics learning network. Herein, the input image is passed to the content learning network already pretrained on domain samples, to extract the content factors (embedding) from the image. Similarly, a saliency map is extracted from the input image using image processing techniques and the map is passed to an aesthetic learning network already pretrained on saliency maps of domain samples, to extract the aesthetics factors. To extract the content factors important for the given domain or business context, the content learning network is trained with one or more sample images. The training is optimized in such a way that the Euclidian distance between representation of any pair of samples of the domain/context is minimum. The training is performed till the representations of sample images are clustered closely in the embedding space. After the training, the mean value of all the representations in the cluster is taken/identified as content factors for the context or domain. Further, the aesthetics learning network is trained based on the predefined set of sample images to extract aesthetic factors from the received at least one image. The aesthetics learning network is a deep neural network based on ConvNet architecture. It includes a series of convolutional layers, each accompanied with normalization and pooling layers, and are followed by one or more linear transformation layers to produce a final low dimensional vector. The aesthetics learning network layer takes one or more images as input and produces a corresponding n-dimensional vector representation. To extract the aesthetic factors important for the given domain or business context, the aesthetics learning network is trained with saliency maps of one or more sample images. The training is optimized in such a way that the Euclidian distance between representation of any pair of samples of the domain/context is minimum. The training/optimization is performed till the representations of samples are clustered closely in the embedding space. After the optimization, the mean value of all the representations in the cluster is taken/identified as aesthetic factors for the context or domain. Moreover, the translation network is trained with extracted content and aesthetic factors of the predefined set of sample images. The translation network is a multilayer perceptron network (MLP) that includes multiple densely connected linear layers to produce a n-dimensional vector representation. 
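The content learning network and the aesthetics learning network described above share the same structure: convolutional blocks, each accompanied by normalization and pooling, followed by linear layers that produce an n-dimensional embedding. The following is a minimal sketch of such a network written with PyTorch purely for illustration (the disclosure does not name a framework); the layer sizes, the embedding dimension and the pairwise-distance training objective shown are assumptions. For the aesthetics network, in_channels would be 1, since it consumes saliency maps.

```python
import torch
from torch import nn

class EmbeddingNet(nn.Module):
    """ConvNet that maps an image (or saliency map) to an n-dimensional embedding."""

    def __init__(self, in_channels: int = 3, embed_dim: int = 128):
        super().__init__()
        def block(cin, cout):
            # convolution accompanied by normalization and pooling
            return nn.Sequential(
                nn.Conv2d(cin, cout, kernel_size=3, padding=1),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            )
        self.features = nn.Sequential(block(in_channels, 32),
                                      block(32, 64),
                                      block(64, 128))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(128, embed_dim))

    def forward(self, x):
        return self.head(self.features(x))

# Training objective sketch: pull embeddings of same-domain samples together so
# that the Euclidean distance between any pair of domain samples is small.
def intra_domain_loss(embeddings: torch.Tensor) -> torch.Tensor:
    return torch.pdist(embeddings, p=2).mean()

net = EmbeddingNet()
batch = torch.randn(8, 3, 224, 224)          # 8 domain sample images
loss = intra_domain_loss(net(batch))
loss.backward()
```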
To extract the context factors important for the given domain or business context, the translation network is trained with content factors and aesthetic factors of all domain samples already extracted and clustered in separate embedding spaces. The training is optimized in such a way that the Euclidean distance between the output representations of any pair of samples of the domain/context is minimum (while maximizing the distance from other clusters of different contexts). The training/optimization is performed till the representations of samples are clustered closely in the embedding space. After the optimization, the mean value of all the representations in the cluster is taken/identified as 'context factors' for the context or domain. Referring toFIG.5, a functional block diagram (500), wherein the system (100) is configured to derive at least one context from the extracted one or more content factors and one or more aesthetic factors using a pre-trained translation network. Herein, the extracted content factor and aesthetic factor representations of the input image are passed to the translation network already pretrained on content and aesthetic factors of domain samples, to extract the context factors/representation. Further, the system (100) is configured to identify a context-aware workflow from the derived at least one context and the received one or more user preferences. Herein, the context-aware workflow defines a sequence of context-aware transformation tasks, created/identified based on the user preferences. Furthermore, the system (100) is configured to calculate a similarity metric for the extracted at least one content and aesthetic factor from the at least one RoI. It would be appreciated that the system (100) calculates a similarity metric to validate the similarity between the at least one RoI and the domain sample images in terms of both content and aesthetics. The RoIs that are very similar to the domain sample images are revealed by the similarity metric and are chosen for other downstream transformation tasks. Herein, the process of calculating the similarity metric includes:
a) For each RoI, the content, aesthetic and context factors are extracted using the content learning network, aesthetic learning network and context learning network, respectively.
b) The proximity of these content, aesthetic, and context factors from the centroid of the content, aesthetic, and context clusters (in separate embedding spaces) of sample domain images is measured using a squared Euclidean distance, called the proximity score.
c) The proximity scores calculated for the content, aesthetic, and context factors of each RoI are weighted with an importance score defined by the user, which ranges between 0 and 1. Herein, the importance score indicates what weightage (between 0 and 1) is to be given to content, aesthetics, and context. The default importance scores are 0.1, 0.1, and 0.8 for the content, aesthetic, and context proximity scores.
d) One or more RoIs, as defined in the user preferences, having a good similarity score (i.e., minimum proximity) will be chosen as candidates for further downstream transformation tasks.
In one aspect, the system (100) is configured to perform one or more context-adaptive image transformations based on the identified context-aware workflow to get a transformed image. The transformed image preserves the content and aesthetics required for the context.
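The proximity-score calculation in steps a) to d) above reduces to a weighted sum of squared Euclidean distances from the domain centroids, with default importance scores of 0.1, 0.1 and 0.8. The sketch below illustrates that calculation and the selection of the best RoIs; the function names and the synthetic example data are illustrative only.

```python
import numpy as np

def proximity(factor: np.ndarray, centroid: np.ndarray) -> float:
    """Squared Euclidean distance of one RoI's factor vector from a cluster centroid."""
    d = factor - centroid
    return float(np.dot(d, d))

def rank_rois(rois, centroids, weights=(0.1, 0.1, 0.8), k=3):
    """rois      : list of (content, aesthetic, context) factor vectors per RoI.
    centroids : (content, aesthetic, context) centroids of the domain samples.
    Lower weighted proximity means better similarity; return the best k indices."""
    scores = []
    for content, aesthetic, context in rois:
        score = (weights[0] * proximity(content, centroids[0])
                 + weights[1] * proximity(aesthetic, centroids[1])
                 + weights[2] * proximity(context, centroids[2]))
        scores.append(score)
    return sorted(range(len(scores)), key=scores.__getitem__)[:k]

rng = np.random.default_rng(0)
centroids = tuple(rng.normal(size=128) for _ in range(3))
rois = [tuple(c + rng.normal(scale=s, size=128) for c in centroids)
        for s in (0.1, 1.0, 0.3, 2.0)]
print(rank_rois(rois, centroids, k=2))   # indices of the RoIs closest to the domain clusters
```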
In another aspect, the system (100) is configured to append a text to the identified at least one RoI based on the extracted saliency map, the calculated similarity metric for the at least one content, the aesthetic factor, and the received one or more preference of the user. ReferringFIG.6, an example, wherein the RoI proposal module takes in the input image, input text and one or more user preferences such as strides, padding, boundary, output Choices (k) etc. The RoI proposal module creates a sliding window based on input text size and user preferences and proposes ‘n’ RoI's (bounding box coordinates) of approximately equal size, by moving the sliding window over the input image. The portions of input image defined by RoI coordinates are extracted and saliency maps are generated. For each saliency map, a saliency score is calculated as mean of all pixel values in saliency map. Also, the input text is added to the input image in each of the ‘n’ RoI's (one at a time) producing ‘n’ intermediate output images. The intermediate output images are passed through pretrained content and aesthetics networks to extract content and aesthetic factors, for which similarity score is calculated with already available domain samples. Based on the saliency score and the similarity score of content and aesthetic factors of RoI, the RoI selection module selects ‘k’ intermediate output images as final output images, where ‘k’ is specified as a user preference (output choices). Further, the system (100) is configured to convert the determined at least one context requirement and the received one or more user preferences into a context-metadata to perform one or more context-adaptive image transformations using the context-metadata with the pre-trained hybrid ML model. ReferringFIG.7, illustrating a functional block diagram (700), wherein the system (100) is configured to perform an aesthetic edge correction on final output image using an edge cropping network that crops-off unwanted parts in the left, right, top and bottom edges. If the edges are aesthetically within the predefined range, no cropping will be performed. For e.g. unwanted borders, captions, or parts of subjects in the edges will be cropped off by aesthetic edge correction. Herein, the edge cropping network is a Spatial Transformer Network (STN) based deep neural network that produces an output image by removing unwanted portions in the edges of input image. A localization network is a regular CNN which calculates the transformation (cropping) parameters from the input image and a grid generator generates a grid of coordinates in the input image corresponding to each pixel from the output image and a sampler crops the input image based on the grid coordinates calculated. ReferringFIG.8, to illustrate a processor-implemented method (800) for adaptive image transformation with respect to a given context and maintaining aesthetic sense of the transformed image is provided. Initially, at the step (802), at least one image is received via an input/output interface to perform one or more context-adaptive image transformations, and one or more preference of a user is received to specify one or more content factors and one or more aesthetic factors of a transformed image. In the preferred embodiment, at the next step (804), a content learning network, an aesthetics learning network and a translation network are trained. 
It is to be noted that the content learning network and aesthetic learning network are trained based on a predefined set of sample images to extract one or more content factors and aesthetic factor from the received at least one image. Whereas the translation network is trained with content and aesthetic factors extracted from predefined set of domain samples. In the preferred embodiment, at the next step (806), at least one region of interest (RoI) is identified from the received at least one image based on the received one or more preferences of the user. In the preferred embodiment, at the next step (808), extracting one or more content factors and one or more aesthetic factors from the at least one identified RoI using the trained content learning network and aesthetics learning network. In the preferred embodiment, at the next step (810), deriving at least one context from the extracted one or more content factors and one or more aesthetic factors using a pre-trained translation network. In the preferred embodiment, at the next step (812), a context-aware workflow is identified from the derived at least one context and the received one or more user preferences. In the preferred embodiment, at the next step (814), calculating a similarity metric for the extracted at least one content and aesthetic factor to determine at least one context requirement. In the preferred embodiment, at the next step (816), performing one or more context-adaptive image transformations based on the identified context-aware workflow and calculated similarity metric to get a transformed image, wherein the transformed image preserving the content and aesthetics required for the context. In another aspect, wherein a saliency map is extracted for the identified at least one RoI to append a text to the identified at least one RoI based on the extracted saliency map, the calculated similarity metric for the at least one content and aesthetic factor and the received one or more preference of the user. The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims. The embodiments of present disclosure herein address unresolved problem of transforming images in a quick and consistent way. In the existing state of the art, there are various challenges in image transformation task such as maintaining both content and aesthetic sense in performing transformations, frequently changing contextual requirements, extending solution to new contexts/domains for new clients/markets, high volume of images to be processed, consistent output across a context/domain and a quicker time-to-market. The proposed system automatically learns the content and aesthetics required for a business context from already available domain samples and perform one or more required transformations on the input image to produce one or more output images, maintaining, preserving and composing the content and aesthetics demands in each of the output images. 
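As a concrete illustration of the text-placement flow described with reference toFIG.6, the sketch below slides a text-sized window over a saliency map, scores each candidate RoI by the mean of the covered saliency pixels, and keeps the k best candidates. Preferring low-saliency windows is an assumption made here for illustration; in the described system the saliency score is combined with the content and aesthetic similarity scores before selection.

```python
import numpy as np

def propose_text_rois(saliency: np.ndarray, text_h: int, text_w: int,
                      stride: int = 32, k: int = 3):
    """Slide a text-sized window over the saliency map and score each RoI.

    The saliency score of an RoI is the mean of the saliency-map pixels it
    covers; the k lowest-saliency RoIs are returned on the assumption that
    text should avoid salient objects.
    """
    h, w = saliency.shape
    candidates = []
    for y in range(0, h - text_h + 1, stride):
        for x in range(0, w - text_w + 1, stride):
            score = float(saliency[y:y + text_h, x:x + text_w].mean())
            candidates.append((score, (x, y, text_w, text_h)))
    candidates.sort(key=lambda c: c[0])
    return [roi for _, roi in candidates[:k]]

saliency = np.random.default_rng(1).random((480, 640))
print(propose_text_rois(saliency, text_h=60, text_w=240, k=2))
```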
It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means can include both hardware means, and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs. The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various modules described herein may be implemented in other modules or combinations of other modules. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise. Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. 
Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media. | 24,558 |
11861876 | DETAILED DESCRIPTION What is set forth below is what is currently considered to be the preferred embodiment or the best representative examples of the claimed invention. In careful consideration of the future and present representations or modifications to the embodiments and preferred embodiments, any changes or modifications that make substantial changes in function, purpose, structure or results are intended to be encompassed by the claims of this patent. The preferred embodiments of the present invention will now be described by way of example only with reference to the accompanying drawings. A stereoscopic video in the 2D+Z format is composed of video content screens or video content frame sequences and depth screens. The present invention is based on the 2D+Z format. The compressed stereoscopic video comprises: a sequence of video content screens or video content frames obtained from a typical 2D+Z video, wherein the video content screens or video content frames refer to still images captured in the video at specific moments, and wherein the “video” refers to electronic media for recording, copying, replaying, broadcasting and displaying a series of still images, also called a movie, and the “still images” refer to images in a still state; a sequence of depth screens obtained from a typical 2D+Z video; and a sequence of additional shape screens provided by the present invention, wherein the shape screens may be generated by AI or manually provided by human intervention. The details of generating the shape screens will be described in detail below. The 2D+Z format of the stereoscopic video, also known as 2D-plus-Depth format. Each 2D image frame is supplemented with a grayscale depth map, which indicates whether a specific pixel in the 2D image needs to be displayed in front of the display (white) or behind the screen plane (black). 256 grayscale levels may establish a smooth depth gradient in the image. Processing in the monitor uses this input to render the multi-view image. The video content screen and the depth screen each are information in a typical 2D+Z format video, wherein the video content screen (also known as video content frame) represents each 2D image frame; and the depth screen represents each grayscale depth map. Different from the traditional 2D+Z format, the stereoscopic video involved in the present invention also comprises a newly added shape screen, which represents shape information of each video frame. The present invention sets up a shape identification for each shape screen, and the shape identification is represented by following 20-bit bytes: CCNNNNVVVTTTTTTTTXYZ (1) wherein, CC represents classification identification, which is represented by two Chinese bytes, such as man (), puppy (), table (), etc., used for distinguishing one object from other objects. NNNN is a four-byte object recognition identification, such as 0001, 0002, etc. The same object recognition identification, such as 0001, represents that both of them are the same object. VVV is a three-byte direction vector, wherein each byte is represented by 1, 0, or −1, respectively representing the direction of the object, for example, (−1, 0, 0) representing that the object is to the left; (1, 0, 0) representing that the object is to the right; (0, −1, 0) representing that the object is downward; (0, 1, 0) representing that the object is upward; (0, 0, −1) representing that the object is backward; and (0, 0, 1) representing that the object is forward. 
FIG.1is a schematic diagram showing objects with the same direction using the shape library of the present invention. Among them,FIG.1(a)is the human-face video content frames obtained by photograph taking, andFIG.1(b)is the human-face video content frames generated by the shape recognition identifications with the same direction vector according toFIG.1(a). The generated human-face video content frames have the same classification identification CC as inFIG.1(a), indicating that they are both men; have different object recognition identifications, that is, have different NNNN values, indicating that they are not the same object or not the same person; have the same or substantially the same direction vector VVV, indicating that the direction of the object or person inFIG.1(b)is consistent with that inFIG.1(a). In the example ofFIG.1,FIG.1(a, b) (1)-FIG.1(a, b)(6) have direction vectors: (1) (1, −1, 1); (2) (−1, 0, 1); (3) (−1, 1, 1); (4) (0, 0, 1); (5) (0, 0, 1); and (6) (0, −1, 1), respectively. TTTTTTTT is an octet time identification, which may represent time in the following format: HH: MM: SS.ss; wherein HH represents hours, MM represents minutes, SS represents seconds, and ss represents a multiple of 10 milliseconds. XYZ is a three-bit byte coordinate identification, which represents a coordinate within the object. A typical shape recognition list video file format provided by the present invention is that: the video content frame and the depth frame are superimposed with the shape identification list; and a shape identification library is composed of a file set formed by superimposing each video content frame and depth frame with the shape identification list. The present invention provides an encoder that uses a clustering algorithm to find and extract video content frames, depth frames, and shape identification list information from all content screens, shape screens, and depth screens of a complete video, and superpose these information to compose the shape recognition list video file format, to generate a shape library for unmasking of an object. The shape library is stored at the header of the compressed file. For a specific time: HH: MM: SS.ss, the object of the shape screen is encoded in the shape identification list format in the above expression (1) to perform the unmasking. Clustering is an unsupervised learning process and does not require training on sample data. After designing a suitable distance measurement method, the target data set may be clustered. The goal of the above clustering algorithm is to find objects closely associated with each other and distinguish them, mainly to identify the value of the degree of association between the two objects, that is, the correlation distance measurement value. Hierarchical clustering or similar layered clustering algorithms may be selected to continuously merge two most similar groups; or K-means clustering algorithm may be selected, which randomly selects K points, obtains K categories, calculates the average of the resulting categories, to calculate the new K value, cycles the process until the same K value is found, and finally completes the clustering process. The clustering algorithm is a relatively typical example, and other clustering algorithms, such as fuzzy K-means clustering algorithm, dichotomous K-means clustering algorithm, Canopy clustering algorithm, etc., may all achieve the purpose of the present invention. The present invention further provides an encoder and a decoder. 
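Before turning to the encoder and decoder, the 20-byte shape identification CCNNNNVVVTTTTTTTTXYZ defined above can be packed and parsed as plain text. The sketch below is illustrative only: the one-character encoding of the direction components -1/0/+1, the placeholder classification "MN" (the disclosure uses two Chinese bytes), and the single-digit coordinate range are assumptions.

```python
from dataclasses import dataclass

# Assumed one-character encodings for the direction-vector components -1/0/+1;
# the disclosure only states that each byte is 1, 0 or -1.
DIR_CHARS = {-1: "-", 0: "0", 1: "+"}

@dataclass
class ShapeID:
    classification: str   # CC       : two-byte classification identification
    object_id: int        # NNNN     : object recognition identification
    direction: tuple      # VVV      : direction vector, components in {-1, 0, 1}
    time: str             # TTTTTTTT : "HHMMSSss" (HH:MM:SS.ss without separators)
    xyz: tuple            # XYZ      : one character per coordinate (assumed digits 0-9)

    def encode(self) -> str:
        return (self.classification
                + f"{self.object_id:04d}"
                + "".join(DIR_CHARS[v] for v in self.direction)
                + self.time
                + "".join(str(c) for c in self.xyz))

def decode(s: str) -> ShapeID:
    rev = {v: k for k, v in DIR_CHARS.items()}
    return ShapeID(s[:2], int(s[2:6]),
                   tuple(rev[c] for c in s[6:9]),
                   s[9:17],
                   tuple(int(c) for c in s[17:20]))

sid = ShapeID("MN", 1, (0, 0, 1), "00010525", (1, 2, 3))
print(sid.encode())                 # -> "MN000100+00010525123" (20 bytes)
assert decode(sid.encode()) == sid
```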
For a specific time: HH: MM: SS.ss, the encoder is used for converting the information into a defined specific format; wherein: Encoder At step101, a clustering algorithm is used for finding and extracting video content frames, depth frames, and shape identification list information from all content screens, shape screens, and depth screens of a complete video, and superpose these information to compose a shape recognition list video file format. At step102, a shape library is further generated from the composed shape recognition list video file for unmasking of an object. The shape library is stored at the header of the compressed file. The header also indicates the size of the composite shape library. At step103, based on the shape library obtained at step102, a classification algorithm is called to assign a classification identification (CC), an object identification (NNNN), a direction vector (VVV) and a position XYZ to each frame. The classification algorithm (also referred to as “discriminator”) is an AI algorithm, which receives the video content screens, depth screens and shape screens as inputs; and takes the classification identification (CC), object identification (NNNN), direction vector (VVV) and position XYZ as outputs. At step104, a function that maps the inputs to the outputs is trained through a set of training data samples. The “trained” refers to the process of determining a function based on a series of input and output data samples. The training data samples are stored in the form of a database or a shape library. The shape library refers to a database or a form of file structure, which uses an index that contains a file set of all possible outputs of the classification algorithm, such as classification identification (CC), object identification (NNNN), direction vector (VVV) and position XYZ, and at step105, uses this index to label the shape screens, wherein the similar shape screens are classified under the same index. The classification algorithm includes, but is not limited to, a convolutional neural network (CNN) method, a neural network that uses a periodicity or a time recurrent neural network (“RNN”). At step106, the following shape identification (“ID”) list format is used for encoding the unmasked object of the shape screen: CCNNNNVVVTTTTTTTTXYZ. The encoding step refers to converting information into a defined specific format: CCNNNNVVVTTTTTTTTXYZ. Decoder Contrary to the steps of the encoder, the decoder is used for converting the specific format obtained after being encoded into the content represented by the information. At step201, the encoded shape identification list of a certain frame is read; at step202, the shapes related to the above classification identification (CC), object identification (NNNN) and direction vector (VVV) are copied from the shape library to the position XYZ of the shape screen; at step203, the depth screen is reconstructed and the shape screen is generated through the trained discriminator; at step204, the complete video content is restored according to the combination of the time identification (TTTTTTTT) and the video content frames at the specific time HH:MM:SS.ss represented by the time identification, and the decoding steps are completed. FIG.2is a schematic diagram of objects with the same shape and direction generated by AI using the shape screens of the present invention. 
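Returning to encoder step101, the grouping of similar shape screens can be done with the K-means procedure summarized earlier (pick K centers, assign, recompute the averages, and repeat until stable). The sketch below assumes each shape screen has already been reduced to a fixed-length descriptor vector; how that descriptor is obtained is not specified here, and all names and sizes are illustrative.

```python
import numpy as np

def kmeans(features: np.ndarray, k: int, iters: int = 50, seed: int = 0):
    """Plain K-means used to group similar shape screens (encoder step 101).

    features : one descriptor vector per shape screen.
    Returns (centroids, labels); the centroids can serve as shape-library entries.
    """
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # assign each shape screen to its closest centroid
        d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([features[labels == j].mean(axis=0)
                        if np.any(labels == j) else centroids[j]
                        for j in range(k)])
        if np.allclose(new, centroids):   # cycle until the centroids stop moving
            break
        centroids = new
    return centroids, labels

descriptors = np.random.default_rng(2).random((200, 64))
library, labels = kmeans(descriptors, k=8)
print(library.shape, np.bincount(labels))
```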
FIGS.2(a)(1)-(8) are the original 2D+Z video images, and each 2D+Z video frame and depth frame are added with a shape identification.FIGS.2(b)(1)-(8) are respectively generated by AI fromFIGS.2(a)(1)-(8).FIG.2(b)(1)-(8) correspond to the shape identification ofFIG.2(a)(1)-(8), both have the same classification identification CC and direction vector VVV, but may have the same or different object recognition identification NNNN. For example,FIG.2(a)(1)-(8) andFIG.2(b)(1)-(8) each are men, but the man in theFIG.2(b)(1)-(8) generated by AI may be different from the man inFIG.2(a)(1)-(8); it may also represent the same person when the object recognition identification NNNN is consistent. In the example ofFIG.2,FIG.2(a, b)(1)-FIG.2(a, b)(8) have direction vectors: (1) (−1, −1, 1); (2) (0, 0, 1); (3) (1, 0, 1); (4) (1, 0, 1); (5) (0, 0, 1); (6) (1, −1, 1); (7) (1, −1, 1); and (8) (1, −1, 1), respectively. AI generation methods include, but are not limited to, traditional convolutional neural network (CNN) methods, or AI Learning 3D Face Reconstruction, or reconstruction of 3D faces in a way similar to Lego fragments. It is possible to reconstruct the 3D faces in reference to the method of direct volume regression. The coherence of the segments may be also improved by means of long and short-term memory (“LSTM”) using periodic neural networks or time recurrent neural network (“RNN”). In another embodiment of the present invention, the objects may be classified according to the depth information of the objects in the video frame, which provides additional information and improves the quality of reconstruction. Specifically,FIG.5schematically shows an example of classifying the depth screens according to the present invention. The discriminator501is repeatedly trained by using the video content screen502, the depth screen503,504and the shape screen505of a specific object (i.e., classification algorithm); in the process of each repeated training, the depth screen503of the object may be transformed into the depth screen504by random distortion; for example, a part of the depth screen is randomly removed to simulate a hole on the depth screen503. The trained discriminator507receives the simulated depth screen506(i.e., the depth screen504) as inputs, and generates the shape screen509(i.e., the shape screen505) and the reconstructed depth screen508. The reconstructed depth screen508retains the information of the depth screen503before random distortion. In other words, the trained discriminator507linked to the shape library may repair the missing information to improve the quality of the depth screen. The various component embodiments of the present invention may be implemented by hardware, or by software modules running on one or more processors, or by a combination thereof. Those skilled in the art should appreciate that a microprocessor or a digital signal processor (DSP) may be used in practice to implement the method for improving the video resolution and quality and some or all of the functions of some or all of the components of the video encoder and the decoder of the display terminal according to the embodiments of the present invention. The present invention may also be implemented as a device or device program (for example, a computer program and a computer program product) for executing part or all of the methods described herein. Such a program for implementing the present invention may be stored on a computer-readable medium, or may have the form of one or more signals. 
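As an illustration of the depth-screen distortion used when training the discriminator ofFIG.5, the sketch below randomly removes rectangular parts of a depth screen to simulate holes, while the undistorted screen is kept as the reconstruction target. The hole count, hole size and the use of 0 to mark missing depth are assumptions for illustration.

```python
import numpy as np

def distort_depth(depth: np.ndarray, n_holes: int = 3, max_size: int = 40,
                  seed=None) -> np.ndarray:
    """Randomly remove parts of a depth screen to simulate holes.

    The original depth screen is kept unchanged as the reconstruction target;
    this distorted copy is fed to the discriminator during training.
    """
    rng = np.random.default_rng(seed)
    out = depth.copy()
    h, w = depth.shape
    for _ in range(n_holes):
        hh = int(rng.integers(5, max_size))
        ww = int(rng.integers(5, max_size))
        y = int(rng.integers(0, h - hh))
        x = int(rng.integers(0, w - ww))
        out[y:y + hh, x:x + ww] = 0          # missing depth marked as 0
    return out

depth = np.random.default_rng(3).integers(0, 256, size=(240, 320)).astype(np.uint8)
simulated = distort_depth(depth, seed=3)
print((simulated == depth).mean())            # fraction of pixels left intact
```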
Such a signal may be downloaded from an Internet website, or provided on a carrier signal, or provided in any other forms. For example,FIG.3shows a server, such as an application server, that may implement the present invention. The server traditionally comprises a processor1010and a computer program product or computer readable medium in the form of a memory1020. The memory1020may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM. The memory1020has a storage space1030for the program codes1031for implementing any method step in the above method. For example, the storage space1030for the program codes may comprise various program codes1031respectively used for implementing various steps in the above method. These program codes may be read from or written into one or more computer program products. These computer program products comprise program code carriers such as hard disks, compact disks (CD), memory cards, or floppy disks. Such computer program products are usually portable or fixed storage units as described with reference toFIG.4. The storage unit may have storage segments, storage spaces, etc. arranged similarly to the memory1020in the server ofFIG.3. The program codes may be for example compressed in an appropriate form. Generally, the storage unit comprises computer readable codes1031′, that is, codes that may be read by, for example, a processor such as1010, which, when run by a server, cause the server to perform the steps in the method described above. The terms “one embodiment”, “an embodiment” or “one or more embodiments” referred to herein means that a specific feature, structure, or characteristic described in combination with the embodiment is included in at least one embodiment of the present invention. In addition, it should be noted that the word examples “in one embodiment” herein do not necessarily all refer to the same embodiment. The above description is not intended to limit the meaning or scope of the words used in the following claims that define the present invention. Rather, the description and illustration are provided to help understand the various embodiments. It is expected that future changes in structure, function, or results will exist without substantial changes, and all these insubstantial changes in the claims are intended to be covered by the claims. Therefore, although the preferred embodiments of the present invention have been illustrated and described, those skilled in the art will understand that many changes and modifications may be made without departing from the claimed invention. In addition, although the term “claimed invention” or “present invention” is sometimes used herein in the singular form, it will be understood that there are multiple inventions as described and claimed. | 16,856 |
11861877 | DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT A novel system and method for identifying a machine tool having processed a wood piece will be described hereinafter. Although the invention is described in terms of specific illustrative embodiment(s), it is to be understood that the embodiment(s) described herein are by way of example only and that the scope of the invention is not intended to be limited thereby. Referring toFIG.1, an embodiment of a system for identifying a machine tool having processed a wood piece10is shown. The exemplary system10comprises a computerized device20connected to information gathering systems or devices30, such as but not limited to a laser, a scanner, a camera or any other detection sensor or device. The computerized device20may be embodied as a computer, a server, a controller, or any device comprising a central processing unit (CPU), one or more graphics processing units (GPU) or any other processing unit known in the art. The information gathering systems or devices30may be embodied as, but not limited to, a scanner, a laser, a camera, a thermometer or any other detection sensor or device. In an embodiment of the invention, only one scanner30is comprised in the system10. The information gathering devices30are configured to scan at least a surface of a wood piece, such as but not limited to a plank. The information gathering devices30are in communication with the computerized device20and may send and/or receive information to said computer20using any known communication system or protocol. In some embodiments, the information gathering devices30are used to gather quantifiable and/or qualitative information about each processed wood piece. Accordingly, the type of gathered information may generally be any visual detail and/or geometrical data, such as but not limited to depth, width, length and general shape. In a preferred embodiment, the information gathering devices30are positioned at the end of the production line or at a step of the production line where the tools and/or equipment having processed the wood pieces or planks being conveyed are to be determined. The identification of the tools and/or equipment having processed the planks is generally performed in a single iteration, thus typically by executing a single scanning process. The computerized device20typically comprises a CPU and/or at least one GPU, transient and/or non-transient memory, input/output ports, etc. The computerized device20is configured to execute instructions of one or more programs40and to receive data from the information gathering devices30. The program40is configured to use the data from the information gathering devices30to detect marks or signs on one or more faces of the processed wood piece, to associate a type or a geometric shape to each detected mark or sign and to automatically identify which tool or machine of the production line was used to create the detected mark or sign. In some embodiments, the program40is configured to make use of artificial intelligence (also referred to as "AI") functions and/or algorithms. In such embodiments, the program40is configured to be trained to associate marks or signs present on the wooden piece to operations of a specific machine or a specific tool of a machine based on the information received by the information gathering devices30. In yet other embodiments, the program40may be configured to be executed on any standard sawmill or production line optimizer instead of being executed by the computerized device20as described above. 
Referring now toFIG.2, a method for identifying a machine tool having processed a wood piece100is shown. The method100comprises scanning one or more faces of the wooden piece and/or capturing one or more images and/or geometrical data of one or more faces of the analyzed wooden piece110. The method100further comprises processing each of the captured images or the geometrical data to identify marks, signs or traits present on at least one face of the wooden piece120. The method100further comprises analyzing the characteristics of each mark130and may further comprise categorizing the said characteristics140in predetermined categories or in categories to be determined by the program40. The method100further comprises associating a specific machine or specific tool of a machine to the identified mark based on the identified categories and/or characteristics of the wooden piece150. Understandably, the steps of processing the captured images120, of analyzing the characteristics of each mark130and/or categorizing the said characteristics140may be performed using an AI framework or an AI algorithm as described herein below. The step to scan the wood piece110may comprise capturing a digital representation as an image and/or videos and/or may comprise geometrical data, such as coordinates of the scanned surfaces, edges of the wood piece, and/or thermographic data. Understandably, any type of data which may be captured using known sensors may be used within the scope of the present invention. In some embodiments, the processing of each of the captured images or the geometrical data to identify marks120may further comprise executing an artificial intelligence algorithm trained to detect marks, signs and/or traits of tools used on the wood piece in an image of a surface of the said wood piece. Understandably, such an AI algorithm may be trained using training methods known in the art such as human-based confirmation methods. In yet other embodiments, the analyzing of the characteristics of each mark130may further comprise executing an AI algorithm trained to identify characteristics of the detected marks, signs and/or traits used on the wood piece in an image of a surface of the said wood piece. The identified characteristics of the mark may be the level of curvature, the width, depth and/or length of the mark, the distance between different marks, the variation of the said identified characteristics over time or any other characteristics that may be identified by a human or by the AI algorithm. The step to associate a categorized mark with one or more specific machines or specific tools may further comprise using an unsupervised AI algorithm to determine the categories and/or characteristics of the different wood pieces. In other embodiments, the step may use a supervised AI algorithm having predetermined categories. The step to associate a specific machine or specific tool of a machine to the identified mark150may further use an identification table. The identification table preferably comprises records having the identification of the equipment and the position of the equipment in the production line. The identification table further comprises the output data of each of the faces of the wood piece after being processed, such as the characteristics and/or categories associated with the outputted face. An exemplary identification table is shown inFIG.6and is described thereafter.
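For illustration only, the following Python sketch shows one possible way steps 110 to 140 of the method100 could be organized in software. The mark_detector and mark_classifier objects (for example, trained AI models) and the field names used here are assumptions introduced solely for the sketch and do not appear in the disclosure.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Mark:
    face: str                 # which scanned face the mark was found on
    characteristics: Dict     # e.g. curvature, width, depth, spacing
    category: str = ""

def detect_and_categorize(face_images: Dict[str, object],
                          mark_detector, mark_classifier) -> List[Mark]:
    """Steps 110-140: for each scanned face, detect marks, gather their
    characteristics and assign each mark a category."""
    marks = []
    for face, image in face_images.items():                    # step 110: scanned faces
        for characteristics in mark_detector.detect(image):    # step 120: detect marks
            mark = Mark(face=face, characteristics=characteristics)   # step 130
            mark.category = mark_classifier.categorize(characteristics)  # step 140
            marks.append(mark)
    return marks
```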
Broadly, to associate the specific machine or tool of a machine to a detected mark, the characteristics and/or categories associated with the detected mark are looked up in the identification table to find a matching machine. The method100may further comprise identifying a machine by deduction. Thus, if a first tool matches a first scanned face of the wood piece and a second tool matches a second scanned face, the method100may comprise the step of deducing that the wood piece was processed by a specific machine as the machine is associated with the first and the second matching tools. In some embodiments, the method100comprises calculating the matching probabilities. In such embodiments, the method100comprises comparing the matching probabilities to a predetermined level of comfort or acceptance. Understandably, more than one machine may be associated with a processed wood piece. In such event, the path or portions of the path followed by the wood piece during the production process may be identified. As such, the specific machines or tools having performed operations on the wood pieces are identified. In some embodiments, the method100further comprises detecting irregularities in marks or patterns on a surface of a wood piece. The detected irregularities may be quantifiable and/or qualitative characteristics being outside of acceptable error margins or ranges. As such, the machine may comprise any type of wood processing machines or tools, which may include cutting machines, such as but not limited to, saw machines, trimmers, reducers, sandblasting tools, routers, circular saws, linear saws (bandsaws), cylindrical canters, conic canters, profiling heads, planer knives, etc., or any combinations thereof. Understandably, scanning a plurality of surfaces of a plank or wood piece increases the precision of the identification process. In a preferred embodiment, at least two surfaces of a plank are scanned, typically the top surface and the back surface. It may be appreciated that the method100does not require a preliminary scan of a log to efficiently determine the origin of a processed wooden piece. In some embodiments, multiple analysed wooden pieces may originate from various different logs and may yet be identified to have been processed by the same equipment or tool. The time taken to identify a problem in a log processing line may thus be reduced compared to other prior art methods as defective tools and equipment may directly be identified by the method. Referring now toFIG.3, an exemplary process200for transforming a log into a plurality of planks is shown. The exemplary process generally involves a plurality of machines and/or tools used to process a log into a plurality of planks. As a non-limiting example, the process200may comprise using a chipper canter tool210, such as a conic canter tool, profiling the log220, sawing the log, typically using thin or band saws230, splitting or separating the log240, chipping using a cylindrical canter tool250and splitting the log260into a plurality of planks, such as by using circular saws. After one or more of the steps are performed, the outputted plank may have marks and/or properties due to the actions performed by the machine, tools and/or combination of machines and tools used. Now referring toFIGS.4A to4F, illustrations of exemplary plank surfaces50comprising tool marks60are shown.
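Continuing the illustration, the sketch below shows one possible way to perform the table look-up of step 150 with matching probabilities compared against a level of acceptance, combining evidence from all scanned faces by deduction. The scoring rule, the 0.8 acceptance value, and the record field names ("face", "category", "equipment") are assumptions; the description only requires that matching probabilities be compared with a predetermined level of comfort or acceptance.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def identify_equipment(marks: List[Dict], identification_table: List[Dict],
                       acceptance: float = 0.8) -> List[Tuple[str, float]]:
    """Step 150 sketched with matching probabilities: look up each categorized
    mark in the identification table and keep only the equipment whose share
    of matching marks reaches the chosen level of acceptance."""
    votes = defaultdict(int)
    for mark in marks:
        for record in identification_table:
            if record["face"] == mark["face"] and record["category"] == mark["category"]:
                votes[record["equipment"]] += 1   # evidence from this scanned face
    total = max(len(marks), 1)
    # equipment matched on several faces accumulates more votes (deduction)
    return [(equipment, count / total) for equipment, count in votes.items()
            if count / total >= acceptance]
```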
The illustrations ofFIGS.4A and4Billustrate tool marks60which may typically be produced by a conic canter, also referred to as a chipper canter. The illustration ofFIG.4Cillustrates tool marks60which may typically be caused by a band saw. The illustration ofFIG.4Dillustrates tool marks60which may typically be caused by a cylindrical canter. The illustrations ofFIGS.4E and4Fillustrate tool marks60which may typically be caused by a circular saw. Understandably and based on the above-mentioned figures, each tool may create tool marks60which are different from tool marks60caused by other tools. The equipment used that controls the movement and/or position of the tools is yet another factor which may provide different tool marks60. Referring back toFIGS.4A and4B, exemplary wooden boards50having marks60of a typical chipper canter are shown. In such a specific example, the program40may detect as the characteristics of the mark60the radius of curvature, the length, and the direction of curvature of the mark60. The program40may further detect that the board50ofFIG.4Ais from a certain side of a log whereas the board50ofFIG.4Bis from another side of the log due to the direction of the marks60. The position of a cutting pattern60may further indicate the orientation of a plank50on the conveyor, such as the vertical position of a plank50relative to a tool used. Given that a saw may comprise a shaft for rotation and that a plank or log50may be processed over or under said shaft, the resulting marks60may be positioned differently. As such, marks60located on the upper or lower side of an analyzed surface may provide information to better identify the origin of a wooden plank50. Such detected characteristics are used to match with a tool or a machine of the production line. Referring toFIG.4C, an exemplary wooden board50is shown. The illustrated wooden board has been cut or sawn using a standard profiler head220. In such a specific example, the analysis of the characteristics of each mark130detects that the marks60are substantially perpendicular to the length of the board50and that such marks60are substantially rectilinear. A further characteristic may be the number of consecutive similar marks present on the board50face. Such characteristics are typically determined by the program40using artificial intelligence techniques, such as deep learning algorithms or neural networks. Referring now toFIGS.4E and4F, an exemplary wooden board50having marks60of a typical circular saw is shown. In such a specific example, the program40may detect as one of the characteristics the direction of rotation of the circular saw. A plank surface may further comprise more than one type of tool mark60. Combining the multiple tool marks60of a given surface in the analysis may provide more efficient results since more parameters may be used to pinpoint the tools and equipment used. Understandably, given that similar tools or machines may cut planks at different steps of the standardized process and that the positioning of the tools relative to the processed planks may not be the same at each step, analysing the direction of the tool marks may provide clarification as to which tool and equipment caused each mark. Referring now toFIGS.5A to5F, an exemplary process of cutting a wood piece into two is shown. In such process, the wood piece may be cut by a tool4, such as a circular saw, into two or more pieces, such as pieces #1and #2ofFIG.5A.
The tool may further comprise a shaft8and may thus process wood pieces over and under said shaft8. As shown inFIG.5B, after being sawn, the pieces #1and #2fall on the conveyor, not shown, or any other moving mechanism toward the next step of the process. When falling on the conveyor, the pieces may be in one or another direction as seen inFIG.5C, thus complicating the detection of the marks as such marks may now be reversed. One or more surfaces of the resulting pieces are then scanned by a sensor as seen inFIG.5D. The scanned surfaces may therefore have similar marks. However, such marks may have differing directions or orientation. Furthermore, the position of the mark in relation to the center34of a scanning area may also differ from one plank to another coming from the same original log or wood piece. Now referring toFIGS.5E and5F, the system may be configured to evaluate the positioning of the marks60, the orientation and/or direction of the marks60and the type of marks60based on a central point34of the conveyor. Using a specific point34on the conveyor for the analysis generally allows an AI algorithm/framework to better isolate the processing location of a plank50relative to a tool or equipment used. In such embodiments, the program40is configured to detect the direction of the marks and to identify marks of two different pieces as matching even if the marks are in opposite directions. The program40may be further configured to process more than one scanned surface of each piece to match the marks and thus determine the tools and equipment used. Now referring toFIG.6, an exemplary identification table80is provided. The identification table80may comprise exemplary information about a face output, such as an upper or lower face output. The table80may further comprise the equipment and the positioning of the equipment associated with the said output information. The identification table80may be used to categorize marks60of scanned wood plank surfaces to determine the tools used and the processing location in a wooden log processing assembly. Accordingly, a given type of mark60may be associated with certain parameters such as the tool or equipment used, the positioning of a plank relative to the tool when processed and so on. It may be appreciated that the identification table80may allow the use of more than one analysed surface of a given plank50or for planks50having been processed by the same equipment/tool. For example, as illustrated in the table ofFIG.6, each row84represents a different analysed plank50while each column88represents the location of analysed marks60and the associated equipment and processing location. In the exemplary equipment column88of the embodiment ofFIG.6, the term “OLI” refers to equipment used for sawing a log such as, but not limited to, thin or band saws typically associated with the primary breakdown of the processed log. The term “TBL” refers to equipment used for splitting a log with circular saws, thus equipment generally associated with the secondary breakdown of the processed log. Understandably, any other terms used for each of the parameters of the table80may be used. The categories of an identification table may further be created by users, by an AI algorithm, or by a combination of both. In yet other embodiments, other parameters or characteristics may be identified by the program40based on the scanned surfaces.
As discussed above, such characteristics or categories may be self-detected (unsupervised training) or be predetermined (supervised training). For example, the depth of the tool marks may differentiate two similar pieces of equipment from one another. The plurality of analysis parameters may further help identify if there are problems with a tool or a piece of equipment by comparing the tool marks of a plank against the marks expected from the said machines. In another embodiment, the program40is programmed to detect predetermined conditions or parameters associated with specific characteristics from data of the peripheral devices30. As an example, the program40may be configured to analyse the presence or absence of each predetermined parameter in the received data and determine the most likely tool and equipment used among a list of given tools and equipment. In further embodiments, the program40is programmed to provide analysis by executing instructions of a deep-learning algorithm or neural networks. In some embodiments, the program40is configured to execute a known artificial intelligence platform, such as TensorFlow® from Google®, Azure® from Microsoft®, Watson® from IBM®, and to train such an artificial intelligence platform to detect the marks using feedback from a professional quality control operator or other relevant user. In the training process, the artificial intelligence platform may categorize the data received from the sensors or capturing device30. The training of the artificial intelligence platform may be performed by a user providing feedback indicating whether the analysis outcome of the program40is successful or unsuccessful. Generally, the more feedback is provided to the artificial intelligence platform, the more precise the analysis result may be. The program40may be further configured to be executed in an unsupervised learning mode. This mode comprises certain risks, as it may initially direct the analysis of the algorithm toward less than efficient parameters to analyse, thus taking more time before readjusting the analysis method after receiving sufficient feedback. Nonetheless, this mode may also provide unexpected results by analysing identifying parameters typically not identified as relevant by user operators. In other embodiments, the program40may be configured to be trained or to learn in a supervised mode. This mode may provide lower analysis time as the program40categorizes the data into predetermined categories which are based on relevant data from production lines. While illustrative and presently preferred embodiment(s) of the invention have been described in detail hereinabove, it is to be understood that the inventive concepts may be otherwise variously embodied and employed and that the appended claims are intended to be construed to include such variations except insofar as limited by the prior art. | 20,160
11861878 | DESCRIPTION OF THE PREFERRED EMBODIMENTS A vehicle and trailer maneuvering system or trailering assist system and/or driving assist system operates to capture images exterior of the vehicle and a trailer being towed by the vehicle and may process the captured image data to determine a path of travel for the vehicle and trailer and to detect objects at or near the vehicle and in the predicted path of the vehicle, such as to assist a driver of the vehicle in maneuvering the vehicle and trailer in a rearward (or forward) direction. The system includes an image processor or image processing system that is operable to receive image data from one or more cameras and may provide an output to a display device for displaying images representative of the captured image data. Optionally, the system may provide a rearview display or a top down or bird's eye or surround view display or the like. Referring now to the drawings and the illustrative embodiments depicted therein, a vehicle10includes a trailer assist system12that is operable to assist in backing up or reversing the vehicle with a trailer hitched to the vehicle via, for example, a hitch14, and that may maneuver the vehicle10and trailer16toward a desired or selected location. The trailer maneuver assist system12includes at least one exterior viewing vehicle-based imaging sensor or camera, such as a rearward viewing imaging sensor or camera18(and the system may optionally include multiple exterior viewing imaging sensors or cameras, such as a sideward/rearward viewing camera at respective sides of the vehicle), which captures image data representative of the scene exterior of the vehicle10, which includes the hitch14and/or trailer16, with the camera18having a lens for focusing images at or onto an imaging array or imaging plane or imager of the camera (FIG.1). The imager comprises an imaging array of rows and columns of photosensing elements. Optionally, a forward viewing camera may be disposed at the windshield of the vehicle10and view through the windshield and forward of the vehicle10, such as for a machine vision system (such as for traffic sign recognition, headlamp control, pedestrian detection, collision avoidance, lane marker detection and/or the like). The trailer maneuver assist system12includes a control or electronic control unit (ECU) or processor that is operable to process image data captured by the camera or cameras and may detect objects or the like and/or provide displayed images at a display device for viewing by the driver of the vehicle (the control and/or display device may be part of or incorporated in or at an interior rearview mirror assembly of the vehicle, or the control and/or the display device may be disposed elsewhere at or in the vehicle). The data transfer or signal communication from the camera to the ECU may comprise any suitable data or communication link, such as a vehicle network bus or the like of the equipped vehicle. Trailer calibration is an important step in understanding and recognizing the features and aspects of a trailer hitched to a towing vehicle. The recognized features/aspects may be stored in memory of the vehicle and recalled and used in other trailering applications to provide, for example, smooth operation of the trailer. Referring now toFIG.2, a trailer assist system for a vehicle includes performing trailer angle detection (TAD) and performing calibration of the system for the particular trailer hitched to the vehicle. 
During calibration, the system determines a number of important features of the trailer. For example, the system captures features of the hitched trailer and estimates the trailer hitch position. The system may also determine the jack-knife angle of the trailer (i.e., the trailer angle at which the trailer is in danger of jack-knifing), estimate a beam length of the trailer, and/or determine a trailer collision angle (i.e., the trailer angle at which the trailer is in danger of colliding with the towing vehicle). The trailer assist system may utilize aspects described in U.S. provisional applications, Ser. No. 62/705,966, filed Jul. 24, 2020 and titled VEHICULAR TRAILERING ASSIST SYSTEM WITH TRAILER COLLISION ANGLE DETECTION, and/or Ser. No. 62/705,967, filed Jul. 24, 2020 and titled VEHICULAR TRAILERING ASSIST SYSTEM WITH HITCH BALL DETECTION, which are hereby incorporated herein by reference in their entireties. The system provides an optimal calibration drive where all calibration features of the hitched trailer are determined, detected, and/or estimated. Different features are determined at various positions of the calibration drive or maneuver as each feature may exhibit a variety of characteristics during the ride, which may otherwise cause estimation of the feature to be difficult. The event-based optimal calibration drive described herein may cover all features prior to completion of the calibration. As illustrated in the state diagram ofFIG.2, the system may include three primary states: a calibration state, a scanning state, and an angle detection state. Optionally, the calibration state only occurs (i.e., the system only performs calibration) when a new trailer is introduced to the system. A new trailer is defined as any trailer not previously hitched to the vehicle or calibrated by the system. For example, the operator may indicate to the system (via, for example, a user-actuatable input such as a touchscreen display disposed within the vehicle or the like) that a new trailer is hitched to the vehicle and the system should enter calibration. Alternatively, the system may determine that the hitched trailer is new (e.g., via processing of image data captured by the rear-viewing camera or data from other sensors). When a previously calibrated trailer is hitched to the vehicle, the system may automatically detect that trailer or alternatively the operator may select that trailer from a set of stored trailer profiles (i.e., stored on non-volatile memory disposed within the vehicle). Thus, the system may recall parameters for a previously calibrated trailer without the need to calibrate the trailer a second time. Optionally, the user may force (e.g., via actuation of a user input) the system to perform a recalibration to replace current calibration values. After calibration, the system enters the scanning state and/or the angle detection state. During the scanning state, the system determines an initial angle of the hitched trailer relative to the vehicle and transitions to the angle detection state. From the angle detection state, the system may transition to a tracking lost state, which, as discussed in more detail below, includes a scanning sub-state and a dummy angle sub-state. Referring now toFIG.3, the calibration state performs three primary functions: trailer calibration, hitch ball detection, and collision angle detection. 
During calibration, the vehicle operator (or the vehicle when operating autonomously or semi-autonomously) drives the towing vehicle straight for a threshold distance before turning the vehicle left or right sharply. The system may perform trailer calibration, hitch ball detection, and collision angle detection sequentially. For example, each of these algorithms may be performed in a sub-state of the calibration state. The sub-states may include a drive straight sub-state, a turn left/right sub-state, and a please wait sub-state. Once the particular algorithm has completed (i.e., trailer calibration, hitch ball detection, or collision angle detection), the system automatically transitions to the next sub-state. As illustrated inFIGS.3and4, during calibration, the vehicle, when calibrating a new trailer, performs a calibration maneuver. The calibration maneuver or drive involves the towing vehicle driving in a straight line (i.e., with a zero or near zero steering wheel angle) for a threshold distance and/or at a threshold speed for a threshold period of time prior to making a turn (e.g., a 90 degree turn or a 180 degree turn such as a U-turn to the left or right). After completing the turn, the vehicle again drives in a straight line forward. Different portions of the calibration maneuver may be used to perform different aspects of the trailer assist system. For example, as illustrated inFIG.4, trailer calibration may be performed during a first drive straight portion. Angle sampling, hitch range detection, and collision angle detection may be performed during the turn left/right portion (i.e., a U-turn). The hitch ball calculation may be performed during the final drive straight portion. During the first portion of the calibration maneuver, the system may operate in the drive straight sub-state. In this state, a kinematic model begins providing approximate trailer angles (i.e., the angle of the trailer relative to the vehicle) based on, for example, the steering wheel angle of the towing vehicle and speed of the towing vehicle. The initial trailer angle is helpful to determine or estimate when or whether the trailer angle has become zero degrees (i.e., the trailer is in-line with the towing vehicle). During the calibration maneuver, the system may process a series of frames of image data. Each frame of image data in the series is captured at a point in time after the previous frame in the series. That is, the first frame of image data in the series is captured at the earliest point in time of all of the frames of image data in the series and the last frame of image data in the series is captured at the latest point in time of all the frames of image data in the series. The series of frames of image data may include consecutive frames of image data captured by the camera or instead may "skip" any number of frames in between. For example, the system may process every fifth frame or every tenth frame or the like. Once the system determines that the trailer angle during the first straight portion of the calibration maneuver is zero or approximately zero, a trailer calibration algorithm may begin processing Canny edges of frames of image data captured by the rear-view camera (e.g., top view images that include at least a portion of the trailer hitch). Optionally, the system processes a frame of image data to convert the frame to an edge image. For example,FIG.5Aillustrates an exemplary frame of top-view image data andFIG.5Billustrates a corresponding edge image of the frame ofFIG.5A.
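As a non-limiting illustration of such a kinematic model, the sketch below propagates an approximate trailer angle from the vehicle speed and steering angle using a standard single-axle trailer kinematic model with a forward Euler step; the particular model form and the geometry values are assumptions introduced for the example, not values taken from the disclosure.

```python
import math

def propagate_trailer_angle(theta, v, delta, dt,
                            wheelbase=3.0, hitch_offset=1.2, beam_length=4.0):
    """One Euler step of a common single-axle trailer kinematic model.
    theta: trailer angle relative to the vehicle [rad], v: vehicle speed [m/s],
    delta: road-wheel steering angle [rad], dt: time step [s].
    The geometry defaults (wheelbase, rear-axle-to-hitch offset, trailer beam
    length) are illustrative only."""
    vehicle_yaw_rate = (v / wheelbase) * math.tan(delta)
    theta_dot = -(v / beam_length) * math.sin(theta) \
                - (1.0 + (hitch_offset / beam_length) * math.cos(theta)) * vehicle_yaw_rate
    return theta + theta_dot * dt
```

Integrating this step over the drive straight portion gives the approximate trailer angle used to judge when the trailer has become in-line with the towing vehicle.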
The edge image may remove all content from the image data other than detected edges (e.g., canny edges). The edge image may be provided to the trailer calibration module for processing. The system may process only a portion of the image data for edge detection and transmit only that portion to the trailer calibration algorithm. For example, a size of the original input top view image may be 640 pixels by 400 pixels while a region of interest (e.g., a portion of the hitch including the hitch ball) may be, for example, 201 pixels by 221 pixels. As illustrated inFIG.5B, the system may crop the image prior to or after performing the edge detection. That is, as shown inFIG.5B, the system may crop or otherwise remove portions of the frame of image data and/or edge image.FIG.5Cillustrates an example of the edge image after cropping where only the region of interest remains. After completing the initial drive straight portion of the calibration maneuver, the system may enter the turn left/right sub-state where the operator or system begins a turn such as a U-turn to either the right or the left. Once in this sub-state, the system determines a dummy trailer angle based on an assumed hitch point. The system, at this point in calibration, uses this rough estimation as the initial trailer angle. During this sub-state, the system determines whether the hitch ball has been detected. When the hitch ball has not yet been detected, the system begins an angle sampling algorithm. This algorithm stores a captured image of the trailer and/or trailer hitch into vehicle memory. The system uses the kinematic model as an angle reference. The system may periodically store a frame of image data. For example, the system may store consecutive frames of image data or skip any number of frames of image data between storing another frame. Once the kinematic model reaches, for example, 25 degrees, the system halts collection of image data and transitions to the please wait sub-state. During the next state (e.g., the please wait sub-state), the system provides the collected image data (e.g., a plurality of frames of images data) to a hitch ball detection module. Optionally, a total of nine stored images, each captured at a different point during the calibration turn and thus each at a different trailer angle, is transmitted to the hitch ball detection module (e.g., the images captured until the vehicle performed a threshold portion of the turn, such as 25 degrees or 30 degrees). Each frame of image data may be captured and/or stored at any resolution. For example, the frames of image data may have a resolution of 640×400.FIG.6illustrates an example of nine frames of image data, each converted to an edge image frame (and optionally cropped) and each captured when the trailer angle was a different value from each other frame of captured image data. In addition to angle sampling, the hitch range detection algorithm determines a hitch range (i.e., a range in the image data where hitch is likely located). Once the hitch range detection and angle sampling is complete, a collision angle algorithm determines a collision angle (i.e., the trailer angle that would result in the towing vehicle colliding with the trailer). To determine the collision angle, the system may require that the trailer angle is in a steady state condition while the vehicle is moving (i.e., the background is moving). 
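A minimal sketch of the edge-image and cropping step described above, assuming OpenCV's Canny detector, is given below; the Canny thresholds and the region-of-interest origin are illustrative assumptions, with only the 201 by 221 pixel region size taken from the description.

```python
import cv2

def edge_roi(top_view_bgr, roi=(220, 90, 201, 221)):
    """Convert a 640x400 top-view frame to a Canny edge image and keep only
    the hitch region of interest.  roi is (x, y, width, height); the origin
    used here is an assumed placement of the hitch region."""
    gray = cv2.cvtColor(top_view_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)   # threshold values are illustrative
    x, y, w, h = roi
    return edges[y:y + h, x:x + w]     # cropped edge image for trailer calibration
```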
For example, after a first portion of the turn of the calibration maneuver (e.g., 30 degrees), the trailer will turn at the same rate as the vehicle and the trailer angle will remain constant until the vehicle begins straightening out from the turn.FIG.7illustrates an exemplary frame of image data captured while the trailer angle is steady (i.e., constant) during a portion of the turn of the calibration maneuver. The collision angle module may determine an edge hit rate during this process to obtain a boundary of the trailer. The system may determine the collision angle based on the determined boundary of the trailer relative to the rear bumper or rear portion of the vehicle that is present in the top down field of view of the camera and is distorted or curved due to the wide angle lens of the camera (e.g., a banana view). Referring now toFIG.8, the final edge frame of image data may only contain trailer regions. The system may calculate this final edge frame of image data by adding a hit rate for a series of, for example, 30 frames. Once the final edge is calculated, the system may detect or determine the boundary of the trailer/hitch using profiling. From this profile, the system may select the nearest trailer position from the dummy angle position as a suitable candidate and subsequently convert the dummy angle to a trailer angle. Next, the system transitions to the please wait sub-state. In this state, the dummy angle is provided with the assumed hitch position. The system may provide the input images from the angle sampling to a hitch ball detection algorithm. In order to provide the hitch ball detection algorithm with adequate processing time and data, the algorithm may process or execute in a part time manner over, for example, 16 frames of captured image data. During this portion, the system may display a notification to the driver via, for example, a display disposed within the vehicle. For example, the system may display a “Please Wait” notification. During this portion, the input frames of image data captured while the trailer angle was constant or steady (e.g., the nine input images) are processed in a sequential manner. Referring now toFIG.9, the system initially processes a single image and warps a specific region of interest90into a separate image. The warped image (illustrated on the right inFIG.9) may have any resolution, such as, for example, 640×150. The system may also warp, crop, and/or resize the calibration template (e.g., to a size of 203×150 pixels). The system may slide the trailer template over the sampled image that includes the warped region of interest90and determine a score for each position during the slide. The best score (e.g., the highest score) is chosen as the trailer position. The best score may indicate the best match between the trailer template and the warped region of interest90during the slide. This process may be repeated for all nine frames of captured image data, with each frame of captured image data having a different trailer angle/hitch position (and subsequent warped region of interest). The system may average the hitch ball position from each frame of captured image data.FIG.10illustrates an example of template matching between the trailer template and the captured frames of captured image data during the turn. Once the location of the hitch ball is determined after the template matching, the state of the system may transition to the scanning state. 
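The template-matching step could be sketched as follows, assuming normalized cross-correlation as the sliding score; the disclosure only specifies sliding the trailer template over the warped region of interest and selecting the best score, so the particular matching method is an assumption.

```python
import cv2
import numpy as np

def estimate_hitch_x(warped_frames, trailer_template):
    """Slide the trailer template over each warped region of interest
    (e.g. nine strips captured at different trailer angles), take the
    best-match position in each, and average the results.  Both inputs
    are assumed to be single-channel arrays of the same dtype."""
    positions = []
    for strip in warped_frames:
        result = cv2.matchTemplate(strip, trailer_template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)        # best score and location
        positions.append(max_loc[0] + trailer_template.shape[1] / 2.0)
    return float(np.mean(positions))   # averaged hitch position estimate
```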
Thus, implementations herein determine several important features or aspects of a hitched trailer via a calibration maneuver. The system may apply to a variety of applications, such as: rear-camera based trailer feature detection, hitch ball calculation, collision angle detection, trailer beam length estimation, and/or any application that requires tracking of objects moving around a pivot. The system may utilize aspects of the trailering assist systems or trailer angle detection systems or trailer hitch assist systems described in U.S. Pat. Nos. 10,755,110; 10,733,757; 10,706,291; 10,638,025; 10,586,119; 10,532,698; 10,552,976; 10,160,382; 10,086,870; 9,558,409; 9,446,713; 9,085,261 and/or 6,690,268, and/or U.S. Publication Nos. US-2020-0406967; US-2020-0356788; US-2020-0334475; US-2020-0361397; US-2020-0017143; US-2019-0297233; US-2019-0347825; US-2019-0118860; US-2019-0064831; US-2019-0042864; US-2019-0039649; US-2019-0143895; US-2019-0016264; US-2018-0276839; US-2018-0276838; US-2018-0253608; US-2018-0215382; US-2017-0254873; US-2017-0050672; US-2015-0217693; US-2014-0160276; US-2014-0085472 and/or US-2015-0002670, which are all hereby incorporated herein by reference in their entireties. The camera or sensor may comprise any suitable camera or sensor. Optionally, the camera may comprise a “smart camera” that includes the imaging sensor array and associated circuitry and image processing circuitry and electrical connectors and the like as part of a camera module, such as by utilizing aspects of the vision systems described in U.S. Pat. Nos. 10,099,614 and/or 10,071,687, which are hereby incorporated herein by reference in their entireties. The system includes an image processor operable to process image data captured by the camera or cameras, such as for detecting objects or other vehicles or pedestrians or the like in the field of view of one or more of the cameras. For example, the image processor may comprise an image processing chip selected from the EYEQ family of image processing chips available from Mobileye Vision Technologies Ltd. of Jerusalem, Israel, and may include object detection software (such as the types described in U.S. Pat. Nos. 7,855,755; 7,720,580 and/or 7,038,577, which are hereby incorporated herein by reference in their entireties), and may analyze image data to detect vehicles and/or other objects. Responsive to such image processing, and when an object or other vehicle is detected, the system may generate an alert to the driver of the vehicle and/or may generate an overlay at the displayed image to highlight or enhance display of the detected object or vehicle, in order to enhance the driver's awareness of the detected object or vehicle or hazardous condition during a driving maneuver of the equipped vehicle. The vehicle may include any type of sensor or sensors, such as imaging sensors or radar sensors or lidar sensors or ultrasonic sensors or the like. The imaging sensor or camera may capture image data for image processing and may comprise any suitable camera or sensing device, such as, for example, a two dimensional array of a plurality of photosensor elements arranged in at least 640 columns and 480 rows (at least a 640×480 imaging array, such as a megapixel imaging array or the like), with a respective lens focusing images onto respective portions of the array. The photosensor array may comprise a plurality of photosensor elements arranged in a photosensor array having rows and columns. 
Preferably, the imaging array has at least 300,000 photosensor elements or pixels, more preferably at least 500,000 photosensor elements or pixels and more preferably at least 1 million photosensor elements or pixels. The imaging array may capture color image data, such as via spectral filtering at the array, such as via an RGB (red, green and blue) filter or via a red/red complement filter or such as via an RCC (red, clear, clear) filter or the like. The logic and control circuit of the imaging sensor may function in any known manner, and the image processing and algorithmic processing may comprise any suitable means for processing the images and/or image data. For example, the vision system and/or processing and/or camera and/or circuitry may utilize aspects described in U.S. Pat. Nos. 9,233,641; 9,146,898; 9,174,574; 9,090,234; 9,077,098; 8,818,042; 8,886,401; 9,077,962; 9,068,390; 9,140,789; 9,092,986; 9,205,776; 8,917,169; 8,694,224; 7,005,974; 5,760,962; 5,877,897; 5,796,094; 5,949,331; 6,222,447; 6,302,545; 6,396,397; 6,498,620; 6,523,964; 6,611,202; 6,201,642; 6,690,268; 6,717,610; 6,757,109; 6,802,617; 6,806,452; 6,822,563; 6,891,563; 6,946,978; 7,859,565; 5,550,677; 5,670,935; 6,636,258; 7,145,519; 7,161,616; 7,230,640; 7,248,283; 7,295,229; 7,301,466; 7,592,928; 7,881,496; 7,720,580; 7,038,577; 6,882,287; 5,929,786 and/or 5,786,772, and/or U.S. Publication Nos. US-2014-0340510; US-2014-0313339; US-2014-0347486; US-2014-0320658; US-2014-0336876; US-2014-0307095; US-2014-0327774; US-2014-0327772; US-2014-0320636; US-2014-0293057; US-2014-0309884; US-2014-0226012; US-2014-0293042; US-2014-0218535; US-2014-0218535; US-2014-0247354; US-2014-0247355; US-2014-0247352; US-2014-0232869; US-2014-0211009; US-2014-0160276; US-2014-0168437; US-2014-0168415; US-2014-0160291; US-2014-0152825; US-2014-0139676; US-2014-0138140; US-2014-0104426; US-2014-0098229; US-2014-0085472; US-2014-0067206; US-2014-0049646; US-2014-0052340; US-2014-0025240; US-2014-0028852; US-2014-005907; US-2013-0314503; US-2013-0298866; US-2013-0222593; US-2013-0300869; US-2013-0278769; US-2013-0258077; US-2013-0258077; US-2013-0242099; US-2013-0215271; US-2013-0141578 and/or US-2013-0002873, which are all hereby incorporated herein by reference in their entireties. The system may communicate with other communication systems via any suitable means, such as by utilizing aspects of the systems described in U.S. Pat. Nos. 10,071,687; 9,900,490; 9,126,525 and/or 9,036,026, which are hereby incorporated herein by reference in their entireties. Optionally, the vision system may include a display for displaying images captured by one or more of the imaging sensors for viewing by the driver of the vehicle while the driver is normally operating the vehicle. Optionally, for example, the vision system may include a video display device, such as by utilizing aspects of the video display systems described in U.S. Pat. Nos. 5,530,240; 6,329,925; 7,855,755; 7,626,749; 7,581,859; 7,446,650; 7,338,177; 7,274,501; 7,255,451; 7,195,381; 7,184,190; 5,668,663; 5,724,187; 6,690,268; 7,370,983; 7,329,013; 7,308,341; 7,289,037; 7,249,860; 7,004,593; 4,546,551; 5,699,044; 4,953,305; 5,576,687; 5,632,092; 5,708,410; 5,737,226; 5,802,727; 5,878,370; 6,087,953; 6,173,501; 6,222,460; 6,513,252 and/or 6,642,851, and/or U.S. Publication Nos. US-2014-0022390; US-2012-0162427; US-2006-0050018 and/or US-2006-0061008, which are all hereby incorporated herein by reference in their entireties. 
Changes and modifications in the specifically described embodiments can be carried out without departing from the principles of the invention, which is intended to be limited only by the scope of the appended claims, as interpreted according to the principles of patent law including the doctrine of equivalents. | 24,626 |
11861879 | DETAILED DESCRIPTION OF THE INVENTION Now, at least one embodiment of the present invention is described with reference to the accompanying drawings. Duplicate descriptions are omitted for each component denoted by the same reference symbol. In this embodiment, there is described an information processing system configured to provide a service in which a plurality of accounts are created by a plurality of users, posts of a plurality of images are received from each of the plurality of accounts, and the images are made public to users who access the images from other accounts. FIG.1is a diagram for illustrating an example of an information processing system1according to at least one embodiment of the present invention. The information processing system1includes an information processing server10and an image storage19. The information processing server10is configured to communicate with at least one of customer terminals2and register, in the service, an image transmitted by a user operating each customer terminal2. Specifically, the information processing system1stores an image received from the user in the image storage19, and registers the image in a searchable manner. The information processing server10includes a processor11, a storage12, a communication unit13, and an input/output unit14. The information processing system1may be implemented by a plurality of server computers configured to execute processing of the information processing server10. The processor11is configured to operate in accordance with a program stored in the storage12. The processor11is also configured to control the communication unit13and the input/output unit14. The above-mentioned program may be provided through, for example, the Internet, or may be provided by being stored and provided in a flash memory, a DVD-ROM, or another computer-readable storage medium. The storage12is formed of memory devices, such as a RAM and a flash memory, and an external storage device, such as a hard disk drive. The storage12is configured to store the above-mentioned program. The storage12is also configured to store information and calculation results that are input from the processor11, the communication unit13, and the input/output unit14. The communication unit13implements a function of communicating with another device, and is formed of, for example, an integrated circuit for a wireless LAN or a wired LAN. Under control of the processor11, the communication unit13inputs information received from another device to the processor11and the storage12, and transmits information to another device. The input/output unit14is formed of, for example, a video controller configured to control a display output device and a controller configured to acquire data from an input device. Examples of the input device include a keyboard, a mouse, and a touch panel. The input/output unit14is configured to output display data to the display output device under the control of the processor11, and to acquire data input by the user operating the input device. The display output device is, for example, a display device connected to the outside. The image storage19is formed of, for example, an external storage device, such as a hard disk drive, or a flash memory. The image storage19is configured to store an image registered by the user under control of the information processing server10. 
The image storage19follows instructions of the information processing server10to receive an image from the information processing server10, store the image, and transmit a read-out image to the information processing server10. The image storage19is a device different from the information processing server10, but may be a part of the storage12of the information processing server10. Next, functions provided by the information processing system1are described.FIG.2is a block diagram for illustrating functions implemented by the information processing system1. The information processing system1includes an authentication module50, an image registration module51, a background identification module53, and an identicalness information output module55. In addition, the identicalness information output module55functionally includes a feature extraction module56and an identicalness information generation module57. Those functions are implemented when the processor11included in the information processing server10executes the programs stored in the storage12and controls the communication unit13and other components. The authentication module50is configured to authenticate an account of a user who accesses the information processing system1. More specifically, the authentication module50acquires information required for authentication including the account from the customer terminal2operated by the user, and authenticates whether or not the account is a proper one. After the account has been authenticated, the subsequent communication between the customer terminal2and the information processing server10is associated with the account. It is assumed that the user has created an account with respect to the information processing system1in advance. The image registration module51is configured to acquire an image from a user who accesses the information processing system1from a certain account, and store the acquired image in the image storage19in association with the account. The background identification module53is configured to identify a background area of the image registered from the account. The identicalness information output module55is configured to output identicalness information, which indicates whether or not a user owning a first account serving as a query and a user owning a second account being another account are identical, based on a background area of a first image associated with the first account and a background area of the second image associated with the second account. The feature extraction module56is configured to acquire, from each of the registered images, one global feature vector indicating a global feature of the background area of the image and a plurality of local feature vectors indicating local features of the image. The identicalness information generation module57is configured to output the identicalness information based on a first global feature vector acquired for the first image and a second global feature vector acquired for the second image. The identicalness information generation module57is also configured to output the identicalness information based on a plurality of first local feature vectors acquired for the first image and a plurality of second local feature vectors acquired for the second image. 
The identicalness information generation module57is also configured to search for at least one of second images corresponding to the first image based on the background area of at least one first image and the background areas of a plurality of second images, and to output the identicalness information based on the at least one of second images corresponding to the at least one first image. FIG.3is a flow chart for illustrating an example of processing of the image registration module51. First, the image registration module51receives an image uploaded from the customer terminal2(Step S101). The image registration module identifies an account from which the image has been transmitted (Step S102). In this case, the image registration module51may identify the account from which the image has been transmitted based on the account associated with the communication from the customer terminal2, or may identify the account from which the image has been transmitted based on information transmitted together with the image. The image registration module51stores the received image in the image storage19in association with the identified account (Step S103). FIG.4is a view for illustrating an example of an image to be registered.FIG.4is an example of an image to be registered in a marketplace service that allows customers to directly buy and sell products therebetween, and is an illustration of an image of a bag placed in a room. Next, processing for detecting a plurality of accounts owned by an identical user is described.FIG.5is a flow chart for illustrating an example of processing of the background identification module53and the feature extraction module56. The processing illustrated inFIG.5is executed for each image. In addition, the processing illustrated inFIG.5may be performed each time the image registration module51receives an image to be registered. In another case, images for which the processing illustrated inFIG.5has not been executed may be periodically detected, and the processing illustrated inFIG.5may be executed for each of the detected images. First, the background identification module53identifies the background area of the image to be processed (Step S201). The background area may be identified by so-called semantic segmentation using a neural network, or may be identified by a so-called region proposal method using a neural network. Those methods are publicly known, and hence detailed descriptions thereof are omitted. When the background area is identified, the background identification module53acquires an image obtained by filling an area (namely, foreground) other than the background area as an image (background image) of the background area of the image to be processed (Step S202). FIG.6is a view for illustrating an example of the background image. In the background image, an area of a bag, which is the foreground, is filled. In the following processing, the filled area is treated as an area including no image, and is excluded from extraction targets of the global feature vector and the local feature vector. When the background image is acquired, the feature extraction module56calculates uniqueness of the background image (Step S203). The uniqueness of the background image may be calculated by the feature extraction module56inputting the background image into a machine learning model and acquiring output of the machine learning model as a calculation result. 
In this embodiment, the machine learning model is a machine learning model implemented with machine learning, for example, AdaBoost, a random forest, a neural network, a support vector machine (SVM), or a nearest neighbor discriminator. The machine learning model is subjected to learning based on training data including learning input images and label data in advance. The training data includes: a learning input image obtained by enlarging or reducing the background image so that the background image has a predetermined number of pixels; and given label data indicating a level of the uniqueness of the background image. The uniqueness may be calculated without use of the machine learning model. For example, the level of the uniqueness may be calculated by obtaining a variance of a gradation value of the background image. In this case, as the background is closer to a single color, a variance value becomes smaller, and the uniqueness becomes lower. It is difficult to detect an ordinary pattern of, for example, a lattice, but it is possible to calculate the uniqueness at high speed. In another case, the uniqueness may be calculated based on background images similar to the background image for which the uniqueness is to be calculated. More specifically, an image search model including a plurality of background images that have already been acquired is built in advance. Then, the feature extraction module56searches for background images similar to the background image for which the uniqueness is to be calculated through use of the image search model. The feature extraction module56calculates the uniqueness based on the number of similar background images found through the search or a similarity between each of the similar background images and the background image for which the uniqueness is to be calculated. As the number of similar background images that have been found becomes smaller, or the similarity becomes smaller, the uniqueness becomes higher. When the background image has no uniqueness (the level of the uniqueness is lower than a determination threshold value) (N in Step S204), the feature extraction module56excludes the image from which this background image has been extracted from targets to be subjected to determination of accounts owned by an identical user, which is described below (Step S205), and the processing from Step S206to Step S208is not executed. The processing from Step S203to Step S205is performed in order to exclude an ordinary (low in uniqueness) background image from processing targets. The ordinary background image is highly likely to be included in images of different users, and hence it is not appropriate to use the ordinary background image to detect a plurality of accounts owned by an identical user. Accordingly, those processing steps can improve accuracy in detection of a plurality of accounts owned by an identical user, and can save calculation resources by skipping the processing relating to the ordinary background image. In this case, when the background image has uniqueness (the level of the uniqueness exceeds the determination threshold value) (Y in Step S204), the feature extraction module56extracts a plurality of local feature vectors from the background image, and stores the extracted local feature vectors in association with the original image of the background image (Step S206). The local feature vectors may be stored in the image storage19, or may be stored in the storage12. 
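The variance-based alternative for calculating the uniqueness in Step S203 could be sketched as follows; the determination threshold value is an assumption introduced for the example.

```python
import numpy as np

def background_uniqueness(background_gray, foreground_mask, threshold=200.0):
    """Step S203 sketched with the variance of the gradation values: compute
    the variance over background pixels only and compare it with a
    determination threshold.  A background close to a single color yields a
    small variance and therefore low uniqueness."""
    pixels = background_gray[~foreground_mask.astype(bool)]   # exclude the filled foreground
    level = float(np.var(pixels))
    return level, level > threshold    # (uniqueness level, has uniqueness?)
```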
Each of the plurality of local feature vectors indicates a local feature of the background image. The feature extraction module56may extract the local feature vectors through use of a publicly known method, for example, SIFT, ORB, or BRISK. The feature extraction module56also acquires a position of a point at which the local feature vector is extracted in the image together with the local feature vector, and stores the position of the point as well in association with the image.FIG.7is a view for schematically illustrating the extraction of the local feature vectors. InFIG.7, the black circles indicate the points at which the local feature vectors are extracted from the background image illustrated inFIG.6. The local feature vectors are extracted from the points exhibiting the local features in the background image, and the number of local feature vectors varies depending on the background image. In the following description, local feature vectors extracted from a background image generated from a certain image are referred to as “local feature vectors of the image.” The feature extraction module56also extracts information other than the local feature vectors. More specifically, the feature extraction module56extracts a global feature vector from the background image, and stores the extracted global feature vector in association with the original image of the background image (Step S207). In this case, the feature extraction module56extracts one global feature vector from one background image. In the following description, a global feature vector extracted from a background image generated from a certain image is referred to as “global feature vector of the image.” The feature extraction module56may extract the global feature vector through use of the method of “Bag of Visual Words (BoVW).” In this case, the feature extraction module56may determine Visual Words based on background images having uniqueness among the background images relating to all accounts and extract the global feature vector of each of the background images, or may determine Visual Words based on some of the background images. The processing method of BoVW is publicly known, and hence a detailed description of the processing is omitted. The feature extraction module56may extract the global feature vector based on the local feature vectors extracted in Step S206, or may extract the global feature vector based on local feature vectors extracted by a method different from that of Step S206. In another case, although the accuracy is lowered, the feature extraction module56may extract the global feature vector from an image in which the foreground is not filled. In place of BoVW, the feature extraction module56may input a background image to an encoder portion of an autoencoder and acquire output of the encoder portion as the global feature vector. In this case, it is assumed that a plurality of training images are input to the autoencoder in advance and a parameter of each node of the autoencoder is adjusted based on the output of the autoencoder and the training images. Now, the processing of the identicalness information generation module57is described. The identicalness information generation module57determines whether or not the first image and the second image have an identical background in order to determine a certain account (first account) and another account (second account) belong to an identical user. 
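A sketch of Steps S206 and S207, assuming ORB local descriptors and a Bag-of-Visual-Words histogram built from a precomputed codebook, is given below; using Euclidean distance for word assignment (rather than a Hamming distance better suited to binary descriptors) is a simplification made for the example, and the codebook itself is assumed to have been built beforehand.

```python
import cv2
import numpy as np

def extract_features(background_gray, visual_words):
    """Steps S206-S207: extract local feature vectors (with their positions)
    and one global BoVW feature vector from a background image.
    visual_words is a K x 32 float32 codebook, e.g. k-means centroids."""
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(background_gray, None)
    points = [kp.pt for kp in keypoints]          # positions of extraction points
    if descriptors is None:
        return points, None, np.zeros(len(visual_words))
    # assign each local descriptor to its nearest visual word
    dists = np.linalg.norm(descriptors[:, None, :].astype(np.float32)
                           - visual_words[None, :, :], axis=2)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(visual_words))
    global_vec = hist / max(hist.sum(), 1)        # normalized BoVW histogram
    return points, descriptors, global_vec
```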
The first image is an image stored in association with the first account by the image registration module51, and the second image is an image stored in association with the second account by the image registration module51. However, the first image and the second image that are to be subjected to the following processing are limited to those determined to have uniqueness in the background image in Step S204. FIG.8is a diagram for illustrating an outline of processing of the identicalness information generation module57. To facilitate the description,FIG.8is an illustration of an exemplary case in which one first image associated with the first account is used as a query image to search a plurality of second images for a second image having a background in common with the first image. The processing illustrated inFIG.8may be executed on each of the first images associated with the first account. The identicalness information generation module57executes three-stage processing in order to reduce a processing load and improve accuracy at the same time. In the first-stage processing, a group of second images (first candidate group) is generated by screening similar background images by a method having a low processing load. In the first-stage processing, a similar second image is detected based on the similarity between the global feature vector of the first image and the global feature vector of the second image. The detected second image is added to the first candidate group. The background image of the first image and the background image of the second image correspond to the first image and the second image, respectively, on a one-to-one basis, and hence in the first-stage processing, the background image of the first image and the background image of the second image may be detected in place of the first image and the second image, respectively. The same applies to the following processing. In the second-stage processing, a group of second images (second candidate group) having a smaller number than the first candidate group is generated by screening similar background images by a more accurate method. In the second-stage processing, a plurality of local feature vectors (first local feature vectors) of the first image and a plurality of local feature vectors (second local feature vectors) of the second image are caused to match each other by a publicly known matching method, and it may be determined whether they are similar to each other based on the number of sets of the first local feature vector and the second local feature vector that are considered to correspond to each other. In the third-stage processing, the second images are screened based on whether or not a proper three-dimensional background can be obtained from sets of the first local feature vector and the second local feature vector corresponding to each other. More specifically, in the third-stage processing, from a plurality of sets, the three-dimensional positions of the points at which the first and second local feature vectors included in the sets are extracted are estimated, and it is determined whether a proper three-dimensional background has been obtained based on whether or not the three-dimensional positions have been properly estimated. This determination is performed for each of the second images included in the second candidate group. The second image determined to include an identical background is output as a final search result. 
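A minimal sketch of the first-stage screening, under the assumption that every image already has a stored global feature vector: the first candidate group keeps only second images whose global vectors fall within a deliberately loose L2 distance of the query's.

```python
import numpy as np

def first_stage_candidates(first_global: np.ndarray,
                           second_globals: dict[str, np.ndarray],
                           loose_threshold: float = 0.5) -> list[str]:
    """Cheap screening: keep second images whose global feature vector is within a
    loose L2 distance of the first image's vector. The threshold is a placeholder
    and would be tuned so that the later, more accurate stages see few false negatives."""
    return [image_id for image_id, g in second_globals.items()
            if float(np.linalg.norm(first_global - g)) <= loose_threshold]
```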
FIG.9toFIG.11are flow charts for illustrating examples of processing of the identicalness information generation module57.FIG.9toFIG.11indicate processing for identifying a set of a first image and a second image that have an identical background image from a plurality of first images associated with the first account and the second image associated with the second account, and determining whether or not the first account and the second account are owned by an identical user based on the identified set. The first-stage processing is the processing of Step S305, the second-stage processing is the processing from Step S311to Step S317, and the third-stage processing is the processing from Step S321to Step S327. First, the identicalness information generation module57selects one first image from a plurality of images determined to have uniqueness among a plurality of first images associated with the first account (Step S301). Subsequently, the identicalness information generation module57acquires, as the first global feature vector, a global feature vector stored in association with the selected first image (Step S302), and acquires, as the plurality of first local feature vectors, a plurality of local feature vectors stored in association with the selected first image (Step S303). Then, the identicalness information generation module57calculates, for each of the second images associated with the second account, the similarity between the first global feature vector and the second global feature vector associated with the second image. Of the plurality of second images, a second image having a similarity higher than a threshold value is added to the first candidate group (Step S305). The similarity is, for example, L2 norm, and a criterion of the similarity for determining an image which is similar thereto is set to be looser than that of the second-stage processing. When the second image is added to the first candidate group, the identicalness information generation module57selects one second image belonging to the first candidate group (Step S311). Then, the identicalness information generation module57determines correspondences between the acquired plurality of first local feature vectors and the plurality of second local feature vectors associated with the selected second image (Step S312). The publicly known matching method is used in determining the correspondences. For example, the identicalness information generation module57calculates a distance (corresponding to the similarity) between the first local feature vector and each of the plurality of second local feature vectors by, for example, L2 norm. In addition, when “d1<d2×A” (where A is a constant equal to or larger than 0 and smaller than 1) is satisfied between a second local feature vector having the smallest distance (d1) and a second local feature vector having the second smallest distance (d2), the identicalness information generation module57determines the second local feature vector having the smallest distance (d1) as the second local feature vector corresponding to the first local feature vector, and stores a set of the first local feature vector and the second local feature vector corresponding thereto in the storage12. In this case, the determined second local feature vector is not to be determined to correspond to another first local feature vector. 
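The correspondence rule of Step S312 (d1 < d2 × A, with each second vector used at most once) is essentially a ratio test. The sketch below is a brute-force version for float descriptors; the constant A is illustrative, and a learned matcher such as a graph neural network could replace it, as the next passage notes.

```python
import numpy as np

def match_local_features(first_descs: np.ndarray, second_descs: np.ndarray,
                         A: float = 0.75) -> list[tuple[int, int]]:
    """Pair each first local feature vector with its nearest second one when the
    nearest distance d1 satisfies d1 < d2 * A against the second-nearest d2, and
    never reuse a second vector for another first vector."""
    second = second_descs.astype(np.float64)
    used: set[int] = set()
    pairs: list[tuple[int, int]] = []
    for i, f in enumerate(first_descs.astype(np.float64)):
        d = np.linalg.norm(second - f, axis=1)
        if d.size < 2:
            break  # need at least two second vectors for the ratio test
        order = np.argsort(d)
        j1, j2 = int(order[0]), int(order[1])
        if d[j1] < d[j2] * A and j1 not in used:
            pairs.append((i, j1))
            used.add(j1)
    return pairs
```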
As the matching method, the matching between the first local feature vector and the second local feature vector may be performed by, for example, SuperGlue or another graph neural network. FIG.12is a view for illustrating the correspondences between the first and second local feature vectors. The upper image corresponds to one first image, and the lower image corresponds to one second image. The black circles of the upper image correspond to the first local feature vectors, and the black circles of the lower image correspond to the second local feature vectors. The black circles connected by each broken line indicate the first and second local feature vectors corresponding to each other. When the correspondences are determined, the identicalness information generation module57determines whether or not the background image of the first image and the background image of the second image are similar to each other based on the correspondences (Step S313). Specifically, the identicalness information generation module57may determine the similitude based on whether or not the number of sets of the first local feature vector and the second local feature vector that are considered to correspond to each other exceeds a threshold value. The identicalness information generation module57may also determine the similitude based on whether or not a ratio of the number of sets to the smaller number between the number of first local feature vectors and the number of second local feature vectors exceeds a set threshold value. When it is determined that the background image of the first image and the background image of the second image are not similar to each other (N in Step S313), the processing of Step S314and Step S315is skipped. Meanwhile, when it is determined that the background image of the first image and the background image of the second image are similar to each other (Y in Step S313), the identicalness information generation module57determines whether or not the background image of the first image and the background image of the second image are extremely close to each other (Step S314). More specifically, the identicalness information generation module57determines that the background image of the first image and the background image of the second image are extremely close to each other when the number of sets of the first and second local feature vectors corresponding to each other is larger than a strict threshold value and a sum or average of the distances between the first local feature vectors and the second local feature vectors corresponding to each other is smaller than a strict distance threshold value. In this case, the strict threshold value is larger than the set threshold value. When it is determined that the background image of the first image and the background image of the second image are extremely close to each other (Y in Step S314), the identicalness information generation module57determines that the first image and the second image correspond to each other, and stores the set of the first image and the second image in the storage12(Step S319). Then, the identicalness information generation module57advances to the processing of Step S326by skipping the third-stage processing. When the backgrounds of the first image and the second image are extremely similar to each other, the identicalness information generation module57determines that the first image and the second image have an identical background without performing the subsequent processing. 
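The decisions of Step S313 and Step S314 reduce to counting matched sets and checking their distances. The sketch below is one possible reading; all thresholds are placeholders, and whether the two count criteria of Step S313 are combined or used alone is left open in the text.

```python
import numpy as np

def backgrounds_similar(n_sets: int, n_first: int, n_second: int,
                        set_threshold: int = 20, ratio_threshold: float = 0.3) -> bool:
    """Step S313-style check: enough corresponding sets, either in absolute count or
    relative to the smaller of the two local-feature counts."""
    smaller = max(1, min(n_first, n_second))
    return n_sets > set_threshold or (n_sets / smaller) > ratio_threshold

def backgrounds_extremely_close(n_sets: int, set_distances: np.ndarray,
                                strict_count: int = 80,
                                strict_distance: float = 40.0) -> bool:
    """Step S314-style check: many sets AND a small average descriptor distance let
    the pipeline record the pair immediately and skip the third (3D) stage."""
    return n_sets > strict_count and float(np.mean(set_distances)) < strict_distance
```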
Thus, the subsequent processing is not performed, and hence the processing time is reduced. In addition, when the point of view of a camera is almost the same because of use of, for example, a tripod, it is possible to reduce such a fear that the relationship between the first account and the second account may be erroneously determined by erroneously determining that the first image and the second image do not correspond to each other in the third-stage processing. Meanwhile, when it is not determined that the background image of the first image and the background image of the second image are extremely close to each other (N in Step S314), the identicalness information generation module57adds the selected second image to the second candidate group, and stores the sets of the first local feature vector and the second local feature vector corresponding to each other in the storage12in association with the second image (Step S315). When the second images belonging to the first candidate group include a second image that has not been selected (N in Step S316), the identicalness information generation module57selects the next second image from the first candidate group (Step S317), and repeatedly performs the processing of Step S312and the subsequent steps. When all of the second images belonging to the first candidate group have been selected (Y in Step S316), the second candidate group is determined, and the procedure advances to the third-stage processing of Step S321and the subsequent steps. When the second image is added to the second candidate group, the identicalness information generation module57selects one second image belonging to the second candidate group (Step S321). Then, the identicalness information generation module57calculates, based on the plurality of sets stored in association with the selected second image, a three-dimensional position of a point corresponding to each of the sets (Step S322). In other words, the identicalness information generation module57estimates the three-dimensional position of a point at which the first and second local feature vectors are extracted for each of the plurality of sets based on a position at which the first local feature vector has been extracted in the first image and a position at which the second local feature vector has been extracted in the second image. The three-dimensional position may be estimated through use of a concept of so-called triangulation.FIG.13is a view for illustrating calculation of the three-dimensional position. More specifically, first, the identicalness information generation module57estimates points e1and e2of view and photographing directions (see the arrows extending from the points e1and e2of view) with which the images have been acquired, based on positions (see points Pi1and Pi2ofFIG.13) in the first and second images (see images m1and m2ofFIG.13) from which the first and second local feature vectors included in each of the plurality of sets have been extracted. Then, the identicalness information generation module57uses a direct linear transform (DLT) method based on the points of view, the photographing directions, and the positions (points) at which the first and second local feature vectors have been extracted, to thereby obtain the three-dimensional position of a point (see point Pr ofFIG.13) corresponding to those points. 
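One way to realize the third-stage geometry of Step S322 with off-the-shelf tools is to recover the relative camera pose from the matched extraction points and then triangulate, which is what the hedged sketch below does with OpenCV. The intrinsic matrix K is assumed known (the text does not specify how it is obtained), and cv2.triangulatePoints performs the DLT step mentioned above.

```python
import cv2
import numpy as np

def triangulate_sets(pts1: np.ndarray, pts2: np.ndarray, K: np.ndarray):
    """Estimate the two points of view and photographing directions from the matched
    extraction points, then recover a 3D position for every set by DLT triangulation.

    pts1, pts2: Nx2 float arrays of extraction positions in the first/second image.
    Returns (points_3d, R, t) with points_3d as an Nx3 array.
    """
    pts1 = np.asarray(pts1, dtype=np.float64)
    pts2 = np.asarray(pts2, dtype=np.float64)
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])    # first view at the origin
    P2 = K @ np.hstack([R, t])                           # second view from recovered pose
    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # 4xN homogeneous coordinates
    points_3d = (X_h[:3] / X_h[3]).T
    return points_3d, R, t
```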
A method of obtaining the three-dimensional position of a point at which local feature vectors corresponding to each other are extracted in a plurality of images is publicly known, and hence a more detailed description thereof is omitted. This method is also used in, for example, software for implementing SfM (structure from motion), and the identicalness information generation module57may calculate the three-dimensional position of a point for each of the plurality of sets by executing this software. When the three-dimensional position of the point is calculated, the identicalness information generation module57determines whether or not the three-dimensional position has been properly estimated. More specifically, first, the identicalness information generation module57reprojects the point onto the first image and the second image based on the calculated three-dimensional position of the point, and calculates reprojection errors between positions of the reprojected point and positions of original points in the first image and the second image (Step S323). In this case, the positions of the original points in the first and second images are the positions at which the first and second local feature vectors have been extracted in the first and second images, respectively. In other words, the reprojection errors are: a distance between a projected point at which the three-dimensional point has been projected onto the first image and the point at which the first local feature vector corresponding to the projected point has been extracted; and a distance between a projected point at which the three-dimensional point has been projected onto the second image and the point at which the second local feature vector corresponding to the projected point has been extracted. The identicalness information generation module57calculates a sum of the reprojection errors calculated for each of the plurality of sets, and determines whether or not the sum of the reprojection errors falls within a predetermined range (is smaller than a determination threshold value) (Step S324). When the sum of the reprojection errors falls within the predetermined range (Y in Step S324), the identicalness information generation module57determines that the first image and the second image correspond to each other, and stores the set of the first image and the second image in the storage12(Step S325). When the sum of the reprojection errors does not fall within the predetermined range (is larger than the determination threshold value), Step S325is skipped. In place of the determination based on whether or not the sum exceeds the threshold value, the determination may be performed based on the average or a variance of the reprojection errors. When the second images belonging to the second candidate group include a second image that has not been selected (N in Step S326), the identicalness information generation module57selects the next second image from the second candidate group (Step S327), and repeatedly performs the processing of Step S322and the subsequent steps. Meanwhile, when all of the second images belonging to the second candidate group have been selected (Y in Step S326), the third-stage processing is brought to an end, and the procedure advances to the processing of Step S330and the subsequent steps. In Step S330, the identicalness information generation module57determines whether all of the first images that are associated with the first account and have uniqueness have been selected.
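The reprojection check of Step S323 and Step S324 can then be sketched as follows, continuing from the triangulation sketch above; the acceptance threshold is a placeholder, and using the average or variance instead of the sum would be a one-line change.

```python
import cv2
import numpy as np

def reprojection_within_range(points_3d: np.ndarray, pts1: np.ndarray, pts2: np.ndarray,
                              R: np.ndarray, t: np.ndarray, K: np.ndarray,
                              max_total_error: float = 50.0) -> bool:
    """Project the estimated 3D points back into both images, measure the distance to
    the original extraction points, and accept the image pair when the summed
    reprojection error falls within the predetermined range."""
    dist = np.zeros(5)                                         # assume no lens distortion
    proj1, _ = cv2.projectPoints(points_3d, np.zeros(3), np.zeros(3), K, dist)
    rvec, _ = cv2.Rodrigues(R)
    proj2, _ = cv2.projectPoints(points_3d, rvec, t, K, dist)
    err1 = np.linalg.norm(proj1.reshape(-1, 2) - pts1, axis=1)
    err2 = np.linalg.norm(proj2.reshape(-1, 2) - pts2, axis=1)
    return float(err1.sum() + err2.sum()) < max_total_error
```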
When there is an image that has not yet been selected (N in Step S330), the identicalness information generation module57selects the next first image among the plurality of first images that are associated with the first account and have uniqueness (Step S331), and repeatedly performs the processing of Step S302and the subsequent steps. Meanwhile, when all the first images have been selected (Y in Step S330), the identicalness information generation module57generates identicalness information indicating whether or not the user owning the first account and the user owning the second account are identical to each other based on the sets of the first image and the second image that are stored in the storage12(Step S332). The plurality of sets each including the first image and the second image are information equivalent to at least one first image and at least one of second images corresponding to the at least one first image. FIG.14is a diagram for illustrating an example of correspondences between the first images and the second images.FIG.14indicates combinations of images that have been considered to have the same background and determined to correspond to each other in the first account and the second account. The lines connecting the images indicate correspondences between the images. In Step S332, for example, when the number of sets or a value obtained by dividing the number of sets by the sum of the number of first images and second images exceeds a predetermined threshold value, the identicalness information generation module57determines that the first account and the second account are owned by an identical user, and may create identicalness information indicating the determination result. Through the determination using the correspondences of a plurality of images in such a manner, it is possible to prevent erroneous determination ascribable to, for example, an accidental match. The processing illustrated inFIG.9toFIG.11is executed for a plurality of second accounts that are different from one another. In view of this, the identicalness information generation module57may generate identicalness information based on the number of second accounts determined to match the first account. More specifically, the identicalness information generation module57determines whether or not the first account and the second account match each other for each of the plurality of second accounts through use of the same method as the method of generating identicalness information indicating whether or not the two accounts are identical to each other in Step S332. Then, after the matching determination is performed for the plurality of second accounts in Step S332, the identicalness information generation module57generates identicalness information indicating whether or not the user owning the first account and users owning the plurality of second accounts are identical to each other based on the number of second accounts determined to match the first account. 
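The final decision of Step S332 can be read as a simple threshold on the correspondences collected across all first images. The sketch below is one hedged interpretation; both thresholds are arbitrary placeholders.

```python
def accounts_identical(image_pairs: list[tuple[str, str]],
                       n_first_images: int, n_second_images: int,
                       count_threshold: int = 3, ratio_threshold: float = 0.2) -> bool:
    """Judge the first and second accounts as owned by an identical user when enough
    first/second image pairs were found to share a background, either by absolute
    count or relative to the total number of images considered."""
    total = max(1, n_first_images + n_second_images)
    return (len(image_pairs) >= count_threshold
            or (len(image_pairs) / total) > ratio_threshold)
```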
For example, the identicalness information generation module57may generate identicalness information indicating that the user owning the first account and the users owning the plurality of second accounts are identical to each other when the number of second accounts matching the first account is smaller than an account threshold value (for example, an integer of 3 or more), and is not required to generate identicalness information indicating identicalness when the number of second accounts that match the first account is equal to or larger than the account threshold value. Thus, for example, when a background image actually having low uniqueness is targeted for the processing illustrated inFIG.9toFIG.11, it is possible to prevent the first and second accounts actually owned by different users from being erroneously determined to be identical to each other. When it is determined that the first image and the second image have the same background and correspond to each other, the identicalness information generation module57may determine, in place of Step S325and Step S319, that an identical user owns the first account associated with the first image and the second account associated with the second image to generate identicalness information. In this case, the processing illustrated inFIG.9toFIG.11is not executed thereafter. With this processing, it is possible to reduce the processing load irrespective of some decrease in accuracy. While there have been described what are at present considered to be certain embodiments of the invention, it will be understood that various modifications may be made thereto, and it is intended that the appended claims cover all such modifications as fall within the true spirit and scope of the invention. | 39,138 |
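The guard described above, where too many matching second accounts suggests the shared background is not actually unique, might look like the following; the account threshold and the returned structure are illustrative only.

```python
def identicalness_information(first_account: str, matching_second_accounts: list[str],
                              account_threshold: int = 3) -> dict:
    """Generate identicalness information only when the number of matching second
    accounts stays below the account threshold; otherwise withhold the judgment."""
    if not matching_second_accounts or len(matching_second_accounts) >= account_threshold:
        return {"first_account": first_account, "identical": False}
    return {"first_account": first_account, "identical": True,
            "second_accounts": matching_second_accounts}
```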
11861880 | DETAILED DESCRIPTION The following description of the embodiments of the invention is not intended to limit the invention to these embodiments, but rather to enable any person skilled in the art to make and use this invention. 1. Overview As shown inFIG.1, the method can include: determining a property S100, determining attribute values for the property S200, determining a reference population for the property S300, determining reference population attribute values S400, determining a typicality metric for the property S500, and/or any suitable steps. The method can optionally include determining an influential attribute S600. In variants, the method can function to determine how typical and/or atypical a property is in comparison to properties in a reference population. 2. Examples In an example, the method includes: identifying a property (e.g., property of interest); determining a reference population (e.g., a set of reference properties) for the property satisfying a set of criteria (e.g., a location associated with the property, a default reference population for the property identifier, received as part of a user request, etc.); determining attribute values (e.g., for each of a set of attributes) for the property and the reference population (e.g., for each reference property); and calculating the typicality metric based on the property attribute values and the reference population attribute values. The attribute values can be: retrieved from a database (e.g., pre-calculated and stored in the database), retrieved from third-party databases, extracted from measurements (e.g., extracted responsive to a request), and/or otherwise determined. The property can be: identified via a property identifier input request, selected from a database, selected from a set of properties (e.g., wherein the method is iteratively performed for each of the set of properties, wherein the remainder of the set of properties is the reference population), and/or otherwise determined. The typicality metric can be calculated (e.g., using a typicality model) as a comparison between a vector of attribute values for the property and one or more vectors of attribute values for the reference population (e.g., an attribute vector for each reference property, an aggregate attribute vector of averaged attribute values, etc.). The comparison can optionally be based on a distribution metric (e.g., variance, covariance, statistical measure, etc.) associated with the reference population attribute values. In a second example, the method can include: directly predicting the typicality metric based on property information, such as measurements (e.g., images), for the property and for the reference population using a trained typicality model (e.g., processing measurements directly as inputs to the typicality model). However, the method can be otherwise performed. 3. Technical Advantages Variants of the technology can confer one or more advantages over conventional technologies. Conventional methods of determining how similar a property is relative to a reference population are subjective, inefficient, and prone to significant error. Variants of the technology can be more objective, more accurate, faster, more efficient, and/or more scalable than conventional methods. First, variants of the technology can provide increased objectivity and/or decreased subjective influence on the typicality metric. 
For example, in addition to using standard property records and assessor data, the technology can leverage objective computer vision/machine learning (CV/ML) derived content (e.g., attribute values), instead of relying on human-reported content that is subjective and vulnerable to bias. In a specific example, a typicality metric for a given property can be determined by comparing attribute values extracted from images of the given property to attribute values extracted from images of the reference population (e.g., images of each reference property in the reference population). Second, variants of the technology can provide a more accurate measure (e.g., quantitative measure) of property typicality. In a first example, a typicality model can be trained to predict a typicality metric that is correlated with a validation metric (e.g., automated valuation model error, historical valuation, manual labeling, price discrepancy, days on market, insurance loss ratio, etc.), wherein the typicality model can predict the typicality metric for a property, even when the validation information is unavailable for that property. In a specific example, the inventors have discovered that there is a statistically significant correlation between the typicality metric and automated valuation model (AVM) error such that the typicality metric can be used to determine the relative risk in the valuation of a given property. In a second example, a subset of attributes (e.g., high-impact attributes, causal attributes, etc.) can be selected for use in the typicality model, which can reduce the number of overall attributes that are analyzed, thereby resulting in a faster, potentially more accurate typicality prediction. Third, variants of the method can determine the causality for a property's atypicality. In an example, the method can identify property attribute values that have the most significant impact on the property typicality metric, and optionally provide a semantic representation of how the attribute impacted the typicality metric (e.g., based on the attribute's value). This can enable lenders, real estate agents, and/or insurance agents, property owners, and/or other entities to better understand how to approach the property (e.g., adjust valuation, adjust portfolio, guide improvements, etc.). Fourth, variants of the technology can improve the functioning of other technologies. For example, the typicality metric can be used as an input to downstream models (e.g., an AVM model, risk prediction model, insurance model, rental estimate model, vacancy prediction model, etc.) to reduce error and increase the accuracy of the downstream model's output (e.g., adjusting the model to account for atypicality, etc.). In another example, the typicality metric can be associated with a model output adjustment, wherein the model's output can be adjusted by the model output adjustment to account for the property's typicality and/or atypicality. In another example, the typicality metrics of a pre- and post-proposed property change (e.g., remodel) can be compared to evaluate the effects of the proposed property change. Fifth, variants of the technology can increase computational efficiency and/or decrease computational resources by selecting and/or adjusting a downstream model (e.g., an AVM model) based on the typicality metric determined for a given property. 
For example, a nonstandard AVM model can be used for atypical properties, an AVM model can be adjusted for atypical properties (e.g., fork the model, apply parameter adjustments in the model and/or to model outputs, etc.), and/or any other AVM model adjustment can be used. This can provide computational savings by reducing the complexity of an AVM model (e.g., using multiple, less complex AVM models to achieve the same or increased accuracy compared to a single, more complex AVM model) and/or by identifying which homes need less or more computationally intensive models. However, further advantages can be provided by the system and method disclosed herein. 4. System As shown inFIG.3, the system can include one or more typicality models, attribute models, and/or any other set of models. The system can optionally include a computing system, a database, and/or any other suitable components. In variants, the system can function to determine one or more typicality metrics for a property based on measurements associated with the property (e.g., depicting the property). The system can be used with one or more properties. The properties can function as test properties (e.g., properties of interest), training properties (e.g., used to train the model(s)), and/or be otherwise used. Each property can be or include: a parcel (e.g., land), a property component or set or segment thereof, and/or otherwise defined. For example, the property can include both the underlying land and improvements (e.g., built structures, fixtures, etc.) affixed to the land, only include the underlying land, or only include a subset of the improvements (e.g., only the primary building). Property components can include: built structures (e.g., primary structure, accessory structure, deck, pool, etc.); subcomponents of the built structures (e.g., roof, siding, framing, flooring, living space, bedrooms, bathrooms, garages, foundation, HVAC systems, solar panels, slides, diving board, etc.); permanent improvements (e.g., pavement, statutes, fences, etc.); temporary improvements or objects (e.g., trampoline); vegetation (e.g., tree, flammable vegetation, lawn, etc.); land subregions (e.g., driveway, sidewalk, lawn, backyard, front yard, wildland, etc.); debris; and/or any other suitable component. The property and/or components thereof are preferably physical, but can alternatively be virtual. Each property can be identified by one or more property identifiers. A property identifier (property ID) can include: geographic coordinates, an address, a parcel identifier, a block/lot identifier, a planning application identifier, a municipal identifier (e.g., determined based on the ZIP, ZIP+4, city, state, etc.), and/or any other identifier. The property identifier can be used to retrieve property information, such as parcel information (e.g., parcel boundary, parcel location, parcel area, etc.), property measurements, property descriptions, and/or other property data. The property identifier can additionally or alternatively be used to identify a property component, such as a primary building or secondary building, and/or be otherwise used. Each property can be associated with property information. The property information can be static (e.g., remain constant over a threshold period of time) or variable (e.g., vary over time). The property information can be associated with: a time (e.g., a generation time, a valid duration, etc.), a source (e.g., the information source), an accuracy or error, and/or any other suitable metadata. 
The property information is preferably specific to the property, but can additionally or alternatively be from other properties (e.g., neighboring properties, other properties sharing one or more attributes with the property). Examples of property information can include: measurements, descriptions, attributes, auxiliary data, and/or any other suitable information about the property. Property measurements preferably measure an aspect about the property, such as a visual appearance, geometry, and/or other aspect. In variants, the property measurements can depict a property (e.g., the property of interest), but can additionally or alternatively depict the surrounding geographic region, adjacent properties, and/or other factors. The measurement can be: 2D, 3D, and/or have any other set of dimensions. Examples of measurements can include: images, surface models (e.g., digital surface models (DSM), digital elevation models (DEM), digital terrain models (DTM), etc.), point clouds (e.g., generated from LIDAR, RADAR, stereoscopic imagery, etc.), depth maps, depth images, virtual models (e.g., geometric models, mesh models), audio, video, radar measurements, ultrasound measurements, and/or any other suitable measurement. Examples of images that can be used include: RGB images, hyperspectral images, multispectral images, black and white images, grayscale images, panchromatic images, IR images, NIR images, UV images, thermal images, and/or images sampled using any other set of wavelengths; images with depth values associated with one or more pixels (e.g., DSM, DEM, etc.); and/or other images. The measurements can include: remote measurements (e.g., aerial imagery, such as satellite imagery, balloon imagery, drone imagery, etc.), local or on-site measurements (e.g., sampled by a user, streetside measurements, etc.), and/or sampled at any other proximity to the property. The remote measurements can be measurements sampled more than a threshold distance away from the property, such as more than 100 ft, 500 ft, 1,000 ft, any range therein, and/or sampled any other distance away from the property. The measurements can be: top-down measurements (e.g., nadir measurements, panoptic measurements, etc.), side measurements (e.g., elevation views, street measurements, etc.), angled and/or oblique measurements (e.g., at an angle to vertical, orthographic measurements, isometric views, etc.), and/or sampled from any other pose or angle relative to the property. The measurements can depict the property exterior, the property interior, and/or any other view of the property. The measurements can be a full-frame measurement, a segment of the measurement (e.g., the segment depicting the property, such as that depicting the property's parcel; the segment depicting a geographic region a predetermined distance away from the property; etc.), a merged measurement (e.g., a mosaic of multiple measurements), orthorectified, and/or otherwise processed. The measurements can be received as part of a user request, retrieved from a database, determined using other data (e.g., segmented from an image, generated from a set of images, etc.), synthetically determined, and/or otherwise determined. The property information can include property descriptions. The property description can be: a written description (e.g., a text description), an audio description, and/or in any other suitable format. The property description is preferably verbal but can alternatively be nonverbal. 
Examples of property descriptions can include: listing descriptions (e.g., from a realtor, listing agent, etc.), property disclosures, inspection reports, permit data, appraisal reports, and/or any other text based description of a property. The property information can include auxiliary data. Examples of auxiliary data can include property descriptions, permit data, insurance loss data, inspection data, appraisal data, broker price opinion data, property valuations, property attribute and/or component data (e.g., values), and/or any other suitable data. The auxiliary data can be used to: determine attribute values, increase the accuracy of the typicality metric, select which attributes should be used for typicality metric determination, and/or otherwise used. However, the property information can include any other suitable information about the property. All or a subset of properties can be associated with attribute values for one or more property attributes, wherein the attribute values function to represent one or more quantitative, qualitative, semantic, and/or other aspects of the property. Each property can be associated with a set of property attributes, which function to represent one or more aspects of a given property. The property attributes can be semantic, quantitative, qualitative, and/or otherwise describe the property. Each property can be associated with its own set of property attributes, and/or share property attributes with other properties. As used herein, property attributes can refer to the attribute parameter (e.g., the variable) and/or the attribute value (e.g., value bound to the variable for the property). Property attributes can include: property components, features (e.g., feature vector, mesh, mask, point cloud, pixels, voxels, any other parameter extracted from a measurement), any parameter associated with a property component (e.g., property component characteristics), semantic features (e.g., whether a semantic concept appears within the property information), and/or higher-level summary data extracted from property components and/or features. Property attributes can be determined based on property information for the property itself, neighboring properties, and/or any other set of properties. Property attributes can be automatically determined, manually determined, and/or otherwise determined. Property attributes can be intrinsic, extrinsic, and/or otherwise related to the property. Intrinsic attributes are preferably inherent to the property's physical aspects, and would have the same values for the property independent of the property's context (e.g., property location, market conditions, etc.), but can be otherwise defined. Examples of intrinsic attributes include: record attributes, structural attributes, condition attributes, and/or other attributes determined from measurements or descriptions about the property itself. Extrinsic attributes can be determined based on other properties or factors (e.g., outside of the property). Examples of extrinsic attributes include: attributes associated with property location, attributes associated with neighboring properties (e.g., proximity to a given property component of a neighboring property), and/or other extrinsic attributes. 
Examples of attributes associated with the property location can include distance and/or orientation relative to a: highway, coastline, lake, railway track, river, wildland and/or any large fuel load, hazard potential (e.g., for wildfire, wind, fire, hail, flooding, etc.), other desirable site (e.g., park, beach, landmark, etc.), other undesirable site (e.g., cemetery, landfill, wind farm, etc.), zoning information (e.g., residential, commercial, and industrial zones; subzoning; etc.), and/or any other attribute associated with the property location. Property attributes can include: structural attributes, condition attributes, record attributes, semantic attributes, subjective attributes, and/or any other suitable set of attributes. Structural attributes can include: structure class/type, parcel area, framing parameters (e.g., material), flooring (e.g., floor type), historical construction information (e.g., year built, year updated/improved/expanded, etc.), area of living space, the presence or absence of a built structure (e.g., deck, pool, ADU, garage, etc.), physical or geometric attributes of the built structure (e.g., structure footprint, roof surface area, number of roof facets, roof slope, pool surface area, building height, number of beds, number of baths, number of stories, etc.), relationships between built structures (e.g., distance between built structures, built structure density, setback distance, count, etc.), presence or absence of an improvement (e.g., solar panel, etc.), ratios or comparisons therebetween, and/or any other structural descriptors. Condition-related attributes can include: roof condition (e.g., tarp presence, material degradation, rust, missing or peeling material, sealing, natural and/or unnatural discoloration, defects, loose organic matter, ponding, patching, streaking, etc.), wall condition, exterior condition, accessory structure condition, yard debris and/or lot debris (e.g., presence, coverage, ratio of coverage, etc.), lawn condition, pool condition, driveway condition, tree parameters (e.g., overhang information, height, etc.), vegetation parameters (e.g., coverage, density, setback, location within one or more zones relative to the property), presence of vent coverings (e.g., ember-proof vent coverings), structure condition, occlusion (e.g., pool occlusion, roof occlusion, etc.), pavement condition (e.g., percent of paved area that is deteriorated), resource usage (e.g., energy usage, gas usage, etc.), overall property condition, and/or other parameters (e.g., that are variable and/or controllable by a resident). Condition-related attributes can be a rating for a single structure, a minimum rating across multiple structures, a weighted rating across multiple structures, and/or any other individual or aggregate value. Record attributes can include: number of beds/baths, construction year, square footage, legal class (e.g., residential, mixed-use, commercial, etc.), legal subclass (e.g., single-family vs. multi-family, apartment vs. condominium, etc.), location (e.g., neighborhood, zip code, etc.), location factors (e.g., positive location factors such as distance to a park, distance to school; negative location factors such as distance to sewage treatment plants, distance to industrial zones; etc.), population class (e.g., suburban, urban, rural, etc.), school district, orientation (e.g., side of street, cardinal direction, etc.), and/or any other suitable attributes (e.g., that can be extracted from a property record or listing).
Semantic attributes (e.g., semantic features) can include whether a semantic concept is associated with the property (e.g., whether the semantic concept appears within the property information). Examples of semantic attributes can include: whether a property is in good condition (e.g., “turn key”, “move-in ready”, or related terms appear in the description), “poor condition”, “walkable”, “popular”, small (e.g., “cozy” appears in the description), and/or any other suitable semantic concept. The semantic attributes can be extracted from: the property descriptions, the property measurements, and/or any other suitable property information. The semantic attributes can be extracted using a model (e.g., an NLP model, a CNN, a DNN, etc.) trained to identify keywords, trained to classify or detect whether a semantic concept appears within the property information, and/or otherwise trained. Subjective attributes can include: curb appeal, viewshed, and/or any other suitable attributes. Other property attributes can include: built structure values (e.g., roof slope, roof rating, roof material, roof footprint, covering material, etc.), auxiliary structures (e.g., a pool, a statue, ADU, etc.), risk asset scores (e.g., asset score indicating risk of flooding, hail, wildfire, wind, house fire, etc.), neighboring property values (e.g., distance of neighbor, structure density, structure count, etc.), and/or any other suitable attributes. Example property attributes can include: structural attributes (e.g., for a primary structure, accessory structure, neighboring structure, etc.), record attributes (e.g., number of bed/bath, construction year, square footage, legal class, legal subclass, geographic location, etc.), condition attributes (e.g., yard condition, roof condition, pool condition, paved surface condition, etc.), semantic attributes (e.g., semantic descriptors), location (e.g., parcel centroid, structure centroid, roof centroid, etc.), property type (e.g., single family, lease, vacant land, multifamily, duplex, etc.), property component parameters (e.g., area, enclosure, presence, structure type, count, material, construction type, area condition, spacing, relative and/or global location, distance to another component or other reference point, density, geometric parameters, condition, complexity, etc.; for pools, porches, decks, patios, fencing, etc.), storage (e.g., presence of a garage, carport, etc.), permanent or semi-permanent improvements (e.g., solar panel presence, count, type, arrangement, and/or other solar panel parameters; HVAC presence, count, footprint, type, location, and/or other parameters; etc.), temporary improvement parameters (e.g., presence, area, location, etc. of trampolines, playsets, etc.), pavement parameters (e.g., paved area, percent illuminated, paved surface condition, etc.), foundation elevation, terrain parameters (e.g., parcel slope, surrounding terrain information, etc.), legal class (e.g., residential, mixed-use, commercial), legal subclass (e.g., single-family vs. multi-family, apartment vs.
condominium), geographic location (e.g., neighborhood, zip, etc.), population class (e.g., suburban, urban, rural, etc.), school district, orientation (e.g., side of street, cardinal direction, etc.), subjective attributes (e.g., curb appeal, viewshed, etc.), built structure values (e.g., roof slope, roof rating, roof material, roof footprint, covering material, etc.), auxiliary structures (e.g., a pool, a statue, ADU, etc.), risk scores (e.g., score indicating risk of flooding, hail, fire, wind, wildfire, etc.), neighboring property values (e.g., distance to neighbor, structure density, structure count, etc.), context (e.g., hazard context, geographic context, vegetation context, weather context, terrain context, etc.), historical construction information, historical transaction information (e.g., list price, sale price, spread, transaction frequency, transaction trends, etc.), semantic information, and/or any other attribute that remains substantially static after built structure construction. In a specific example, the attributes can exclude condition-related attributes. In variants, the set of attributes that are used (e.g., by the model(s)) can be selected from a superset of candidate attributes. This can function to: reduce computational time and/or load (e.g., by reducing the number of attributes that need to be extracted and/or processed), increase score prediction accuracy (e.g., by reducing or eliminating confounding attributes), and/or be otherwise used. The set of attributes can be selected: manually, automatically, randomly, recursively, using an attribute selection model, using lift analysis (e.g., based on an attribute's lift), using any explainability and/or interpretability method, based on an attribute's correlation with a given metric or training label, using predictor variable analysis, through predicted outcome validation, during model training (e.g., attributes with weights above a threshold value are selected), using a deep learning model, based on a zone classification, and/or via any other selection method or combination of methods. The attributes can be determined from property information (e.g., property measurements, property descriptions, etc.), a database or a third party source (e.g., third-party database, MLS™ database, city permitting database, historical weather and/or hazard database, tax assessor database, etc.), be predetermined, be calculated (e.g., from an extracted value and a scaling factor, etc.), and/or be otherwise determined. In a first example, the attributes can be determined by extracting features from property measurements, wherein the attribute values can be determined based on the extracted feature values. In a second example, a trained attribute model can predict the attribute value directly from property information (e.g., based on property imagery, descriptions, etc.). In a third example, the attributes can be determined by extracting features from a property description (e.g., using a sentiment extractor, keyword extractor, etc.). However, the attributes can be otherwise determined. In examples, the attribute values can be determined using the methods disclosed in U.S. application Ser. No. 17/502,825 filed 15 Oct. 2021 and U.S. application Ser. No. 15/253,488 filed 31 Aug. 2016, which are incorporated in their entireties by this reference. Property attributes and attribute values are preferably determined asynchronously from method execution. 
Alternatively, property attributes and attribute values can be determined in real time or near real time with respect to the method. Attributes and values can be stored by the processing system performing the determination of property attributes, and/or by any other suitable system. Preferably, storage can be temporary, based on time (e.g., 1 day, 1 month, etc.), based on use (e.g., after one use of the property attribute values by the asset prediction model), based on time and use (e.g., after one week without use of property attribute values), and/or based on any other considerations. Alternatively, property asset data is permanently stored. Attribute values can be discrete, continuous, binary, multiclass, and/or otherwise structured. The attribute values can be associated with time data (e.g., from the underlying measurement timestamp, value determination timestamp, etc.), a hazard event, an uncertainty parameter, and/or any other suitable metadata. Attribute values can optionally be associated with an uncertainty parameter (e.g., each attribute value in a set is associated with an uncertainty parameter). Uncertainty parameters can include variance values, a confidence score, a probability, and/or any other uncertainty metric. In a first example, the uncertainty parameter is a probability associated with an attribute value. In an illustrative example, for an attribute of ‘roof material’, the attribute values can be: shingle, tile, metal, and other, wherein each attribute value is associated with a probability that the roof material of the property corresponds to the attribute value (e.g., the attribute values [shingle, tile, metal, other] can map to uncertainty parameters [0.9, 0.07, 0.02, 0.01]). In a second example, the uncertainty parameter is a confidence score. In a first illustrative example, the roof material attribute for a structure corresponds to attribute values: shingle with 90% confidence, tile with 7% confidence, metal with 2% confidence, and other with 1% confidence. In a second illustrative example, 10% of the roof is obscured (e.g., by a tree), which can result in a 90% confidence score for the roof geometry attribute value (e.g., an attribute value of ‘shingle’ has a 90% confidence score). In a third illustrative example, the vegetation coverage attribute value is 70%±10%. In a third example, the uncertainty parameter can be a variance for a given attribute value (e.g., across a set of properties such as a reference population, across an individual property, etc.). In an illustrative example, the parcel size attribute value across a set of properties (e.g., an aggregate attribute value for a reference population) can be 3000 ft2with a variance of 2000 ft4. The set of attribute values associated with each property can be represented as a vector (e.g., attribute vector), a multidimensional surface, a single value, and/or otherwise represented. For example, an attribute vector for a given property includes property-specific attribute values for each of a set of attributes (e.g., wherein the set of attributes can be selected using S200methods). The values included in an attribute vector are preferably attribute values (e.g., exclusively attribute values), but can additionally or alternatively include uncertainty parameters (e.g., for attribute values), property identifiers, property information, and/or any other information for one or more properties. 
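As a concrete illustration of the attribute values, uncertainty parameters, and fixed-order attribute vectors described above, the short sketch below mirrors the roof-material example; the dataclass, field names, and attribute order are assumptions for illustration, not part of the disclosure.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class AttributeValue:
    attribute: str      # e.g., "roof_material" or "parcel_area"
    value: object       # e.g., "shingle" or 3000.0
    uncertainty: float  # probability, confidence score, or variance

# Roof-material example: each candidate value carries a probability as its uncertainty parameter.
roof_material = [AttributeValue("roof_material", v, p)
                 for v, p in zip(["shingle", "tile", "metal", "other"],
                                 [0.9, 0.07, 0.02, 0.01])]

def attribute_vector(values: dict[str, float], attribute_order: list[str]) -> np.ndarray:
    """Attribute vector for one property: numeric attribute values in a fixed order so
    that vectors from different properties line up element by element."""
    return np.array([values[name] for name in attribute_order], dtype=np.float64)
```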
The order of attributes within each attribute vector and/or attribute set is preferably the same across different properties (e.g., the same across different vectors, the same across different sets, etc.), but can alternatively be different. The attribute vector can be any shape (e.g., an array, set, matrix of any dimension, etc.). However, any other suitable property attribute and/or value thereof can be determined. The system can include or use one or more attribute models, which function to determine attribute values for one or more property attributes. Each attribute model can determine values for a single attribute (e.g., be a binary classifier, be a multiclass classifier, etc.), multiple attributes (e.g., be a multiclass classifier), and/or for any other suitable set of attributes. A single attribute value can be determined using a single attribute model, multiple attribute models, and/or any other suitable number of attribute models. Inputs to the attribute model, used to determine attribute values for a given property, can include property information (e.g., a property dataset) for the given property, property information for associated properties (e.g., neighboring properties), and/or any other suitable set of inputs. The property information can include: measurements, descriptions, auxiliary data, parcel data, and/or any other suitable information for the property. The property information can be associated with: a single property, a larger geographic context (e.g., based on a region larger than the property parcel size), and/or otherwise associated with one or more properties. The inputs can optionally be associated with a common timestamp, with a common timeframe (e.g., all determined within the same week, month, quarter, season, year, etc.), with different timeframes, and/or otherwise temporally related. The outputs for the attribute model can be: values for one or more property attributes, image feature segments, property measurements, property identifiers, uncertainty parameters (e.g., a confidence score for each attribute value prediction), and/or any other suitable information. The attribute value model can be or include: neural networks (e.g., CNN, DNN, etc.), an equation (e.g., weighted equations), regression (e.g., leverage regression), classification (e.g., binary classifiers, multiclass classifiers, semantic segmentation models, instance-based segmentation models, etc.), segmentation algorithms (e.g., neural networks, such as CNN based algorithms, thresholding algorithms, clustering algorithms, etc.), rules, heuristics (e.g., inferring the number of stories of a property based on the height of a property), instance-based methods (e.g., nearest neighbor), regularization methods (e.g., ridge regression), decision trees, Bayesian methods (e.g., Naïve Bayes, Markov, etc.), kernel methods, statistical methods (e.g., probability), deterministics, support vectors, genetic programs, isolation forests, robust random cut forest, clustering, selection and/or retrieval (e.g., from a database and/or library), comparison models (e.g., vector comparison, image comparison, etc.), object detectors (e.g., CNN based algorithms, such as Region-CNN, fast RCNN, faster R-CNN, YOLO, SSD—Single Shot MultiBox Detector, R-FCN, etc.; feed forward networks, transformer networks, and/or other neural network algorithms), key point extraction, SIFT, any computer vision and/or machine learning method (e.g., CV/ML extraction methods), and/or any other suitable model or methodology. 
Different attribute values can be determined using different methods, but can alternatively be determined in the same manner. The attribute model can determine attribute values by: extracting features from property information (e.g., measurements) and determining the attribute values based on the extracted feature values, extracting attribute values directly from property information, retrieving values from a database or a third party source (e.g., third-party database, real estate listing service database such as an MLS database, city permitting database, historical weather and/or hazard database, tax assessor database, etc.), using a predetermined value, calculating a value (e.g., from an extracted value and a scaling factor, etc.), and/or otherwise determined. However, the attribute model can be otherwise configured. In examples, property attributes and/or values thereof can defined and/or determined as disclosed in U.S. application Ser. No. 17/529,836 filed on 18 Nov. 2021, U.S. application Ser. No. 17/475,523 filed 15 Sep. 2021, U.S. application Ser. No. 17/749,385 filed 20 May 2022, U.S. application Ser. No. 17/870,279 filed 21 Jul. 2022, and/or U.S. application Ser. No. 17/858,422 filed 6 Jul. 2022, each of which is incorporated in its entirety by this reference (e.g., wherein features and/or feature values disclosed in the references can correspond to attributes and/or attribute values). However, any other suitable property attribute and/or value thereof can be determined. The system can include one or more typicality models, which function to determine the typicality metric for a property (e.g., with respect to a reference population). Inputs to the typicality model can include: attribute values (e.g., attribute vectors) for the property and/or the reference population, property information for the property and/or the reference population, uncertainty parameters for the attribute values, and/or any other suitable inputs. Outputs of the typicality model can include one or more typicality metrics and/or uncertainty parameters for the typicality metric(s). The model can be parametric (e.g., assume an underlying distribution), nonparametric (e.g., without an underlying distribution assumption), or a combination thereof. The typicality model can be or include: similarity models (e.g., configured to compute a similarity measure, such as a Bregman divergence, a Bhattacharyya distance, a Mahalanobis distance, cosine distance, etc.), neural networks (e.g., CNN, DNN, etc.), an equation (e.g., weighted equations), regression (e.g., leverage regression), classification (e.g., binary classifiers, multiclass classifiers, semantic segmentation models, instance-based segmentation models, etc.), segmentation algorithms, rules, heuristics, instance-based methods (e.g., nearest neighbor), regularization methods (e.g., ridge regression), decision trees, Bayesian methods (e.g., Naïve Bayes, Markov, etc.), kernel methods, statistical methods (e.g., probability), deterministics, support vectors, genetic programs, isolation forests, robust random cut forest, clustering, selection and/or retrieval (e.g., from a database and/or library), comparison models (e.g., vector comparison, image comparison, etc.), object detectors, SIFT, any computer vision and/or machine learning method, and/or any other suitable model or methodology. 
The typicality model can be generic or be specific to: a use case (e.g., real estate valuation, insurance loss estimation, maintenance/repair cost, etc.), property information (e.g., available property information for the property and/or reference population), a property attribute value (e.g., each building classification is associated with a different typicality model), a geographic region (e.g., a continent, a country, a state, a county, a city, a zip code, a street, a neighborhood, a school district, etc.), a property class (e.g., single-family home, multi-family home, a house, an apartment, a condominium, etc.), a timeframe (e.g., a season, a week of the year, a month, a specific set of dates, etc.), a reference population size (e.g., greater or less than: 10 properties, 20 properties, 50 properties, 100 properties, 1000 properties, 10000 properties, or any number of properties), a terrain (e.g., forest, desert, etc.), a residential area classification (e.g., urban, suburban, rural, etc.), a zoning classification (e.g., residential area, industrial area, commercial area, etc.), and/or be otherwise generic or specific to one or more parameters. In a first variant, the typicality model is a similarity model configured to determine a similarity score between an attribute value set for a property and the attribute value sets for each of a plurality of reference properties (the reference population). The similarity model preferably accounts for the covariance of the attribute values and/or distribution of the attribute values within the reference population, but can alternatively be agnostic to the attribute value covariance, be agnostic to the attribute value distribution within the reference population, and/or be otherwise configured. In a second variant, the typicality model is a neural network trained to predict the typicality metric. In a first embodiment, the typicality model can predict the typicality metric based on property measurement(s) depicting the property and based on property measurement(s) depicting the reference properties. The property measurement(s) depicting the property and the property measurement(s) depicting the reference properties can be the same measurement(s) (e.g., a single image depicting the property and the reference properties), or be different. In this embodiment, the typicality model can be trained using a training data set including property measurements and training typicality metrics (e.g., calculated using the first variant) for each of a set of training properties. In a second embodiment, the typicality model can predict the typicality metric based on the attribute value set for the property and the reference properties. However, the typicality model can be otherwise configured. Models in the system (e.g., the typicality model, the attribute model, an attribute selection model, a reference population selection model, etc.) can optionally be trained on: labeled data (e.g., manually labeled data), real estate data (e.g., valuation data, sales data, inspection data, appraisal data, broker price opinion data, permit data, etc.), insurance claims data, synthetic data, and/or any other suitable ground truth training data set. The models can optionally be trained using supervised learning, unsupervised learning, semi-supervised learning, single-shot learning, zero-shot learning, and/or any other suitable learning technique. 
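As one possible sketch of the first variant above (a similarity model that accounts for the covariance of the attribute values within the reference population), the following Python snippet computes a Mahalanobis-style score; the assumption of numeric attribute vectors and the use of a pseudo-inverse of the covariance matrix are illustrative implementation choices.

```python
import numpy as np

def mahalanobis_typicality(property_vec: np.ndarray, reference_vecs: np.ndarray) -> float:
    """Score typicality as a negated Mahalanobis distance from the reference population.

    reference_vecs: (n_reference_properties, n_attributes) array of attribute vectors.
    The covariance term accounts for both the spread of and the covariance between
    attributes, so correlated attributes are not double-counted.
    """
    mean = reference_vecs.mean(axis=0)
    cov = np.cov(reference_vecs, rowvar=False)
    cov_inv = np.linalg.pinv(cov)          # pseudo-inverse guards against singular covariance
    diff = property_vec - mean
    distance = float(np.sqrt(diff @ cov_inv @ diff))
    return -distance                        # larger distance -> less typical
```

Because the covariance matrix whitens correlated attributes, two strongly correlated attributes (for example, building square footage and bedroom count) do not get double-counted when scoring typicality.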
The system can optionally include a database which can function to store property identifiers, property information (e.g., measurements, auxiliary data, etc.), attribute values, typicality metrics, reference population information (e.g., property sets), and/or any other information. The database can be local, remote, distributed, or otherwise arranged relative to any other system or module. In variants, the database can be or interface with a third-party source (e.g., third-party database, MLS database, city permitting database, historical weather and/or hazard database, tax assessor database, etc.), but can alternatively not interface with a third-party source. For example, information in the database can be retrieved, linked, or otherwise associated with information in a third-party source. In an example, a property identifier for each of a set of properties is stored in the database, wherein attribute values (e.g., extracted using S300methods) are stored in association with the corresponding property identifier for all or a subset of the properties. Attribute values can optionally be edited and/or appended to the database when new property information (e.g., recent imagery or other measurements) is added. The database can be queried (e.g., based on a property identifier) to retrieve measurements, attribute values, typicality metrics, and/or any other information in the database. The system can optionally include a computing system. The computing system can function to execute all or portions of the method, and/or perform any other suitable functionality. The computing system can be local (e.g., a user device such as a smartphone, laptop, desktop, tablet, etc.), remote (e.g., one or more servers, one or more platforms, etc.), distributed, or otherwise arranged relative to any other system or module. The computing system can include one or more: CPUs, GPUs, custom FPGA/ASICS, microprocessors, servers, cloud computing, and/or any other suitable components. The computing system can be used with a user interface (e.g., mobile application, web application, desktop application, API, database, etc.) or not be used with a user interface. The user interface can be used to: receive and/or input property identifiers and/or property requests, present attribute values, present typicality metrics, and/or otherwise used. The computing system can optionally interface with the databases. 5. Method As shown inFIG.1andFIG.2, the method can include: determining a property S100, determining attribute values for the property S200, determining a reference population for the property S300, determining reference population attribute values S400, determining a typicality metric for the property S500, and/or any suitable steps. The method can optionally include determining an influential attribute S600. All or portions of the method can be performed for one or more properties (e.g., serially, in a batch, upon request, etc.). All or portions of the method can be performed in real time (e.g., responsive to a request), iteratively, asynchronously, periodically, and/or at any other suitable time. All or portions of the method can be performed automatically, manually, semi-automatically, and/or otherwise performed. Determining a property S100functions to identify a property (e.g., for comparison against the reference population, for typicality metric determination, etc.) and/or select a property (e.g., from a set of properties) for attribute value determination. 
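As a minimal, non-limiting sketch tying the optional database above to S100, the following Python snippet resolves a property identifier to its stored record; the SQLite backend, the table layout, and the JSON encoding of attribute values are illustrative assumptions rather than requirements.

```python
import sqlite3
import json

# In-memory stand-in for the optional database: attribute values and typicality
# metrics are stored keyed by a property identifier.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE property_records (
                    property_id TEXT PRIMARY KEY,
                    attribute_values TEXT,   -- JSON-encoded attribute values
                    typicality_metric REAL)""")
conn.execute("INSERT INTO property_records VALUES (?, ?, ?)",
             ("parcel-001", json.dumps({"roof_condition": 3.5, "pool_present": 1}), None))

def determine_property(property_id):
    """S100-style lookup: resolve a property identifier to its stored record."""
    row = conn.execute(
        "SELECT attribute_values, typicality_metric FROM property_records WHERE property_id = ?",
        (property_id,)).fetchone()
    if row is None:
        return None
    return {"property_id": property_id,
            "attribute_values": json.loads(row[0]),
            "typicality_metric": row[1]}

record = determine_property("parcel-001")
```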
S100can be performed iteratively (e.g., for each of a set of properties in a database, for each of a set of properties in an image, etc.), in response to a request (e.g., received from a user, via an API, via a GUI, etc.), and/or at any other time. S100can include determining a single property, determining a set of properties (e.g., a plurality of properties, etc.), and/or any other suitable number of properties. When a single property is identified, the property (e.g., a property of interest) can be used to determine a reference population, for comparison against a reference population, for typicality metric determination, and/or for any other downstream methods. When a set of properties is identified, all or parts of the method can be iterated for each property in the set. For example, attribute values can be determined for each property in the set. In an illustrative example, a typicality metric is determined for one property in the set (e.g., wherein the remainder of the properties in the set are reference properties), wherein all or parts of the method can be iterated to determine typicality metrics for the other properties in the set. However, any identified property can be otherwise used. In a first variant, the property can be determined based on a property identifier (e.g., received as part of a user request, retrieved from a database, etc.). The property can optionally be selected from a database (e.g., selected from a set of properties in the database) based on the property identifier. Each property identifier preferably identifies a single property (e.g., a 1:1 cardinality between property identifiers and properties), but alternatively can identify a set of properties (e.g., identify all properties associated with the property identifier) and/or more than one property identifier can identify a single property. In a second variant, the property can be determined based on a location (e.g., geographic region). One or more properties associated with the location can be identified (e.g., a single property, each property within a geographic region, a subset of properties within a geographic region, etc.). In a first example, the one or more properties can be extracted from a map, image, geofence, and/or any other representation of the location. In a specific example, the properties can be identified using image segmentation methods to extract each property within the geographic region. In a second example, one or more properties associated with the location can be identified based on an address registry, database, and/or any other suitable method. In a first illustrative example, all properties within a neighborhood are identified. In a second illustrative example, a subset of properties within a neighborhood (e.g., a subset of properties corresponding to one or more property attribute values) are identified. However, one or more properties can be otherwise determined. Determining attribute values for the property S200can function to determine property-specific values associated with the property. S200can be performed after S100(e.g., for a single property, iteratively for a set of properties, in batches for sets of properties, etc.), in response to a request, when new property information (e.g., recent imagery or other measurements) associated with the property is received, before or after selecting attributes, at regular time intervals, and/or at any other time. 
S200can include determining one or more attribute values for each set of attributes, determining uncertainty parameters for all or a subset of attribute values, determining an attribute vector including attribute values and/or uncertainty parameters, and/or determining any other property-specific values. The value(s) for each property attribute within the property attribute set are preferably determined using one or more attribute models (e.g., wherein each attribute model is specific to a given attribute), but can additionally and/or alternatively be retrieved from a database, retrieved from a third-party (e.g., third-party database, real estate listing service database such as an MLS database, city permitting database, historical weather and/or hazard database, tax assessor database, etc.), determined using a different model, and/or otherwise determined. An example is shown inFIG.4. Attribute values can be determined based on property information (e.g., measurements, auxiliary data, parcel data, etc.) for the given property, property information for associated properties (e.g., neighboring properties), and/or any other suitable information. The property information can optionally be associated with a common timestamp, with a common timeframe (e.g., all determined within the same week, month, quarter, season, year, etc.), with different timeframes, and/or otherwise temporally related. The attribute values can be determined by: extracting features from property information (e.g., measurements and/or measurement segments) and determining the attribute values based on the extracted feature values, extracting attribute values directly from property information, retrieving values from a database or a third-party source (e.g., third-party database, real estate listing service database such as an MLS database, city permitting database, historical weather and/or hazard database, tax assessor database, etc.), using a predetermined value, calculating a value (e.g., from an extracted value and a scaling factor, etc.), and/or otherwise determined. In a first variant, determining attribute values from the property data includes extracting features from images and determining the attribute values based on the extracted feature values. The feature values can be extracted from an image as a whole or from an image segment (e.g., segmented based on the property parcel outline, segmented based on one or more property components, etc.). In an illustrative example, the attribute model can extract visual features from an image (e.g., RGB image, depth measurement, etc.) of the property and determine an attribute of a property component (e.g., a roof geometry) for the property based on the extracted visual features. In a second variant, the attribute values are determined by applying a model or algorithm to other attribute values (e.g., calculating a structure footprint to parcel footprint feature value based on the structure square footage and the parcel square footage). In a third variant, the attribute values can be determined using a set of heuristics. For example, the number of stories of a property can be inferred based on the height of a property (e.g., based on the industry average height of a floor or story). The height of the property can be determined using another variant (e.g., retrieved from a database, from building permits, extracted from a digital surface map, etc.) or otherwise determined. For example, attribute values can be determined using the methods disclosed in U.S. application Ser. No. 
17/529,836 filed on 18 Nov. 2021, U.S. application Ser. No. 17/475,523 filed 15 Sep. 2021, U.S. application Ser. No. 17/749,385 filed 20 May 2022, U.S. application Ser. No. 17/870,279 filed 21 Jul. 2022, and/or U.S. application Ser. No. 17/858,422 filed 6 Jul. 2022, each of which is incorporated in its entirety by this reference (e.g., wherein features and/or feature values disclosed in the references can correspond to attributes and/or attribute values). S200can optionally include determining uncertainty parameters for all or a subset of attribute values. The uncertainty parameter for an attribute value can be determined based on: the attribute model (e.g., an output of the model, based on the model itself, etc.), property information used to determine the attribute value (e.g., a percentage of a property component that is obscured, a timeframe associated with the input measurements indicating recency of the measurements, etc.), a set of attribute values (e.g., corresponding to a given attribute), and/or otherwise determined. In an illustrative example, the attribute model input is an image, and the output includes both the attribute value and an uncertainty parameter associated with the attribute value. In this example, the uncertainty parameter can be a black box output from the attribute model, determined based on the attribute model itself, and/or otherwise determined using the attribute model. In a first example, the uncertainty parameter can be used as a weight for the corresponding attribute value (e.g., used in S400to aggregate attribute values across reference properties, used in S500to weight attribute values when comparing property attribute values to reference population attribute values, etc.). In a second example, the uncertainty parameter can be used directly as an attribute value. In a specific example, uncertainty parameters can be compared in S500(e.g., a distance is determined between two attribute vectors wherein one or more components of the attribute vectors includes an uncertainty parameter). In a third example, the uncertainty parameter can be used to determine a distribution metric in S550(e.g., used to determine variance between properties, covariance between attributes, any statistical measure, etc.). In variants, the uncertainty parameter can increase the accuracy of typicality metric determination by more accurately comparing attribute values (e.g., wherein attribute values with high uncertainty have a smaller effect on the typicality metric). S200can optionally include selecting the set of attributes, wherein the set of attributes is selected from a superset of candidate attributes. This can function to: reduce computational time and/or load (e.g., by reducing the number of attributes that need to be extracted and/or processed), increase typicality metric prediction accuracy (e.g., by selecting the most predictive attributes, by reducing or eliminating confounding attributes, etc.), and/or be otherwise used. Attributes can be selected: from all available attributes, from attributes corresponding to a location (e.g., geographic region), attributes retrieved from a database, and/or any other superset of attributes. The attribute sets can be the same or different across different properties, reference populations, locations, property classes, seasons, typicality models, and/or other parameters. For example, properties in two different neighborhoods can use two different attribute sets. 
In another example, typicality metrics calculated for winter can use a different attribute set than that for spring (e.g., the attribute set used to analyze winter measurements can be different from the attribute set used to analyze spring measurements). The set of attributes can be selected: manually, automatically, randomly, iteratively and/or recursively, using an attribute selection model (e.g., a trained attribute selection model), using lift analysis (e.g., based on an attribute's lift), using explainability and/or interpretability methods (e.g., as described in S600), through typicality metric validation, based on an attribute's correlation with a training label or validation metric, using predictor variable analysis, during typicality model training (e.g., attributes with weights above a threshold value are selected), using a deep learning model, and/or via any other selection method or combination of methods. In a first variant, the set of attributes is selected such that a typicality metric (for a given property) determined based on the set of attributes is indicative of a validation metric. The metric can be a metric used to validate a typicality model, a training target (e.g., used to train the typicality model), and/or any other metric. For example, the validation metric can be: AVM error (e.g., properties with high atypicality have high AVM error; properties with low typicality have low AVM error, etc.), historical valuation (e.g., the property's valuation compared to valuations of properties in the reference population; the price a property was: valued at, listed for, sold for, etc.; and/or any other valuation data), manual labeling, price discrepancy, days on market, insurance loss ratio (e.g., insurance loss divided by premium), a combination thereof, and/or any other metric. The attributes can be selected to: maximize a validation metric value, obtain a target validation metric valence (e.g., positive, negative, etc.), correlate the typicality metric with the validation metric, and/or otherwise selected. In an example, a statistical analysis of training data can be used to select attributes that have a nonzero statistical relationship (e.g., correlation, interaction effect, etc.) with the validation metric (e.g., positive or negative correlation with AVM error). In a specific example, the attribute selection model can be trained such that a typicality metric determined using the selected attributes correlates with maximal AVM error (e.g., properties identified as atypical based on attribute values for the selected attributes have maximal AVM error relative to other properties), wherein the AVM error is determined based on historical data (e.g., historical sale data) and a valuation calculation using the AVM. In a second variant, the set of attributes is selected using a combination of an attribute selection model and a supplemental validation method. For example, the supplemental validation method can be any explainability and/or interpretability method (e.g., described in S600), wherein the selection method determines the effect an attribute has on the typicality metric. When this effect is incorrect or introduces biases (e.g., based on a manual determination using domain knowledge, based on a comparison with a validated typicality model, etc.), the attribute selection and/or the typicality model can be adjusted. In a third variant, the set of attributes can be selected to include all available attributes. 
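As a simplified sketch of the first variant of attribute selection described above (selecting attributes whose values have a statistical relationship with a validation metric such as AVM error), the following snippet ranks candidate attributes by correlation strength; the correlation measure, the top-k cutoff, and the variable names are illustrative assumptions, and any of the other selection methods described above could be substituted.

```python
import numpy as np

def select_attributes_by_correlation(attribute_matrix, attribute_names, avm_error, top_k=10):
    """Rank candidate attributes by the strength of their correlation with a
    validation metric (here, per-property AVM error) and keep the top_k.

    attribute_matrix: (n_properties, n_candidate_attributes)
    avm_error: (n_properties,) validation metric values
    """
    attribute_matrix = np.asarray(attribute_matrix, dtype=float)
    avm_error = np.asarray(avm_error, dtype=float)
    correlations = []
    for j in range(attribute_matrix.shape[1]):
        col = attribute_matrix[:, j]
        if col.std() == 0 or avm_error.std() == 0:   # constant columns carry no signal
            correlations.append(0.0)
        else:
            correlations.append(abs(np.corrcoef(col, avm_error)[0, 1]))
    ranked = np.argsort(correlations)[::-1][:top_k]
    return [attribute_names[j] for j in ranked]
```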
In a fourth variant, the set of attributes can be manually selected. However, the attribute set can be otherwise selected. However, attribute values can be otherwise determined. Determining a reference population for the property S300can function to determine a set of reference properties for comparison against the property. S300can be performed in response to S100(e.g., determining a reference population for the property determined in S100), after S100, asynchronously from S100, iteratively (e.g., determining a reference population for each of a set of properties; assigning each of a set of properties to a reference population; etc.), and/or at any other suitable time. The reference population can include one or more reference properties (e.g., including or not including the property). In a first example, the number of reference properties can be greater than a threshold number of properties, wherein the threshold can be between 5-100,000 properties or any range or value therebetween (e.g., 10, 50, 100, 1000, 10000, etc.), but can alternatively be less than 5 (e.g., a single property) or greater than 100,000. In a specific example, the threshold number of properties can be determined such that statistical significance can be achieved in all or parts of S500(e.g., in a statistical analysis performed on the reference population). In a second example, the number of reference properties is predetermined. In a specific example, the reference population includes a predetermined number of properties (e.g., 10, 100, 1000, 10000, etc.) that best satisfy a criterion (e.g., the 100 properties closest to the property). In a third example, the reference population includes all properties (e.g., any number of properties) that satisfy a set of criteria. In a fourth example, the reference population includes a single property. However, the reference population can include any number of properties. The properties in the reference population can be predetermined, dynamically determined (e.g., each iteration of all or parts of the method, for each new property determined in S100, when new property information is received, upon request, etc.), and/or otherwise determined. The reference properties can be determined based on: a location (e.g., side of street, block, zip code, neighborhood, city, radius around the property, census block group, development/subdevelopment, any geographic region, etc.), attribute value (e.g., associated with the reference properties and/or the property), property information (e.g., associated with the reference properties and/or the property), received requests (e.g., attribute values and/or property information determined from a request), the typicality model (e.g., a different reference population for different typicality models), and/or any other parameters. The reference properties in the reference population can be determined (e.g., identified) using S100methods and/or can be otherwise determined. In a first variant, determining the reference population includes identifying properties that satisfy one or more criteria, wherein all or a subset of properties that satisfy the criteria can be selected as the reference population. 
The criteria can include: the reference property is associated with (e.g., matches) a predetermined parameter value (e.g., location, attribute values, measurements, reference population type, other property information, etc.), the reference property satisfies a threshold criterion relative to a parameter (e.g., physically located within a threshold distance of the property, attribute vector is within a threshold distance of the property attribute vector, etc.), the reference property is a best/closest match with respect to a parameter (e.g., the one or more properties physically located closest to the location of the property, the one or more properties with attribute values closest to attribute values for the property, etc.), and/or any other criterion. The criteria and/or a parameter used in the criteria can be predetermined (e.g., a default criterion), manually determined, determined via a received request, determined based on the property, and/or otherwise determined. Examples of attribute values that can be used in a criterion include: location (e.g., geographic region, side of street, block, zip code, neighborhood, city, geofence, radius around the property, census block group, development/subdevelopment, etc.), location characteristic (e.g., rural, suburban, city, city greater than a threshold size, city less than a threshold size, distance to a city, distance to a school, etc.), property type/classification, property components (e.g., parcel size, number of bedrooms, number of bathrooms, roof type, roof condition, etc.), record attributes (e.g., built year, bed/baths, etc.), and/or any other attribute value. In a first example of an attribute value criterion, the reference population includes reference properties that are located within a threshold radius of a location associated with the property. The threshold radius can be between 1 mile-100 miles, or any range or value therebetween (e.g., 1 mile, 2 miles, 5 miles, 10 miles, 20 miles, 50 miles, etc.), but can alternatively be less than 1 mile or greater than 100 miles. In a second example of an attribute value criterion, the reference population includes reference properties with attribute values for a set of attributes that match corresponding attribute values for the property (e.g., exactly equal, closest match, match within a threshold similarity, etc.). In a specific example, the set of attributes can be attributes retrieved from a third-party database (e.g., number of beds and/or baths, construction style, year built, parcel size, building size, etc.). In a third example of an attribute value criterion, a combination of attribute value criteria can be used. In an illustrative example, all properties with a similar number of bedrooms that are also within a threshold radius of the property are selected as the reference population. In an example of a measurement criterion, all or a subset of properties depicted in an image are selected as the reference properties. The image can be an image associated with the property (e.g., wherein the property is depicted at substantially the center of the image, wherein the image depicts a geographic region associated with the property, etc.), an image submitted via a request, an image retrieved from a database, and/or any image. In a second variant, determining the reference population includes selecting properties to maximize and/or minimize a metric. In a first embodiment, the reference properties are selected to maximize/minimize the typicality metric for the property. 
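A minimal sketch of the criteria-based selection in the first variant above, combining a threshold-radius criterion with an attribute-value criterion (bedroom count), is shown below; the record layout, the haversine distance approximation, and the specific thresholds are illustrative assumptions.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points in miles."""
    r = 3958.8  # mean Earth radius in miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def select_reference_population(property_rec, candidates, radius_miles=5.0, bedroom_tolerance=1):
    """Keep candidates within a threshold radius whose bedroom count is within
    a tolerance of the property of interest (a combined attribute-value criterion).

    Each record is assumed to be a dict with 'lat', 'lon', and 'bedrooms' keys.
    """
    return [c for c in candidates
            if haversine_miles(property_rec["lat"], property_rec["lon"], c["lat"], c["lon"]) <= radius_miles
            and abs(c["bedrooms"] - property_rec["bedrooms"]) <= bedroom_tolerance]
```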
In a specific example, the reference population is iteratively refined to maximize the typicality metric. In an illustrative example, the reference properties that maximize the typicality metric are real estate comparables (‘comps’) for the property. In a second embodiment, the reference properties are selected to maximize/minimize a validation metric (e.g., AVM error, historical valuation, manual labeling, price discrepancy, days on market, insurance loss ratio, etc.). In a third variant, the reference population is determined using a reference population selection model. The input to the reference population selection model can be the property determined via S100(e.g., a property identifier), attribute values (e.g., associated with the property), property information (e.g., associated with the property), and/or any other suitable inputs. The output of the reference population selection model can be one or more reference properties. The reference population selection model can be or include: neural networks, equations, regression, classification, segmentation algorithms, rules, heuristics, instance-based methods, regularization methods, decision trees, Bayesian methods, kernel methods, statistical methods, deterministics, support vectors, genetic programs, isolation forests, robust random cut forest, clustering, selection and/or retrieval, comparison models, object detectors, key point extraction, SIFT, any computer vision and/or machine learning method, and/or any other suitable model. In a specific example, the reference population selection model can be trained on a set of training data, wherein the training data includes: a property (e.g., a property identifier, attribute values, property information, etc.) and a set of reference properties. The model can be trained to predict the set of reference properties based on the property (e.g., based on: the property identifier, attribute values, property information, etc.). In a specific example, the training data includes a set of available properties, wherein the set of reference properties is a labeled subset (e.g., manually labeled) within the set of available properties. However, the reference population can be otherwise determined. Determining reference population attribute values S400functions to determine property-specific values and/or population specific values associated with the reference population. S400can be performed after S300(e.g., in response to determination of the reference population), prior to S300, and/or at any other time. In a specific example, attribute values can be determined for each reference property in the reference population prior to S300(e.g., performed for all properties in a database, wherein the reference population is a subset of the properties in the database), wherein determining the reference population attribute values based on the individual reference property attribute values can occur after S300. 
The reference population attribute values can include a set of attribute values (e.g., attribute vector) for each property within the reference population, an aggregate attribute value set (e.g., including a single aggregated value for each attribute in a set of attributes, wherein the aggregate value is based on the corresponding attribute values for each reference property), a set of attribute values derived from reference population information (e.g., directly extracted from measurements of a geographic region associated with the reference population), a combination thereof, and/or can be otherwise configured. The reference population attributes and/or attribute values are preferably analogous to the property attributes and/or attribute values (e.g., to enable comparison), but can alternatively be non-analogous to the property attributes. In a first example, the reference population attribute values and the property attribute values can each correspond to the same set of attributes (e.g., the reference population attribute vector is analogous to the property attribute vector; the reference population attribute vector includes the same attributes, optionally in the same order, as the property attribute vector, etc.). In a first variant, S400includes determining attribute values for each property within the reference population (e.g., iteratively performing S200for each reference property). Determining reference population attribute values can optionally include generating a set (e.g., an array) of attribute vectors (e.g., one attribute vector for each reference property). When measurements associated with the reference properties are used to determine the attribute values, the measurements can be from the same or different timeframes as measurements used to determine attribute values for the property (e.g., in S200). In a second variant, S400includes performing the first variant and aggregating the attribute values for all or a subset of the reference properties to generate the reference population attribute values; example shown inFIG.5AandFIG.9. In a first example, aggregating the attribute values includes taking an average (e.g., weighted average) of attribute values for the reference properties (e.g., averaging attribute values across the reference properties for each attribute in a set of attributes; averaging reference property attribute vectors; etc.). In a first specific example, each attribute value is weighted based on an uncertainty parameter for the attribute value. In a second specific example, each attribute vector is weighted based on a weight for the associated reference property (e.g., wherein weights can be based on physical distance to property, based on similarity to criteria parameters in S300, etc.). In a second example, aggregating the attribute values includes determining a statistical measure (e.g., variance, standard deviation, interquartile range, range, maximum/minimum, etc.) for one or more attributes (e.g., based on the attribute values for each reference population). In a third example, aggregating the attribute values includes clustering the reference property attribute values (e.g., based on attribute values and/or any suitable parameter). In a fourth example, aggregating the attribute values includes selecting the one or more most common attribute values and/or attribute vectors to represent the reference population. In a fifth example, a combination of one or more of the previous examples can be used. 
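As an illustrative sketch of the second variant above (aggregating per-reference-property attribute values into reference population attribute values), the following snippet computes an uncertainty-weighted average together with a simple per-attribute spread; the weighting scheme is an assumption, and any of the aggregation examples above could be substituted.

```python
import numpy as np

def aggregate_reference_population(attribute_vectors, uncertainties=None):
    """Aggregate per-reference-property attribute vectors into one reference vector.

    attribute_vectors: (n_reference_properties, n_attributes)
    uncertainties: optional array of the same shape; lower-uncertainty values
    receive a higher weight in the per-attribute weighted average.
    Also returns a per-attribute standard deviation as a simple distribution metric.
    """
    vecs = np.asarray(attribute_vectors, dtype=float)
    if uncertainties is None:
        weights = np.ones_like(vecs)
    else:
        weights = 1.0 / (1.0 + np.asarray(uncertainties, dtype=float))
    aggregate = (weights * vecs).sum(axis=0) / weights.sum(axis=0)
    spread = vecs.std(axis=0)
    return aggregate, spread
```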
In a specific example, a first set of attribute values can be aggregated using a first method, and a second set of attribute values can be aggregated using a second method. For example, attribute values corresponding to property components can be averaged, while uncertainty parameters for attribute values are aggregated using statistical methods (e.g., to result in an overall uncertainty parameter for the average attribute value). However, attribute values can be otherwise aggregated. In a third variant, S400includes determining attribute values directly for the reference population (e.g., without determining attribute values for individual reference properties). In a first example, the attribute values are determined based on an image associated with the reference population (e.g., an image depicting all or a subset of the reference properties); example shown inFIG.5B. In a specific example, attribute values can be extracted directly from the overall image using an attribute model (e.g., without segmenting individual properties). In a second example, the attribute values can be determined based on property information for the reference population (e.g., for the reference population as a whole). In a specific example, the property information can be retrieved from a database (e.g., geographic region information, reference population type, etc.). In a third example, a combination of property information and measurements can be used to determine the reference population attribute values. The reference population attribute values are preferably associated with the same timeframe as the attribute values for the property (e.g., determined in S200), but can alternatively be from a different timeframe. The duration of the timeframe can vary as a function of: the number of available reference properties, the geographic spread of the reference properties, and/or based on any other suitable variable. When measurements are used to determine reference population attribute values, the measurements can optionally be within a smaller timeframe for a smaller reference population (e.g., a smaller number of properties, a smaller geographic region, etc.) relative to a timeframe for a larger reference population. Alternatively, a larger timeframe can be used for a smaller reference population, the timeframes can be not associated with reference population size, and/or the measurements can be otherwise configured. However, attribute values for the reference population can be otherwise determined. Determining a typicality metric for the property S500can function to determine how typical (e.g., similar, comparable, representative, etc.) or atypical (e.g., unique, complex, outlier, etc.) the property is with respect to the reference population. S500can be performed after S200and S400, be iteratively performed (e.g., for each property in a database, for each property in a set, for each of a set of reference properties with respect to a single property, for multiple sets of reference properties, etc.), and/or at any other suitable time. In a specific example, S500can be performed for a single property. In a second specific example, S500can be performed for each property in a set, wherein the reference population for each iteration includes the other properties in the set. The typicality metric can be stored in association with the property (e.g., in a database); returned via a user device, API, GUI, or other endpoint; and/or otherwise managed. 
The typicality metric can be a label, classification, score, value, statistical measure, and/or any parameter. The typicality metric can be discrete, continuous, binary, multiclass, and/or otherwise structured. The typicality metric can additionally or alternatively include an uncertainty parameter (e.g., variance, confidence score, etc.). The typicality metric and/or an uncertainty parameter for the typicality metric can optionally be determined based on variance and/or co-variance (e.g., variance in attribute values across reference properties, covariance between attributes, a covariance matrix based on attribute values for each reference property, etc.). In a first variant, one or more attribute values and/or attribute vectors can be normalized based on the variance of the reference population (e.g., normalized prior to determining typicality metric, during typicality metric determination, etc.). In examples, an attribute vector for each reference property is normalized, an aggregate attribute vector for the reference population is normalized, an attribute vector for the property of interest is normalized, and/or any other normalization can be performed. In a second variant, the typicality metric can be adjusted (e.g., normalized, increased/decreased, etc.) based on variance and/or co-variance. For example, the typicality metric can account for co-variance between attributes. The typicality metric is preferably determined using a typicality model, but can alternatively be otherwise determined. The outputs for the typicality model can be: a typicality metric, an uncertainty parameter, and/or any other suitable information. Inputs to the typicality model can include: attribute values (e.g., attribute vectors) for the property and/or the reference population, property information (e.g., measurements, auxiliary data, parcel data, etc.) for the property and/or the reference population, uncertainty parameters for the attribute values, and/or any other suitable inputs. In a first variant, the input includes an attribute vector for the property (e.g., determined via S200) and one or more attribute vectors for the reference population (e.g., determined via S400). In a second variant, the input includes a measurement (e.g., image) associated with the property and/or one or more measurements associated with the reference population. In a first example, the input includes analogous images (e.g., acquired during similar timeframes, acquired using similar methods, associated with similar image properties, etc.) for the property and for each reference property. In a second example, the input includes an image associated with the property and a composite image associated with the reference population. In specific examples, the composite image can be aggregated images for each reference property, an image of a geographic region depicting the reference properties, and/or any other image. In a third variant, the input includes both attribute vectors and measurements. One or more inputs can optionally be weighted (e.g., weighting inputs associated with each reference property based on a weight for the associated reference property; weighting attribute values and/or attribute vectors based on associated uncertainty parameters; weighting attribute values based on attribute importance, etc.). 
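A simplified sketch of the normalization and input weighting described above is shown below; treating uncertainty and attribute importance as per-attribute weight factors, and normalizing by the reference population's per-attribute spread, is one possible scheme among many.

```python
import numpy as np

def weight_and_normalize(property_vec, reference_vecs, uncertainties=None, importance=None, eps=1e-9):
    """Prepare typicality-model inputs: normalize by the reference population's
    per-attribute spread (so differently scaled attributes contribute comparably),
    then apply per-attribute weights derived from uncertainty and/or importance.
    """
    ref = np.asarray(reference_vecs, dtype=float)
    prop = np.asarray(property_vec, dtype=float)
    mean, std = ref.mean(axis=0), ref.std(axis=0) + eps
    prop_n, ref_n = (prop - mean) / std, (ref - mean) / std
    weights = np.ones(prop.shape[0])
    if uncertainties is not None:   # down-weight uncertain attribute values
        weights = weights / (1.0 + np.asarray(uncertainties, dtype=float))
    if importance is not None:      # up-weight attributes known to matter more
        weights = weights * np.asarray(importance, dtype=float)
    return weights * prop_n, weights * ref_n
```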
In specific examples, attribute importance can be determined in prior iterations of all or parts of the method (e.g., for the same or different properties), determined via S600methods, determined during attribute selection, and/or otherwise determined. The typicality model can optionally be trained (e.g., using supervised learning, unsupervised learning, semi-supervised learning, etc.). Training the typicality model can include: adjusting attributes (selecting attributes), adjusting weights (for properties, for attributes, for attribute vectors, for attribute values, etc.), refining the reference population, adjusting uncertainty parameter determination (e.g., for attribute values, for attribute vectors, for the typicality metric, etc.), adjusting comparison methods (e.g., distance calculations, clustering methods, etc.), and/or otherwise training the typicality model. In an example, the typicality model can be trained such that the typicality metric is correlated with a validation metric (e.g., AVM error, historical valuation, manual labeling, price discrepancy, days on market, a combination thereof, etc.). In an illustrative example of training the typicality model, training data includes: training inputs (e.g., attribute vectors, measurements, etc.) for a property and a reference population, and a validation metric for the property (e.g., with respect to the reference population). In this example, the typicality model can be trained to predict the validation metric (or trained to output a typicality metric correlated with the validation metric) based on the training inputs (e.g., example shown inFIG.10). Additionally or alternatively, the typicality model can be validated using the validation metric. The validation metric can optionally be determined using a model (e.g., model for typicality proxy), wherein the model outputs the validation metric for a property based on property information (e.g., historical validation data for the property, AVM outputs for the property, etc.). The typicality metric is preferably determined based on a comparison between the property attribute values and the reference population attribute values (e.g., which can include a single attribute value vector for the population or a plurality of attribute value vectors), but can be otherwise determined. The comparison can include: a statistical measure (e.g., leverage, which quartile the property attribute values fall into, etc.), a distance (e.g., between the property attribute vector and an aggregate reference population attribute vector; between the property attribute vector and a set of reference property attribute vectors; etc.), and/or any other suitable comparison. Examples of distances that can be used include: Bregman divergences (e.g., Mahalanobis distance), Bhattacharyya distance, Hamming distance (e.g., wherein the attribute vectors are treated as strings), Hellinger distance, models trained to determine a distance metric (e.g., using similarity learning or metric learning), a distance derived from a Gaussian mixture model, a multi-modal distance algorithm, and/or any other suitable distance or method. Additionally or alternatively, a trained model (e.g., a trained black box model) takes in property information (e.g., measurements, attribute values, etc.) for the property and for the reference population (e.g., for each reference property, for the entire reference population as a whole), and outputs the typicality metric. 
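One possible sketch of training a typicality model (or typicality proxy) against a validation metric, as described above, is shown below; the choice of a gradient-boosted regressor and the use of historical AVM error as the training target are illustrative assumptions, not the only options.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def train_typicality_proxy(attribute_vectors, avm_errors):
    """Train a model whose output tracks a validation metric (here, AVM error),
    so its prediction can serve as, or be correlated with, a typicality metric:
    properties predicted to have high AVM error are treated as less typical.

    attribute_vectors: (n_training_properties, n_attributes)
    avm_errors: (n_training_properties,) historical AVM error per training property
    """
    model = GradientBoostingRegressor(random_state=0)
    model.fit(np.asarray(attribute_vectors, dtype=float), np.asarray(avm_errors, dtype=float))
    return model

# Usage sketch: a higher predicted AVM error implies lower typicality.
# predicted_error = model.predict(new_property_vec.reshape(1, -1))[0]
```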
In a first variant, S500includes determining the typicality metric based on an attribute vector comparison. In this variant, the attribute values for the property and for the reference population can each include one or more vectors (e.g., array, set, point in n-dimensional feature space, etc.). The property attribute vector(s) and the reference population attribute vector(s) can then be compared; examples shown inFIG.6AandFIG.6B. The typicality metric can be a comparison output (e.g., the distance metric, the variance thereof, whether the property is an outlier or not, etc.) or be determined from the comparison output (e.g., calculated, scaled, binned into a labeled bin, etc.). In a specific example of determining the typicality metric from the comparison output, a distance metric is determined for each of a set of properties, wherein the properties are binned based on the associated distance metrics (e.g., binned into discrete groups 1-10). The number of bins can be between 1-1000 or any range or value therebetween (e.g., 100, 50, 10, 5, etc.). The binning can be uniformly distributed (e.g., decile groups with an equal number of properties in each bin), nonuniformly distributed, normally distributed, and/or have any other distribution. The distribution can be across the properties (e.g., the same number of properties are in each bin), by distance metric value (e.g., each bin encompasses the same number of metric values), and/or across any other suitable dimension. Each bin preferably corresponds to a typicality metric (e.g., wherein each property in the bin has the same typicality metric), but alternatively the typicality metric can be otherwise determined based on the bin. For example, properties with the longest distance metrics are binned into group 10 with a typicality metric of 10, and properties with the shortest distances are binned into group 1 with a typicality metric of 1. However, the typicality metric can be otherwise determined from the comparison output. In variants, a model (e.g., ML model, neural network, etc.) can be trained to predict the bin based on the attribute values for a given property (e.g., wherein the model can be trained on attribute value sets paired with bin identifiers determined using the method described herein). In a first embodiment of the first variant, the distance between vectors is determined using: Mahalanobis distance, cosine distance, Euclidean distance (e.g., dimensionality reduction and Euclidean distance, Euclidean distance in feature space, etc.), and/or any other method. For example, the distance can be between a property attribute vector and an aggregate reference population attribute vector (e.g., vector of averaged attribute values). The distance can optionally be based on a distribution metric associated with the reference population (e.g., wherein the distribution metric is determined in S550). In a first example, the distance can be determined in an n-dimensional feature space. In a first specific example, the feature space is determined based on the distribution metric (e.g., an axis of feature space is determined to maximize variance along that axis). In a second specific example, the feature space is learned. In a second example, the aggregate reference population attribute vector can be based on the distribution metric (e.g., the reference property attribute vectors are aggregated based on variance and/or co-variance across attribute values). 
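The decile-style binning of distance metrics described earlier in this variant can be sketched as follows; the equal-count binning and the convention that the highest-numbered bin holds the longest distances follow the example above, while the bin count is configurable.

```python
import numpy as np

def bin_distances_to_typicality(distances, n_bins=10):
    """Convert per-property distance metrics into discrete typicality scores.

    Properties are split into bins with approximately equal counts, so the
    score is a decile-style rank: bin n_bins holds the longest distances
    (least typical), bin 1 the shortest (most typical).
    """
    distances = np.asarray(distances, dtype=float)
    order = distances.argsort().argsort()            # rank of each property
    bins = (order * n_bins) // len(distances) + 1    # 1..n_bins, roughly equal counts
    return bins

scores = bin_distances_to_typicality([0.4, 2.7, 1.1, 0.2, 5.3], n_bins=5)  # -> [2, 4, 3, 1, 5]
```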
In a third example, the distribution metric can be used to determine whether the property vector is an outlier (e.g., based on the vector distance), wherein the distribution metric is a statistical analysis of the reference population (e.g., standard deviation, IQR, etc.). In a fourth example, non-normal distributions (e.g., for the reference population, for one or more attributes, etc.) can be accounted for (e.g., using a nonparametric typicality model) based on the distribution metric. In a specific example, the reference property attribute vectors can form a multimodal distribution. In this example, the distance can be based on a Gaussian mixture model (e.g., a mixture of Gaussian distributions representing the overall distribution), any multi-modal distance algorithm, a distance between property attribute vector and one or more representative reference population attribute vectors, and/or otherwise determined. In examples, the representative reference population attribute vector can be: an attribute vector representing a single mode within the distribution, an attribute vector representing a subpopulation of the reference properties (e.g., a centroid of a distribution of the subpopulation, wherein the subpopulation is determined using a Gaussian mixture model and/or any other mixture model), and/or any other representative reference population attribute values. In an illustrative example, the typicality metric can be the shortest distance of the set of distances between the property attribute vector and each of a set of representative reference population attribute vectors (e.g., the distance is the typicality of the property relative to the closest matching subpopulation of reference properties). In this illustrative example, the subpopulation can be an identified subset of reference properties similar to the property (e.g., used as valuation ‘comps’). In a second embodiment of the first variant, clustering is used, wherein the typicality metric is determined based on the distance between the property attribute values and the cluster of reference property attribute values (e.g., cluster centroid, cluster values, etc.); example shown inFIG.8AandFIG.8B. Each property (e.g., the property determined via S100and the reference population properties determined via S300) can be associated with a set of attribute values occupying one or more points in n-dimensional space. In one example, S500includes using a density-based clustering algorithm, local outlier factor, one-class SVM (e.g., wherein the typicality metric is based on the decision function of a one-class SVM), K nearest neighbors algorithm, DBSCAN, locality-sensitive hashing, isolation forest, and/or any other algorithm. In another example, the reference population properties are clustered based on one or more classifications (e.g., house type, residential area class, zoning class, etc.), wherein the property can be compared against the one or more clusters to produce one or more typicality metrics. The reference population cluster can optionally include multiple centroids (e.g., indicative of multiple property archetypes), wherein the typicality metric can be determined based on a distance to one or more of the centroids. In a third embodiment of the first variant, the typicality metric can be determined using isolation forests and/or any other decision tree method. For example, an isolation forest can be generated based on the property attribute values (e.g., the property and reference population attribute values). 
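A minimal sketch of this third embodiment using an isolation forest is shown below (here via scikit-learn's IsolationForest, whose score_samples output is derived from average isolation path lengths); fitting on the combined property and reference population attribute vectors is one possible setup among the alternatives described here.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def isolation_forest_typicality(property_vec, reference_vecs, random_state=0):
    """Typicality from an isolation forest fit on the combined population.

    score_samples is derived from the average path length needed to isolate a
    point: atypical properties are isolated in fewer splits and receive lower
    (more negative) scores, while typical properties receive higher scores.
    """
    ref = np.asarray(reference_vecs, dtype=float)
    prop = np.asarray(property_vec, dtype=float).reshape(1, -1)
    forest = IsolationForest(random_state=random_state).fit(np.vstack([ref, prop]))
    return float(forest.score_samples(prop)[0])
```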
In a first specific example, the typicality metric for a property can be based on the path length of the property attribute values (e.g., average path length for the property across a multi-dimensional isolation forest, a path length for the property relative to the average path length for the isolation forest, etc.). In a second specific example, the typicality metric can be an overall typicality metric for a population, wherein the typicality metric is a statistical measure of the path lengths in the isolation forest (e.g., average path length, median path length, a distribution of path lengths, etc.). In any variant, attribute values can be transformed (e.g., to project attribute value vectors from n-dimensions to 2-dimensions prior to vector comparison). The transformation can be performed via dimensionality reduction, projection, rescaling, principal component analysis, using a distribution metric (e.g., based on variance and/or co-variance), using embedding methods, and/or otherwise transformed to a feature space (e.g., metric space). In a first specific example, the attribute values can be transformed to an n-dimensional space wherein at least one axis in the n-dimensional space is selected to be the axis of greatest variance. In a second specific example, the attribute values can be transformed to an embedding (e.g., a learned embedding, a common embedding, etc.). In this example, the embedding can be a space where attribute values for similar properties (e.g., the reference properties, comparables, etc.) are embedded near each other, wherein the typicality metric can be determined based on embedding density (e.g., a local density at the embedding location for the property attribute values), a distance in the embedding (e.g., distance between the property attribute values and a centroid of the reference population attribute values, a distance between the property attribute values and the nearest reference property attribute values), and/or any other typicality metric. In a third variant, S500includes an image-to-image comparison (e.g., example shown inFIG.7). This can be performed with or without calculating attribute values for the property and/or the reference population (e.g., attribute values are not directly extracted from images/segmented images in S200and S400). In one embodiment, S500includes generating a composite image for the reference population, wherein the property image is compared against the composite image. The image-to-image comparison can use key point matching, image transforms, perceptual hash, image feature histograms, and/or any other image analysis technique. In another embodiment, images for one or more properties can be inputs in a deep learning model. In an example, the deep learning model can be trained such that the typicality metric is correlated with a validation metric (e.g., based on historical valuation data, AVM error, etc.). S500can additionally or alternatively feature any other image analysis/comparison/evaluation method in determining the typicality metric. In a fourth variant, a neural network classifier is used to determine the typicality metric. The classifier can ingest attribute values (e.g., attribute vectors) associated with the property and the reference population, then output a typicality metric (e.g., typicality class-typical vs atypical; typicality metric, etc.). The classifier can be trained using a validation metric and/or any other suitable data. 
For example, each training property can be assigned a typicality metric based on AVM error, wherein the classifier can be trained to predict the assigned typicality metric based on the training property's attribute values. However, the classifier can be otherwise trained. In a fifth variant, determining the typicality metric can include: determining a vector representative of the appearance and/or geometry for a property of interest (e.g., from property measurements, such as imagery and/or geometric measurements; using a trained encoder; etc.); determining a vector representative of the appearance and/or geometry for each of a set of comparison properties; and determining the typicality metric for the property of interest relative to the set of comparison properties based on the respective vectors (e.g., by calculating a similarity score, etc.). The vectors can be extracted by the same or different models. In examples, the model can be trained to output the same vector for a given property, irrespective of common or transient changes (e.g., changes in lighting, shadows, tree coverage, transient debris, etc.). In an example, the model can be trained using the methods disclosed in U.S. application Ser. No. 18/074,295 filed 2 Dec. 2022 titled "System and Method for Change Analysis" and claiming priority to U.S. Provisional Application No. 63/290,174 filed 16 Dec. 2021 and/or U.S. Provisional Application No. 63/350,124 filed 8 Jun. 2022, incorporated herein in its entirety by this reference, wherein the vectors can be the vectors disclosed in the referenced application. However, other models can be used. However, the typicality metric can be otherwise determined. The method can optionally include determining a distribution metric S550, which can function to determine a metric associated with the distribution of attribute values for a population of properties (e.g., wherein the distribution metric can optionally be used in S500). The population is preferably the reference population but can be another property population (e.g., a subset of the reference population). The distribution metric can include: interquartile range, modality, spread, standard deviation, range, variance (e.g., covariance; a covariance matrix; an axis of greatest variance; etc.), a classification, cluster metrics, geometric parameters (e.g., ellipse/ellipsoid: axes, height/width, center point, foci, etc.) of the attribute values (e.g., in reduced dimensions), maximum/minimum, any statistical measure, and/or any other suitable metric. In a specific example, the typicality metric for a property is determined based on a distribution metric for the reference population (e.g., the typicality metric accounts for variance/covariance; the distribution metric can be used to determine whether the property is an outlier; etc.). However, distribution metrics can be otherwise determined and used. The method can optionally include identifying reference properties (e.g., a subset of the reference population) that are similar to the property, wherein the identified reference properties can be used as comparables (e.g., ‘comps’ for valuation or insurance quoting), used as an updated reference population (e.g., a refined reference population) for future iterations of all or parts of the method, stored in a database, returned to a user (e.g., via a user device, API, GUI, other endpoint, etc.), used to train the reference population selection model, and/or otherwise used. 
In a first specific example, the subset of reference properties can be used to determine an estimated valuation for the property (e.g., based on known valuation information for the subset of reference properties). In a second specific example, one or more reference property subsets can be used to determine a feature space (e.g., an embedding) for property attribute values, wherein the attribute values for reference properties in a subset are close together in the feature space. For example, the feature space can be trained to minimize the distance between attribute values for reference properties (e.g., comparables) in a reference property subset. The subset of reference properties can be determined based on: an attribute value comparison between the property and one or more of the reference properties (e.g., deemed similar when the attribute vector distance is below a threshold value, using any comparison method in S500, etc.), based on one or more typicality metrics, based on selected attributes (e.g., attributes selected using the attribute selection model), and/or otherwise determined. In a first specific example, typicality metrics can be iteratively calculated for subsets of the reference population, wherein the subset with the highest typicality metric is selected as similar to the property (e.g., selected as comparables). In a second specific example, a typicality metric can be determined for each of a set of properties (e.g., for each reference property, for each property in a geographic region, etc.), wherein properties with similar typicality metrics are selected as similar. An uncertainty parameter can optionally be associated with the subset of reference properties (e.g., wherein the uncertainty parameter can represent a predicted similarity between the valuation of a property of interest and the valuation of the subsets of reference properties). The uncertainty parameter can be determined based on: the typicality metric for the property of interest, the typicality metric for each property in the subset of reference properties (e.g., a comparison between the typicality metric for the property of interest and the typicality metrics for the subset of reference properties), and/or otherwise determined. The uncertainty parameter can be determined using probabilistic uncertainty analysis (e.g., Monte Carlo methods), deterministic uncertainty analysis, and/or any other uncertainty quantification methods. However, a subset of reference properties can be otherwise identified. The method can optionally include determining an influential attribute S600. S600can function to explain a typicality metric (e.g., why a property is typical or atypical, what attribute(s) are causing the typicality metric model to output a typicality metric indicating that the given property is atypical, etc.). S600can occur automatically (e.g., for each property), in response to a request, when a typicality metric falls below or rises above a threshold, and/or at any other time. S600can use explainability and/or interpretability techniques to identify property attributes and/or attribute interactions that had the greatest effect in determining a given typicality metric. The influential attribute(s) (e.g., key attribute(s)) and/or values thereof can be provided to a user (e.g., to explain why the property is atypical), used to identify errors in the data, used to identify ways of improving the typicality model and/or the attribute selection model, and/or otherwise used. 
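As one hedged illustration of S600, an influential attribute can be identified with permutation importance, one of the interpretability methods enumerated in the following paragraph; the fitted typicality model, the attribute names, and the placeholder data below are assumptions for illustration only.

```python
# Hedged sketch of S600 via permutation importance: shuffle each attribute in
# turn and measure the drop in the typicality model's score; larger drops mark
# more influential attributes. Model choice, attribute names, and data are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

attribute_names = ["square_footage", "lot_size", "pool_area", "roof_age"]  # assumed attributes
rng = np.random.default_rng(0)
X = rng.random((200, len(attribute_names)))
y = (rng.random(200) > 0.9).astype(int)  # placeholder typicality labels

typicality_model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(typicality_model, X, y, n_repeats=10, random_state=0)
ranked = sorted(zip(attribute_names, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
print("Most influential attribute:", ranked[0])
```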
S600can be global (e.g., for one or more typicality metric models used in S500) and/or local (e.g., for a given property and/or property attribute values). S600can include any explainability and/or interpretability method, including: local interpretable model-agnostic explanations (LIME), Shapley Additive exPlanations (SHAP), Anchors, DeepLift, Layer-Wise Relevance Propagation, contrastive explanations method (CEM), counterfactual explanation, Protodash, Permutation importance (PIMP), L2X, partial dependence plots (PDPs), individual conditional expectation (ICE) plots, accumulated local effect (ALE) plots, Local Interpretable Visual Explanations (LIVE), breakDown, ProfWeight, Supersparse Linear Integer Models (SLIM), generalized additive models with pairwise interactions (GA2Ms), Boolean Rule Column Generation, Generalized Linear Rule Models, Teaching Explanations for Decisions (TED), surrogate models, attribute summary generation, and/or any other suitable method and/or approach. In an example, one or more high-lift attributes for a property typicality metric determination are returned to a user. Any of these interpretability methods can alternatively or additionally be used in selecting attributes. However, one or more influential attributes can be otherwise determined. 6. Use Cases All or portions of the methods described above can be used for automated property valuation, for insurance purposes, for rental analysis (e.g., rental value, vacancy estimation, renovation costs, etc.), and/or otherwise used. For example, any of the outputs discussed above (e.g., for the property, for the reference population, etc.) can be provided to an automated valuation model (AVM), which can predict a property value based on one or more of the attribute values (e.g., feature values), generated by the one or more models discussed above, and/or attribute value-associated information. The AVM can be: retrieved from a database, determined dynamically, and/or otherwise determined. In examples, the typicality metric can be used to determine a property metric and/or used as a proxy for a property metric. In particular, the typicality metric can determine or infer: AVM error (e.g., properties with high atypicality have high AVM error; properties with low atypicality have low AVM error, etc.), insurance loss ratio (e.g., insurance loss divided by premium), property investment risk, days on the market, valuation certainty, an estimated valuation, price discrepancy (e.g., from another property, from an average price), a combination thereof, and/or any other value or assessment. The typicality metric can optionally be used with: personal lines insurance (e.g., rating, inspection optimization, etc.), real estate property investing (e.g., identify underpriced and/or overpriced properties, i.e., atypically good and/or atypically bad; determine risk, etc.), real estate loan trading (e.g., use typicality metric as an input into a model that predicts probability of default; at the pre-bid analysis stage and/or the due diligence stage for Non Performing Loan workflows; etc.), real estate mortgage origination, real estate valuations (e.g., use typicality metric as an input to an automated valuation model—as a warning, trigger, induce a workflow change, etc.; use method to identify 'comps'; use typicality metric as a supplement to a property-level valuation report; etc.), initial property assessment stage (e.g., for Single Family Rental workflows), appraisal review (e.g.
use method to test validity of ‘comps’), insurance loss (e.g., improving risk prediction for loss severity and/or incurring a loss by using a typicality metric calculated based on property attributes associated with that loss type), and/or otherwise used. In a specific example, the method can identify the properties that are low value outliers and are likely to be overvalued by an AVM. In a specific example, the method can be used on a list of properties (e.g., to identify outliers in a portfolio). In a first example, inspectors, insurance agents, real estate agents, and/or any other human resources can be automatically allocated to the identified outliers. In a second example, computational resources (e.g., complex AVM models, more extensive attribute value analysis, etc.) can be automatically allocated to the identified outliers. In variants, the typicality metric can be used as an input to an AVM model to reduce error in the model output (e.g., adjusting the model to account for atypicality, adjusting the model output to account for atypicality, etc.). The typicality metric can additionally or alternatively be used to determine whether to use another model (e.g., use a nonstandard AVM model for atypical properties) and/or adjust a model (e.g., adjust an AVM model for atypical properties). This can provide computational savings by identifying which homes need less or more computationally intensive models. Different subsystems and/or modules discussed above can be operated and controlled by the same or different entities. In the latter variants, different subsystems can communicate via: APIs (e.g., using API requests and responses, API keys, etc.), requests, and/or other communication channels. Alternative embodiments implement the above methods and/or processing modules in non-transitory computer-readable media, storing computer-readable instructions that, when executed by a processing system, cause the processing system to perform the method(s) discussed herein. The instructions can be executed by computer-executable components integrated with the computer-readable medium and/or processing system. The computer-readable medium may include any suitable computer readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (CD or DVD), hard drives, floppy drives, non-transitory computer readable media, or any suitable device. The computer-executable component can include a computing system and/or processing system (e.g., including one or more collocated or distributed, remote or local processors) connected to the non-transitory computer-readable medium, such as CPUs, GPUs, TPUS, microprocessors, or ASICs, but the instructions can alternatively or additionally be executed by any suitable dedicated hardware device. Embodiments of the system and/or method can include every combination and permutation of the various system components and the various method processes, wherein one or more instances of the method and/or processes described herein can be performed asynchronously (e.g., sequentially), contemporaneously (e.g., concurrently, in parallel, etc.), or in any other suitable order by and/or using one or more instances of the systems, elements, and/or entities described herein. Components and/or processes of the following system and/or method can be used with, in addition to, in lieu of, or otherwise integrated with all or a portion of the systems and/or methods disclosed in the applications mentioned above, each of which are incorporated in their entirety by this reference. 
As a person skilled in the art will recognize from the previous detailed description and from the figures and claims, modifications and changes can be made to the preferred embodiments of the invention without departing from the scope of this invention defined in the following claims. | 106,832 |
11861881 | DESCRIPTION OF THE EMBODIMENTS Reference will now be made in detail to example implementations. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention and it is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the invention. The following description is, therefore, merely exemplary. I. Introduction and Overview Pathologists commonly use pen ink to indicate malignant regions in images, such as biopsy images. Deep learning models trained with such images can erroneously learn that ink is evidence of malignancy. Therefore, some embodiments train a weakly-supervised attention-based neural network under a multiple-instance learning paradigm to detect pen ink on images. Such pen ink can then be removed from the images, and the scrubbed images used to train a second neural network to detect malignancy, without inadvertently training the second network to erroneously identify pen ink as malignancy. More generally, embodiments can be used to train a neural network to detect critical components in images, e.g., components that are by themselves determinative of a classification of the images. Such embodiments can identify, e.g., by annotating images, such critical components. These and other features and advantages are disclosed in detail herein. FIG.1is a schematic diagram100depicting an example supra-image102, its constituent images104, a tiling108of one of its constituent images106, and vector representations112of the tiles of the constituent image106according to various embodiments. As used herein, the term “supra-image” includes one or more constituent images of a specimen. The specimen may be a medical specimen, a landscape specimen, or any other specimen amenable to image capture. For example, a supra-image may represent images from a single resection or biopsy (the supra-image) constituting several slides (the constituent images). As another example, the supra-image may be a three-dimensional volume representing the results of a radiological scan, and the constituent images may include two-dimensional slices of the three-dimensional volume. Within the domain of digital pathology, the images forming a supra-image may be of tissue stained with Hematoxylin and Eosin (H&E), and a label may be associated with the supra-image, for example, the diagnosis rendered by the pathologist. Frequently, more tissue is cut than can be scanned in a single slide—this is especially frequent for suspected malignant cases—and several images may share the same weak label. A supra-image may be of any type of specimen in any field, not limited to pathology, e.g., a set of satellite images. As shown in,FIG.1, supra-image102may represent a three-dimensional volume by way of non-limiting examples. Supra-image102may be, for example, a representation of a three-dimensional Computer Tomography (CT) or Magnetic Resonance Imaging (MRI) scan. Images104represent the constituent images of supra-image102. By way of non-limiting examples, images104may be slices derived from, or used to derive, a CT or MRI scan, or may be whole-slide images, e.g., representing multiple images from a biopsy of a single specimen. In general, when processed by a computer, each constituent image of a supra-image may be broken down into a number of tiles, which may be, e.g., 128 pixels by 128 pixels. 
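As a minimal illustrative sketch, a constituent image can be partitioned into such tiles with a few lines of code; the numpy array representation and the decision to drop partial tiles at the right and bottom edges are assumptions for illustration only.

```python
# Hedged sketch: partitioning a constituent image into 128 x 128 pixel tiles.
# The image is assumed to be an H x W x 3 numpy array; partial tiles at the
# right/bottom edges are simply dropped in this illustration.
import numpy as np

def tile_image(image: np.ndarray, tile_size: int = 128) -> list:
    height, width = image.shape[:2]
    tiles = []
    for top in range(0, height - tile_size + 1, tile_size):
        for left in range(0, width - tile_size + 1, tile_size):
            tiles.append(image[top:top + tile_size, left:left + tile_size])
    return tiles

constituent_image = np.zeros((1024, 2048, 3), dtype=np.uint8)  # placeholder image
tiles = tile_image(constituent_image)
print(len(tiles))  # 8 rows x 16 columns = 128 tiles
```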
As shown inFIG.1, image106of constituent images104may be partitioned into tiles, such as tile110, to form partitioned image108. In general, an individual tile may be represented by one or more corresponding feature vectors. Such feature vectors may be obtained from tiles using a separate neural network, trained to produce feature vectors from tiles. Each such feature vector may encode the presence or absence of one or more features in the tile that it represents. Each feature vector may be in the form of a tuple of numbers. As shown inFIG.1, feature vectors112represent the tiles of partitioned image108. For example feature vector114may correspond to and represent a presence or absence of a particular feature in tile110. Both tiles and their representative feature vectors are examples of “components” as that term is used herein. According to some embodiments, each component is implemented as a tile of a constituent image of a supra-image. According to some embodiments, each component is implemented as a vector, such as a feature vector, that represents a respective tile in a constituent image of a supra-image. Current hardware (e.g., Graphical Processing Units or GPUs) commonly used to train neural networks cannot always hold all the image tiles from a supra-image or constituent image at once due to Random Access Memory (RAM) limitations. For example, each image of a supra-image is typically too large to feed into the hardware used to hold and train the deep learning neural network. Some embodiments train a weakly supervised neural network at the supra-image level, within these hardware limitations, by sampling (e.g., randomly sampling) components from constituent images of supra-images into collections of components that are close to the maximum size the hardware is able to hold in RAM. The random sampling may not take into account which image from a supra-image the components are drawn from; components may be randomly drawn without replacement from a common pool for the supra-image. The sampling can be performed several times for a given supra-image, creating more than one collection to train with for a given supra-image. Multiple such collections may form a partition of a given supra-image; that is, the set-theoretic union of the collections from a single supra-image may cover the entire supra-image, and the set-theoretic intersection of such collections may be empty. A. Multi-Instance Supra-Image Level Learning While previous work in multiple-instance learning has been limited to training at the level of small image patches, or subsets of an image identified by a pre-processing step or network, embodiments may utilize tile-based multiple-instance learning training at the supra-image level, which does not require selecting out small regions of interest. Datasets that contain large numbers of high-resolution images, such as neural network training corpora, can be extremely costly to annotate in detail. A time-saving and cost-saving alternative to annotations is to supply weak labels to the images or supra-images, simply stating whether or not certain features are present. In past work, weakly-supervised networks were trained to operate either only in the specific case of a weak label per-image, or trained using a downstream classifier or alternative numerical method to combine the output of a weakly-supervised classifier from the image level to the supra-image level. 
The former case clearly restricts the usability of a trained network, while the latter relies on two models' or methods' performance to generate and combine image-level classifications to produce a representative supra-image level classification. None of these prior methods of artificial intelligence training allow for training based on how diagnoses are made in clinical practice, where the pathologist renders a diagnosis for each specimen only, not for each individual slide pertaining to that specimen. This diagnosis may be stored in an electronic clinical records system, such as a Laboratory Information System (“LIS”), a Laboratory Information Management System (“LIMS”), an Electronic Medical Record (“EMR”) system. By abstracting training to the specimen level, some embodiments provide a training method that may operate on diagnoses made straight from an electronic clinical records system, without the requirement of human intervention to label relevant slides. That is, some embodiments may use as a training corpus of supra-images with weak labels taken from diagnoses stored in an electronic clinical records system. Some embodiments provide a framework in which each image in a supra-image is divided into a mosaic of tiles, e.g., squares of 128 pixels-per-side. A sampled collection of such tiles, or feature vector representations thereof, small enough to be stored in the available volatile memory of the training computer, and labeled with the label of the supra-image from which the tiles are obtained, may serve as a single element of the training corpus for weakly-supervised training according to various embodiments. Multiple such labeled collections of components may comprise a full training corpus. No region-of-interest need be identified. While embodiments may be applied within the domain of digital pathology, the supra-image methods disclosed herein generalize to other fields with problems that involve several images with shared labels, such as time series of satellite images. B. Attention and Critical Components This disclosure presents techniques for automatically identifying critical components in images that are dispositive of classification of the images (or supra-images made up of the images). Some embodiments provide a neural network trained to identify such critical components. Embodiments may be used to identify critical components that are sufficient for classifying an image (or supra-image) into a particular class by a trained neural network. FIG.2is schematic diagram of a neural network200that includes attention layers208according to various embodiments. For example, any of methods300,400,500, and700may be implemented using neural network200. Neural network200may be implemented on hardware such as system900. Neural network200accepts an input202. The input202may be a set of sampled components, such as tiles or feature vectors, from a constituent image of a supra-image. The set of components may be a proper subset of a partition of the image and may be randomly sampled. The components provided as input202may be used for training the neural network200, e.g., as part of method300, or for classification of their image or supra-image, e.g., as part of any of methods400,500, or700. Neural network passes the input202to convolutional layers204. Convolutional layers204include multiple layers of convolutions that, during training, apply filters to learn high-level features from components, such as tiles. 
During classification, convolutional layers204apply the filters to a novel input202from a novel image or supra-image that causes an activation in convolutional layers204, and repeated activations may generate a feature map, which identifies features of interest in the novel image or supra-image. Neural network passes outputs from convolutional layers204to fully connected layers206. Fully connected layers206convert flattened convolutional features for each component into lower dimensional vectors. The output of fully connected layers206is passed to self-attention module210and to attention layers208, which may be implemented as fully connected layers for attention. Attention layers208convert lower dimensional vectors for each component into a scalar floating point attention weight, which may be in the range from zero to one, inclusive. Self-attention module210computes the scalar product of lower dimensional features and scalar weights to get an aggregated representation of a supra-image. The aggregated representation is passed to final fully connected layers212. Final fully connected layers212convert the aggregated representation received from self-attention module210into a final prediction. The final prediction is then passed as an output214. During training, the output214can be compared with an actual classification to train a model as described in detail herein in reference toFIG.3. During classification, the output214may be used as a classification of the input image or supra-image. In general, in deep learning neural networks with attention, the attention layers, such as attention layers208, assign an attention weight to each component. (In the case of a multi-class or multi-task model, embodiments may have more than one attention layer that is class or task-dependent, and therefore more than one attention weight per component. These different attention layers might be trained to highlight different features.) For supra-images, potentially spanning multiple whole-slide images, individual tiles or their representative feature vectors may be assigned an attention weight. The attention layers allow the model to focus on the most relevant regions of interest in the image or specimen during the training procedure. This increases model interpretability by capturing which tiles or regions were considered important when performing the classification task. It also allows for feeding a model a large amount of information (e.g., one or several whole-slide images), while having it learn which regions are relevant and which regions are not. Due to the way the model is trained, the learned attention weights reflect the relative importance of each tile's contribution to the model's overall prediction for a whole-slide image. So, within a given image or specimen, the component with the highest attention weight contributes the most to the model's decision, while the component with the lowest attention weight contributes the least to the model's decision. These weights may be used in direct fashion to visualize the trained model's predicted regions of interest or attention on an image, e.g., as a heat map, without the need for any pixel-wise annotations during training.
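As a minimal illustrative sketch, the aggregation just described (per-component features, a scalar attention weight per component, a weighted sum, and a final prediction) can be written as follows; the layer sizes, the softmax normalization of the attention scores, and the omission of the convolutional feature extractor are assumptions for illustration only.

```python
# Hedged sketch of an attention-based aggregation head consistent with the
# description above. The convolutional feature extractor is abstracted away:
# the input is one collection of per-component feature vectors.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feature_dim: int = 512, hidden_dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(feature_dim, hidden_dim), nn.ReLU())
        self.attention = nn.Sequential(nn.Linear(hidden_dim, 64), nn.Tanh(), nn.Linear(64, 1))
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, components: torch.Tensor):
        # components: (num_components, feature_dim) for one collection/supra-image
        h = self.embed(components)                  # lower dimensional vectors per component
        scores = self.attention(h)                  # (num_components, 1) attention scores
        weights = torch.softmax(scores, dim=0)      # one scalar weight in [0, 1] per component
        aggregated = torch.sum(weights * h, dim=0)  # weighted sum: aggregated representation
        logits = self.classifier(aggregated)        # final prediction for the collection
        return logits, weights.squeeze(-1)

model = AttentionMIL()
collection = torch.randn(200, 512)  # e.g., feature vectors for 200 sampled tiles
logits, attention_weights = model(collection)
```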
However, while a direct use of the attention weights predicted by a model to generate a map can provide an impression of the relative importance of tiles to the model's prediction, it gives no concept of absolute importance, e.g., whether a particular component's presence in an image is determinative of the image's classification. What a pathologist really wants to know when they examine these attention weights—or a map of the attention weights laid over the image—are often answers to the questions: Are there signs of tumor in this region? Would any given tile or region, alone, be enough to come to the same conclusion as to whether the specimen indicates cancer? Does the evidence within a tile or specific region indicate some other type of classification? To answer these sorts of questions, some embodiments provide techniques to highlight components that contribute to—or are sufficient for—the model's prediction. Beyond just identifying components that are helpful for, or correlated with, a prediction, some embodiments can identify one or more components that are sufficient on their own for a model's classification. This is accomplished by testing collections (e.g., subsets) of the components on which a prediction is to be made. If a whole-slide image (i.e., all of the components of a whole-slide image) are associated with a prediction D by a trained multiple instance learning model, it is desirable to know which region(s) (or subset(s) of tiles) specifically resulted in the prediction D. Some embodiments accomplish this by iteratively predicting on smaller subsets of tiles of varying attention weights, until they determine a threshold on attention weight below which tiles are not associated with a positive prediction. The set of tiles above this threshold corresponds to tiles that result in a positive classification for the whole-slide image, even if only a single one of them were to be evaluated by the model. In other words, for the tumor problem, an embodiment can predict that any tile with an attention weight above this threshold contains standalone evidence of a tumor (or evidence of whatever the model was trained to predict). C. Spuriously Correlated Feature Removal Artificial Intelligence has proven to be a useful tool in tackling several problems in fields such as computational histopathology. Despite its success, it has been recently identified that artifacts on whole-slide images adversely affect machine learning models. Several deep learning based solutions have been proposed in the literature to tackle this problem, but such solutions either require hand crafted features or finer labels than slide-level labels. Pen ink can be particularly problematic when attempting to train a weakly-supervised machine learning model to do something like cancer detection, because often pen ink is used by pathologists or residents to mark regions of cancerous morphology in a supra-image. Therefore, pen ink represents a spurious correlation with the actual features of which detection is desired, namely, regions showing cancerous morphology. A weakly-supervised model, which does not rely on pixel-wise annotations for training, is prone to incorrectly identifying instances of pen ink and similar spurious correlates of the desired target as positive components, of themselves indicative of cancer. Because of this tendency of weakly supervised models, it is desirable to eliminate pen ink from training corpora. 
Some embodiments provide a neural network trained to remove confounding features—such as pen ink—from images. Such embodiments may provide a first neural network that can classify images as either including or not including certain features that are spuriously correlated with the presence of features of interest. The first neural network can identify particular components that include the spuriously correlated confounding features, and the components that include such features can then be removed. The resulting images (or their supra-images) can then be used to train a second neural network to classify images as including or not including the feature of interest. Some embodiments use weakly-supervised, multiple-instance learning coupled with deep features to remove pen ink from pathology images automatically and without annotations. The applied technique is not color-dependent, requires no annotations to train, and does not need handcrafted or heuristic features to select inked regions. II. Example Embodiments FIG.3is a flow diagram for a method300of iteratively training, at the supra-image level, a neural network to classify supra-images for the presence of a property according to various embodiments. Method300may be used to generate models that may be used to implement methods400,500, and700, for example. Method300may be implemented by system900, as shown and described herein in reference toFIG.9. Method300may extend component-based multiple-instance learning training to the supra-image level, which does not require selecting out small regions of interest, manually, or otherwise. At block302, method300accesses a training corpus of supra-images. The supra-images may be in any field of interest. The supra-images include or may be otherwise associated with weak labels. The supra-images and weak labels may be obtained from an electronic clinical records system, such as an LIS. The supra-images maybe accessed over a network communication link, or from electronic persistent memory, by way of non-limiting examples. The training corpus may include hundreds, thousands, or even tens of thousands or more supra-images. The training corpus of supra-images may have previously been determined to be sufficient, e.g., by employing method200as shown and described herein in reference toFIG.2. At304, method300selects a batch of supra-images for processing. In general, the training corpus of supra-images with supra-image level labels to be used for training is divided into one or more batches of one or more supra-images. In general, during training, the loss incurred by the network is computed over all batches through the actions of304,306,308,310,312, and314. The losses over all of the batches are accumulated, and then the weights and biases of the network are updated, at which point the accumulated loss is reset, and the process repeats until the iteration is complete. At306, method300samples, e.g., randomly samples, a collection of components from the batch of supra-images selected at304. In general, each batch of supra-images is identified with a respective batch of collections of components, where each collection of components includes one or more components sampled, e.g., randomly sampled, from one or more images from a single supra-image in the batch of supra-images. Thus, the term “batch” may refer to both a batch of one or more supra-images and a corresponding batch of collections of components from the batch of one or more supra-images. 
Embodiments may not take into account which constituent image a given component in a collection comes from; components in the collection may be randomly drawn without replacement from a common pool for a given supra-image. Each collection of components is labeled according to the label of the supra-image making up the images from which the components from the collection are drawn. The components may be tiles of images within the selected supra-image batch, or may be feature vectors representative thereof. The collections of components, when implemented as tiles, may form a partition of a given supra-image, and when implemented as vectors, the corresponding tiles may form a partition. Embodiments may iterate through a single batch, i.e., a batch of collections of components, through the actions of306,308, and310, until all components from the images of the supra-images for the batch are included in some collection of components that is forward propagated through the network. Embodiments may iterate through all of the batches through the actions of304,306,308,310,312, and314to access the entire training dataset to completely train a network. Thus, at308, the collection of components sampled at306is forward propagated through the neural network to compute loss. When the collection of components that is forward propagated through the multiple-instance learning neural network, the network's prediction is compared to the weak label for the collection. The more incorrect it is, the larger the loss value. Such a loss value is accumulated each time a collection of components is propagated through the network, until all collections of components in the batch are used. At310, method300determines whether there are additional collections of components from the batch selected at304that have not yet been processed. If so, control reverts to306, where another collection of components is selected for processing as described above. If not, then control passes to312. At312, method300back propagates the accumulated loss to update the weights and biases of the neural network. That is, after iterating through the collections of components from a single batch, the neural network weights and biases are updated according to the magnitude of the aggregated loss. This process may repeat over all batches in the dataset. Thus, at314, method300determines whether there are additional batches of supra-images from the training corpus accessed at302that have not yet been processed during the current iteration. Embodiments may iterate over the batches to access the entire training dataset. If additional batches exist, then control reverts to304, where another batch of one or more supra-images is selected. Otherwise, control passes to316. At316, once all collections of components from all batches of supra-images are processed according to304,306,308,310,312, and314, a determination is made as to whether an additional epoch is to be performed. In general, each iteration over all batches of supra-images in the training corpus may be referred to as an “epoch”. Embodiments may train the neural networks for hundreds, or even thousands or more, of epochs. At318, method300provides the neural network that has been trained using the training corpus accessed at302. Method300may provide the trained neural network in a variety of ways. According to some embodiments, the trained neural network is stored in electronic persistent memory. According to some embodiments, the neural network is made available on a network, such as the internet. 
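As a minimal illustrative sketch, the loop of blocks 304 through 316 can be written as follows, reusing the AttentionMIL sketch above; the data structures, loss function, optimizer, and hyperparameters are assumptions for illustration only.

```python
# Hedged sketch of the training loop of method 300: losses are accumulated over
# all collections sampled from a batch of supra-images (blocks 306-310), the
# weights and biases are updated once per batch (block 312), and the process
# repeats over batches (block 314) and epochs (block 316).
import torch
import torch.nn as nn

model = AttentionMIL()  # from the earlier sketch
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# batches: a list of batches; each batch is a list of (collection, weak_label) pairs, where
# `collection` is a (num_components, feature_dim) tensor sampled from one supra-image and
# `weak_label` is the supra-image level label shared by every collection drawn from it.
def train(batches, num_epochs: int = 100):
    for epoch in range(num_epochs):                 # block 316: iterate over epochs
        for batch in batches:                       # blocks 304/314: iterate over batches
            optimizer.zero_grad()
            accumulated_loss = 0.0
            for collection, weak_label in batch:    # blocks 306-310: iterate over collections
                logits, _ = model(collection)       # block 308: forward propagate, compute loss
                target = torch.tensor([weak_label])
                accumulated_loss = accumulated_loss + loss_fn(logits.unsqueeze(0), target)
            accumulated_loss.backward()             # block 312: back propagate aggregated loss
            optimizer.step()                        # update weights and biases
```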
According to some such embodiments, an interface to the trained neural network is provided, such as a Graphical User Interface (GUI) or Application Program Interface (API). FIG.4is a flow diagram for a method400of automatically classifying a supra-image according to various embodiments. Method400may use a neural network trained according to method300as shown and described herein in reference toFIG.3. Method400may be implemented by system900, as shown and described herein in reference toFIG.9. At402, a supra-image is obtained. The supra-image may be in any field. The supra-image may be obtained over a network link or by retrieval from persistent storage, by way of non-limiting example. At404, the neural network is applied to the supra-image obtained at402. To do so, the supra-image may be broken down into parts (e.g., components or sets of components) and the parts may be individually passed through the network up to a particular layer, where the features from the various parts are aggregated, and then the parts are passed through to a further particular layer, where the features are again aggregated, until all parts are passed and all features aggregated such that one or more outputs are produced. Multiple outputs, if present, may be independently useful, or may be synthesized to produce a final, single output. At406, method400provides the output. The output may be provided by displaying a corresponding datum to a user of method400, e.g., on a computer monitor. Such a datum may indicate the presence or absence of the feature of interest in the supra-image. FIG.5is a flow diagram for a method500of determining a threshold attention weight for a positively classified supra-image according to various embodiments. Method500may be performed for any classifier that includes an attention layer, such as neural network200as shown and described above in reference toFIG.2. Method500may use a neural network trained according to method300as shown and described herein in reference toFIG.3. Method500may be implemented by system900, as shown and described herein in reference toFIG.9. In general, a single component with an attention weight above the threshold attention weight for the supra-image, when present in the supra-image, is sufficient for a positive classification of the supra-image by the classifier. If all components with attention weights greater than the threshold attention weight are removed from a positively classified supra-image, the classifier will classify the resulting scrubbed supra-image as negative. Method500determines the threshold attention weight for a particular positively classified supra-image. Method500may be used to determine threshold attention weights for each of a plurality of positively classified supra-images by repeated application. Method500may determine a threshold attention weight for a positively classified supra-image using a search strategy. The naïve approach, where the subset of components sufficient for positive classification is established by individually passing each component of a supra-image through the model, is prohibitively inefficient for practical applications, as there are a large number of components in each supra-image, and typically a very small number of those are responsible for the model's positive prediction. 
Instead, some embodiments utilize a binary search technique to detect this subset of components sufficient for positive classification by utilizing the attention weights themselves to choose trial subsets to pass through the model and obtain a prediction on each subset. For a supra-image with model prediction D, the goal of passing these trial subsets through the model is to find an attention threshold index t for a set of components [l0, . . . , ln] sorted by their attention weights [w0, . . . , wn], with w0 being the lowest and wn being the highest, such that: (1) predicting on the subset of components with the lowest attention, [l0, . . . , lt-1], results in a different prediction D′≠D, and (2) predicting on the subset [l0, . . . , lt] results in the same prediction D as would result from predicting on the entire set. (And, predicting individually on any single tile in the set [lt, . . . , ln] will also give the original prediction D.) Some embodiments find this critical attention weight threshold wt by using the model to predict on subsets of images until it identifies the minimal attention weight index t at which the model's decision D matches the prediction the network would make on the entire supra-image, or when maximally-attended components of the supra-image are included in prediction. It is computationally inefficient to feed all possible trial subsets of components within a supra-image through the model to find t, so some embodiments improve the efficiency of this process by logarithmically reducing the number of components in a trial subset for every iteration, following a binary search strategy to efficiently find t. At502, method500obtains a supra-image. The supra-image may have a prediction D by a neural network with an attention layer, and the prediction D may be a positive prediction for a property. The property may be a property of interest, a spuriously correlated property, a confounding property, or a different property. For notational purposes, the supra-image obtained at502may have n components (e.g., tiles or feature vectors of its one or more constituent images). That is, the supra-image may have a total of n components from among its constituent image(s); such components of a supra-image may be referred to as "supra-image components". At504, method500sorts the set of n components by their attention weights, where the sorted list is denoted [l0, . . . , ln]. Actions506,508,510, and512iteratively divide the set of components in half to narrow down the components contributing to the model's positive prediction. The iteration may repeatedly pare down a set of components referred to as an "active" set of components A to determine the threshold attention weight. At the first step in the iteration, the active sequence of components may be set equal to the full set of components [l0, . . . , ln]. The iteration may continue until the active sequence of components A may no longer be divided in half and all components have been labeled as either positive (corresponding to critical areas for the prediction D) or negative. Thus, at506, method500splits the active sequence of components A in half. If the active sequence of components is odd, it may be split into two sets that are not equal in size in any manner, e.g., by splitting in half and assigning the "middle" component to either half. Denote the half with the lower attention weights as Al, and denote the half with higher attention weights as Ah.
At508, method500passes the components in the lower half Al through the neural network to obtain a classification. The classification may be positive (e.g., D) or negative (e.g., D′). At510, method500resets the active sequence of components according to the classification obtained at508. If, on the one hand, the classification is not equal to D, then label all components in Al as negative and discard them from the search, and set the active components to the upper half, A=Ah. If, on the other hand, the classification is equal to D, then label all components in Ah as positive and discard them from the search, and set the active components to the lower half, A=Al. At512, method500determines whether to stop. In general, method500may stop once the active sequence of components A may no longer be divided in two (e.g., it is a singleton). In that case, the threshold is equal to the index t of the first component in Ah, corresponding to the tile or component with the lowest attention weight that can be labeled as positive, i.e., as corresponding to the prediction D. Such an attention weight, denoted wt, may be provided as the threshold attention weight for the supra-image obtained at502. At514, method500provides the threshold attention weight. The threshold attention weight may be provided in any of a variety of manners. According to some embodiments, the threshold attention weight is provided by being displayed on a computer monitor. According to some embodiments, the threshold attention weight is provided by being stored in electronic persistent memory in association with an identification of its associated supra-image, image(s), or component(s) thereof. According to some embodiments, the threshold attention weight is provided to a computer process, either directly or with prior storage in memory, such as is shown and described herein in reference to method700. Method500may be repeated for a plurality of supra-images, e.g., a training corpus of supra-images, in order to assign each supra-image a threshold attention weight. For various applications of method500, each component in such a supra-image may be associated with the threshold attention weight associated with the supra-image. FIG.6depicts an example whole-slide image portion600with critical components identified by an example embodiment. As shown inFIG.6, image600is a portion of a whole-slide image and depicts three slices from a biopsy. The supra-image that includes image600was classified as positive for cancer by a neural network that included an attention layer. Further, image600is parsed into tiles, and the tiles, e.g., tiles602, that have attention weights over the threshold attention weight for the supra-image that includes image600are sufficient on their own for a positive classification by the neural network. If the tiles having attention weights greater than the threshold attention weight, including tiles602, are removed from the image, it will no longer have a positive classification by the neural network. FIG.7is a flow diagram for a method700of training an electronic neural network classifier to identify a presence of a particular property in a novel supra-image while ignoring a spurious correlation of the presence of the particular property with a presence of an extraneous property. The particular property may be, by way of non-limiting example, a presence of a pathology such as a malignancy. The extraneous confounding property may be, by way of non-limiting example, a presence of ink markings.
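Returning to method500, the attention-threshold search of blocks 504 through 514 can be summarized in the following minimal sketch; the predict callable, the positive label encoding, and the resolution of the final single-component range are assumptions for illustration only.

```python
# Hedged sketch of the attention-threshold binary search of method 500.
# `predict` is an assumed callable that returns the model's classification for
# a list of components; sorting and the [lo, hi) active range mirror blocks 504-512.
def find_attention_threshold(components, attention_weights, predict, positive_label):
    # Block 504: sort components by attention weight, lowest first.
    order = sorted(range(len(components)), key=lambda i: attention_weights[i])
    sorted_components = [components[i] for i in order]
    sorted_weights = [attention_weights[i] for i in order]

    lo, hi = 0, len(sorted_components)  # active sequence A as the index range [lo, hi)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        lower_half = sorted_components[lo:mid]      # block 506: split A into Al and Ah
        if predict(lower_half) == positive_label:   # block 508: classify the lower half Al
            hi = mid   # block 510: Ah is positive; continue searching within Al
        else:
            lo = mid   # block 510: Al is negative; continue searching within Ah

    # Blocks 512/514: resolve the last remaining component and return the lowest
    # attention weight still associated with the positive prediction.
    if predict([sorted_components[lo]]) == positive_label:
        return sorted_weights[lo]
    return sorted_weights[lo + 1] if lo + 1 < len(sorted_weights) else None
```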
Method700may involve two classifiers, referred to herein as a property of interest classifier and an extraneous property classifier. The property of interest classifier may be any classifier, such as, by way of non-limiting example, a neural network, that can be trained to discriminate for the presence of the particular property, e.g., a property of interest. The extraneous property classifier can be any neural network or other classifier that includes an attention layer, e.g., neural network200as shown and described above in reference toFIG.2, trained to discriminate for the presence of the extraneous property, which may be a confounding property that is spuriously correlated with the particular property. Method700may utilize a training technique, such as method300as shown and described herein in reference toFIG.3. Method700may determine a threshold attention weight for one or more supra-images, e.g., using method500as shown and described herein in reference toFIG.5. Method700may be implemented by system900, as shown and described herein in reference toFIG.9. At702, method700obtains a training corpus of supra-images for training a first neural network to discriminate for the presence of the property of interest. The supra-images may be weakly labeled, e.g., based on electronic clinical records, such as are stored in an LIS. The training corpus may be obtained using any of a variety of techniques, such as retrieval over a computer network or retrieval from electronic persistent storage. At704, method700identifies, for the extraneous property and using the extraneous property classifier, a respective attention weight for each component of each image of each supra-image in the training corpus obtained at702. Method700may pass the components through the extraneous property classifier to do so. For example, method700may pass the components through the extraneous property classifier as part of one or more applications of method500, e.g., during706; that is, the actions of704may be combined with the actions of706. The attention weights may be stored in volatile or persistent memory for usage later on in method700. At706, method700identifies, for the extraneous property and using the extraneous property classifier, a respective threshold attention weight for each supra-image in the training corpus obtained at702. Method700may implement method500repeatedly to do so. The threshold attention weights may be stored in volatile or persistent memory for usage later on in method700. At708, method700removes from each supra-image of the training corpus the components that have attention weights above the threshold attention weight for the respective supra-image. Method700may do so using a variety of techniques. For example, the components that have attention weights above their respective threshold attention weights may be masked, covered, or deleted from the supra-images in the training corpus. For example, such components may be marked for omission from being passed to the neural network during training. The process of708produces a scrubbed training corpus, which does not include a detectable presence of the extraneous property. At710, method700trains the particular property classifier using the scrubbed training corpus produced by708. Method700may use method300, as shown and described herein in reference toFIG.3, to do so.
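As a minimal illustrative sketch, blocks 704 through 710 can be tied together as follows, reusing the find_attention_threshold sketch above; the corpus data structure, the attention and prediction callables, and the at-or-above removal cutoff are assumptions for illustration only.

```python
# Hedged sketch of blocks 704-710 of method 700: components whose extraneous-property
# (e.g., pen ink) attention weights meet or exceed the supra-image's threshold are
# removed, producing a scrubbed corpus for training the property of interest classifier.
def scrub_training_corpus(training_corpus, attention_fn, predict_fn, positive_label=1):
    """attention_fn(components) -> per-component ink attention weights (block 704);
    predict_fn(components) -> ink classification, used by the threshold search (block 706)."""
    scrubbed_corpus = []
    for supra_image in training_corpus:
        components = supra_image["components"]  # tiles or feature vectors
        weights = attention_fn(components)      # block 704: attention weight per component
        if predict_fn(components) != positive_label:
            scrubbed_corpus.append(dict(supra_image))  # no extraneous property predicted
            continue
        # Block 706: threshold attention weight for this supra-image via method 500.
        threshold = find_attention_threshold(components, weights, predict_fn, positive_label)
        # Block 708: drop components at or above the threshold.
        kept = [c for c, w in zip(components, weights)
                if threshold is None or w < threshold]
        scrubbed_corpus.append({"components": kept, "label": supra_image["label"]})
    return scrubbed_corpus

# Block 710: the property of interest classifier can then be trained on the scrubbed
# corpus, e.g., with the weakly supervised training loop sketched for method 300.
```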
The resulting trained particular property classifier is capable of discriminating for the presence of the property of interest, without erroneously classifying supra-images that include the extraneous property as positive, even though the extraneous property may be spuriously correlated with a positive classification for the property of interest in the original training corpus. For example, method700can be used to detect pen ink marks in digitized whole-slide images (or supra-images) in a weakly supervised fashion. To accomplish this, some embodiments train an attention-based multiple instance learning model using tiles as components, with the labels for training given at the slide level, indicating whether or not a slide contains a region inked by pen. Once the model is trained, it can identify the slides (or supra-images) predicted positive for pen ink. Such embodiments then isolate positive components containing pen ink in these slides and exclude them from analyses, training, or prediction. FIG.8shows depictions800of an example pathology image802with a pen mark806and the example pathology image804with a pen mark identification808produced by an example reduction to practice. Image802is a dermatopathology slide containing residual melanoma in situ, with pen ink present, indicating the presence of the tumor. Image804shows attention values, represented as relative transparency, from the ink detection model for each tile overlaid on the original whole-slide image. Lighter shaded regions have lower attention weight values, whereas darker shading indicates high attention weight values. The identified region was outlined by the second example reduction to practice described in Section V, below. In particular, the outlined squares and right polygons overlaid on the whole-slide image identify the components that were labeled as positive (relevant to the prediction) using binary search attention thresholding as described herein in reference to method500. Note that images802and804include thousands of components, only a small number of which are positive. The inked region is completely isolated in image804. Removing the outlined tiles from the whole-slide image allows it to be used for downstream weakly supervised models without risk of ink producing biased, false signals of malignancy. FIG.9is a schematic diagram of a hardware computer system900suitable for implementing various embodiments. For example,FIG.9illustrates various hardware, software, and other resources that can be used in implementations of any of methods300,400,500, or700and/or one or more instances of a neural network, such as neural network200. System900includes training corpus source902and computer901. Training corpus source902and computer901may be communicatively coupled by way of one or more networks904, e.g., the internet. Training corpus source902may include an electronic clinical records system, such as an LIS, a database, a compendium of clinical data, or any other source of supra-images suitable for use as a training corpus as disclosed herein. Computer901may be implemented as a desktop computer or a laptop computer, can be incorporated in one or more servers, clusters, or other computers or hardware resources, or can be implemented using cloud-based resources.
Computer901includes volatile memory914and persistent memory912, the latter of which can store computer-readable instructions, that, when executed by electronic processor910, configure computer901to perform any of methods300,400,500, and/or700, and/or form or store any neural network, such as neural network200, and/or perform any classification technique, such as hierarchical classification technique1100, as shown and described herein. Computer901further includes network interface908, which communicatively couples computer901to training corpus source902via network904. Other configurations of system900, associated network connections, and other hardware, software, and service resources are possible. III. First Example Reduction to Practice This Section presents a first example reduction to practice. The first example reduction to practice was configured to perform hierarchical classification of digitized whole-slide image specimens into six classes defined by their morphological characteristics, including classification of “Melanocytic Suspect” specimens likely representing melanoma or severe dysplastic nevi. The reduction to practice was trained on 7,685 images from a single lab (the reference lab), including the largest set of triple-concordant melanocytic specimens compiled to date, and tested the system on 5,099 images from two distinct validation labs. The reduction to practice achieved Area Underneath the Receiver Operating Characteristics Curve (AUC) values of 0.93 classifying Melanocytic Suspect specimens on the reference lab, 0.95 on the first validation lab, and 0.82 on the second validation lab. The reduction to practice is capable of automatically sorting and triaging skin specimens with high sensitivity to Melanocytic Suspect cases and demonstrates that a pathologist would only need between 30% and 60% of the caseload to address all melanoma specimens. A. Introduction to the Reduction to Practice More than five million diagnoses of skin cancer are made each year in the United States, about 106,000 of which are melanoma of the skin. Diagnosis requires microscopic examination H&E stained, paraffin wax embedded biopsies of skin lesion specimens on glass slides. These slides can be manually observed under a microscope, or digitally on a whole-slide image scanned on specialty hardware. The five-year survival rate of patients with metastatic malignant melanoma is less than 20%. Melanoma occurs more rarely than several other types of skin cancer, and its diagnosis is challenging, as evidenced by a high discordance rate among pathologists when distinguishing between melanoma and benign melanocytic lesions (˜40% discordance rate). The Melanocytic Pathology Assessment Tool and Hierarchy for Diagnosis (MPATH-Dx; “MPATH” hereafter) reporting schema was introduced by Piepkorn, et al.,The mpath-dx reporting schema for melanocytic proliferations and melanoma, Journal of the American Academy of Dermatology, 70(1):131-141, 2014 to provide a precise and consistent framework for dermatopathologists to grade the severity of melanocytic proliferation in a specimen. MPATH scores are enumerated from I to V, with I denoting a benign melanocytic lesion and V denoting invasive melanoma. It has been shown that discordance rates are related to the MPATH score, with better inter-observer agreement on both ends of the scale than in the middle. 
A tool that allows labs to sort and prioritize melanoma cases in advance of pathologist review could improve turnaround time, allowing pathologists to review cases requiring faster turnaround time early in the day. This is particularly important as shorter turnaround time is correlated with improved overall survival for melanoma patients. It could also alleviate common lab bottlenecks such as referring cases to specialized dermatopathologists, or ordering additional tissue staining beyond the standard H&E. These contributions are especially important as the number of skin biopsies performed per year has skyrocketed, while the number of practicing pathologists has declined. The advent of digital pathology has brought the revolution in machine learning and artificial intelligence to bear on a variety of tasks common to pathology labs. Several deep learning algorithms have been introduced to distinguish between different skin cancers and healthy tissue with very high accuracy. See, e.g., De Logu, et al., Recognition of cutaneous melanoma on digitized histopathological slides via artificial intelligence algorithm, Frontiers in Oncology, 10, 2020; Thomas, et al., Interpretable deep learning systems for multi-class segmentation and classification of nonmelanoma skin cancer, Medical Image Analysis, 68:101915, 2021; Zormpas-Petridis, et al., Superhistopath: A deep learning pipeline for mapping tumor heterogeneity on low-resolution whole-slide digital histopathology images, Frontiers in Oncology, 10:3052, 2021; and Geijs, et al., End-to-end classification on basal-cell carcinoma histopathology whole-slides images, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, February 2021. However, almost all of these studies fail to demonstrate the robustness required for use in a clinical workflow setting because they were tested on a small number (<˜1000) of whole-slide images. Moreover, these algorithms are often not capable of triaging whole-slide images, as they use curated training and test datasets that do not represent the diversity of cases encountered in a dermatopathology lab. Many of them rely on pixel-level annotations to train their models, which is slow and expensive to scale to a large dataset with greater variability. Considerable advancements have been made towards systems capable of use in clinical practice for prostate cancer. In Campanella, et al., Clinical-grade computational pathology using weakly supervised deep learning on whole-slide images, Nature Medicine, 25(8):1301-1309, 2019, the authors trained a model in a weakly-supervised framework that did not require pixel-level annotations to classify prostate cancer and validated on ˜10,000 whole-slide images sourced from multiple countries. However, some degree of human-in-the-loop curation was performed on their dataset, including manual quality control such as post-hoc removal of slides with pen ink from the study. Pantanowitz, et al., An artificial intelligence algorithm for prostate cancer diagnosis in whole-slide images of core needle biopsies: a blinded clinical validation and deployment study, The Lancet Digital Health, 2(8):e407-e416, 2020 describes using pixel-wise annotations to develop a model trained on ˜550 whole-slide images that distinguishes high-grade from low-grade prostate cancer.
In dermatopathology, the model developed in Ianni, et al.,Tailored for real-world: A whole-slide image classification system validated on uncurated multi-site data emulating the prospective pathology workload, Nature Scientific Reports, 10(1):1-12, 2020, hereinafter, “Ianni 2020”, classified skin lesion specimens between four morphology-based groups, was tested on ˜13,500 whole-slide images, and also demonstrated that use of confidence thresholding could provide a high accuracy; however, it grouped malignant melanoma with all other benign melanocytic lesions, limiting its potential uses. Additionally, all previous attempts at pathology classification using deep learning have, at their greatest level of abstraction, performed classification at the level of a whole-slide image or a sub-region of a whole-slide image. Because a pathologist is required to review all whole-slide images from a tissue specimen, previous deep learning pathology efforts therefore do not leverage the same visual information that a pathologist would have at hand to perform a diagnosis, require some curation of datasets to ensure that pathology is present in all training slides, and implement ad-hoc rules for combining the predictions of each whole-slide corresponding to a specimen. Most have also neglected the effect of diagnostic discordance on their ground truth, resulting in potentially mislabeled training and testing data. Thus, this Section presents a reduction to practice that can classify skin cases for triage and prioritization prior to pathologist review. Unlike previous systems, the reduction to practice performs hierarchical melanocytic specimen classification into low (MPATH I-II), Intermediate (MPATH III), or High (MPATH IV-V) diagnostic categories, allowing for prioritization of melanoma cases. The reduction to practice was the first to classify skin biopsies at the specimen level through a collection of whole-slide images that represent the entirety of the tissue from a single specimen, e.g., a supra-image. This training procedure is analogous to the process of a dermatopathologist, who reviews the full collection of scanned whole-slide images corresponding to a specimen to make a diagnosis. Finally, the reduction to practice was trained and validated on the largest dataset of consensus-reviewed melanocytic specimens published to date. The reduction to practice was built to be scalable and ready for the real-world, built without any pixel-level annotations, and incorporating the automatic removal of scanning artifacts. B. Reference and Validation Lab Data Collection The reduction to practice was trained using slides from 3511 specimens (consisting of 7685 whole-slide images) collected from a leading dermatopathology lab in a top academic medical center (Department of Dermatology at University of Florida College of Medicine), which is referred to as the “Reference Lab”. The Reference Lab dataset consisted of both an uninterrupted series of sequentially-accessioned cases (69% of total specimens) and a targeted set, curated to enrich for rarer melanocytic pathologies (31% of total specimens). Melanocytic specimens were only included in this set if three dermatopathologists' consensus on diagnosis could be established. The whole-slide images consisted exclusively of H&E-stained, formalin-fixed, paraffin-embedded dermatopathology tissue and were scanned using a 3DHistech P250 High Capacity Slide Scanner at an objective power of 20×, corresponding to 0.24 μm/pixel. 
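The specimen-level (supra-image) framing described above can be illustrated with a small data structure that keeps every whole-slide image of a specimen together, so that classification operates on the full collection of slides rather than on any single slide. The following sketch is illustrative only; the class, field, and helper names are hypothetical and are not taken from the reduction to practice.

    from dataclasses import dataclass, field
    from typing import List
    import numpy as np

    @dataclass
    class SupraImage:
        """A specimen-level unit: every whole-slide image from one tissue specimen."""
        specimen_id: str
        slide_paths: List[str] = field(default_factory=list)  # one entry per whole-slide image
        label: str = ""  # specimen-level (not slide-level) ground-truth class

        def all_tiles(self, tiler) -> np.ndarray:
            # Pool tiles from every slide so the specimen is classified as one bag,
            # mirroring how a dermatopathologist reviews all slides of a specimen together.
            tiles = [tile for path in self.slide_paths for tile in tiler(path)]
            return np.stack(tiles) if tiles else np.empty((0, 128, 128, 3), dtype=np.uint8)

Grouping tiles at the specimen level, rather than per slide, is what allows a single ground-truth label per specimen to supervise training without per-slide or per-pixel annotations.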
The final classification given by the reduction to practice was one of six classes, defined by their morphologic characteristics:
1. Basaloid: containing abnormal proliferations of basaloid-oval cells, primarily basal cell carcinoma of various types;
2. Squamous: containing malignant squamoid epithelial proliferations, consisting primarily of squamous cell carcinoma (invasive and in situ);
3. Melanocytic Low Risk: benign to moderately atypical melanocytic nevi/proliferation of cells of melanocytic origin, classified as the MPATH I or MPATH II diagnostic category;
4. Melanocytic Intermediate Risk: severely atypical melanocytic nevi or melanoma in situ, classified as the MPATH III diagnostic category;
5. Melanocytic High Risk: invasive melanoma, classified as the MPATH IV or V diagnostic category; or
6. Other: all skin specimens that do not fit into the above classes, including but not limited to inflammatory conditions and benign proliferations of squamoid epithelial cells.
The overall reference set was composed of 544 Basaloid, 530 Squamous, 1079 Melanocytic and 1358 Other specimens. Of the Melanocytic specimens, 764 were Low Risk, 213 were Intermediate Risk and 102 were High Risk. The heterogeneity of this reference set is illustrated in Table 1, below.

TABLE 1
Counts of each of the general pathologies in the reference set from the Reference Lab, broken out into specific diagnostic entities

Diagnostic Morphology                                        Counts
Basaloid                                                        544
  Nodular Basal Cell Carcinoma                                  404
  Basal Cell Carcinoma, NOS                                     123
  Basal Cell Carcinoma, Morphea type                              7
  Pilomatrixoma                                                   5
  Infiltrative Basal Cell Carcinoma                               5
Squamous                                                        530
  Invasive Squamous Cell Carcinoma                              269
  Squamous Cell Carcinoma in situ (Bowen’s Disease)             254
  Fibrokeratoma                                                   4
  Warty Dyskeratoma                                               3
Melanocytic High Risk                                           102
  Melanoma                                                      102
Melanocytic Intermediate Risk                                   213
  Melanoma In Situ                                              202
  Severe Dysplasia                                                9
Melanocytic Low Risk                                            764
  Conventional Melanocytic Nevus (acquired and congenital)      368
  Mild Dysplasia                                                289
  Moderate Dysplasia                                             75
  Halo Nevus                                                     14
  Dysplastic Nevus, NOS                                          12
  Spitz Nevus                                                     2
  Blue Nevus                                                      2
Other Diagnoses                                                1360

The specimen counts presented herein for the melanocytic classes reflect counts following three-way consensus review (see Section IV(C)). For training, validating, and testing the reduction to practice, this dataset was divided into three partitions by sampling at random without replacement, with 70% of specimens used for training, and 15% used for each of validation and testing. To validate performance and generalizability across labs, scanners, and associated histopathology protocols, several large datasets of similar composition to the Reference Lab were collected from leading dermatopathology labs of two additional top academic medical centers (Jefferson Dermatopathology Center, Department of Dermatology Cutaneous Biology, Thomas Jefferson University, denoted as “Validation Lab 1”, and Department of Pathology and Laboratory Medicine at Cedars-Sinai Medical Center, which is denoted as “Validation Lab 2”). These datasets are both comprised of: (1) an uninterrupted set of sequentially-accessioned cases (65% for Validation Lab 1, 24% for Validation Lab 2), and (2) a set targeted to heavily sample melanoma, pathologic entities that mimic melanoma, and other rare melanocytic specimens. Specimens from Validation Lab 1 consisted of slides from 2795 specimens (3033 whole-slide images), scanned using a 3DHistech P250 High Capacity Slide Scanner at an objective power of 20× (0.24 μm/pixel).
Specimens from Validation Lab 2 consisted of slides from 2066 specimens (2066 whole-slide images; each specimen represented by a single whole-slide image), with whole-slide images scanned using a Ventana DP 200 scanner at an objective power of 20× (0.47 μm/pixel). Note: specimen and whole-slide image counts above reflect specimens included in the study after screening melanocytic specimens for inter-pathologist consensus. Table 2 shows the class distribution for the Validation Labs.

TABLE 2
Class counts for the Validation Lab datasets

Label Category        Validation Lab 1    Validation Lab 2
MPATH I-II                        1457                 458
MPATH III                          225                 361
MPATH IV-V                         100                 361
Basaloid                           198                 265
Squamous                           104                  55
Other                              711                 563

C. Consensus Review

There are high discordance rates in diagnosing melanocytic specimens. Elmore et al. [4] studied 240 dermatopathology cases and found that the consensus rate for MPATH Class II lesions was 25%, for MPATH Class III lesions 40%, and for MPATH Class IV 45%. Therefore, three board-certified pathologists reviewed each melanocytic specimen to establish a reliable ground truth for melanocytic cases in the implementation of the reduction to practice described herein. The first review was the original specimen diagnosis made via glass slide examination under a microscope. Two additional dermatopathologists independently reviewed and rendered a diagnosis digitally for each melanocytic specimen. The patient's year of birth and gender were provided with each specimen upon review. Melanocytic specimens were considered to have a consensus diagnosis and included in the study if:
1. All three dermatopathologists were in consensus on a diagnostic class for the specimen, or
2. Two of three dermatopathologists were in consensus on a diagnostic class for the specimen, and a fourth and fifth pathologist reviewed the specimen digitally and both agreed with the majority classification.
A diagnosis was rendered in the above fashion for every melanocytic specimen obtained from the Reference Lab and Validation Lab 1. All dysplastic and malignant melanocytic specimens from Validation Lab 2 were reviewed by three dermatopathologists, and only the specimens for which consensus could be established were included in the study. No non-melanocytic specimens were reviewed for concordance due to inherently lower known rates of discordance. For the specimens obtained from the Reference Lab, consensus was established for 75% of specimens originally diagnosed as MPATH I/II, 66% of those diagnosed as MPATH III, 87% of those diagnosed as MPATH IV/V, and for 74% of the reviewed specimens in total. For specimens obtained from Validation Lab 1, pathologist consensus was established for 84% of specimens originally diagnosed as MPATH I/II, 51% of those diagnosed as MPATH III, 54% of those diagnosed as MPATH IV/V, and for 61% of the reviewed specimens in total.

D. Reduction to Practice System Architecture

FIG.10is a schematic diagram of the system architecture1000of an example reduction to practice. The reduction to practice includes three main components: quality control1010, feature extraction1020, and hierarchical classification1030. A brief description of how the reduction to practice was used to classify a novel supra-image follows. Each specimen1002, a supra-image, was first segmented into tissue-containing regions, subdivided into 128×128 pixel tiles by tiling1004, and extracted at an objective power of 10×. Each tile was passed through the quality control1010, which includes ink filtering1012, blur filtering1016, and image adaptation1014.
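A minimal sketch of the tiling and quality-control pass described above, assuming the tissue-containing regions have already been segmented and loaded as image arrays; the three helper callables stand in for the ink filtering, blur filtering, and image adaptation stages and are assumptions rather than the actual implementation.

    import numpy as np

    TILE = 128  # tile edge length in pixels, per the pipeline described above

    def tile_tissue_region(region: np.ndarray):
        """Subdivide one tissue-containing region (H x W x 3 array) into 128x128 tiles."""
        h, w, _ = region.shape
        for y in range(0, h - TILE + 1, TILE):
            for x in range(0, w - TILE + 1, TILE):
                yield region[y:y + TILE, x:x + TILE]

    def quality_control(tiles, ink_filter, blur_filter, image_adapter):
        # Keep only tiles that pass ink and blur filtering, then color-standardize them.
        # The callables correspond conceptually to elements 1012, 1016, and 1014.
        return [image_adapter(t) for t in tiles if not ink_filter(t) and not blur_filter(t)]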
Ink filtering1012implemented at least a portion of an embodiment of method700. The image-adapted tiles were then passed through the feature extraction1020stage, including a pretrained ResNet50 network1022, to obtain embedded vectors1024as components corresponding to the tiles. Next, the embedded vectors1024were propagated through the hierarchical classification1030stage, including an upstream neural network1032performing a binary classification between “Melanocytic Suspect” and “Rest”. Specimens that were classified as “Melanocytic Suspect” were fed into a first downstream neural network1034, which classified between “Melanocytic High Risk, Melanocytic Intermediate Risk” and “Rest”. The remaining specimens were fed into a second downstream “Rest” neural network1036, which classified between “Basaloid, Squamous, Melanocytic Low Risk” and “Other”. This classification process of the reduction to practice is described in detail presently. Quality control1010included ink filtering1012, blur filtering1016, and image adaptation1014. Pen ink, used to mark the location of possible malignancy on glass slides, is common in labs migrating their workload from glass slides to whole-slide images. This pen ink represented a biased distractor signal in training the reduction to practice because it is highly correlated with malignant or High Risk pathologies. Tiles containing pen ink were identified by a weakly supervised neural network trained to detect inked slides. These tiles were removed from the training and validation data, and were also removed before inference on the test set. Areas of the image that were out of focus due to scanning errors were also removed to the extent possible by blur filtering1016, by setting a threshold on the variance of the Laplacian over each tile. In order to avoid domain shift between the colors of the training data and validation data, the reduction to practice adopted as its image adaptation1014the image adaptation procedure in Ianni 2020. The next component of the reduction to practice, feature extraction1020, extracted informative features from the quality-controlled, color-standardized tiles. To capture higher-level features in these tiles, they were propagated through a neural network (ResNet50; He, et al., Deep residual learning for image recognition, arXiv preprint arXiv:1512.03385, 2015) trained on the ImageNet dataset (Deng, et al., Imagenet: A large-scale hierarchical image database, In IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, 2009) to embed each input tile into 1024 channel vectors, which were then used in subsequent neural networks. The hierarchical neural network architecture was developed in order to classify both Melanocytic High and Intermediate Risk specimens with high sensitivity. First, the upstream neural network1032performed a binary classification between “Melanocytic Suspect” (defined as “High or Intermediate Risk”) and “Basaloid, Squamous, Low Risk”, or “Other” (which are collectively defined as the “Rest” class). Specimens that were classified as “Melanocytic Suspect” were fed into the downstream neural network1034, which further classified the specimen between “Melanocytic High Risk, Melanocytic Intermediate Risk” and “Rest”. The remaining specimens, classified as “Rest”, were fed into a separate downstream neural network1036, which further classified the specimen between “Basaloid, Squamous, Melanocytic Low Risk” and “Other”.
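A generic sketch of the blur filtering and feature extraction stages described above: a variance-of-Laplacian focus check and an ImageNet-pretrained ResNet50 used as a frozen tile embedder. The blur threshold value is an assumption, and the off-the-shelf ResNet50 pooled output is 2048-dimensional, so the 1024-channel embedding described above implies an additional truncation or projection that is not reproduced here.

    import cv2
    import numpy as np
    import torch
    import torchvision.models as models
    import torchvision.transforms as T

    def is_blurry(tile_bgr: np.ndarray, threshold: float = 100.0) -> bool:
        # Variance of the Laplacian as a focus measure; threshold chosen for illustration only.
        gray = cv2.cvtColor(tile_bgr, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold

    # ImageNet-pretrained ResNet50 with the classification head removed, used as a fixed embedder.
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    backbone.fc = torch.nn.Identity()
    backbone.eval()

    preprocess = T.Compose([T.ToTensor(),
                            T.Normalize(mean=[0.485, 0.456, 0.406],
                                        std=[0.229, 0.224, 0.225])])

    @torch.no_grad()
    def embed_tiles(tiles_rgb):
        # tiles_rgb: list of 128x128x3 uint8 arrays that passed quality control.
        batch = torch.stack([preprocess(t) for t in tiles_rgb])
        return backbone(batch)  # one embedding vector per tile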
Each neural network1032,1034,1036included four fully-connected layers (two layers of 1024 channels each, followed by two of 512 channels each). Each neuron in the three layers after the input layer was ReLU activated. The three neural networks1032,1034,1036in the hierarchy were trained under a weakly-supervised multiple-instance learning (MIL) paradigm. Each embedded tile was treated as an instance of a bag containing all quality-assured tiles of a specimen. Embedded tiles were aggregated using sigmoid-activated attention heads. To help prevent over-fitting, the training dataset included augmented versions of the tiles. Augmentations were generated with the following augmentation strategies: random variations in brightness, hue, contrast, and saturation (up to a maximum of 15%), Gaussian noise with 0.001 variance, and random 90° image rotations. The upstream binary “Melanocytic Suspect vs. Rest” classification neural network1032and the downstream “Rest” subclassifier neural network1036were each trained end-to-end with cross-entropy loss. The “Melanocytic Suspect” subclassifier neural network1034was also trained with cross-entropy loss, but with a multi-task learning strategy. This subclassifier neural network1034was presented with three tasks: differentiating “Melanocytic High Risk” from “Melanocytic Intermediate Risk” specimens, “Melanocytic High Risk” from “Rest” specimens, and “Melanocytic Intermediate Risk” from “Rest” specimens. The training loss for this subclassifier neural network1034was computed for each task, but was masked if it did not relate to the ground truth label of the specimen. Two out of three tasks were trained for any given specimen in a training batch. By training in this manner, the shared network layers were used as a generic representation of melanocytic pathologies, while the task branches learned to attend to specific differences to accomplish their tasks. FIG.11is a schematic diagram representing a hierarchical classification technique1100implemented by the reduction to practice ofFIG.10. For example, the hierarchical classification technique1100may be implemented by hierarchical classification1030as shown and described above in reference toFIG.10. Thus,FIG.11depicts Melanocytic Suspect Subclassifier1134, corresponding to the first downstream neural network1034ofFIG.10, and depicts Rest subclassifier1136, corresponding to the second downstream neural network1036ofFIG.10. During inference, the predicted classes of an input specimen1102(e.g., a supra-image) were computed as follows:
1. The larger of the two confidence values1104(see below for the confidence thresholding procedure) output from the upstream classifier determined which downstream classifier a specimen was passed to.
2. If the specimen was handed to the “Rest” subclassifier1136, the highest confidence class probability was used as the predicted label.
3. If the specimen was handed to the Melanocytic Suspect subclassifier1134, the highest confidence class probability between the “Melanocytic High Risk vs Rest” and “Melanocytic Intermediate Risk vs Rest” tasks was used as the predicted label.
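A minimal PyTorch sketch of an attention-based multiple-instance learning head of the kind described above, assuming 1024-dimensional tile embeddings. The attention hidden width, the unnormalized sigmoid attention pooling, and the placement of an output layer after the four fully-connected layers are assumptions for illustration, not details taken from the reduction to practice.

    import torch
    import torch.nn as nn

    class AttentionMILClassifier(nn.Module):
        """Weakly supervised MIL head: a bag of tile embeddings in, specimen-level logits out."""

        def __init__(self, in_dim: int = 1024, n_classes: int = 2):
            super().__init__()
            self.attention = nn.Sequential(
                nn.Linear(in_dim, 256), nn.Tanh(),
                nn.Linear(256, 1), nn.Sigmoid())       # one sigmoid-activated weight per tile
            self.classifier = nn.Sequential(
                nn.Linear(in_dim, 1024), nn.ReLU(),
                nn.Linear(1024, 1024), nn.ReLU(),
                nn.Linear(1024, 512), nn.ReLU(),
                nn.Linear(512, 512), nn.ReLU(),
                nn.Linear(512, n_classes))

        def forward(self, bag: torch.Tensor):
            # bag: (num_tiles, in_dim) embedded tiles forming one specimen-level bag
            a = self.attention(bag)                           # (num_tiles, 1), values in [0, 1]
            pooled = (a * bag).sum(dim=0) / (a.sum() + 1e-8)  # attention-weighted bag embedding
            return self.classifier(pooled), a.squeeze(-1)

Cross-entropy on the pooled specimen-level logits provides the weakly supervised training signal; for a multi-task subclassifier, one such output head per task can share the earlier layers, with each task's loss masked whenever the task does not apply to the specimen's ground-truth label.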
As an additional step in the classification pipeline, the hierarchical classification technique1100performed classification with uncertainty quantification to establish a confidence score for each prediction using a Monte Carlo dropout method following a similar procedure as used by Gal et al., Dropout as a Bayesian approximation: Representing model uncertainty in deep learning, In International Conference on Machine Learning, pages 1050-1059, 2016. Using the confidence distribution of the specimens in the validation set of the Reference Lab, the hierarchical classification technique1100computed confidence threshold values for each predicted class following the procedure outlined in Ianni 2020, by requiring classifications to meet a predefined level of accuracy in the validation set. Specimens initially predicted as “Melanocytic High Risk” had to pass two confidence thresholds in order to retain the “Melanocytic High Risk” prediction: an accuracy threshold1112and a PPV threshold1114, both established a priori on the validation set. Specimens that were predicted to be “Melanocytic High Risk” but failed to meet these thresholds were predicted as “Melanocytic Suspect”. Thresholds were set that maximized the sensitivity of the reduction to practice to the “Melanocytic Suspect” class while simultaneously maximizing the PPV to the “Melanocytic High Risk” class. To evaluate how the reduction to practice generalizes to data from other labs, the neural networks trained on data from the Reference Lab were fine-tuned on data from both Validation Lab 1 and Validation Lab 2. A quantity of 255 specimens was set aside from each validation lab (using an equal class distribution of specimens) as the calibration set, of which 210 specimens were used as the training set, and 45 specimens were used as the validation set for fine tuning the neural networks. (The remaining specimens in each validation lab were used as the test set.) The final validation lab metrics presented below are reported on the test set with these calibrated neural networks.

E. Performance Evaluation

FIG.12depicts Receiver Operating Characteristic (“ROC”) curves1200for the neural networks implemented by the reduction to practice ofFIG.6. In particular, the ROC curves derived from the Reference Lab test dataset for the hierarchical neural networks632,634,636of the reduction to practice as shown and described in reference toFIG.6are depicted inFIG.12.FIG.12depicts such results for the upstream classifier (left column), the High & Melanocytic Intermediate classifier (middle column), and the Basaloid, Squamous, Low Risk Melanocytic & Rest classifier (right column), for the Reference Lab (first row), for Validation Lab 1 (second row), and for Validation Lab 2 (third row). The Area Underneath the ROC Curve (“AUC”) values, calculated with the one-vs-rest scoring scheme, were 0.97, 0.95, 0.87, 0.84, 0.81, 0.93, and 0.96 for the Basaloid, Squamous, Other, Melanocytic High Risk, Melanocytic Intermediate Risk, Melanocytic Suspect, and Melanocytic Low Risk classes, respectively. Table 3 shows the performance of the reduction to practice with respect to diagnostic entities of clinical interest on the Reference Lab test dataset. In particular, Table 3 shows metrics for selected diagnoses of clinical interest, based on the Reference Lab test set, representing the classification performance of the individual diagnoses into their higher-level classes: e.g., a correct classification of “Melanoma” is the prediction “Melanocytic High Risk”.
Results are class-weighted according to the relative prevalence in the test set.

TABLE 3
Metrics for selected diagnoses of clinical interest

Diagnosis                                           PPV    Sensitivity   F1 Score   Balanced Accuracy   Support
Melanoma → Melanocytic High Risk                   0.66   0.45          0.47       0.52                     23
Melanoma → Melanocytic Suspect                     1.00   0.83          0.90       0.83                     23
Melanoma in situ → Melanocytic Intermediate Risk   1.00   0.75          0.86       0.75                     20
Melanoma in situ → Melanocytic Suspect             1.00   0.85          0.92       0.85                     20
Spitz Nevus                                        0.00   0.00          0.00       0.00                      2
Dysplastic Nevus                                   0.91   0.76          0.82       0.56                     61
Dermal Nevus                                       1.00   0.81          0.90       0.81                     28
Compound Nevus                                     0.94   0.75          0.82       0.55                     73
Junctional Nevus                                   0.84   0.77          0.80       0.42                     61
Halo Nevus                                         1.00   1.00          1.00       1.00                     20
Blue Nevus                                         1.00   0.67          0.80       0.67                     68
Squamous Cell Carcinoma                            1.00   0.81          0.89       0.81                     15
Bowen’s Disease                                    1.00   0.85          0.92       0.85                      4
Basal Cell Carcinoma                               1.00   0.84          0.91       0.84                      8

The sensitivity of the reduction to practice to the Melanocytic Suspect class was found to be 0.83 and 0.85 for the Melanocytic High and Intermediate Risk classes, respectively. The PPV to Melanocytic High Risk was found to be 0.57. The dropout Monte Carlo procedure set the threshold for Melanocytic High Risk classification very high; specimens below this threshold were classified as Melanocytic Suspect, maximizing the sensitivity to this class. After fine-tuning all three neural networks in the hierarchy through the calibration procedure in each validation lab, the reduction to practice was able to generalize to unseen data from both validation labs as depicted inFIG.12. Note that fine-tuning was not performed for any of the neural networks in the pre-processing pipeline (Colorization, Ink Detection or ResNet). The ROC curves derived from the Validation Lab 1 and Validation Lab 2 test datasets are shown inFIG.12. The AUC values for Validation Lab 1 were 0.95, 0.88, 0.81, 0.87, 0.87, 0.95, and 0.92 for the Basaloid, Squamous, Other, Melanocytic High Risk, Intermediate Risk, Suspect, and Low Risk classes, respectively, and the AUC values for the same classes for Validation Lab 2 were 0.93, 0.92, 0.69, 0.76, 0.75, 0.82, and 0.92.

F. Consensus Ablation Study

FIG.13depicts a chart1300comparing Reference Lab performance on the same test set when trained on consensus and non-consensus data. The melanocytic class referenced in chart1300is defined as the Low, Intermediate and High Risk classes. The sensitivities of the Melanocytic Intermediate and High Risk classes are defined with respect to the reduction to practice classifying these classes as suspect. The PPV to Melanocytic High Risk in the non-consensus trained model was 0.33, while that of the consensus-trained model was 0.57. In general, diagnosing melanocytic cases is challenging. Although some specimens (such as ones diagnosed as compound nevi) clearly exhibit very low risk, and others (such as invasive melanoma) exhibit very high risk of progressing into life-threatening conditions, reproducible stratification in the middle of the morphological spectrum has historically proved difficult. The results disclosed in this Section were derived with the reduction to practice trained and evaluated on consensus data: data for which the ground truth melanocytic specimen diagnostic categories were agreed upon by multiple experts. To understand the effect of consensus on training deep learning neural networks, an ablation study was performed by training two hierarchical neural networks. Both neural networks used all non-melanocytic specimens available in the training set. The first neural network was trained only including melanocytic specimens for which consensus was obtained under the diagnostic categories of MPATH I/II, MPATH III, or MPATH IV/V.
The other neural network was trained by also including non-consensus data: melanocytic specimens whose diagnostic category was not agreed upon by the experts. To facilitate a fair comparison, validation sets for both neural network versions and a common consensus test set derived from the Reference Lab were reserved. The sensitivities of the reduction to practice to different classes on both consensus and non-consensus data are shown inFIG.13, where a clear improvement of over 40% is shown in the sensitivity to the Melanocytic class for melanocytic specimens that are annotated with consensus labels over ones that are not; this improvement primarily manifested from a reduction in false positive Melanocytic Suspect classifications.

G. Discussion

This document discloses a reduction to practice capable of automatically sorting and triaging skin specimens with high sensitivity to Melanocytic Suspect cases prior to review by a pathologist. By contrast, prior art techniques may provide diagnostically-relevant information on a potential melanoma specimen only after a pathologist has reviewed the specimen and classified it as a Melanocytic Suspect lesion. The ability of the reduction to practice to classify suspected melanoma prior to pathologist review could substantially reduce diagnostic turnaround time for melanoma by not only allowing timely review and expediting the ordering of additional tests or stains, but also ensuring that suspected melanoma cases are routed directly to subspecialists. The potential clinical impact of an embodiment with these capabilities is underscored by the fact that early melanoma detection is correlated with improved patient outcomes. FIG.14depicts a chart1400showing mean1402and standard deviation1404sensitivity to melanoma versus percentage reviewed for 1,000 simulated sequentially-accessioned datasets, drawn from Reference Lab confidence scores. In the clinic, 95% of melanoma suspect cases are detected within the first 30% of cases, when ordered by melanoma suspect model confidence. As the reduction to practice was optimized to maximize melanoma sensitivity, the performance was investigated as a simple Melanocytic Suspect binary classifier. The reduction to practice may be used to sort a pathologist's work list of specimens by the reduction to practice's confidence (in descending order) in the upstream classifier's suspect melanocytic classification.FIG.14demonstrates the resulting sensitivity to the Melanocytic Suspect class against the percentage of total specimens that a pathologist would have to review in this sorting scheme in order to achieve that sensitivity. According to this dataset, a pathologist would only need to review between 30% and 60% of the caseload to address all melanoma specimens. Diagnostic classification of melanocytic lesions remains challenging. There is a known lack of consensus among pathologists, and a disturbing lack of intra-pathologist concordance over time was recently reported. Training with consensus data resulted in improved performance in classifications excluding Melanocytic Suspect, which has the highest pathologist discordance rates, as shown in chart1300. Because pathologists tend to cautiously diagnose a benign lesion as malignant, the reduction to practice learned the same bias in the absence of consensus.
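The sorted-worklist analysis summarized above can be sketched as follows. The function and variable names are hypothetical, and the simulation behind chart1400 (resampling 1,000 sequentially-accessioned datasets and averaging the resulting curves) is not reproduced here; the sketch computes the curve for a single ordering of the caseload by descending suspect-class confidence.

    import numpy as np

    def sensitivity_vs_reviewed(confidences, is_melanoma):
        """Sensitivity to melanoma as a function of the fraction of the worklist reviewed,
        when specimens are reviewed in descending order of suspect-class confidence."""
        order = np.argsort(-np.asarray(confidences))                   # most suspect first
        hits = np.cumsum(np.asarray(is_melanoma, dtype=float)[order])  # melanomas found so far
        total = max(hits[-1], 1.0)
        reviewed_fraction = np.arange(1, len(order) + 1) / len(order)
        return reviewed_fraction, hits / total

    # Hypothetical usage:
    # fraction, sensitivity = sensitivity_vs_reviewed(suspect_confidences, labels == "melanoma")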
By training on consensus of multiple dermatopathologists, the reduction to practice may have the unique ability to learn a more consistent feature representation of melanoma and aid in flagging misdiagnosis. While the reduction to practice is highly sensitive to melanoma (84% correctly detected as Intermediate or High Risk in the Reference Lab Test set) there are a large number of false positives (2.7% of sequentially-accessioned specimens in the reference lab were predicted to be suspect) classified as suspect. It may therefore be possible to flag initial diagnoses discordant with the reduction to practice's classification of highly confident predictions for review in order to lower the false positive rate. The reduction to practice also enables other automated pathology workflows in addition to triage and prioritization of suspected melanoma cases. Sorting and triaging specimens into other classifications such as Basaloid could allow the majority of less complicated cases (such as basal cell carcinoma) to be directly assigned to general pathologists, or to dermatologists who routinely sign out such cases. Relevant to any system designed for clinical use is how well its performance generalizes to sites on which the system was not trained. Performance of the reduction to practice on the Validation Labs after calibration (as shown inFIG.10) was in many cases close to that of the Reference Lab. IV. Second Example Reduction to Practice This Section presents a second example reduction to practice. In the second example reduction to practice, a weakly-supervised attention-based neural network, similar to neural network200, was trained under a multiple-instance learning paradigm to detect and remove pen ink on a slide. In particular, an attention-based neural network was trained under a multiple-instance learning framework to detect whether or not ink was present on a slide, where the attention-based neural network treated a slide as a bag and tiles from the slide as instances. The training corpus for the second example reduction to practice included whole-slide images of H&E-stained malignant skin (240 whole-slide images) and prostate (465 whole-slide images) specimens, half with and half without pen ink present. The dataset was randomly partitioned into 70%/15%/15% train/validation/test sets. Each whole-slide image was divided into 128×128 pixel tiles to train the model to classify each whole-slide image as positive or negative for pen ink. Ink-containing regions were identified by iteratively predicting on a whole-slide image with high-attention tiles removed until the prediction became negative, and then automatically excluded from the image. That is, ink-containing regions were identified and removed using an application of at least a portion of method700. If both benign and malignant tissue types were represented in the training corpus, the weakly supervised model used to detect ink might instead have learned to identify patterns of malignancy. To avoid this, the training corpus included whole-slide images with and without pen ink from a dataset of skin biopsies, specifically melanocytic tissues (240 whole-slide images; 236 melanomas [in situ: 118, invasive: 118], 3 dysplastic, 1 Spitz) scanned on Ventana DP-200, and from a dataset of prostate biopsies (465 whole-slide images; 182 Gleason grade 6, 201 grade 7, 40 grade 8, 42 grade 9) scanned on Epredia (3D Histech). Whole-slide images were drawn from both source datasets such that 50% of whole-slide images had ink present and 50% did not. 
Each whole-slide image was first passed through a tissue segmentation stage, and the tissue regions were divided into a bag of 128×128 pixel tiles to train the model. The model included five convolutional layers, two fully connected layers, a single attention head and a single sigmoid-activated output head. The ink detector was trained only on whole-slide-image-level labels, without requiring pixel-level annotations. If the output was greater than 0.5, it was interpreted as a positive prediction. If ink was detected (i.e., the output value was greater than 0.5), the second example reduction to practice used the attention values for each tile to steadily remove highly-attended tiles by iteratively performing inference on subsets of tiles, until the decision of the model changed to “no ink” (i.e., the output dropped beneath 0.5), using an application of at least a portion of method500. After the tiles that contributed to the decision of ink being present were identified, they were removed from the bag, and the scrubbed whole-slide image could be used for downstream training of weakly supervised models.FIG.8depicts an application of the second example reduction to practice. The ink-detection model of the second example reduction to practice achieved 98% balanced accuracy (F1 score=0.98) on 106 withheld test whole-slide images. To demonstrate the efficacy of removing ink tiles in downstream modeling, a malignancy-detection model was trained on prostate whole-slide images with and without pen ink excluded, to discriminate prostate cancer regions with a Gleason score of at least 6. The model trained without pen ink removed erroneously focused on ink tiles to achieve strong performance, at 92% balanced accuracy. With ink removed, model performance increased to 95% balanced accuracy, demonstrating a +3% improvement on balanced accuracy and a +3% improvement on precision by focusing on regions of malignancy, reducing false positives. The technique for pen ink removal applied by the second example reduction to practice required no annotations and performed well on both skin and prostate images. The technique was not color-dependent, and required no handcrafted or heuristic features to select inked regions. The second example reduction to practice thus demonstrates the importance of removing such seemingly innocuous artifacts from machine learning datasets. Thus, the first and second reductions to practice demonstrate the advantages of removing seemingly innocuous artifacts from machine learning training corpora. In particular, the first and second reductions to practice show an improvement in performance when pen ink regions are removed from whole-slide images. However, pen ink is one of many commonly occurring quality issues. More broadly, embodiments may be used to detect and remove any artifacts that could adversely bias models if ignored when using weakly supervised learning.
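The iterative scrub described above can be sketched as a binary search over the attention-ordered tiles, in the spirit of the procedure recited in the clauses that follow. The helper predict_ink stands in for the trained ink detector (assumed to map a list of tiles to a sigmoid output in [0, 1]); the ascending ordering and the final thresholding step are assumptions for illustration rather than the exact form of method500.

    import numpy as np

    def find_threshold_attention(tiles, attention_weights, predict_ink):
        """Binary search for the attention weight above which tiles drive a positive 'ink' decision."""
        order = np.argsort(np.asarray(attention_weights))         # ascending attention
        seq = list(order)
        while len(seq) > 1:
            low, high = seq[:len(seq) // 2], seq[len(seq) // 2:]   # low- and high-attention halves
            if predict_ink([tiles[i] for i in low]) > 0.5:
                seq = low      # the ink signal is already present among the lower-attention tiles
            else:
                seq = high     # the ink signal is carried by the higher-attention tiles
        return attention_weights[seq[0]]                           # approximate threshold weight

    def scrub(tiles, attention_weights, predict_ink):
        # Remove tiles at or above the threshold so the scrubbed bag reads as 'no ink'.
        thr = find_threshold_attention(tiles, attention_weights, predict_ink)
        return [t for t, a in zip(tiles, attention_weights) if a < thr]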
Some further aspects are defined in the following clauses: Clause 1: A method of training a first electronic neural network classifier to identify a presence of a particular property in a novel supra-image while ignoring a spurious correlation of the presence of the particular property with a presence of an extraneous property, the method comprising: obtaining a training corpus of a plurality of supra-images, each supra-image comprising at least one image, each image of each of the at least one image corresponding to a respective plurality of components, wherein the respective plurality of components for each image of each of the at least one image of each supra-image of the training corpus collectively form a supra-image plurality of components; passing each respective supra-image of the plurality of supra-images of the training corpus through a second electronic neural network classifier trained to identify a presence of the extraneous property, the second electronic neural network classifier comprising an attention layer, whereby the attention layer assigns a respective attention weight to each component of the supra-image plurality of components; identifying, for each supra-image of the plurality of supra-images of the training corpus that have a positive classification by the second electronic neural network classifier, a respective supra-image threshold attention weight, whereby each component of the supra-image plurality of components is associated with a respective supra-image threshold attention weight, wherein each individual component of the supra-image plurality of components that has a respective attention weight above its respective supra-image threshold attention weight corresponds to positive classification by the second electronic neural network classifier, and wherein each individual component of the supra-image plurality of components that has a respective attention weight below its respective supra-image threshold attention weight corresponds to negative classification by the second electronic neural network classifier; removing components of the supra-image plurality of components that have respective attention weights above their respective supra-image threshold attention weights to obtain a scrubbed training corpus; and training the first electronic neural network classifier to identify the presence of the particular property using the scrubbed training corpus. Clause 2: The method of Clause 1, wherein the extraneous property comprises a pen marking. Clause 3: The method of Clause 1 or Clause 2, wherein the identifying, for each supra-image of the plurality of supra-images of the training corpus that have a positive classification by the second electronic neural network classifier, a respective supra-image threshold attention weight comprises conducting, for each supra-image of the plurality of supra-images of the training corpus, a respective binary search of its components. 
Clause 4: The method of any of Clauses 1-3, wherein the conducting, for each supra-image of the plurality of supra-images of the training corpus, a respective binary search of its components comprises: ordering components of each supra-image of the plurality of supra-images of the training corpus according to their respective attention weights to form a respective ordered sequence for each supra-image of the plurality of supra-images of the training corpus; and iterating, for each respective ordered sequence: splitting the respective ordered sequence into a respective low part and a respective high part, passing the respective low part through the second electronic neural network classifier to obtain a respective low part classification, setting the respective ordered sequence to its respective low part when its respective low part classification is positive, and setting the respective ordered sequence to its respective high part when its respective low part classification is not positive. Clause 5: The method of any of Clauses 1-4, wherein each component of the supra-image plurality of components comprises a 128-pixel-by-128-pixel square portion of an image. Clause 6: The method of any of Clauses 1-5, wherein each component of the supra-image plurality of components comprises a feature vector corresponding to a portion of an image. Clause 7: The method of any of Clauses 1-6, wherein the training corpus comprises a plurality of biopsy supra-images. Clause 8: The method of any of Clauses 1-7, wherein the particular property comprises a dermatopathology property. Clause 9: The method of Clause 8, wherein the dermatopathology property comprises one of: a presence of a malignancy, a presence of a specific grade of malignancy, or a presence of a category of risk. Clause 10: The method of any of Clauses 1-9, further comprising identifying the presence of the particular property in the novel supra-image by submitting the novel supra-image to the first electronic neural network classifier. Clause 11: The method of any of Clauses 1-10, wherein the training corpus comprises a plurality of biopsy supra-images. Clause 12: The method of any of Clauses 1-11, wherein each image comprises a whole-slide image. 
Clause 13: A system for training a first electronic neural network classifier to identify a presence of a particular property in a novel supra-image while ignoring a spurious correlation of the presence of the particular property with a presence of an extraneous property, the system comprising: a processor; and a memory communicatively coupled to the processor, the memory storing instructions which, when executed on the processor, perform operations comprising: obtaining a training corpus of a plurality of supra-images, each supra-image comprising at least one image, each image of each of the at least one image corresponding to a respective plurality of components, wherein the respective plurality of components for each image of each of the at least one image of each supra-image of the training corpus collectively form a supra-image plurality of components; passing each respective supra-image of the plurality of supra-images of the training corpus through a second electronic neural network classifier trained to identify a presence of the extraneous property, the second electronic neural network classifier comprising an attention layer, whereby the attention layer assigns a respective attention weight to each component of the supra-image plurality of components; identifying, for each supra-image of the plurality of supra-images of the training corpus that have a positive classification by the second electronic neural network classifier, a respective supra-image threshold attention weight, whereby each component of the supra-image plurality of components is associated with a respective supra-image threshold attention weight, wherein each individual component of the supra-image plurality of components that has a respective attention weight above its respective supra-image threshold attention weight corresponds to positive classification by the second electronic neural network classifier, and wherein each individual component of the supra-image plurality of components that has a respective attention weight below its respective supra-image threshold attention weight corresponds to negative classification by the second electronic neural network classifier; removing components of the supra-image plurality of components that have respective attention weights above their respective supra-image threshold attention weights to obtain a scrubbed training corpus; and training the first electronic neural network classifier to identify the presence of the particular property using the scrubbed training corpus. Clause 14: The system of Clause 13, wherein the extraneous property comprises a pen marking. Clause 15: The system of Clause 13 or Clause 14, wherein the identifying, for each supra-image of the plurality of supra-images of the training corpus that have a positive classification by the second electronic neural network classifier, a respective supra-image threshold attention weight comprises conducting, for each supra-image of the plurality of supra-images of the training corpus, a respective binary search of its components. 
Clause 16: The system of any of Clauses 13-15, wherein the conducting, for each supra-image of the plurality of supra-images of the training corpus, a respective binary search of its components comprises: ordering components of each supra-image of the plurality of supra-images of the training corpus according to their respective attention weights to form a respective ordered sequence for each supra-image of the plurality of supra-images of the training corpus; and iterating, for each respective ordered sequence: splitting the respective ordered sequence into a respective low part and a respective high part, passing the respective low part through the second electronic neural network classifier to obtain a respective low part classification, setting the respective ordered sequence to its respective low part when its respective low part classification is positive, and setting the respective ordered sequence to its respective high part when its respective low part classification is not positive. Clause 17: The system of any of Clauses 13-16, wherein each component of the supra-image plurality of components comprises a 128-pixel-by-128-pixel square portion of an image. Clause 18: The system of any of Clauses 13-16, wherein each component of the supra-image plurality of components comprises a feature vector corresponding to a portion of an image. Clause 19: The system of any of Clauses 13-18, wherein the training corpus comprises a plurality of biopsy supra-images. Clause 20: The system of any of Clauses 13-19, wherein the particular property comprises a dermatopathology property. Clause 21: The system of Clause 20, wherein the dermatopathology property comprises one of: a presence of a malignancy, a presence of a specific grade of malignancy, or a presence of a category of risk. Clause 22: The system of any of Clauses 13-21, wherein the operations further comprise identifying the presence of the particular property in the novel supra-image by submitting the novel supra-image to the first electronic neural network classifier.
Clause 23: A method of identifying, for a supra-image having a positive classification for a presence of a property by a trained electronic neural network classifier, wherein the trained electronic neural network classifier comprises an attention layer, wherein the supra-image comprises at least one image, wherein each image of the at least one image corresponds to a respective plurality of components, wherein the respective plurality of components for each image of the at least one image collectively form a global plurality of components, at least one component of the global plurality of components that is determinative of the positive classification of the supra-image, the method comprising: classifying the supra-image by the trained electronic neural network classifier, whereby the attention layer assigns a respective attention weight to each component of the global plurality of components; identifying a threshold attention weight, wherein individual components of the global plurality of components having attention weights above the threshold attention weight correspond to a positive classification by the trained electronic neural network, and wherein individual components of the global plurality of components having attention weights below the threshold attention weight correspond to a negative classification by the trained electronic neural network; and identifying, as the at least one component of the global plurality of components that is determinative of the positive classification of the supra-image, the individual components of the global plurality of components having attention weights above the threshold attention weight. Clause 24: The method of Clause 23, wherein the identifying the threshold attention weight comprises conducting a binary search of the global plurality of components. Clause 25: The method of Clause 23 or Clause 24, wherein the conducting the binary search comprises: ordering the global plurality of components according to their respective attention weights, whereby an ordered sequence is obtained; and iterating: splitting the ordered sequence into a low part and a high part, passing the low part through the trained electronic neural network classifier to obtain a low part classification, setting the ordered sequence to the low part when the low part classification is positive, and setting the ordered sequence to the high part when the low part classification is not positive. Clause 26: The method of any of Clauses 23-25, wherein each component of the global plurality of components comprises a 128-pixel-by-128-pixel square portion of an image of the at least one image. Clause 27: The method of any of Clauses 23-25, wherein each component of the global plurality of components comprises a feature vector corresponding to a portion of an image of the at least one image. Clause 28: The method of any of Clauses 23-26, wherein the supra-image represents a biopsy. Clause 29: The method of any of Clauses 23-28, wherein each image of the at least one image comprises a whole-slide image. Clause 30: The method of any of Clauses 23-29, wherein the property comprises at least one pen marking. Clause 31: The method of any of Clauses 23-30, wherein the property comprises a dermatopathology property. Clause 32: The method of any of Clauses 23-31, wherein the dermatopathology property comprises one of: a presence of a malignancy, a presence of a specific grade of malignancy, or a presence of a category of risk. 
Clause 33: The method of any of Clauses 23-32, further comprising: removing the components of the global plurality of components having attention weights above the threshold attention weight from the supra-image, whereby a scrubbed supra-image is produced; including the scrubbed supra-image in a training corpus; and training a second electronic neural network classifier using the training corpus. Clause 34: At least one non-transitory computer readable medium comprising computer readable instructions that, when executed by at least one electronic processor, configure the at least one electronic processor to perform operations of any of Clauses 1-12 or 23-33. Clause 35: An electronic computer comprising at least one electronic processor communicatively coupled to electronic persistent memory comprising instructions that, when executed by the at least one processor, configure the at least one processor to perform operations of any of Clauses 1-12 or 23-33. Certain embodiments can be performed using a computer program or set of programs. The computer programs can exist in a variety of forms both active and inactive. For example, the computer programs can exist as software program(s) comprised of program instructions in source code, object code, executable code or other formats; firmware program(s), or hardware description language (HDL) files. Any of the above can be embodied on a transitory or non-transitory computer readable medium, which include storage devices and signals, in compressed or uncompressed form. Exemplary computer readable storage devices include conventional computer system RAM (random access memory), ROM (read-only memory), EPROM (erasable, programmable ROM), EEPROM (electrically erasable, programmable ROM), and magnetic or optical disks or tapes. While the invention has been described with reference to the exemplary embodiments thereof, those skilled in the art will be able to make various modifications to the described embodiments without departing from the true spirit and scope. The terms and descriptions used herein are set forth by way of illustration only and are not meant as limitations. In particular, although the method has been described by examples, the steps of the method can be performed in a different order than illustrated or simultaneously. Those skilled in the art will recognize that these and other variations are possible within the spirit and scope as defined in the following claims and their equivalents. Note that any of the following claims may be combined with any other of the following claims to the extent that antecedent bases for terms in such are clear. | 98,457 |
11861882 | DETAILED DESCRIPTION One or more currently preferred embodiments have been described by way of example. It will be apparent to persons skilled in the art that a number of variations and modifications can be made without departing from the scope of the invention as defined in the claims. The detailed description set forth below is intended as a description of various configurations of embodiments and is not intended to represent the only configurations in which the subject matter of this disclosure can be practiced. The appended drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a more thorough understanding of the subject matter of this disclosure. However, it will be clear and apparent that the subject matter of this disclosure is not limited to the specific details set forth herein and may be practiced without these details. In some instances, structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject matter of this disclosure. Overview In at least some embodiments, it would be advantageous to reduce the processing time and save computing resources associated with inefficient and inaccurate classification of e-commerce products resulting from incorrect use of available input data for building the classification model which may be based on manual partitioning or random partitioning of the data resulting in an erroneous classifier. Notably, in some cases, manual partitioning of data between the training and testing data sets is not only inaccurate and yields unpredictable performance results but it is also not a feasible approach when dealing with large datasets with potentially numerous duplicate or similar input data entries (e.g. image or text entries). Generally, in at least some implementations, there is disclosed herein systems and methods for automated product classifications with improved efficiency and accuracy by intelligently partitioning available source data sets for different purposes (e.g. training/testing/validation) as used to build the supervised machine learning models in the classifiers. An Example e-Commerce Platform FIG.1illustrates an example e-commerce platform100, according to one embodiment. The e-commerce platform100may be used to provide merchant products and services to customers. While the disclosure contemplates using the apparatus, system, and process to purchase products and services, for simplicity the description herein will refer to products. All references to products throughout this disclosure should also be understood to be references to products and/or services, including, for example, physical products, digital content (e.g., music, videos, games), software, tickets, subscriptions, services to be provided, and the like. While the disclosure throughout contemplates that a ‘merchant’ and a ‘customer’ may be more than individuals, for simplicity the description herein may generally refer to merchants and customers as such. All references to merchants and customers throughout this disclosure should also be understood to be references to groups of individuals, companies, corporations, computing entities, and the like, and may represent for-profit or not-for-profit exchange of products. 
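The Overview above attributes inefficient and inaccurate classifiers to duplicate or similar entries leaking between the training and testing partitions. As one generic illustration of duplicate-aware partitioning (not the specific partitioning approach disclosed later in this description), near-duplicate products can be assigned a shared group key, for example a hash of a normalized title or image fingerprint, and whole groups can then be assigned to a single partition using scikit-learn's GroupShuffleSplit; the function names and grouping key here are assumptions.

    from sklearn.model_selection import GroupShuffleSplit

    def partition_products(records, group_keys, test_size=0.2, seed=42):
        """Split product records into training and testing sets so that records sharing a
        group key (e.g., near-duplicate titles or images) never straddle the two sets."""
        splitter = GroupShuffleSplit(n_splits=1, test_size=test_size, random_state=seed)
        train_idx, test_idx = next(splitter.split(records, groups=group_keys))
        train = [records[i] for i in train_idx]
        test = [records[i] for i in test_idx]
        return train, test

The same idea extends to a three-way training/testing/validation split by applying a second group-aware split to the held-out portion.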
Further, while the disclosure throughout refers to ‘merchants’ and ‘customers’, and describes their roles as such, the e-commerce platform100should be understood to more generally support users in an e-commerce environment, and all references to merchants and customers throughout this disclosure should also be understood to be references to users, such as where a user is a merchant-user (e.g., a seller, retailer, wholesaler, or provider of products), a customer-user (e.g., a buyer, purchase agent, consumer, or user of products), a prospective user (e.g., a user browsing and not yet committed to a purchase, a user evaluating the e-commerce platform100for potential use in marketing and selling products, and the like), a service provider user (e.g., a shipping provider112, a financial provider, and the like), a company or corporate user (e.g., a company representative for purchase, sales, or use of products; an enterprise user; a customer relations or customer management agent, and the like), an information technology user, a computing entity user (e.g., a computing bot for purchase, sales, or use of products), and the like. Furthermore, it may be recognized that while a given user may act in a given role (e.g., as a merchant) and their associated device may be referred to accordingly (e.g., as a merchant device) in one context, that same individual may act in a different role in another context (e.g., as a customer) and that same or another associated device may be referred to accordingly (e.g., as a customer device). For example, an individual may be a merchant for one type of product (e.g., shoes), and a customer/consumer of other types of products (e.g., groceries). In another example, an individual may be both a consumer and a merchant of the same type of product. In a particular example, a merchant that trades in a particular category of goods may act as a customer for that same category of goods when they order from a wholesaler (the wholesaler acting as merchant). The e-commerce platform100provides merchants with online services/facilities to manage their business. The facilities described herein are shown implemented as part of the platform100but could also be configured separately from the platform100, in whole or in part, as stand-alone services. Furthermore, such facilities may, in some embodiments, may, additionally or alternatively, be provided by one or more providers/entities. In the example ofFIG.1, the facilities are deployed through a machine, service or engine that executes computer software, modules, program codes, and/or instructions on one or more processors which, as noted above, may be part of or external to the platform100. Merchants may utilize the e-commerce platform100for enabling or managing commerce with customers, such as by implementing an e-commerce experience with customers through an online store138, applications142A-B, channels110A-B, and/or through point of sale (POS) devices152in physical locations (e.g., a physical storefront or other location such as through a kiosk, terminal, reader, printer, 3D printer, and the like). 
A merchant may utilize the e-commerce platform100as a sole commerce presence with customers, or in conjunction with other merchant commerce facilities, such as through a physical store (e.g., ‘brick-and-mortar’ retail stores), a merchant off-platform website104(e.g., a commerce Internet website or other internet or web property or asset supported by or on behalf of the merchant separately from the e-commerce platform100), an application142B, and the like. However, even these ‘other’ merchant commerce facilities may be incorporated into or communicate with the e-commerce platform100, such as where POS devices152in a physical store of a merchant are linked into the e-commerce platform100, where a merchant off-platform website104is tied into the e-commerce platform100, such as, for example, through ‘buy buttons’ that link content from the merchant off platform website104to the online store138, or the like. The online store138may represent a multi-tenant facility comprising a plurality of virtual storefronts. In embodiments, merchants may configure and/or manage one or more storefronts in the online store138, such as, for example, through a merchant device102(e.g., computer, laptop computer, mobile computing device, and the like), and offer products to customers through a number of different channels110A-B (e.g., an online store138; an application142A-B; a physical storefront through a POS device152; an electronic marketplace, such, for example, through an electronic buy button integrated into a website or social media channel such as on a social network, social media page, social media messaging system; and/or the like). A merchant may sell across channels110A-B and then manage their sales through the e-commerce platform100, where channels110A may be provided as a facility or service internal or external to the e-commerce platform100. A merchant may, additionally or alternatively, sell in their physical retail store, at pop ups, through wholesale, over the phone, and the like, and then manage their sales through the e-commerce platform100. A merchant may employ all or any combination of these operational modalities. Notably, it may be that by employing a variety of and/or a particular combination of modalities, a merchant may improve the probability and/or volume of sales. Throughout this disclosure the terms online store138and storefront may be used synonymously to refer to a merchant's online e-commerce service offering through the e-commerce platform100, where an online store138may refer either to a collection of storefronts supported by the e-commerce platform100(e.g., for one or a plurality of merchants) or to an individual merchant's storefront (e.g., a merchant's online store). In some embodiments, a customer may interact with the platform100through a customer device150(e.g., computer, laptop computer, mobile computing device, or the like), a POS device152(e.g., retail device, kiosk, automated (self-service) checkout system, or the like), and/or any other commerce interface device known in the art. The e-commerce platform100may enable merchants to reach customers through the online store138, through applications142A-B, through POS devices152in physical locations (e.g., a merchant's storefront or elsewhere), to communicate with customers via electronic communication facility129, and/or the like so as to provide a system for reaching customers and facilitating merchant services for the real or virtual pathways available for reaching and interacting with customers. 
In some embodiments, and as described further herein, the e-commerce platform100may be implemented through a processing facility. Such a processing facility may include a processor and a memory. The processor may be a hardware processor. The memory may be and/or may include a non-transitory computer-readable medium. The memory may be and/or may include random access memory (RAM) and/or persisted storage (e.g., magnetic storage). The processing facility may store a set of instructions (e.g., in the memory) that, when executed, cause the e-commerce platform100to perform the e-commerce and support functions as described herein. The processing facility may be or may be a part of one or more of a server, client, network infrastructure, mobile computing platform, cloud computing platform, stationary computing platform, and/or some other computing platform, and may provide electronic connectivity and communications between and amongst the components of the e-commerce platform100, merchant devices102, payment gateways106, applications142A-B, channels110A-B, shipping providers112, customer devices150, point of sale devices152, etc. In some implementations, the processing facility may be or may include one or more such computing devices acting in concert. For example, it may be that a plurality of co-operating computing devices serves as/to provide the processing facility. The e-commerce platform100may be implemented as or using one or more of a cloud computing service, software as a service (SaaS), infrastructure as a service (IaaS), platform as a service (PaaS), desktop as a service (DaaS), managed software as a service (MSaaS), mobile backend as a service (MBaaS), information technology management as a service (ITMaaS), and/or the like. For example, it may be that the underlying software implementing the facilities described herein (e.g., the online store138) is provided as a service, and is centrally hosted (e.g., and then accessed by users via a web browser or other application, and/or through customer devices150, POS devices152, and/or the like). In some embodiments, elements of the e-commerce platform100may be implemented to operate and/or integrate with various other platforms and operating systems. In some embodiments, the facilities of the e-commerce platform100(e.g., the online store138) may serve content to a customer device150(using data134) such as, for example, through a network connected to the e-commerce platform100. For example, the online store138may serve or send content in response to requests for data134from the customer device150, where a browser (or other application) connects to the online store138through a network using a network communication protocol (e.g., an internet protocol). The content may be written in machine readable language and may include Hypertext Markup Language (HTML), template language, JavaScript™, and the like, and/or any combination thereof. In some embodiments, online store138may be or may include service instances that serve content to customer devices and allow customers to browse and purchase the various products available (e.g., add them to a cart, purchase through a buy-button, and the like). Merchants may also customize the look and feel of their website through a theme system, such as, for example, a theme system where merchants can select and change the look and feel of their online store138by changing their theme while having the same underlying product and business data shown within the online store's product information. 
It may be that themes can be further customized through a theme editor, a design interface that enables users to customize their website's design with flexibility. Additionally or alternatively, it may be that themes can, additionally or alternatively, be customized using theme-specific settings such as, for example, settings that may change aspects of a given theme, such as, for example, specific colors, fonts, and pre-built layout schemes. In some implementations, the online store may implement a content management system for website content. Merchants may employ such a content management system in authoring blog posts or static pages and publish them to their online store138, such as through blogs, articles, landing pages, and the like, as well as configure navigation menus. Merchants may upload images (e.g., for products), video, content, data, and the like to the e-commerce platform100, such as for storage by the system (e.g., as data134). In some embodiments, the e-commerce platform100may provide functions for manipulating such images and content such as, for example, functions for resizing images, associating an image with a product, adding and associating text with an image, adding an image for a new product variant, protecting images, and the like. As described herein, the e-commerce platform100may provide merchants with sales and marketing services for products through a number of different channels110A-B, including, for example, the online store138, applications142A-B, as well as through physical POS devices152as described herein. The e-commerce platform100may, additionally or alternatively, include business support services116, an administrator114, a warehouse management system, and the like associated with running an on-line business, such as, for example, one or more of providing a domain registration service118associated with their online store, payment services120for facilitating transactions with a customer, shipping services122for providing customer shipping options for purchased products, fulfillment services for managing inventory, risk and insurance services124associated with product protection and liability, merchant billing, and the like. Services116may be provided via the e-commerce platform100or in association with external facilities, such as through a payment gateway106for payment processing, shipping providers112for expediting the shipment of products, and the like. In some embodiments, the e-commerce platform100may be configured with shipping services122(e.g., through an e-commerce platform shipping facility or through a third-party shipping carrier), to provide various shipping-related information to merchants and/or their customers such as, for example, shipping label or rate information, real-time delivery updates, tracking, and/or the like. FIG.2depicts a non-limiting embodiment for a home page of an administrator114. The administrator114may be referred to as an administrative console and/or an administrator console. The administrator114may show information about daily tasks, a store's recent activity, and the next steps a merchant can take to build their business. In some embodiments, a merchant may log in to the administrator114via a merchant device102(e.g., a desktop computer or mobile device), and manage aspects of their online store138, such as, for example, viewing the online store's138recent visit or order activity, updating the online store's138catalog, managing orders, and/or the like. 
In some embodiments, the merchant may be able to access the different sections of the administrator114by using a sidebar, such as the one shown onFIG.2. Sections of the administrator114may include various interfaces for accessing and managing core aspects of a merchant's business, including orders, products, customers, available reports and discounts. The administrator114may, additionally or alternatively, include interfaces for managing sales channels for a store including the online store138, mobile application(s) made available to customers for accessing the store (Mobile App), POS devices, and/or a buy button. The administrator114may, additionally or alternatively, include interfaces for managing applications (apps) installed on the merchant's account; and settings applied to a merchant's online store138and account. A merchant may use a search bar to find products, pages, or other information in their store. More detailed information about commerce and visitors to a merchant's online store138may be viewed through reports or metrics. Reports may include, for example, acquisition reports, behavior reports, customer reports, finance reports, marketing reports, sales reports, product reports, and custom reports. The merchant may be able to view sales data for different channels110A-B from different periods of time (e.g., days, weeks, months, and the like), such as by using drop-down menus. An overview dashboard may also be provided for a merchant who wants a more detailed view of the store's sales and engagement data. An activity feed in the home metrics section may be provided to illustrate an overview of the activity on the merchant's account. For example, by clicking on a ‘view all recent activity’ dashboard button, the merchant may be able to see a longer feed of recent activity on their account. A home page may show notifications about the merchant's online store138, such as based on account status, growth, recent customer activity, order updates, and the like. Notifications may be provided to assist a merchant with navigating through workflows configured for the online store138, such as, for example, a payment workflow, an order fulfillment workflow, an order archiving workflow, a return workflow, and the like. The e-commerce platform100may provide for a communications facility129and associated merchant interface for providing electronic communications and marketing, such as utilizing an electronic messaging facility for collecting and analyzing communication interactions between merchants, customers, merchant devices102, customer devices150, POS devices152, and the like, to aggregate and analyze the communications, such as for increasing sale conversions, and the like. For instance, a customer may have a question related to a product, which may produce a dialog between the customer and the merchant (or an automated processor-based agent/chatbot representing the merchant), where the communications facility129is configured to provide automated responses to customer requests and/or provide recommendations to the merchant on how to respond such as, for example, to improve the probability of a sale. The e-commerce platform100may provide a financial facility120for secure financial transactions with customers, such as through a secure card server environment. 
The e-commerce platform100may store credit card information, such as in payment card industry data (PCI) environments (e.g., a card server), to reconcile financials, bill merchants, perform automated clearing house (ACH) transfers between the e-commerce platform100and a merchant's bank account, and the like. The financial facility120may also provide merchants and buyers with financial support, such as through the lending of capital (e.g., lending funds, cash advances, and the like) and provision of insurance. In some embodiments, online store138may support a number of independently administered storefronts and process a large volume of transactional data on a daily basis for a variety of products and services. Transactional data may include any customer information indicative of a customer, a customer account or transactions carried out by a customer such as. for example, contact information, billing information, shipping information, returns/refund information, discount/offer information, payment information, or online store events or information such as page views, product search information (search keywords, click-through events), product reviews, abandoned carts, and/or other transactional information associated with business through the e-commerce platform100. In some embodiments, the e-commerce platform100may store this data in a data facility134. Referring again toFIG.1, in some embodiments the e-commerce platform100may include a commerce management engine136such as may be configured to perform various workflows for task automation or content management related to products, inventory, customers, orders, suppliers, reports, financials, risk and fraud, and the like. In some embodiments, additional functionality may, additionally or alternatively, be provided through applications142A-B to enable greater flexibility and customization required for accommodating an ever-growing variety of online stores, POS devices, products, and/or services. Applications142A may be components of the e-commerce platform100whereas applications142B may be provided or hosted as a third-party service external to e-commerce platform100. The commerce management engine136may accommodate store-specific workflows and in some embodiments, may incorporate the administrator114and/or the online store138. Implementing functions as applications142A-B may enable the commerce management engine136to remain responsive and reduce or avoid service degradation or more serious infrastructure failures, and the like. Although isolating online store data can be important to maintaining data privacy between online stores138and merchants, there may be reasons for collecting and using cross-store data, such as, for example, with an order risk assessment system or a platform payment facility, both of which require information from multiple online stores138to perform well. In some embodiments, it may be preferable to move these components out of the commerce management engine136and into their own infrastructure within the e-commerce platform100. Platform payment facility120is an example of a component that utilizes data from the commerce management engine136but is implemented as a separate component or service. The platform payment facility120may allow customers interacting with online stores138to have their payment information stored safely by the commerce management engine136such that they only have to enter it once. 
When a customer visits a different online store138, even if they have never been there before, the platform payment facility120may recall their information to enable a more rapid and/or potentially less-error prone (e.g., through avoidance of possible mis-keying of their information if they needed to instead re-enter it) checkout. This may provide a cross-platform network effect, where the e-commerce platform100becomes more useful to its merchants and buyers as more merchants and buyers join, such as because there are more customers who checkout more often because of the ease of use with respect to customer purchases. To maximize the effect of this network, payment information for a given customer may be retrievable and made available globally across multiple online stores138. For functions that are not included within the commerce management engine136, applications142A-B provide a way to add features to the e-commerce platform100or individual online stores138. For example, applications142A-B may be able to access and modify data on a merchant's online store138, perform tasks through the administrator114, implement new flows for a merchant through a user interface (e.g., that is surfaced through extensions/API), and the like. Merchants may be enabled to discover and install applications142A-B through application search, recommendations, and support128. In some embodiments, the commerce management engine136, applications142A-B, and the administrator114may be developed to work together. For instance, application extension points may be built inside the commerce management engine136, accessed by applications142A and142B through the interfaces140B and140A to deliver additional functionality, and surfaced to the merchant in the user interface of the administrator114. In some embodiments, applications142A-B may deliver functionality to a merchant through the interface140A-B, such as where an application142A-B is able to surface transaction data to a merchant (e.g., App: “Engine, surface my app data in the Mobile App or administrator114”), and/or where the commerce management engine136is able to ask the application to perform work on demand (Engine: “App, give me a local tax calculation for this checkout”). Applications142A-B may be connected to the commerce management engine136through an interface140A-B (e.g., through REST (REpresentational State Transfer) and/or GraphQL APIs) to expose the functionality and/or data available through and within the commerce management engine136to the functionality of applications. For instance, the e-commerce platform100may provide API interfaces140A-B to applications142A-B which may connect to products and services external to the platform100. The flexibility offered through use of applications and APIs (e.g., as offered for application development) enable the e-commerce platform100to better accommodate new and unique needs of merchants or to address specific use cases without requiring constant change to the commerce management engine136. For instance, shipping services122may be integrated with the commerce management engine136through a shipping or carrier service API, thus enabling the e-commerce platform100to provide shipping service functionality without directly impacting code running in the commerce management engine136. Depending on the implementation, applications142A-B may utilize APIs to pull data on demand (e.g., customer creation events, product change events, or order cancelation events, etc.) or have the data pushed when updates occur. 
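By way of non-limiting illustration, the following sketch shows an application pulling event data on demand over such an API. The endpoint URL, access token and event names are hypothetical placeholders and are not defined by this disclosure; any HTTP client could be used in place of the requests library.

```python
import requests

API_BASE = "https://platform.example.com/api"   # hypothetical endpoint, for illustration only
ACCESS_TOKEN = "app-access-token"               # placeholder credential

def pull_recent_events(event_type: str, since_id: int = 0):
    """Poll a hypothetical REST interface (cf. interfaces 140A-B) for recent events
    such as product change or order cancellation events."""
    response = requests.get(
        f"{API_BASE}/events",
        params={"type": event_type, "since_id": since_id},
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # list of event records; the shape depends on the actual API

if __name__ == "__main__":
    # Example: an application pulling product change events on demand.
    for event in pull_recent_events("product/update"):
        print(event)
```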
A subscription model may be used to provide applications142A-B with events as they occur or to provide updates with respect to a changed state of the commerce management engine136. In some embodiments, when a change related to an update event subscription occurs, the commerce management engine136may post a request, such as to a predefined callback URL. The body of this request may contain a new state of the object and a description of the action or event. Update event subscriptions may be created manually, in the administrator facility114, or automatically (e.g., via the API140A-B). In some embodiments, update events may be queued and processed asynchronously from a state change that triggered them, which may produce an update event notification that is not distributed in real-time or near-real time. In some embodiments, the e-commerce platform100may provide one or more of application search, recommendation and support128. Application search, recommendation and support128may include developer products and tools to aid in the development of applications, an application dashboard (e.g., to provide developers with a development interface, to administrators for management of applications, to merchants for customization of applications, and the like), facilities for installing and providing permissions with respect to providing access to an application142A-B (e.g., for public access, such as where criteria must be met before being installed, or for private use by a merchant), application searching to make it easy for a merchant to search for applications142A-B that satisfy a need for their online store138, application recommendations to provide merchants with suggestions on how they can improve the user experience through their online store138, and the like. In some embodiments, applications142A-B may be assigned an application identifier (ID), such as for linking to an application (e.g., through an API), searching for an application, making application recommendations, and the like. Applications142A-B may be grouped roughly into three categories: customer-facing applications, merchant-facing applications, integration applications, and the like. Customer-facing applications142A-B may include an online store138or channels110A-B that are places where merchants can list products and have them purchased (e.g., the online store, applications for flash sales (e.g., merchant products or from opportunistic sales opportunities from third-party sources), a mobile store application, a social media channel, an application for providing wholesale purchasing, and the like). Merchant-facing applications142A-B may include applications that allow the merchant to administer their online store138(e.g., through applications related to the web or website or to mobile devices), run their business (e.g., through applications related to POS devices), to grow their business (e.g., through applications related to shipping (e.g., drop shipping), use of automated agents, use of process flow development and improvements), and the like. Integration applications may include applications that provide useful integrations that participate in the running of a business, such as shipping providers112and payment gateways106. As such, the e-commerce platform100can be configured to provide an online shopping experience through a flexible system architecture that enables merchants to connect with customers in a flexible and transparent manner. 
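A minimal sketch of the callback side of such an update event subscription is shown below, assuming a hypothetical JSON body containing an "action" field describing the event and an "object" field carrying the new state; the actual payload contract of the commerce management engine136is implementation specific.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class UpdateEventHandler(BaseHTTPRequestHandler):
    """Receives update-event callbacks posted to a predefined callback URL.
    The payload shape (new object state plus action description) is illustrative only."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # e.g. {"action": "order/cancelled", "object": {...new state of the object...}}
        action = payload.get("action")
        new_state = payload.get("object")
        print(f"received event {action}: {new_state}")
        # A real system may queue the event and process it asynchronously; acknowledge quickly here.
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), UpdateEventHandler).serve_forever()
```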
A typical customer experience may be better understood through an embodiment example purchase workflow, where the customer browses the merchant's products on a channel110A-B, adds what they intend to buy to their cart, proceeds to checkout, and pays for the content of their cart resulting in the creation of an order for the merchant. The merchant may then review and fulfill (or cancel) the order. The product is then delivered to the customer. If the customer is not satisfied, they might return the products to the merchant. In an example embodiment, a customer may browse a merchant's products through a number of different channels110A-B such as, for example, the merchant's online store138, a physical storefront through a POS device152; an electronic marketplace, through an electronic buy button integrated into a website or a social media channel). In some cases, channels110A-B may be modeled as applications142A-B. A merchandising component in the commerce management engine136may be configured for creating, and managing product listings (using product data objects or models for example) to allow merchants to describe what they want to sell and where they sell it. The association between a product listing and a channel may be modeled as a product publication and accessed by channel applications, such as via a product listing API. A product may have many attributes and/or characteristics, like size and color, and many variants that expand the available options into specific combinations of all the attributes, like a variant that is size extra-small and green, or a variant that is size large and blue. Products may have at least one variant (e.g., a “default variant”) created for a product without any options. To facilitate browsing and management, products may be grouped into collections, provided product identifiers (e.g., stock keeping unit (SKU)) and the like. Collections of products may be built by either manually categorizing products into one (e.g., a custom collection), by building rulesets for automatic classification (e.g., a smart collection), and the like. Product listings may include 2D images, 3D images or models, which may be viewed through a virtual or augmented reality interface, and the like. In some embodiments, a shopping cart object is used to store or keep track of the products that the customer intends to buy. The shopping cart object may be channel specific and can be composed of multiple cart line items, where each cart line item tracks the quantity for a particular product variant. Since adding a product to a cart does not imply any commitment from the customer or the merchant, and the expected lifespan of a cart may be in the order of minutes (not days), cart objects/data representing a cart may be persisted to an ephemeral data store. The customer then proceeds to checkout. A checkout object or page generated by the commerce management engine136may be configured to receive customer information to complete the order such as the customer's contact information, billing information and/or shipping details. If the customer inputs their contact information but does not proceed to payment, the e-commerce platform100may (e.g., via an abandoned checkout component) transmit a message to the customer device150to encourage the customer to complete the checkout. For those reasons, checkout objects can have much longer lifespans than cart objects (hours or even days) and may therefore be persisted. 
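The cart and line-item structure described above may be illustrated with a brief sketch; the field names and the in-memory storage are illustrative assumptions only, and a real implementation may persist carts to an ephemeral data store as noted.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CartLineItem:
    variant_id: str   # the specific product variant (e.g. size large, blue)
    quantity: int

@dataclass
class Cart:
    """Channel-specific, short-lived cart composed of multiple cart line items."""
    channel_id: str
    line_items: List[CartLineItem] = field(default_factory=list)

    def add(self, variant_id: str, quantity: int = 1) -> None:
        # Each line item tracks the quantity for a particular product variant.
        for item in self.line_items:
            if item.variant_id == variant_id:
                item.quantity += quantity
                return
        self.line_items.append(CartLineItem(variant_id, quantity))

# Usage: a customer adds two units of one variant and one unit of another.
cart = Cart(channel_id="online_store")
cart.add("tshirt-large-blue", 2)
cart.add("tshirt-small-green")
print(cart)
```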
Customers then pay for the content of their cart resulting in the creation of an order for the merchant. In some embodiments, the commerce management engine136may be configured to communicate with various payment gateways and services106(e.g., online payment systems, mobile payment systems, digital wallets, credit card gateways) via a payment processing component. The actual interactions with the payment gateways106may be provided through a card server environment. At the end of the checkout process, an order is created. An order is a contract of sale between the merchant and the customer where the merchant agrees to provide the goods and services listed on the order (e.g., order line items, shipping line items, and the like) and the customer agrees to provide payment (including taxes). Once an order is created, an order confirmation notification may be sent to the customer and an order placed notification sent to the merchant via a notification component. Inventory may be reserved when a payment processing job starts to avoid over-selling (e.g., merchants may control this behavior using an inventory policy or configuration for each variant). Inventory reservation may have a short time span (minutes) and may need to be fast and scalable to support flash sales or “drops”, which are events during which a discount, promotion or limited inventory of a product may be offered for sale for buyers in a particular location and/or for a particular (usually short) time. The reservation is released if the payment fails. When the payment succeeds, and an order is created, the reservation is converted into a permanent (long-term) inventory commitment allocated to a specific location. An inventory component of the commerce management engine136may record where variants are stocked, and may track quantities for variants that have inventory tracking enabled. It may decouple product variants (a customer-facing concept representing the template of a product listing) from inventory items (a merchant-facing concept that represents an item whose quantity and location is managed). An inventory level component may keep track of quantities that are available for sale, committed to an order or incoming from an inventory transfer component (e.g., from a vendor). The merchant may then review and fulfill (or cancel) the order. A review component of the commerce management engine136may implement a business process merchant's use to ensure orders are suitable for fulfillment before actually fulfilling them. Orders may be fraudulent, require verification (e.g., ID checking), have a payment method which requires the merchant to wait to make sure they will receive their funds, and the like. Risks and recommendations may be persisted in an order risk model. Order risks may be generated from a fraud detection tool, submitted by a third-party through an order risk API, and the like. Before proceeding to fulfillment, the merchant may need to capture the payment information (e.g., credit card information) or wait to receive it (e.g., via a bank transfer, check, and the like) before it marks the order as paid. The merchant may now prepare the products for delivery. In some embodiments, this business process may be implemented by a fulfillment component of the commerce management engine136. The fulfillment component may group the line items of the order into a logical fulfillment unit of work based on an inventory location and fulfillment service. 
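A simplified sketch of the reservation behaviour described above (reserve when payment processing starts, release on failure, convert to a long-term commitment on success, expire after a short time span) is given below; the class and method names, the time-to-live value and the single-location model are assumptions for illustration.

```python
import time

class InventoryLevel:
    """Sketch of short-lived inventory reservations for one variant at one location.
    Storage, concurrency and multi-location handling are simplified for illustration."""

    def __init__(self, available: int, reservation_ttl_s: int = 300):
        self.available = available
        self.committed = 0
        self.reservation_ttl_s = reservation_ttl_s
        self.reservations = {}  # order_id -> (quantity, expiry timestamp)

    def reserve(self, order_id: str, quantity: int) -> bool:
        self._expire()
        if quantity > self.available:
            return False                      # avoid over-selling
        self.available -= quantity
        self.reservations[order_id] = (quantity, time.time() + self.reservation_ttl_s)
        return True

    def on_payment_result(self, order_id: str, success: bool) -> None:
        quantity, _ = self.reservations.pop(order_id, (0, 0))
        if success:
            self.committed += quantity        # permanent commitment allocated to a location
        else:
            self.available += quantity        # release the reservation on payment failure

    def _expire(self) -> None:
        now = time.time()
        for order_id, (quantity, expiry) in list(self.reservations.items()):
            if expiry < now:
                del self.reservations[order_id]
                self.available += quantity
```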
The merchant may review, adjust the unit of work, and trigger the relevant fulfillment services, such as through a manual fulfillment service (e.g., at merchant managed locations) used when the merchant picks and packs the products in a box, purchase a shipping label and input its tracking number, or just mark the item as fulfilled. Alternatively, an API fulfillment service may trigger a third-party application or service to create a fulfillment record for a third-party fulfillment service. Other possibilities exist for fulfilling an order. If the customer is not satisfied, they may be able to return the product(s) to the merchant. The business process merchants may go through to “un-sell” an item may be implemented by a return component. Returns may consist of a variety of different actions, such as a restock, where the product that was sold actually comes back into the business and is sellable again; a refund, where the money that was collected from the customer is partially or fully returned; an accounting adjustment noting how much money was refunded (e.g., including if there was any restocking fees or goods that weren't returned and remain in the customer's hands); and the like. A return may represent a change to the contract of sale (e.g., the order), and where the e-commerce platform100may make the merchant aware of compliance issues with respect to legal obligations (e.g., with respect to taxes). In some embodiments, the e-commerce platform100may enable merchants to keep track of changes to the contract of sales over time, such as implemented through a sales model component (e.g., an append-only date-based ledger that records sale-related events that happened to an item). Engine300—Data Partitioning and Automated Classification System The functionality described herein may be used in e-commerce systems to provide improved customer or buyer experiences. The e-commerce platform100could implement the functionality for any of a variety of different applications, examples of which are described elsewhere herein.FIG.3illustrates the e-commerce platform100ofFIG.1but including an engine300. The engine300is an example of a computer-implemented classification system and engine that implements the functionality described herein for use by the e-commerce platform100, the customer device150and/or the merchant device102. The engine300, also referred to as a classification engine, utilizes a supervised machine learning model to classify received product data (e.g. images or text related to e-commerce products which may or may not be labelled) so as to associate it with one or more e-commerce products. The engine300is configured to optimize such a classifier (e.g. the classifier416inFIG.4) implemented using a supervised machine learning model by intelligently selecting and assigning training and testing data from a pool of available product data (e.g. product attributes) for the model. The classification may be displayed to one or more users interacting via one or more computing devices (e.g. merchant device102and/or customer device150and/or other native or browser application in communication with platform100) with the engine300. In non-limiting examples, the classification predictions may be in the form of a link to product metadata (e.g. websites) and associated attributes classified as belonging to the product. Although the engine300is illustrated as a distinct component of the e-commerce platform100inFIG.3, this is only an example. 
The engine300could also or instead be provided by another component residing within or external to the e-commerce platform100. In some embodiments, either or both of the applications142A-B provide an engine that implements the functionality described herein to make it available to customers and/or to merchants. Furthermore, in some embodiments, the commerce management engine136provides the aforementioned engine. However, the location of the engine300is implementation specific. In some implementations, the engine300is provided at least in part by an e-commerce platform (e.g. e-commerce platform100), either as a core function of the e-commerce platform or as an application or service supported by or communicating with the e-commerce platform. Alternatively, the engine300may be implemented as a stand-alone service to clients such as a customer device150or a merchant device102. In addition, at least a portion of such an engine could be implemented in the merchant device102and/or in the customer device150. For example, the customer device150could store and run an engine locally as a software application. As discussed in further detail below, the engine300could implement at least some of the functionality described herein. Although the embodiments described below may be implemented in association with an e-commerce platform, such as (but not limited to) the e-commerce platform100, the embodiments described below are not limited to e-commerce platforms and may be implemented in other computing devices. Each of the data partitioning model408and the classifier416may be implemented in software and may include instructions, logic rules, machine learning, artificial intelligence or combinations thereof stored on a memory (e.g. stored within data134on the e-commerce platform100or an external memory accessible by the engine300) and executed by one or more processors which, as noted above, may be part of or external to the e-commerce platform100to provide the functionality described herein. Referring toFIG.4, shown is an example classification engine300according to one embodiment, which includes various modules and data stores, for generating optimal data splitting of available data into multiple parts (e.g. testing data set and training data set) for building and testing of supervised classification model(s) used to classify e-commerce products into product categories for use by one or more computing devices. These modules and data stores include a data partitioning model408, an input text database402, an input image database404, an input product database406, a connectivity data graph409, a training set410, a testing set412, a classifier416and optionally in some aspects, a validation set413. The engine300may include additional computing modules or data stores in various embodiments to provide the implementations described herein. Additional computing modules and data stores may not have been shown inFIG.4to avoid undue complexity of the description. For example, a user computing device providing one or more portions of the source data such as the input text database402, input image database404and input product database406and the network through which such device communicates with the engine300are not shown inFIG.4. At a high level, a need for the classification engine300may arise in product classification (e.g. 
using image, text or other data formats) for electronic commerce applications, but automated classification of items such as e-commerce products into predetermined categories is a complex problem to solve. Therefore, it is preferable to include large training and testing samples to improve such a classification engine300that utilizes a machine learning (ML) based classifier416. Although the input data samples used for training and testing such data driven predictive supervised classification models, e.g. the classifier416, may come from different sources such as independent online merchants, there may still be very similar or identical data within the available source data samples, e.g. two drop shippers selling the same product and using similar images (from the same supplier). Simply splitting the source data at random into defined groups such as training, validation or test sets may therefore decrease the classification performance and accuracy. Notably, because of the similar or identical data that may be found in the source data, random splitting of the data may result in overfitting and a lack of generalization in the resulting trained model, causing decreased labelling accuracy of the classification model. The supervised machine learning model implemented by the classifier416may include, but is not limited to: a linear classifier, support vector machine (SVM), neural network, decision trees, k-nearest neighbor, random forest, and others as may be envisaged by a person or persons skilled in the art. Thus, in at least some aspects, the engine300is configured to intelligently split and assign the source data (e.g. data from the input text database402, input image database404, and input product database406) to various data groups for use by the classifier416, so as to reduce the likelihood of testing the model with the same data that was used to train it. Generally, in this disclosure, the engine300illustrated inFIGS.3and4dynamically partitions a sample input dataset into a training subset and a testing subset, based on the underlying subsets having distinct components or features, for use in the training and testing of a machine learning model that provides automated classifications of products (e.g. from image or text data). Conveniently, splitting a sample dataset to generate a training subset and a testing subset that minimally overlap in their components or features may help improve the training, testing and overall predictive performance of the classification model (e.g. the classifier416). For example, for e-commerce products, both image data (e.g. views of the product) and text data (e.g. a description of the item such as a product ID or ASIN, or a product title/description) may be collected, as well as an assigned label such as a product category (e.g. 'kids' clothing→shoes'; 'kids' clothing→tops'). This collected data may be used for training, validating, and/or testing a classification model. The goal of the classifier416may then be, given an input having both image and text portions (e.g. a new image and a text portion describing the product), to classify the input and predict its label, which is provided as an output classification418. The label thus also serves to categorize the product as belonging to a group of products, such as kids' clothing→tops.
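The effect of naive random splitting on such near-duplicate data may be illustrated with a toy example; the sample identifiers and content fingerprints below are illustrative placeholders used only to show how duplicates can leak from the training subset into the testing subset.

```python
import random

# Toy dataset: several listings share identical images/descriptions (e.g. drop shippers
# re-using supplier photos). Each sample is (sample_id, content_fingerprint, label).
samples = [
    ("a1", "img_hash_001", "shoes"), ("a2", "img_hash_001", "shoes"),   # duplicates
    ("b1", "img_hash_002", "tops"),  ("b2", "img_hash_002", "tops"),    # duplicates
    ("c1", "img_hash_003", "shoes"), ("d1", "img_hash_004", "tops"),
]

random.seed(0)
shuffled = random.sample(samples, k=len(samples))
train, test = shuffled[:4], shuffled[4:]          # naive random split

train_fingerprints = {fp for _, fp, _ in train}
leaked = [s for s in test if s[1] in train_fingerprints]
print(f"{len(leaked)} of {len(test)} test samples also appear (as duplicates) in training")
# Any non-zero count inflates measured accuracy: the model is tested on data it has already seen.
```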
Although the present disclosure provides in at least some aspects, a method of partitioning a given dataset into disjoint training and testing datasets for a supervised machine learning model used for an example application of automated product classification, such methods and systems described herein may also be applied in other applications where there is a need to identify highly similar content within a data set and prevent duplication (i.e. detecting copyright infringement) or detecting that an ecommerce store on the platform may be using content from another without permission. Such implementations may also be envisaged in the present disclosure. Referring now toFIG.6, the engine300as part of the e-commerce platform100, may be configured in an example computing system600to communicate with one or more other computing devices, including a user computing device (e.g. a customer device150) across a communication network602and instruct the customer device150to output the generated product classification as output classification418thereon. In the example, the customer device150may include a processor, memory and a display configured to display on a screen610the classification recommendations. Such classifications of product related feature data (e.g. product text or product image) may be displayed in a first screen portion614along with user interface controls to accept616or to deny618the received classification recommendation. In at least some aspects, once the customer selects one of the options to accept or deny the classification recommendations displayed thereon, such feedback may be provided back to the engine300as a parameter to tweak or otherwise improve the classifier416and/or data partitioning model408based on the received response from the customer device150. Referring back toFIG.4, to prepare the input source data for building the model, e.g., the classifier416, the data partitioning model408operates as a model optimizer which is configured, in at least some embodiments, to provide an improved method for automatically partitioning available input source data (e.g. known data collected from various sources including input text database402, input image database404, and input product database406) into multiple group allocations such as the training, testing and/or validation data thereby to optimize performance of the classification. The training data may be stored in the training set410, the testing data may be stored in the testing set412and the validation data (if provided) may be stored in the validation set413. The input text database402may be configured to store product information in textual format, such as product description, product websites, product information, product reviews, product categorization, product title, associated product names, product compatibility information, merchant information, manufacturer information, shipment information, types of payments acceptable for merchant, and other textual metadata defining e-commerce products as may be envisaged by a person or persons skilled in the art. The input image database404may be configured to store one or more digital images of objects associated with e-commerce products, such as views of a merchandise item, associated merchandise items for sale, logo or trademark for a seller or manufacturer, images of packaging, images of use of product in various environments, etc. 
The input product database406may include digital images and/or text information that is linked to, labelled or otherwise mapped to the associated e-commerce product(s). For example, this may include a textual description and/or digital images for products which include an identification of one or more products (either a direct identification or a link to such information) to which they belong, such as a product identification number or product name that helps customers locate products online. As may be envisaged, in some embodiments, there may be overlapping content between the information provided in the input text database402, the input image database404, and the input product database406. In at least some embodiments, the data partitioning model408uses a graph based approach to analyze the source input data for the model as retrieved from the input text database402, the input image database404, and the input product database406, and builds or generates a connectivity data graph409. In the connectivity data graph409(an example of which is shown inFIGS.5A and5B), each node represents an attribute of at least one corresponding item or product (e.g. product image and/or product text), and two nodes in the graph are connected by an edge if the data partitioning model408considers them sufficiently similar to one another. A pair of nodes in the connectivity data graph409may be considered sufficiently similar if they are exactly the same or if their distance measurement (e.g. calculated using a Euclidean distance) is below a certain defined threshold. The comparison of two (or more) samples or data points in a sample input dataset (e.g. any combination of data obtained from the input text database402, input image database404and/or input product database406) may be achieved by using key words, string hashes, patterns, or other types of identifiers to determine the feature values used in determining a degree of match. The requisite degree of similarity for the co-grouping of data points may be based on a minimum similarity score and/or an identical match. For example, as shown inFIG.5A, the text data in a text sample1may be compared to the textual data in the product sample1(e.g. containing a combination of text and/or image data linked to the product), and the images in the image sample1are compared to the images in the product sample1. In the example embodiment ofFIG.5A, based on the similarity of the textual content in the nodes indicating similar text (or a degree of likeness), an edge is drawn between the text sample1and the product sample1; likewise, based on the similarity of the digital image content in the nodes indicating similar images (or a degree of likeness), an edge is drawn between the product sample1and the image sample1. Once the similar nodes are linked via edges, the data partitioning model408may define groups of linked nodes (e.g. a first group501, a second group502and a third group503), and the underlying data for each defined group (which is disjoint relative to the other groups) is assigned to one of the training/testing/validation data sets for the classification model, as shown in the training set410, the testing set412and the validation set413. An example of the process performed by the data partitioning model408of determining connected components or nodes is shown inFIG.5A, and the assignment of each set of connected components (e.g. forming a cluster) to different data sets (e.g. a first training set504and a first testing set505) is shown inFIG.5B.
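A minimal sketch of the edge criterion described above (an exact match on a shared identifier such as a string hash, or a Euclidean distance below a defined threshold) is shown below; the attribute names, feature vectors and threshold value are illustrative assumptions.

```python
import math

SIMILARITY_THRESHOLD = 0.25   # illustrative value; tuned per deployment

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def sufficiently_similar(node_a, node_b, threshold=SIMILARITY_THRESHOLD):
    """Two nodes are linked if a shared attribute is identical (e.g. the same string hash)
    or if the distance between their numeric feature vectors falls below the threshold."""
    if node_a.get("text_hash") and node_a.get("text_hash") == node_b.get("text_hash"):
        return True
    fa, fb = node_a.get("image_features"), node_b.get("image_features")
    return fa is not None and fb is not None and euclidean(fa, fb) < threshold

# Example nodes corresponding to a text sample, an image sample and a product sample.
text_sample_1 = {"text_hash": "h_shoe_desc", "image_features": None}
product_sample_1 = {"text_hash": "h_shoe_desc", "image_features": [0.10, 0.90, 0.30]}
image_sample_1 = {"text_hash": None, "image_features": [0.11, 0.88, 0.31]}

print(sufficiently_similar(text_sample_1, product_sample_1))    # True: identical text hash
print(sufficiently_similar(image_sample_1, product_sample_1))   # True: image distance below threshold
```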
More specifically, and referring toFIGS.4and5A-5B, the data partitioning model408applies graph mining to the collected source data sets of product images retrieved from the input image database404, product text data retrieved from the input text database402and product data retrieved from the input product database406to generate the connectivity data graph409such that each node in the generated graph relates to an input data sample having at least one attribute or feature component describing a product feature (e.g. each product attribute may include images illustrating views of the product, a text description, a text title, a product price, a product identification, etc.). The values of the features in the respective input data samples (e.g. the actual text for the feature relating to the text description) are used to determine a degree of similarity between them. Additionally, the data partitioning model408is configured to determine whether there is more than a defined degree of similarity or likeness between each pair of nodes and to generate and visualize a link (or edge) between the pair of nodes in the connectivity data graph409if the criterion is satisfied. The connectivity data graph409may be represented as G=(U, V, E), where V is a set whose elements are the vertices (or nodes) and E is the set of paired vertices whose elements are referred to as edges (or links). U represents the cluster to which the graph belongs, and thus in the expressions below, U1 and U2 represent subsets of the initial graph G. The endpoints of an edge are thus defined by a pair of vertices. In the current example, the graph is split into two partitions, e.g. two graphs, G1=(U1, V1, E1) and G2=(U2, V2, E2), such that no edges cross the partitions. Put another way, for all u1∈U1 and v2∈V2, (u1, v2)∉E, and for all u2∈U2 and v1∈V1, (u2, v1)∉E. In at least some aspects, the graph based approach to classifying and assigning the input data sets into subgroups of testing and training data sets (and, if applicable, validation data sets), an example of which is shown inFIG.5B, is conveniently advantageous as it allows for an expressive way of visualizing and connecting relationships between the features in the data samples and thus provides improved classification accuracy. For example, a first node (e.g. text sample1) on the connectivity data graph409, corresponding to an input data sample having text and/or image values describing a particular feature or attribute of the product (e.g. image(s) of the product), is chosen to measure a degree of connectivity or similarity between the individual data points in a sample input dataset. A graph mining algorithm performed by the data partitioning model408processes the dataset such that each of the individual data samples in the input dataset (corresponding to other nodes in the connectivity data graph409) is assigned a distance vector or score of similarity to the first node's chosen features (e.g. how similar another node's product image is to that of the first node). That is, a comparison is made between the first node's features and each of the other nodes' features in the same dimension (e.g. text is compared to text; image to image) via a distance measurement to determine a similarity score for each of the other nodes in the connectivity data graph409relative to the first node. If sufficient similarity exists, the similar nodes are connected using an edge between them to form a graph of node components showing the connectivity, an example of which is shown inFIG.5A.
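As a non-limiting illustration of the partition condition just stated, the following sketch checks that no edge of E connects the two vertex subsets; the integer node identifiers stand in for data samples and are assumptions for illustration.

```python
def is_valid_partition(edges, u1_nodes, u2_nodes):
    """Check that no edge of the original graph G connects the two partitions
    G1 and G2, i.e. for all u1 in U1 and v2 in V2, (u1, v2) is not in E,
    and for all u2 in U2 and v1 in V1, (u2, v1) is not in E."""
    u1, u2 = set(u1_nodes), set(u2_nodes)
    for a, b in edges:
        if (a in u1 and b in u2) or (a in u2 and b in u1):
            return False    # an edge crosses the partition boundary
    return True

# Example: nodes 0-2 form one connected component, nodes 3-4 another.
edges = [(0, 1), (1, 2), (3, 4)]
print(is_valid_partition(edges, [0, 1, 2], [3, 4]))   # True: no edge crosses the partitions
print(is_valid_partition(edges, [0, 1], [2, 3, 4]))   # False: edge (1, 2) crosses
```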
This process is repeated for each of the nodes in the graph to determine the similarity with other nodes to determine connected components/nodes. Thus, an initial step performed by the data partitioning model408is identification of connected components. This may occur by using a flooding algorithm whereby an arbitrary node is chosen and iterated by adding its neighbours to a working notion of the component and so on for their neighbours etc. up until they reach some stopping condition (e.g. max iterations). By way of further example such operation as performed by the data partitioning model408to search and identify connected components within a set of product related nodes may include: a first particular node broadcasting a similarity query to its immediate neighbour nodes (e.g. neighbours being one hop away). For each of the one hop neighbours to the first particular node, if it is determined in response to the query, that there is a desired degree of similarity between such pair of nodes, such as by calculating a similarity distance between them as described herein (e.g. two nodes sharing a common product image or sharing common text describing a product), then an edge is drawn between the first particular node and each of the identified similar nodes one hop away. Each of the neighbour nodes one hop away from the first particular node, which may be referred to as secondary nodes, are then configured to broadcast another similarity query respectively to nodes one hop away from the secondary nodes (e.g. may be referred to as tertiary nodes) but excluding nodes previously queried (e.g. excluding the first particular node). Similar to the prior iteration, if the similarity query reveals more than the defined degree of similarity between a pair of secondary and tertiary nodes (e.g. commonality between text and/or image features of the product), then an edge is drawn between the similar node pairs. This process may be repeated in a similar manner, until a defined completion threshold is triggered such as but not limited to: a set number of iterations is reached, a desired number of connected node components located, or a defined number of nodes queried, etc. An example implementation of such similarity query flooding operation is further discussed with reference to operation700inFIG.7and more particularly with reference to step708ofFIG.7. Once the entire sample dataset is graphed, the graph mining algorithm will assign each data point to one of two groups—one as the training subset and the other as the testing subset. In other cases more than two groups of data may be desired for building the classifier416(e.g. a validation subset in addition to the training and the testing subset). Each data point in the sample input data set can only be assigned to one group, and each group is discrete. In some cases, after building an initial graph and identifying its connected components in the connectivity data graph409, there may still be a need for the data partitioning model408to group connected components initially assigned to different groups (e.g. if there are more than two sets of connected components) with other connected components to form training/test data sets of sufficient size and characteristics. FIG.5A, shows an example initial graph of connected nodes which have been partitioned into a first group501, a second group502and a third group503based on determining groups of directly connected components. 
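The flooding-style search for connected components described above may be sketched as follows, with the adjacency mapping standing in for the edges of the connectivity data graph409and max_iterations standing in for the defined completion threshold; names and values are assumptions for illustration.

```python
from collections import deque

def connected_components(nodes, adjacency, max_iterations=10_000):
    """Flooding-style identification of connected components: choose an arbitrary node,
    repeatedly add the neighbours of already-visited nodes (one hop at a time), and
    stop early if a completion threshold (here, max_iterations) is reached."""
    unvisited = set(nodes)
    components = []
    iterations = 0
    while unvisited and iterations < max_iterations:
        start = next(iter(unvisited))            # arbitrary seed node
        component, queue = set(), deque([start])
        while queue and iterations < max_iterations:
            iterations += 1
            node = queue.popleft()
            if node in component:
                continue
            component.add(node)
            unvisited.discard(node)
            # broadcast the "similarity query" one hop outwards, skipping nodes already visited
            queue.extend(n for n in adjacency.get(node, ()) if n not in component)
        components.append(component)
    return components

adjacency = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3], 5: []}
print(connected_components(range(6), adjacency))   # three components: {0,1,2}, {3,4}, {5} (order may vary)
```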
In the example illustrated inFIG.5A, there is no edge between the nodes in the second group502and any of the connected node components in the first group501or the third group503(and thus no sufficient direct similarity between such pairs of nodes). However, given that in the current example it may be desirable to have testing and training data sets of certain sizes that are not met by any one of the groups alone, it may be desirable to combine the second group502with the first group501to reach a first training set504that is of a desirable size. The size of each subset contained in a group or cluster can be configured (e.g. adapted based on model performance). Preferably, in at least some aspects, the training subset represents a larger proportion of the total size of the sample dataset than the testing subset. In other aspects, although the testing and training data set sizes may not have been identified, it may be defined, in one example, that one goal of the data partitioning model408is to generate two distinct groups, e.g. one for the training dataset (e.g. the training set410) and another for the testing dataset (the testing set412). In this scenario, and referring to the example initial grouping inFIG.5A, other methods may be applied to determine how to combine the three groups into two for the training set410and the testing set412. In one implementation, a clustering technique may be applied to further group the sets of connected components from the initial graph shown inFIG.5Ainto the defined number of groups. Notably, a clustering technique may be applied to group the connected components in the first group501, the second group502and the third group503into the desired two groups. This may be performed by the data partitioning model408computing the centroid of each cluster or group after determining the sets of connected components (e.g. as shown inFIG.5A). Thus, in the example ofFIG.5A, this process may include computing the centroid of each of the first group501, the second group502and the third group503. Once computed, a distance measure may be calculated between each pair of centroids to determine how to reduce the groupings. Thus, one or more of the groups may be re-assigned to its next closest centroid. In the case ofFIG.5A, the second group502is re-assigned to the first group501because the centroid measurement between those two groups is closer than the centroid measurement between the second group502and the third group503. This process may be repeated until the desired number of groups is reached. The resulting example graph506, being an example of the connectivity data graph409generated by the data partitioning model408, is shown inFIG.5B, having a first revised group, the first training set504, and a second revised group, the first testing set505, corresponding to the data in the training set410and the testing set412, respectively. The grouping of the data points is determined primarily on the basis of ensuring that clusters or groups of data points (i.e. data that are identical, highly similar or otherwise connected to each other by relationships in at least one chosen component/feature) are co-segregated into the same group. For example, two product listings that feature the same image of an iPhone™ 11 would be grouped together either in the training subset (e.g. the first training set504) or the testing subset (e.g. the first testing set505), but not separated into the two different groups. The same would apply for two product listings that feature the exact same product description.
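One possible form of the centroid-based regrouping described above is sketched below; it greedily merges the pair of groups with the closest centroids until the desired number of groups remains, and the coordinates shown are illustrative feature vectors rather than actual product data.

```python
def centroid(points):
    dim = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dim)]

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def merge_to_k_groups(groups, k):
    """Repeatedly merge the pair of groups whose centroids are closest until only
    k groups remain. Each group is a list of feature vectors (one per node)."""
    groups = [list(g) for g in groups]
    while len(groups) > k:
        cents = [centroid(g) for g in groups]
        best = None
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                d = squared_distance(cents[i], cents[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        groups[i].extend(groups.pop(j))   # re-assign the group to its closest neighbour
    return groups

# Three connected components (cf. groups 501, 502, 503) reduced to two sets.
first  = [[0.0, 0.0], [0.2, 0.1]]
second = [[0.4, 0.3]]
third  = [[3.0, 3.0], [3.1, 2.9]]
train_like, test_like = merge_to_k_groups([first, second, third], k=2)
print(train_like)   # the first and second groups merged (closest centroids)
print(test_like)    # the third group kept separate
```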
In this way, the two data groups for the training/testing are ensured to be mutually exclusive of each other along the chosen component(s)/feature(s). In other words, the resulting training and testing sets are disjoint sets having no common elements. Once the desired training set410and testing set412(and, if applicable, additional data sets such as the validation set413) have been generated, they are fed into the classifier416. Generally, the classifier416is configured to use the training set410, which includes a set of inputs and correct outputs (e.g. input product information and output classifications), to analyze and train the model in the classifier416to learn over time the model configuration such as the rules in the model and associated hyper-parameters. After the model is built, the testing set412is used by the classifier416to validate that the model can make accurate predictions on the classifications of products. As noted, in some cases, a validation set413may also be generated, whereby the validation data injects new data that has not previously been evaluated on the model, in order to evaluate how well the trained model performs on the new data. The validation set413may further be used to optimize hyper-parameters on the model. Once trained, tested and in some implementations validated, the generated classifier416may be configured to receive new e-commerce related data from the new data set414(e.g. images, text, and combinations thereof which may be labelled with product associations or unlabelled) having attributes or features of e-commerce products (e.g. product identifier, product name, product images, product categories, brand name, etc.) and generate one or more associated output classification(s)418based on the generated supervised learning model in the classifier416. FIG.7illustrates an example flowchart of operations700which may be performed by the engine300for partitioning data used for generating supervised learning models (e.g. the classifier416), on a computing device, such as the e-commerce platform100or on another computing device such as the merchant devices102or customer device150. The operations700are further described below with reference toFIGS.1-6. The computing device for carrying out the operations700may comprise a processor configured to communicate with a display to provide a graphical user interface (GUI), where the computing device has a network interface to receive various features of product information (e.g. text, images, and other product information as may be stored in input text database402, input image database404and input product database406) and data partitioning preferences, and wherein instructions (stored in a non-transient storage device), when executed by the processor, configure the computing device to perform operations such as operations700. The data partitioning preferences used by the engine300may include defined information such as the number of desired groups, types of such groups (e.g. testing or validation or training, etc.) and parameters of desired groups of data for the model. An example includes defining that two groups of data are needed for a testing dataset and a training dataset. In general, the description below will refer to the method of operations700being carried out by a processor.
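As a non-limiting illustration of how the disjoint groups may ultimately be consumed, the following sketch fits a small classifier on the training group and scores it on the testing group. The choice of scikit-learn's MLPClassifier and its parameters is purely illustrative; the classifier416may be any supervised model:

```python
from sklearn.neural_network import MLPClassifier

def train_and_evaluate(train_X, train_y, test_X, test_y):
    """Fit on the training group, then report accuracy on the disjoint testing group."""
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    model.fit(train_X, train_y)
    return model, model.score(test_X, test_y)   # held-out classification accuracy
```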
In at least some aspects of the operation700, the supervised learning model is a classification model including a neural network for classifying the e-commerce products containing at least image content (and in some aspects, text) into a set of labelled categories of products by training the classification model based on the training dataset and testing a performance of the classification model using the testing dataset. In operation702, the processor receives an input dataset for e-commerce products, each sample in the dataset containing a set of attributes and associated values for each product, the attributes containing at least an image for each product. Notably, the data set received is for building the supervised machine learning model (e.g. the classifier416), and includes features or attributes of a number of e-commerce products. The features/attributes may be in the form of images and/or text characterizing each e-commerce product. The attributes may include but are not limited to: product description, product identification, brand identification, financial statistics related to the products, product images, and customer images. The dataset received may include at least some data (e.g. product images, images of product packaging, images of use of product in environment, images of other similar products, images of various views, etc.) in the form of images. The input dataset (which may be stored in one or more of the input text database402, input image database404, and input product database406) may include attribute data that is unlabelled, or that is labelled with (or otherwise associated with) one or more e-commerce products; such a mapping may be stored in the input product database (e.g. including images of a product and product identification). Following operation702, operation704includes representing each said sample in the dataset as a node on a graph (e.g. the connectivity data graph409) with the associated values for that sample and associated with a particular product from the e-commerce products to provide a graph of nodes for the dataset. An example of such a connected graph of nodes is shown inFIGS.5A and5B. In one implementation, the processor is configured for representing each sample in the data set using a graph mining algorithm whereby each node is plotted on a graph and corresponds to a particular e-commerce product's attributes represented as text and/or image (e.g. product price, product ID, product brand, product images showing various views of one product and/or related products, images relating to product vendor, etc.) to be used as the basis for splitting the sample dataset. Following operation704, operation706includes measuring a relative similarity distance between each set of two nodes on the graph of nodes based on comparing at least image values for the attributes. Thus, in at least some aspects, at least images in the nodes are compared to one another in order to determine a similarity distance. In one implementation, this operation includes, for each pair of nodes, performing a comparison of the attributes (e.g. images and/or text) of one node to those of the other node to determine the relative similarity distance (e.g. Euclidean distance) between them. For example, comparing the image attribute values of a first node to the image attribute values of a second node determines how similar the nodes are to one another, based on a calculated distance vector.
In one aspect, measuring the relative similarity distance between two nodes in operation706may include representing each node as a multi-dimensional vector, at least one dimension representing each of the associated attributes containing text and image samples. Thus, a dimension may include, text values for an attribute of a product, image values for an attribute of a product, or a combination of image and text values for an attribute of a product. This multi-dimensional vector allows distance measurements to be performed between two nodes, e.g. comparing the text values in a first dimension of a first node to the text values in a first dimension of a second node to calculate a first distance measurement and comparing the image values in the second dimension of a first node to the image values in the second dimension of a second node to calculate a second distance measurement and averaging the first and second distance measurement to provide the relative similarity distance across multiple dimensions of attributes. Optionally at operation706, measuring the relative similarity distance between two nodes each containing associated image data for the attributes, further comprises: performing a hashing conversion to each image data in the respective nodes to generate a hash value for each node and calculating a Hamming distance (or other distance measurements as envisaged) between the hash values of the images as the relative similarity distance, the image data for two nodes being considered similar (or sufficiently similar such as to have a high similarity score that exceeds a defined value) if the relative similarity distance provided by the Hamming distance or other distance measurement is below a defined threshold. Optionally at operation706, measuring the relative similarity distance between two nodes each containing associated text data for the attributes (e.g. product identification information, text description of the product), further comprises converting the text data to a vector including a frequency of each word (e.g. frequency of words in a given passage corresponding to product description) and calculating a distance between vectors for the text data, the text data for two nodes being considered similar if the relative similarity distance is below a defined threshold and thus resulting in a high similarity score (above a defined value for the score). Other methods for converting the text into vector representations for distance calculations may be envisaged (e.g. determining a context or intent of the text based on a defined set of intents). In yet other aspects, the text may be compared directly from one node to another node after having been categorized in a relevant category (e.g. brand name). In one aspect of operation706, the similarity distance is calculated between the vectors of the feature values for two nodes. Such distance may be determined, for example, using one or more of: a Euclidean distance, Minkowski distance, Manhattan distance, Hamming distance, and Cosine distance. Following operation706, at operation708, determining for each set of two nodes whether they are related if the relative similarity distance between them is below a defined threshold, and if related, generating an edge between them to provide connected nodes on the graph. Put another way, the processor is configured to determine whether two nodes in the graph are similar to one another based on the distance calculated at operation706being below a defined threshold. 
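The two optional distance measures described for operation706may be sketched as follows, assuming each node carries a precomputed integer image hash (for example a 64-bit perceptual hash produced elsewhere) and a raw description string; the attribute names and the equal-weight averaging of the two dimensions are illustrative assumptions rather than requirements:

```python
from collections import Counter
import math

def hamming_distance(hash_a: int, hash_b: int) -> int:
    """Number of differing bits between two integer image hashes."""
    return bin(hash_a ^ hash_b).count("1")

def text_distance(text_a: str, text_b: str) -> float:
    """Euclidean distance between word-frequency vectors of two descriptions."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    return math.sqrt(sum((va[w] - vb[w]) ** 2 for w in set(va) | set(vb)))

def relative_similarity_distance(node_a: dict, node_b: dict) -> float:
    """Average the image-dimension and text-dimension distances for two nodes.

    Nodes are assumed to be dicts with hypothetical keys `image_hash` (int)
    and `description` (str); two nodes are treated as related when this value
    falls below a defined threshold.
    """
    d_image = hamming_distance(node_a["image_hash"], node_b["image_hash"])
    d_text = text_distance(node_a["description"], node_b["description"])
    return (d_image + d_text) / 2.0
```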
If two nodes are considered sufficiently similar, the processor (e.g. the data partitioning model408which generates the connectivity data graph409) is configured to draw an edge between them to show that there is an overlap of information between them. The similarity distance may provide a numerical measure of how different or similar two data objects represented as nodes are to one another and may range from 0 (objects are alike) to infinity (objects are different). Therefore, the smaller the relative similarity distance is between two nodes, the larger the similarity score for them. Following operation708, at operation710, the processor is configured for assigning each node on the graph of nodes to a first group or a second group, a particular node assigned to the first group if connected to at least one other node in the first group and assigned to the second group if no connection to another node in the first group to generate two disjoint groups such that the nodes grouped together have a shortest relative similarity distance with each other. An example of grouping connected nodes is shown inFIGS.5A and5B. The operation710may thus include segregating the input data points (e.g. including any combination of text and image attributes for products) for the input data set into one of two defined groups (a training data set or testing data set) based on co-grouping of data points that are connected or linked to each other based on having similarities in the feature(s). In some aspects as shown inFIG.4, the engine300may be configured to generate three or more data groupings, e.g. for a training set410, a testing set412and a validation set413and thus, the clusters of connected node components are assigned accordingly. Optionally, in at least some aspects of operation710, in order to assign each node to a first group or a second group of connected components, the processor may be configured to generate the connectivity data graph409in a number of iterations since initially, the number of groupings exceeds a desired number of groups for the data sets to build the classifier416. Thus, depending on parameters defined for the data sets and the number of data sets needed for building the machine learning model in the classifier416, e.g. training data set and testing data set, an additional step may occur of grouping two sets of internally connected node components but externally disconnected to each other (e.g. seeFIG.5Athe first group501and the second group502are grouped together to form the first training set504). In one aspect, the consolidation of groups may be performed because it is defined that the first training set504or first testing set505has a certain size. In other aspects, such additional grouping may be performed by determining that a centroid for the first group501is closest in distance to the centroid for the second group502as compared to the centroid in the third group503. In operation712following operation710, it is defined that the first group is used as a training dataset (e.g. training set410) to train a supervised learning model (e.g. classifier416) and the second group is used as a testing set (e.g. testing set412) to test the model, the model for subsequent use in predicting a classification such as output classifications418for a new e-commerce product as received in the new data set414based on at least an image input in the new data set414. 
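Putting operations706to712together, one possible end-to-end sketch is shown below. It uses the networkx library for the graph bookkeeping, assumes hashable node identifiers and a hypothetical similarity_distance helper such as the one sketched earlier, and assigns whole connected components to the larger (training) group first until a configurable size target is met; the exact assignment heuristic is an assumption, not a requirement of the operations700:

```python
import networkx as nx

def partition_train_test(nodes, similarity_distance, threshold, train_fraction=0.8):
    """Split nodes into two disjoint groups, never splitting a connected component."""
    graph = nx.Graph()
    graph.add_nodes_from(nodes)
    for i, node_a in enumerate(nodes):             # operations 706/708: edges below threshold
        for node_b in nodes[i + 1:]:
            if similarity_distance(node_a, node_b) < threshold:
                graph.add_edge(node_a, node_b)
    components = sorted(nx.connected_components(graph), key=len, reverse=True)
    target = train_fraction * len(nodes)
    train, test = set(), set()
    for component in components:                   # operation 710: whole components only
        (train if len(train) < target else test).update(component)
    assert train.isdisjoint(test)                  # the groups share no common elements
    return train, test                             # operation 712: training / testing sets
```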
Optionally, in some aspects, the operations700may include the processor configured for calculating the similarity distance by grouping connected nodes extending between more than two nodes to form a grouped connection whereby the relative similarity distance being calculated between the grouped connection and at least one of: other nodes in the graph of nodes. That is, in some cases if new data samples are received at the engine300into the data partitioning model408, the nodes corresponding to such new data samples may be compared to the existing grouped connection of nodes (e.g. first group501previously grouped may be compared to each newly added node). Alternately, if the newly added nodes have formed a set of connected components (e.g., the second group502), the distance measurement may be computed between the previously grouped connection (e.g. the first group501) and the other grouped connection of nodes (e.g. the second group502) to determine the distance. Optionally, in some aspects of the operation700, the processor may be configured for identifying connected nodes in operation708by identifying an arbitrary first node in the connectivity data graph409initially generated at operation704and iteratively determining whether the neighbour nodes located closest to it are connected to the first node (e.g. by calculating the similarity distance) and if connected, drawing or generating an edge therebetween. The processor may be configured to repeat this process for remaining nodes in the graph to determine connectivity with other nodes, but may be stopped depending on the maximum data set size defined for the first or the second group (e.g. training set410or the testing set412). That is, a stopping point may be once the max number of nodes for the first group have been reached then no further nodes need to be examined for similarity. In one or more examples, the functions described may be implemented in hardware, software, firmware, or combinations thereof. If implemented in software, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including such media as may facilitate transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media, which is non-transitory or (2) a communication medium such as a signal or carrier wave. Data storage media may be available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium. By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. 
For example, if instructions are transmitted from a website, server, or other remote source using wired or wireless technologies, such are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media. Instructions may be executed by one or more processors, such as one or more general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), digital signal processors (DSPs), or other similar integrated or discrete logic circuitry. The term “processor,” as used herein may refer to any of the foregoing examples or any other suitable structure to implement the described techniques. In addition, in some aspects, the functionality described may be provided within dedicated software modules and/or hardware. Also, the techniques could be fully implemented in one or more circuits or logic elements. The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, an integrated circuit (IC) or a set of ICs (e.g., a chip set). Furthermore, the elements depicted in the flowchart and block diagrams or any other logical component may be implemented on a machine capable of executing program instructions. Thus, while the foregoing drawings and descriptions set forth functional aspects of the disclosed systems, no particular arrangement of software for implementing these functional aspects should be inferred from these descriptions unless explicitly stated or otherwise clear from the context. Similarly, it may be appreciated that the various steps identified and described above may be varied, and that the order of steps may be adapted to particular applications of the techniques disclosed herein. All such variations and modifications are intended to fall within the scope of this disclosure. As such, the depiction and/or description of an order for various steps should not be understood to require a particular order of execution for those steps, unless required by a particular application, or explicitly stated or otherwise clear from the context. Various embodiments have been described. These and other embodiments are within the scope of the following claims. | 84,875 |
11861883 | DESCRIPTION OF EMBODIMENTS Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. Note that, in the present specification and the drawings, redundant description of components having substantially the same functional configuration is omitted by assigning the same reference numerals. Note that the description will be given in the following order.1. Background2. Embodiment2.1. Overall configuration example of system12.2. Functional configuration example of recognizer development device202.3. Functional configuration example of information processing server402.4. Operation example2.4.1. Operation example 12.4.2. Operation example 22.4.3. Operation example 32.5. Modification2.5.1. First modification2.5.2. Second modification3. Hardware configuration example4. Conclusion 1. Background First, a background of the present disclosure will be described. In recent years, in a field such as Internet of Things (IoT), a device that performs processing of recognizing a predetermined target in an image, a voice, a sentence, or the like by a recognizer generated using technology related to so-called supervised machine learning such as deep learning has become widespread. In a case of developing the recognizer using the supervised machine learning technology such as the deep learning, generally, it is common to repeat a development cycle including constructing a learning data set, designing and learning the recognizer, transplanting the recognizer to an evaluation device, and evaluating the accuracy of the recognizer. Here, an outline of a development cycle in a case where the recognizer is developed using the supervised machine learning technology will be described with reference toFIG.1. As described above, in general, when the recognizer is developed using the supervised machine learning technology, work is performed in the order of the construction T10 of the learning data set, the designing and learning T20 of the recognizer, the transplantation T30 of the recognizer to the evaluation device, and the accuracy evaluation T40 of the recognizer. The construction T10 of the learning data set is work of collecting learning data including a recognition target to be recognized by the recognizer and labeling the recognition target included in the learning data. Here, the recognition target exists in the learning data. For example, in a case where the learning data is image data, the recognition target is a predetermined region in the image data, and the predetermined region is labeled. Note that, hereinafter, the learning data in which the recognition target is labeled is also referred to as the learning data set. The designing and learning T20 of the recognizer is work of designing and learning the recognizer so as to recognize the recognition target included in the learning data, on the basis of the learning data set constructed by the construction T10 of the learning data set. Further, the transplantation T30 of the recognizer to the evaluation device is work of transplanting the recognizer to the evaluation device that performs the accuracy evaluation of the recognizer. Here, the evaluation device is, for example, a device in which a developed recognizer is actually used. Further, the accuracy evaluation T40 of the recognizer is work of evaluating the recognition accuracy of the recognizer in the evaluation device. 
Here, in order to further improve the recognition accuracy of the recognizer, improvement of diversity of the learning data can be required. If the diversity of the learning data is not sufficient, the recognizer is not sufficiently generalized, and for example, a target that is similar to, but different from the recognition target, other than the recognition target to be recognized, may be recognized as the recognition target (false positive). Here, the erroneous recognition means that the recognizer recognizes a target included in data different from a predetermined recognition target included in the learning data to be recognized as the recognition target (false positive). As an example, it is called false positive that a recognizer learned to recognize a “tomato” portion in image data recognizes a “paprika” portion different from “tomato” in certain image data as “tomato”. In a case where a recognizer is generated by learning using only image data in which “tomato” is captured in a home garden as learning data, a situation in which the recognizer not only recognizes “tomato” but also recognizes “paprika” or “apple” that is substantially the same in color as “tomato” and slightly different in shape from “tomato” as “tomato” can occur. The above situation can occur due to, for example, that the recognizer performs recognition in response to only the color of “tomato”. In a case where it is desired to develop a recognizer that recognizes “tomato” without recognizing “paprika” or “apple”, it is necessary to generate the recognizer by learning using image data in which “paprika” or “apple” is captured. That is, in order to improve diversity of learning data, more learning data is generally required. However, since the work of labeling a recognition target included in the learning data is performed by a user's hand, a work time increases as an amount of learning data increases, and the burden on a user increases. Therefore, there may be a limit to improvement of the diversity of the learning data. In addition, there may be a situation in which there is a limit to learning data that can be prepared in a predetermined context. In response to the above situation, for example, Patent Literature 1 described above discloses technology for reducing the number of man-hours for constructing a learning data set by semi-automating the labeling work. However, Patent Literature 1 does not consider checking whether or not the amount and diversity of learning data included in the learning data set are sufficient. It is necessary to perform the transplantation T30 of the recognizer to the evaluation device and the accuracy evaluation T40 of the recognizer each time the construction T10 of the learning data set and the designing and learning T20 of the recognizer are completed. Then, as a result of the accuracy evaluation, in a case where the recognition accuracy of the recognizer is not sufficient, it is necessary to repeatedly perform the above process, so that a development period may be increased. The technical ideas according to the present disclosure have been conceived in view of the above points, and it is possible to prevent rework in the development process of the recognizer and decrease the development period by specifying a target that can be erroneously recognized using data of a context substantially the same as the context of the learning data and prompting the user to reconsider the diversity of the learning data. 
Note that, hereinafter, an example in which the recognizer recognizes an object of a recognition target captured in a predetermined region in image data will be described. 2. Embodiment 2.1. Overall Configuration Example of System1 Next, an example of an overall configuration of the system1according to the present embodiment will be described with reference toFIG.2. As illustrated inFIG.2, the system1includes an input/output terminal10, a recognizer development device20, a network30, and an information processing server40. (Input/Output Terminal10) The input/output terminal10receives input from the user. Further, the input/output terminal10outputs information regarding processing executed by the recognizer development device20or the information processing server40to the user. The input/output terminal10may be, for example, a mobile terminal such as a personal computer (PC), a smartphone, or a tablet terminal. Alternatively, the input/output terminal10may be a liquid crystal display (LCD) device, an organic light emitting diode (OLED) device, or a projector. (Recognizer Development Device20) The recognizer development device20constructs a learning data set on the basis of the learning data transmitted from the input/output terminal10. Specifically, the recognizer development device20labels a recognition target included in the learning data on the basis of the input from the user, and generates a learning data set. Further, the recognizer development device20performs designing and learning of the recognizer based on the learning data. A detailed functional configuration of the recognizer development device20will be described later. (Network30) The network30has a function of connecting the recognizer development device20and the information processing server40. The network30may include a public line network such as the Internet, a telephone line network, or a satellite communication network, various local area networks (LANs) including Ethernet (registered trademark), a wide area network (WAN), and the like. Further, the network30may include a dedicated line network such as an Internet protocol-virtual private network (IP-VPN). Furthermore, the network30may include a wireless communication network such as Wi-Fi (registered trademark) or Bluetooth (registered trademark). (Information Processing Server40) The information processing server40is an example of an information processing apparatus that specifies an erroneous recognition target that is likely to be erroneously recognized by a recognizer generated in order to recognize a predetermined recognition target by the recognizer development device20, and controls output of information regarding the specified erroneous recognition target. Here, the erroneous recognition means that the recognizer recognizes a target, which is different from the predetermined recognition target and included in specifying data, as the recognition target (false positive). Note that the information processing server40receives the recognizer and the learning data set from the recognizer development device20via the network30. A detailed functional configuration of the information processing server40will be described later. Note that the information processing server40may be a server on a cloud available in a cloud service. The configuration example of the system1according to the present embodiment has been described above. 
Note that the above configuration described usingFIG.2is merely an example, and the configuration of the system1according to the present embodiment is not limited to such an example. The configuration of the system1according to the present embodiment can be flexibly modified according to specifications and operations. 2.2. Functional Configuration Example of Recognizer Development Device20 Next, an example of a functional configuration of the recognizer development device20according to the present embodiment will be described with reference toFIG.3. The recognizer development device20includes a data set management unit210, a recognizer development unit220, a communication unit240, a storage unit250, and a control unit260. Note that the storage unit250includes a learning data set DB251and a recognizer database252. The learning data set DB251is a collection of learning data sets, and the recognizer database252is a collection of recognizers that are being developed or have been developed. (Data Set Management Unit210) The data set management unit210executes construction of a learning data set stored in the storage unit250to be described later, on the basis of input from the user via the input/output terminal10. Specifically, the data set management unit210displays a screen for labeling work on the input/output terminal10at the time of labeling work for each learning data, and labels the learning data on the basis of input from the user to the screen. Here, an example of screen display control for labeling work by the data set management unit210according to the present embodiment will be described with reference toFIGS.4and5.FIG.4illustrates a display screen SC1 for labeling work by the data set management unit210in a case where the input/output terminal10is a personal computer. The display screen SC1 for labeling work includes a display portion SP11 for displaying a labeling work situation, a display portion SP12 for displaying data including a recognition target, a display portion SP13 for displaying information regarding an erroneous recognition target under the control of the information processing server40to be described later, and a display portion SP14 for displaying information regarding an operation on the display screen for labeling work. In the display portion SP11, a labeling work situation for the recognition target is displayed. In the example ofFIG.4, in the display portion SP11, information indicating a situation in which a “tomato” image included in image data displayed in the display portion SP12 is labeled is displayed. In the display portion SP12, data including the recognition target currently performing the labeling work is displayed. In the example ofFIG.4, the image data including the “tomato” image is displayed. Here, in a region of the labeled “tomato” image, display is performed so as to surround the region. In the display portion SP13, information regarding the erroneous recognition target is illustrated under the control of the information processing server40described later. In the example ofFIG.4, an example of an image that a recognizer currently under development recognizes as a “tomato” is displayed. In the display portion SP14, various buttons and the like for operating the display screen in the labeling work are illustrated. In the example ofFIG.4, buttons for performing “region selection”, “label deletion”, “change of display image”, “enlargement of display image”, and “label confirmation” are displayed. 
Further, as illustrated inFIG.4, in the display portion SP13, an information output avatar may be displayed. On the other hand,FIG.5illustrates a display screen SC2 for labeling work by the data set management unit210in a case where the input/output terminal10is a smartphone or a tablet terminal. The layout of the display screen SC is partially different in that input by a touch operation is possible and a physical size of the screen is different from that of a personal computer. The display screen SC1 for labeling work includes a display portion SP21 for displaying a labeling work situation and a display portion SP22 for displaying information regarding an erroneous recognition target displayed under the control of the information processing server40described later. In a case where the input/output terminal10is a smartphone or a tablet terminal, for example, information can be input by a touch operation for a region on a touch panel. Therefore, as illustrated inFIG.5, the data set management unit210may display a balloon for confirming whether or not to input a label in a case where a touch operation is performed on the “tomato” image to be the recognition target in the display portion SP21. Further, as illustrated inFIG.5, the data set management unit210may display an icon on the upper right of the display screen SC2 as in the display portion SP22, and when the information regarding the erroneous recognition target is displayed, the data set management unit210may display the information in the format of a balloon. As described above, display control of the screen for labeling work by the data set management unit210is executed. Note that a context corresponding to each piece of data may be set before the work of labeling each piece of data is started. The context may indicate a place such as a “farm”, a “home garden”, a “supermarket”, a “bank”, or a “school”. Further, the context may indicate a time zone such as “morning” or “late night”, or may indicate a predetermined scene such as “cooking” or “meeting”. The context is set in a desired format. Of course, the configuration of the screen for labeling work is not limited to such an example. The display of the information regarding the erroneous recognition target described above will be described in detail later. Returning toFIG.3again, an example of the functional configuration of the recognizer development device20will be described. (Recognizer Development Unit220) The recognizer development unit220executes processing related to development of a recognizer for recognizing a predetermined recognition target. Specifically, the recognizer development unit220provides an integrated development environment and an editor of the recognizer to the user via the input/output terminal10, and performs designing and learning of the recognizer on the basis of input from the user via the input/output terminal10. Further, the recognizer development unit220may set a context in which the recognizer to be developed is used when the recognizer is developed. Furthermore, the recognizer development unit220may display an evaluation result of the recognizer to the user via the input/output terminal10. 
Here, an example of screen display of the evaluation result of the recognizer by the recognizer development unit220according to the present embodiment will be described with reference toFIG.6.FIG.6illustrates a display screen SC3 that displays the evaluation result of the recognition of the learning data by the recognizer displayed by the recognizer development unit220. In the example ofFIG.6, on the display screen SC3 that displays the evaluation result, learning data with which a recognition target has been labeled and evaluation of accuracy in recognition processing of the recognition target are illustrated as reliability. Here, the evaluation in the recognition processing is indicated by an index such as mean average precision (mAP) or intersection over union (IoU). Further, as illustrated in the example ofFIG.6, the recognizer development unit220may display additional information regarding a context as a remark for the learning data. Here, the additional information refers to date and time, position information, a name of a place, and the like of shooting, in a case where the learning data is image data. Furthermore, the recognizer development unit220may display information indicating the context of the learning data described above as the additional information. As described above, processing and display related to the development of the recognizer by the recognizer development unit220are performed. Of course, the configuration of the screen of the evaluation result of the recognizer is not limited to such an example. Returning toFIG.3again, an example of the functional configuration of the recognizer development device20will be described. (Communication Unit240) The communication unit240executes communication with the input/output terminal10or the information processing server40. For example, the communication unit240transmits information regarding screen display to the input/output terminal10, on the basis of an instruction from the data set management unit210or the recognizer development unit220, and receives information indicating the input operation of the user from the input/output terminal10. (Storage Unit250) The storage unit250stores various types of information regarding the processing of the data set management unit210and the recognizer development unit220. As described above, the storage unit250includes, for example, the learning data set DB251and the recognizer database252. The storage unit250provides various types of data of the learning data set DB251and the recognizer database252, on the basis of a request from the data set management unit210or the recognizer development unit220. (Control Unit260) The control unit260has a function of controlling each configuration included in the recognizer development device20according to the present embodiment. The control unit260controls, for example, the start or stop of each configuration. The configuration example of the recognizer development device20according to the present embodiment has been described above. Note that the above configuration described usingFIG.3is merely an example, and the configuration of the recognizer development device20according to the present embodiment is not limited to such an example. The configuration of the recognizer development device20according to the present embodiment can be flexibly modified according to specifications or operations. 2.3. 
Functional Configuration Example of Information Processing Server40 Next, an example of a functional configuration of the information processing server40according to the present embodiment will be described with reference toFIG.7. The information processing server40includes a context recognition unit410, an erroneous recognition target specifying unit420, a data classifying unit430, an output control unit440, an expansion support unit450, a server communication unit460, a storage unit470, and a control unit480. (Context Recognition Unit410) The context recognition unit410recognizes a context of the learning data received from the recognizer development device20. For example, the context recognition unit410may recognize a context corresponding to the learning data and set in advance. Further, for example, the context recognition unit410may recognize the context of the learning data on the basis of the learning data. For example, in a case where the learning data is image data, the context recognition unit410may recognize the context of the learning data, on the basis of a background portion different from a target that can be recognized by the recognizer in the image data. The context of the learning data is recognized by the context recognition unit410, so that the specification of the erroneous recognition target by the erroneous recognition target specifying unit420described later is more accurately executed. Note that context recognition processing by the context recognition unit410is not limited to such an example. For example, when the context of the image data is recognized, the context recognition unit410may use clothes of a person in the image, character information of a subtitle or a signboard, or the like, in addition to the background of the image. Further, the context recognition unit410may recognize the context on the basis of surrounding information such as a date when an image is created or captured, a voice, a temperature, a humidity, a place, a country, and position information acquired by a global positioning system (GPS), which are added to the learning data as additional information. The context is recognized by various types of information, so that it is easy to specify an erroneous recognition target that conforms to the purpose of the user. Note that, when the learning data set is received from the recognizer development device20, the context recognition unit410recognizes a context common to the learning data forming the learning data set. In a case where all contexts are not substantially the same in a plurality of pieces of learning data, for example, a context occupying the majority of the plurality of pieces of learning data may be recognized by the context recognition unit410as the context of the entire learning data, or a context indicating an intermediate concept of the contexts of the plurality of pieces of learning data may be recognized by the context recognition unit410as the context of the entire learning data. Note that data in the context to be substantially the same as the context recognized by the context recognition unit410on the basis of the learning data set is acquired as specifying data from the specifying data set DB471of the storage unit470to be described later, by the erroneous recognition target specifying unit420to be described later. At that time, the context recognition unit410may recognize the context of the data included in the specifying data set DB471. 
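A small sketch of the "majority context" rule mentioned above is given below; it assumes each piece of learning data already carries a context string, whether set in advance or produced by the context recognition unit410, and leaves the choice of a context representing an intermediate concept to other logic when no single context holds a majority:

```python
from collections import Counter

def dataset_context(contexts):
    """Return the context shared by the majority of the learning data,
    or None when no single context reaches a majority (in which case a
    context representing an intermediate concept would be chosen instead)."""
    counts = Counter(contexts)
    context, count = counts.most_common(1)[0]
    return context if count > len(contexts) / 2 else None
```

For example, dataset_context(["home garden", "home garden", "farm"]) returns "home garden", while a set of contexts with no majority returns None.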
(Erroneous Recognition Target Specifying Unit420) The erroneous recognition target specifying unit420uses the recognizer to specify an erroneous recognition target by recognition processing on specifying data. Specifically, the erroneous recognition target specifying unit420specifies the erroneous recognition target by executing the recognition processing on the specifying data using the recognizer and using a result obtained by causing the data classifying unit430described later to execute clustering processing, on the basis of a result of the recognition processing. Hereinafter, specific processing of the erroneous recognition target specifying unit420will be described. The erroneous recognition target specifying unit420extracts data in a context to be substantially the same as the context recognized by the context recognition unit410from the specifying data set DB471of the storage unit470as specifying data. Here, the erroneous recognition target specifying unit420may acquire the specifying data on the basis of the context included in the specifying data set DB471and set in advance in the data, or may acquire the specifying data on the basis of a result obtained by causing the context recognition unit410to recognize the context of the data. Further, the erroneous recognition target specifying unit420specifies a target included in specifying data erroneously recognized by the recognizer as an erroneous recognition target, on the basis of results of recognition processing of the recognition target of the specifying data by the recognizer and clustering processing by the data classifying unit430. Details of the clustering processing by the data classifying unit430will be described later. In order to specify the erroneous recognition target, for example, an accuracy evaluation result of the recognition processing by the recognizer of the target in each cluster is used for the target in the specifying data classified into each cluster by the clustering processing. For example, the erroneous recognition target is specified on the basis of an average value of accuracy evaluation of recognition processing of the target recognized by the recognizer in the specifying data in the cluster. (Data Classifying Unit430) The data classifying unit430executes clustering processing, which is a method of so-called unsupervised machine learning, on the target included in the specifying data recognized by the erroneous recognition target specifying unit420using the recognizer, and classifies the target included in the specifying data into a plurality of clusters by the executed clustering processing. For example, in a case where the specifying data is image data, the data classifying unit430executes clustering processing to predetermined regions in the specifying data recognized by the erroneous recognition target specifying unit420using the recognizer, and classifies each predetermined region into any one of a plurality of clusters. Examples of the method of the clustering processing include a principal component analysis method, a k-means method, and the like. Example of Specifying Erroneous Recognition Target Here, an example of specifying an erroneous recognition target by the erroneous recognition target specifying unit420and the data classifying unit430according to the present embodiment will be described with reference toFIGS.8and9. 
FIG.8is a diagram illustrating processing of extracting data of a context to be substantially the same as a context of learning data as specifying data by the erroneous recognition target specifying unit420.FIG.8illustrates an image data set DS1 in the specifying data set DB471. The image data set DS1 may include image data whose context is "farm" as well as image data whose context is other than "farm". The erroneous recognition target specifying unit420extracts, as specifying data, image data whose context is "garden" in which a vegetable or the like is produced from the image data set DS1. An image data set DS2 illustrated on the right side ofFIG.8is a specifying data set whose context is "garden". Next, the erroneous recognition target specifying unit420recognizes the target by executing the recognition processing using the recognizer received from the recognizer development device20on the image data set DS2. On the left side ofFIG.9, a result table TA of the recognition processing on the image data set DS2 by the erroneous recognition target specifying unit420is illustrated. Note that, in the examples ofFIGS.8and9, the "home garden" and the "garden" are treated as substantially the same context. The data classifying unit430executes clustering processing on the recognized target regions (cutout images) included in the result of the recognition processing illustrated on the left side ofFIG.9. Each cutout image is classified into any one of a plurality of clusters by the clustering processing of the data classifying unit430. Further, as illustrated on the right side ofFIG.9, the erroneous recognition target specifying unit420calculates an average of reliabilities of the recognition processing corresponding to the plurality of cutout images included in each cluster. In a cluster column CLR illustrated on the right side ofFIG.9, cutout images classified into clusters CL1 to CL5 and the average reliability corresponding to each cluster are illustrated. As shown on the right side ofFIG.9, the erroneous recognition target specifying unit420specifies the erroneous recognition target on the basis of the calculated average reliability, which serves as the recognition accuracy. Here, since there is a high possibility that the target of the cutout image of the cluster CL1 having the highest average reliability is the recognition target to be recognized by the recognizer, the target of the cluster CL2 having the second highest average reliability may be specified as the erroneous recognition target. In the example ofFIG.9, the cluster CL1 is an image of "tomato", and the cluster CL2 is an image of "paprika". Here, the erroneous recognition target specifying unit420specifies the target of the cluster CL2 having the second highest reliability after the cluster CL1 as the erroneous recognition target. As described above, it is possible to specify an erroneous recognition target that may be erroneously recognized when the recognition processing is performed by the recognizer in a context that is substantially the same as that of the learning data. Note that the method for specifying the erroneous recognition target based on the recognition accuracy described above is not limited to such an example. In the above description, an example has been described in which the target corresponding to the cluster having the second highest average reliability, which is the average of recognition accuracy, is set as the erroneous recognition target.
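The clustering and reliability-averaging steps illustrated inFIG.9may be sketched as follows. The sketch uses k-means from scikit-learn as one concrete choice for the clustering method, assumes each cutout image has been reduced to a numeric feature vector (the feature representation is not prescribed here), and returns the cluster whose average reliability is the second highest, matching the CL1/CL2 example above:

```python
import numpy as np
from sklearn.cluster import KMeans

def find_erroneous_recognition_cluster(cutout_features, reliabilities,
                                       n_clusters=5, rank=2):
    """Cluster recognized cutout images and pick the `rank`-th most reliable cluster.

    cutout_features -- array of shape (n_cutouts, n_features) for each cutout image
    reliabilities   -- recognition reliability reported by the recognizer per cutout
    rank=2 corresponds to selecting cluster CL2 after CL1 in the FIG. 9 example.
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(
        np.asarray(cutout_features))
    reliabilities = np.asarray(reliabilities, dtype=float)
    averages = {c: reliabilities[labels == c].mean() for c in range(n_clusters)}
    ordered = sorted(averages, key=averages.get, reverse=True)
    erroneous_cluster = ordered[rank - 1]
    cutout_indices = np.where(labels == erroneous_cluster)[0]
    return erroneous_cluster, cutout_indices, averages
```

Passing a different rank value selects a lower-ranked cluster instead.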
However, for example, in a case where there is a high possibility that a recognition target is divided into two or more clusters and classified, a target corresponding to a cluster having the third highest recognition accuracy or lower may be specified as the erroneous recognition target. A functional configuration of the information processing server40according to the present embodiment will be described with reference toFIG.7again. (Output Control Unit440) The output control unit440controls display of information regarding the erroneous recognition target specified by the erroneous recognition target specifying unit420. For example, the information regarding the erroneous recognition target may be notification information for notifying the user of the erroneous recognition target. The output control unit440may control the display of the notification information on the basis of a specifying result by the erroneous recognition target specifying unit420. The notification information may be visual information or character information. In a case where the learning data is image data, the notification information may be a portion (clipped image) of the image data corresponding to the erroneous recognition target as the visual information. Further, in a case where there is a plurality of pieces of data indicating the erroneous recognition target in the cluster, the output control unit440may display the plurality of pieces of data. Further, the information regarding the erroneous recognition target may be additional information related to the erroneous recognition target, in addition to the information indicating the erroneous recognition target. For example, the output control unit440may control display of information indicating an evaluation on the result of the recognition processing on the specifying data by the recognizer as information regarding the erroneous recognition target. Further, the output control unit440may further control display of information indicating the context of the learning data. As described above, examples of the information indicating the context include information notifying the context and surrounding information such as a date, a voice, a temperature, a humidity, and position information acquired by GPS. By grasping the context of the specifying data, the user can consider what type of learning data should be prepared or in which situation or situation the learning data should be expanded. Further, the output control unit440may control display of information regarding expansion of the learning data set by the expansion support unit450described later. The control of the display of the information regarding the expansion will be described in detail later. Note that the output control unit440may control display of the entire screen other than an information portion regarding the erroneous recognition target in the display screens SC1 to SC3 illustrated inFIGS.4to6described above, instead of the recognizer development device20. Note that, in addition to the visual information described above, the output control unit440may output, to the user, information regarding the erroneous recognition target or the like by a voice. Further, the output control unit440may output information regarding the erroneous recognition target and the like to the user only by a voice. (Expansion Support Unit450) The expansion support unit450controls expansion processing of the learning data on the basis of a specification result of the erroneous recognition target specifying unit420. 
Here, the expansion processing of the learning data set DB251refers to adding new learning data to the learning data set DB251. That is, the expansion support unit450may add, for example, a combination of labels corresponding to the learning data of the erroneous recognition target specified by the erroneous recognition target specifying unit420to the learning data set DB251as the learning data set. Here, the label may be given by the user or may be automatically given by the expansion support unit450. Output Control Example and Expansion Processing Example As described above, in the expansion support unit450, the output control unit440may control the display of the information regarding the expansion of the learning data and execute the expansion processing of the learning data, on the basis of the feedback from the user on the information regarding the expansion. For example, the expansion support unit450may execute the expansion processing of the learning data set, on the basis of the feedback from the user for information regarding confirmation as to whether or not the erroneous recognition target displayed by the output control unit440is erroneously recognized. At that time, the learning data set expanded in the learning data set DB251may be the same as the data of the erroneous recognition target. The data of the erroneous recognition target is labeled differently from the recognition target and expanded as the learning data, so that the possibility of erroneous recognition of the recognizer is reduced, and as a result, a more accurate recognizer can be developed. For example, as illustrated inFIG.4described above, the output control unit440displays an image and a sentence for confirming whether or not the erroneous recognition target is a target to be recognized so as to be included in the display portion SP13 via the recognizer development device20. By inputting “Yes” or “No” to the display portion SP13, the user can determine whether or not the recognizer recognizes the displayed image (target). Display examples of the information regarding the erroneous recognition target and the information regarding the expansion are not limited to such examples.FIG.4illustrates a display screen example in a case where the input/output terminal10is a personal computer. Here, a display example of information regarding the erroneous recognition target and information regarding the expansion in a case where the input/output terminal10according to the present embodiment is a smartphone or a tablet terminal will be described with reference toFIG.10. InFIG.10, a display screen SC4 for labeling work includes a display portion SP41 for displaying a labeling work situation and a display portion SP42 for displaying information regarding the erroneous recognition target. Unlike the display screen SC2 ofFIG.5, a portion SP43 that displays information indicating the erroneous recognition target and information for confirming the erroneous recognition target is illustrated in the format of a balloon, on the basis of the display portion SP42. As described above, the layout of the various table screens can be changed according to the type of the input/output terminal10. Further, for example, on the display screen SC4 for labeling work, the output control unit440may display a message actively suggesting expansion of learning data, such as “By performing learning so as not to detect this object, recognition accuracy increases”. 
The display screen illustrated inFIGS.4and10is a display screen at the time of constructing the learning data set. On the other hand, the output control unit440may control the display of the information regarding the expansion by the expansion support unit450at the time of designing and learning the recognizer. Here, an example of display of information regarding the expansion at the time of designing and learning of the recognizer according to the present embodiment will be described with reference toFIG.11.FIG.11illustrates a display screen SC5 of the evaluation result of the recognizer. The display screen SC3 of the evaluation result of the recognizer illustrated inFIG.6described above is an evaluation result of the recognition processing on the learning data, while the display screen SC5 illustrated inFIG.11is an evaluation result of the recognition processing on the specifying data including the erroneous recognition target. In the example ofFIG.11, two erroneous recognition targets (paprika different from tomato to be the recognition target) are illustrated on the display screen SC5. Here, the user can input the feedback to the accuracy evaluation result illustrated on the display screen SC5. For example, the expansion support unit450may control the expansion processing of the learning data set DB251, on the basis of input from the user as to whether or not the accuracy evaluation result is as expected. In a case where “expected detection” is input in a display portion SP43, the expansion support unit450may determine that the displayed erroneous recognition target is actually a recognition target and add the erroneous recognition target with the same label as the recognition target to the learning data set DB251as the learning data. On the other hand, when “unexpected detection” is input in the display portion SP43, the expansion support unit450may determine that the erroneous recognition target is actually an erroneously recognized target, perform another labeling, and add the erroneous recognition target to the learning data set DB251. Note that the format of input from the user may be a format of selection from predetermined options as illustrated inFIG.11, or a format of input by a keyboard shortcut of the input/output terminal10. As described above, the information regarding the erroneous recognition target is displayed by the output control unit440, so that the user can confirm at an early stage what type of target is erroneously recognized by the current learning data set and recognizer, and what type of data should be added as the learning data. Further, expansion of the learning data according to the erroneous recognition target specified by the erroneous recognition target specifying unit420is realized by the expansion support unit450. Further, according to the output control unit440and the expansion support unit450, the expansion of the learning data can be performed at the stage of construction of the learning data set or designing and developing of the recognizer on the basis of whether or not the erroneous recognition target is actually an erroneously recognized target, and the development period can be shortened. Further, active learning in which the user actively understands the importance of securing the amount or diversity of learning data in the development of the recognizer is realized by the output control unit440and the expansion support unit450. 
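As an illustration of the feedback-driven expansion just described, the following short Python sketch shows one way the branch between an "expected detection" and an "unexpected detection" could translate into an addition to the learning data set. It is only a sketch under assumed names (expand_from_feedback, a learning data set represented as a simple list); the patent does not prescribe any particular implementation or labeling scheme.

    # Minimal sketch (hypothetical names) of expansion processing driven by user
    # feedback on a displayed erroneous recognition target.
    def expand_from_feedback(learning_data_set, sample, recognition_label, feedback):
        if feedback == "expected detection":
            # The displayed target was actually a true recognition target, so it is
            # added with the same label as the recognition target.
            label = recognition_label
        elif feedback == "unexpected detection":
            # The displayed target was erroneously recognized, so another labeling is
            # performed before adding it (a simple negative label is assumed here).
            label = "not_" + recognition_label
        else:
            return  # no expansion without explicit confirmation from the user
        learning_data_set.append({"data": sample, "label": label})

In a full system, the appended entry would be persisted to the learning data set DB251, and the new label could also be supplied by the user rather than derived automatically as in this sketch.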
The exchange of information with the user via the input/output terminal10by the output control unit440may be performed a plurality of times. For example, by grasping a more detailed context in which the recognizer is used by exchanging information with the user, the expansion support unit450can more accurately specify data to be expanded as the learning data set. The output control example and the expansion processing example have been described above. Returning toFIG.7again, the functional configuration of the information processing server40will be described. (Server Communication Unit460) The server communication unit460executes communication with the recognizer development device20via the network30. For example, the server communication unit460receives the recognizer and the learning data set from the recognizer development device20via the network30, and transmits information regarding the erroneous recognition target or specifying data including the erroneous recognition target to the recognizer development device20. (Storage Unit470) The storage unit470stores the specifying data set DB471and the like. The specifying data set DB471is a set of data and information associated with the data. The information associated with the data is, for example, information indicating a context of the data. A combination of the data and the information indicating the context or the like is also referred to as a specifying data set. The storage unit470may provide data in a predetermined context on the basis of the request from the context recognition unit410and the information indicating the context. Here, the provided data in the predetermined context is the above-described specifying data. Further, the storage unit470may provide the data of the specifying data set DB471to the context recognition unit410to recognize the context, on the basis of the request from the context recognition unit410. Note that each data of the specifying data set DB471may not be data prepared for development of the recognizer. That is, in specifying the erroneous recognition target, the erroneous recognition target specifying unit420may acquire and use data used for other purposes. (Control Unit480) The control unit480has a function of controlling each configuration included in the information processing server40according to the present embodiment. The control unit260controls, for example, the start or stop of each configuration. 2.4. Operation Example 2.4.1. Operation Example 1 Next, an example of the operation related to the work of labeling the learning data by the data set management unit210according to the present embodiment will be described. Referring toFIG.12, first, after the start of an application performing labeling, the data set management unit210of the recognizer development device20causes the input/output terminal10to display a screen prompting designation of a place where the learning data set DB251is stored (S101). When the place where the learning data set DB251is stored is not designated (S102: No), it is determined that the labeling work is not performed, and the data set management unit210ends the operation. On the other hand, when the place where the learning data set DB251is stored is designated (S102: Yes), the data set management unit210causes the input/output terminal10to display a labeling screen (S103). Next, when the operation of labeling the image of the learning data displayed on the labeling screen displayed in step S103is not input (S104: No), the process returns to step S104. 
On the other hand, when the operation of labeling the image of the learning data displayed on the labeling screen displayed in step S103is input (S104: Yes), the data set management unit210registers a labeling result as the learning data set (S105). Next, when the labeling is continued (S106: No), the process returns to step S103. On the other hand, when the labeling ends (S106: Yes), the data set management unit210ends the operation. 2.4.2. Operation Example 2 Next, an example of the operation related to the development of the recognizer by the recognizer development unit220according to the present embodiment will be described. Referring toFIG.13, first, the recognizer development unit220of the recognizer development device20newly creates a project file for developing the recognizer (S201). Next, the recognizer development unit220sets the context of the recognizer and the learning data (S202). Next, the recognizer development unit220executes designing processing of the recognizer on the basis of input from the user or the like (S203). Next, the recognizer development unit220executes learning of the recognizer on the basis of the learning data (S204). Next, the recognizer development unit220evaluates the accuracy of the recognition processing of the recognizer of which the learning has been executed in step S204(S205). Next, when the development of the recognizer is continued on the basis of the input from the user (S206: Yes), the process returns to step S203. On the other hand, when the development of the recognizer is ended on the basis of the input from the user (S206: No), the recognizer development unit220releases the recognizer to a developer or a customer (S207), and the recognizer development unit220ends the operation. 2.4.3. Operation Example 3 Next, an example of an operation related to specification of an erroneous recognition target, presentation of information regarding the erroneous recognition target, and expansion of a learning data set by the information processing server40according to the present embodiment will be described. Referring toFIG.14, first, when the server communication unit460does not receive information indicating the occurrence of a predetermined event from the recognizer development device20(S301: No), the process returns to step S301. Here, examples of the predetermined event include completion of designing and learning of the recognizer by the recognizer development device20, a change in setting of a project file for developing the recognizer, and the like. On the other hand, when the server communication unit460receives the information indicating the occurrence of the predetermined event from the recognizer development device20(S301: Yes), the erroneous recognition target specifying unit420acquires the recognizer and the learning data set via the server communication unit460(S302). Next, the context recognition unit410recognizes a context of the learning data acquired in step S302(S303). Next, the erroneous recognition target specifying unit420acquires data of a context to be substantially the same as the context recognized in step S303from the specifying data set DB471of the storage unit470as specifying data (S304). Next, the erroneous recognition target specifying unit420applies the recognizer acquired in step S302to the specifying data acquired in step S304(S305). Next, the data classifying unit430executes clustering processing on the target recognized in step S305(S306).
Next, the erroneous recognition target specifying unit420specifies an erroneous recognition target on the basis of a result of the clustering processing executed in step S306(S307). Next, the output control unit440causes the input/output terminal10to display information regarding the erroneous recognition target specified in step S307(S308). When there is input from the user for the information regarding the erroneous recognition target displayed in step S308that the erroneous recognition target is actually the erroneously recognized target (S309: Yes), the expansion support unit450adds the specifying data including the erroneous recognition target specified in step S307to the learning data set (S310). On the other hand, in a case where it is determined that the erroneous recognition target is not actually the erroneously recognized target, when there is input from the user for the information regarding the erroneous recognition target displayed in step S308(S309: No), the process proceeds to step S311. Next, when designing and development of the recognizer are continued (S311: No), the process returns to step S301. On the other hand, when designing and development of the recognizer end (S311: Yes), the information processing server40ends the operation. 2.5. Modification 2.5.1. First Modification Next, modifications of the embodiment of the present disclosure will be described. In the above description, the target recognized by the recognizer is the portion in the still image. In other words, in the above description, the target recognized by the recognizer is the type of the object. However, the scope of application of the technical ideas according to the present disclosure is not limited to such an example. The technical ideas according to the present disclosure are applicable to various recognition processing. For example, the learning data may be voice data, and the recognition target in this case is a predetermined phrase, a word portion, or the like in the voice data. Further, for example, the learning data may be motion data or action data, and the recognition target may be a predetermined gesture performed by a person in moving image data. In this case, the learning data is collected by, for example, an inertial measurement unit (IMU). The IMU is worn on a person's arm, for example. Further, the gesture is, for example, a motion of raising an arm or the like. Here, an example of screen display related to an erroneous recognition target in a case where learning data is motion data in a modification of the embodiment of the present disclosure will be described with reference toFIG.15.FIG.15illustrates a display screen SC6 displayed by the output control unit440. On the display screen SC6, time-series data of the IMU, a moving image time-synchronized with the time-series data, a probability that a predetermined gesture is being executed, and an average probability of a gesture as recognition accuracy are displayed as motion data, instead of the still image displayed in the above examples. The user confirms whether or not the recognized gesture is erroneously recognized while confirming the moving image displayed on the display screen SC6. For example, in a case where a pointing operation is performed on the displayed recognition result, a moving image of a portion corresponding to the operated portion may be reproduced. Further, similarly to the above, the learning data may be expanded on the basis of the feedback from the user on the recognition result. 
Note that the target recognized by the recognizer in the present disclosure is not limited to such an example. The target recognized by the recognizer may be, for example, document data. In this case, the recognized target is a predetermined sentence, phrase, or word in the document data. At this time, the data classifying unit430may use, for example, a classification vocabulary table at the time of the clustering processing. 2.5.2. Second Modification Incidentally, in a case where a recognition target in a predetermined context is changed after the development of a recognizer that recognizes the recognition target in the predetermined context is completed, there is a possibility that the accuracy of the recognizer is lowered. Examples of a situation in which a change in the recognition target is generated and the accuracy of the recognizer is lowered include the following. For example, in a case where a variety of vegetables grown in a garden is changed, or in a case where fashion of clothes of a person, fashion of music, or the like is changed, there is a possibility that accuracy of a recognizer that recognizes the vegetables or a recognizer that recognizes the clothes of the person and the music is lowered. In addition, for example, even in a case where the variety of the vegetable grown in the garden is not changed, the appearance of the vegetable may change with the lapse of time such as a change in season, and even in a case where the recognizer is developed only on the basis of image data of the vegetable at a certain time point, the accuracy of the recognizer may be lowered. Further, even in a case where the context of the recognition target changes, the accuracy of the recognizer may be lowered. For example, even in a case where a place in which the vegetables and the like are mainly produced is changed (in a case where the garden is changed to a factory) or a case where a country is changed, there is a possibility that the accuracy of the recognizer is lowered due to a change in the recognition target or a change in a target to be easily erroneously recognized in accordance with a change in the context. Therefore, for the above situation, the expansion support unit450may control the expansion processing of the learning data set on the basis of the update of the specifying data set DB471. Specifically, in a case where a change occurs in the data forming the specifying data set DB471, the expansion support unit450may control the expansion processing on the basis of an erroneous recognition target newly specified by the erroneous recognition target specifying unit420. For example, in a case where the contents of the specifying data set DB are changed, the accuracy change of the recognition processing of the recognizer may be displayed in accordance with the change, and the learning data set may be expanded in accordance with the accuracy change. Further, the output control unit440may control display of information regarding the update of the specifying data set DB471. The expansion support unit450may control the expansion processing on the basis of the feedback to the information regarding the update of the specifying data set DB471displayed by the output control unit440. 
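The second modification can also be sketched briefly: when the specifying data set is updated, the recognizer is re-evaluated and, if its accuracy drops, expansion of the learning data is triggered. The sketch below is illustrative only and assumes hypothetical names and the availability of evaluation pairs with known labels (predict_fn, eval_pairs, expand_fn); the patent leaves the concrete evaluation and expansion mechanics open.

    # Illustrative sketch (hypothetical names) of reacting to an update of the
    # specifying data set by re-evaluating the recognizer and triggering expansion
    # of the learning data when its accuracy decreases.
    def on_specifying_dataset_update(predict_fn, eval_pairs, baseline_accuracy,
                                     expand_fn, tolerance=0.01):
        correct = sum(1 for x, y in eval_pairs if predict_fn(x) == y)
        accuracy = correct / len(eval_pairs)
        if accuracy < baseline_accuracy - tolerance:
            # Accuracy decreased after the update; newly specified erroneous
            # recognition targets drive expansion (subject to user feedback).
            expand_fn(eval_pairs)
        return accuracy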
Here, an example of screen display of information regarding the update of the specifying data set DB471by the output control unit440in the modification of the embodiment of the present disclosure will be described with reference toFIG.16.FIG.16illustrates a display screen SC7 that displays information regarding the update of the specifying data set DB471. On the display screen SC7, information regarding a recognizer under development or already developed and a corresponding specifying data set DB is displayed. Further, on the display screen SC7, a status corresponding to a combination of each recognizer and the specifying data set DB is also displayed. The status indicates the status of the recognizer. Here, the status of the recognizer is the accuracy of recognition processing of the recognizer or the like. For example, in a case where the status is “running”, it indicates that the accuracy evaluation of the corresponding recognizer is being executed. Further, for example, in a case where the status is “accuracy maintenance”, it indicates that the accuracy of the recognition processing of the recognizer is not changed by the update of the specifying data set DB471. Further, for example, in a case where the status is “accuracy decrease”, it indicates that the accuracy of the recognition processing of the recognizer is decreased by the update of the specifying data set DB471. As described above, by displaying the information regarding the update of the specifying data set DB471, for example, it is possible to confirm a situation change in a context in which the recognizer is used such as a change in fashion, and it is possible to perform the replenishment of the learning data or designing of the recognizer at an early stage. Further, according to a request from the user, the learning data set can be automatically expanded on the basis of the update of the specifying data set DB124. Although the modifications according to the present disclosure have been described above, the output control unit440may also perform, for the expansion of the learning data set, proposal for purchase of specifying data, proposal for transfer of securities, or the like to the user. Further, in a case where the target recognized by the recognizer is three-dimensional data, the output control unit440may cause a 3D printer connected to the recognizer development device20to generate a model, on the basis of the three-dimensional data corresponding to the erroneous recognition target or the like. 3. Hardware Configuration Example Next, a hardware configuration example common to the input/output terminal10, the recognizer development device20, and the information processing server40according to the embodiment of the present disclosure will be described.FIG.17is a block diagram illustrating a hardware configuration example of the input/output terminal10, the recognizer development device20, and the information processing server40according to the embodiment of the present disclosure. Referring toFIG.17, each of the input/output terminal10, the recognizer development device20, and the information processing server40has, for example, a processor871, a ROM872, a RAM873, a host bus874, a bridge875, an external bus876, an interface877, an input device878, an output device879, a storage880, a drive881, a connection port882, and a communication device883. Note that the hardware configuration illustrated here is an example, and some of the components may be omitted. 
Further, components other than the components illustrated here may be further included. (Processor871) The processor871functions as, for example, an arithmetic processing device or a control device, and controls the overall operation of each component or a part thereof on the basis of various programs recorded in the ROM872, the RAM873, the storage880, or a removable recording medium901. (ROM872and RAM873) The ROM872is a unit that stores a program read by the processor871, data used for calculation, and the like. The RAM873temporarily or permanently stores, for example, a program read by the processor871, various parameters that appropriately change when the program is executed, and the like. (Host Bus874, Bridge875, External Bus876, and Interface877) The processor871, the ROM872, and the RAM873are mutually connected via, for example, the host bus874capable of high-speed data transmission. On the other hand, the host bus874is connected to the external bus876having a relatively low data transmission speed via the bridge875, for example. Further, the external bus876is connected to various components via the interface877. (Input Device878) As the input device878, for example, a mouse, a keyboard, a touch panel, a button, a switch, a lever, and the like are used. Further, as the input device878, a remote controller capable of transmitting a control signal using infrared rays or other radio waves may be used. Further, the input device878includes a voice input device such as a microphone. (Output Device879) The output device879is a device capable of visually or audibly notifying the user of acquired information, for example, a display device such as a cathode ray tube (CRT), an LCD, or an organic EL, an audio output device such as a speaker or a headphone, a printer, a mobile phone, a facsimile, or the like. Further, the output device879according to the present disclosure includes various vibration devices capable of outputting tactile stimulation. (Storage880) The storage880is a device for storing various types of data. As the storage880, for example, a magnetic storage device such as a hard disk drive (HDD), a semiconductor storage device, an optical storage device, a magneto-optical storage device, or the like is used. (Drive881) The drive881is, for example, a device that reads information recorded on the removable recording medium901such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, or writes information to the removable recording medium901. (Removable Recording Medium901) The removable recording medium901is, for example, a DVD medium, a Blu-ray (registered trademark) medium, an HD DVD medium, various semiconductor storage media, or the like. Of course, the removable recording medium901may be, for example, an IC card on which a non-contact IC chip is mounted, an electronic device, or the like. (Connection Port882) The connection port882is a port for connecting an external connection device902such as a universal serial bus (USB) port, an IEEE1394 port, a small computer system interface (SCSI), an RS-232C port, or an optical audio terminal. (External Connection Device902) The external connection device902is, for example, a printer, a portable music player, a digital camera, a digital video camera, an IC recorder, or the like. 
(Communication Device883) The communication device883is a communication device for connecting to a network, and is, for example, a communication card for wired or wireless LAN, Bluetooth (registered trademark), or wireless USB (WUSB), a router for optical communication, a router for asymmetric digital subscriber line (ADSL), a modem for various communications, or the like. 4. Conclusion As described above, the system1according to the present disclosure outputs information regarding an erroneous recognition target at an early stage in a development cycle of a recognizer, thereby performing information output and other processing that can prevent rework in development of the recognizer and shorten a development period. The preferred embodiments of the present disclosure have been described in detail with reference to the accompanying drawings, but the technical scope of the present disclosure is not limited to such examples. It is obvious that a person with an ordinary skill in a technological field of the present disclosure could conceive of various alterations or corrections within the scope of the technical ideas described in the appended claims, and it should be understood that such alterations or corrections will naturally belong to the technical scope of the present disclosure. Furthermore, the effects described in the present specification are merely illustrative or exemplary and are not restrictive. That is, the technology according to the present disclosure can exhibit other effects obvious to those skilled in the art from the description of the present specification in addition to or in place of the above effects. Note that the following configurations also belong to the technical scope of the present disclosure. (1) An information processing apparatus comprising:an output control unit that controls display of information regarding an erroneous recognition target different from a predetermined recognition target, the erroneous recognition target being specified as having a possibility of erroneous recognition on the basis of a result of recognition processing on at least one piece of specifying data by a recognizer generated for recognizing the predetermined recognition target and a result of clustering processing on a target recognized by the recognition processing, whereinthe recognizer is generated by learning based on at least one piece of learning data,the at least one piece of learning data includes the predetermined recognition target and is data in substantially the same context, andthe specifying data is data in substantially the same context as the context of the at least one piece of learning data. (2) The information processing apparatus according to (1), whereinthe information regarding the erroneous recognition target is notification information for notifying the erroneous recognition target, andthe output control unit controls display of the notification information. (3) The information processing apparatus according to (1) or (2), whereinthe information regarding the erroneous recognition target is information indicating an evaluation of the recognition processing on the erroneous recognition target, andthe output control unit controls display of the information indicating the evaluation. (4) The information processing apparatus according to any one of (1) to (3), whereinthe output control unit further controls display of information indicating the context of the learning data. 
(5) The information processing apparatus according to (4), whereinthe output control unit further controls display of additional information regarding the context. (6) The information processing apparatus according to any one of (1) to (5), whereinthe information regarding the erroneous recognition target is information regarding confirmation as to whether or not the erroneous recognition target is the erroneously recognized target, andthe output control unit controls display of the information regarding the confirmation. (7) The information processing apparatus according to any one of (1) to (6), whereinthe output control unit further controls display of information regarding expansion of the learning data. (8) The information processing apparatus according to (7), further comprising:an expansion support unit that controls expansion processing of the at least one piece of learning data, on the basis of a result of specifying the erroneous recognition target. (9) The information processing apparatus according to (8), whereinthe information regarding the erroneous recognition target is information regarding confirmation as to whether or not the erroneous recognition target is the erroneously recognized target,the output control unit controls display of the information regarding the confirmation, andthe expansion support unit controls the expansion processing of the learning data, on the basis of feedback to the information regarding the confirmation. (10) The information processing apparatus according to (9), whereinthe expansion support unit performs control to expand the specifying data including the erroneous recognition target as the learning data. (11) The information processing apparatus according to (8), whereinthe specifying data is data acquired from a specifying data set on the basis of the context of the learning data. (12) The information processing apparatus according to (11), whereinthe expansion support unit controls the expansion processing of the learning data, on the basis of update of the specifying data set. (13) The information processing apparatus according to (12), whereinthe output control unit further controls display of information indicating update of the specifying data set, andthe expansion support unit controls the expansion processing of the learning data, on the basis of feedback to the information indicating the update. (14) The information processing apparatus according to (12), whereinthe output control unit further controls display of information indicating a change in recognition accuracy of the recognizer due to update of the specifying data set. (15) The information processing apparatus according to any one of (1) to (14), further comprising:an erroneous recognition target specifying unit that specifies the erroneous recognition target on the basis of a result of the recognition processing on the at least one piece of specifying data by the recognizer and a result of the clustering processing on the target recognized by the recognition processing, whereinthe output control unit controls display of information regarding the erroneous recognition target specified by the erroneous recognition target specifying unit. 
(16) The information processing apparatus according to (15), further comprising:a data classifying unit that executes the clustering processing on the target recognized by the recognition processing and classifies the recognized target into any one of a plurality of clusters, whereinthe erroneous recognition target specifying unit specifies the erroneous recognition target, on the basis of a result of the recognition processing and a result of classification of the target into the plurality of clusters by the data classifying unit. (17) The information processing apparatus according to (16), whereinthe erroneous recognition target specifying unit specifies a target corresponding to a cluster other than a cluster having the highest accuracy in the recognition processing, which has higher accuracy in the recognition processing than the other clusters, among the plurality of clusters, as the erroneous recognition target. (18) The information processing apparatus according to any one of (1) to (17), further comprising:a context recognition unit that recognizes the context of the learning data. (19) An information processing method comprising:causing a processor to controls display of information regarding an erroneous recognition target different from a predetermined recognition target, the erroneous recognition target being specified as having a possibility of erroneous recognition on the basis of a result of recognition processing on at least one piece of specifying data by a recognizer generated for recognizing the predetermined recognition target and a result of clustering processing on a target recognized by the recognition processing, whereinthe recognizer is generated by learning based on at least one piece of learning data,the at least one piece of learning data includes the predetermined recognition target and is data in substantially the same context, andthe specifying data is data in substantially the same context as the context of the at least one piece of learning data. (20) A program for causing a computer to function as an information processing apparatus, whereinthe information processing apparatus includes an output control unit that controls display of information regarding an erroneous recognition target different from a predetermined recognition target, the erroneous recognition target being specified as having a possibility of erroneous recognition on the basis of a result of recognition processing on at least one piece of specifying data by a recognizer generated for recognizing the predetermined recognition target and a result of clustering processing on a target recognized by the recognition processing,the recognizer is generated by learning based on at least one piece of learning data,the at least one piece of learning data includes the predetermined recognition target and is data in substantially the same context, andthe specifying data is data in substantially the same context as the context of the at least one piece of learning data. REFERENCE SIGNS LIST 1SYSTEM10INPUT/OUTPUT TERMINAL20RECOGNIZER DEVELOPMENT DEVICE210DATA SET MANAGEMENT UNIT220RECOGNIZER DEVELOPMENT UNIT240COMMUNICATION UNIT250STORAGE UNIT260CONTROL UNIT30NETWORK40INFORMATION PROCESSING SERVER410CONTEXT RECOGNITION UNIT420ERRONEOUS RECOGNITION TARGET SPECIFYING UNIT430DATA CLASSIFYING UNIT440OUTPUT CONTROL UNIT450EXPANSION SUPPORT UNIT460SERVER COMMUNICATION UNIT470STORAGE UNIT480CONTROL UNIT | 72,158 |
11861884 | To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the drawings. It is contemplated that elements and features of one embodiment may be beneficially incorporated in other embodiments without further recitation. DETAILED DESCRIPTION Aspects of the present disclosure provide apparatuses, methods, processing systems, and computer-readable mediums for training an information extraction transformer model architecture. There are myriad tasks in which information from one domain (e.g., a physical document) needs to be extracted and used in another domain (e.g., in a software application). For example, a receipt may include information about a vendor (e.g., name), information about goods purchased (e.g., item names, quantities, amounts), information about the transaction (e.g., time and location), and others. All of these key information elements in the receipt may be necessary for an application that, for example, tracks expenditures for tax reasons. Unfortunately, manual (e.g., human) transcription of such data is often flawed, and, perhaps relatedly, tedious and time consuming. Mistranscription of such data may have serious consequences and thus is a technical problem needing solving. Machine learning provides an effective technical solution for information extraction tasks. In particular, named entity recognition (NER) is a type of natural language processing (NLP) task performed by machine learning models that involves extracting and identifying key information from document text, such as in the text extracted from a receipt example above. An information element that is extracted and categorized, such as a transaction date in a receipt, is referred to as an “entity.” Generally, an entity can be any word, series of words, arrangement of words, and/or image region that consistently refers to the same thing. For example, a name, trademark, and/or logo (image) may consistently refer to an organization. Certain machine learning model architectures have proven particular adept at named entity recognition (NER) tasks, such as transformer-based models. Generally, transformer models may be implemented as neural networks with elements, such as various types of attention elements, that learn context and thus meaning by tracking relationships in sequential data. Further, multi-modal transformer models may perform NER tasks on data including text, layout, and image elements. Multi-modal transformer models represent the state-of-the-art for text- and image-centric key information extraction (KIE) tasks. While machine learning model architectures, such as multi-modal transformers, provide a nominal solution to KIE tasks, they still generally rely on a large amount of manually and accurately labeled data for adequate training. Without a large amount of consistently and accurately labeled data, such models generally will not achieve acceptable levels of task performance. While the answer to this technical problem may seem straightforward—just obtain more labeled data—the vast amount of data types, contexts, and tasks to perform based on the data, make obtaining sufficient labeled data a technically difficult and prohibitively costly problem. This problem is only compounded by the desire to continually train and fine-tune models to incorporate new information from the relevant task domain. 
Aspects described herein overcome this technical problem by providing an iterative training approach that applies supervised learning to a small, strongly labeled dataset in combination with weakly-supervised learning to a dataset having model generated pseudo-labels. Further, aspects described herein apply transfer learning and active learning to further improve the iterative training approach. In particular, aspects described herein may start with a first, multi-modal transformer model pre-trained on a dataset that is out of domain for the target task. Out of domain in this context generally means that the data does not relate directly to the type or context of the actual task for which a model is being trained. For example, a transformer model may have been trained on generic natural language text containing documents, but not a set of documents specific to the target task. This first model may be “pre-trained” in an unsupervised fashion based on in-domain unlabeled data. For example, a generic multi-modal transformer model having never been trained on receipt images may be trained with a receipt image dataset where the domain of the target task is receipt entity recognition. The first model may be “pre-trained” in a continuous manner such that the first model may be initialized from a pre-trained state that may be publically available, then trained in an iterative fashion to continue to improve as an underlying base model used to train a third model, which third model may also be trained along with a second model, as described in greater detail below. Next, a first labeled dataset, which may be an open-source dataset, may be used to train a second multi-modal transformer model, which, after being fully trained, can be used to create pseudo-labels for the in-domain unlabeled data. Next, a third model is generated via training of the first model to perform key information extraction based on a second labeled dataset, which may be a closed-source labeled dataset, comprising one or more labels, the weakly-labeled dataset as pseudo-labels generated by the second model (e.g., as the generated pseudo-labels), or combinations thereof. In an embodiment, the third model may be trained on the closed-source labeled dataset as a small strongly labeled dataset (e.g., human annotated) when available in place of the pseudo-labels. Training of the first model to generate the third model based on the closed-source labeled dataset in place of the generated pseudo-labels of the second model allows for knowledge transfer from the first model to the third model and label enrichment. Training of the first model to generate the third model based on the generated pseudo-labels of the second model allows for knowledge transfer from the second model to the third model and iterative label enrichment. The unlabeled dataset may further be processed by the third multimodal transformer model to update the pseudo-labels for the unlabeled data. Owing to the nature of the partial weak-supervision for the third model (e.g., based on the pseudo-labels), this training may use an uncertainty-aware training objective such as through a noise-aware loss to allow the model to dynamically and differentially learn from different pseudo-labels based on the amount of label uncertainty. Next the third model may be fine-tuned based on the strongly-labeled dataset. In some cases, the third model may be further fine-tuned through a process of active learning. 
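Before turning to the details of the active learning step, the overall approach can be summarized in a short orchestration sketch. Every stage is passed in as a callable because the training routines are described here only at a functional level; all parameter and helper names are assumptions made for illustration, not part of the disclosed method.

    # High-level sketch of the iterative training approach (Stages 1-5). Each stage
    # is supplied as a callable; nothing here fixes a particular model or framework.
    def train_information_extraction_model(
            unlabeled_docs, labeled_public, labeled_private,
            pretrain, train_supervised, pseudo_label, train_noise_aware,
            fine_tune, select_uncertain, request_human_labels):
        first_model = pretrain(unlabeled_docs)                             # Stage 1
        second_model = train_supervised(first_model, labeled_public)       # Stage 2
        pseudo_labels = pseudo_label(second_model, unlabeled_docs)
        third_model = train_noise_aware(                                   # Stage 3
            first_model, unlabeled_docs, pseudo_labels, labeled_private)
        third_model = fine_tune(third_model, labeled_private)              # Stage 4
        uncertain_docs = select_uncertain(third_model, unlabeled_docs)     # Stage 5
        labeled_private = labeled_private + request_human_labels(uncertain_docs)
        return fine_tune(third_model, labeled_private)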
For example, to improve the model performance in high uncertainty/low confidence data points, uncertainty-based active learning samples a subset of high uncertainty pseudo-labels for labeling by humans. The new input/label pairs from this high uncertainty set are then added to the small set of strongly labeled data. This growing set of strongly-labeled data may be used for active and continual fine tuning. In some cases, the samples selected for strong labeling are selected based on a measure of uncertainty of the model's output, which is calibrated during training. This allows uncertainty-based sampling of samples for strong labeling. The aforementioned training approach has many advantages over existing methods, and provides beneficial technical effects. For example, the training architecture allows for training a model with a small proportion of strongly-labeled data, while obtaining the technical effect of improved model performance, as if the model were trained with a large proportion of strongly-labeled data. Further, the training approach described herein enables improved identification and classification of extracted information from multimodal documents. Further, the uncertainty-aware loss ensures the updated model is not overfit to the high uncertainty/low confidence data points. Further yet, the confidence calibration that is used as part of the combination of weakly-labeled and strongly-labeled data allows for uncertainty-based sampling during active learning, thereby having the technical effect of improving pseudo-label quality via the active learning process and rapidly incorporating new information into the model's training. Example Methodologies for Use and Training of an Information Extraction Transformer Model FIG.1depicts an example flow100for processing multimodal input data102with an information extraction transformer model104(also referenced herein as model architecture104). In particular, a multimodal data input102is received by the model architecture104. The multimodal data input102may include text features, layout features, image features, or combinations thereof. In example embodiments, text features may include textual characters embedded in a document, layout features may include a style of a layout of the document (such as a table of purchases alongside amounts in a receipt), and image features may include one or more images in the document (e.g., logos, photographs, QR codes, and/or other ornamental representations). In some cases, multimodal data input102may be a document, an image, an image of a document, and/or another electronic form of data that may or may not include metadata. In some cases, multimodal data input102may be captured by a sensor of a device, such as a scanner, camera, or the like. The information extracting transformer model104is configured to process the multimodal data input102to generate extracted data106as recognized entity information (e.g., key entity information) that is classified into one or more classification types (i.e., whether an extracted item is a description, quantity, or price in a receipt that is associated with a product sold by an identified vendor at an identified time). The identification of various entities and their corresponding values may be used as inputs (e.g., to corresponding fields) to application108without manual intervention, thus providing a faster and more accurate method for data capture, transcription, and entry.
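A concrete, if simplified, illustration of this flow is a routine that takes the recognized entity information and fills application fields. The entity type names and field names below are assumptions for the receipt example, and the model is abstracted as a callable returning labeled entities; this is a sketch, not the disclosed implementation.

    # Sketch of mapping extracted-data-style entity output into application fields.
    # Entity types ("VENDOR", "DATE", "DESCRIPTION", "QUANTITY", "PRICE") are assumed.
    def extract_to_fields(extract_fn, document):
        entities = extract_fn(document)  # e.g., [{"type": "VENDOR", "value": "Store A"}, ...]
        fields = {"vendor": None, "date": None, "line_items": []}
        for entity in entities:
            if entity["type"] == "VENDOR":
                fields["vendor"] = entity["value"]
            elif entity["type"] == "DATE":
                fields["date"] = entity["value"]
            elif entity["type"] in ("DESCRIPTION", "QUANTITY", "PRICE"):
                fields["line_items"].append((entity["type"], entity["value"]))
        return fields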
For example, application108may be a financial software program that processes the recognized entity information106. Beneficially, the methods for training information extracting transformer model104described herein improve task performance compared to conventional methods because they are able to leverage larger training datasets, including self-generated pseudo-labels, without human intervention. In embodiments, the multimodal data input102may include financial documents such as bills, receipts, tax documents, statements, or similar documents including text, layout, and image data. For example, an amount of $10 associated with a second item purchased at a store as reflected on a receipt as shown inFIG.1may be extracted, labeled, and classified accordingly by the information extracting transformer model104. The extracted and identified data may then be entered into the application108. Example Training Flow for an Information Extracting Transformer Model FIG.2depicts an embodiment of a training flow200of the information extracting transformer model, such as model104ofFIG.1. As will be described in greater detail further below, process block202is part of a first training stage (e.g., Stage 1 ofFIG.2) in which a first model is pre-trained on a dataset, which may include a large amount of unlabeled data. Process blocks208and210are part of a second training stage (e.g., Stage 2 ofFIG.2) associated with pseudo-label generation, process blocks212and214are part of a third training stage (e.g., Stage 3 ofFIG.2) associated with pseudo-label completion and uncertainty-aware training, process block216is part of a fourth training stage (e.g., Stage 4 ofFIG.2) associated with fine-tuning, and process block218is part of a fifth training stage (e.g., Stage 5 ofFIG.2) associated with active learning. Each stage will be described in greater detail further below in association with description of the corresponding process blocks. In process block202, the first multimodal transformer model (e.g.,302inFIG.3) is pre-trained on unlabeled data of input block204. Pre-training at process block202is useful to build a general model of contextual multi-modal document representations before being trained in a task oriented manner for specific KIE tasks, such as multi-modal named entity recognition for documents. Pre-training a model generally involves an initial training of a model on the sequential occurrence of elements in an unlabeled dataset and then using parts or all of the parameters from the pre-trained model as an initialization for another model on another task or dataset. As described below, pre-training schemes may be used at process block202, such as masked language modeling (MLM) for text modality, masked image modeling (MIM) for image modality, and word-path alignment (WPA) for cross-modal alignment to predict whether a corresponding image patch of a text word is masked. In embodiments, the first multimodal transformer model may be referred to as a “base” multimodal transformer model and in some cases may be a state-of-the-art multimodal transformer model configured to perform a variety of document AI tasks, such as document classification and key information extraction. The first (base) multimodal transformer model may be pre-trained on a dataset that is out of domain or otherwise not relevant to the task for which the updated model is being ultimately trained by training flow200. In some cases the out of domain dataset may be a private domain dataset (e.g., one that is not available to the public). 
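As one example of what the Stage 1 objectives look like in practice, the following sketch shows the token-masking step of masked language modeling for the text modality. The 80/10/10 corruption split follows common MLM practice rather than anything specific to training flow200, and the helper names are illustrative only.

    import random

    MASK_TOKEN = "[MASK]"

    # Illustrative MLM corruption for the text modality: roughly 15% of tokens are
    # selected; of those, 80% become [MASK], 10% a random token, 10% stay unchanged.
    # Only selected positions keep a label for the reconstruction loss.
    def mask_tokens_for_mlm(tokens, vocab, mask_prob=0.15, seed=None):
        rng = random.Random(seed)
        inputs, labels = list(tokens), [None] * len(tokens)
        for i, token in enumerate(tokens):
            if rng.random() < mask_prob:
                labels[i] = token
                roll = rng.random()
                if roll < 0.8:
                    inputs[i] = MASK_TOKEN
                elif roll < 0.9:
                    inputs[i] = rng.choice(vocab)
                # otherwise the original token is kept
        return inputs, labels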
The multimodal transformer model may be configured to be trained for document AI-type tasks using unified text and image masking objectives and pre-training schemes such as MLM, MIM, and WPA, as described above. Further, the model may be pre-trained at process block202utilizing unsupervised training to allow the pre-trained model to capture contextual representations that occur in the multimodal document datasets. In process block210, the second multimodal transformer model (e.g.,304ofFIG.3) is trained on a labeled dataset for KIE task205. In some cases, this training data set may be an open source data set that is available in the public domain. Thus, the second multimodal transformer model may be trained utilizing existing and available open source data sets, beneficially reducing the resource requirements for building, maintaining, and/or using internal closed-source datasets. In embodiments, the second multimodal transformer model may be a multimodal transformer model trained on the open source labeled dataset for a KIE task. The trained second multimodal transformer model is used to generate pseudo-labels for the unlabeled data204(that may be retrieved from a private domain). Thus, the second multimodal transformer model is trained with a small proportion of strongly-labeled data along with a larger proportion of unlabeled data to generate pseudo-labels for the unlabeled data to obtain the technical effect of improved model performance via use of the generated pseudo-labels. In process block208, transfer learning occurs via transfer of knowledge from the first multimodal transformer model to the third multimodal transformer model (e.g.,308ofFIG.3). Transfer learning refers to an approach in which a machine learning model trained in one domain is used to learn and/or perform a task in a different but maybe related domain. The third multimodal transformer model is thus initialized based on a state (set of model parameters) of the first multimodal transformer model that should capture the contextual representation learned from the unlabeled data in block204. In process block212ofFIG.2, the third multimodal transformer model is configured to generate updated pseudo-labels (e.g.,306ofFIG.3). In embodiments, given the pseudo-labels from the second multimodal transformer and the third multimodal transformer model together with respective uncertainty scores, the processor may perform an uncertainty-aware label completion on the pseudo-labels to generate the updated correct pseudo-labels reducing the incompleteness of the pseudo-labels that are generated. In process block214, the third multimodal transformer model is trained in an uncertainty-aware manner based on at least (i) the pseudo-labels as updated in process block212and (i) the noise-aware loss function (e.g.,310ofFIG.3) to generate the updated multimodal transformer model (e.g.,312ofFIG.3). In some embodiments, a calibrated confidence score of each of the pseudo-labels based on the second multimodal transformer model. Then during the training of the third multimodal transformer model the noise-aware loss function, which takes account of the calibrated scores as weight coefficient for each pseudo label, is used to compute the iterative updates of the parameters. In embodiments, the calibrated confidence score may be determined utilizing a Dirichlet calibration, which is a model agnostic multiclass calibration method applicable to classifiers from any model class and derived from Dirichlet distributions. 
Dirichlet distributions are a family of continuous multivariate probability distributions parameterized by a vector of positive real numbers and are a multivariate generalization of the beta distribution. The beta distribution is a family of continuous probability distributions defined in terms of two positive parameters on the interval between zero and one, which parameters appear as exponents that control the distribution shape. The noise-aware loss function may be based on an estimated confidence of the updated pseudo-labels, such as by using the Dirichlet calibration, to adjust the confidence score of updated pseudo-labels to reflect the estimated accuracy of pseudo-labels. A sample equation for the noise-aware loss function is set forth below. L_{NA}(\tilde{Y}^{c}, f(\tilde{X};\theta)) = \hat{P}(\tilde{Y}^{c} = \tilde{Y} \mid \tilde{X}) \, L(\tilde{Y}^{c}, f(\tilde{X};\theta)) + \hat{P}(\tilde{Y}^{c} \neq \tilde{Y} \mid \tilde{X}) \, L_{-1}(\tilde{Y}^{c}, f(\tilde{X};\theta)) (EQUATION 1) In Equation 1, \hat{P}(\tilde{Y}^{c} = \tilde{Y} \mid \tilde{X}) is the estimated confidence of the updated corrected pseudo-labels. The loss functions L and L_{-1} represent the negative log-likelihood and the negative log-unlikelihood, respectively. Further, \tilde{X} represents the input unlabeled data, \tilde{Y} represents the true labels, \tilde{Y}^{c} represents the corrected pseudo-labels, and f(\tilde{X};\theta) represents the model prediction. In the third training stage, pseudo-label completion and uncertainty-aware training are performed on the third multimodal transformer model based on one or more labels from a closed-source labeled dataset (e.g.,526ofFIG.5). In some cases, the closed-source labeled dataset may be retrieved from a private domain, and the one or more labels may be manually annotated. A private domain is generally a domain that is not externally accessible without, for example, access rights, and is private to an internal domain, whereas a public domain is generally a domain that is externally accessible and is available across domains without requiring access rights. A public domain may be used, for example, for retrieval of open-source datasets. As set forth above, a calibrated confidence score of each of the pseudo-labels predicted by the second multimodal transformer model and the third multimodal transformer model may be input into the noise-aware loss function. The calibrated confidence score may be calculated, as described above, via a Dirichlet calibration. The noise-aware loss function takes into account the confidence scores to weight the pseudo-labels. Thus, when used, the one or more labels from the closed-source labeled dataset may be weighted by the noise-aware loss function based on the calibrated confidence score of each of the pseudo-labels generated by the second multimodal transformer model and the third multimodal transformer model. In embodiments, the one or more labels from the closed-source labeled dataset may be used as label training samples in a few-shot learning approach. In a few-shot learning approach, a limited number of labeled examples are provided for each new class, while for a zero-shot learning approach, no labeled data is available for new classes. Thus, for a zero-shot learning approach embodiment, the one or more labels from the closed-source labeled dataset are not used.
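Equation 1 can be written directly as a per-sample loss. The sketch below assumes the model's probability for the corrected pseudo-label and the calibrated confidence that the pseudo-label is correct are already available as scalars; in an actual training loop the same weighting would be applied over batches inside the chosen framework.

    import math

    # Per-sample noise-aware loss corresponding to EQUATION 1: the negative
    # log-likelihood of the corrected pseudo-label is weighted by the calibrated
    # confidence that the pseudo-label is correct, and the negative log-unlikelihood
    # by the complementary probability.
    def noise_aware_loss(p_pseudo_label, calibrated_confidence, eps=1e-12):
        nll = -math.log(max(p_pseudo_label, eps))        # L
        nlu = -math.log(max(1.0 - p_pseudo_label, eps))  # L_{-1}
        return (calibrated_confidence * nll
                + (1.0 - calibrated_confidence) * nlu)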
In process block216, the updated multimodal transformer model is optionally fine-tuned via an active learning loop. For example, the updated third multimodal transformer model may be fine-tuned based on the one or more labels of the closed-source labeled dataset. In process block218, one or more documents may be sampled for labeling and added to the set of labeled documents in process block206for continuous active learning (e.g., training the updated multimodal transformer model). The newly labeled documents206may then be utilized to continue to fine-tune the updated multimodal transformer model (e.g., for the KIE task). Thus, the updated multimodal transformer model may be continually improved by such active learning. In embodiments in which the data is continually able to be labeled, an uncertainty-based active learning loop may thus be employed that continuously selects such one or more documents for labeling that meet a threshold. For example, documents having a calibrated confidence score determined using the confidence values associated with the pseudo-labels may be selected when their calibrated confidence score is in a score range (e.g., higher or lower than a threshold value). In some cases, the selected documents in the score range may be ranked from a highest calibrated confidence score to a lowest calibrated confidence score, and a subset of the ranked documents may be selected for manual labeling. FIG.3depicts a flow of aspects300associated with the training flow200ofFIG.2. The flow of aspects300includes a first multimodal transformer model302, a second multimodal transformer model304, pseudo-labels306, a third multimodal transformer model308, a noise-aware loss function310, and an updated third multimodal transformer model312in an order of creation and processing in accordance with the training flow200ofFIG.2. As described above with respect toFIG.2, the first multimodal transformer model302is pre-trained on unlabeled data in Stage 1. In Stage 2, the second multimodal transformer model304is trained on a labeled dataset (e.g., for a KIE task) and is used to generate pseudo-labels306for the unlabeled data. Further, transfer learning occurs by the subsequent warm initialization of the latter models from the prior models in the training flow. The third multimodal transformer model308is further trained based on the generated pseudo-labels306. In Stage 3, the third multimodal transformer model308may generate updated pseudo-labels306via an uncertainty-aware label completion (as described above with respect to process block212ofFIG.2). The third multimodal transformer model308may further be trained based on the updated pseudo-labels306and the noise-aware loss function310(as described above with respect to process block214ofFIG.2) to generate the updated multimodal transformer model312. Example Operations for Training an Information Extracting Transformer Model FIG.4depicts an embodiment of a process400to implement the training flow200ofFIG.2utilizing the flow of aspects300ofFIG.3. In block402, the first multimodal transformer model302(FIG.3) is pre-trained on an unlabeled dataset (e.g.,522ofFIG.5), as described in detail above (e.g., corresponding to the unlabeled data of input block204as input into process block202ofFIG.2). The unlabeled dataset may include documents including text, layout, and/or image features. In some embodiments, the unlabeled dataset may be retrieved from a private domain.
In block404, corresponding to process block210ofFIG.2, the second multimodal transformer model304(FIG.3) is trained on an open source labeled dataset (e.g.,524ofFIG.5) comprising documents including text features and layout features to perform a named entity recognition task. The open source labeled dataset may include documents including text features and layout features. In embodiments, the documents of the open source labeled dataset may additionally or alternatively include image features and other multimodal features. The open source labeled dataset may be retrieved from a public domain. In block406, corresponding to process block210ofFIG.2, the unlabeled dataset is processed with the second multimodal transformer model304to generate pseudo-labels306for the unlabeled dataset. In block408, corresponding to process block208ofFIG.2, the third multimodal transformer model308is trained, based on at least the pseudo-labels306generated by the second multimodal transformer model304in block406, to perform the named entity recognition task. In embodiments, the third multimodal transformer model308is an updated version of the first multimodal transformer model302corresponding to process block202ofFIG.2. In block410, updated pseudo-labels306are generated by the second and the third multimodal transformer models304and308(corresponding to process block212ofFIG.2) based on a label completion process. In block412, corresponding to process block214ofFIG.2, the third multimodal transformer model308is further trained using the noise-aware loss function310and the updated pseudo-labels306to generate the updated third multimodal transformer model312. In embodiments, the third multimodal transformer model308may be trained further using a closed-source labeled dataset (e.g.,526ofFIG.5) comprising one or more labels. The one or more labels of the closed-source labeled dataset may be human annotated labels. Further, the noise-aware loss function310may include a calibrated confidence score of each of the pseudo-labels generated by the second multimodal transformer model304and the third multimodal transformer model308as an input used to weight the pseudo-labels306and the one or more labels from the closed-source labeled dataset. The updated third multimodal transformer model312may be fine-tuned based on the closed-source labeled dataset. Further, one or more new documents to be labeled for further training of the updated third multimodal transformer model312may be identified (corresponding to process block218ofFIG.2). In some embodiments, the one or more new documents may be identified based on a set of calibrated confidence scores indicative of model uncertainty for the one or more unlabeled documents. When the calibrated confidence score is above a predetermined threshold, a corresponding document may be identified to be labeled and/or as a labeled document (corresponding to input block206ofFIG.2) for further training of the updated third multimodal transformer model312. The updated third multimodal transformer model312may be utilized to classify and label key information elements in one or more multimodal documents, such as the documents input as multimodal data input102as described above inFIG.1. Note thatFIG.4is just one example of a method, and other methods including fewer, additional, or alternative steps are possible consistent with this disclosure.
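The uncertainty-driven selection of documents for manual labeling described above can be sketched as follows; the aggregation of per-pseudo-label confidences into a single document score, the threshold value, and the batch size are illustrative assumptions rather than values from the patent.

```python
# A minimal sketch of selecting documents for the active learning loop,
# assuming each document already carries a calibrated confidence score
# aggregated from its pseudo-labels.
from dataclasses import dataclass

@dataclass
class ScoredDocument:
    doc_id: str
    calibrated_confidence: float  # aggregated, Dirichlet-calibrated score

def select_for_labeling(documents, threshold=0.6, batch_size=10):
    """Pick documents whose calibrated confidence falls below a threshold,
    rank them from most to least uncertain, and return a batch for manual
    labeling."""
    candidates = [d for d in documents if d.calibrated_confidence < threshold]
    candidates.sort(key=lambda d: d.calibrated_confidence)  # most uncertain first
    return candidates[:batch_size]

# Example usage
docs = [ScoredDocument(f"doc-{i}", c) for i, c in enumerate([0.95, 0.42, 0.71, 0.30])]
to_label = select_for_labeling(docs, threshold=0.6, batch_size=2)
```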
Example Processing System for Training an Information Extraction Transformer Model FIG.5depicts an example processing system500configured to perform various aspects described herein, including, for example, the methods of flow100, training flow200, flow of aspects300, and process400as described above with respect toFIGS.1-4. Processing system500is generally an example of an electronic device configured to execute computer-executable instructions, such as those derived from compiled computer code, including without limitation personal computers, tablet computers, servers, smart phones, smart devices, wearable devices, augmented and/or virtual reality devices, and others. In the depicted example, processing system400includes one or more processors502, one or more input/output devices504, one or more display devices506, and one or more network interfaces508through which processing system500is connected to one or more networks (e.g., a local network, an intranet, the Internet, or any other group of processing systems communicatively connected to each other), and computer-readable medium512. In the depicted example, the aforementioned components are coupled by a bus510, which may generally be configured for data and/or power exchange amongst the components. Bus510may be representative of multiple buses, while only one is depicted for simplicity. Processor(s)502are generally configured to retrieve and execute instructions stored in one or more memories, including local memories like the computer-readable medium512, as well as remote memories and data stores. Similarly, processor(s)502are configured to retrieve and store application data residing in local memories like the computer-readable medium512, as well as remote memories and data stores. More generally, bus510is configured to transmit programming instructions and application data among the processor(s)502, display device(s)506, network interface(s)508, and computer-readable medium512. In certain embodiments, processor(s)502are included to be representative of a one or more central processing units (CPUs), graphics processing unit (GPUs), tensor processing unit (TPUs), accelerators, and other processing devices. Input/output device(s)504may include any device, mechanism, system, interactive display, and/or various other hardware components for communicating information between processing system500and a user of processing system500. For example, input/output device(s)504may include input hardware, such as a keyboard, touch screen, button, microphone, and/or other devices for receiving inputs from the user. Input/output device(s)504may further include display hardware, such as, for example, a monitor, a video card, and/or other another device for sending and/or presenting visual data to the user. In certain embodiments, input/output device(s)504is or includes a graphical user interface. Display device(s)506may generally include any sort of device configured to display data, information, graphics, user interface elements, and the like to a user. For example, display device(s)506may include internal and external displays such as an internal display of a tablet computer or an external display for a server computer or a projector. Display device(s)506may further include displays for devices, such as augmented, virtual, and/or extended reality devices. Network interface(s)508provides processing system500with access to external networks and thereby to external processing systems. 
Network interface(s)508can generally be any device capable of transmitting and/or receiving data via a wired or wireless network connection. Accordingly, network interface(s)508can include a communication transceiver for sending and/or receiving any wired and/or wireless communication. For example, network interface(s)508may include an antenna, a modem, a LAN port, a Wi-Fi card, a WiMAX card, cellular communications hardware, near-field communication (NFC) hardware, satellite communication hardware, and/or any wired or wireless hardware for communicating with other networks and/or devices/systems. In certain embodiments, network interface(s)508includes hardware configured to operate in accordance with the Bluetooth® wireless communication protocol. Computer-readable medium512may be a volatile memory, such as a random access memory (RAM), or a nonvolatile memory, such as nonvolatile random access memory, phase change random access memory, or the like. In this example, computer-readable medium512includes a pre-training component514, a training component516, a processing component518, a generating component520, an unlabeled dataset522, an open source labeled dataset524, and a closed-source labeled dataset526. In certain embodiments, the pre-training component514is configured to pre-train the first multimodal transformer model (e.g.,302ofFIG.3) on the unlabeled dataset522comprising multimodal documents including at least text features and layout features as set forth above in block402ofFIG.4. The training component516is configured to train one or more models as described herein. As a non-limiting example, the training component516is configured to train the second multimodal transformer model (e.g.,304ofFIG.3) on the open source labeled dataset524comprising multimodal documents including at least text features and layout features to perform the named entity recognition task as set forth in block404ofFIG.4. The processing component518is configured to process the unlabeled dataset522with the second multimodal transformer model (e.g.,304ofFIG.3) to generate pseudo-labels306for the unlabeled dataset522as set forth in block406ofFIG.4. The training component516may further be configured to train the first multimodal transformer model to perform the named entity recognition task as set forth in block408ofFIG.4based on at least the pseudo-labels306generated by the second multimodal transformer model304to generate the third multimodal transformer model (e.g.,308ofFIG.3). The generating component520is configured to generate updated pseudo-labels (e.g.,306ofFIG.3) based on label completion predictions from the third multimodal transformer model308as also set forth in block410ofFIG.4. The training component516may further be configured to train the third multimodal transformer model308using a noise-aware loss function310and the updated pseudo-labels306to generate the updated third multimodal transformer model312as set forth in block412ofFIG.4. Note thatFIG.5is just one example of a processing system consistent with aspects described herein, and other processing systems having additional, alternative, or fewer components are possible consistent with this disclosure.
Example Clauses Implementation examples are described in the following numbered clauses: Clause 1: A method for training an information extraction transformer model architecture, comprising pre-training a first multimodal transformer model on an unlabeled dataset comprising documents including text features and layout features; training a second multimodal transformer model on source first labeled dataset comprising documents including text features and layout features to perform a key information extraction task; processing the unlabeled dataset with the second multimodal transformer model to generate pseudo-labels for the unlabeled dataset; training the first multimodal transformer model to perform the key information extraction task based on a second labeled dataset comprising one or more labels, the pseudo-labels generated by the second multimodal transformer model, or combinations thereof, to generate a third multimodal transformer model; generating updated pseudo-labels based on label completion predictions from the third multimodal transformer model; and training the third multimodal transformer model using a noise-aware loss function and the updated pseudo-labels to generate an updated third multimodal transformer model. Clause 2: The method in accordance with Clause 1, wherein the unlabeled dataset is retrieved from a private domain, the first labeled dataset comprises an open source labeled dataset retrieved from a public domain, and the second labeled dataset comprises a closed-source labeled dataset. Clause 3: The method in accordance with any of one of Clauses 1-2, wherein the unlabeled dataset and the first labeled dataset each further comprise documents including image features. Clause 4: The method in accordance with any one of Clauses 1-3, further comprising training the third multimodal transformer model based on the second labeled dataset comprising the one or more labels when available in place of the pseudo-labels generated by the second multimodal transformer model. Clause 5: The method in accordance Clause 4, wherein the noise-aware loss function comprises a calibrated confidence score of each of the second multimodal transformer model and the third multimodal transformer model as an input used to weight the pseudo-labels and the one or more labels from the second labeled dataset. Clause 6: The method in accordance with any one of Clauses 4-5, further comprising fine-tuning the updated third multimodal transformer model based on the second labeled dataset. Clause 7: The method in accordance with any one of Clauses 1-6, further comprising identifying one or more new documents to be labeled for further training of the updated third multimodal transformer model based on calibrated confidence scores indicative of an uncertainty of the pseudo-labels for the one or more new documents, wherein the confidence scores are within a predetermined threshold. Clause 8: The method in accordance with any one of Clauses 1-7, further comprising utilizing the updated third multimodal transformer model to classify and label key information elements in one or more multimodal documents. Clause 9: A processing system, comprising: a memory comprising computer-executable instructions; and a processor configured to execute the computer-executable instructions and cause the processing system to perform operations comprising a method in accordance with any one of Clauses 1-8. 
Clause 10: A processing system, comprising means for performing operations comprising a method in accordance with any one of Clauses 1-8. Clause 11: A non-transitory computer-readable medium comprising computer-executable instructions that, when executed by a processor of a processing system, cause the processing system to perform operations comprising a method in accordance with any one of Clauses 1-8. Clause 12: A computer program product embodied on a computer-readable storage medium comprising code for performing operations comprising a method in accordance with any one of Clauses 1-8. ADDITIONAL CONSIDERATIONS The preceding description is provided to enable any person skilled in the art to practice the various embodiments described herein. The examples discussed herein are not limiting of the scope, applicability, or embodiments set forth in the claims. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments. For example, changes may be made in the function and arrangement of elements discussed without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as appropriate. For instance, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. Also, features described with respect to some examples may be combined in some other examples. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method that is practiced using other structure, functionality, or structure and functionality in addition to, or other than, the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c). As used herein, the term “determining” encompasses a wide variety of actions. For example, “determining” may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” may include resolving, selecting, choosing, establishing and the like. The methods disclosed herein comprise one or more steps or actions for achieving the methods. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims. Further, the various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. 
The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to a circuit, an application specific integrated circuit (ASIC), or processor. Generally, where there are operations illustrated in figures, those operations may have corresponding counterpart means-plus-function components with similar numbering. The following claims are not intended to be limited to the embodiments shown herein, but are to be accorded the full scope consistent with the language of the claims. Within a claim, reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.” All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. | 42,956 |
11861885 | DESCRIPTION OF SPECIFIC EMBODIMENTS OF THE INVENTION The present invention, in some embodiments thereof, relates to a system and method for characterization of plants and, more particularly, but not exclusively, to a system and method for characterization plants that are members of the Cannabaceae family, and more specifically the genus ofCannabisand the species ofCannabis sativa. The invention provides a reliable, unbiased, automatic assessment of the status of plants that are members of the Cannabaceae family. Some embodiments of the invention provide a system configured to diagnoseCannabis sativaplant status, automatically, by analysis of plant images, while accumulating the analysis' results in databases. This is advantageous for evaluation of the plant, e.g.Cannabis, for cultivation purposes, and to detect the chemical composition of the finished product. For cultivators such a decision support system may consist of at least one of: (a) determining the plant's maturity, (b) determining harvesting time, (c) determining progression of the drying and curing processes (by means of trichomes inspection). Some embodiments of the invention, allow one to assess the chemical composition of the finished product (i.e., for potency) to determine the plant's worth/value and/or the mode/dosage of consumption. Referring toFIG.1of the drawings, reference is first made to the structure ofCannabisplant as illustrated inFIG.1. As illustrated, theCannabisplant has leaves, stem, nodes and flowers. The female flowers (enlarged in the bottom right circle) are the parts that contain the majority of the Active Pharmacological Ingredients (APIs), e.g., psychoactive compounds. In the top left square, an actual image of a portion of the flower is provided. Further zooming in, the flowers and the leaves contain a forest-like resin glands known as trichomes. The trichomes (enlarged in the middle right circle) contain the active chemical compounds. In the top right square, an actual magnified image of few trichomes is provided. The main psychoactive constituent of trichomes is tetrahydroCannabinol (THC). TheCannabisplant contains more than 500 compounds, among them at least 113 Cannabinoids. Besides THC, and Cannabidiol (CBD), most of the Cannabinoids are only produced in trace amounts. CBD is not psychoactive but has been shown to have medicinal positive effects and to modify the effect of THC in the nervous system. Differences in the chemical composition ofCannabisvarieties, may produce different effects in humans. As used herein, the term “Trichomes” means fine outgrowths or appendages found on plants of the Cannabaceae family, e.g.,Cannabisplants. Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. The invention is capable of other embodiments or of being practiced or carried out in various ways. Referring is now made toFIG.2.FIG.2illustrates a simplified flow chart of a method for characterization of Cannabaceae plants using macro photography images. 
The method100comprises three steps: (a) receiving one or more macro photography images of Cannabaceae plant110; (b) performing analysis based on the images (120and130); and (c) conditioned upon the products of step (b), calculating and reporting an assessment for the plant under characterization140. The input images in step110are macro photography images. As used herein, the term “macro photography images” means an image with pixel size of less than 100 um×100 um and field of view of more than 1 mm×1 mm, i.e., image size (or image resolution) of at least 10×10 pixels. Typically, macro photography images will have a pixel size of 10 um×10 um or less, and an image resolution of 1000×1000 pixels or more. An image resolution of 1000×1000 with 10 um×10 um pixel size provides a field of view of 10 mm×10 mm. In an exemplary embodiment of the invention, Step110is receipt of a plurality of macro photography images. In this case, each image is analyzed separately and the assessment is done based on the plurality of analyzed products. Alternatively, the plurality of images is combined, e.g., generating a montage, and the analysis is performed over the combined image. The logic behind using the macro photography images is to be able to detect plant organelles, such as trichomes, which are typically 100-400 um long, and 100 um wide. In an exemplary embodiment of the invention, the images contain a spectral band that is not in the visible spectrum such as IR band or UV band and the like. Alternatively, the spectrum band is wider than the visible spectrum. In an exemplary embodiment of the invention, a non-visible spectrum light sensor (spectrometer) may be used to detect additional information on the formation of the sampled material (e.g., UV, NIR). The analysis step may be performed using several images and digital processing techniques. In step120, feature extraction analysis of the trichomes, as viewed in the macro photography images, is performed using image processing. An image processing algorithm identifies the trichomes and measures the trichomes density, shape, size, color, and the like. These values (both the average and the statistics) are transferred to step140for the final assessment. In step130, analysis is performed, using a neural network (e.g., deep learning). The input may comprise of raw image or images that are converted to an input vector for the neural network. Additionally, the input may be the output of step120. The neural network weights (or coefficients) setting is based on a preliminary training phase of the neural network. The training phase comprises using results of lab tests performed on the plants in which the chemical composition and concentrations of central ingredients was analyzed, for plants used to obtain the macro photography images used in the training phase. The outputs of the neural network can be any characteristic of the plant under analysis. For example, the output may be an estimation of a THC concentration. Alternatively, it can be an estimation of the probability the plant has a THC concentration in the range between 10%-20%. In another option, the neural network output can be a quality ranking of between 0-100. The neural network of step130may be comprised of several neural networks each performing part of the full analysis. These networks may be connected in serial or may be used to calculate the final products using post processing stage after running the networks in parallel. 
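The macro photography image criterion defined earlier in this passage (pixel size under 100 um x 100 um and field of view over 1 mm x 1 mm) can be checked with a short helper such as the sketch below; the function name and the micrometers-per-pixel input are assumptions made for the example.

```python
# A minimal sketch of the "macro photography image" criterion: pixel size
# under 100 um x 100 um and field of view over 1 mm x 1 mm.
def is_macro_photography_image(width_px, height_px, um_per_pixel):
    """Return True if an image qualifies as a macro photography image."""
    pixel_ok = um_per_pixel < 100.0                  # pixel smaller than 100 um
    fov_w_mm = width_px * um_per_pixel / 1000.0      # field of view in mm
    fov_h_mm = height_px * um_per_pixel / 1000.0
    fov_ok = fov_w_mm > 1.0 and fov_h_mm > 1.0
    return pixel_ok and fov_ok

# Typical case from the description: 1000 x 1000 pixels at 10 um per pixel
# gives a 10 mm x 10 mm field of view.
assert is_macro_photography_image(1000, 1000, 10.0)
```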
In an exemplary embodiment of the invention, the analysis stage is based only on step120. Alternatively, the analysis stage is based only on step130. In an exemplary embodiment of the invention, analysis is based on both step120and step130. Additionally, analysis may be based on other auxiliary analysis techniques as disclosed later on. Additionally, the products of step120can be used as input for step130. Step140, receives the analysis products of steps120and130. Step140calculates and reports an assessment for at least one of or any combination of:Maturity of the plant for harvesting.Diagnosis of the existence of diseases, insects, or pests.Recommendations for irrigation and plant treatments.Recommendations for treatments during plant drying, curing, storage or production processes.Assessment of post-production Cannabaceae plant product quality and price.Assessment of at least one of or any combination of Cannabaceae plants Cannabinoid, terpene or flavonoid ingredient concentrations. The calculation is performed based on the raw data coming from analysis steps120and130. For example, the maturity for harvesting may be determined by the density, average size and average color of the trichomes appearing in the analyzed images and provided by step120. A multi-dimensional threshold may be set and then checked during step140. For example, if the threshold is passed an instruction to harvesting the plant is provided by step140of method100. In another example, step130provides the probabilities of THC concentration in the plant. The probabilities are classified into ranges between 0-10%, 10%-20%, 20%-30%, and over 30%. Step140calculates an assessment of the market price for this plant based on a linear or non-linear regression formula of the product of step130. Yet in another example, step130provides an estimation of the THC concentration in the trichomes, and step140rates the quality of the plant by adjusting this THC concentration with the average density of trichomes in the plant under analysis provided by step120. Further discussion on the specific types of assessments as well as other optional assessment will be disclosed hereinafter. Referring is now made toFIG.3.FIG.3illustrates a more complete flow chart of a method for characterization of Cannabaceae plants using macro photography images. The method,200, comprises steps210,220,230,240that are similar to steps110,120,130and140respectively, with the necessary changes as will be disclosed herein. The method starts with steps210,212and214that receive the input data for the current plant analysis. Step210, similar to step110, receives one or more macro photography images of a Cannabaceae plant. Step214receives other photography images having a pixel size greater than 100 um×100 um, hence potentially having a larger field of view that enables capturing a larger portion of the plant or even a full image of the whole plant. Step212receives additional information such as the location the images were taken, the date & time the images were taken, the data inputting user name, ID and type (e.g., farmers, growers, producers, cultivators wholesalers, retailers, end consumers, and the like). For example if the data is uploaded by a specific cultivator the strain type of the plant might be deduced with 100% certainty without performing an analysis. 
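A minimal sketch of the step140assessment logic follows, combining a multi-dimensional maturity threshold with a linear price estimate over the THC range classes; the specific threshold values, class prices, and function names are illustrative assumptions rather than values from the patent.

```python
# A minimal sketch of the step 140 assessment logic described above. The
# numeric thresholds, class centres, and price coefficients are illustrative.
def harvest_ready(trichome_density, avg_trichome_size_um, amber_fraction,
                  density_min=20.0, size_min_um=150.0, amber_min=0.3):
    """Multi-dimensional maturity threshold: all criteria must pass."""
    return (trichome_density >= density_min
            and avg_trichome_size_um >= size_min_um
            and amber_fraction >= amber_min)

def estimate_price(class_probabilities, price_per_class=(50, 120, 200, 260)):
    """Linear regression over the THC range classes (0-10%, 10-20%, 20-30%,
    over 30%) reported by the neural network of step 130."""
    return sum(p * v for p, v in zip(class_probabilities, price_per_class))

# Example: dense, large, mostly amber trichomes and a THC class distribution
ready = harvest_ready(trichome_density=35.0, avg_trichome_size_um=180.0,
                      amber_fraction=0.4)
price = estimate_price([0.05, 0.25, 0.55, 0.15])
```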
If the data uploaded by a wholesaler, retailer or end customer, the name (or ID) of the previous link in the distribution chain and the commercial name of the product may give additional important information that can assist and improve the analysis. The type of photography imager and optics may also be entered into the auxiliary data as well as any other data that can be helpful in one way or another to characterize the plants or the plant products. All the input data are forward to step250. Step250(analysis manager and input preprocessing) decides based on the input data which further steps will be performed. Some preprocessing of the data, may be performed if necessary, for example a montage may be made from a plurality of images. Step250stores the input data in the database. Step250may also fetch some data from database260and use it in the current analysis. For example, data from the same growers, with a previous date, may be fetched for performing comparative analysis. Step250forwards the image data to a two-stage classifier: step252and254. Step252is a filtering step that filters out all image data that is not valid images of Cannabaceae plants. These may be images of other plants, images that are taken by accident and can be fabrics, panoramic views and the like. It can also be, as frequently happens, an image of Cannabaceae plants that was captured out of optical focus so the blurred image cannot be used in the analysis. After the filtering step there is a step of classifying the type of plant that is under analysis. The classification is at least in one level from a general four layer classifier. First, the plant's genus, e.g.,Cannabisis determined. Second, the species, such asCannabis sativa, Cannabis indica, and the like is determined. Then the strain is determined and finally if applicable the phenotype is determined. The next step in the method is the analysis. The analysis contains three different analysis steps: step220, step230and step225. Step220, trichome feature extraction, is similar to step120. Step230, is neural network analysis, which is similar to step130. However in step230other input data such as the non-macro photography images and the auxiliary data may be used as additional inputs to the neural networks. Step225, auxiliary analysis includes all other image and digital analysis on the data that is used to assist with the overall assessment of the plant. It may be image processing of the non-macro images which detect pests, such as fungi, insects, mold, slugs and the like. As in method100, the final assessment is done in step240, which is similar to step140but now may contain additional calculations and extended assessment of the plant. The final assessment is stored in database260. The assessment results are stored in a way that any assessment can fetch the input data it is relied on. Step270is the training subsystem that runs in the background and optionally, runs offline. The training subsystem fetches the data from database260and uses this data from time to time to update the neural networks coefficients (illustrated in the figure by a dashed arrow between training subsystem270and analysis230). Optionally, training subsystem can update any models, rule based variables and algorithms used in steps220and225,252,254. In an exemplary embodiment of the invention, the data from classifier254is used to assist the trichome feature extraction (illustrated in the figure by a dashed arrow between step254and analysis220). 
Optionally, the classification data selects different subsets of the neural network in step230and assist the analysis of step225. In an exemplary embodiment of the invention, auxiliary analysis225uses data from database260(illustrated in the figure by a dashed arrow between database260and auxiliary analysis225). For example, to determine the maturity of the plant, auxiliary analysis225may use the history of assessment of the specific plant as well as other plants that are correlated with the specific plant, for example in the same geographical area, same grower and the like. Optionally, steps220, step230and step240may use the data in the database to assist their analysis (for figure clarity reasons the dashed line to illustrate this relationship was not drawn). In an exemplary embodiment of the invention, the method performs averaging across different location of theCannabisflower. APIs inCannabisare concentrated mainly in trichomes, which are visible for the capturing device. Some areas of a flower may be much more abundant in trichomes than others, to the extent that analyses performed on such variable areas may vary as well. In one exemplary embodiment of the invention, theCannabisflower is ground to a powder; the mixing up of all different parts of the flower evens-out the heterogeneity. To perform the averaging, the training subsystem270uses the smallest possible sample weight allowed by the lab. As little as a 100 mg sample weight may be used, or even a single trichome can be analyzed. In some cases, the imaged material is split between two labs to get optimal performance so 200 mg samples are used. The analysis is performed by having a batch of a plurality of images taken from different areas of the plant sample. Each image is analyzed in the system, gets its own result, then all the results from that batch are averaged, to give a general result for the entire batch. In an exemplary embodiment of the invention, the batch imaging is performed using a video of the plant, in this case the method separates the video into still frames, and selects a few (typically 3-100) representative frames (filter out blurry/unfocused/otherwise faulty frames) to form the images. In an exemplary embodiment of the invention, the method uses 3D reconstruction or other spatial relative location detection methods to make sure the selected frames from the video are indeed from different parts of the plant, and possibly chooses the images from specific pre-configured areas. In an exemplary embodiment of the invention, machine vision (elements220,225,230) is performed as follows: Each photo received from the user by the system is checked for the existence of several organs/organelles/organisms and their characteristics, including plant and other phenotypes. The checking procedure may be fully or partially automatic. The general scheme of such a check is detection of phenomena (e.g., features, or a combination thereof) distinctive to targeted organs/organelles/organisms using image analysis (e.g., machine/deep learning) software, comprising of one or several algorithms, each designed to detect a different target. If such target is found in an image, it may be segmented (“extracted”, i.e. using object detection techniques)—i.e., isolated as a Region Of Interest (hereinafter, ROI) out of the entire image. Such ROIs can then be classified and measured for different characteristics (such as abundance, size, shape, color and more). 
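The video-based batch analysis described above (split the video into frames, discard unfocused frames, analyze each kept frame, and average the results) might be sketched as below, assuming OpenCV is available; the sharpness threshold, the frame cap, and the analyze_frame callable are assumptions made for the example.

```python
# A minimal sketch of batch analysis from a video: keep reasonably focused
# frames, using the variance of the Laplacian as a sharpness proxy, then
# average the per-frame analysis results.
import cv2
import numpy as np

def sharp_frames(video_path, sharpness_threshold=100.0, max_frames=100):
    """Yield reasonably focused frames from the video."""
    capture = cv2.VideoCapture(video_path)
    kept = []
    while len(kept) < max_frames:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if cv2.Laplacian(gray, cv2.CV_64F).var() >= sharpness_threshold:
            kept.append(frame)
    capture.release()
    return kept

def batch_average(frames, analyze_frame):
    """Run the per-image analysis on every kept frame and average the results."""
    results = [analyze_frame(f) for f in frames]
    return float(np.mean(results)) if results else None
```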
Alternatively, the images are analyzed without the use of ROIs, but by detection of certain metrics from the large parts or the entire image (e.g., coloration), or that the segmentation process is done concurrently by the same algorithm used for the target detection, or some combination thereof. Herein below is a list of examples of different targets and possible versions of algorithm based feature extraction, none, all or any subset of which may be provided by the proposed system. Example implementations for each are described below:Maturity—determined by trichome (density, shape, size, color), pistils (color and shape), and flower total appearance (bud “density”).Potency—same as maturity. Can be done on “fresh” (in cultivation) or “dry” (post-harvest) flowers. Using such examples discussed in the maturity feature depicted above. Dataset is built so that each image has a corresponding total THC value obtained from testing the photographed plant in recognized test systems such as HPLC-MS labs.Mold—Botrytis cinerea(Bc) (early—hyphae and spores, late—brown spots), Powdery mildew (white coloration on leaves and flowers).Insects/pests—Acari (mites themselves, the webbing, secretions, larvae), aphid (aphid themselves, the webbing, secretions, larvae), arthropods herbivory marks.Nutrient—coloration of leaves.Turgor—physical appearance of leaves.Growth rate/direction—by continuous plant tracking (possibly a “time-lapse” like data).Sexing—axillary bud detection in the first maturation phase of the plant. There is a distinct difference of that area (between male and female plants) which is distinctive of the plant's sex.Phenotyping—appreciating leaf and flower coloration and trichome shape, color, size and density. Important for breeders.Dry—by leaf color, flower volume, trichome shape, color, size and density.Cure—by leaf color, flower volume, trichome shape, color, size and density and more specifically dynamic changes in the size of the trichome head (capitates).Storing/purchase tests—these checks are a combination of one or more of the above checks, e.g., to check if the flower is infected with mold AND if it is cured properly etc. In an exemplary embodiment of the invention, machine vision learning algorithms are provided. The detection of features distinctive of targeted phenomena (e.g., pathogen, organelle) may be achieved by machine learning methods such as deep learning and neural networks. For each phenomena, targeted as biologically relevant, a separate algorithm is developed, e.g., an algorithm for detection spider mites and another algorithm for detection ofBotrytis cinereahyphae. The learning algorithms may be used with human (or otherwise) classified data. For example, 10,000 images (2D or 3D) that are classified as containing a spider mite may be used as an algorithm training dataset (while comparing to similar photos classified as not containing spider mite). The learning methodology may output an algorithm that detects visual aspects distinctive of that spider mite (in this example), which may even be (but not limited to) a combination of shape, color, and relation to other aspects of the image (different visual elements on one hand, and metadata such as date and geography on the other hand). To enable this leaning algorithm methodology the following operations are performed: Manual/semi-manual classification/usage of a pre-classified images indicating what is apparent in them, such as organelles, organisms, and other biologically relevant phenomena (e.g., spider mites). 
The classification may be in text describing the existence of the phenomena and/or by marking the location and outline of the phenomena and/or by documenting quantitative characteristics of the phenomena (such as color hue, size, certainty of classification accuracy).1. Learning methodology with a training set and a test (validation) set may be used.2. Continuous iterations may be done, until the algorithm is refined and robust, while avoiding over-fitting and minimizing under-fitting.3. Continuous refinement of the algorithm may be done, by implementation of feedback from the ecosystem of other algorithms developed for other phenomena, and of user and other feedbacks as to the accuracy of the algorithm (e.g. from user feedback and from the service provider's inner company feedback). In an exemplary embodiment of the invention, the one or more of the following Cannabaceae plant characterizations and assessments are performed:1. Assessing maturity2. Detection of mold3. Detection of insects and pests4. Detection of nutrient deficiencies and excesses by leaf colors5. Detection of turgor (water pressure) by plant shape6. Detection of growth rate/direction by plant shape and movement over time7. Assessing plant gender/sex organs8. Phenotyping support (for breeders)9. Assist in flower drying10. Assist in flower curing11. Assist in flower storing12. Assessment of flowers' APIs such as Cannabinoid, terpene or flavonoid concentrations e.g., Tetra-hydro-Cannabinol (THC), Cannabidiol (CBD), and Cannabinol (CBN)13. Purchase testing—including potency and safety (mold, insects). Optionally can be used in an automated purchase testing system such as an e-commerce setting. In an exemplary embodiment of the invention, the assessment of maturity; detection of mold, insects and pests; assessing plant gender; phenotyping support; assist in flower drying, curing and storing; assessment of APIs concentrations and purchase testing is provided, using macro photography images. In an exemplary embodiment of the invention, detection of mold and other diseases, insects and pests is provided. Each pathogen may be checked for by using the feature extraction method depicted above, only instead of detecting spheres to locate trichomes, a unique shape detection tool is used for each pathogen, based its actual shape in the real world. For example, an initial infection byBotrytis cinerealooks in macro photographic image as white lines with sharp angles, and no apparent unified direction, reaching lengths of 10 to hundreds of micrometers. Early infections byAlternariaspp. andCladosporiumspp. causing a “sooty mold” look in macro photographic image as black spots, reaching length of 10 micrometers to centimeters. Aphids look in macro photographic image like white, yellow, black, brown, red or pink insects in the scale of millimeters. In an exemplary embodiment of the invention, nutrient deficiencies and excesses assessment is provided. Some nutrient deficiencies and excesses in plants such as but not limited toCannabis, cause color changes on the plant's fan leaves to the extent human inspection of these leaves lead to a successful diagnosis. Although the mechanism of the color change is not fully known it is a well experienced method used by seasoned agronomists. The image processing protocol for extracting the leaf coloration pattern comprises feature extraction methods to detect the entire fan leaf (by shape, maybe by color as well) and then spatially fragmenting the detected leaf region into separate ROIs. 
The ROIs in each leaf is measured for color and the color differentiation profile between the different parts of the leaf is given a grade (e.g., a leaf with yellow tips may be marked as C while a homogenously green leaf may receive the mark A). Taking other data into account (such as the leaf location on the plant, the plant age and maturity, environmental conditions etc.) the calculated marks of different leaves in a plant helps determine the nutrient deficiency/excess of that plant. In an exemplary embodiment of the invention, the system or method assesses the turgor, growth rate and growth direction. By continuous measuring of the angle of the main and minor stems and fan leaves of the plant the system learns about the curvature of the plant which leads to conclusions regarding water tension (turgor). Adding the angular data to continuous volumetric assessments (by using 3D imaging or other methods) leads to assessment of growth rate and growth direction. In an exemplary embodiment of the invention, the system or method detects gender/sex of the plants. In the early flowering stage of theCannabisplant (and other Cannabaceae plants) there is a clear physiological difference between the male and female organs (in the scale of millimeters). By means of feature extraction (as discussed above, with or without using 3D) the organs can be detected and differentiated. In an exemplary embodiment of the invention, the system or method assesses phenotypes. In cultivation of certain plants such asCannabis, breeders seek to increase potency and yield of plants by cross breeding two parent plants, and screen the resulting offspring. InCannabisfor example, image analysis with the current invention enables the users to assess the density and spatial location of trichomes in the offspring's flowers (i.e., the relevant phenotype), and in that manner assist the breeder in giving a specific score to that offspring, reflecting their attractiveness for further cultivation. In an exemplary embodiment of the invention, the system or method assesses Cannabaceae plants products characteristics during drying, curing, storing and selling phase the producers for cultivators, traders, wholesalers, retailers and the like. The system checks for change in stalked-capitate trichomes' “head” (sphere) diameter, since it was shown to change with the curing process. Alongside microscopic observations, macro-scale (centimeters) observation of the cut flower (bud) can be used to inspect the change in its total volume and coloration change in the shades of green. This visual data can be accompanied by relative humidity (RH) data the user has regarding the bud in question, days into process, reported smell changes and more. In an exemplary embodiment of the invention, the system or method analysis includes receiving, processing, modeling and analyzing 3D images. While a single image of a captured location may contain a lot of information, it may be of necessity to use 3D reconstruction methods by the means of stereophotogrammetry, which involves taking several images of the approx. same location from different positions. Alternatively, a user creates a short video or a sequence of images taken in short intervals (10 images per second for example, in “burst mode”), while moving the camera (pan, tilt, close in, back up etc.), creating consecutive frames with multiple points of view for the same location of the plant/plant tissue (this technique can be done with or without magnification). 
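Detection of the round trichome "heads" and tracking of their diameter during curing, as described above, could be sketched with a Hough circle transform as follows; all numeric parameters and the micrometers-per-pixel scale are illustrative assumptions.

```python
# A minimal sketch of sphere/"head" detection for stalked-capitate trichomes
# in a macro photography image, used to track head diameter over the curing
# process. Parameter values are illustrative only.
import cv2
import numpy as np

def trichome_head_diameters_um(image_bgr, um_per_pixel=10.0):
    """Detect round trichome heads and return their diameters in micrometers."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=10,
                               param1=100, param2=30, minRadius=2, maxRadius=20)
    if circles is None:
        return []
    return [float(2.0 * r * um_per_pixel) for _x, _y, r in circles[0]]

def mean_head_diameter_change(diameters_day_0, diameters_day_n):
    """Relative change in mean head diameter between two imaging sessions."""
    before, after = np.mean(diameters_day_0), np.mean(diameters_day_n)
    return (after - before) / before
```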
The slight differences in point of view of images of the same location, while still keeping identifiable common points on each image, allows a 3d reconstruction by principles such as parallax, triangulation, minimal reprojection errors, and minimal photometric error (the latter refers to non-specific feature-based method, also called direct method, which uses intensity differences). There are software applications that perform such single camera 3D reconstruction (also known as monocular 3D reconstruction, and Structure from Motion or SfM), for example: Meshlab, VisualFSM. The analysis method of the current invention may utilize one of the above mentioned software or may use a specifically developed application. The 3D model of the plant (or the plant tissue) is used to perform 3D feature extraction with improved photogrammetry capabilities such as trichome volume (when modeling from micrographs), leaf relative size and positioning on the entire plant (when modeling from non-magnified images of the plant). In an exemplary embodiment of the invention, the assessment is qualitative (for example, assessment or detection of pest). In an exemplary embodiment of the invention, the assessment is quantitative (for example, assessment of maturity level or THC concentration). In an exemplary embodiment of the invention, the assessment is followed by suggestions for action. Maturity diagnosis is both qualitative, and upon reaching a certain threshold (for example, 100% maturity), suggestion for harvest is reported. The threshold can be set based on the service provider's knowledge base or upon the user's input. That threshold for harvest which leads to a suggestion for action is an example for a rule—a combination of a detected plant condition and a suggested action. For example, for maturity the rule can be “when the detected flower is 100% mature—harvest it”. The service provider knowledge base or the user preferences may also determine the plant condition which may be set as the threshold condition. Reference is now made toFIG.4.FIG.4illustrates a conceptual block diagram of a system for characterization of Cannabaceae plants. The system,300, comprises: one or more macro photographic imager310; one or more user terminals320, receiving images data from the one or more macro photographic imager; and a computing subsystem330comprising one or more processors. Macro photographic imager310takes images of the Cannabaceae plants P. Macro photographic imager310may comprise a separate optical device312such as a lens, or a lens that contains an integrated light source or the like. Macro photographic imager310may be a camera, a web-cam, a macro-shooting capable camera, a professional DSLR camera with a macro lens, a smartphone, a smartphone mounted with an accessory magnifying lens (typically 10-40× magnification is used), a capable of image taking binocular or microscope. Optical device312may be a hand-held magnifying glass (such as a jeweler's loupe) manually held inside the optical path between a camera and a plant P. In an exemplary embodiment of the invention, optical device312is a lens and holder designated to be attached to certain cameras, including smartphones. For the trichome analysis, the magnification should provide a pixel size smaller the 100 um×100 um. User terminals320preferably contain an interface to receive the images from macro photographic imager310. 
User terminal contains input and output devices to command and control the system, for example to initiate an image taking, and to present the reports of the computing subsystem to the user. The user may be farmers, growers, producers, cultivators wholesalers, retailers, end consumers and the like. In an exemplary embodiment of the invention, user terminal320comprises email application for sending the image from a computer or a smartphone to computing subsystem330resides on the cloud. Alternatively, a dedicated software application running on the user terminal320is used to transmit the images to a computing subsystem330. In specific, the dedicated software application may be a smartphone app running in a native iOS/android/windows/Linux/other a native OS for smartphones. Computing subsystem330preferably comprises one or more processors and communication links having wire or wireless interfaces to receive the images from user terminals320. In an exemplary embodiment of the invention, computing subsystem330comprises remote processors and the communication between user terminals320and the processors is performed using a network, in general, and the Internet in specific. The interface of user terminal320to the communication link, e.g., the network, may include cellular protocols, such as GSM, CDMA2000, 3G, 4G, or the like, LAN protocol such as WiFi, Ethernet or the like, and PAN protocols such as Bluetooth, NFC or the like. In an exemplary embodiment of the invention, computing subsystem330comprises a server farm resides in characterization of Cannabaceae plants service provider's premises. Alternatively, computing subsystem330is a cloud computing service. In an exemplary embodiment of the invention, computing subsystem330is implemented as part of the user terminal. For example, a smart phone with macro capable camera may implement system300in full, wherein macro photographic imager310, user terminal320, and computing subsystem330are all implemented on the same device. Yet in another exemplary embodiment of the invention, the macro photographic imager310is a professional DSLR with macro lens and WiFi modem, user terminal320, and computing subsystem330are both implemented in a personal computer (PC) resides in the user's premises. The PC comprises WiFi modem to capture the images immediately after they are taken. Alternatively, the images are stored in the camera storage and uploaded using interface such as USB to the PC. Reference is now made toFIG.5.FIG.5illustrates a mixed flow and block diagram of an exemplary embodiment of the invention. The diagram contains the following elements: A user1, a camera2comprising memory space to store images taken by the user, An optional magnifying lens3attached to the camera2, A communication link4for transferring the images to the computing subsystem, a database5of unprocessed images and metadata of corresponding image, a machine vision algorithm6(e.g., image processing algorithm), a database7of vision algorithm6output and metadata, a data mining learning algorithm8for improving the vision algorithm, an optional data mining learning algorithm9for improving the rules for achieving a diagnosis, a rule set—output diagnosis10, a diagnoses database11, a formatter12that convert the diagnosis into a result message, a communication link13for transmitting the decision back to user1, and user terminal14that transfer the images from camera2to the communication link4and present the result massage to user1. 
The elements of theFIG.5diagram are now described in more detail, in accordance with exemplary embodiments of the invention. User1is a cultivator/vendor/customer that need the assessment of the Cannabaceae plants or its processed products. Camera2(i.e., macro photographic imager) may for example comprise: an amateur digital (or film) camera, a web-cam, a macro-shooting capable camera, a professional DSLR camera with a macro lens, a smartphone, a smartphone mounted with a magnifying lens (2-100× magnification), a binocular, or a microscope. Camera2has memory, for example camera's memory card. The images stored in the memory is transferred to a personal computer (PC), alternatively, a printed image is scanned and stored into the PC. The image data may be transferred from the macro photographic imager to the PC memory space (or to a smartphone memory space) by a cable or by wireless communication link. Magnifying lens3may be a hand-held magnifying glass (such as a jeweler's loupe) manually held inside the optical path between the camera and the plant tissue. Alternatively, a hard-wired lens which is a part of a camera2may be used. Yet another option is having a lens and a holder which is designated to be attached to camera2. In an exemplary embodiment of the invention, a camera is connected to or integrated into, a microscope, replacing the camera2-lens3pair in the system. For the characterization of Cannabaceae plants that include the extracting of the information related to trichomes, the magnification should provide an appropriate image resolution to detect these organelles (which are typically 100-500 um long, 25-100 um wide). In order to capture as many of these organelles as possible, as large as possible field of view and depth of field is preferred. In order to resolve these organelles the ‘pixel size’ (the size of an object in the real world, represented by each pixel) should preferably not be over 100 um×100 um, but in order to capture as much organelles as possible, the field of view size should preferably not be smaller than 1 mm×1 mm. This may necessitate special optics, elaborated hereinafter. Communication link4may be an email service for sending the images from a PC/smartphone and other possible user terminals. Alternatively, communication link4may be a cloud sharing folder on user terminal14devices, or a browser based SaaS service. A dedicated software or smartphone app that is in connection with the computing subsystem is yet another option. Such dedicated smartphone app may be a native iOS/android/windows/Linux/other native OS for smartphone, or a cross-platform based app. The app may be developed using tools or platforms like Cordoba and Adobe PhoneGap Developer. The said app may connect with the service provider's server through IP, TCP/IP, http, https or JSON protocols. Database5. Memory, stores all unprocessed images, including all data collected at the capture event (plant identity, time, geographical data, environmental and internal conditions) and other data concerning the image (such as user credentials). Database5may be any SQL based database (e.g., MySQL) or non-SQL database (e.g., Cassandra, MongoDB). Machine vision algorithm6is a computer software code running on the computing subsystem; the software transforms the visual data of the received images into meaningful metrics. One analysis option is to perform color based classification. 
Another option is to do feature extraction by shape (e.g., trichomes' round head, mold's webbings' straight lines), each of which is later measured for different parameters (e.g., color, shape, diameter, size, volume). Optionally, Machine vision algorithm6is implemented by applying machine learning, deep learning (e.g., using neural networks). Such a learning algorithm can be constructed by training (and validation/test) of datasets containing pairs of images (similar to images used as input to the analysis algorithm). The values correlated with those images, depend on the diagnostics features (e.g., total THC concentration, mite species, maturation and the like). In an exemplary embodiment of the invention, the algorithms improve over time by getting user/internal feedback. Such user feedback can be comprised of user rating as to their perceived quality of the diagnostics they received, metadata provided automatically from the capturing/transmission device (e.g., time, location) or manually by the user (e.g., name of strain), by user inputted organoleptic data (e.g., smell), or by user decision making (e.g., chosen course of action in light of a given diagnosis, either entered manually or by choosing the solution offered by the app, leveraging the wisdom of the crowd). The machine vision algorithm may be a tensorflow python graph file (.pb), .h5, .hdf5, or other format used for neural nets architecture and weights storage. The algorithms may also be written in other languages/libraries such as PyTorch, Caffe, and others. The neural networks weights are set through a process of back-propagation to minimize a loss function, through iterations on a training set, with a control against over- and under-fitting with a non-congruent validation set. In an exemplary embodiment of the invention, the programming language and libraries are installed directly on the computing subsystem server or in a virtual machine/cloud environment. The server may be a physical computer belonging to (or rented by) the service provider or may be a “cloud” instance with no specific allocated machine at all times. The machine vision algorithm (and other components of the system) may be all on the user's computation device (e.g., a smartphone) with the inference and result reporting done locally in the user's machine. In an exemplary embodiment of the invention, the machine vision algorithm can actually be composed of several similar algorithms as disclosed hereinafter. A typical example for a structure for such algorithm hierarchy forCannabisTHC detection is: first classify if the image contains aCannabisplant or not. If it is classified as containingCannabis, then the image is classified into a subgroup of species/strains/phenotypes. In that subgroup, a specific THC concentration regression model is applied to detect the final result. Database7stores the data derived from each image (depend on the parameters, the algorithm6is set to measure) and additional metadata from database5. Database7may be an SQL based database (e.g., MySQL) or non-SQL database (e.g., Cassandra, MongoDB). Data mining learning algorithm8is based on a learning set of manually classified images (not illustrated in the figure), the improved vision algorithm may be improved by utilizing methods of machine learning. Continuous product development efforts of the service provider is a way to obtain growing, high quality data needed for the quality of the analysis provided by algorithm6. 
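The algorithm hierarchy described at the end of the preceding passage (an "is-Cannabis" classifier, then a strain/subgroup classifier, then a subgroup-specific THC concentration regression model, stored for example as Keras .h5 files) might be wired together as in the sketch below; the model file names, subgroup names, and the 0.5 score threshold are hypothetical placeholders.

```python
# A minimal sketch of the classifier hierarchy for THC detection. The .h5
# file names and subgroup names are placeholders, not artifacts of the patent.
import numpy as np
import tensorflow as tf

def load_pipeline():
    """Load the three hypothetical models of the hierarchy."""
    is_cannabis = tf.keras.models.load_model("is_cannabis.h5")
    strain = tf.keras.models.load_model("strain_classifier.h5")
    thc_by_strain = {name: tf.keras.models.load_model(f"thc_{name}.h5")
                     for name in ("subgroup_a", "subgroup_b")}
    return is_cannabis, strain, thc_by_strain

def estimate_thc(image, is_cannabis, strain, thc_by_strain,
                 strain_names=("subgroup_a", "subgroup_b")):
    """Return an estimated THC concentration, or None for non-Cannabis images."""
    batch = np.expand_dims(image.astype("float32"), axis=0)   # (1, H, W, 3)
    if is_cannabis.predict(batch)[0, 0] < 0.5:                # filter non-Cannabis images
        return None
    subgroup = strain_names[int(np.argmax(strain.predict(batch)[0]))]  # subgroup classifier
    return float(thc_by_strain[subgroup].predict(batch)[0, 0])         # subgroup-specific regression
```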
Data mining learning algorithm9takes into account the entire data gathered, including some not elaborated upon above (such as chemical analysis of a flower previously photographed). A learning technique (such as deep learning, neural networks and more) may be utilized to find new unforeseen connections between different levels of data regarding each flower and the latter's status. This section regards the continuous product development efforts for improving algorithm10. Rule set—output diagnosis10is the step of calculating and reporting. The rule set is a list of pre-defined conditions and the matching diagnoses that arise from passing them. The rules and the matching diagnoses can be determined by the service provider or the user1. For example, a set of rules can be (a sketch of evaluating such a rule set is given after this paragraph):
1. IF 30% of captured trichomes are 70% brown THEN the diagnosis is: “Harvest ready”.
2. IF mold is detected with 99.99% accuracy THEN the diagnosis is: “Mold detected”.
3. IF both (1) and (2) occur, the diagnosis is the same as (2).
4. IF mold is not detected with over 10% THEN the diagnosis is: “Mold not detected”.
5. IF (4) is true and the total THC is larger than 20% THEN buy the product.
After being checked against each of the rules in the set, the diagnosis regarding that image is achieved. Diagnoses database11stores all diagnoses, linked to their counterpart images in databases5and7. Diagnoses database11may be an SQL based database (e.g., MySQL) or non-SQL database (e.g., Cassandra, MongoDB). Formatter12generates a result message. The message may be formulated as text and/or a graphical message (e.g., plot, drawing, photo, video animation and the like). The message may be intended to be a decision support tool. An exemplary message for each of the cases discussed above may be the following:
1. “This plant/flower is ready for harvest”. Another possible answer: “Harvest is due in 13 days”.
2. “This sample contains mold with 99.99% certainty”.
3. “It is not advised to harvest this flower as it is potentially toxic, contains mold”.
4. “No mold detected” (or, in this case, the system may not report any message).
5. “This product is in good quality, you can purchase it”.
Communication link13may be the same link as communication link4. Alternatively it can be a different link. In an exemplary embodiment of the invention, communication link13is the Internet. The link transfers back to user1a written (text) and/or graphical message conveying the result message depicted in12. In the case of a smartphone app, the messages may be transmitted from the service provider's server through the IP, TCP/IP, http, https, or JSON protocols. User terminal14comprises input devices like a touch screen, keyboard, mouse and the like. User terminal14comprises output devices like a display, speakerphone and the like. In an exemplary embodiment of the invention, the user terminal14is a smartphone, a tablet, a laptop, a dedicated hand held terminal or the like. In an exemplary embodiment of the invention, the user terminal is a stationary terminal such as a desktop computer, a workstation or the like. In an exemplary embodiment of the invention, a plurality of images is captured one image at a time, wherein user terminal14displays a user interface with some guidance to the user regarding which areas to capture images of next, and when the number of images is sufficient to obtain a reliable diagnosis. Reference is now made toFIG.6.FIG.6illustrates a mixed flow and block diagram of another exemplary embodiment of the invention.
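As referenced above, the following is a minimal sketch of evaluating such a rule set against the metrics produced by the machine vision stage. The metric names and thresholds are illustrative assumptions only and do not correspond to stored rules.

```python
# Each rule is a (condition, diagnosis) pair evaluated on a dict of image metrics.
RULES = [
    (lambda m: m["brown_trichome_fraction"] >= 0.30, "Harvest ready"),
    (lambda m: m["mold_confidence"] >= 0.9999,       "Mold detected"),
    (lambda m: m["mold_confidence"] < 0.10,          "Mold not detected"),
]

def diagnose(metrics: dict) -> list:
    """Return every diagnosis whose condition the image metrics satisfy."""
    return [diagnosis for condition, diagnosis in RULES if condition(metrics)]

print(diagnose({"brown_trichome_fraction": 0.42, "mold_confidence": 0.02}))
# -> ['Harvest ready', 'Mold not detected']
```

Rules such as 3 and 5 above, which combine the outcomes of other rules, could be layered on top of the returned list in the same way.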
In this exemplary embodiment of the invention (theFIG.6embodiment), a potency analysis ofCannabisplants orCannabisproducts is disclosed. Potency analysis (e.g., THC concentration) is one of the applications of this invention. The chemical test is not limited to dryCannabisflowers; it may also be applied to fresh flowers and concentrates. Specifically it may be applied on certain kinds of hash (e.g., live resin/rosin, isolator, bubble-hash and other concentrates that keep the trichome structure intact). These products are made out of condensed trichomes, which can be detected by the system's algorithms. It may also be applied onto concentrates that do not preserve the trichome head structure, with the aid of a white-balance calibration apparatus arranged to appear in the same frame as the concentrate. The system or method ofFIG.6comprises user terminal14and communication links4and13similar to elements14,4and13disclosed inFIG.5and their companion description hereinabove. User terminal14is operated by user1and comprises camera2and magnifying lens3(not shown in the figure). Database25is a unified database (replacing databases5,7and11of the embodiment illustrated inFIG.5) that stores the images the user sent as well as the analysis products and the final report. Optionally, the video combines macro (zoom in) and normal (no zoom in) images to get better context and therefore improved results. After the one or more images are received by the computing subsystem using communication link4, classifier22analyzes the image. Classifier22may be a neural network classifier that has been trained to detectCannabisflower images (standard or macro images). In addition, with the images sent to analysis, the user may send auxiliary information such as plant identity, time and location, environmental conditions (temperature, humidity and the like), and subjective data like the smell of the plant. In an exemplary embodiment of the invention, the smell is provided by a small artificial nose apparatus. The artificial nose apparatus may detect the presence of chemical molecules in the air surrounding the plant. In an exemplary embodiment of the invention, a non-visible spectrum light sensor (spectrometer) may be used to detect additional information on the formation of the sampled material (e.g., UV, NIR). If an image passes the first “isCannabis” classifier22, the analysis of the image continues with analysis of the strain ofCannabisby strain classifier24. Next this image is analyzed in class X potency estimator26. A simplified exemplary implementation of class X potency estimator26is provided inFIG.7. Reference is now made toFIG.7.FIG.7illustrates a block diagram of a simplified potency estimator. The estimation is performed in two steps: neural network26N followed by regressor26R. In this simplified example, the neural network26N has four outputs and has been trained to detect a specific strain ofCannabis, and four “classes” of THC potency (i.e., THC concentration) have been trained: 5%, 15%, 25% and 35%. The training of this four output neural network has been performed by taking the image training data set with the corresponding THC concentration lab tests and splitting them into four sets. The images in which the measured THC concentration is less than 10% are assigned to the class 5%. The images having a measured THC concentration between 10% and 20% are assigned to the class 15%. The images having a THC concentration between 20% and 30% are assigned to the class 25%.
Finally, the images having a THC concentration higher than 30% are assigned to the class 35%. Note that it is a rare event that the THC concentration is higher than 40%. In the training, each set is inserted into a process of back-propagation to minimize a neural network loss function, through iterations on a training set, to set the training coefficients. The network output is configured in a way that the output for each concentration class is the probability that the image is associated with this class, and the overall probability sums to one. The output of the four probabilities is the input to regressor26R. The regressor in this case is a linear regressor. The THC concentration vector Y is multiplied (inner vector multiply) by the probabilities vector P. The output in this case is a THC concentration of 15.5%. In general, the regressor formula may be non-linear and the weight coefficients may not be direct probabilities as in this simple example. Reference is now made back toFIG.6. The potency estimation or assessment is stored in database25. If there are more images associated with the plant, each of these images flows through steps22,24and26and the associated estimation product is stored in database25. When all images are processed, the final result is calculated by considering (e.g., averaging) all estimation products. Taking a plurality of images and averaging the analysis is recommended because of the inherent variability/heterogeneity of theCannabisflower, incorporating concentrated potency loci (a.k.a. hotspots). As the flower is mostly consumed as a large piece (typically about 0.3-2 grams), the user consumes a mix of the hotspots and the weaker spots. It is therefore more meaningful to the user to find out the average potency of the flower they are testing, including hotspots and other parts as well. Therefore a scan of the flower from as many facets and angles as readily possible can be a key to a successful diagnosis. Such a scan can be achieved by taking a set of images (a batch), and sending them as one test to the computing subsystem. In an exemplary embodiment of the invention, analysis of each image is performed individually, the images' results are averaged, and the averaged result is stored and sent to the user. Alternatively, the computing subsystem combines the images (the batch) into one image, and processes it in a single analysis pass. In an exemplary embodiment of the invention, the system is capable of receiving a short video of the flower from all sides, and extracting a plurality of images from this video. In an exemplary embodiment of the invention, the user terminal14(and more specifically, the macro photographic imager310inFIG.4) is a device into which the plant sample is inserted. The said device is equipped with multiple cameras able to image the sample from several angles, or one camera with a mirror system to allow a photograph to be taken from many angles with one camera. The said device can capture the sampled plant from many angles in a short time and under controlled conditions, easing the process of sampling, and can be applied to testing pre-harvest or post-harvest flowers. When all the analysis product data is saved in database25, a report generator12sends the results to the user terminal14using communication link13. In this exemplary embodiment only one strain, designated by the name “class X” in the figure, is further analyzed. Alternatively, a multi-strain analyzer may be used.
In this case, either a unified analysis algorithm for multiple strains is used, or a specific algorithm for each strain exists in the computing subsystem and the appropriate one is performed based on the classification provided by strain classifier24. The data stored in database25is used by the training and validation processes28disclosed in more details hereinafter. In an exemplary embodiment of the invention, the system helps the end user with assessment of the consumption dosing for a user. The generation of a potency result together with the user predefined desired dosage, enables the system to provide a suggestion as to how much of the analyzed product to consume. For example, if the user tests aCannabisflower and it has 20% THC (meaning 200 mg THC per a gram), and the user would like to consume 50 mg of THC in that sitting, for an accurate dosage, the system will suggest weighing 0.25 grams of the product. This utilization of the invention creates a novel, non-destructive, quick and accurate way to inspect the dose of the actual product and get instructions on how much to consume, in order to get a precise, repeatable medication/recreational effects. In an exemplary embodiment of the invention, averaging over the entire plant is provided. The closer/more magnified the image is, the lower the probability that the captured image represents the real average of the sample API content (e.g., THC), causing a sampling error and reduction in result repeatability (the chance the user will take another photo of a different area of the same sample and get the same result decreases). To overcome this problem the user may use a lens with special optic features, specifically a combination of a large field of view and a deep focal depth, while using a high resolution camera sensor. For example, a field of view of 3 cm×3 cm with a resolution 10 um (pixel size of 10 um*10 um), necessitates only a 9 mega-pixel sensor. The challenge in this case is the optics needed to keep the entire field of view in focus, both because most smartphone lenses cannot provide such a wide field of view at such a high resolution, but mainly because the typical extreme topography of aCannabisflower, necessitating a deep field of view (minimum 1 mm). This optic requirement may be fulfilled either by an optic device supplemented to the image capturing device in the user terminal14(e.g., the user's smartphone14) especially for the service offered by the system; or by some other off-the-shelf relevant optical attachment/capturing device; or without any optical device other than the image capturing device (e.g., the user's smartphone), providing the latter has built-in optic capabilities filling the discussed requirements. After the user captures this hyper-high resolution image discussed above, the image is sliced to smaller images to be considered a part of one batch. Each image is analyzed on an estimator that was trained on similar small images, and finally all results from that batch are averaged. Reference is now made toFIG.8.FIG.8illustrates a typical maturity vs. time function of aCannabisplant. The function of maturity50has an optimal point52for harvesting the plant. One purpose of the method or the system is to assess the current status point54in order to recommend to a grower when to harvest. In an exemplary embodiment of the invention, assessment of plant maturity is performed by trichome size, shape, density and color, or other features or combinations thereof. 
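Before turning to the maturity assessment, the potency arithmetic described above (theFIG.7inner product, batch averaging, and the dose suggestion) can be summarized in a short numeric sketch. The probability vector is an assumed example chosen to reproduce the 15.5% figure in the text, and the function name and batch values are illustrative, not part of the claimed system.

```python
import numpy as np

# Linear regression stage of FIG.7: inner product of the class probabilities
# (output of network 26N) with the per-class THC concentrations.
class_thc = np.array([5.0, 15.0, 25.0, 35.0])        # Y: concentration per class (%)
probabilities = np.array([0.25, 0.50, 0.20, 0.05])   # P: assumed softmax output of 26N
thc_percent = float(np.dot(probabilities, class_thc))
print(f"Estimated THC concentration: {thc_percent:.1f}%")   # 15.5%

# Averaging a batch of per-image estimates, as recommended for a whole-flower scan.
batch_estimates = [thc_percent, 14.2, 17.3]           # illustrative per-image results
flower_potency = float(np.mean(batch_estimates))

# Dose suggestion: 20% THC means 200 mg THC per gram of flower, so a 50 mg dose
# corresponds to 0.25 g, matching the example above.
def suggest_amount_grams(thc_pct: float, desired_thc_mg: float) -> float:
    return desired_thc_mg / (thc_pct * 10.0)

print(f"Weigh about {suggest_amount_grams(20.0, 50.0):.2f} g for a 50 mg THC dose")
```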
Color differentiation on the range between white and brown, and also yellow, purple and other colors can be achieved using classic image analysis methods by the means of background subtraction, i.e., removing all pixels which are not brown, white or any light brown colors in between. The assessment is performed by appointing each remaining pixel with level of “brown” (0 being white) and then calculating the intensity of brown out of the total number of pixels in the background-subtracted image. An alternative approach is feature extraction, namely picking out the trichomes in each frame and assigning each with an individual region of interest (ROI), with classic image analysis methods or with machine learning methods that may extract more basic features and may build complex connections between these features during the training process. In an exemplary embodiment of the invention, feature extraction is based on 3D imaging. The feature (for example, a trichome) may be detected by (but not limited to) the distinctive round shape (sphere) of the stalked-capitate trichome head. The round shape (which is otherwise in these scales usually rare) may enable a decisive recognition of the trichome head to establish it as an individual ROI. These ROIs (whether established thanks to round object detection or not) can be tested for coloration (such as the levels of brown example depicted above), size, shape, density, and more. In an exemplary embodiment of the invention, the system provides customization for growers with a preferred rule set. Expert growers have an idea of how a mature flower looks like, in terms of trichome, pistil and over all flower characteristics. In order to empower those expert users and automate the process of evaluation in a way that may be as close as possible to their preferred rules (the visual appearance of the flower as the user wishes to have once the flower has fully matured), the system may ask those users quantitative and qualitative queries that may be used to assess those rules. Such queries may include the following: “what percentage of clear/white/brown trichomes (each) do you expect to see?”, “what should be the color of the pistils?”, and “should the flower seem denser and less hairy?” The answers may be quantified and kept in a dataset (table) of diagnosis conditions and the actions fitting to those conditions (the rules dataset). An example for a rule may be: “when there are 50% amber trichomes the plant is a 100% mature harvest the flower”. The expert grower may say that their rule's condition is at a certain measurable metric but have a misconception of the actual appearance of their condition. Continuing with the example above, the user may say they saw 50% amber trichome when the actual number is 70%. For the system to account for that deviation, the real condition the user is looking for must be found, and not what the users thought or said. In order to do that a series of photos of trichomes in different plants' maturation stages may be sent to the expert grower with different levels of the condition they are looking for and the expert grower may select the image which they feel is the best representative of their rule's condition. In the example above, this may be a series of images displaying different levels of amber trichome concentration, and the expert grower may select the photo they feel has 50% amber trichomes. The system may set the number the expert grower chose (70%) as 100% maturity, rather than what the user declared (50%). 
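The color-based assessment described at the beginning of this passage can be sketched as follows. The HSV thresholds and the per-pixel "level of brown" definition are assumptions introduced here for illustration; in practice the bounds would be tuned for the imaging conditions.

```python
import numpy as np

def brown_maturity_index(hsv_image: np.ndarray) -> float:
    """Fraction-of-brown metric over a background-subtracted trichome image.

    hsv_image uses the OpenCV convention: H in [0, 180), S and V in [0, 255].
    """
    h = hsv_image[..., 0].astype(float)
    s = hsv_image[..., 1].astype(float)
    v = hsv_image[..., 2].astype(float)

    # Background subtraction: keep only pixels in an assumed white-to-amber/brown
    # band; green leaf material and dark background are dropped.
    mask = (h >= 5) & (h <= 30) & (v >= 60)
    if not mask.any():
        return 0.0

    # Per-pixel "level of brown": 0 for a white trichome (no saturation),
    # approaching 1 for strongly saturated amber/brown pixels.
    brown_level = s[mask] / 255.0

    # Intensity of brown relative to the total number of pixels that survived
    # the background subtraction.
    return float(brown_level.sum() / mask.sum())
```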
In addition, the conditions disclosed above may be set according to scientific or other knowledge accessible to the service provider. The condition may be set according to a certain strain in a certain season under certain growth conditions, based on external knowledge (for example scientific or horticulture literature) or internal knowledge (a result of past users' data processing). The internal and external setting conditions may be continuously updated due to the service provider's research. Such research, which causes updating of the setting conditions, may be a result of growers' and cultivators' feedback, chemical analysis of plant material upon which a diagnosis was made, and big data analysis by the service provider supported by machine learning tools. All of these processes are part of the self-improving mechanism and training of the system as illustrated in block270inFIG.3and block28inFIG.6. In an exemplary embodiment of the invention, a computerized process, service, platform or system for automated plant diagnostics is provided, based on image analysis of micro- and normal-scale photos, and a decision support automated system enabled by that embodiment. A lot of the metrics used for plant diagnostics are in the small scale, i.e., the scale of single millimeters and down. For example, inCannabiscultivation maturity evaluation is based largely on visual inspection of trichomes, small (tens to hundreds of micro-meters in length) hair-like organelles which produce and harbor most of the desired medical and recreational active molecules. In fact, the trichome status can also be used to derive theCannabisflower status after cultivation is over, to monitor flower status through all chains of theCannabisflower industry: dry, cure, store, and purchase (potency and safety) testing. Phenomena important for any plant cultivator are the initiation of infection of the plant by fungi, viruses and bacteria, as well as nematodes and insects such as mites, aphids and herbivores. Many of these pests and diseases only show macro-scale symptoms days and even weeks after their micro-scale early stages of infection can be apparent through visual magnification. Expert cultivators/agronomists use a magnification apparatus (jeweler's loupe, microscope) and manually check plants to determine their maturity (by evaluating trichomes) and to detect first stages of phytopathological evidence (pests and diseases). Magnification is used for (bulk or consumer) purchase testing. The system eliminates the necessity of the work of an expert, by exploiting the knowledge of the rare, knowledgeable person, who is the only one qualified and responsible for determining the health/maturity of the plants. The system solves the problem and limitation in manpower qualified to diagnose plants for health/maturity and enables more checks per plant. In fact, in most cases the expert only has time to check as little as 1 in 20 plants of an industrial cultivation operation, actually performing batch-testing. Beyond cultivation, the lack of fast, low-cost and non-destructive diagnosis for many Cannabaceae plants, such asCannabis, prevents an objective quality assessment needed for adequate pricing. The system disclosed hereinabove addresses the need for added checking and testing for every plant, typically enabling everyone who is able to shoot a photo of the plant and send it, to receive an instant diagnosis.
In an exemplary embodiment of the invention, the system or method assesses maturity curves by repeating acquisition of each location (i.e., plant in general or a more specific location on that plant) as the time passes. Each image and its subsequent analysis contain a timestamp relating to the exact time of acquisition. The results derived from each analysis of each location are organized in a table (or array, matrix or similar data set), and two of the parameters (column values) in this table are the time of acquisition referred to above, and the maturation level (directly or indirectly through metrics such as trichome colors) for that location. Thus for each location a curve can be plotted where the X axis is the time of acquisition (specifically, time in flowering stage calculated by reducing the acquisition timestamp from the user's-report-of-transition-to-flowering-stage timestamp) and Y is the maturity level. For example, it is assumed maturity level is calculated as the percentage of brown/amber trichomes out of all trichomes. In the same way a function can be derived for each location, with X being the time of acquisition and Y the level of maturity. The function may be single to fit all grow stages or may be composed of several sub-functions that correlate to different rates of maturation across the flowering stage. For example of the latter, at the beginning (day 0 and 1) of the flowering phase the maturation level may be 0%, in day 10 of the flowering phase it may increase to 15%, in day 20 to 50%, in day 30 to 90%, and in day 40 (the end of the flowering phase in this example) to 100%. So in this example, the calculated function (and subsequent curve) is a presentation of a (hypothetic) typical dynamic of the rate of the maturation—maturation starts slow, speeds up more and more, then slows down until it reaches a full stop at the end. To enable predictions using this kind of maturation function, the service provider may build in advance (possibly during development) a “model function” of maturation of a typical plant, “the way a plant ‘normally’ matures”, to be regarded as a model of the maturation process, and so to be used to forecast the date in which a certain location may reach a maturity goal (as described above, the goal may be dictated either by the user or the service provider). Different model functions may exist for different plants, strains, growth methods and/or other factors. At any specific time in a user's cultivation, before the cultivation is done, the service provider can overlay the user's actual maturity function of the location in question based on the data the user has sent until that time, and compare the dynamics of that “actual function” of maturity to the model maturity curve. The fit does not have to be (and probably will not be) perfect; the fit between both functions may be in the dynamics, i.e., derivative phase. To continue with the same example as above, the user sends data every day of the flowering stage, and on day 15 a user-data based actual function is created. In this example it is revealed that on days 14-15 the slope of the maturation rate has turned from “a” measured in days 1-2 to 2*a, meaning the derivative of the maturation rate has increased significantly (in this case—doubled), and so the point in time in which the slope has increased to 2*a is determined. This time point where the derivative doubles may (in this example) be assigned as the middle of the flowering stage. 
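The day-14/15 example above can be made concrete with a short sketch. The observation values below are invented for illustration, and treating the end of flowering as roughly twice the midpoint day is an added assumption rather than part of the described model function.

```python
import numpy as np

days     = np.array([0, 2, 5, 8, 11, 14, 15])               # days into flowering
maturity = np.array([0.0, 1.0, 2.6, 4.5, 7.0, 11.2, 12.4])  # e.g., % amber trichomes

rates = np.diff(maturity) / np.diff(days)      # maturation rate between observations
initial_rate = rates[0]                        # "a", measured on the first interval

# First interval whose rate reaches 2*a marks the assumed middle of flowering.
doubled = int(np.argmax(rates >= 2 * initial_rate))
midpoint_day = int(days[doubled + 1])

print(f"Rate doubled around day {midpoint_day}; "
      f"end of flowering estimated near day {2 * midpoint_day}")
```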
In that way, even if the curves don't fit exactly a prediction for harvest is achieved. For some (not all) phenomena the user may use optical magnification, which can be provided using an off-the-shelf device. The optical magnification device may be a lens attached to a regular smartphone camera, or may contain an internal camera alongside the optic magnification apparatus, in which case the device may be able to be integrated into a smartphone or computer through a USB cable of wirelessly through Wi-Fi, or Bluetooth for example, or be a standalone device that connects to the internet and sends/analyzes the photos without the need for a smartphone/computer. The magnification device may optionally provide some or all of the spatial illumination, light filtration, and specialized focusing. In an exemplary embodiment of the invention, the diagnosis system may potentially be robust enough to derive data from images of the organ in question, shot (imaged) by different cameras, under different lighting, magnification and color balance, providing that they are in a good enough (minimal) focus and resolution. After sending a picture to the diagnosis service provider, the user gets an electronic message (e.g., email, in-app prompt message) with a written or graphical diagnosis based on those images after a few seconds to a few days. Alternatively, the analyses results may be saved on a server, not sent to the user immediately, to be accessed by the user/another user on a dashboard/platform in a SaaS or native software surroundings. Reference is now made toFIG.9.FIG.9is a block diagram of an optical magnification device attached to a regular smartphone camera. The optical magnification device400comprises a clip410, a magnifier lens420, lighting dome430, light sources440, magnifier focal point/plane450, and an aperture460(previously510). The clip is configured to grip a smart phone500comprising phone camera aperture, phone camera lens520, and phone image detector530. The plant tissue is attached to magnifier focal plane450. In this distance from the phone camera aperture510the phone camera image will be in focus. In this example the magnifier aperture/focal point450is about 10 mm and the other relevant dimension of the optical system is provided in the figure. Optical magnification device400light sources440(optional) generate an optimal light to take an image. Light sources440are typically LEDS but other light sources including lasers and the like might be used. The magnification of the device illustrated in this figure is about 10×. Other geometries and magnifying lens may provide higher or lower magnification. Digital or optical magnification in the smartphone camera subsystem may be used. Clip410is used to hold the optical magnification device400to smart phone500in such a way that the magnifier lens420will be in optical matching with phone camera lens520as well as phone camera aperture510. Phone image detector530is typically a CMOS imager but may also be any other camera image sensor technology, such as CCD, used in smartphones. In an exemplary embodiment of the invention, the system performs better than expert on tasks such as potency testing and harvest timing, to the extent its superhuman performance achieves the task no human can ever fulfill consistently. In these tasks the system allows a chemical analysis of the plant matter to a statistical error of as low as 5%. Better performance is anticipated in the future as more data is gathered. 
Using a well annotated, high quality, biologically diverse database allows the creation of machine vision algorithms that can measure variety of chemicals. In another aspect, the proposed system is also much more effective when it comes to home-growers and consumers/medical patients, which are typically less informed than their expert, industrial/retail counterparts. These home-growers/consumers cannot pick (generally speaking) the alternative of summoning an expert to inspect their plants and therefore may benefit greatly from a web based publically accessed platform, which will provide high-quality scientific diagnosis of their plant. The system also presents a novel usage of the cultivation technique (microscopic inspection), one which is particularly advantageous for non-cultivators seeking to check the status of their plant matter after it has been harvested. In theCannabisflower for example, many processes that occur post-harvest can be crucial for the end-product quality; these are drying, curing, storing, and purchasing—all are of interest to cultivators, vendors (“dispensaries”) and consumers. Vendors and consumers may be supported by the system in their effort to check what they are buying prior to and while they are purchasing. The system disclosed herein above is far superior to that of employing a large number of diagnosticians, because of the cumulative data organization and analysis which it offers. This “data-handling” the solution provides, can enable the sole expert to control each and every plant grown in their facility over a long period of time, or to enable a store owner to have a potency record of all products in stock, through the use a computerized display setting and analysis dashboard, or to enable a store owner to run a potency test on each portion of product sold/bought and set the appropriate price for it on the spot according to the system generated result. This level of data control can transform the abilities of the farmers, grower, expert, trader to reach better, well-founded knowledgeable decisions regarding their crop and the operational side of their facility. In an exemplary embodiment of the invention, the system provided novel chemistry testing forCannabis. Building on the unique morphology of theCannabisplant, a novel approach to chemical testing is disclosed. As noted above, the accumulation of APIs (mainly Cannabinoids, terpenes and flavonoids) in trichomes, combined with the former's visible coloration and localization within the trichome, allows the system to draw a connection between certain chemicals and the visual appearance of the plant matter. In that perspective, the current invention may serve as a chemical analysis tool used for cultivation, trade, and dosing scenarios. The proposed system can thus be certified to a certain extent as a legitimate chemical testing service, alongside or as an alternative of the existing lab testing, consisting of HPLC/GC/HPLC-MS/GC-MS test equipment and test methods. In an exemplary embodiment of the invention, the users' information database which may be created as a result of using this system is potentially harboring a lot of value to plant cultivators and consumers, in the form of personalized agronomic suggestions. 
Relying on the accumulative user data, upon researched agronomic knowledge, and upon user feedback (e.g., on past suggestions produced by the system), the system may be able to use the diagnosis of a user as a basis for the formation of a suggestion to that user as to how to act upon said diagnosis. Thus, direct personalized recommendations may be provided to users depending on diagnoses the system produced (for example, if the system finds the user's plants are infected by mold, a message may be selected (by suitable preconfigured logic) which recommends use of an anti-mold product to that specific user), making the procurement side of the agricultural process more efficient and therefore shortening the lead time of reacting to problems and improving planning. Moreover, the personalized cultivation suggestions may be a lucrative revenue model for the system provider, as it may act as a direct and personalized product marketing/sales/advertisement platform. Such marketed products may be pesticides, herbicides, fungicides, nutrient supplements, nutrient removal products, seeds, clones, service providers (i.e., harvesters), and more. In an exemplary embodiment of the invention, personalized suggestions are offered to consumers. Basing the diagnosis on the user uploaded image, the system may detect the predicted effect the flower may have on that user (as described in the self-improving feature) and suggest actions to that user. For example, the system may detect the user is analyzing a strain which may have a sedating effect, and thus can warn the user not to plan an activity which demands their full energy. Another example may involve advertisement; say the detected strain enhances appetite, the system may prompt an ad for a food vendor and suggest to the user to order food in advance, so when the food craving is in full swing, their food order will arrive at their door. These suggestions hold a great added value to consumers and may also become a lucrative revenue model. In an exemplary embodiment of the invention, raising the standards of plant checking to single-plant checking (and even within the same plant) is practically not feasible with human experts in an industrial context, suggesting that once used to this technology, the users may become dependent upon it. This resembles, for example, the dependency many people have on navigation software nowadays, which is self-perpetuating since many new users do not acquire independent navigation skills. The same goes for home or novice growers who may rely on the system's diagnoses to the extent that they may not feel the need to educate themselves on the area of knowledge the system's solution covers. Furthermore, the system may serve as the backend for a fully automated cultivation/processing system, removing some or all of the need for human involvement in these processes. For example, automated growing robots (or pods), robotic arms, drones, or other automated systems equipped with cameras may use them to send plant pictures to the system, which sends back a diagnosis to the cultivation system. That automated cultivation system acts upon the diagnosis in a fully closed loop automated manner.
Training and Self-Improvement
In an exemplary embodiment of the invention, the system is constantly improved and uses the newly acquired data to improve the coefficients of the neural networks and other signal processing algorithms and rule-based decision making.
In an exemplary embodiment of the invention, the system uses machine learning, deep learning, machine vision, artificial intelligence and the like for the analysis phase. In order to build the machine vision deep learning model to detect APIs from plant images, in the training phase many high quality images of plants (similar to the images the user can upload later when using the service) are collected. The plants in those images are sent to lab testing. The test result with the relevant APIs is used to train the network. The result can be one-dimensional for a network for a specific API, e.g., THC, or multidimensional to train a more complex model able to predict several API contents at once. The following are some rules of thumb for a good dataset for the training task forCannabisAPIs:
1. The imaged plant samples should be as small as possible (i.e., a 100 mg sample is better than a 1 g sample), and this depends on the labs to which the samples are later sent. This will keep the heterogeneity as small as possible.
2. Work with several labs, as current lab testing has its own errors which may change from lab to lab and even between different samples. For the best results, image the plant sample, afterwards grind it up and send it to the different labs (e.g., image a 200 mg plant sample, grind it up and send it to two different labs that can each analyze 100 mg samples).
3. Collect samples from as many distinct phenotypes/genotypes as possible. Have several samples from plants across several distinctive spectra (examples here show couples of words standing for the extremes of each spectrum): strong/weak,sativa/indica, indoor/outdoor, dry/fresh, cured/uncured, etc. To further elucidate, to satisfy the need for diversity of samples across these 5 spectra, one may collect at least 2^5 = 32 different samples (enumerated in the sketch after this paragraph).
In an exemplary embodiment of the invention, self-improvement is performed as follows: as more users make use of the service, the better and faster the image analysis gets; by virtue of suitable learning functionality, pictures uploaded by users will be added to newer iterations of the model training, thus making the model more robust to various use cases, such as different magnifications, resolutions, optical aberrations, lighting,Cannabisstrains, and more. Alternatively or additionally, self-improvement is performed by allowing the users to add input as they use the system, to allow a user-generated annotation of the user data. The more data is annotated by users or others, the larger the annotated data becomes and the better the predictions that are possible, and, depending on the annotation, the larger the scope of diagnoses becomes. For example, if the user can input the strain name of the plant they are imaging, the image can get the strain-name label, and be a part of a future training/validation set for a model trained to detect strains. The same goes for smell: if a user can input the smell of the flower being photographed, this makes possible a dataset that can predict the smell of a flower based on its optical appearance, paving the way to predicting the chemical content of scent generating compounds like terpenes from the visual appearance of a flower. As another example, another input can be the medicinal and/or recreational/psychoactive effect the user experiences by consuming the flower used for imaging.
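The enumeration referenced in rule 3 above can be written out directly; the spectrum names used as dictionary keys here are labels added for illustration only.

```python
from itertools import product

# The five binary spectra named in rule 3; covering every combination at least
# once requires 2**5 = 32 distinct samples.
spectra = {
    "potency":  ("strong", "weak"),
    "lineage":  ("sativa", "indica"),
    "grow":     ("indoor", "outdoor"),
    "moisture": ("dry", "fresh"),
    "curing":   ("cured", "uncured"),
}

combinations = list(product(*spectra.values()))
print(len(combinations))   # 32
print(combinations[0])     # ('strong', 'sativa', 'indoor', 'dry', 'cured')
```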
Such user-reported effect input has two main applications: predicting the effects flowers may have from the flowers' images (to generalize the reported effect to other flowers and other users depending on similarities between the original image and the new images used for prediction), or predicting for that specific user how other flowers may affect them (for example, if the system detects that a user reacts to certain strains in a certain way, the system can notify the user that they may experience those effects again if it detects the user is analyzing a similar flower). The latter application has to do with the personal decision support system potential of the system, discussed herein above. Further than that, the creation of the database as a scientific record of plant status across a myriad of different conditions may provide insights that cannot be revealed without it, potentially connecting apparently distant facts to meaningful scientific findings and actionable business advice. In summation, some embodiments in the current invention are ever improving, and the data the system collects becomes more valuable along with the growth in users. In one example, the training of the THC concentration estimator neural network (seeFIG.7) has been done using two sets of images. The first was 10,000 images of the noCannabiscase. Those images include: (1) nonCannabisplants, (2) fabrics, (3)Cannabisimages not in focus or with other image quality issues. The second set was 10,000Cannabismacro photography images that were analyzed by labs. In one optional neural network configuration, the goodCannabisimages were classified by the ranges of the THC in the lab tests (as shown inFIG.7) and the network was trained by inputting those images and back propagating the true result to refine the network. In another optional neural network configuration, the neural network is configured to give only one output, which is the THC concentration. Yet in another option the neural network is configured to give an assessment of price. Many types of networks may be trained for different assessments. The key point for good training is a priori knowledge of the specific assessment for the images. While in the case of lab tests this knowledge is objective, in other cases it may be subjective and depend on an expert assessment. In some neural networks used in this invention the output stage has 1 to 30 outputs (e.g.,4inFIG.7); optionally, up to 1000 outputs are used. The initial training phase is followed by a validation (/test) phase. The validation phase performs an additional use of known a priori images and runs these images on the trained network. If the validation results are not adequate, additional training with improved data, hyperparameters, architecture or other elements may be pursued. In an exemplary embodiment of the invention, 90% of the known a priori images are used for training and the other 10% are used for validation; alternatively, 50% to 95% of the known a priori images are used for training and the rest are used for validation. The network implementation may be a tensorflow python graph file (.pb) (it may also be written in other languages/libraries such as PyTorch, Caffe, and others). The network weights or coefficients have been set through a process of back-propagation to minimize a loss function, through iterations on a training set, with control against over- and under-fitting with a non-congruent validation set.
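The class binning and the 90/10 split described above can be sketched as follows; the lab values, the random seed, and the index arrays are placeholders standing in for the real image/lab-result pairs.

```python
import numpy as np

class_centers = np.array([5.0, 15.0, 25.0, 35.0])   # THC classes of FIG.7 (%)
bin_edges = np.array([10.0, 20.0, 30.0])            # <10 -> 5%, 10-20 -> 15%, ...

lab_thc = np.array([8.3, 14.9, 22.0, 31.5, 18.7])   # measured THC (%) per image
class_index = np.digitize(lab_thc, bin_edges)       # 0..3
print(class_centers[class_index])                   # [ 5. 15. 25. 35. 15.]

# 90% training / 10% validation split over the image indices.
rng = np.random.default_rng(0)
order = rng.permutation(len(lab_thc))
split = int(0.9 * len(lab_thc))
train_idx, val_idx = order[:split], order[split:]
```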
In an exemplary embodiment of the invention, database metadata and training sets for machine learning algorithms lead to new visual parameters that can serve as better indicators of plant status. Since the inception of the service, each image is classified manually or semi-manually, to detect and record the features and meaning every agronomist may see when they use a magnifying glass to diagnose all the phenomena discussed above. Some of the manual classification is done by dedicated employees and some by the uploading users themselves, as assisting information. That way each diagnosis in the database can be rated for accuracy, and henceforth the rule matching and pattern detection algorithms can be judged for efficiency. Furthermore, this manual (or semi-manual) classification paves the way for meaningful breakthroughs due to the usage of machine learning tools. For example, after a sufficient period of time of offering the service, the service provider may accumulate thousands of images of plant maturity tests and the user feedback regarding the diagnosis the initial algorithm proposed. This semi-manually classified dataset can be split into training and validation sets, which may be used to build and train a learning algorithm which may find that there is a better way to diagnose for maturity. For example, the algorithm may find that there is a stronger link between color and maturity when stalked glandular trichome head diameter increases. In an exemplary embodiment of the invention, the database is open to researchers as a source for visual plant data across a myriad of conditions and genetic backgrounds, enabling scientists to explore basic scientific questions, thus potentially paving the way to new findings that can promote agriculture sciences.
Decision Support System
In an exemplary embodiment of the invention, a Decision Support System (DSS) is provided. The term DSS means a knowledge-based system intended to help decision makers compile useful information from a combination of raw data, documents, and personal knowledge, or business models to identify and solve problems and make decisions. In an exemplary embodiment of the invention, the responsibility for choosing the course of action may always be overridden by the grower end user. By adopting the system's suggestions, the grower may automate processes they are already doing, but furthermore the grower may utilize the amount of data that is collected by the system on the plants to reach more educated and learned decisions than growers could have reached without the use of the system. Such automation can serve as the basis for fully automated cultivation/sorting/processing facilities. An example of such utilization of the accumulated data is a precise prediction of plant maturity date and therefore harvest timing. By collecting micrographs of individual plants continuously along the growth cycle (about every week) the maturity status calculated by the system at each point can be a basis for calculation of a plant maturity curve (seeFIG.8). This curve is a formula of the maturity process progression up-to-date, added with a trajectory of the predicted maturity profile (based on the system's previous data regarding similar plants). When the predicted trajectory of maturity meets a pre-defined goal (again, defined by the user or by the system/service) a predicted date and time of reaching that goal is produced.
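A minimal sketch of producing such a predicted date by extrapolating recent observations toward the goal is given below. The observation values, the linear fit, and the flowering start date are illustrative assumptions rather than the model function described above.

```python
import numpy as np
from datetime import date, timedelta

obs_days     = np.array([20, 23, 26, 29])           # days into flowering
obs_maturity = np.array([35.0, 45.0, 58.0, 68.0])   # % of the maturity goal reached
goal = 100.0

slope, intercept = np.polyfit(obs_days, obs_maturity, 1)   # recent linear trend
goal_day = (goal - intercept) / slope

flowering_start = date(2024, 3, 1)                  # user-reported transition date
predicted_date = flowering_start + timedelta(days=round(goal_day))
print(f"Maturity goal predicted around {predicted_date} (day {goal_day:.0f} of flowering)")
```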
The produced date of reaching the goal (which may mean harvest time) can be sent to the user and provide them with value such as better logistic planning for the day of harvest, realization that maturity is too fast or too slow etc. In an exemplary embodiment of the invention, growers may choose at what resolution to work in diagnosing each flower/flower cone at a time, each plant at a time or any other resolution. In an exemplary embodiment of the invention, the system or method may supply growers with a dashboard-like software to visualize plants status in their facility, making large scale insight possible. In this software a visual interface may portray the diagnostic data accumulated by the system per each plant (or under other resolutions as discussed above), spatially and temporally, in layers of different data kinds. For example, the grower may view in one glance the location-caused variability in plant maturity, and to see if there is or isn't a correlation between maturity, mold detections, and other accumulated data (including environmental data from sensors and other sources). In an exemplary embodiment of the invention, upon diagnosis the user may get offered paths of action (the above discussed suggestions) in light of that diagnosis, either as a DSS or as part of a fully automated system as described above. For example, when mold is detected, the system can offer the user several solutions such as: cut away the infected flower/check if your humidity is high/buy a specific fungicide. In that way the system not only diagnoses but offers helpful information, may offer discounts for using a specific brand, collects data on users' chosen course of action when faced with different diagnoses. It is expected that during the life of a patent maturing from this application many relevant algorithms will be developed and the scope of some of the term is intended to include all such new technologies a priori. As used herein the term “about” refers to ±10% The terms “comprises”, “comprising”, “includes”, “including”, “having” and their conjugates mean “including but not limited to”. The term “consisting of” means “including and limited to”. The term “consisting essentially of” means that the composition, method or structure may include additional ingredients, steps and/or parts, but only if the additional ingredients, steps and/or parts do not materially alter the basic and novel characteristics of the claimed composition, method or structure. As used herein, the singular form “a”, “an” and “the” include plural references unless the context clearly dictates otherwise. For example, the term “a compound” or “at least one compound” may include a plurality of compounds, including mixtures thereof. Throughout this application, various embodiments of this invention may be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. 
This applies regardless of the breadth of the range. Whenever a numerical range is indicated herein, it is meant to include any cited numeral (fractional or integral) within the indicated range. The phrases “ranging/ranges between” a first indicated number and a second indicated number and “ranging/ranges from” a first indicated number “to” a second indicated number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numerals therebetween. It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements. Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims. All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting. Features of the present invention, including method steps, which are described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, features of the invention, which are described for brevity in the context of a single embodiment or in a certain order, may be provided separately, or in any suitable sub combination or in a different order. Any or all of computerized sensors, output devices or displays, processors, data storage and networks may be used as appropriate to implement any of the methods and apparatus shown and described herein.
11861886 | DETAILED DESCRIPTION Embodiments of the disclosure are described in detail below, and examples of the embodiments are shown in accompanying drawings, where the same or similar elements or the elements having same or similar functions are denoted by the same or similar reference numerals throughout the description. The embodiments that are described below with reference to the accompany drawings are merely examples, and are only used to fully convey the disclosure and cannot be construed as a limitation to the disclosure. A person skilled in the art would understand that, the singular forms “a”, “an”, “said”, and “the” used herein may include the plural forms as well, unless the context clearly indicates otherwise. It is to be further understood that, the terms “include” and/or “comprise” used in this specification of the disclosure refer to the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or combinations thereof. It is to be understood that, when an element is “connected” or “coupled” to another element, the element may be directly connected to or coupled to another element, or an intermediate element may exist. In addition, the “connection” or “coupling” used herein may include a wireless connection or a wireless coupling. The term “and/or” used herein includes all of or any of units and all combinations of one or more related listed items. To make objectives, technical solutions, and advantages of the embodiments of the disclosure clearer, the following further describes in detail implementations of the disclosure with reference to the accompanying drawings. The embodiments of the disclosure provide a video description information generation method, performed by an electronic device. The electronic device that performs the video description information generation method will be described in detail later.FIG.1Ais a schematic flowchart of a video description information generation method according to an embodiment of the disclosure. As shown inFIG.1A, the method includes the following operations. Operation S101. Obtain a frame-level video feature sequence corresponding to a video. Operation S102. Generate a global part-of-speech sequence feature of the video according to the video feature sequence. Operation S103. Generate natural language description information of the video according to the global part-of-speech sequence feature and the video feature sequence. According to the video description information generation method provided in the embodiments of the disclosure, a global part-of-speech sequence feature corresponding to a natural language may be extracted effectively from video data and may be used for guiding to generate accurate natural language description, to improve a video description capability. In an example embodiment of the disclosure, the video may be a video that is shot in real time. For example, a video shot by a camera in real time needs to be described in an intelligent monitoring and behavior analysis scenario. In this case, the to-be-described video may be a video shot by the camera in real time. Alternatively, the video may be a video obtained from a network. For example, a video obtained from the network needs to be described by using a natural language in a video content preview scenario, to implement previewing video content by a user. 
In this case, the video may be a video that needs to be previewed and that is obtained from the network. Alternatively, the video may be a locally stored video. For example, a video needs to be described in a video classification storage scenario and is classified and stored according to description information. In this case, the video may be a video that needs to be classified and stored and that is locally stored. A person skilled in the art would understand that, the foregoing several scenarios and video sources are only examples, and appropriate changes based on these examples may also be applicable to the disclosure, and the embodiments of the disclosure do not limit the sources and scenarios of the video. In an example embodiment, the video may be alternatively considered as an image set with consecutive frames, and processing of the video may be processing of each frame of an image in the image set. In an example embodiment of the disclosure, a frame-level feature is a video feature extracted from each frame of a video image of the video, and the frame-level video feature sequence is a sequence formed by combining a video feature of each frame of video image. For example, a video feature of each frame of an image of the video may be extracted by using a convolutional neural network, and the frame-level video feature sequence is obtained based on the extracted video feature of each frame of an image. As an example, for a video with m frames of images, a video feature is extracted from each frame of a video image of the video. For example, a video feature extracted from the first frame of an image of the video is v1, a video feature extracted from the second frame of an image of the video is v2, . . . , and a video feature extracted from the mthframe of an image of the video is vm. A frame-level video feature sequence may be obtained based on the extracted video feature of each frame of an image, that is, V={v1, v2, . . . , vm}. In an example embodiment of the disclosure, an illustrative implementation for step S101may be as follows. A convolutional neural network feature is extracted for each frame of the video by using a convolutional neural network (CNN), to obtain the frame-level video feature sequence corresponding to the video, that is, V={v1, v2, . . . , vm}, and the frame-level video feature sequence is directly used in operation S102. In an example embodiment, there may be a plurality of CNNs for extracting the CNN feature, and choices may be made by a person skilled in the art according to the actual situations. This is not limited in an example embodiment of the disclosure. In an example embodiment of the disclosure, another illustrative implementation operation S101may be as follows. The video feature sequence is a video feature sequence including time series information. That is, after the CNN feature (that is, a frame-level video feature) is extracted for each frame of the video by using the CNN, to obtain a CNN feature sequence (that is, the frame-level video feature sequence), time series information of the extracted CNN feature sequence is extracted and fused by using a recurrent neural network, to obtain a frame-level video feature sequence corresponding to the video and having the time series information. The frame-level video feature sequence with the time series information is provided in a manner such that time series information of the frame-level video feature sequence is extracted based on the frame-level video feature sequence V={v1, v2, . . . 
, vm} and according to a time series relationship between the frame-level video features (v1 to vm) in the time direction, and the extracted time series information is fused with the frame-level video feature sequence. As an example, for a video with m frames, after a CNN feature sequence V={v1, v2, . . . , vm} is obtained, time series information in the sequence V is found by using a recurrent neural network, and the time series information is embedded into the sequence V. An execution process may be represented as: h_i = RNN(v_i, h_{i−1}), where RNN represents a general calculation process of the recurrent neural network, and h_{i−1} represents the frame-level video feature of the first i−1 frames in which time series information is embedded. After the i-th frame of CNN feature is inputted, the frame-level video feature h_i of the first i frames in which the time series information is embedded is obtained. The features obtained in this way, from the first frame (i=1) to the m-th frame, are combined to finally obtain a video feature sequence including the time series information, that is, H={h1, h2, . . . , hm}. In an example embodiment of the disclosure, the video feature sequence including the time series information, H={h1, h2, . . . , hm}, is used for performing operation S102, and the accuracy and the reliability of subsequent video processing may be improved by using the video feature sequence including the time series information. In an example embodiment, the recurrent neural network for extracting and fusing the time series information may be a recurrent neural network based on a long short-term memory (LSTM) unit, or the like. FIG. 2 is a framework and a flowchart of generating natural language description information according to an embodiment of the disclosure. In an example embodiment of the disclosure, when the electronic device includes an encoder, operation S101 may be performed by the encoder in the electronic device as shown in FIG. 2. The encoder may include a CNN, and after a video is inputted to the encoder, a frame-level video feature sequence corresponding to the video is outputted. Specifically, the video is inputted to the encoder, that is, inputted to the CNN in the encoder, the frame-level video feature sequence corresponding to the video is extracted by using the CNN, the CNN outputs the extracted frame-level video feature sequence as an output of the encoder, and operation S102 is performed by using the video feature sequence outputted by the encoder. Alternatively, the encoder may include a CNN and a recurrent neural network, and after a video is inputted to the encoder, a frame-level video feature sequence corresponding to the video and including time series information is outputted, as shown in the encoder in FIG. 2. Specifically, the video is inputted to the encoder, that is, inputted to the CNN (e.g., corresponding to the CNN in FIG. 2) in the encoder, the frame-level video feature sequence corresponding to the video is extracted by using the CNN, and the CNN outputs the extracted frame-level video feature sequence (a minimal sketch of this two-stage encoder appears below; the remaining data flow of the encoder is described immediately after it).
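The following is a minimal, illustrative sketch of such an encoder. It is not the patented implementation: the backbone choice (ResNet-18), the feature dimensions, and the use of a GRU in place of the LSTM-based recurrent layer are assumptions made only for brevity.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class FrameEncoder(nn.Module):
    """Per-frame CNN features v_1..v_m fused by a recurrent layer into H = {h_1..h_m}."""
    def __init__(self, feat_dim=512, hidden_dim=512):
        super().__init__()
        backbone = models.resnet18(weights=None)                    # assumed backbone
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])   # drop the classification head
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)   # stands in for the LSTM-based unit

    def forward(self, frames):                      # frames: (batch, m, 3, H, W)
        b, m = frames.shape[:2]
        v = self.cnn(frames.flatten(0, 1)).flatten(1)   # (b*m, feat_dim) -> V = {v_1..v_m}
        v = v.view(b, m, -1)
        h, _ = self.rnn(v)                              # h_i = RNN(v_i, h_{i-1})
        return h                                        # H = {h_1..h_m}, time series information fused
```

For example, a batch of clips with m = 16 frames resized to 224×224 would produce H of shape (batch, 16, 512), which is then consumed by operation S102.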
The extracted frame-level video feature sequence is inputted to the recurrent neural network (e.g., corresponding to modules such as h_{i−1} and h_i in FIG. 2) in the encoder, time series information of the extracted CNN feature sequence is extracted and fused by using the recurrent neural network, the recurrent neural network outputs the video feature sequence including the time series information as an output of the encoder, and operation S102 is performed by using the video feature sequence outputted by the encoder. Further, in operation S102 in an example embodiment of the disclosure, when the electronic device includes a part-of-speech sequence generator, the global part-of-speech sequence feature of the video may be generated by using the part-of-speech sequence generator in the electronic device according to the video feature sequence. That is, potential parts of speech of the natural language description of the video are predicted according to the video feature sequence outputted in operation S101, to generate the global part-of-speech sequence feature. In an example embodiment of the disclosure, a global part-of-speech refers to the parts of speech corresponding to the natural language description information of the video, a global part-of-speech sequence is a sequence of a combination of the parts of speech, and the global part-of-speech sequence feature is a feature of the sequence of the combination of the parts of speech. A part of speech is an attribute of a character, a word, or a phrase, and a plurality of parts of speech are defined in various languages. As an example, Chinese includes, but is not limited to: parts of speech of noun, verb, adjective, classifier, adverb, preposition, and the like; English includes, but is not limited to: parts of speech of noun, verb, gerund, adjective, adverb, article, preposition, and the like; and in another language, other types of parts of speech may also be included. Details are not described herein. A part-of-speech sequence is defined relative to a sentence described in a natural language, and the sentence is usually formed by two or more words. A part-of-speech sequence feature is a combination of the part-of-speech features of the words in the sentence. For example, if potential content of a video is “a man is shooting . . . ”, a possible part-of-speech sequence feature is a feature corresponding to [article, noun, verb . . . ]. It would be understood that in a specific application, English letters may be used for representing the parts of speech. For example, ‘art.’ represents article, ‘n.’ represents noun, and ‘v.’ represents verb, that is, the part-of-speech sequence feature is the feature corresponding to [art., n., v. . . . ]. In an example embodiment, to obtain the global part-of-speech sequence feature according to the video feature sequence, operation S102 may include the following operations. Operation S1021. Determine a fused feature of the video according to the video feature sequence. Operation S1022. Generate the global part-of-speech sequence feature of the video based on the fused feature of the video by using a first neural network. The fused feature is a fused video feature obtained after fusion processing is performed on the video features in the video feature sequence. There may be a plurality of fusion processing manners that may be used. This is not limited in an example embodiment of the disclosure. For ease of understanding, the following provides two illustrative implementations as examples.
In a first illustrative implementation, in operation S1021, a video feature sequence may be transformed into a fused feature ϕ^(Z) by using an average feature algorithm, that is, an average value of the video features in the video feature sequence is calculated, to obtain the fused feature ϕ^(Z). Subsequently, the fused feature is inputted to the first neural network, and the global part-of-speech sequence feature of the video is outputted. The fused features inputted at different moments of the first neural network may be the same fused feature ϕ^(Z). In a second illustrative implementation, in operation S1021, the video feature sequence obtained in operation S101 may be respectively integrated into different fused features corresponding to the moments of the first neural network (for example, a fused feature ϕ_t^(Z) corresponding to a moment t) by using a nonlinear network such as a network with an attention mechanism (or referred to as a fusion network). In an example embodiment of the disclosure, the first neural network may be a recurrent neural network. Fused features need to be inputted to the recurrent neural network at different moments in a processing process. The fused features inputted at the moments of the first neural network are the fused features corresponding to those moments. FIG. 3 is a diagram of a module architecture of a part-of-speech sequence feature generator according to an embodiment of the disclosure. As an example, FIG. 3 shows the first neural network (corresponding to a neural network formed by modules such as h_{t−1}^(Z), h_t^(Z), and h_{t+1}^(Z) in FIG. 3) and the fusion network (corresponding to a network A in FIG. 3), and the first neural network in the example is the recurrent neural network. The fused feature inputted at a moment t−1 of the first neural network is ϕ_{t−1}^(Z), the fused feature inputted at a moment t of the first neural network is ϕ_t^(Z), the fused feature inputted at a moment n of the first neural network is ϕ_n^(Z), and so on. In the second illustrative implementation of the fused feature, weights corresponding to the moments of the first neural network are determined, for example, the weights corresponding to a moment t are α^t. The weights (including a weight corresponding to each moment) are weights of the frame features in the video feature sequence, for example, the weight of the i-th frame feature in the video feature sequence is α_i. The frame features in the video feature sequence are fused according to the weights corresponding to the moments (that is, each frame feature in the video feature sequence has a weight corresponding to each moment; for example, the weight of the i-th frame feature in the video feature sequence that corresponds to the moment t is α_i^t), to obtain fused features of the video corresponding to the moments, that is: ϕ_t^(Z)(H) = Σ_{i=1}^{m} α_i^t h_i, where ϕ_t^(Z)(H) represents the fused feature obtained at the moment t of the first neural network, and α_i^t represents a weight dynamically allocated to the i-th frame feature by an attention mechanism at the moment t, which meets: Σ_{i=1}^{m} α_i^t = 1. It would be understood that a larger weight indicates that the corresponding frame feature is more helpful for prediction of the current part of speech. In an example embodiment of the disclosure, the weight corresponding to the current moment may be obtained according to the part-of-speech sequence feature determined at the previous moment and the frame features in the video feature sequence; a minimal sketch of this attention-based fusion is given below, and the exact computation of α_i^t follows it.
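The sketch below illustrates this soft attention fusion over frame features. It is only an illustration consistent with the formulas in this section, not the patented implementation; the layer sizes and the single-query formulation are assumptions.

```python
import torch
import torch.nn as nn

class TemporalAttentionFusion(nn.Module):
    """Fuse H = {h_1..h_m} into a per-moment feature phi_t = sum_i alpha_i^t * h_i."""
    def __init__(self, feat_dim=512, query_dim=512, attn_dim=256):
        super().__init__()
        self.W = nn.Linear(query_dim, attn_dim, bias=False)  # applied to the previous-moment feature
        self.U = nn.Linear(feat_dim, attn_dim, bias=True)    # applied to each frame feature h_i
        self.w = nn.Linear(attn_dim, 1, bias=False)          # produces the scores e_i^t

    def forward(self, H, query):             # H: (batch, m, feat_dim), query: (batch, query_dim)
        e = self.w(torch.tanh(self.W(query).unsqueeze(1) + self.U(H))).squeeze(-1)  # (batch, m)
        alpha = torch.softmax(e, dim=-1)     # weights sum to 1 over the m frames
        phi_t = torch.bmm(alpha.unsqueeze(1), H).squeeze(1)  # (batch, feat_dim)
        return phi_t, alpha
```

Here the query would be the part-of-speech sequence feature determined at the previous moment, h_{t−1}^(Z), matching the weight computation given next.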
Specifically, α_i^t may be obtained as follows: α_i^t = exp(e_i^t) / Σ_{j=1}^{m} exp(e_j^t), with e_i^t = w^T tanh(W h_{t−1}^(Z) + U h_i + b), where w^T, W, U, and b are trainable parameters (for example, when the weights are allocated by using the attention mechanism network, w^T, W, U, and b are parameters learned from a process of training the attention mechanism network), exp(·) represents an exponential function, tanh(·) represents a hyperbolic tangent function, or may be used as an activation function, h_{t−1}^(Z) represents the part-of-speech sequence feature determined at the previous moment, and h_i represents a frame feature in the video feature sequence. Further, in operation S1022, the fused feature obtained in operation S1021 is inputted to the first neural network, and the global part-of-speech sequence feature of the video is outputted. In an example embodiment of the disclosure, the first neural network may be a recurrent neural network. Specifically, as shown in FIG. 3, the first neural network may include one layer of LSTM unit (corresponding to a neural network formed by modules such as h_{t−1}^(Z), h_t^(Z), and h_{t+1}^(Z) in FIG. 3). An execution process may be represented as: h_t^(Z), c_t^(Z) = LSTM^(Z)([E(z_{t−1}), ϕ_t^(Z)(H)], h_{t−1}^(Z)), where LSTM^(Z) represents a related operation of the one-layer LSTM unit in the first neural network, z_{t−1} is the part of speech predicted at the previous moment, or may be represented by the memory state c_{t−1}^(Z) of the LSTM unit at the previous moment, ϕ_t^(Z)(H) represents the fused feature corresponding to the moment t of the first neural network, or may be replaced with the same ϕ^(Z)(H) at all moments, h_{t−1}^(Z) represents the part-of-speech sequence feature determined at the previous moment, and also corresponds to the hidden state of the long short-term memory unit at the previous moment, E(·) represents mapping an input to a vector space, and [·] represents a cascade operation. As shown in FIG. 3, the hidden state h_t^(Z) and the memory state c_t^(Z) of the LSTM unit at the current moment t respectively represent the part-of-speech sequence feature determined at the current moment t and the part of speech z_t predicted at the current moment t. In this way, as shown in FIG. 3, each part of speech may be predicted, for example, in FIG. 3, z_{t−1} outputted by h_{t−1}^(Z), z_t outputted by h_t^(Z), z_{t+1} outputted by h_{t+1}^(Z), and so on, and a part-of-speech sequence Z={z1, z2, . . . , zn} and a global part-of-speech sequence feature are obtained. Specifically, after the first neural network determines that the entire part-of-speech sequence has been generated, the hidden state at the latest moment includes information about the entire sequence, that is, the global part-of-speech sequence feature: φ = h_n^(Z). In an example embodiment of the disclosure, as shown in FIG. 2, operation S102 may be performed by using a part-of-speech sequence generator. The part-of-speech sequence generator may include a network (the attention mechanism network A) generating the fused feature and the first neural network. After the video feature sequence is inputted to the part-of-speech sequence generator, the global part-of-speech sequence feature is outputted, as shown in the part-of-speech sequence generator in FIG. 2. Specifically, a video feature sequence H={h1, h2, . . . , hm} (corresponding to the video features including the time series information outputted by the modules such as h_{i−1} and h_i in FIG. 2, or, in another embodiment, V={v1, v2, . . . , vm} directly outputted by the CNN, and for an illustrative application manner, reference may be made to the above descriptions of H={h1, h2, . . .
, hm}, and details are not described below again) is inputted to the part-of-speech sequence generator, that is, inputted to the attention mechanism network A in the part-of-speech sequence generator, the video feature sequence H={h1, h2, . . . , hm} is fused by using the attention mechanism network A, and the attention mechanism network A outputs a fused feature ϕ^(Z) and inputs the fused feature ϕ^(Z) to the first neural network. The first neural network outputs a predicted global part-of-speech sequence feature φ, for example, the feature corresponding to [article, noun, verb . . . ] shown in FIG. 2, the global part-of-speech sequence feature is used as an output of the part-of-speech sequence generator, and operation S103 is performed by using the global part-of-speech sequence feature outputted by the part-of-speech sequence generator. In an example embodiment of the disclosure, the probability of predicting each part of speech correctly is represented as follows: P(z_t | z_{<t}, V; θ_z) = Softmax(W_z h_t^(Z) + b_z), where W_z and b_z represent learnable parameters, for example, parameters learned from a process of training the part-of-speech sequence generator, and θ_z represents all parameters of the part-of-speech sequence generator. P(z_t | z_{<t}, V; θ_z) represents the probability that the current part of speech z_t is correctly predicted for a given video V under the premise that the partial part-of-speech sequence z_{<t}={z1, z2, . . . , z_{t−1}} has been predicted. Further, in operation S103 in an example embodiment of the disclosure, the natural language description information of the video is generated according to the global part-of-speech sequence feature and the video feature sequence. FIG. 4 is a diagram of a module architecture of a decoder according to an embodiment of the disclosure. In an example embodiment of the disclosure, referring to FIG. 4, operation S103 may include the following operations. Operation S1031. Determine a fused feature of the video according to the video feature sequence. Operation S1032. Generate the natural language description information of the video according to the global part-of-speech sequence feature and the fused feature of the video by using a second neural network. For an illustrative implementation of operation S1031, reference may be made to operation S1021. As shown in FIG. 4, if the fused feature is determined by using the attention mechanism, in operation S1031, the weights corresponding to the moments of the second neural network (e.g., corresponding to a neural network formed by modules such as h_{t−1}^(S1), h_{t−1}^(S2), h_t^(S1), and h_t^(S2) in FIG. 4) may be calculated by using the hidden states of the two layers. Weights corresponding to the moments of the second neural network are determined, for example, the weights corresponding to a moment t are β^t. The weights (including a weight corresponding to each moment) are weights of the frame features in the video feature sequence, for example, the weight of the i-th frame feature in the video feature sequence is β_i. The frame features in the video feature sequence are fused according to the weights corresponding to the moments (that is, each frame feature in the video feature sequence has a weight corresponding to each moment; for example, the weight of the i-th frame feature in the video feature sequence that corresponds to the moment t is β_i^t), to obtain fused features of the video corresponding to the moments: ϕ_t^(S)(H) = Σ_{i=1}^{m} β_i^t h_i, where ϕ_t^(S)(H) represents the fused feature obtained at the moment t of the second neural network.
β_i^t represents a weight dynamically allocated to the i-th frame feature by an attention mechanism at the moment t, which meets: Σ_{i=1}^{m} β_i^t = 1. For the other corresponding parts, reference may be made to the description of operation S1021, and details are not described herein again. Further, in operation S1032, the global part-of-speech sequence feature obtained in operation S102 and the fused feature obtained in operation S1031 are inputted to the second neural network, and the natural language description information of the video is outputted. In an example embodiment of the disclosure, operation S1032 may include the following operations. Operation SA. Obtain prediction guided information at a current moment in the global part-of-speech sequence feature according to word information corresponding to a previous moment and the global part-of-speech sequence feature. Operation SB. Obtain word information corresponding to the current moment according to the fused feature of the video and the prediction guided information by using the second neural network. Operation SC. Generate the natural language description information of the video according to the word information corresponding to the moments. In an example embodiment of the disclosure, the word information may include, but is not limited to, a character, a word, or a phrase corresponding to a natural language. Specifically, operation SA may be implemented by using a cross-gating mechanism: ψ = σ(W s_{t−1} + b) φ, where s_{t−1} represents the word information predicted at the previous moment, W and b represent learnable parameters, for example, parameters learned from a process of training the cross-gating mechanism network, and σ(·) represents a nonlinear activation function. The function of operation SA is to use the word predicted at the previous moment to enhance, within the global part-of-speech sequence feature, the prediction guided information related to the word to be predicted at the current moment, and to use this prediction guided information to guide the prediction of the word at the current moment in operation SB. In an example embodiment of the disclosure, as shown in FIG. 4, operation SA may be implemented by using a context guided (CG), that is, the prediction guided information at the current moment in the global part-of-speech sequence feature is obtained according to the word information determined at the previous moment and the global part-of-speech sequence feature by using the CG. As shown in FIG. 4, the second neural network may be a recurrent neural network. Specifically, the recurrent neural network may include double-layer LSTM units (corresponding to a first layer formed by modules such as h_{t−1}^(S1) and h_t^(S1) and a second layer formed by modules such as h_{t−1}^(S2) and h_t^(S2) in FIG. 4), that is, in operation SB, the word information corresponding to the current moment is obtained according to the fused feature of the video and the prediction guided information by using the double-layer LSTM units; a minimal sketch of the cross-gating step appears below, and the LSTM recurrences are given next.
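A minimal sketch of the cross-gating step of operation SA is shown below. It is illustrative only and is written to be consistent with the formula ψ = σ(W s_{t−1} + b) φ above; taking σ to be the sigmoid function, applying the gate element-wise, embedding the previous word, and the chosen dimensions are all assumptions, since the disclosure only specifies a nonlinear activation.

```python
import torch
import torch.nn as nn

class CrossGating(nn.Module):
    """Gate the global part-of-speech feature phi with the previously predicted word s_{t-1}."""
    def __init__(self, word_dim=300, phi_dim=512):
        super().__init__()
        self.gate = nn.Linear(word_dim, phi_dim)     # the W and b of the cross-gating mechanism

    def forward(self, prev_word_emb, phi):           # prev_word_emb: (batch, word_dim), phi: (batch, phi_dim)
        g = torch.sigmoid(self.gate(prev_word_emb))  # sigma(W s_{t-1} + b)
        psi = g * phi                                # prediction guided information for the current moment
        return psi
```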
The execution process of the double-layer LSTM units may be represented as: h_t^(S1), c_t^(S1) = LSTM^(S1)([E(s_{t−1}), ψ], h_{t−1}^(S1)) (Formula 1) and h_t^(S2), c_t^(S2) = LSTM^(S2)([h_t^(S1), ϕ_t^(S)(H)], h_{t−1}^(S2)) (Formula 2), where LSTM^(S1) and LSTM^(S2) respectively represent the related operations of the first layer of LSTM unit and the second layer of LSTM unit in the second neural network, s_{t−1} is the word information predicted at the previous moment, or may be represented by the hidden state h_{t−1}^(S2) of the second layer of LSTM unit at the previous moment, ψ represents the prediction guided information related to the word to be predicted at the current moment and outputted by the CG, h_{t−1}^(S1) represents the hidden state of the first layer of LSTM unit at the previous moment and may be used as an input of the first layer of LSTM unit at the current moment, E(·) represents mapping the input to a vector space, and [·] represents a cascade operation. As shown in FIG. 4, the memory state c_t^(S1) and the hidden state h_t^(S1) (corresponding to an output at the right side of h_t^(S1) in FIG. 4) of the first layer of LSTM unit at the current moment may be obtained by using Formula 1 and used as inputs of the first layer of LSTM unit at the next moment, and the hidden state h_t^(S1) of the first layer of LSTM unit at the current moment is used as an input (corresponding to an output at the upper side of h_t^(S1) in FIG. 4) of the second layer of LSTM unit at the current moment. ϕ_t^(S)(H) represents the fused feature corresponding to the moment t of the second neural network, or may be replaced with the same ϕ^(S)(H) at all moments, and h_{t−1}^(S2) represents the hidden state of the second layer of LSTM unit at the previous moment and may be used as an input of the second layer of LSTM unit at the current moment. The memory state c_t^(S2) and the hidden state h_t^(S2) of the second layer of LSTM unit at the current moment may be obtained by using Formula 2, and the hidden state h_t^(S2) (corresponding to an output at the upper side of h_t^(S2) in FIG. 4) of the second layer of LSTM unit at the current moment is the word information s_t predicted at the current moment. It would be understood that for another recurrent neural network, the two outputs of the layers in the process are hidden states h, and prediction of word information may also be implemented. In this way, the second neural network may predict the natural language description word by word, for example, in FIG. 4, s_{t−1} outputted by h_{t−1}^(S2), s_t outputted by h_t^(S2), and so on. Further, in operation SC, the natural language description information S={s1, s2, . . . , sn} of the video may be generated according to the word information s1, s2, . . . , sn corresponding to the moments. In an example embodiment of the disclosure, as shown in FIG. 2, when the electronic device includes a decoder, operation S103 may be performed by the decoder in the electronic device. The decoder may include a network (the attention mechanism network A) generating the fused feature and the second neural network, and after the video feature sequence and the global part-of-speech sequence feature are inputted to the decoder, the natural language description information of the video is outputted, as shown in the decoder in FIG. 2. Specifically, a video feature sequence H={h1, h2, . . . , hm} (corresponding to the video features including the time series information outputted by the modules such as h_{i−1} and h_i in FIG. 2, or, in another embodiment, V={v1, v2, . . . , vm} directly outputted by the CNN, and for an illustrative implementation, reference may be made to the above descriptions of H={h1, h2, . . .
, hm}, and details are not described below again) and a global part-of-speech sequence feature φ are inputted to the decoder. The video feature sequence H={h1, h2, . . . , hm} is inputted to the attention mechanism network A in the decoder, the video feature sequence H={h1, h2, . . . , hm} is fused by using the attention mechanism network A, the attention mechanism network A outputs a fused feature ϕ^(S), and the fused feature ϕ^(S) is inputted to the second neural network. The global part-of-speech sequence feature φ is inputted to a CG (corresponding to the guided module in FIG. 2) in the decoder, the prediction guided information related to the word to be predicted at the current moment in the global part-of-speech sequence feature is enhanced by the CG by using the word predicted at the previous moment, the CG outputs the prediction guided information, and the prediction guided information is inputted to the second neural network. The second neural network outputs the predicted word information, for example, [s1, s2, . . . , sn] shown in FIG. 2, as an output of the decoder, for example, the output sentence "a man is shooting" shown in FIG. 2. In an example embodiment of the disclosure, the network generating the fused feature included in the part-of-speech sequence generator and the network (the attention mechanism network A) generating the fused feature included in the decoder may be the same or may be different, that is, they may be disposed separately or may be encapsulated into one network. This is not limited in an example embodiment of the disclosure. In an example embodiment of the disclosure, the probability of predicting each piece of word information correctly is represented as follows: P(s_t | s_{<t}, V; θ_s) = Softmax(W_s h_t^(S2) + b_s), where W_s and b_s represent learnable parameters, for example, parameters learned from a process of training the decoder, and θ_s represents all parameters of the decoder. The Softmax function converts the hidden state of the second layer of LSTM unit at the current moment in the decoder into a probability distribution over the word information, and the most probable word information is predicted from the probability distribution. When the decoder meets a termination condition, the complete natural language description information is obtained. In an example embodiment of the disclosure, in addition to the extraction manner and the guide manner of the global part-of-speech sequence feature described above, any other neural network and nonlinear network that may be used in the method for generating video description information and that are used for improving the accuracy of video description also fall within the protection scope of an example embodiment of the disclosure. In an example embodiment of the disclosure, the entire network shown in FIG. 2 may be trained by minimizing a model loss function, min_θ ℒ(θ), in an end-to-end manner. Specifically, the loss function in the training process may be represented as: ℒ(θ_z, θ_s) = λ ℒ(θ_z) + (1−λ) ℒ(θ_s), with ℒ(θ_z) = Σ_{k=1}^{N} {−log P(Z^k | V^k; θ_z)} and ℒ(θ_s) = Σ_{k=1}^{N} {−log P(S^k | V^k; θ_s)}, where λ is a balance parameter used for balancing the impact of the losses of the part-of-speech sequence generator and the decoder, and N represents the quantity of pieces of training data.
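A minimal sketch of this weighted two-term training objective is given below. It is only an illustration: it assumes the per-sample negative log-likelihoods of the part-of-speech sequence generator and of the decoder (whose factorizations are given next) are already available as tensors, and the value of λ is an assumption.

```python
import torch

def combined_loss(nll_pos, nll_sent, lam=0.5):
    """L(theta_z, theta_s) = lambda * L(theta_z) + (1 - lambda) * L(theta_s).

    nll_pos:  (N,) tensor of -log P(Z^k | V^k; theta_z) per training sample
    nll_sent: (N,) tensor of -log P(S^k | V^k; theta_s) per training sample
    lam:      balance parameter lambda (assumed value)
    """
    loss_z = nll_pos.sum()     # loss of the part-of-speech sequence generator
    loss_s = nll_sent.sum()    # loss of the decoder
    return lam * loss_z + (1.0 - lam) * loss_s

# Example usage: loss = combined_loss(nll_pos, nll_sent, lam=0.5); loss.backward()
```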
For each piece of training data, the likelihood terms of the part-of-speech sequence generator and the decoder may be represented as: P(Z^k | V^k; θ_z) = Σ_{t=1}^{n} P(z_t^k | z_{<t}^k, V^k; θ_z) and P(S^k | V^k; θ_s) = Σ_{t=1}^{n} P(s_t^k | s_{<t}^k, V^k; θ_s). In an example embodiment of the disclosure, by using the part-of-speech sequence generator and the CG, a semantic relationship between the part-of-speech sequence of the natural language description information and the video feature sequence of a video may be effectively mined, and a larger feature utilization space is provided for the decoder. Compared with the related art, in which only a video-level visual feature is used and the impact of parts of speech in a natural language is ignored, in an example embodiment of the disclosure accurate natural language description information may be generated for the video, and the performance of generating video description information is improved, thereby helping to understand and analyze a video, for example, performing video classification and retrieval, and achieving potential economic benefits. The following describes an implementation process of the video description information generation method provided in the disclosure as a whole by using an example in which the content of a video is that a man is playing basketball. As shown in FIG. 2, video frames of the video (corresponding to the input video in FIG. 2) are inputted to the encoder, that is, inputted to a CNN (corresponding to the CNN in FIG. 2) in the encoder, a frame-level video feature sequence corresponding to the video is extracted by using the CNN, and the CNN outputs the extracted frame-level video feature sequence. The extracted frame-level video feature sequence is inputted to a recurrent neural network (corresponding to modules such as h_{i−1} and h_i in FIG. 2) in the encoder, time series information of the extracted CNN feature sequence is extracted and fused by using the recurrent neural network, and the recurrent neural network outputs the video feature sequence including the time series information (for ease of description, referred to as the advanced video feature sequence) as an output of the encoder. The advanced video feature sequence outputted by the encoder is inputted to the part-of-speech sequence generator, that is, inputted to the attention mechanism network A in the part-of-speech sequence generator, the advanced video feature sequence is fused by using the attention mechanism network A, the attention mechanism network A outputs a fused feature ϕ^(Z), and the fused feature is inputted to a single-layer LSTM network. The single-layer LSTM network outputs a predicted global part-of-speech sequence feature φ, for example, the feature corresponding to [article, noun, verb . . . ] shown in FIG. 2, and the global part-of-speech sequence feature is used as an output of the part-of-speech sequence generator. The advanced video feature sequence outputted by the encoder and the global part-of-speech sequence feature φ outputted by the part-of-speech sequence generator are inputted to the decoder. The advanced video feature sequence is inputted to the attention mechanism network A in the decoder, the advanced video feature sequence is fused by using the attention mechanism network A, the attention mechanism network A outputs a fused feature ϕ^(S), and the fused feature is inputted to a double-layer LSTM memory network.
The global part-of-speech sequence feature φ is inputted to the CG (corresponding to the guided module in FIG. 2) in the decoder, the prediction guided information related to the word to be predicted at the current moment in the global part-of-speech sequence feature is enhanced by the CG by using the word predicted at the previous moment, the CG outputs the prediction guided information, and the prediction guided information is inputted to the double-layer LSTM network. The double-layer LSTM network outputs the predicted word information, for example, [s1, s2, . . . , sn] shown in FIG. 2, as an output of the decoder. Finally, the decoder outputs the natural language description information "a man is shooting". In an example embodiment, the encoder, the part-of-speech sequence generator, and the decoder may be integrated into a function network. During training, the encoder, the part-of-speech sequence generator, and the decoder may be trained separately, or the function network may be directly trained. In an example embodiment, the method may be applied in an online process, in which a video is inputted to the function network, and the natural language description information may be automatically outputted. The video description information generation method (or the function module) provided in the embodiments of the disclosure may be deployed on a terminal for describing a video that is shot in real time, downloaded, or locally stored, or may be deployed on a cloud server for describing a video that is in a database or received. The video description information generation method provided in the embodiments of the disclosure may be used for providing a video content understanding service, or may be deployed on a video website for video classification and rapid retrieval, or combined with a speech system for assisting the visually impaired. Specifically, the embodiments of the disclosure further provide a video processing method based on natural language description information of a video, performed by an electronic device described below. FIG. 1B is a schematic flowchart of a video processing method based on natural language description information of a video according to an embodiment of the disclosure. The method includes the following operations. Operation S201. Obtain natural language description information of a video, the natural language description information of the video being obtained by using the video description information generation method according to any one of the foregoing embodiments. The video may be a video shot in real time, for example, when a user behavior needs to be classified in an intelligent monitoring and behavior analysis scenario; in this case, the video may be a video shot by a camera in real time. Alternatively, the video may be a video obtained from a network, for example, when a video needs to be classified in a video website or application and rapid retrieval or video recommendation may be implemented based on a classification result; in this case, the video may be a video that is obtained from the network and needs to be classified. Alternatively, the video may be a locally stored video. A person skilled in the art would understand that the foregoing several scenarios and video sources are only examples, appropriate changes based on these examples may also be applicable to the disclosure, and the embodiments of the disclosure do not limit the sources and scenarios of the video.
In an example embodiment of the disclosure, it may be alternatively considered that the video is inputted to the function network, and the natural language description information of the video is automatically outputted. For an illustrative implementation, reference may be made to description of the embodiments above, and details are not described herein again. Operation S202. Process the video based on the natural language description information. Specifically, the processing the video includes at least one of the following:video classification, video retrieval, and generating prompt information corresponding to the video. For example, the processing the video is to perform video classification on the video. In an implementation, the video may be classified based on the generated natural language description information by using a classification network. Specifically, in the classification network, a text feature may be first extracted from the natural language description information by using a feature extraction network, and then classification is performed based on the text feature by using a classifier. Specifically, after the natural language description information is obtained in operation S201, the natural language description information is inputted to the classification network, that is, inputted to the feature extraction network in the classification network, a text feature of the natural language description information is outputted, the text feature outputted by the feature extraction network is inputted to the classifier in the classification network, and a classification result of the video is outputted and used as an output of the classification network. According to the video processing method based on natural language description information of a video provided in the embodiments of the disclosure, when the method is used for video classification, a video may be automatically recognized, natural language description information of the video is outputted, and the video may be classified based on the natural language description information of the video, thereby effectively improving efficiency and precision of video classification. For example, the processing the video is to perform video retrieval on the video. In an implementation, after the natural language description information of the video is obtained, the natural language description information of the video is pre-stored. When video retrieval is performed, a retrieval condition is received, and the retrieval condition matches the stored natural language description information of the video. When matching succeeds, the video corresponding to the natural language description information is obtained based on the successfully matched natural language description information, and the obtained video is used as a retrieval result for displaying. In an implementation, to improve retrieval efficiency, the video classification method may be combined, and after the natural language description information of the video is obtained, the natural language description information of the video is classified and stored in advance. When video retrieval is performed, a retrieval condition is received, classification on a video corresponding to the retrieval condition is determined, and the retrieval condition matches the stored natural language description information of the video in the corresponding classification. 
When matching succeeds, the video corresponding to the natural language description information is obtained based on the successfully matched natural language description information, and the obtained video is used as a retrieval result for displaying. According to the video processing method based on natural language description information of a video provided in the embodiments of the disclosure, when the method is used for video retrieval, a video may be automatically recognized, natural language description information of the video is outputted, and the video may be retrieved based on the natural language description information of the video, thereby effectively improving efficiency and precision of video retrieval. For example, the processing the video is to generate the prompt information corresponding to the video. In an implementation, after the natural language description information of the video is obtained, the obtained natural language description information is converted into audio information as the prompt information corresponding to the video. A specific implementation of converting natural language description information into audio information is not limited in the embodiments of the disclosure, and a person skilled in the art may set according to an actual situation. The prompt information may be used for assisting user in understanding video content. For example, the prompt information may be used for assisting the visually impaired in understanding video content by using auditory sensation. In another embodiment, the prompt information corresponding to the video generated according to the obtained natural language description information may be alternatively another type of information. According to the video processing method based on natural language description information of a video provided in the embodiments of the disclosure, when the method is used for assisting in video understanding, a video may be automatically recognized, natural language description information of the video is outputted, and prompt information corresponding to the video may be generated based on the natural language description information of the video, thereby effectively assisting a user in understanding a video. A person skilled in the art would understand that the service scenario is only an example, and appropriate changes based on the example may be used in another scenario, or may belong to the spirit or scope of the disclosure. The embodiments of the disclosure further provide a video description information generation apparatus.FIG.5is a schematic structural diagram of a video description information generation apparatus according to an embodiment of the disclosure. As shown inFIG.5, the generation apparatus50may include: an obtaining module501, a first generation module502, and a second generation module503. The obtaining module501is configured to obtain a frame-level video feature sequence corresponding to a video. The first generation module502is configured to generate a global part-of-speech sequence feature of the video according to the video feature sequence. The second generation module503is configured to generate natural language description information of the video according to the global part-of-speech sequence feature and the video feature sequence. In an implementation, the video feature sequence is a video feature sequence including time series information. 
In an implementation, the first generation module502is configured to determine a fused feature of the video according to the video feature sequence, and generate the global part-of-speech sequence feature of the video according to the fused feature of the video. In an implementation, the first generation module502is configured to determine weights corresponding to moments of a first neural network, weights being weights of frame features in the video feature sequence; and respectively fuse the frame features in the video feature sequence according to the weights corresponding to the moments, to obtain the fused features of the video that correspond to the moments. In an implementation, the first generation module502is configured to obtain the weight corresponding to a current moment according to a part-of-speech sequence feature determined at a previous moment and the frame features in the video feature sequence. In an implementation, the first neural network is an LSTM network. In an implementation, the second generation module503is configured to determine a fused feature of the video according to the video feature sequence, and generate natural language description information of the video according to the global part-of-speech sequence feature and the fused feature of the video. In an implementation, the second generation module503is configured to determine weights corresponding to moments of a second neural network, weights being weights of frame features in the video feature sequence; and respectively fuse the frame features in the video feature sequence according to the weights corresponding to the moments, to obtain the fused features of the video that correspond to the moments. In an implementation, the second generation module503is configured to obtain prediction guided information at a current moment in the global part-of-speech sequence feature according to word information corresponding to a previous moment and the global part-of-speech sequence feature; obtain word information corresponding to the current moment according to the fused feature of the video and the prediction guided information by using a second neural network; and generate the natural language description information of the video according to word information corresponding to the moments. In an implementation, the second neural network is an LSTM network. In an implementation, the second generation module503is configured to obtain the prediction guided information at the current moment in the global part-of-speech sequence feature according to the word information determined at the previous moment and the global part-of-speech sequence feature by using a CG. According to the video description information generation apparatus provided in an example embodiment of the disclosure, a semantic relationship between a part-of-speech sequence of natural language description information and a video feature sequence of a video may be effectively mined, and a larger feature utilization space is provided for the decoder. Compared with the related art in which only a video-level visual feature is used, but impact of part of speech in a natural language is ignored, in an example embodiment of the disclosure, accurate natural language description information may be generated for the video, and performance of generating video description information is improved, thereby helping to understand and analyze a video, for example, performing video classification and retrieval, and achieving potential economic benefits. 
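As an illustration of how the three modules of the generation apparatus 50 may be composed, a minimal sketch is given below. It is not the patented apparatus: the class and method names are assumptions, and the module internals are stand-ins for the encoder, part-of-speech sequence generator, and decoder described in the method embodiments.

```python
import torch.nn as nn

class VideoDescriptionGenerator(nn.Module):
    """Composes the obtaining module, the first generation module, and the second generation module."""
    def __init__(self, encoder, pos_generator, decoder):
        super().__init__()
        self.encoder = encoder               # obtaining module 501: frame-level video feature sequence
        self.pos_generator = pos_generator   # first generation module 502: global part-of-speech sequence feature
        self.decoder = decoder               # second generation module 503: natural language description

    def forward(self, frames):
        H = self.encoder(frames)             # video feature sequence (with time series information)
        phi = self.pos_generator(H)          # global part-of-speech sequence feature
        description = self.decoder(H, phi)   # natural language description information
        return description
```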
A person skilled in the art would clearly understand that an implementation principle and a technical effect of the video description information generation apparatus provided in the embodiments of the disclosure are the same as those of the foregoing method embodiments. For the convenience and conciseness of the description, for the parts not mentioned in the apparatus embodiment, reference may be made to the corresponding content in the foregoing method embodiment, and details are not described herein again. The embodiments of the disclosure further provide an apparatus for video processing based on natural language description information of a video, and the apparatus for video processing may include: an obtaining module and a processing module. The obtaining module is configured to obtain natural language description information of a video, the natural language description information of the video being obtained by using the video description information generation method according to any one of the foregoing embodiments. The processing module is configured to process the video based on the natural language description information. In an implementation, the processing the video includes at least one of the following:video classification, video retrieval, and generating prompt information corresponding to the video. A person skilled in the art would clearly understand that an implementation principle and a technical effect of the apparatus for video processing based on natural language description information of a video provided in the embodiments of the disclosure are the same as those of the foregoing method embodiments. For the convenience and conciseness of the description, for the parts not mentioned in the apparatus embodiment, reference may be made to the corresponding content in the foregoing method embodiment, and details are not described herein again. The embodiments of the disclosure further provide an electronic device, including a processor and a memory, the memory storing instructions, the instructions, when executed by the processor, causing the processor to perform the corresponding method in the foregoing method embodiments. In an example, the electronic device may include the encoder shown inFIG.2, the decoder, and the part-of-speech sequence generator. In an example, the electronic device may further include a transceiver. The processor is connected to the transceiver by a bus. In an example embodiment, there may be one or more transceivers. The structure of the electronic device does not constitute a limitation on an example embodiment of the disclosure. The processor may be a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The processor may implement or perform various examples of logic blocks, modules, and circuits described with reference to content disclosed in the embodiments of the disclosure. The processor may alternatively be a combination to implement a computing function, for example, may be a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The bus may include a channel, to transmit information between the foregoing components. The bus may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. 
The bus may be classified as an address bus, a data bus, a control bus, or the like. The memory may be a read only memory (ROM) or another type of static storage device that may store static information and a static instruction; or a random access memory (RAM) or another type of dynamic storage device that may store information and an instruction; or may be an electrically erasable programmable read-only memory (EEPROM), a compact disc (CD)-ROM or another compact-disc storage medium, optical disc storage medium (including a compact disc, a laser disk, an optical disc, a digital versatile disc, a Blu-ray disc, or the like) and magnetic disk storage medium, another magnetic storage device, or any other medium that may be configured to carry or store expected program code in a form of an instruction or a data structure and that is accessible by a computer, but is not limited thereto. According to the electronic device provided in the embodiments of the disclosure, a semantic relationship between a part-of-speech sequence of natural language description information and a video feature sequence of a video may be effectively mined, and a larger feature utilization space is provided for the decoder. Compared with the related art in which only a video-level visual feature is used, but impact of part of speech in a natural language is ignored, in an example embodiment of the disclosure, accurate natural language description information may be generated for the video, and performance of generating video description information is improved, thereby helping to understand and analyze a video, for example, performing video classification and retrieval, and achieving potential economic benefits. The embodiments of the disclosure further provide a storage medium, for example, a computer-readable storage medium, the computer-readable storage medium being configured to store computer instructions, the computer instructions, when run on a computer, causing the computer to perform corresponding operation in the foregoing method embodiments. In the embodiments of the disclosure, a global part-of-speech sequence feature corresponding to a natural language may be extracted effectively from video data and may be used for guiding to generate accurate natural language description, to improve a video description capability. It is to be understood that, although the operations in the flowchart in the accompanying drawings are sequentially shown according to indication of an arrow, the operations are not necessarily sequentially performed according to a sequence indicated by the arrow. Unless explicitly specified in this specification, execution of the operations is not strictly limited in the sequence, and the operations may be performed in other sequences. In addition, at least some operations in the flowcharts in the accompanying drawings may include a plurality of suboperations or a plurality of stages. The suboperations or the stages are not necessarily performed at the same moment, but may be performed at different moments. The suboperations or the stages are not necessarily performed in sequence, but may be performed in turn or alternately with another operation or at least some of suboperations or stages of the another operation. 
According to the video description information generation method and apparatus, the electronic device, and the readable medium provided in the embodiments of the disclosure, in a manner in which a frame-level video feature sequence corresponding to a video is obtained; a global part-of-speech sequence feature of the video is generated according to the video feature sequence; and natural language description information of the video is generated according to the global part-of-speech sequence feature and the video feature sequence. Accordingly, a global part-of-speech sequence feature corresponding to a natural language may be extracted effectively from video data and may be used for guiding to generate accurate natural language description, to improve a video description capability. At least one of the components, elements, modules or units described herein may be embodied as various numbers of hardware, software and/or firmware structures that execute respective functions described above, according to an example embodiment. For example, at least one of these components, elements or units may use a direct circuit structure, such as a memory, a processor, a logic circuit, a look-up table, etc. that may execute the respective functions through controls of one or more microprocessors or other control apparatuses. Also, at least one of these components, elements or units may be specifically embodied by a module, a program, or a part of code, which contains one or more executable instructions for performing specified logic functions, and executed by one or more microprocessors or other control apparatuses. Also, at least one of these components, elements or units may further include or implemented by a processor such as a central processing unit (CPU) that performs the respective functions, a microprocessor, or the like. Two or more of these components, elements or units may be combined into one single component, element or unit which performs all operations or functions of the combined two or more components, elements of units. Also, at least part of functions of at least one of these components, elements or units may be performed by another of these components, element or units. Further, although a bus is not illustrated in the block diagrams, communication between the components, elements or units may be performed through the bus. Functional aspects of the above example embodiments may be implemented in algorithms that execute on one or more processors. Furthermore, the components, elements or units represented by a block or processing operations may employ any number of related art techniques for electronics configuration, signal processing and/or control, data processing and the like. The foregoing descriptions are merely example embodiments of the disclosure and are not intended to limit the disclosure. Any modification, equivalent replacement, or improvement and the like made within the spirit and principle of the disclosure fall within the protection scope of the disclosure. | 61,210 |
11861887 | DETAILED DESCRIPTION Conventional ultrasound systems are large, complex, and expensive systems that are typically only purchased by large medical facilities with significant financial resources. Recently, cheaper and less complex ultrasound imaging devices have been introduced. Such imaging devices may include ultrasonic transducers monolithically integrated onto a single semiconductor die to form a monolithic ultrasound device. Aspects of such ultrasound-on-a chip devices are described in U.S. patent application Ser. No. 15/415,434 titled “UNIVERSAL ULTRASOUND DEVICE AND RELATED APPARATUS AND METHODS,” filed on Jan. 25, 2017 (and assigned to the assignee of the instant application), which is incorporated by reference herein in its entirety. The reduced cost and increased portability of these new ultrasound devices may make them significantly more accessible to the general public than conventional ultrasound devices. The inventors have recognized and appreciated that although the reduced cost and increased portability of ultrasound imaging devices makes them more accessible to the general populace, people who could make use of such devices have little to no training for how to use them. For example, a small clinic without a trained ultrasound technician on staff may purchase an ultrasound device to help diagnose patients. In this example, a nurse at the small clinic may be familiar with ultrasound technology and human physiology, but may know neither which anatomical views of a patient need to be imaged in order to identify medically-relevant information about the patient nor how to obtain such anatomical views using the ultrasound device. In another example, an ultrasound device may be issued to a patient by a physician for at-home use to monitor the patient's heart. In all likelihood, the patient understands neither human physiology nor how to image his or her own heart with the ultrasound device. Accordingly, the inventors have developed assistive ultrasound imaging technology for guiding an operator of an ultrasound device to properly use the ultrasound device. This technology enables operators, having little or no experience operating ultrasound devices, to capture medically relevant ultrasound images and may further assist the operators in interpreting the contents of the obtained images. For example, some of the techniques disclosed herein may be used to: (1) identify a particular anatomical view of a subject to image with an ultrasound device; (2) guide an operator of the ultrasound device to capture an ultrasound image of the subject that contains the particular anatomical view; and (3) analyze the captured ultrasound image to identify medical information about the subject. It should be appreciated that the embodiments described herein may be implemented in any of numerous ways. Examples of specific implementations are provided below for illustrative purposes only. It should be appreciated that these embodiments and the features/capabilities provided may be used individually, all together, or in any combination of two or more, as aspects of the technology described herein are not limited in this respect. A. Instructing an Operator of an Ultrasound Device how to Position the Device The disclosure provides techniques for instructing an operator of an ultrasound device how to position the ultrasound device on a subject to capture a medically relevant ultrasound image. 
Capturing an ultrasound image of a subject that contains a particular anatomical view may be challenging for novice ultrasound device operators. The operator (e.g., a nurse, a technician or a lay person) needs to know not only where to initially position the ultrasound device on the subject (e.g., a patient), but also how to adjust the position of the device on the subject to capture an ultrasound image containing the target anatomical view. In cases where the subject is also the operator, it may be even more challenging for the operator to identify the appropriate view as the operator may not have a clear view of the ultrasound device. Accordingly, certain disclosed embodiments relate to new techniques for guiding the operator to capture an ultrasound image that contains the target anatomical view. The guidance may be provided via a software application (hereinafter “App”) installed on a computing device of the operator (such as a mobile device, a smartphone or smart device, a tablet, etc.). For example, the operator may install the App on a computing device and connect the computing device to an ultrasound device (e.g., using a wireless connection such as BLUETOOTH or a wired connection such as a Lightning cable). The operator may then position the ultrasound device on the subject and the software application (via the computing device) may provide feedback to the operator indicating whether the operator should reposition the ultrasound device and how he/she should proceed to do so. Following the instructions allows a novice operator to capture medically-relevant ultrasound images containing the target anatomical view. In some embodiments, the instructions provided to the operator may be generated at least in part by using state-of-the-art image processing technology such as deep learning. For example, the computing device may analyze a captured ultrasound image using deep learning techniques to determine whether the ultrasound image contains the target anatomical view. If the ultrasound image contains the target anatomical view, the computing device may provide a confirmation to the operator that the ultrasound device is properly positioned on the subject and/or automatically start recording ultrasound images. Otherwise, the computing device may instruct the operator how to reposition the ultrasound device (e.g., “MOVE UP,” “MOVE LEFT,” “MOVE RIGHT,” “ROTATE CLOCKWISE,” “ROTATE COUNTER-CLOCKWISE,” or “MOVE DOWN”) to capture an ultrasound image that contains the target anatomical view. The deep learning techniques described herein may be implemented in hardware, software or a combination of hardware and software. In one embodiment, a deep learning technique is implemented in an App executable on a smart device accessible to the operator. The App may, for example, leverage a display integrated into the smart device to display a user interface screen to the operator. In another embodiment, the App may be executed in a cloud environment and its output communicated to the operator through the smart device. In yet another embodiment, the App may be executed on the ultrasound device itself and the instructions may be communicated to the user either through the ultrasound device itself or a smart device associated with the ultrasound device. Thus, it should be noted that the execution of the App may be at a local or a remote device without departing from the disclosed principles. 
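The capture-classify-instruct feedback loop described above might be sketched, at a high level, as follows. This is a minimal sketch in Python; capture_image, classify_view, and suggest_move are hypothetical stand-ins for the device interface and the deep learning models and are not defined by the disclosure.

    # Minimal sketch of the guidance feedback loop; the callables are hypothetical.
    from typing import Callable

    def guide_operator(capture_image: Callable[[], object],
                       classify_view: Callable[[object], str],
                       suggest_move: Callable[[object], str],
                       target_view: str = "PLAX",
                       max_steps: int = 100) -> bool:
        """Repeatedly capture a frame, classify it, and either confirm or instruct."""
        for _ in range(max_steps):
            frame = capture_image()                       # ultrasound frame from the device
            if classify_view(frame) == target_view:       # does the frame contain the target view?
                print("POSITIONING COMPLETE")             # confirmation; recording could start here
                return True
            print(suggest_move(frame))                    # e.g. "MOVE UP" or "ROTATE CLOCKWISE"
        return False

In practice such a loop would run wherever the App executes (on the smart device, in the cloud, or on the ultrasound device itself) as frames arrive from the probe.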
In some embodiments, techniques for providing instructions to an operator of an ultrasound device regarding how to reposition the ultrasound device to capture an ultrasound image containing a target anatomical view of a subject may be embodied as a method that is performed by, for example, a computing device that is communicatively coupled to an ultrasound device. The computing device may be a mobile smartphone, a tablet, a laptop, a workstation, or any other suitable computing device. The ultrasound device may be configured to transmit acoustic waves into a subject using ultrasonic transducers, detect the reflected acoustic waves, and use them to generate ultrasound data. Example ultrasonic transducers include capacitive micromachined ultrasonic transducers (CMUTs), CMOS ultrasonic transducers (CUTs), and piezoelectric micromachined ultrasonic transducers (PMUTs). The ultrasonic transducers may be monolithically integrated with a semiconductor substrate of the ultrasound device. The ultrasound device may be implemented as, for example, a handheld device or as a patch that is configured to adhere to the subject. In some embodiments, an exemplary method may include obtaining an ultrasound image of a subject captured using the ultrasound device. For example, the ultrasound device may generate ultrasound data and transmit (via a wired or wireless communication link) the ultrasound data to the computing device. The computing device may, in turn, generate the ultrasound image using the received ultrasound data. The method may further include determining whether the ultrasound image contains a target anatomical view using an automated image processing technique. For example, the ultrasound image may be analyzed using the automated image processing technique to identify the anatomical view contained in the ultrasound image. The identified anatomical view may be compared with the target anatomical view to determine whether the identified anatomical view matches the target anatomical view. If the identified anatomical view matches the target anatomical view, then a determination is made that the ultrasound image does contain the target anatomical view. Otherwise, a determination is made that the ultrasound image does not contain the target anatomical view. It should be appreciated that any of a variety of automated image processing techniques may be employed to determine whether an ultrasound image contains the target anatomical view. Example automated image processing techniques include machine learning techniques such as deep learning techniques. In some embodiments, a convolutional neural network may be employed to determine whether an ultrasound image contains the target anatomical view. For example, the convolutional neural network may be trained with a set of ultrasound images labeled with the particular anatomical view depicted in the ultrasound image. In this example, an ultrasound image may be provided as an input to the trained convolutional neural network and an indication of the particular anatomical view contained in the input ultrasound image may be provided as an output. In another example, the convolutional neural network may be trained with a set of ultrasound images labeled with either one or more instructions regarding how to move the ultrasound device to capture an ultrasound image containing the target anatomical view or an indication that the ultrasound image contains the target anatomical view. 
In this example, an ultrasound image may be provided as an input to a trained convolutional neural network and an indication that the ultrasound image contains the target anatomical view or an instruction to provide the operator may be provided as an output. The convolutional neural network may be implemented using a plurality of layers in any suitable combination. Example layers that may be employed in the convolutional neural network include: pooling layers, rectified linear units (ReLU) layers, convolutional layers, dense layers, pad layers, concatenate layers, and/or upscale layers. Examples of specific neural network architectures are provided herein in the Example Deep Learning Techniques section. In some embodiments, the method may further include providing at least one instruction (or one set of instructions) to an operator of the ultrasound device indicating how to reposition the ultrasound device in furtherance of capturing an ultrasound image of the subject that contains the target anatomical view when a determination is made that the ultrasound image does not contain the target anatomical view. The instruction may be provided to the operator in any of a variety of ways. For example, the instruction may be displayed to the operator using a display (e.g., a display integrated into the computing device, such as the operator's mobile device) or audibly provided to the operator using a speaker (e.g., a speaker integrated into the computing device). Example instructions include “TURN CLOCKWISE,” “TURN COUNTER-CLOCKWISE,” “MOVE UP,” “MOVE DOWN,” “MOVE LEFT,” and “MOVE RIGHT.” In some embodiments, the method may further include providing an indication to the operator that the ultrasound device is properly positioned when a determination is made that the ultrasound image contains the target anatomical view. The indication to the operator that the ultrasound device is properly positioned may take any of a variety of forms. For example, a symbol may be displayed to the operator such as a checkmark. Alternatively (or additionally), a message may be displayed and/or audibly played to the operator such as “POSITIONING COMPLETE.” The instructions can be computed based on the current position of the ultrasound device with respect to the subject's body. The instructions may be pre-recorded and determined by comparing the current positioning of the ultrasound device relative to one or more prior positions of the ultrasound device which yielded the target ultrasound image. B. Determining how to Guide an Operator of an Ultrasound Device to Capture a Medically Relevant Ultrasound Image The disclosure provides techniques for guiding an operator of an ultrasound device to capture a medically relevant ultrasound image of a subject. Teaching an individual how to perform a new task, such as how to use an ultrasound device, is a challenging endeavor. The individual may become frustrated if they are provided instructions that are too complex or confusing. Accordingly, certain disclosed embodiments relate to new techniques for providing clear and concise instructions to guide the operator of an ultrasound device to capture an ultrasound image containing a target anatomical view. In some embodiments, the operator may position the ultrasound device on the subject and a computing device (such as a mobile smartphone or a tablet) may generate a guidance plan for how to guide the operator to move the ultrasound device from an initial position on the subject to a target position on the subject. 
The guidance plan may comprise a series of simple instructions or steps (e.g., “MOVE UP,” “MOVE DOWN,” “MOVE LEFT,” or “MOVE RIGHT”) to guide the operator from the initial position to the target position. The guidance plan may optionally avoid using more complex instructions that may confuse the operator such as instructing the operator to move the ultrasound device diagonally. Once the guidance plan has been generated, instructions from the guidance plan may be provided in a serial fashion to the operator to avoid overloading the operator with information. Thereby, the operator may easily follow the sequence of simple instructions to capture an ultrasound image containing the target anatomical view. In one embodiment, the guidance plan may be devised by comparing the current ultrasound image with the target ultrasound image and by determining how the positioning of the ultrasound device with respect to the subject should be changed to approach the target ultrasound image. In some embodiments, techniques for determining how to guide an operator of an ultrasound device to capture an ultrasound image containing a target anatomical view may be embodied as a method that is performed by, for example, a computing device that is communicatively coupled to an ultrasound device. The method may include obtaining an ultrasound image of a subject captured using the ultrasound device. For example, the computing device may communicate with the ultrasound device to generate ultrasound data and send the generated ultrasound data to the computing device. The computing device may, in turn, use the received ultrasound data to generate the ultrasound image. In some embodiments, the method may further include generating, using the ultrasound image, a guidance plan for how to guide the operator to capture an ultrasound image of the subject containing the target anatomical view when a determination is made that the ultrasound image does not contain the target anatomical view. The guidance plan may comprise, for example, a guide path along which an operator may be guided between an initial position of the ultrasound device on the subject and a target position of the ultrasound device on the subject where an ultrasound image that contains the target anatomical view may be captured. For example, the initial position of the ultrasound device may be identified using the ultrasound image and the target position of the ultrasound device may be identified using the target anatomical view. Once the initial and target positions of the ultrasound device have been identified, a guide path may be determined between the two positions. In some embodiments, the guide path between the initial position of the ultrasound device and the target position of the ultrasound device may not be the most direct path between the two positions. For example, a longer guide path may be selected that forms an “L” over a direct diagonal line between the two points because the “L” shaped guide path may be easier to communicate to an operator. In some embodiments, the guide path may advantageously minimize travel of the ultrasound device over areas of the subject that contain hard tissue (e.g., bone). Capturing an ultrasound image of bone may yield a blank (or nearly blank) ultrasound image because the acoustic waves emitted by an ultrasound device typically do not penetrate hard tissues. 
As a result, there may be little or no information contained in the ultrasound image that may be used by the computing device to determine a position of the ultrasound device on the subject. Minimizing travel over these hard tissues may advantageously allow the computing device to more easily track the progress of the ultrasound device as the operator moves the ultrasound device along the guide path by analyzing the captured ultrasound images. The method may further include providing at least one instruction to the operator based on the determined guidance plan. For example, instructions may be generated that instruct the operator to move the ultrasound device along a determined guide path in the guidance plan. Alternatively (or additionally), the guidance plan may include a sequence of instructions to guide the operator of the ultrasound device to move the device and the instructions may be provided directly from the guidance plan. The instructions may be provided from the guidance plan in a serial fashion (e.g., one at a time). It should be appreciated that the guidance plan may be updated (e.g., continuously updated) based on the actions actually taken by an operator. In some embodiments, the guidance plan may be updated when the action taken by an operator does not match the instruction provided to the operator. For example, the computing device may issue an instruction for the operator to move the ultrasound device left and the operator may have inadvertently moved the ultrasound device up. In this example, the computing device may generate a new guidance plan between the current position of the ultrasound device and the target position of the ultrasound device. C. Creating an Augmented Reality Interface to Guide an Operator of an Ultrasound Device The disclosure provides techniques for creating an augmented reality interface that guides an operator of an ultrasound device. Providing written and/or spoken instructions may be challenging for an operator to understand. For example, conveying an instruction to move an ultrasound device in a particular direction (e.g., “MOVE LEFT”) may be ambiguous because the point of reference used by the operator may be different. Thereby, the operator may move the ultrasound device in an incorrect direction while believing that they are properly following the instructions. Accordingly, certain disclosed embodiments relate to new techniques for providing instructions to an operator of an ultrasound device through an augmented reality interface. In the augmented reality interface, the instructions may be overlaid onto a view of the operator's real-world environment. For example, the augmented reality interface may include a view of the ultrasound device positioned on the subject and an arrow indicative of the particular direction that the ultrasound device should be moved. Thereby, an operator may easily reposition the ultrasound device by moving the ultrasound device on the subject in a direction consistent with the arrow in the augmented interface. In some embodiments, techniques for providing an augmented reality interface to guide an operator to capture an ultrasound image containing a target anatomical view may be embodied as a method that is performed by, for example, a computing device having (or being in communication with) a non-acoustic imaging device such as an imaging device configured to detect light. The method may include capturing, using a non-acoustic imaging device, an image of the ultrasound device. 
For example, an image may be captured of the ultrasound device positioned on a subject. In some embodiments, the method may further include generating a composite image at least in part by overlaying, onto the image of the ultrasound device, at least one instruction indicating how the operator is to reposition the ultrasound device. For example, a pose (e.g., position and/or orientation) of the ultrasound device in the captured image may be identified using an automated image processing technique (e.g., a deep learning technique) and the information regarding the pose of the ultrasound device may be used to overlay an instruction onto at least part of the ultrasound device in the captured image. Example instructions that may be overlaid onto the image of the ultrasound device include symbols (such as arrows) indicating a direction in which the operator is to move the device. It should be appreciated that additional elements may be overlaid onto the image of the ultrasound device using the identified pose of the ultrasound device. For example, the ultrasound image captured using the ultrasound device may be overlaid onto the image of the ultrasound device in such a fashion to make the ultrasound image appear as though it is extending outward from the ultrasound device into the subject. Thereby, the operator may gain a better appreciation for the particular region of the subject that is being imaged given the current position of the ultrasound device on the subject. In some embodiments, the method may further include presenting the composite image to the operator. For example, the computing device may include an integrated display and the composite image may be displayed to the operator using the display. D. Tracking a Location of an Ultrasound Device Using a Marker on the Ultrasound Device The disclosure provides techniques for tracking a location of an ultrasound device using a marker disposed on the ultrasound device. As discussed above, providing instructions to an operator of an ultrasound device through an augmented reality interface may make the instructions clearer and easier to understand. The augmented reality interface may include a captured image of a real-world environment (e.g., captured by a camera on a mobile smartphone) and one or more instructions overlaid onto the captured image regarding how to move the ultrasound device. Such augmented reality interfaces may be even more intuitive when the instructions are positioned relative to real-world objects in a captured image. For example, an arrow that instructs the operator to move the ultrasound device left may be clearer to the operator when the arrow is positioned proximate the ultrasound device in the captured image. Accordingly, aspects of the technology described herein relate to new techniques for tracking an ultrasound device in a captured image such that instructions may be properly positioned in the augmented reality interface. The problem of identifying the location of the ultrasound device in a captured image may be eased by placing a distinct marker on the ultrasound device that is visible in the captured image. The marker may have, for example, a distinctive pattern, color, and/or image that may be readily identified using automated image processing techniques (such as deep learning techniques). Thereby, the position of the ultrasound device in the captured image may be identified by locating the marker in the captured image. 
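As an illustration of locating such a marker, the following is a minimal sketch assuming an ArUco-style monochrome marker and the opencv-contrib aruco module. It follows the function-style interface available in OpenCV releases prior to 4.7 (newer releases expose the same operation through cv2.aruco.ArucoDetector), and the dictionary choice is an illustrative assumption.

    # Minimal sketch: find the probe in a camera frame by locating its ArUco-style marker.
    import cv2
    import numpy as np

    def locate_probe(frame_bgr: np.ndarray):
        """Return the (x, y) pixel centre of the first detected marker, or None."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
        corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
        if ids is None or len(corners) == 0:
            return None                                   # marker not visible in this frame
        marker_corners = corners[0].reshape(-1, 2)        # four corner points of the marker
        return tuple(marker_corners.mean(axis=0))         # centroid ~ probe position in the image

An instruction symbol (for example, an arrow drawn with cv2.arrowedLine) could then be placed near the returned point so that it appears proximate the probe in the augmented reality view.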
Once the position of the ultrasound device in the captured image has been identified, an instruction may be overlaid onto the captured image at a position proximate the ultrasound device to form a more intuitive augmented reality interface. In some embodiments, techniques for tracking a location of an ultrasound device in a captured image using a marker disposed on the ultrasound device may be embodied as a method that is performed by, for example, a computing device that is communicatively coupled to an ultrasound device. The location of the ultrasound device in a captured image may be tracked to, for example, properly position an instruction over the captured image so as to form an augmented reality interface. For example, the instruction may be positioned proximate the ultrasound device in the captured image. In some embodiments, these techniques may be embodied as a method that is performed by, for example, a computing device having (or in communication with) a non-acoustic imaging device such as an imaging device configured to detect light. The non-acoustic imaging device may be employed to capture an image of a marker on an ultrasound device. The marker may be constructed to have a distinctive pattern, color, and/or image that may be recognized. The marker may be implemented in any of a variety of ways. For example, the marker may be: a monochrome marker, a holographic marker, and/or a dispersive marker. Monochrome markers may comprise a monochrome pattern such as ArUco markers. Holographic markers may comprise a hologram that presents different images depending upon the particular angle from which the hologram is viewed. Dispersive markers may comprise a dispersive element that presents different colors depending upon the particular angle from which the dispersive element is viewed. In some embodiments, the method may further include automatically identifying a pose of the ultrasound device at least in part by analyzing at least one characteristic of the marker in the captured image. For example, a location of the marker in the image may be identified to determine a position of the ultrasound device in the image. Additionally (or alternatively), one or more properties of the marker may be analyzed to determine an orientation of the ultrasound device in the image. For example, the marker may be a dispersive marker and the color of the marker may be analyzed to determine an orientation of the ultrasound device. In another example, the marker may be a holographic marker and the particular image presented by the marker may be analyzed to determine an orientation of the ultrasound device. In some embodiments, the method may further include providing an instruction to an operator of the ultrasound device using the identified pose of the ultrasound device. For example, the instruction may comprise a symbol (e.g., an arrow) overlaid onto the captured image that is presented to the operator. In this example, the identified pose of the ultrasound device in the image may be employed to accurately position the symbol over at least part of the ultrasound device in the captured image. E. Automatically Interpreting Captured Ultrasound Images The disclosure provides techniques for automatically interpreting captured ultrasound images to identify medical parameters of a subject. Novice operators of an ultrasound device may not be able to interpret captured ultrasound images to glean medically relevant information about the subject. 
For example, a novice operator may not know how to calculate medical parameters of the subject from a captured ultrasound image (such as an ejection fraction of a heart of the subject). Accordingly, certain disclosed embodiments relate to new techniques for automatically analyzing a captured ultrasound image to identify such medical parameters of the subject. In some embodiments, the medical parameters may be identified using state-of-the-art image processing technology such as deep learning. For example, deep learning techniques may be employed to identify the presence of particular organs (such as a heart or a lung) in the ultrasound image. Once the organs in the ultrasound image have been identified, the characteristics of the organs (e.g., shape and/or size) may be analyzed to determine a medical parameter of the subject (such as an ejection fraction of a heart of the subject). In some embodiments, techniques for identifying a medical parameter of a subject using a captured ultrasound image may be embodied as a method that is performed by, for example, a computing device that is communicatively coupled to an ultrasound device. The method may include obtaining an ultrasound image of a subject captured using an ultrasound device. For example, the computing device may communicate with the ultrasound device to generate ultrasound data and send the generated ultrasound data to the computing device. The computing device may, in turn, use the received ultrasound data to generate the ultrasound image. In some embodiments, the method may further include identifying an anatomical feature of the subject in the ultrasound image using an automated image processing technique. Example anatomical features of the subject that may be identified include: a heart ventricle, a heart valve, a heart septum, a heart papillary muscle, a heart atrium, an aorta, and a lung. These anatomical features may be identified using any of a variety of automated image processing techniques such as deep learning techniques. In some embodiments, the method may further include identifying a medical parameter of the subject using the identified anatomical feature in the ultrasound image. For example, an ultrasound image of a heart may be captured and the ventricle in the ultrasound image may be identified as an anatomical feature. In this example, one or more dimensions of the heart ventricle may be calculated using the portion of the ultrasound image identified as being a heart ventricle to identify medical parameters associated with the heart. Example medical parameters associated with the heart include: an ejection fraction, a fractional shortening, a ventricle diameter, a ventricle volume, an end-diastolic volume, an end-systolic volume, a cardiac output, a stroke volume, an intraventricular septum thickness, a ventricle wall thickness, and a pulse rate. F. Automatically Generating a Diagnosis of a Medical Condition The disclosure provides techniques for generating a diagnosis of a medical condition of a subject using a captured ultrasound image. Novice operators of an ultrasound device may be unaware of how to use an ultrasound device to diagnose a medical condition of the subject. For example, the operator may be unsure of which anatomical view of a subject to image to diagnose the medical condition. Further, the operator may be unsure of how to interpret a captured ultrasound image to diagnose the medical condition. 
Accordingly, certain disclosed embodiments relate to new techniques for assisting an operator of an ultrasound device to diagnose a medical condition of a subject. In some embodiments, these techniques may be employed in a diagnostic App that may be installed on a computing device (e.g., a smartphone) of a health care professional. The diagnostic App may walk the health care professional through the entire process of diagnosing a medical condition of the subject. For example, the diagnostic App may prompt the health care professional for medical information about the subject (e.g., age, weight, height, resting heart rate, blood pressure, body surface area, etc.) that may be employed to determine a particular anatomical view of the subject to image with an ultrasound device. Then, the diagnostic App may guide the health care professional to capture an ultrasound image of the anatomical view. The diagnostic App may employ the captured ultrasound image (or sequence of ultrasound images) and/or raw ultrasound data from the ultrasound device. It should be appreciated that other information (such as the medical information about the subject) may be employed in combination with the ultrasound image(s) and/or raw ultrasound data to diagnose the medical condition of the subject. In some embodiments, techniques for diagnosing a medical condition of a subject using an ultrasound device may be embodied as a method that is performed by, for example, a computing device that is communicatively coupled to an ultrasound device. The method may include receiving medical information about a subject. Example medical information about a subject includes: heart rate, blood pressure, body surface area, age, weight, height, and medication being taken by the subject. The medical information may be received from an operator by, for example, posing one or more survey questions to the operator. Alternatively (or additionally), the medical information may be obtained from an external device such as an external server. In some embodiments, the method may further include identifying a target anatomical view of the subject to be captured using an ultrasound device based on the received medical information. Example anatomical views that may be identified include: a parasternal long axis (PLAX) anatomical view, a parasternal short-axis (PSAX) anatomical view, an apical four-chamber (A4C) anatomical view, and an apical long axis (ALAX) anatomical view. In some embodiments, the medical information may be analyzed to determine whether the subject has any health problems associated with a particular organ that may be imaged, such as a heart or a lung. If the medical information indicated that the subject has such health problems, an anatomical view associated with the organ may be identified. For example, the medical information may include an indication that the subject has symptoms of congestive heart failure (such as recently experiencing paroxysmal nocturnal dyspnea). In this example, an anatomical view associated with the heart (such as the PLAX anatomical view) may be identified as the appropriate view to be captured. In some embodiments, the method may further include obtaining an ultrasound image containing the target anatomical view of the subject. For example, the ultrasound image may be obtained from an electronic health record of the subject. Additionally (or alternatively), the operator may be guided to obtain the ultrasound image containing the target anatomical view. 
For example, the operator may be provided one or more instructions (e.g., a sequence of instructions) to reposition the ultrasound device on the subject such that the ultrasound device is properly positioned on the subject to capture the target anatomical view. In some embodiments, the method may further include generating a diagnosis of a medical condition of the subject using the ultrasound image containing the target anatomical view. For example, one or more medical parameters (e.g., an ejection fraction) may be extracted from the ultrasound image (or sequence of ultrasound images) and employed to generate a diagnosis. It should be appreciated that additional information separate from the ultrasound image containing the target anatomical view may be employed to identify a diagnosis of a medical condition of the subject. For example, the medical information regarding the subject may be employed in combination with one or more medical parameters extracted from the ultrasound device to generate the diagnosis. In some embodiments, the method may further include generating one or more recommended treatments for the subject. The recommended treatments may be generated based on the diagnosed medical condition of the subject. For example, the subject may be diagnosed with a heart condition (e.g., congestive heart failure) and the recommended treatment may comprise a pharmaceutical drug employed to treat the heart condition (e.g., a beta blocker drug). G. Further Description FIG.1shows an example ultrasound system100that is configured to guide an operator of an ultrasound device102to obtain an ultrasound image of a target anatomical view of a subject101. As shown, the ultrasound system100comprises an ultrasound device102that is communicatively coupled to the computing device104by a communication link112. The computing device104may be configured to receive ultrasound data from the ultrasound device102and use the received ultrasound data to generate an ultrasound image110. The computing device104may analyze the ultrasound image110to provide guidance to an operator of the ultrasound device102regarding how to reposition the ultrasound device102to capture an ultrasound image containing a target anatomical view. For example, the computing device104may analyze the ultrasound image110to determine whether the ultrasound image110contains a target anatomical view, such as a PLAX anatomical view. If the computing device104determines that the ultrasound image110contains the target anatomical view, the computing device104may provide an indication to the operator using a display106that the ultrasound device102is properly positioned. Otherwise, the computing device104may provide an instruction108using the display106to the operator regarding how to reposition the ultrasound device102. The ultrasound device102may be configured to generate ultrasound data by, for example, emitting acoustic waves into the subject101and detecting the reflected acoustic waves. The detected reflected acoustic waves may be analyzed to identify various properties of the tissues through which the acoustic waves traveled, such as a density of the tissue. The ultrasound device102may be implemented in any of a variety of ways. For example, the ultrasound device102may be implemented as a handheld device (as shown inFIG.1) or as a patch that is coupled to the patient using, for example, an adhesive. 
Example ultrasound devices are described in detail below in the Example Ultrasound Devices section. The ultrasound device102may transmit ultrasound data to the computing device104using the communication link112. The communication link112may be a wired (or wireless) communication link. In some embodiments, the communication link112may be implemented as a cable such as a Universal Serial Bus (USB) cable or a Lightning cable. In these embodiments, the cable may also be used to transfer power from the computing device104to the ultrasound device102. In other embodiments, the communication link112may be a wireless communication link such as a BLUETOOTH, WiFi, or ZIGBEE wireless communication link. The computing device104may comprise one or more processing elements (such as a processor) to, for example, process ultrasound data received from the ultrasound device102. Additionally, the computing device104may comprise one or more storage elements (such as a non-transitory computer readable medium) to, for example, store instructions that may be executed by the processing element(s) and/or store all or any portion of the ultrasound data received from the ultrasound device102. It should be appreciated that the computing device104may be implemented in any of a variety of ways. For example, the computing device104may be implemented as a mobile device (e.g., a mobile smartphone, a tablet, or a laptop) with an integrated display106as shown inFIG.1. In other examples, the computing device104may be implemented as a stationary device such as a desktop computer. Additional example implementations of the computing device are described below in the Example Ultrasound Systems section. The computing device104may be configured to provide guidance to an operator of the ultrasound device102using the ultrasound data received from the ultrasound device102. In some embodiments, the computing device104may generate the ultrasound image110using the received ultrasound data and analyze the ultrasound image110using an automated image processing technique to generate the instruction108regarding how the operator should reposition the ultrasound device102to capture an ultrasound image containing the target anatomical view. For example, the computing device104may identify the anatomical view contained in the ultrasound image110using a machine learning technique (such as a deep learning technique) and determine whether the anatomical view contained in the ultrasound image110matches the target anatomical view. If the identified anatomical view matches the target anatomical view, the computing device104may provide an indication that the ultrasound is properly positioned via the display106. Otherwise, the computing device104may identify an instruction to provide the operator to reposition the ultrasound device102and provide the instruction via the display106. In another example, the computing device104may generate the instruction108without performing the intermediate step of determining whether the ultrasound image110contains the target anatomical view. For example, the computing device104may use a machine learning technique (such as a deep learning technique) to directly map the ultrasound image110to an output to provide to the user such as an indication of proper positioning or an instruction to reposition the ultrasound device102(e.g., instruction108). 
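As a concrete illustration of the classification-based approach, the following is a minimal sketch assuming PyTorch; the layer sizes and the four-view label set are illustrative assumptions rather than the network actually used by the system described here.

    # Minimal sketch of a convolutional classifier mapping an ultrasound frame to a view label.
    import torch
    import torch.nn as nn

    VIEWS = ["PLAX", "PSAX", "A4C", "ALAX"]               # candidate anatomical views (illustrative)

    class ViewClassifier(nn.Module):
        def __init__(self, num_views: int = len(VIEWS)):
            super().__init__()
            self.features = nn.Sequential(                # convolution / ReLU / pooling stack
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, num_views)    # dense layer producing per-view scores

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.classifier(self.features(x).flatten(1))

    # Usage: classify a single-channel frame and compare the prediction with the target view.
    model = ViewClassifier()
    frame = torch.randn(1, 1, 128, 128)                   # placeholder ultrasound image tensor
    predicted_view = VIEWS[int(model(frame).argmax(dim=1))]
    is_target = predicted_view == "PLAX"

The same kind of network, trained instead on images labeled with repositioning instructions, could map a frame directly to an instruction or to an indication of proper positioning.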
In some embodiments, the computing device104may be configured to generate the instruction108for the operator regarding how to position the ultrasound device102on the subject101using a guidance plan. The guidance plan may comprise a guide path indicative of how the operator should be guided to move the ultrasound device102from an initial position on the subject101to a target position on the subject101where an ultrasound image containing the target anatomical view may be captured. An example of such a guide path on a subject is shown inFIG.2. As shown, the ultrasound device may be initially positioned on a subject201at an initial position202(on a lower torso of the subject201) and the computing device may generate a guide path208between the initial position202and a target position204. The guide path208may be employed by the computing device to generate a sequence of instructions to provide the operator. For example, the computing device may generate a first instruction to “MOVE RIGHT” and a second instruction to “MOVE UP” for the guide path208. The generated instructions may also include an indication of the magnitude of the movement, such as “MOVE RIGHT 5 CENTIMETERS.” The computing device may provide these instructions serially (e.g., one at a time) to avoid overloading the operator with information. The computing device may identify the initial position202by analyzing the ultrasound data received from the ultrasound device using an automated image processing technique (e.g., a deep learning technique). For example, the computing device may provide an ultrasound image (generated using the ultrasound data) as an input to a neural network that is configured (e.g., trained) to provide as an output an indication of the anatomical view contained in the ultrasound image. Then, the computing device may map the identified anatomical view to a position on the subject201. The mappings between anatomical views and positions on the subject201may be, for example, stored locally on the computing device. The computing device may identify the target position204based on the target anatomical view. For example, the computing device may map the target anatomical view to a position on the subject201. The mappings between target anatomical views and positions on the subject201may be, for example, stored locally on the computing device. Once the initial position202and the target position204have been identified, the computing device may identify the guide path208that an operator should follow to move the ultrasound device from the initial position202to the target position204. The computing device may generate the guide path208by, for example, identifying a shortest path between the initial position202and the target position204(e.g., a diagonal path). Alternatively, the computing device may generate the guide path208by identifying a shortest path between the initial position202and the target position204that satisfies one or more constraints. The one or more constraints may be selected to, for example, ease communication of instructions to the operator to move the ultrasound device along the guide path208. For example, movement in particular directions (such as diagonal directions) may be more challenging to accurately communicate to an operator. Thereby, the computing device may identify a shortest path that omits diagonal movements as the guide path as shown by the “L” shaped guide path208inFIG.2. 
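The conversion of an initial and a target probe position into an "L"-shaped path and a serial instruction list might be sketched as follows. This is a minimal sketch; the centimeter grid and the coordinate values are illustrative assumptions.

    # Minimal sketch: axis-aligned ("L"-shaped) guidance from an initial to a target position.
    def l_shaped_instructions(initial, target):
        """Return axis-aligned move instructions from initial (x, y) to target (x, y), in cm."""
        dx = target[0] - initial[0]                       # lateral offset first ...
        dy = target[1] - initial[1]                       # ... then the vertical offset
        steps = []
        if dx:
            steps.append(f"MOVE {'RIGHT' if dx > 0 else 'LEFT'} {abs(dx)} CENTIMETERS")
        if dy:
            steps.append(f"MOVE {'UP' if dy > 0 else 'DOWN'} {abs(dy)} CENTIMETERS")
        return steps                                      # provided to the operator one at a time

    # Example: probe on the lower torso, target over the heart (coordinates are illustrative).
    for instruction in l_shaped_instructions((2, 0), (7, 12)):
        print(instruction)                                # "MOVE RIGHT 5 CENTIMETERS", then "MOVE UP 12 CENTIMETERS"

Restricting the path to axis-aligned segments is what keeps each instruction simple enough to communicate unambiguously.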
Additionally (or alternatively), the guide path208may be selected to minimize traversal over hard tissue (e.g., bone) in the subject. Minimizing the travel over such hard tissues may advantageously allow the computing device to more readily track the movement of the ultrasound device along the guide path208. For example, ultrasound images of bone may be blank (or nearly blank) because the acoustic waves emitted by an ultrasound device typically do not penetrate hard tissues. The computing device may be unable to analyze such ultrasound images to determine which anatomical view they belong to and, thereby, lose track of the position of the ultrasound device on the subject201. Minimizing travel over these hard tissues may advantageously allow the computing device to more easily track the progress of the ultrasound device as the operator moves the ultrasound device along the guide path208by analyzing the captured ultrasound images. The computing device may store the generated guide path208locally and use the guide path to generate a sequence of instructions to provide to the operator. For example, the computing device may use the guide path208to generate the sequence of instructions: (1) “MOVE LATERAL,” (2) “MOVE UP,” and (3) “TWIST CLOCKWISE.” These instructions may be, in turn, provided to the operator in a serial fashion to guide the operator to move the ultrasound device from the initial position202to the target position204. As discussed above, novice operators of an ultrasound device may have little or no knowledge of human physiology. Thereby, the initial position202may be far away from the target position204. For example, an operator may initially place the ultrasound device on a leg of the subject201when the target position204is on an upper torso of the subject201. Providing a sequence of individual instructions to move the ultrasound device from the distant initial position202to the target position204may be a time-consuming process. Accordingly, the computing device may initially provide the operator a coarse instruction to move the ultrasound device to a general area of the subject201(such as an upper torso of the subject201) and subsequently provide one or more fine instructions to move the ultrasound device in particular directions (such as “MOVE UP”). In some embodiments, the computing device may make the determination as to whether to issue a coarse instruction or a fine instruction based on a determination as to whether the ultrasound device is positioned on the subject within a predetermined area206on the subject201. The predetermined area206may be an area on the subject201that includes the target position204and is easy for the operator to identify. For example, the target position204may be over a heart of the subject201and the predetermined area206may comprise an upper torso of the subject. The computing device may provide a fine instruction responsive to the position of the ultrasound device being within the predetermined area206and provide a coarse instruction responsive to the ultrasound device being outside of the predetermined area206. For example, an operator may initially position the ultrasound device on a leg of the subject201and the computing device may provide a coarse instruction that instructs the operator to move the ultrasound device to an upper torso (e.g., the predetermined area206) of the subject201. 
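The coarse-versus-fine decision described above might be sketched as follows; the rectangular bounds used for the predetermined area and the example coordinates are illustrative assumptions.

    # Minimal sketch: issue a coarse instruction outside the predetermined area, a fine one inside it.
    def next_instruction(position, predetermined_area, fine_step):
        """position is (x, y); predetermined_area is (x_min, x_max, y_min, y_max)."""
        x, y = position
        x_min, x_max, y_min, y_max = predetermined_area
        inside = x_min <= x <= x_max and y_min <= y <= y_max
        if not inside:
            return "POSITION ULTRASOUND DEVICE ON THE UPPER TORSO"   # coarse guidance
        return fine_step                                             # e.g. "MOVE UP" toward the target

    print(next_instruction((2, 1), (0, 10, 8, 20), "MOVE UP"))       # coarse: probe is far below the area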
Once the operator has positioned the ultrasound device on the upper torso of the subject201(and thereby within the predetermined area206), the computing device may provide a fine instruction including an indication of a particular direction to move the ultrasound device towards the target position204. Providing coarse instructions may advantageously expedite the process of guiding the operator of the ultrasound device. For example, an operator may be unfamiliar with human physiology and initially place the ultrasound device on a leg of the subject201while the operator is attempting to capture an ultrasound image containing an anatomical view of a heart of the subject201. In this example, the operator may be provided a coarse instruction including an indication of where to place the ultrasound device (e.g., on an upper torso of the subject) instead of providing a set of instructions for the operator to move the ultrasound device: (1) from the thigh to the lower torso and (2) from the lower torso to the upper torso. FIG.3Ashows an example coarse instruction302that may be provided to an operator via a display306on a computing device304. The coarse instruction302may be provided when the ultrasound device is positioned outside of a predetermined area on the subject. As shown, the coarse instruction302includes an indication of where the operator should position the ultrasound device on the subject to be within the predetermined area. In particular, the coarse instruction302comprises a symbol308(e.g., a star) showing where the predetermined region is located on a graphical image of the subject301. The coarse instruction302also includes a message310with an arrow pointing to the symbol308instructing the operator to "POSITION ULTRASOUND DEVICE HERE" to communicate to the operator that the ultrasound device should be placed where the symbol308is located on the graphical image of the subject301. FIG.3Bshows an example fine instruction312that may be provided to an operator via the display306on the computing device304. The fine instruction312may be provided when the ultrasound device is positioned within the predetermined area on the subject. As shown, the fine instruction312includes a symbol314indicating which direction the operator should move the ultrasound device. The symbol314may be animated in some implementations. For example, the symbol314(e.g., an arrow and/or model of the ultrasound device) may move in a direction in which the ultrasound device is to be moved. The fine instruction312may also comprise a message316that complements the symbol314, such as the message "TURN CLOCKWISE." The symbol314and/or the message316may be overlaid onto a background image311. The background image311may be, for example, an ultrasound image generated using ultrasound data received from the ultrasound device. FIG.3Cshows an example confirmation318that may be provided to an operator via the display306on the computing device304. The confirmation318may be provided when the ultrasound device is properly positioned on the subject to capture an ultrasound image containing the target anatomical view. As shown, the confirmation318includes a symbol320(e.g., a checkmark) indicating that the ultrasound device is properly positioned. The confirmation318may also comprise a message322that complements the symbol320, such as the message "HOLD." The symbol320and/or the message322may be overlaid onto the background image311. 
The background image311may be, for example, an ultrasound image generated using ultrasound data received from the ultrasound device. Once the operator has successfully captured an ultrasound image that contains the target anatomical view, the computing device may be configured to analyze the captured ultrasound image. For example, the computing device may analyze the captured ultrasound image using an automated image processing technique to identify a medical parameter of the subject. Example medical parameters of the subject that may be obtained from the ultrasound image include: an ejection fraction, a fractional shortening, a ventricle diameter, a ventricle volume, an end-diastolic volume, an end-systolic volume, a cardiac output, a stroke volume, an intraventricular septum thickness, a ventricle wall thickness, and a pulse rate. The computing device may identify these medical parameters by, for example, identifying an anatomical feature in the ultrasound image (such as a heart ventricle, a heart valve, a heart septum, a heart papillary muscle, a heart atrium, an aorta, and a lung) and analyzing the identified anatomical feature. The computing device may identify the anatomical feature using an automated imaging processing technique (such as a deep learning technique). For example, the computing device may provide the captured ultrasound image to a neural network that is configured (e.g., trained) to provide as an output an indication of which pixels in the ultrasound image are associated with a particular anatomical feature. It should be appreciated that this neural network may be separate and distinct from any neural networks employed to guide the operator. The generated medical parameters may be overlaid onto the captured ultrasound image as shown inFIG.4. As shown, a computing device404may display (via an integrated display406) an ultrasound image408and a set of medical parameters410overlaid onto the ultrasound image408. The ultrasound image408may contain a PLAX view of a subject that includes a view of a heart of the subject. In the ultrasound image408, the computing device may identify the left ventricle as an anatomical feature402and analyze the characteristics of the left ventricle (such as the left ventricle diameter shown as anatomical feature characteristic404) to identify the medical parameters410. The medical parameters410shown inFIG.4comprise: a left ventricle diameter (LVD) of 38.3 millimeters (mm), a left ventricle end-systolic diameter (LVESD) of 38.2 mm, a left ventricle end-diastolic diameter (LVEDD) of 49.5 mm, a fractional shortening (FS) of 23%, an ejection fraction (EF) of 45%. It should be appreciated that the computing device may identify the medical parameters410using more than a single ultrasound image containing the target anatomical view. In some embodiments, a sequence of ultrasound images of the heart may be captured that span at least one complete heartbeat to generate the medical parameters. For example, the ultrasound images may be analyzed to determine which ultrasound image was captured at the end of the contraction of a heart ventricle (referred to as the end-systolic image) and which ultrasound image was captured just before the start of the contraction of a heart ventricle (referred to as the end-diastolic image). The end-systolic image may be identified by, for example, identifying the ultrasound image in the sequence that has a smallest ventricle volume (or diameter). 
Similarly, the end-diastolic image may be identified by, for example, identifying the ultrasound image in the sequence that has the largest ventricle volume (or diameter). The end-systolic image may be analyzed to determine one or more medical parameters that are measured at the end of the heart contraction such as an end-systolic diameter (ESD) and/or an end-systolic volume (ESV). Similarly, the end-diastolic image may be analyzed to determine one or more medical parameters that are measured just before the start of a heart contraction such as an end-diastolic diameter (EDD) and/or an end-diastolic volume (EDV). Some medical parameters may require analysis of both the end-systolic image and the end-diastolic image. For example, the identification of the EF may require (1) an EDV identified using the end-diastolic image and (2) an ESV identified using the end-systolic image as shown in Equation (1) below: EF = ((EDV - ESV) / EDV) * 100   (1). Similarly, the identification of the FS may require (1) an EDD identified using the end-diastolic image and (2) an ESD identified using the end-systolic image as shown in Equation (2) below: FS = ((EDD - ESD) / EDD) * 100   (2). In some embodiments, the computing device may change a color of the medical parameters410shown in the display406based on the value of the medical parameters. For example, the medical parameters410may be displayed in a first color (e.g., green) to indicate that the values are within a normal range, a second color (e.g., orange) to indicate that the values are in a borderline abnormal range, and a third color (e.g., red) to indicate that the values are in an abnormal range. Example Augmented Reality Interfaces The inventors have recognized that providing instructions to an operator through an augmented reality interface may advantageously make the instructions easier to understand for the operator.FIG.5Ashows an example ultrasound system that is configured to provide the operator an augmented reality interface. As shown, the ultrasound system comprises an ultrasound device502communicatively coupled to a computing device504via a communication link512. The ultrasound device502, communication link512, and/or computing device504may be similar to (or the same as) the ultrasound device102, the communication link112, and/or the computing device104, respectively, described above with reference toFIG.1. The ultrasound system further comprises a marker510disposed on the ultrasound device502. The marker advantageously allows the computing device504to more easily track the location of the ultrasound device in non-acoustic images captured by an imaging device506(e.g., integrated into the computing device504). The computing device504may use the tracked location of the ultrasound device in the non-acoustic images to overlay one or more elements (e.g., instructions) onto the non-acoustic images to form an augmented reality interface. Such an augmented reality interface may be displayed via a display508(e.g., integrated into the computing device504and disposed on an opposite side relative to the imaging device506). It should be appreciated that the computing device504does not need to be implemented as a handheld device. In some embodiments, the computing device504may be implemented as a wearable device with a mechanism to display instructions to an operator. For example, the computing device504may be implemented as a wearable headset and/or a pair of smart glasses (e.g., GOOGLE GLASS, APPLE AR glasses, and MICROSOFT HOLOLENS). 
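The end-systolic/end-diastolic frame selection and the calculations of Equations (1) and (2) above might be sketched as follows. This is a minimal sketch; the per-frame ventricle volumes and diameters are assumed to come from the segmentation step described earlier, and the example values are illustrative.

    # Minimal sketch of EF and FS per Equations (1) and (2), given per-frame measurements.
    def ejection_fraction(volumes):
        """EF = ((EDV - ESV) / EDV) * 100, with EDV/ESV taken as the max/min ventricle volume."""
        edv, esv = max(volumes), min(volumes)             # end-diastolic and end-systolic volumes
        return (edv - esv) / edv * 100

    def fractional_shortening(diameters):
        """FS = ((EDD - ESD) / EDD) * 100, with EDD/ESD taken as the max/min ventricle diameter."""
        edd, esd = max(diameters), min(diameters)
        return (edd - esd) / edd * 100

    # Example with illustrative per-frame measurements spanning one heartbeat:
    print(round(ejection_fraction([130.0, 95.0, 72.0, 110.0]), 1))      # EF in percent
    print(round(fractional_shortening([49.5, 43.0, 38.2, 46.1]), 1))    # FS in percent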
FIG.5Bshows another view of the ultrasound system from the perspective of an operator. As shown, the display508in the computing device504displays an augmented reality interface comprising a non-acoustic image512of the ultrasound device502being used on the subject501(e.g., captured by the imaging device506) and one or more elements overlaid onto the image512. For example, an instruction516indicative of a direction for the operator to move the ultrasound device502, a symbol indicating a location of the target anatomical plane, and/or an ultrasound image514captured by the ultrasound device502may be overlaid onto the image512. These elements may be implemented, for example, as: opaque elements (so as to obscure the portion of the image512under the element), transparent elements (so as to not obscure the portion of the image512under the element), pseudo colorized elements, and/or cut-away elements. In some embodiments, the instruction516may be overlaid onto the image512such that at least a portion of the instruction516is overlaid onto the ultrasound device502in the image512. The computing device504may, for example, use the marker510to identify a pose (e.g., a position and/or orientation) of the ultrasound device502in the image512and position the instruction516in the augmented reality interface using the identified pose. The marker510may be constructed to have one or more distinctive characteristics that may easily be recognized in the image512. Example markers include: monochrome markers, holographic markers, and dispersive markers. Monochrome markers may comprise a monochrome pattern such as ArUco markers. Holographic markers may comprise a hologram that presents different images depending on the particular angle from which the hologram is viewed. Dispersive markers may comprise a dispersive element that presents different colors depending on the particular angle from which the dispersive element is viewed. The computing device504may identify the pose of the ultrasound device502in any of a variety of ways. In some embodiments, the computing device may identify a position of the ultrasound device502in the image512by identifying a location of the marker510. The location of the marker510may be identified by searching for one or more distinct characteristics of the marker510in the image512. Additionally (or alternatively), the computing device may identify an orientation of the ultrasound device502in the image512by analyzing one or more characteristics of the marker512. For example, the marker510may be a dispersive marker and the computing device may identify an orientation of the ultrasound device502in the image512by identifying a color of the marker510in the image512. In another example, the marker510may be a holographic marker and the computing device may identify an orientation of the ultrasound device502in the image512by identifying an image presented by the marker510in the image512. In yet another example, the marker510may be a patterned monochrome marker and the computing device may identify an orientation of the ultrasound device502in the image512by identifying an orientation of the pattern on the marker510in the image512. It should be appreciated that the pose of the ultrasound device502may be identified without the marker510. For example, the ultrasound device502may have distinctive characteristics (e.g., shape and/or color) that may be readily identifiable in the image512. 
Thereby, the computing device504may identify the pose of the ultrasound device502in the image510by analyzing one or more characteristics of the ultrasound device502in the image510. In some embodiments, the identified pose of the ultrasound device502in the image512may be employed to overlay other elements onto the image512separate from the instruction516. For example, the identified pose of the ultrasound device502may be employed to overlay the ultrasound image514over the image512such that ultrasound image514appears to be extending out of the ultrasound device502into the subject501. Such a configuration may advantageously provide an indication to the operator of the particular portion of the subject that is being imaged by the ultrasound device502. An example of such an augmented reality interface is shown inFIG.6being displayed on a display606of a computing device604. The augmented reality interface overlays the ultrasound image610and an ultrasound device symbol608onto an image of an ultrasound device602being used to image the subject601(e.g., captured from a front-facing camera in the handheld device computing device604). As shown, the ultrasound image610is overlaid onto the portion of the subject601that is being imaged by the ultrasound device602. In particular, the ultrasound image610has been positioned and oriented so as to be extending from the ultrasound device602into the subject601. This position and orientation of the ultrasound image610may indicate to the operator the particular portion of the subject601that is being imaged. For example, the ultrasound device602may be positioned on an upper torso of the subject601and the ultrasound image610may extend from an end of the ultrasound device602in contact with the subject601into the upper torso of the subject601. Thereby, the operator may be informed that the captured image is that of a 2D cross-section of body tissue in the upper torso of subject601. It should be appreciated that additional (or fewer) elements may be overlaid onto the image of the ultrasound device602being used on the subject601inFIG.6. For example, the ultrasound device symbol608overlaid onto the ultrasound device602may be omitted. Additionally (or alternatively), the user interface may overlay instructions (e.g., augmented reality arrows) onto the image of the ultrasound device602on the subject601to provide guidance to the operator. Example Diagnostic Applications The inventors have recognized that ultrasound imaging techniques may be advantageously combined with diagnostics and treatment recommendations to provide an ecosystem of intelligent and affordable products and services that democratize access to medical imaging and accelerate imaging into routine clinical practice and/or patient monitoring. This may provide an advance in conventional clinical decision support (CDS) applications by empowering healthcare professionals and/or patients to make diagnostic and treatment decisions at an earlier state of disease, as well as to assist novice imaging users (e.g., consumers) to detect various conditions earlier and monitor patient response to therapy. The technology improvements described herein may enable, among other capabilities, focused diagnosis, early detection and treatment of conditions by an ultrasound system. The ultrasound system may comprise an ultrasound device that is configured to capture ultrasound images of the subject and a computing device in communication with the ultrasound device. 
The computing device may execute a diagnostic application that is configured to perform, for example, one or more of the following functions: (1) acquire medical information regarding the subject, (2) identify an anatomical view of the subject to image with the ultrasound device based on the acquired medical information regarding the subject, (3) guide the operator to capture ultrasound image(s) that contain the identified anatomical view, (4) provide a diagnosis (or pre-diagnosis) of a medical condition of the subject based on the captured ultrasound images, and (5) provide one or more recommended treatments based on the diagnosis. FIGS.7A-7Hshow an example user interface for a diagnostic application that is configured to assist an operator determine whether a subject is experiencing heart failure. The diagnostic application may be designed to be used by, for example, a health care professional such as a doctor, a nurse, or a physician assistant. The diagnostic application may be executed by, for example, a computing device704. The computing device704may comprise an integrated display706that is configured to display one or more user interface screens of the diagnostic application. The computing device704may be communicatively coupled to an ultrasound device (not shown) using a wired or wireless communication link. FIG.7Ashows an example home screen that may be displayed upon the diagnostic application being launched. Information that may be presented on the home screen include an application title702, an application description703, and a sponsor region708. The sponsor region708may display information, for example, indicating the name, symbol, or logo of any sponsoring entity providing the diagnostic application. In the case of a heart failure diagnostic application, a pharmaceutical manufacturer that provides one or more medications or therapies for treating such a condition may sponsor the application. The home screen may further include a selection region that allows the operator to perform various functions within the diagnostic application such as: schedule a follow-up examination with the subject, access more medical resources, or begin a new diagnosis. The computing device704may transition from the home screen to a clinical Q&A screen shown inFIG.7Bresponsive to the “Begin New Diagnosis” button in selection region710being activated in the home screen shown inFIG.7A. The clinical Q&A screen may pose one or more clinical questions712to the operator. For a heart failure diagnosis application, an appropriate clinical question712posed to the operator may be: “Is the patient experiencing paroxysmal nocturnal dyspnea?” Paroxysmal nocturnal dyspnea may be attacks of severe shortness of breath and coughing that generally occur at night. Such attacks may be a symptom of congestive heart failure. The diagnostic application may receive an answer to the clinical question in the response region712. As will also be noted fromFIG.7B, the sponsor region708may continue to be provided in the diagnostic application. The sponsor region708may comprise a link to exit the diagnostic application to a site hosted by the sponsor. The computing device704may transition from the clinical Q&A screen to an examination screen responsive to the “Yes” button being activated in response region714. The examination screen may pose one or more examination questions718to the operator. 
For a heart failure diagnostic application, the examination question 718 may be to determine a current heart rate of the subject to be diagnosed. The diagnostic application may receive a response through the response region 720. For example, the operator may indicate in the response region 720 that the heart rate of the subject is below a first value (e.g., less than 91 beats per minute (bpm)), within a range between the first value and a second value (e.g., between 91 and 110 bpm), or above the second value (e.g., more than 110 bpm). Once the computing device 704 has received a response to the examination question 718, the computing device 704 may transition from the examination screen shown in FIG. 7C to an ultrasound image acquisition screen shown in FIG. 7D. The ultrasound image acquisition screen may present an imaging instruction 722 to the operator. For a heart failure diagnostic application, the imaging instruction 722 may instruct the operator to begin an assisted ejection fraction (EF) measurement of the subject. EF may be a measure of how much blood a heart ventricle pumps out with each contraction. The EF may be computed by, for example, analyzing one or more ultrasound images of a heart of the subject. The computing device 704 may begin an assisted EF measurement process responsive to the “Begin Measurement” button in selection region 724 being activated. The computing device 704 may communicate with an ultrasound device to capture ultrasound images responsive to the “Begin Measurement” button being activated in the selection region 724. The computing device 704 may also transition from the image acquisition screen shown in FIG. 7D to an image acquisition assistance screen shown in FIG. 7E. The image acquisition assistance screen may display an ultrasound image 726 captured using the ultrasound device. In some embodiments, the image acquisition assistance screen may display one or more instructions regarding how to reposition the ultrasound device to obtain an ultrasound image that contains the target anatomical view (e.g., a PLAX view). Once the ultrasound device has been properly positioned, the image acquisition assistance screen may display an indication that the ultrasound device is properly positioned. When a suitable (clinically relevant) image has been obtained, the operator may confirm the acquisition via the “Confirm” button. The computing device 704 may transition from the image acquisition assistance screen shown in FIG. 7E to a diagnostic results screen shown in FIG. 7F once the ultrasound images have been confirmed by the operator. The diagnostic results screen may display diagnostic results 728, 732 determined from analyzing the captured ultrasound image 730. As shown, the diagnostic results screen may display an EF of 30% for the subject and an associated New York Heart Association (NYHA) classification of IV. This classification system utilizes four categories of heart failure, from I to IV, with IV being the most severe. The computing device 704 may transition from the diagnostic results screen shown in FIG. 7F to one or more of the treatment screens shown in FIGS. 7G and 7H responsive to the “view possible treatments” button being activated in selection region 734.
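As a concrete illustration of the assisted EF measurement described above, the following minimal sketch derives an EF value from per-frame left-ventricle volumes. How those volumes are estimated from the ultrasound images is outside the sketch, and the example values are illustrative numbers chosen so that the result matches the 30% EF shown on the diagnostic results screen.

```python
# Minimal sketch of an assisted EF computation: given left-ventricle volumes
# estimated from a sequence of ultrasound images (one value per frame), take the
# largest volume as end-diastolic (EDV), the smallest as end-systolic (ESV), and
# apply Equation (1). The volumes below are illustrative only.
def ejection_fraction_from_volumes(ventricle_volumes_ml):
    edv = max(ventricle_volumes_ml)          # end-diastolic volume
    esv = min(ventricle_volumes_ml)          # end-systolic volume
    return (edv - esv) / edv * 100.0         # EF per Equation (1)

ef = ejection_fraction_from_volumes([120.0, 112.0, 97.0, 84.0, 90.0, 118.0])
print(f"EF = {ef:.0f}%")                     # -> EF = 30%
```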
The treatment screen shown inFIG.7Gmay display a treatment question736regarding a current treatment being provided to the subject and suggested treatments738determined based on, for example, any one of the following: (1) a response to the treatment question736, (2) diagnostic results, (3) the captured ultrasound image, (4) a response to the physical examination question, and/or (5) a response to the clinical question. The treatment screen shown inFIG.7Hmay be an extension of the treatment screen inFIG.7G. For example, an operator may access the treatment screen inFIG.7Hby scrolling down from the treatment screen shown inFIG.7G. The treatment screen inFIG.7Hmay display a treatment selection740where an operator may select which treatment they want to provide to the subject. As shown, the treatment selection740may allow an operator to pick between one or more medications to treat heart failure such as angiotensin-converting-enzyme inhibitors (ACE inhibitors), angiotensin receptor blockers (ARB), or other alternatives. The diagnostic application may, then, display one or more external links742based on the selected treatment to provide more information to the operator regarding the treatment. It should be appreciated that diagnostic application shown inFIGS.7A-7His only one example implementation and other diagnostic applications may be created for other conditions separate and apart from congestive heart failure. Further, diagnostic applications may be created for use by a subject at-home (instead of a physician). For example, a physician may issue an ultrasound device configured for in-home use by a subject to monitor a condition of the subject using the ultrasound device. A diagnostic application may also be provided to the subject to use with the ultrasound device. Such a diagnostic application may be installed on a personal mobile smartphone or tablet of the subject. The diagnostic application may be configured to assist the subject to operate the ultrasound device and store (and/or upload) the captured ultrasound images for analysis by the physician. Thereby, the physician may be able to remotely monitor a condition of the subject without making the subject remain in inpatient care. FIGS.8A-8Dshow an example user interface for such a diagnostic application that is designed to be used by a subject in an at-home environment. The diagnostic application may be configured to assist an operator (e.g., the subject) use an ultrasound device to capture ultrasound images in an at-home setting. The diagnostic application may be executed by, for example, a computing device804(such as a mobile smartphone or a tablet of the subject). The computing device804may comprise an integrated display806that is configured to display one or more user interface screens of the diagnostic application. The computing device804may be communicatively coupled to an ultrasound device (not shown) using a wired or wireless communication link. The computing device may also comprise an imaging device805(e.g., a camera) that is configured to capture non-acoustic images. The imaging device805may be disposed on a same side as the display806to allow the operator to simultaneously capture images of themselves holding an ultrasound device while viewing one or more instructions displayed on the display806. FIG.8Ashows an example home screen that may be displayed upon the diagnostic application being launched. 
The home screen includes a message808to the operator to instruct the operator to scan a quick response (QR) code associated with the ultrasound device. The QR code may be, for example, disposed on the ultrasound device itself and/or disposed on a packaging associated with the ultrasound device. The home screen may also display images captured by an imaging device (e.g., integrated into the computing device804and disposed on a side opposite the display806). The home screen may show a scanning region810in the captured images to illustrate where a user should place the QR code in the field of view of the imaging device to have the QR code read. Once the computing device804reads the QR code, the computing device804may transition from the home screen shown inFIG.8Ato a subject information screen shown inFIG.8B. The subject information screen may include a display of subject information810obtained by the computing device804using the scanned QR code. For example, the computing device804may have employed the scanned QR code to access medical records of the subject in a remote server. Once the operator has confirmed that the subject information810is correct, the operator may activate the confirm button in the selection region812. It should be appreciated that other types of bar codes may be employed separate from QR codes. Other example bar codes include: MaxiCode bar codes, Codabar bar codes, and Aztec bar codes. The computing device804may transition from the subject information screen shown inFIG.8Bto the image acquisition screen shown inFIG.8Cresponsive to the “Confirm” button being activated in the selection region812. As shown, the image acquisition screen includes a message814for the operator to apply gel to the ultrasound device and a selection region816including a being button for the operator to begin acquisition of ultrasound images. The computing device804may transition from the image acquisition screen shown inFIG.8Cto the image acquisition assistance screen shown inFIG.8Dresponsive to the “Begin” button being activated in the selection region816. As shown, the image acquisition assistance screen may include a non-acoustic image (e.g., captured by the imaging device805) of a subject818holding an ultrasound device820. An instruction822may be superimposed over the captured non-acoustic image to guide the operator (e.g., the subject) to capture an ultrasound image containing a target anatomical view. Once the computing device804has captured the ultrasound image containing the target anatomical view, the computing device804may locally store the captured ultrasound image for later retrieval by a physician and/or upload the image to an external server to be added to a set of medical records associated with the subject. The computing device804may further display a confirmation to the operator that the ultrasound image was successfully captured. Example Processes FIG.9shows an example process900for guiding an operator of an ultrasound device to capture an ultrasound image that contains a target anatomical view. The process900may be performed by, for example, a computing device in an ultrasound system. As shown, the process900comprises an act902of obtaining an ultrasound image, an act904of determining whether the ultrasound image contains the target anatomical view, an act906of generating a guidance plan, an act908of providing instructions to reposition the ultrasound device, and an act910of providing an indication of proper positioning. 
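Before the individual acts are described, the following is a minimal control-flow sketch of process 900. The stub functions stand in for the acquisition, view-classification, and guidance techniques detailed in the paragraphs below and in the deep learning section; they are assumptions made only to show how the acts fit together.

```python
# Minimal control-flow sketch of process 900. The stub functions below are
# placeholders (assumptions) for the acquisition, view-classification, and
# guidance-plan techniques described later; only the loop structure is shown.
import random

def acquire_ultrasound_image(device):                  # act 902 (stub)
    return {"device": device, "view": random.choice(["PLAX", "A4C", "OTHER"])}

def contains_target_view(image, target_view):          # act 904 (stub classifier)
    return image["view"] == target_view

def generate_guidance_plan(image, target_view):        # act 906 (stub plan)
    return ["rotate the ultrasound device clockwise"]  # a single-step guide path

def guide_to_target_view(device, target_view="PLAX", max_attempts=50):
    for _ in range(max_attempts):
        image = acquire_ultrasound_image(device)
        if contains_target_view(image, target_view):
            print("act 910: ultrasound device is properly positioned")
            return image
        for instruction in generate_guidance_plan(image, target_view):
            print("act 908:", instruction)             # audible/visual/tactile in practice
    return None

guide_to_target_view("probe-0")
```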
In act902, the computing device may obtain an ultrasound image of the subject. The computing device may obtain the ultrasound image by communicating with an ultrasound device communicatively coupled to the computing device. For example, the computing device may send an instruction to the ultrasound device to generate ultrasound data and send the ultrasound data to the computing device. The computing device may, in turn, use the received ultrasound data to generate the ultrasound image. Additionally (or alternatively), the ultrasound image may be generated by the ultrasound device and the computing device may retrieve the ultrasound image from the ultrasound device. In act904, the computing device may determine whether the ultrasound image contains the target anatomical view. If the computing device determines that the ultrasound image contains the target anatomical view, the computing device may proceed act910and provide an indication of proper positioning. Otherwise the system may proceed to act906to generate a guidance plan for the operator to move the ultrasound device. In some embodiments, the computing device may employ an automated image processing technique, such as a deep learning technique, to determine whether the ultrasound image contains the target anatomical view. For example, the ultrasound image may be provided as input to a neural network that is trained to identify an anatomical view contained in the ultrasound image. The output of such a neural network may be an indication of the particular anatomical view that is contained in the ultrasound image. In this example, the identified anatomical view may be compared with the target anatomical view to determine whether they match. If the identified anatomical view and the target anatomical view match, the computing device may determine that the ultrasound image contains the target anatomical view. Otherwise, the computing device may determine that the ultrasound image does not contain the anatomical view. In another example, the neural network may be configured to directly provide an indication of an instruction for the operator based on an input ultrasound image. Thereby, the neural network may provide as an output a confirmation that the ultrasound devices properly positioned or an instruction to move the ultrasound device in a particular direction. In this example, the computing device may determine that the ultrasound image contains the target anatomical view responsive to the neural network providing a confirmation as an output. Otherwise, the computing device may determine that the ultrasound image does not contain the anatomical view. In act906, the computing device may generate a guidance plan regarding how to guide the operator to move the ultrasound device. In some embodiments, the guidance plan may comprise a guide path along which the operator should move the ultrasound device from an initial position to a target position where an ultrasound image containing the target anatomical view may be captured. In these embodiments, the computing device may identify the initial position of the ultrasound device on the subject at least in part by: identifying an anatomical view contained in the ultrasound image (e.g., using deep learning techniques) and map the identified anatomical view to a position on the subject. The target position may be identified by, for example, mapping the target anatomical view to a position on the subject. 
Once the initial and target positions have been identified, the computing device may identify a guide path between the initial and target positions along which the ultrasound device should move. The guide path may comprise a sequence of directions (e.g., translational directions or rotational directions) for the ultrasound device to travel along to reach the target position. The generated guide path may not be the shortest path between the initial position of the ultrasound device in the target position of the ultrasound device. For example, the generated path may avoid using diagonal movements that may be challenging to properly convey to the operator. Alternatively (or additionally), the generated path may avoid certain areas of the subject such as areas comprising hard tissues. Once the guide path between the initial position and the target position of the ultrasound device has been determined, the computing device may generate a sequence of one or more instructions to provide to the operator to instruct the operator to move the ultrasound device along the guide path. In act908, the computing device may provide an instruction to reposition the ultrasound device to the operator. The instruction may be, for example, an audible instruction played through a speaker, a visual instruction displayed using a display, and/or a tactile instruction provided using a vibration device (e.g., integrated into the computing device and/or the ultrasound device). The instruction may be provided based on, for example, the sequence of instructions in the guidance plan generated in act906. For example, the computing device may identify a single instruction from the sequence of instructions and provide the identified instruction. It should be appreciated that the instruction need not originate from a guidance plan. For example, as discussed above, a neural network may be configured to directly output an instruction based on a received ultrasound image. In this example, the output instruction may be directly provided and the act906of generating a guidance plan may be omitted. Once the computing device has provided the instruction to reposition the ultrasound device, the computing device may repeat one or more of acts902,904,906and/or908to provide the operator additional instructions. In act910, the computing device may provide an indication of proper positioning. For example, the computing device may provide an audible confirmation played through a speaker, a visual confirmation displayed using a display, or a tactile confirmation provided through a vibration device. FIG.10shows an example process1000for providing an augmented reality interface for an operator. The augmented reality interface may include a non-acoustic image of a real-world environment including an ultrasound device and one or more elements (such as instructions) overlaid onto the non-acoustic image. The process1000may be performed by, for example, a computing device in an ultrasound system. As shown inFIG.10, the process1000comprises an act1002of obtaining an image of an ultrasound device, an act1003of generating a composite image, and an act1008of presenting the composite image. The act1003of generating the composite image may comprise an act1004of identifying a pose of an ultrasound device in the image and an act1006of overlaying the instruction onto the image using the identified pose. In act1002, the computing device may capture an image (e.g., a non-acoustic image) of the ultrasound device. 
The non-acoustic image may be captured by an imaging device (e.g., a camera) integrated into the computing device. For example, the non-acoustic image may be captured using a front-facing camera of a mobile smartphone (e.g., on the same side as the display) when the operator is also the subject. In another example, the non-acoustic image may be captured using a rear-facing camera of a mobile smartphone (e.g., on the opposite side as the display) when the operator is a person (or group of people) separate from the subject. In act1003, the computing may generate the composite image. The composite image may comprise the non-acoustic image captured in act1002and one or more elements overlaid onto the non-acoustic image. The one or more elements overlaid onto the non-acoustic image may be, for example, one or more instructions designed to provide feedback to the operator regarding how to reposition the ultrasound device to obtain an ultrasound image that contains a target anatomical view. The computing device may generate the composite image in any of a variety of ways. In some embodiments, the computing device may be configured to generate the composite image by performing acts1004and1006. In act1004, the computing device may identify a pose (e.g., position and/or orientation) of the ultrasound device in the non-acoustic image. The computing device may identify the pose of the ultrasound device in the captured image using an automated image processing technique (e.g., a deep learning technique). For example, the non-acoustic image may be provided as an input to a neural network that is configured to identify which pixels in the non-acoustic image are associated with the ultrasound device. In this example, the computing device may use the identified pixels to determine a position of the ultrasound device in the non-acoustic image. In some embodiments, the ultrasound device may have a marker disposed thereon that is visible in the image to ease identification of the ultrasound device in the non-acoustic image. The marker may have a distinct shape, color, and/or image that is easy to recognize using an automated image processing technique. Additional information may also be employed to identify the pose of the ultrasound device in combination with (or in place of) the information extracted from the non-acoustic image. For example, the ultrasound device may comprise one or more sensors configured to detect movement (e.g., accelerometers, gyroscopes, compasses, and/or inertial measurement units). In this example, movement information from these sensors in the ultrasound device may be employed to determine the pose of the ultrasound device. In another example, the ultrasound device (e.g., ultrasound device502) and a computing device (e.g., computing device504) connected to the ultrasound device may comprise sensors configured to detect movement. In this example, the movement information from the sensors in both the ultrasound device and the computing device may be used in concert to identify the pose of the ultrasound device relative to the computing device and, thereby, identify the pose of the ultrasound device in the captured non-acoustic image. In act1006, the computing device may overlay an instruction onto the non-acoustic image using the identified pose to form an augmented reality interface. 
For example, the computing device may overlay an instruction regarding how to move the ultrasound device (e.g., a directional arrow) onto the non-acoustic image so as to be proximate and/or partially covering the ultrasound device. Additionally (or alternatively), the pose may be employed to position other elements in the augmented reality interface. For example, the pose of the ultrasound device may be employed to position an ultrasound image in the augmented reality interface. In this example, the ultrasound image may be positioned in the augmented reality interface so as to appear to be extending from the ultrasound device in the non-acoustic image into the subject. Thereby, the operator may gain an appreciation for the particular portion of the subject that is being imaged with the ultrasound device. In act1008, the computing device may present a composite image to the operator. For example, the computing device may present the composite image to the operator using a display integrated into the computing device. Alternative (or additionally), the computing device may transmit the composite image to another device (e.g., to be presented on a display of the other device). FIG.11shows an example process1100for tracking the location of an ultrasound device in non-acoustic images using a marker disposed thereon. The process1100may be performed by, for example, a computing device in an ultrasound system. As shown, the process1100comprises an act1102of obtaining an image of a marker disposed on an ultrasound device, an act1103of identifying a pose of the ultrasound device, and an act1108of presenting an instruction using the identified pose. The act1103of identifying the pose of the ultrasound device may comprise an act1104of identifying the location of the marker in the image and an act1106of analyzing a characteristic of the marker. In act1102, the computing device may capture a non-acoustic image of the marker on the ultrasound device. The non-acoustic image may be captured by an imaging device (e.g., a camera) integrated into the computing device. For example, the non-acoustic image may be captured using a front-facing camera of a mobile smartphone (e.g., on the same side as the display) when the operator is also the subject. In another example, the non-acoustic image may be captured using a rear-facing camera of a mobile smartphone (e.g., on the opposite side as the display) when the operator is person (or group of people) separate from the subject. In act1103, the computing device may identify a pose (e.g., a position and/or orientation) of the ultrasound device in the captured image using the marker. The computing device may identify the pose of the ultrasound device in the captured image in any of a variety of ways. In some embodiments, the computing device may identify the pose of the ultrasound device in the non-acoustic image by performing acts1104and1106. In act1104, the computing device may identify a location of the marker in the non-acoustic image. The computing device may use the identified location of the marker to identify a position of the ultrasound device on which the marker is disposed. The location of the marker may be determined by, for example, locating one or more features characteristic to the marker, such as a shape, color, and/or image, in the image using an automated image processing technique. In act1106, the computing device may analyze a characteristic of the marker. 
The computing device may analyze a characteristic of the marker to, for example, determine an orientation of the ultrasound device in the captured image. The particular way in which the computing device determines the orientation using characteristics of the marker may depend on, for example, the particular marker employed. In one example, the marker may be a monochrome marker comprising a pattern. In this example, the pattern may be analyzed in order to determine in orientation of the pattern and, thereby, determine an orientation of the ultrasound device in the non-acoustic image. In another example, the marker may be a dispersive marker that is configured to present different colors depending on the viewing angle. In this example, the computing device may identify a color of the marker in the non-acoustic image and use the identified color to determine an orientation of the marker and, thereby, an orientation of the ultrasound device. In yet another example, the marker may be a holographic marker that is configured to present different images depending on the viewing angle. In this example, the computing device may identify an image presented by the holographic marker and use the identified image to determine an orientation of the marker, and thereby, an orientation of the ultrasound device. In act1108, the computing device may present an instruction using the identified pose. In some embodiments, the computing device may overlay the instruction onto the non-acoustic image obtained in act1102using the identified pose to form a composite image for an augmented reality interface. For example, the computing device may overlay an instruction regarding how to move the ultrasound device (e.g., a directional arrow) onto the non-acoustic image so as to be proximate and/or partially covering the ultrasound device. Additionally (or alternatively), the pose may be employed to position other elements in the augmented reality interface. For example, the pose of the ultrasound device may be employed to position an ultrasound image in the augmented reality interface. In this example, the ultrasound image may be positioned in the augmented reality interface so as to appear to be extending from the ultrasound device in the non-acoustic image into the subject. Thereby, the operator may gain an appreciation for the particular portion of the subject that is being imaged with the ultrasound device. FIG.12shows an example process1200for analyzing captured ultrasound images to identify a medical parameter of the subject. The process1200may be performed by, for example, a computing device in an ultrasound system. As shown, the process1200comprises an act1202of obtaining an ultrasound image, an act1204of identifying an anatomical feature in the ultrasound image, and an act1206of identifying a medical parameter using the identified anatomical feature. In act1202, the computing device may obtain an ultrasound image of the subject. The computing device may obtain the ultrasound image by communicating with an ultrasound device communicatively coupled to the computing device. For example, the computing device may send an instruction to the ultrasound device to generate ultrasound data and send the ultrasound data to the computing device. The computing device may, in turn, use the received ultrasound data to generate the ultrasound image. In act1204, the computing device may identify an anatomical feature in the ultrasound image. 
For example, the computing device may identify a heart ventricle, a heart valve, a heart septum, a heart papillary muscle, a heart atrium, an aorta, or a lung as an anatomical feature in the ultrasound image. The computing device may identify the anatomical feature using an automated image processing technique, such as a deep learning technique. For example, the computing device may provide the ultrasound image as an input to a neural network that is configured (e.g., trained) to provide, as an output, an indication of which pixels in the ultrasound image are associated with an anatomical feature. It should be appreciated that this neural network may be separate and distinct from any neural networks employed to guide an operator to obtain an ultrasound image containing a target anatomical view (such as those employed in process900described above). In act1206, the computing device may identify a medical parameter using the identified anatomical feature. For example, the computing device may determine an ejection fraction, a fractional shortening, a ventricle diameter, a ventricle volume, an end-diastolic volume, an end-systolic volume, a cardiac output, a stroke volume, an intraventricular septum thickness, a ventricle wall thickness, or a pulse rate of the subject. In some embodiments, the computing device may identify the medical parameters by analyzing one or more characteristics of the identified anatomical feature. For example, the computing device may identify a heart ventricle in the ultrasound image and the dimensions of the heart ventricle may be extracted from the ultrasound image to determine a ventricle volume and/or a ventricle diameter. It should be appreciated that the computing device may analyze more than a single ultrasound image to identify the medical parameter. For example, the computing device may identify a ventricle volume in each of a plurality of ultrasound images and select a lowest ventricle volume as the end-systolic volume and select the highest ventricle volume as the end-diastolic volume. Further, the end-systolic and end-diastolic volumes may be employed to determine another medical parameter, such as an EF. FIG.13shows an example process1300for generating a diagnosis of a medical condition of a subject. The process1300may be performed by, for example, a computing device in an ultrasound system. As shown, the process1300comprises an act1302of receiving medical information about the subject, an act1304of identifying a target anatomical view, an act1306of obtaining an ultrasound image containing the target anatomical view, an act1308of generating a diagnosis of a medical condition of a subject, and an act1310of generating recommended treatments for the subject. In act1302, the computing device may receive medical information about the subject. Example medical information about the subject that may be received includes: a heart rate, a blood pressure, a body surface area, an age, a weight, a height, and a medication being taken by the subject. The computing device may receive the medical information by, for example, posing one or more questions to an operator and receiving a response. Additionally (or alternatively), the computing device may communicate with an external system to obtain the medical information. 
For example, the operator may scan a barcode (e.g., a QR code) on the ultrasound device using the computing device and the computing device may use information obtained from the barcode to access medical records associated with the subject on a remote server. In act1304, the computing device may identify a target anatomical view based on the received medical information. The computing device may analyze the received medical information to identify one or more organs that may be functioning abnormally. Then, the computing device may identify an anatomical view that contains the identified one or more organs. For example, the medical information about the subject may indicate that the heart of the subject is functioning abnormally (e.g., the patient has symptoms of congestive heart failure) and identify a PLAX view as the anatomical view to image. In act1306, the computing device may obtain an ultrasound image containing the target anatomical view. For example, the computing device may retrieve an ultrasound image of the subject containing the target anatomical view from an electronic health record of the patient. Alternatively (or additionally), the computing device may guide the operator to obtain an ultrasound image that contains the target anatomical view. For example, the computing device may issue one or more instructions regarding how the operator should position the ultrasound device on the subject to obtain an ultrasound image containing the target anatomical view. The computing device may generate and/or provide these instructions in any of a variety of ways. For example, the computing device may perform a process that is similar to (or identical to) the process900described above. In act1308, the computing device may generate a diagnosis of a medical condition of the subject using the ultrasound image containing the target anatomical view. In some embodiments, the computing device may analyze the ultrasound image containing the target anatomical view to identify one or more medical parameters (e.g., an EF of the subject) and use the identified one or more medical parameters (alone or in combination with other information such as medical information of the subject) to generate the diagnosis. In these embodiments, the computing device may perform one or more acts in process1200to identify a medical parameter of the subject. For example, the computing device may determine an ejection fraction of the subject by performing acts1202,1204, and/or1206and compare the resulting ejection fraction value with a threshold to determine whether the subject is likely suffering from congestive heart failure. The computing device may combine the information regarding the medical parameters with other information (such as the medical information about the subject received in act1302) to diagnose a medical condition of the subject. For example, the computing device may diagnose a patient with congestive heart failure responsive to the computing device determining that the ejection fraction of the subject is below a threshold and that the subject has reported symptoms of congestive heart failure (such as experiencing paroxysmal nocturnal dyspnea). It should be appreciated that the computing device may be configured to diagnose any of a variety of medical conditions such as: heart conditions (e.g., congestive heart failure, coronary artery disease, and congenital heart disease), lung conditions (e.g., lung cancer), kidney conditions (e.g., kidney stones), and/or joint conditions (e.g., arthritis). 
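As a simple illustration of act 1308, the sketch below combines an image-derived medical parameter (an EF value) with reported symptoms to flag likely congestive heart failure. The 40% EF cutoff and the symptom list are illustrative assumptions, not thresholds taken from this disclosure.

```python
# Minimal sketch of act 1308: combining an image-derived medical parameter (EF)
# with medical information about the subject to generate a pre-diagnosis. The
# EF threshold and symptom list are illustrative assumptions only.
def diagnose_congestive_heart_failure(ef_percent, reported_symptoms,
                                      ef_threshold=40.0):
    chf_symptoms = {"paroxysmal nocturnal dyspnea", "orthopnea", "edema"}
    has_symptoms = bool(chf_symptoms & {s.lower() for s in reported_symptoms})
    if ef_percent < ef_threshold and has_symptoms:
        return "likely congestive heart failure"
    if ef_percent < ef_threshold:
        return "reduced ejection fraction; follow-up recommended"
    return "no congestive heart failure indicated by this screen"

print(diagnose_congestive_heart_failure(30.0, ["paroxysmal nocturnal dyspnea"]))
```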
In act1310, the computing device may generate one or more recommended treatments for the subject. The computing device may generate the one or more recommended treatments based on the diagnosis of the subject. Example recommended treatments include: a change in diet, a change in exercise routine, a pharmaceutical drug, a biologic (e.g., vaccines, gene therapies, cellular therapies), radiotherapy, chemotherapy, and surgical intervention. For example, the subject may be diagnosed with congestive heart failure and the computing device generate a recommended treatment of: angiotensin-converting-enzyme inhibitors (ACE inhibitors), angiotensin receptor blockers (ARB), or other alternatives. It should be appreciated that the computing device may use information other than the diagnosis to generate the recommended treatment, such as medical information of the subject and/or one or more medical parameters extracted from the ultrasound image. For example, the medical information of the subject may indicate that the subject is a smoker and the computing device may include a recommended treatment of quitting smoking when the subject is diagnosed with congestive heart failure. In another example, the medical information of the subject may include one or more drug allergies of the subject and the computing device may not recommend any treatments that involve administration of a drug to which the subject is allergic. In yet another example, the medical information of the subject may include one or more drugs taken by the subject and the computing device may not recommend any treatments that would adversely interact with one or more of the drugs already taken by the subject. Various inventive concepts may be embodied as one or more processes, of which examples have been provided. The acts performed as part of each process may be ordered in any suitable way. Thus, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments. Further, one or more of the processes may be combined. For example, the process1300of identifying an anatomical view to image based on medical information about the subject may be combined with the process1200for analyzing captured ultrasound images to identify a medical parameter of the subject. Thereby, the computing device may (1) identify an anatomical view to image, (2) guide the operator to capture an ultrasound image containing the anatomical view, and (3) analyze the captured ultrasound image to identify medical parameters of the subject. In this example, the ultrasound device may additionally make one or more treatment recommendations based on the identified medical parameters and/or medical information regarding the subject. Example Deep Learning Techniques Aspects of the technology described herein relate to the application of automated image processing techniques to analyze images, such as ultrasound images and non-acoustic images. In some embodiments, the automated image processing techniques may comprise machine learning techniques such as deep learning techniques. Machine learning techniques may comprise techniques that seek to identify patterns in a set of data points and use the identified patterns to make predictions for new data points. These machine learning techniques may involve training (and/or building) a model using a training data set to make such predictions. 
The trained model may be used as, for example, a classifier that is configured to receive a data point as an input and provide an indication of a class to which the data point likely belongs as an output. Deep learning techniques may include those machine learning techniques that employ neural networks to make predictions. Neural networks typically comprise a collection of neural units (referred to as neurons) that each may be configured to receive one or more inputs and provide an output that is a function of the input. For example, the neuron may sum the inputs and apply a transfer function (sometimes referred to as an “activation function”) to the summed inputs to generate the output. The neuron may apply a weight to each input to, for example, weight some inputs higher than others. Example transfer functions that may be employed include step functions, piecewise linear functions, and sigmoid functions. These neurons may be organized into a plurality of sequential layers that each comprise one or more neurons. The plurality of sequential layers may include an input layer that receives the input data for the neural network, an output layer that provides the output data for the neural network, and one or more hidden layers connected between the input and output layers. Each neuron in a hidden layer may receive inputs from one or more neurons in a previous layer (such as the input layer) and provide an output to one or more neurons in a subsequent layer (such as an output layer). A neural network may be trained using, for example, labeled training data. The labeled training data may comprise a set of example inputs and an answer associated with each input. For example, the training data may comprise a plurality of ultrasound images that are each labeled with an anatomical view that is contained in the respective ultrasound image. In this example, the ultrasound images may be provided to the neural network to obtain outputs that may be compared with the labels associated with each of the ultrasound images. One or more characteristics of the neural network (such as the interconnections between neurons (referred to as edges) in different layers and/or the weights associated with the edges) may be adjusted until the neural network correctly classifies most (or all) of the input images. In some embodiments, the labeled training data may comprise sample patient images that need not all be “standard” or “good” images of an anatomic structure. For example, one or more of the sample patient images may be “non-ideal” for training purposes. Each of these sample patient images may be evaluated by a trained clinician. The trained clinician may add a qualitative label to each of the sample patient images. In the specific example of a PLAX image, the clinician may determine that the given image is “normal” (i.e., depicts a good view of the structure for analysis purposes). In the alternative, if the image is not ideal, the clinician may provide a specific label for the image that describes the problem with it. For example, the image may represent an image taken while the ultrasound device was oriented “too counterclockwise” or perhaps “too clockwise” on the patient. Any number of specific errors may be assigned to a given sample image. Once the training data has been created, the training data may be loaded into a database (e.g., an image database) and used to train a neural network using deep learning techniques.
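As an illustration of such a training procedure, the following is a minimal PyTorch sketch of training a classifier on labeled ultrasound images. The tiny network, the synthetic stand-in data, and the example label set are assumptions made purely for illustration; in practice, the architectures described below (e.g., Tables 1-3) and a real annotated image database would take their place.

```python
# Minimal PyTorch training sketch for a labeled ultrasound-image classifier.
# The dataset, label set, and small network below are illustrative placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

LABELS = ["normal", "too_clockwise", "too_counterclockwise"]  # example quality labels

class TinyViewNet(nn.Module):
    def __init__(self, n_classes=len(LABELS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Synthetic stand-in data: 32 single-channel 64x64 "ultrasound images" with labels.
images = torch.randn(32, 1, 64, 64)
labels = torch.randint(0, len(LABELS), (32,))
loader = DataLoader(TensorDataset(images, labels), batch_size=8, shuffle=True)

model = TinyViewNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                      # a few epochs just to show the loop
    for batch_images, batch_labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(batch_images), batch_labels)
        loss.backward()
        optimizer.step()
```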
Once the neural network has been trained, the trained neural network may be deployed to one or more computing devices. It should be appreciated that the neural network may be trained with any number of sample patient images. For example, a neural network may be trained with as few as 7 or so sample patient images, although it will be appreciated that the more sample images used, the more robust the trained model may be.

Convolutional Neural Networks

In some applications, a neural network may be implemented using one or more convolution layers to form a convolutional neural network. An example convolutional neural network, configured to analyze an image 1402, is shown in FIG. 14. As shown, the convolutional neural network comprises an input layer 1404 to receive the image 1402, an output layer 1408 to provide the output, and a plurality of hidden layers 1406 connected between the input layer 1404 and the output layer 1408. The plurality of hidden layers 1406 comprises convolution and pooling layers 1410 and dense layers 1412. The input layer 1404 may receive the input to the convolutional neural network. As shown in FIG. 14, the input to the convolutional neural network may be the image 1402. The image 1402 may be, for example, an ultrasound image or a non-acoustic image. The input layer 1404 may be followed by one or more convolution and pooling layers 1410. A convolutional layer may comprise a set of filters that are spatially smaller (e.g., have a smaller width and/or height) than the input to the convolutional layer (e.g., the image 1402). Each of the filters may be convolved with the input to the convolutional layer to produce an activation map (e.g., a 2-dimensional activation map) indicative of the responses of that filter at every spatial position. The convolutional layer may be followed by a pooling layer that down-samples the output of a convolutional layer to reduce its dimensions. The pooling layer may use any of a variety of pooling techniques such as max pooling and/or global average pooling. In some embodiments, the down-sampling may be performed by the convolution layer itself (e.g., without a pooling layer) using striding. The convolution and pooling layers 1410 may be followed by dense layers 1412. The dense layers 1412 may comprise one or more layers, each with one or more neurons that receive an input from a previous layer (e.g., a convolutional or pooling layer) and provide an output to a subsequent layer (e.g., the output layer 1408). The dense layers 1412 may be described as “dense” because each of the neurons in a given layer may receive an input from each neuron in a previous layer and provide an output to each neuron in a subsequent layer. The dense layers 1412 may be followed by an output layer 1408 that provides the output of the convolutional neural network. The output may be, for example, an indication of which class, from a set of classes, the image 1402 (or any portion of the image 1402) belongs to. It should be appreciated that the convolutional neural network shown in FIG. 14 is only one example implementation and that other implementations may be employed. For example, one or more layers may be added to or removed from the convolutional neural network shown in FIG. 14. Additional example layers that may be added to the convolutional neural network include: a rectified linear units (ReLU) layer, a pad layer, a concatenate layer, and an upscale layer. An upscale layer may be configured to upsample the input to the layer.
A ReLU layer may be configured to apply a rectifier (sometimes referred to as a ramp function) as a transfer function to the input. A pad layer may be configured to change the size of the input to the layer by padding one or more dimensions of the input. A concatenate layer may be configured to combine multiple inputs (e.g., combine inputs from multiple layers) into a single output. Convolutional neural networks may be employed to perform any of a variety of functions described herein. For example, a convolutional neural network may be employed to: (1) identify an anatomical view contained in an ultrasound image, (2) identify an instruction to provide to an operator, (3) identify an anatomical feature in an ultrasound image, or (4) identify a pose of an ultrasound device in a non-acoustic image. It should be appreciated that more than a single convolutional neural network may be employed to perform one or more of these functions. For example, a first convolutional neural network may be employed to identify an instruction to provide to an operator based on an input ultrasound image and a second, different convolutional neural network may be employed to identify an anatomical feature in an ultrasound image. The first and second neural networks may comprise a different arrangement of layers and/or be trained using different training data. An example implementation of a convolutional neural network is shown below in Table 1. The convolutional neural network shown in Table 1 may be employed to classify an input image (e.g., an ultrasound image). For example, the convolutional neural network shown in Table 1 may be configured to receive an input ultrasound image and provide an output that is indicative of which instruction from a set of instructions should be provided to an operator to properly position the ultrasound device. The set of instructions may include: (1) tilt the ultrasound device inferomedially, (2) rotate the ultrasound device counterclockwise, (3) rotate the ultrasound device clockwise, (4) move the ultrasound device one intercostal space down, (5) move the ultrasound device one intercostal space up, and (6) slide the ultrasound device medially. In Table 1, the sequence of the layer is denoted by the “Layer Number” column, the type of the layer is denoted by the “Layer Type” column, and the input to the layer is denoted by the “Input to Layer” column.

TABLE 1
Example Layer Configuration for Convolutional Neural Network

Layer Number    Layer Type               Input to Layer
1               Input Layer              Input Image
2               Convolution Layer        Output of Layer 1
3               Convolution Layer        Output of Layer 2
4               Pooling Layer            Output of Layer 3
5               Convolution Layer        Output of Layer 4
6               Convolution Layer        Output of Layer 5
7               Pooling Layer            Output of Layer 6
8               Convolution Layer        Output of Layer 7
9               Convolution Layer        Output of Layer 8
10              Pooling Layer            Output of Layer 9
11              Convolution Layer        Output of Layer 10
12              Convolution Layer        Output of Layer 11
13              Pooling Layer            Output of Layer 12
14              Fully Connected Layer    Output of Layer 13
15              Fully Connected Layer    Output of Layer 14
16              Fully Connected Layer    Output of Layer 15
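To make the Table 1 configuration concrete, the following is a sketch of the same layer sequence in PyTorch: four convolution-convolution-pooling blocks (layers 2-13) followed by three fully connected layers (layers 14-16), ending in one output per candidate instruction. The channel counts, kernel sizes, activation functions, and 128x128 input resolution are not specified by Table 1 and are assumptions made for illustration.

```python
# Sketch of the Table 1 layer sequence in PyTorch. Channel counts, kernel sizes,
# and the 128x128 input resolution are illustrative assumptions only.
import torch
import torch.nn as nn

N_INSTRUCTIONS = 6   # e.g., the six repositioning instructions listed above

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2))

class Table1Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(       # layers 2-13 of Table 1
            conv_block(1, 16), conv_block(16, 32),
            conv_block(32, 64), conv_block(64, 128))
        self.head = nn.Sequential(           # layers 14-16 of Table 1
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, N_INSTRUCTIONS))

    def forward(self, x):                    # x: (N, 1, 128, 128) ultrasound images
        return self.head(self.backbone(x))

logits = Table1Net()(torch.randn(2, 1, 128, 128))   # -> shape (2, 6)
```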
Another example implementation of a convolutional neural network is shown below in Table 2. The convolutional neural network in Table 2 may be employed to identify two points on the basal segments of the left ventricle in an ultrasound image. In Table 2, the sequence of the layer is denoted by the “Layer Number” column, the type of the layer is denoted by the “Layer Type” column, and the input to the layer is denoted by the “Input to Layer” column.

TABLE 2
Example Layer Configuration for Convolutional Neural Network

Layer Number    Layer Type               Input to Layer
1               Input Layer              Input Image
2               Convolution Layer        Output of Layer 1
3               Convolution Layer        Output of Layer 2
4               Pooling Layer            Output of Layer 3
5               Convolution Layer        Output of Layer 4
6               Convolution Layer        Output of Layer 5
7               Pooling Layer            Output of Layer 6
8               Convolution Layer        Output of Layer 7
9               Convolution Layer        Output of Layer 8
10              Pooling Layer            Output of Layer 9
11              Convolution Layer        Output of Layer 10
12              Convolution Layer        Output of Layer 11
13              Convolution Layer        Output of Layer 12
14              Fully Connected Layer    Output of Layer 13
15              Fully Connected Layer    Output of Layer 14
16              Fully Connected Layer    Output of Layer 15

Yet another example implementation of a convolutional neural network is shown below in Table 3. The convolutional neural network shown in Table 3 may be configured to receive an ultrasound image and classify each pixel in the input image as belonging to the foreground (an anatomical structure, e.g., the left ventricle) or to the background. Relative to the convolutional neural networks shown in Tables 1 and 2, upsampling layers have been introduced to increase the resolution of the classification output. The output of the upsampled layers is combined with the output of other layers to provide accurate classification of individual pixels. In Table 3, the sequence of the layer is denoted by the “Layer Number” column, the type of the layer is denoted by the “Layer Type” column, and the input to the layer is denoted by the “Input to Layer” column.

TABLE 3
Example Layer Configuration for Convolutional Neural Network

Layer Number    Layer Type               Input to Layer
1               Input Layer              Input Image
2               Convolution Layer        Output of Layer 1
3               Convolution Layer        Output of Layer 2
4               Pooling Layer            Output of Layer 3
5               Convolution Layer        Output of Layer 4
6               Convolution Layer        Output of Layer 5
7               Pooling Layer            Output of Layer 6
8               Convolution Layer        Output of Layer 7
9               Convolution Layer        Output of Layer 8
10              Pooling Layer            Output of Layer 9
11              Convolution Layer        Output of Layer 10
12              Convolution Layer        Output of Layer 11
13              Convolution Layer        Output of Layer 12
14              Upscale Layer            Output of Layer 13
15              Convolution Layer        Output of Layer 14
16              Pad Layer                Output of Layer 15
17              Concatenate Layer        Output of Layers 9 and 16
18              Convolution Layer        Output of Layer 17
19              Convolution Layer        Output of Layer 18
20              Upscale Layer            Output of Layer 19
21              Convolution Layer        Output of Layer 20
22              Pad Layer                Output of Layer 21
23              Concatenate Layer        Output of Layers 6 and 22
24              Convolution Layer        Output of Layer 23
25              Convolution Layer        Output of Layer 24
26              Upscale Layer            Output of Layer 25
27              Convolution Layer        Output of Layer 26
28              Pad Layer                Output of Layer 27
29              Concatenate Layer        Output of Layers 3 and 28
30              Convolution Layer        Output of Layer 29
31              Convolution Layer        Output of Layer 30
32              Convolution Layer        Output of Layer 31

Integrating Statistical Knowledge into Convolutional Neural Networks

In some embodiments, statistical prior knowledge may be integrated into a convolutional neural network. For example, prior statistical knowledge, obtained through principal components analysis (PCA), may be integrated into a convolutional neural network in order to obtain robust predictions even when dealing with corrupted or noisy data. In these embodiments, the network architecture may be trained end-to-end and include a specially designed layer which incorporates the dataset modes of variation discovered via PCA and produces predictions by linearly combining them.
Further, a mechanism may be included to focus the attention of the convolutional neural network on specific regions of interest of an input image in order to obtain refined predictions. The complexity of anatomical structures along with the presence of noise, artifacts, visual clutter, and poorly defined image areas often cause ambiguities and errors in image analysis. In the medical domain, many of these errors can be resolved by relying on statistical prior knowledge. For example, in segmentation it is useful to incorporate prior knowledge about the segmentation contour. Landmark localization tasks can benefit from the semantic relationships between different landmarks and how their positions are allowed to change with respect to each other. Finally, statistical models capturing the appearance of selected regions have been shown to improve results in a number of cases. Shape models have also been used to constrain segmentation algorithms that are based on machine learning. This has been done by learning a posterior distribution of PCA coefficients and by re-projecting portions of ground truth contours onto unseen examples. These models rely on shallow architectures, manually engineered or learned features and shape constraints being imposed as part of a regularization or post-processing step. Deep learning approaches and convolutional neural networks in particular, have shown astonishing capabilities to learn a hierarchy of features directly from raw data. Deep learning models are organized in multiple layers, where features are extracted in a cascaded fashion. As the depth of the network increases, the extracted features refer to bigger image regions and therefore recognize higher level concepts compared to the ones extracted in earlier layers. Unfortunately, the applicability of deep learning approaches in medical image analysis is often limited by the requirement to train with large annotated datasets. Supplying more annotated data during the learning process allows a larger amount of challenging, real-world situations to be captured and therefore partly overcomes the difficulty to integrate prior statistical knowledge in the learning process. In the medical domain, it is often difficult to obtain large annotated datasets due to limitations on data usage and circulation and the tediousness of the annotation process. Moreover, medical images typically exhibit large variability in the quality and appearance of the structures across different scans, which further hampers the performances of machine vision algorithms. Ultrasound images, in particular, are often corrupted by noise, shadows, signal drop regions, and other artifacts that make their interpretation challenging even to human observers. Additionally, ultrasound scans exhibit high intra- and inter-operator acquisition variability, even when scanned by experts. In some embodiments, PCA may be employed to advantageously discover the principal modes of variation of training data. Such discovered principle modes of variation may be integrated into a convolutional neural network. The robustness of the results is increased by constraining the network predictions with prior knowledge extracted by statistically analyzing the training data. This approach makes it possible to process cases where the anatomy of interest appears only partially, its appearance is not clear, or it visually differs from the observed training examples. 
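As a concrete illustration of extracting the principal modes of variation from a set of annotated key-point coordinates (the construction is formalized in Equations (3)-(5) below), a short NumPy sketch follows. The dataset size, number of key-points, and number of retained modes are arbitrary placeholder values chosen only for the example.

```python
# Minimal NumPy sketch: principal modes of variation of landmark coordinates.
# N, P, and the number of retained modes are illustrative placeholders.
import numpy as np

N, P = 500, 14                      # training examples, key-points per example
Y = np.random.rand(2 * P, N)        # each column stacks the (x, y) coordinates of one example

mu = Y.mean(axis=1, keepdims=True)  # mean shape (see Equation (3) below)
Y_tilde = Y - mu                    # de-meaned data
U, S, Vt = np.linalg.svd(Y_tilde, full_matrices=False)  # columns of U are the principal components

k = 8                               # number of modes retained
U_k = U[:, :k]

# Any example can be approximated as a linear combination of the modes (Equation (5)):
w = U_k.T @ Y_tilde[:, 0]           # coefficients of the first training example
reconstruction = U_k @ w + mu[:, 0] # shape synthesized from only k modes
print(np.linalg.norm(reconstruction - Y[:, 0]))  # reconstruction error
```

Constraining or truncating the coefficient vector w in this way is what keeps synthesized shapes close to the statistics of the training population.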
A convolutional network architecture may be employed that includes a new PCA layer that incorporates the dataset modes of variation and produces predictions as a linear combination of the modes. This process is used in a procedure that focuses the attention of the subsequent convolutional neural network layers on the specific region of interest to obtain refined predictions. Importantly, the network is trained end-to-end with the shape encoded in a PCA layer and the loss imposed on the final location of the points. The end-to-end training makes it possible to start from a random configuration of network parameters and find the optimal set of filters and biases according to the estimation task and training data. This method may be applied to, for example, landmark localization in 2D echocardiography images acquired from the parasternal long axis view and to left ventricle segmentation of the heart in scans acquired from the apical four chamber view. Incorporating statistical prior knowledge obtained through PCA into a convolutional neural network may advantageously overcome the limitations of previous deep learning approaches, which lack strong shape priors, and the limitations of active shape models, which lack advanced pattern recognition capabilities. This approach may be fully automatic and therefore differs from most previous methods based on ASM, which required human interaction. The neural network outputs the prediction in a single step without requiring any optimization loop.

In some embodiments, a training set containing N images and the associated ground truth annotations Y, consisting of coordinates referring to P key-points which describe the position of landmarks, may be employed. The training set may be used to first obtain the principal modes of variation of the coordinates in Y and then train a convolutional neural network that leverages them. The information used to formulate the predictions is obtained after multiple convolution and pooling operations, and therefore fine-grained, high-resolution details might be lost across the layers. For this reason, a mechanism may be employed that focuses the attention of the network on full-resolution details by cropping portions of the image in order to refine the predictions. The architecture may be trained end-to-end, and all the parameters of the network may be updated at every iteration.

Much of the variability of naturally occurring structures, such as organs and anatomical details of the body, is not arbitrary. By simple observation of a dataset of shapes representative of a population, for example, one can notice the presence of symmetries and correlations between different shape parts. In the same way, it is often possible to observe correlations in the position of different landmarks of the body since they are tightly entangled with each other. PCA can be used to discover the principal modes of variation of the dataset at hand. When shapes are described as aligned point sets across the entire dataset, PCA reveals what correlations exist between different points and defines a new coordinate frame where the principal modes of variation correspond to the axes. Having a matrix Y containing the dataset, where each observation yi constitutes one of its columns, its principal components may be obtained by first de-meaning Y through equation (3):

\tilde{Y} = Y - \mu, with \mu = \frac{1}{N} \sum_i y_i    (3)

and then by computing the eigenvectors of the covariance matrix \tilde{Y}\tilde{Y}^T.
This corresponds to U in equation (4):

\tilde{Y} = U \Sigma V^T    (4)

which is obtained via singular value decomposition (SVD). The matrix \Sigma is diagonal and contains the eigenvalues of the covariance matrix, which represent the variance associated with each principal component in the eigenbase. Any example in the dataset can be synthesized as a linear combination of the principal components, as shown in Equation (5):

y_i = U w + \mu    (5)

Each coefficient of the linear combination governs not only the position of one, but multiple correlated points that may describe the shape at hand. By imposing constraints on the coefficients weighting the effect of each principal component, or by reducing their number until the correct balance between percentage of retained variance and number of principal components is reached, it is possible to synthesize shapes that respect the concept of “legal shape” introduced before. The convolutional neural network may not be trained to perform regression on the weights w in Equation 5. Instead, an end-to-end architecture may be used where the network directly uses the PCA eigenbase to make predictions from an image in the form of key-point locations. This has direct consequences on the training process. The network learns, by minimizing the loss, to steer the coefficients while being “aware” of their effect on the results. Each of the weights controls the location of multiple correlated key-points simultaneously. Since the predictions are obtained as a linear combination of the principal components, they obey the concept of “legal shape” and therefore are more robust to missing data, noise, and artifacts.

The network may comprise two branches. The first branch employs convolutional, pooling, and dense layers, and produces a coarse estimate of the key-point locations via PCA. The second branch operates on full-resolution patches cropped from the input image around the coarse key-point locations. The output of the second network refines the predictions made by the first by using more fine-grained visual information. Both branches are trained simultaneously and are fully differentiable. The convolutions are all applied without padding, and they use kernels of size 3×3 in the first convolutional neural network branch and 5×5 in the second, shallower, branch. The nonlinearities used throughout the network are rectified linear functions. The inputs of the PCA layer are not processed through nonlinearities. The PCA layer implements a slightly modified version of the synthesis equation in Equation (5). In addition to the weights w, which are supplied by a dense layer of the network, a global shift s that is applied to all the predicted points is also supplied. Through the bi-dimensional vector s, translations of the anatomy of interest can be handled. With a slight abuse of notation, Equation 5 may be re-written as shown in Equation (6):

y_i = U w + \mu + s    (6)

The layer performing cropping follows an implementation inspired by spatial transformers, which ensures differentiability. A regular sampling pattern is translated to the coarse key-point locations and the intensity values of the surrounding area are sampled using bilinear interpolation. Having P key-points, P patches may be obtained for each of the K images in the mini-batch. The resulting KP patches are then processed through a 3-layer deep convolutional neural network using 8 filters applied without padding, which reduces their size by a total of 12 pixels.
After the convolution layers, the patches are again arranged into a batch of K elements having P×8 channels, and further processed through three dense layers, which ultimately compute wA, having the same dimensionality as w. The refined weights wF, which are employed in the PCA layer to obtain a more accurate key-point prediction, are obtained as wF = wA + w. This approach has been tested on two different ultrasound datasets depicting the human heart with the aim of solving two different tasks, with good results. The first task is segmentation of the left ventricle (LV) of the heart from scans acquired from the apical view, while the second task is a landmark localization problem where the aim is to localize 14 points of interest in images acquired from the parasternal long axis view. In the first case, the model leverages prior statistical knowledge relative to the shape of the structures of interest, while in the second case the model captures the spatiotemporal relationships between landmarks across cardiac cycles of different patients. For the segmentation task a total of 1100 annotated images, 953 for training and 147 for testing, were employed.

Techniques for Landmark Localization Using Convolutional Neural Networks

The inventors have appreciated that accurate landmark localization in ultrasound video sequences is challenging due to noise, shadows, anatomical differences, and scan plane variation. Accordingly, the inventors have conceived and developed a fully convolutional neural network trained to regress the landmark locations that may address such challenges. In this convolutional neural network, a series of convolution and pooling layers is followed by a collection of upsampling and convolution layers with feature forwarding from the earlier layers. The final location estimates are produced by computing a center of mass of the regression maps in the last layer. In addition, uncertainties of the estimates are computed as the standard deviations of the predictions. The temporal consistency of the estimates is achieved by Long Short-Term Memory cells which process several previous frames in order to refine the estimate in the current frame. The results on automatic measurement of the left ventricle in parasternal long axis views and subsequent ejection fraction computation show accuracy on par with inter-user variability.

Regression modeling is an approach for describing the relationship between an independent variable and one or more dependent variables. In machine learning, this relationship is described by a function whose parameters are learned from training examples. In deep learning models, this function is a composition of logistic (sigmoid), hyperbolic tangent, or, more recently, rectified linear functions at each layer of the network. In many applications, the function learns a mapping between input image patches and a continuous prediction variable. Regression modeling has been used to detect organ or landmark locations in images, visually track objects and features, and estimate body poses. The deep learning approaches have outperformed previous techniques, especially when a large annotated training data set is available. The proposed architectures used cascades of regressors, refinement localization stages, and cues combined from multiple landmarks to localize landmarks. In medical images, the requirements on accurate localization are high since the landmarks are used as measurement points to help in diagnosis.
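The center-of-mass aggregation and the variance-based uncertainty mentioned above (and formalized in Equation (7) below) can be sketched as follows. The number of landmarks, the output grid size, and the tensor layout are assumptions made for the example; the sketch simply averages the per-position candidate locations and reports their spread.

```python
# Minimal PyTorch sketch of center-of-mass aggregation of per-position landmark
# candidates, with the standard deviation of the candidates as an uncertainty.
# Map size, number of landmarks, and tensor layout are illustrative assumptions.
import torch

def aggregate_landmarks(loc_maps: torch.Tensor):
    """loc_maps: (batch, num_landmarks, 2, h, w) candidate (x, y) at every position."""
    candidates = loc_maps.flatten(start_dim=3)   # (batch, k, 2, h*w)
    estimate = candidates.mean(dim=-1)           # center of mass of the predictions
    uncertainty = candidates.std(dim=-1)         # spread of the candidates
    return estimate, uncertainty

maps = torch.randn(4, 14, 2, 64, 64)      # e.g., 14 landmarks on a 64x64 output grid
p_hat, sigma = aggregate_landmarks(maps)  # p_hat: (4, 14, 2), sigma: (4, 14, 2)
```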
When tracking the measurements in video sequences, the points must be accurately detected in each frame while ensuring temporal consistency of the detections. A fully convolutional network architecture for accurate localization of anatomical landmark points in video sequences has been devised. The advantage of the fully convolutional network is that the responses from multiple windows covering the input image can be computed in a single step. The network is trained end-to-end and outputs the locations of the landmarks. The aggregation of the regressed locations at the last convolution layer is ensured by a new center-of-mass layer which computes the mean position of the predictions. The layer makes it possible to use a new regularization technique based on the variance of the predicted candidates and to define a new loss based on the relative locations of landmarks. The evaluation is fast enough to process each frame of a video sequence at near frame-rate speeds. The temporal consistency of the measurements is improved by Convolutional Long Short-Term Memory (CLSTM) cells which process the feature maps from several previous frames and produce updated features for the current frame in order to refine the estimate.

Denote an input image of width w and height h as I (independent variable) and the position of k landmarks stacked columnwise into p (dependent variable). The goal of the regression is to learn a function ƒ(I; θ) = p parametrized by θ. The function ƒ may be approximated by a convolutional neural network, and the parameters θ may be trained using a database of images and their corresponding annotations. Typically, a Euclidean loss is employed to train ƒ using each annotated image. Previously, regression estimates were obtained directly from the last layer of the network, which was fully connected to the previous layer. This is a highly non-linear mapping, where the estimate is computed from the fully connected layers after convolutional blocks. Instead of a fully connected network, we propose to regress landmark locations using a fully convolutional architecture (FCNN). Its advantage is that the estimates can be computed in a single evaluation step. In the proposed architecture, landmark coordinate estimates may be obtained at each image location. The aggregated landmark coordinate estimates are computed in a new center-of-mass layer from the input at each predicting location l_ij:

\hat{p} = \frac{1}{w \times h} \sum_{i=1}^{h} \sum_{j=1}^{w} l_{ij}    (7)

Recurrent neural networks (RNN) can learn sequential context dependencies by accepting an input xt and updating a hidden vector ht at every time step t. The RNN network can be composed of Long Short-Term Memory (LSTM) units, each controlled by a gating mechanism with three types of updates, it, ƒt, and ot, that range between 0 and 1. The value it controls the update of each memory cell, ƒt controls the forgetting of each memory cell, and ot controls the influence of the memory state on the hidden vector. In Convolutional LSTMs (CLSTMs), the input weights and hidden vector weights are convolved instead of multiplied to model spatial constraints. The function introduces a non-linearity, which may be chosen to be tanh. Denoting the convolutional operator as * for equations 8-10, the values at the gates are computed as follows:

forget gate: f_t = sigm(W_f * [h_{t-1}, x_t] + b_f)    (8)
input gate: i_t = sigm(W_i * [h_{t-1}, x_t] + b_i)    (9)
output gate: o_t = sigm(W_o * [h_{t-1}, x_t] + b_o)    (10)

The parameters of the weights W and biases b are learned from training sequences.
In addition to the gate values, each CLSTM unit computes state candidate values:

g_t = tanh(W_g * [h_{t-1}, x_t] + b_g)    (11)

where g_t ranges between −1 and 1 and influences memory contents. The memory cell is updated by

c_t = f_t ⊙ c_{t-1} + i_t ⊙ g_t    (12)

which additively modifies each memory cell. The update process results in the gradients being distributed during backpropagation. The symbol ⊙ denotes the Hadamard product. Finally, the hidden state is updated as:

h_t = o_t ⊙ tanh(c_t)    (13)

In sequential processing of image sequences, the inputs into the LSTM consist of the feature maps computed from a convolutional neural network. In this work, two architectures are proposed to compute the feature maps. The first architecture is a neural network with convolution and pooling layers. After sequentially processing the feature maps in the CLSTM, the output is fed into fully connected layers to compute the landmark location estimate. In the second architecture, the CLSTM input is the final layer of a convolutional path of the fully convolutional architecture (FCN). The landmark location estimates are computed from the CLSTM output processed through the transposed convolutional part of the FCN network.

Example Ultrasound Systems

FIG.15Ais a schematic block diagram illustrating aspects of an example ultrasound system1500A upon which various aspects of the technology described herein may be practiced. For example, one or more components of the ultrasound system1500A may perform any of the processes described herein. As shown, the ultrasound system1500A comprises processing circuitry1501, input/output devices1503, ultrasound circuitry1505, and memory circuitry1507. The ultrasound circuitry1505may be configured to generate ultrasound data that may be employed to generate an ultrasound image. The ultrasound circuitry1505may comprise one or more ultrasonic transducers monolithically integrated onto a single semiconductor die. The ultrasonic transducers may include, for example, one or more capacitive micromachined ultrasonic transducers (CMUTs), one or more CMOS ultrasonic transducers (CUTs), one or more piezoelectric micromachined ultrasonic transducers (PMUTs), and/or one or more other suitable ultrasonic transducer cells. In some embodiments, the ultrasonic transducers may be formed on the same chip as other electronic components in the ultrasound circuitry1505(e.g., transmit circuitry, receive circuitry, control circuitry, power management circuitry, and processing circuitry) to form a monolithic ultrasound device. The processing circuitry1501may be configured to perform any of the functionality described herein. The processing circuitry1501may comprise one or more processors (e.g., computer hardware processors). To perform one or more functions, the processing circuitry1501may execute one or more processor-executable instructions stored in the memory circuitry1507. The memory circuitry1507may be used for storing programs and data during operation of the ultrasound system1500A. The memory circuitry1507may comprise one or more storage devices such as non-transitory computer-readable storage media. The processing circuitry1501may control writing data to and reading data from the memory circuitry1507in any suitable manner. In some embodiments, the processing circuitry1501may comprise specially-programmed and/or special-purpose hardware such as an application-specific integrated circuit (ASIC). For example, the processing circuitry1501may comprise one or more tensor processing units (TPUs).
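For illustration, the convolutional LSTM update rules of Equations (8)-(13) above can be sketched as follows. The channel counts, the 3×3 kernel, and the use of a single convolution producing all four gate pre-activations at once are implementation assumptions made for the sketch, not details taken from the embodiments.

```python
# Minimal PyTorch sketch of the CLSTM update rules in Equations (8)-(13) above.
# Channel counts and the 3x3 kernel are illustrative assumptions.
import torch
import torch.nn as nn

class CLSTMCell(nn.Module):
    def __init__(self, in_channels: int, hidden_channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # One convolution produces the pre-activations of all four gates at once.
        self.conv = nn.Conv2d(in_channels + hidden_channels,
                              4 * hidden_channels, kernel_size, padding=pad)

    def forward(self, x_t, h_prev, c_prev):
        gates = self.conv(torch.cat([h_prev, x_t], dim=1))
        f, i, o, g = torch.chunk(gates, 4, dim=1)
        f = torch.sigmoid(f)            # forget gate, Eq. (8)
        i = torch.sigmoid(i)            # input gate, Eq. (9)
        o = torch.sigmoid(o)            # output gate, Eq. (10)
        g = torch.tanh(g)               # state candidate, Eq. (11)
        c_t = f * c_prev + i * g        # memory update, Eq. (12) (elementwise/Hadamard)
        h_t = o * torch.tanh(c_t)       # hidden state, Eq. (13)
        return h_t, c_t

# Feature maps from a convolutional backbone would be fed in frame by frame:
cell = CLSTMCell(in_channels=32, hidden_channels=16)
x = torch.randn(1, 32, 64, 64)
h = torch.zeros(1, 16, 64, 64)
c = torch.zeros(1, 16, 64, 64)
h, c = cell(x, h, c)
```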
TPUs may be ASICs specifically designed for machine learning (e.g., deep learning). The TPUs may be employed to, for example, accelerate the inference phase of a neural network. The input/output (I/O) devices1503may be configured to facilitate communication with other systems and/or an operator. Example I/O devices that may facilitate communication with an operator include: a keyboard, a mouse, a trackball, a microphone, a touch screen, a printing device, a display screen, a speaker, and a vibration device. Example I/O devices that may facilitate communication with other systems include wired and/or wireless communication circuitry such as BLUETOOTH, ZIGBEE, WiFi, and/or USB communication circuitry. It should be appreciated that the ultrasound system1500A may be implemented using any number of devices. For example, the components of the ultrasound system1500A may be integrated into a single device. In another example, the ultrasound circuitry1505may be integrated into an ultrasound device that is communicatively coupled with a computing device that comprises the processing circuitry1501, the input/output devices1503, and the memory circuitry1507. FIG.15Bis a schematic block diagram illustrating aspects of another example ultrasound system1500B upon which various aspects of the technology described herein may be practiced. For example, one or more components of the ultrasound system1500B may perform any of the processes described herein. As shown, the ultrasound system1500B comprises an ultrasound device1514in wired and/or wireless communication with a computing device1502. The computing device1502comprises an audio output device1504, an imaging device1506, a display screen1508, a processor1510, a memory1512, and a vibration device1509. The computing device1502may communicate with one or more external devices over a network1516. For example, the computing device1502may communicate with one or more workstations1520, servers1518, and/or databases1522. The ultrasound device1514may be configured to generate ultrasound data that may be employed to generate an ultrasound image. The ultrasound device1514may be constructed in any of a variety of ways. In some embodiments, the ultrasound device1514includes a transmitter that transmits a signal to a transmit beamformer which in turn drives transducer elements within a transducer array to emit pulsed ultrasonic signals into a structure, such as a patient. The pulsed ultrasonic signals may be back-scattered from structures in the body, such as blood cells or muscular tissue, to produce echoes that return to the transducer elements. These echoes may then be converted into electrical signals, or ultrasound data, by the transducer elements and the electrical signals are received by a receiver. The electrical signals representing the received echoes are sent to a receive beamformer that outputs ultrasound data. The computing device1502may be configured to process the ultrasound data from the ultrasound device1514to generate ultrasound images for display on the display screen1508. The processing may be performed by, for example, the processor1510. The processor1510may also be adapted to control the acquisition of ultrasound data with the ultrasound device1514. The ultrasound data may be processed in real-time during a scanning session as the echo signals are received. In some embodiments, the displayed ultrasound image may be updated a rate of at least 5 Hz, at least 10 Hz, at least 20 Hz, at a rate between 5 and 60 Hz, at a rate of more than 20 Hz. 
For example, ultrasound data may be acquired even as images are being generated based on previously acquired data and while a live ultrasound image is being displayed. As additional ultrasound data is acquired, additional frames or images generated from more-recently acquired ultrasound data are sequentially displayed. Additionally, or alternatively, the ultrasound data may be stored temporarily in a buffer during a scanning session and processed in less than real-time. Additionally (or alternatively), the computing device1502may be configured to perform any of the processes described herein (e.g., using the processor1510) and/or display any of the user interfaces described herein (e.g., using the display screen1508). For example, the computing device1502may be configured to provide instructions to an operator of the ultrasound device1514to assist the operator select a target anatomical view of a subject to image and to guide the operator capture an ultrasound image containing the target anatomical view. As shown, the computing device1502may comprise one or more elements that may be used during the performance of such processes. For example, the computing device1502may comprise one or more processors1510(e.g., computer hardware processors) and one or more articles of manufacture that comprise non-transitory computer-readable storage media such as the memory1512. The processor1510may control writing data to and reading data from the memory1512in any suitable manner. To perform any of the functionality described herein, the processor1510may execute one or more processor-executable instructions stored in one or more non-transitory computer-readable storage media (e.g., the memory1512), which may serve as non-transitory computer-readable storage media storing processor-executable instructions for execution by the processor1510. In some embodiments, the computing device1502may comprise one or more input and/or output devices such as the audio output device1504, the imaging device1506, the display screen1508, and the vibration device1509. The audio output device1504may be a device that is configured to emit audible sound such as a speaker. The imaging device1506may be configured to detect light (e.g., visible light) to form an image such as a camera. The display screen1508may be configured to display images and/or videos such as a liquid crystal display (LCD), a plasma display, and/or an organic light emitting diode (OLED) display. The vibration device1509may be configured to vibrate one or more components of the computing device1502to provide tactile feedback. These input and/or output devices may be communicatively coupled to the processor1510and/or under the control of the processor1510. The processor1510may control these devices in accordance with a process being executed by the process1510(such as any of the processes shown inFIGS.9-13). For example, the processor1510may control the display screen1508to display any of the above described user interfaces, instructions, and/or ultrasound images. Similarly, the processor1510may control the audio output device1504to issue audible instructions and/or control the vibration device1509to change an intensity of tactile feedback (e.g., vibration) to issue tactile instructions. 
Additionally (or alternatively), the processor1510may control the imaging device1506to capture non-acoustic images of the ultrasound device1514being used on a subject to provide an operator of the ultrasound device1514an augmented reality interface (e.g., as shown inFIGS.5B and6). It should be appreciated that the computing device1502may be implemented in any of a variety of ways. For example, the computing device1502may be implemented as a handheld device such as a mobile smartphone or a tablet. Thereby, an operator of the ultrasound device1514may be able to operate the ultrasound device1514with one hand and hold the computing device1502with another hand. In other examples, the computing device1502may be implemented as a portable device that is not a handheld device such as a laptop. In yet other examples, the computing device1502may be implemented as a stationary device such as a desktop computer. In some embodiments, the computing device1502may communicate with one or more external devices via the network1516. The computing device1502may be connected to the network1516over a wired connection (e.g., via an Ethernet cable) and/or a wireless connection (e.g., over a WiFi network). As shown inFIG.15B, these external devices may include servers1518, workstations1520, and/or databases1522. The computing device1502may communicate with these devices to, for example, off-load computationally intensive tasks. For example, the computing device1502may send an ultrasound image over the network1516to the server1518for analysis (e.g., to identify an anatomical feature in the ultrasound image and/or identify an instruction to provide the operator) and receive the results of the analysis from the server1518. Additionally (or alternatively), the computing device1502may communicate with these devices to access information that is not available locally and/or update a central information repository. For example, the computing device1502may access the medical records of a subject being imaged with the ultrasound device1514from a file stored in the database1522. In this example, the computing device1502may also provide one or more captured ultrasound images of the subject to the database1522to add to the medical record of the subject. The terms “program,” “application,” or “software” are used herein in a generic sense to refer to any type of computer code or set of processor-executable instructions that may be employed to program a computer or other processor to implement various aspects of embodiments as discussed above. Additionally, according to one aspect, one or more computer programs that when executed perform methods of the disclosure provided herein need not reside on a single computer or processor, but may be distributed in a modular fashion among different computers or processors to implement various aspects of the disclosure provided herein. Processor-executable instructions may be in many forms, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed. Also, data structures may be stored in one or more non-transitory computer-readable storage media in any suitable form. For simplicity of illustration, data structures may be shown to have fields that are related through location in the data structure. 
Such relationships may likewise be achieved by assigning storage for the fields with locations in a non-transitory computer-readable medium that convey relationship between the fields. However, any suitable mechanism may be used to establish relationships among information in fields of a data structure, including through the use of pointers, tags or other mechanisms that establish relationships among data elements. Example Ultrasound Devices FIG.16shows an illustrative example of a monolithic ultrasound device1600that may be employed as any of the ultrasound devices described above such as ultrasound devices102,502,602, and1514or any of the ultrasound circuitry described herein such as ultrasound circuitry1505. As shown, the ultrasound device1600may include one or more transducer arrangements (e.g., arrays)1602, transmit (TX) circuitry1604, receive (RX) circuitry1606, a timing and control circuit1608, a signal conditioning/processing circuit1610, a power management circuit1618, and/or a high-intensity focused ultrasound (HIFU) controller1620. In the embodiment shown, all of the illustrated elements are formed on a single semiconductor die1612. It should be appreciated, however, that in alternative embodiments one or more of the illustrated elements may be instead located off-chip. In addition, although the illustrated example shows both TX circuitry1604and RX circuitry1606, in alternative embodiments only TX circuitry or only RX circuitry may be employed. For example, such embodiments may be employed in a circumstance where one or more transmission-only devices1600are used to transmit acoustic signals and one or more reception-only devices1600are used to receive acoustic signals that have been transmitted through or reflected off of a subject being ultrasonically imaged. It should be appreciated that communication between one or more of the illustrated components may be performed in any of numerous ways. In some embodiments, for example, one or more high-speed busses (not shown), such as that employed by a unified Northbridge, may be used to allow high-speed intra-chip communication or communication with one or more off-chip components. The one or more transducer arrays1602may take on any of numerous forms, and aspects of the present technology do not necessarily require the use of any particular type or arrangement of transducer cells or transducer elements. Indeed, although the term “array” is used in this description, it should be appreciated that in some embodiments the transducer elements may not be organized in an array and may instead be arranged in some non-array fashion. In various embodiments, each of the transducer elements in the array1602may, for example, include one or more capacitive micromachined ultrasonic transducers (CMUTs), one or more CMOS ultrasonic transducers (CUTs), one or more piezoelectric micromachined ultrasonic transducers (PMUTs), and/or one or more other suitable ultrasonic transducer cells. In some embodiments, the transducer elements of the transducer array102may be formed on the same chip as the electronics of the TX circuitry1604and/or RX circuitry1606. The transducer elements1602, TX circuitry1604, and RX circuitry1606may, in some embodiments, be integrated in a single ultrasound device. In some embodiments, the single ultrasound device may be a handheld device. In other embodiments, the single ultrasound device may be embodied in a patch that may be coupled to a patient. 
The patch may be configured to transmit, wirelessly, data collected by the patch to one or more external devices for further processing. A CUT may, for example, include a cavity formed in a CMOS wafer, with a membrane overlying the cavity, and in some embodiments sealing the cavity. Electrodes may be provided to create a transducer cell from the covered cavity structure. The CMOS wafer may include integrated circuitry to which the transducer cell may be connected. The transducer cell and CMOS wafer may be monolithically integrated, thus forming an integrated ultrasonic transducer cell and integrated circuit on a single substrate (the CMOS wafer). The TX circuitry1604(if included) may, for example, generate pulses that drive the individual elements of, or one or more groups of elements within, the transducer array(s)1602so as to generate acoustic signals to be used for imaging. The RX circuitry1606, on the other hand, may receive and process electronic signals generated by the individual elements of the transducer array(s)102when acoustic signals impinge upon such elements. In some embodiments, the timing and control circuit1608may, for example, be responsible for generating all timing and control signals that are used to synchronize and coordinate the operation of the other elements in the device1600. In the example shown, the timing and control circuit1608is driven by a single clock signal CLK supplied to an input port1616. The clock signal CLK may, for example, be a high-frequency clock used to drive one or more of the on-chip circuit components. In some embodiments, the clock signal CLK may, for example, be a 1.5625 GHz or 2.5 GHz clock used to drive a high-speed serial output device (not shown inFIG.16) in the signal conditioning/processing circuit1610, or a 20 Mhz or 40 MHz clock used to drive other digital components on the semiconductor die1612, and the timing and control circuit1608may divide or multiply the clock CLK, as necessary, to drive other components on the die1612. In other embodiments, two or more clocks of different frequencies (such as those referenced above) may be separately supplied to the timing and control circuit1608from an off-chip source. The power management circuit1618may, for example, be responsible for converting one or more input voltages VIN from an off-chip source into voltages needed to carry out operation of the chip, and for otherwise managing power consumption within the device1600. In some embodiments, for example, a single voltage (e.g., 12V, 80V, 100V, 120V, etc.) may be supplied to the chip and the power management circuit1618may step that voltage up or down, as necessary, using a charge pump circuit or via some other DC-to-DC voltage conversion mechanism. In other embodiments, multiple different voltages may be supplied separately to the power management circuit1618for processing and/or distribution to the other on-chip components. As shown inFIG.16, in some embodiments, a HIFU controller1620may be integrated on the semiconductor die1612so as to enable the generation of HIFU signals via one or more elements of the transducer array(s)1602. In other embodiments, a HIFU controller for driving the transducer array(s)1602may be located off-chip, or even within a device separate from the device1600. That is, aspects of the present disclosure relate to provision of ultrasound-on-a-chip HIFU systems, with and without ultrasound imaging capability. 
It should be appreciated, however, that some embodiments may not have any HIFU capabilities and thus may not include a HIFU controller1620. Moreover, it should be appreciated that the HIFU controller1620may not represent distinct circuitry in those embodiments providing HIFU functionality. For example, in some embodiments, the remaining circuitry ofFIG.16(other than the HIFU controller1620) may be suitable to provide ultrasound imaging functionality and/or HIFU, i.e., in some embodiments the same shared circuitry may be operated as an imaging system and/or for HIFU. Whether or not imaging or HIFU functionality is exhibited may depend on the power provided to the system. HIFU typically operates at higher powers than ultrasound imaging. Thus, providing the system a first power level (or voltage level) appropriate for imaging applications may cause the system to operate as an imaging system, whereas providing a higher power level (or voltage level) may cause the system to operate for HIFU. Such power management may be provided by off-chip control circuitry in some embodiments. In addition to using different power levels, imaging and HIFU applications may utilize different waveforms. Thus, waveform generation circuitry may be used to provide suitable waveforms for operating the system as either an imaging system or a HIFU system. In some embodiments, the system may operate as both an imaging system and a HIFU system (e.g., capable of providing image-guided HIFU). In some such embodiments, the same on-chip circuitry may be utilized to provide both functions, with suitable timing sequences used to control the operation between the two modalities. In the example shown, one or more output ports1614may output a high-speed serial data stream generated by one or more components of the signal conditioning/processing circuit1610. Such data streams may, for example, be generated by one or more USB 3.0 modules, and/or one or more 10 GB, 40 GB, or 100 GB Ethernet modules, integrated on the semiconductor die1612. In some embodiments, the signal stream produced on output port1614can be fed to a computer, tablet, or smartphone for the generation and/or display of 2-dimensional, 3-dimensional, and/or tomographic images. In embodiments in which image formation capabilities are incorporated in the signal conditioning/processing circuit1610, even relatively low-power devices, such as smartphones or tablets which have only a limited amount of processing power and memory available for application execution, can display images using only a serial data stream from the output port1614. As noted above, the use of on-chip analog-to-digital conversion and a high-speed serial data link to offload a digital data stream is one of the features that helps facilitate an “ultrasound on a chip” solution according to some embodiments of the technology described herein. Devices1600such as that shown inFIG.16may be used in any of a number of imaging and/or treatment (e.g., HIFU) applications, and the particular examples discussed herein should not be viewed as limiting. 
In one illustrative implementation, for example, an imaging device including an N×M planar or substantially planar array of CMUT elements may itself be used to acquire an ultrasonic image of a subject, e.g., a person's abdomen, by energizing some or all of the elements in the array(s)1602(either together or individually) during one or more transmit phases, and receiving and processing signals generated by some or all of the elements in the array(s)1602during one or more receive phases, such that during each receive phase the CMUT elements sense acoustic signals reflected by the subject. In other implementations, some of the elements in the array(s)1602may be used only to transmit acoustic signals and other elements in the same array(s)1602may be simultaneously used only to receive acoustic signals. Moreover, in some implementations, a single imaging device may include a P×Q array of individual devices, or a P×Q array of individual N×M planar arrays of CMUT elements, which components can be operated in parallel, sequentially, or according to some other timing scheme so as to allow data to be accumulated from a larger number of CMUT elements than can be embodied in a single device1600or on a single die1612. In yet other implementations, a pair of imaging devices can be positioned so as to straddle a subject, such that one or more CMUT elements in the device(s)1600of the imaging device on one side of the subject can sense acoustic signals generated by one or more CMUT elements in the device(s)1600of the imaging device on the other side of the subject, to the extent that such pulses were not substantially attenuated by the subject. Moreover, in some implementations, the same device1600can be used to measure both the scattering of acoustic signals from one or more of its own CMUT elements as well as the transmission of acoustic signals from one or more of the CMUT elements disposed in an imaging device on the opposite side of the subject. FIG.17is a block diagram illustrating how, in some embodiments, the TX circuitry1604and the RX circuitry1606for a given transducer element1702may be used either to energize the transducer element1702to emit an ultrasonic pulse, or to receive and process a signal from the transducer element1702representing an ultrasonic pulse sensed by it. In some implementations, the TX circuitry1604may be used during a “transmission” phase, and the RX circuitry may be used during a “reception” phase that is non-overlapping with the transmission phase. In other implementations, one of the TX circuitry1604and the RX circuitry1606may simply not be used in a given device1600, such as when a pair of ultrasound units is used for only transmissive imaging. As noted above, in some embodiments, an ultrasound device1600may alternatively employ only TX circuitry1604or only RX circuitry1606, and aspects of the present technology do not necessarily require the presence of both such types of circuitry. In various embodiments, TX circuitry1604and/or RX circuitry1606may include a TX circuit and/or an RX circuit associated with a single transducer cell (e.g., a CUT or CMUT), a group of two or more transducer cells within a single transducer element1702, a single transducer element1702comprising a group of transducer cells, a group of two or more transducer elements1702within an array1602, or an entire array1602of transducer elements1702. 
In the example shown inFIG.17, the TX circuitry1604/RX circuitry1606includes a separate TX circuit and a separate RX circuit for each transducer element1702in the array(s)1602, but there is only one instance of each of the timing & control circuit1608and the signal conditioning/processing circuit1610. Accordingly, in such an implementation, the timing & control circuit1608may be responsible for synchronizing and coordinating the operation of all of the TX circuitry1604/RX circuitry1606combinations on the die1612, and the signal conditioning/processing circuit1610may be responsible for handling inputs from all of the RX circuitry1606on the die1612. In other embodiments, timing and control circuit1608may be replicated for each transducer element1702or for a group of transducer elements1702. As shown inFIG.17, in addition to generating and/or distributing clock signals to drive the various digital components in the device1600, the timing & control circuit1608may output either an “TX enable” signal to enable the operation of each TX circuit of the TX circuitry1604, or an “RX enable” signal to enable operation of each RX circuit of the RX circuitry1606. In the example shown, a switch1716in the RX circuitry1606may always be opened before the TX circuitry1604is enabled, so as to prevent an output of the TX circuitry1604from driving the RX circuitry1606. The switch1716may be closed when operation of the RX circuitry1606is enabled, so as to allow the RX circuitry1606to receive and process a signal generated by the transducer element1702. As shown, the TX circuitry1604for a respective transducer element1702may include both a waveform generator1714and a pulser1712. The waveform generator1714may, for example, be responsible for generating a waveform that is to be applied to the pulser1712, so as to cause the pulser1712to output a driving signal to the transducer element1702corresponding to the generated waveform. In the example shown inFIG.17, the RX circuitry1606for a respective transducer element1702includes an analog processing block1718, an analog-to-digital converter (ADC)1720, and a digital processing block1722. The ADC1720may, for example, comprise a 10-bit or 12-bit, 20 Msps, 25 Msps, 40 Msps, 50 Msps, or 80 Msps ADC. After undergoing processing in the digital processing block1722, the outputs of all of the RX circuits on the semiconductor die1612(the number of which, in this example, is equal to the number of transducer elements1702on the chip) are fed to a multiplexer (MUX)1724in the signal conditioning/processing circuit1610. In other embodiments, the number of transducer elements is larger than the number of RX circuits, and several transducer elements provide signals to a single RX circuit. The MUX1724multiplexes the digital data from the RX circuits, and the output of the MUX1724is fed to a multiplexed digital processing block1726in the signal conditioning/processing circuit1610, for final processing before the data is output from the semiconductor die1612, e.g., via one or more high-speed serial output ports1614. The MUX1724is optional, and in some embodiments parallel signal processing is performed. A high-speed serial data port may be provided at any interface between or within blocks, any interface between chips and/or any interface to a host. Various components in the analog processing block1718and/or the digital processing block1722may reduce the amount of data that needs to be output from the semiconductor die1612via a high-speed serial data link or otherwise. 
In some embodiments, for example, one or more components in the analog processing block1718and/or the digital processing block1722may thus serve to allow the RX circuitry1606to receive transmitted and/or scattered ultrasound pressure waves with an improved signal-to-noise ratio (SNR) and in a manner compatible with a diversity of waveforms. The inclusion of such elements may thus further facilitate and/or enhance the disclosed “ultrasound-on-a-chip” solution in some embodiments. The ultrasound devices described herein may be implemented in any of a variety of physical configurations including as part of a handheld device (which may include a screen to display obtained images) or as part of a patch configured to be affixed to the subject. In some embodiments, an ultrasound device may be embodied in a handheld device1802illustrated inFIGS.18A and18B. Handheld device1802may be held against (or near) a subject1800and used to image the subject. Handheld device1802may comprise an ultrasound device and display1804, which in some embodiments, may be a touchscreen. Display1804may be configured to display images of the subject (e.g., ultrasound images) generated within handheld device1802using ultrasound data gathered by the ultrasound device within device1802. In some embodiments, handheld device1802may be used in a manner analogous to a stethoscope. A medical professional may place handheld device1802at various positions along a patient's body. The ultrasound device within handheld device1802may image the patient. The data obtained by the ultrasound device may be processed and used to generate image(s) of the patient, which image(s) may be displayed to the medical professional via display1804. As such, a medical professional could carry the handheld device1802(e.g., around their neck or in their pocket) rather than carrying around multiple conventional devices, which is burdensome and impractical. In some embodiments, an ultrasound device may be embodied in a patch that may be coupled to a patient. For example,FIGS.18C and18Dillustrate a patch1810coupled to patient1812. The patch1810may be configured to transmit, wirelessly, data collected by the patch1810to one or more external devices for further processing.FIG.18Eshows an exploded view of patch1810. In some embodiments, an ultrasound device may be embodied in handheld device1820shown inFIG.18F. Handheld device1820may be configured to transmit data collected by the device1820wirelessly to one or more external device for further processing. In other embodiments, handheld device1820may be configured transmit data collected by the device1820to one or more external devices using one or more wired connections, as aspects of the technology described herein are not limited in this respect. Various aspects of the present disclosure may be used alone, in combination, or in a variety of arrangements not specifically discussed in the embodiments described in the foregoing and is therefore not limited in its application to the details and arrangement of components set forth in the foregoing description or illustrated in the drawings. For example, aspects described in one embodiment may be combined in any manner with aspects described in other embodiments. 
Further, some actions are described as taken by an “operator” or “subject.” It should be appreciated that an “operator” or “subject” need not be a single individual, and that in some embodiments, actions attributable to an “operator” or “subject” may be performed by a team of individuals and/or an individual in combination with computer-assisted tools or other mechanisms. Further, it should be appreciated that, in some instances, a “subject” may be the same person as the “operator.” For example, an individual may be imaging themselves with an ultrasound device and, thereby, act as both the “subject” being imaged and the “operator” of the ultrasound device.

Use of ordinal terms such as “first,” “second,” “third,” etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed; such terms are used merely as labels to distinguish one claim element having a certain name from another element having the same name (but for use of the ordinal term). The terms “approximately” and “about” may be used to mean within ±20% of a target value in some embodiments, within ±10% of a target value in some embodiments, within ±5% of a target value in some embodiments, and yet within ±2% of a target value in some embodiments. The terms “approximately” and “about” may include the target value. Also, the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” or “having,” “containing,” “involving,” and variations thereof herein, is meant to encompass the items listed thereafter and equivalents thereof as well as additional items.

Having described above several aspects of at least one embodiment, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements are intended to be part of this disclosure. Accordingly, the foregoing description and drawings are by way of example only.
11861888 | DETAILED DESCRIPTION The present invention will now be described more fully with reference to the accompanying drawings, in which several embodiments of the invention are shown. This invention may, however, be embodied in various forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. It will be appreciated that the present disclosure may be embodied as methods, systems, or computer program products. Accordingly, the present inventive concepts disclosed herein may take the form of a hardware embodiment, a software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present inventive concepts disclosed herein may take the form of a computer program product on a non-transitory computer-readable storage medium having computer-usable program code embodied in the medium. Any suitable non-transitory computer readable medium may be utilized including hard disks, CD-ROMs, optical storage devices, flash memories, or magnetic storage devices. Computer program code or software programs that are operated upon or for carrying out operations according to the teachings of the invention may be written in a high level programming language such as C, C++, JAVA®, Smalltalk, JavaScript®, Visual Basic®, TSQL, Perl, use of .NET™ Framework, Visual Studio® or in various other programming languages. Software programs may also be written directly in a native assembler language for a target processor. A native assembler program uses instruction mnemonic representations of machine level binary instructions. Program code or computer readable medium as used herein refers to code whose format is understandable by a processor. Software embodiments of the disclosure do not depend upon their implementation with a particular programming language. The methods described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. A non-transitory computer-readable storage medium may be coupled to the processor through local connections such that the processor can read information from, and write information to, the storage medium or through network connections such that the processor can download information from or upload information to the storage medium. In the alternative, the storage medium may be integral to the processor. FIG.1Aillustrates a media recognition system100that includes image and video content recognition, fingerprinting, logo recognition, and searching operations in accordance with an embodiment of the present invention. The media recognition system100includes user sites102and103, a server106, a video database108, a remote user device114, such as a smartphone, with a wireless connection to the server106, a media presentation device116, such as a television (TV), laptop, tablet, smartphone, or the like, and an exemplary image and video content identification and fingerprinting process112operated, for example, by user site102. 
The video identification process includes image content recognition, such as logo recognition, by using various techniques as described herein including optical character recognition (OCR) and use of neural network classifiers. The remote user device114is representative of a plurality of remote user devices which may operate as described in accordance with embodiments of the present invention. The media presentation device116connects to a content provider117, such as provided by a cable delivery service, a satellite service, a digital video device (DVD) player, or the like. The media presentation device116may also connect to the network104for Internet and intranet access, by use of a cable118, for example, wireless or network connection. A network104, such as the Internet, a wireless network, or a private network, connects sites102and103, media presentation device116, and server106. Each of the user sites,102and103, remote user device114, media presentation device116, and server106may include a processor complex having one or more processors, having internal program storage and local user controls such as a monitor, a keyboard, a mouse, a printer, and may include other input or output devices, such as an external file storage device and communication interfaces. The user site102may comprise, for example, a personal computer, a laptop computer, a tablet computer, or the like equipped with programs and interfaces to support data input and output and video content identification, fingerprinting and search monitoring that may be implemented both automatically and manually. The user site102and the remote user device114, for example, may store programs, such as the image and video content identification and fingerprinting process112, which is an implementation of a content-based video identification process of the present invention. The user site102and the remote user device114also have access to such programs through electronic media, such as may be downloaded over the Internet from an external server, accessed through a universal serial bus (USB) port from flash memory, accessed from disk media of various types, or the like. The media recognition system100may also suitably include more servers and user sites than shown inFIG.1A. Also, multiple user sites each operating an instantiated copy or version of the image and video content identification and fingerprinting process112may be connected directly to the server106while other user sites may be indirectly connected to it over the network104. User sites102and103and remote user device114may generate user video content which is uploaded over the Internet104to a server106for storage in the video database108. The user sites102and103and remote user device114, for example, may also operate the image and video content identification and fingerprinting process112to generate fingerprints and search for video content in the video database108. The image and video content identification and fingerprinting process112inFIG.1Ais scalable and utilizes highly accurate video fingerprinting and identification technology as described in more detail below. The process112is operable to check unknown video content against a database of previously fingerprinted video content, which is considered an accurate or “golden” database. The image and video content identification and fingerprinting process112is different in a number of aspects from commonly deployed processes. For example, the process112extracts features from the video itself rather than modifying the video. 
The image and video content identification and fingerprinting process112allows the server106to configure a “golden” database specific to business requirements. For example, general multimedia content may be filtered according to a set of guidelines for acceptable multimedia content that may be stored on the media recognition system100configured as a business system. The user site102, that is configured to connect with the network104, uses the image and video content identification and fingerprinting process112to compare local video streams against a previously generated database of signatures in the video database108. The terms fingerprints and signatures may be used interchangeably herein. The video database108may store video archives, as well as data related to video content stored in the video database108. The video database108also may store a plurality of video fingerprints that have been adapted for use as described herein and in accordance with the present invention. It is noted that depending on the size of an installation, the functions of the image and video content identification and fingerprinting process112and the management of the video database108may be combined in a single processor system, such as user site102or server106, and may operate as directed by separate program threads for each function. The media recognition system100for both image and video content recognition and media fingerprinting is readily scalable to very large multimedia databases, has high accuracy in finding a correct clip, has a low probability of misidentifying a wrong clip, and is robust to many types of distortion as addressed further herein. The media recognition system100uses one or more fingerprints for a unit of multimedia content that are composed of a number of compact signatures, including cluster keys and associated metadata. The compact signatures and cluster keys are constructed to be easily searchable when scaling to a large database of multimedia fingerprints. The multimedia content is also represented by many signatures that relate to various aspects of the multimedia content that are relatively independent from each other. Such an approach allows the system to be robust to distortion of the multimedia content even when only small portions of the multimedia content are available. This process is described in U.S. Pat. No. 8,189,945 issued May 29, 2012 entitled “Digital Video Content Fingerprinting Based on Scale Invariant Interest Region Detection with an Array of Anisotropic Filters” which is assigned to the assignee of the present application and incorporated by reference herein in its entirety. Multimedia, specifically audio and video content, may undergo several different types of distortions. For instance, audio distortions may include re-encoding to different sample rates, rerecording to a different audio quality, introduction of noise and filtering of specific audio frequencies or the like. Sensing audio from the ambient environment allows interference from other sources such as people's voices, playback devices, and ambient noise and sources to be received. Video distortions may include cropping, stretching, re-encoding to a lower quality, using image overlays, or the like. While these distortions change the digital representation, the multimedia is perceptually similar to undistorted content to a human listener or viewer. 
Robustness to these distortions refers to a property that content that is perceptually similar will generate fingerprints that have a small distance according to some distance metric, such as Hamming distance for bit based signatures. Also, content that is perceptually distinct from other content will generate fingerprints that have a large distance, according to the same distance metric. A search for perceptually similar content, hence, is transformed to a problem of searching for fingerprints that are a small distance away from the desired fingerprints. One aspect of the invention presents a method to identify brands and logos of content on screen by capturing the audio and video data from the mobile device, from web sites, streaming video, social media, broadcast television, over-the-top (OTT) video and then using the techniques described herein to identify the brands. For example, if a user is playing a movie accessed from a streaming media provider, a logo for the streaming media provider is presented on a display at various times while the movie is playing. By identifying the logo, the streaming media provider may be recognized and with ancillary information, such as time and location where the movie is playing, the movie may also be recognized. In a similar manner, if the user is playing a game on a TV console or other media presentation device, for example, a logo of the game, a logo of a developer of the game, game title and content, and other images may be recognized, such as game characters, utilizing embodiments of the invention described herein. Additionally, recognizing logos for network channels, TV commercials, live broadcasts, over the top (OTT) providers, and the like, may play a role in identifying media content being played or advertised. Another example of a method for logo recognition uses saliency analysis, segmentation techniques, and stroke analysis to segment likely logo regions. The saliency of an item is a state or quality by which the item stands out relative to its neighbors. Saliency detection relies on the fact that logos have significant information content compared to the background the logo is placed against. Multi-scale comparison is performed to remove less interesting regions around a suspected logo, such as text strings within a sea of text, and objects in large sets of objects, such as faces or a small number of faces in a sea of faces. To achieve high robustness and accuracy of detection, the methods described herein are used to recognize a logo in images and videos and further verify a likely candidate logo with feature matching and neural net classification. The methods for logo recognition include feature matching, neural network classification and optical character recognition. One aspect of the invention presents a method for optical character recognition (OCR) with a character based segmentation and multi character classifiers. Another method uses stroke analysis and heuristics to select one or more of text classifiers for recognition. An alternate method for OCR performs segment level character recognition with one or more of selected text classifiers and N-gram matching. Another aspect of the invention presents a method to train classifiers to identify new logos and objects with synthetically generated images. Another aspect utilizes transfer learning features of neural networks. Both these methods enable fast addition of new logos into a recognition system, and can provide further refinement with more data and feedback to improve accuracy. 
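By way of a non-limiting illustration, the following Python sketch shows how a search over bit-packed fingerprints using the Hamming distance described above might look. The 64-bit signature size, the distance threshold, and the clip identifiers are illustrative assumptions rather than the actual fingerprint format of the media recognition system100.

    # Minimal sketch: matching a query fingerprint against reference bit signatures
    # by Hamming distance. The signature size and threshold are illustrative.
    def hamming_distance(a: int, b: int) -> int:
        """Number of differing bits between two integer-packed signatures."""
        return bin(a ^ b).count("1")

    def search_similar(query_sig: int, reference_sigs: dict, max_distance: int = 8):
        """Return (content_id, distance) pairs whose signatures lie within
        max_distance bits of the query, i.e. perceptually similar content."""
        matches = []
        for content_id, sig in reference_sigs.items():
            d = hamming_distance(query_sig, sig)
            if d <= max_distance:
                matches.append((content_id, d))
        return sorted(matches, key=lambda m: m[1])

    # Usage: a lightly distorted copy differs in a few bits and is still found.
    refs = {"clip_A": 0x9F3C5A7712345678, "clip_B": 0x0102030405060708}
    query = 0x9F3C5A7712345679          # one bit away from clip_A
    print(search_similar(query, refs))   # [('clip_A', 1)]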
Another aspect of the invention improves and extends the methods for feature based signature generation. One method combines neighboring detected keypoints with an affine Gaussian Hessian based detector to generate a larger feature keypoint. For any object in an image, interesting points on the object can be extracted to provide a “feature description” of the object. This feature description, when extracted from a training image, can then be used to identify the object image when attempting to locate the object in a test image containing many other objects. To perform reliable recognition, the features extracted from the training image should be detectable even under changes in image scale, noise and illumination. Such feature points usually lie on high-contrast regions of the image, such as object edges. These interesting points in an image are termed “keypoints”. The detection and description of local image features can help in object recognition. The local image features are detected based on the appearance of the object at particular interest points, and are generally invariant to image scale and rotation. The local image features are also generally robust to changes in illumination, noise, and minor changes in viewpoint. In addition to these properties, the local image features are usually highly distinctive, relatively easy to extract and allow for correct object identification with low probability of mismatch. Recognition can be performed in close-to-real time, at least for small databases and on modern computer hardware. Another method describes the lines in an object or a possible logo in the context of its neighbor lines to better represent line based logos and objects. The method then generates signatures for the line based logo. The signatures generated with extended feature methods are used to detect logos by an indexed search and correlation system, such as using a compact hash signature, also referred to as a traversal index, generated from an original descriptor, as an address to the associated content. In another embodiment of the invention, methods are presented to verify and iterate over possible logo matches. Since a detected likely logo may be partially matched or matched incorrectly, more specialized methods as described herein are applied in an iterative manner to provide additional verification of a match. For example, a likely logo match is detected and then verified with a logo specific neural network classifier and a feature based matcher. An initial neural network classifier that was trained with a plurality of logo images, such as a thousand or more logos, may generate a likely logo match with low confidence in its accuracy. By retraining the neural network with an expected brand logo, the likely logo match is verified more accurately to be a positive match or determined to be a false match. In an alternative embodiment, the accuracy of detecting a logo based on partial matches at points in a region of an image frame may be improved by expanding the region, or merging the region with other close-by regions and then reprocessing the expanded keypoint region to increase the confidence in the match.
In another embodiment of the invention, methods are presented to configure a convolution neural network to input multi-scale and different image representations, optimize the neural network parameters, utilize rectified linear unit (RELU) neuron outputs, use of a dropout regularization method, and use of a combination of max and average pooling at different stages of the neural network. In another embodiment of the invention, methods are presented to segment word characters using analysis of contours of connected components, stroke analysis and stroke heuristics. In another embodiment of the invention, methods are presented for retail display management. This includes receiving a video stream or sequence of images which are processed to generate video frames from which images are tracked on selected video frames. Identified images are segmented to identify and localize a selected product. For example, a logo may be identified in a selected video frame as a particular brand and a detected image may be recognized as particular product of that identified brand. Further, feature alignment and previous knowledge of product locations are used to create a three dimensional (3D) physical map of products including the selected product as displayed in the retail environment. With this 3D physical map of all the products, applications for retail management, product display management and a better shopping experience are enabled. Video frames are selected from a sequence of video frames for processing by many methods. For example, for a 30 second TV commercial, a predetermined number of video frames, such as five video frames, are selected. These five video frames are selected by combination of various methods, such as determining a scene change has occurred in a video frame, tracking of logo regions in the selected video frames, and selecting video frames for regions that are relatively larger, and of longer duration across multiple frames. In another embodiment of the invention, methods are presented for tracking selected logos that are displayed at an identified broadcasted event. This includes receiving a video stream and processing to track one or more images on selected video frames and to segment the one or more images to identify and localize a selected logo. Feature alignment and previous knowledge of logo locations are used to create a 3D physical map of logos displayed at the actual event or locale such as a stadium. With this 3D physical map of all the logos displayed, a dynamic accounting of dynamic displays is performed. For example, with an event broadcast from a specific location, mapping the logo display to a physical location is very useful. From the 3D physical map, a separate analysis is performed to evaluate the viewability of a product, a logo, or both to an audience or to foot and vehicle traffic. Since advertising value is a function of views and quality and size of logo and product display, this measurement is very valuable for trading of advertising display services. Dynamic measurement is also valuable as advertisers may prefer to optimize their spending to cost and targeting of a relevant demographic. FIG.1Billustrates a flowchart of a process150for image segmentation and processing for logos in accordance with an embodiment of the present invention. 
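As one hedged illustration of the frame selection discussed above, the Python sketch below scores scene changes with gray-level histogram differences and keeps a small budget of frames. The five-frame budget follows the example in the text; the histogram size and the scoring rule are illustrative, and the described system additionally weighs logo region size and duration across frames, which is omitted here.

    import numpy as np

    def frame_histogram(frame: np.ndarray, bins: int = 32) -> np.ndarray:
        """Normalized gray-level histogram of a 2-D frame."""
        h, _ = np.histogram(frame, bins=bins, range=(0, 255))
        return h / max(h.sum(), 1)

    def select_frames(frames, n_select: int = 5):
        """frames: sequence of 2-D grayscale arrays. Returns indices of the
        n_select frames with the largest histogram change from their predecessor,
        a simple proxy for scene changes."""
        hists = [frame_histogram(f) for f in frames]
        scores = [0.0] + [float(np.abs(hists[i] - hists[i - 1]).sum())
                          for i in range(1, len(hists))]
        ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
        return sorted(ranked[:n_select])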
The process150comprises steps to segment an image, to combine and select regions using saliency, including both spectral and spatial analysis, using stroke width transform (SWT) analysis capabilities, and using edge and contour processing techniques. The saliency of an item is a state or quality by which the item stands out relative to its neighbors in the same context. The process150also includes use of character segmentation, character recognition, and string matching. An image is input at step151and is segmented at step152with a preferred fast segmentation method. In one embodiment, the input image is a frame, or an image located within a frame, selected from a video captured by a camera, such as a camera on the remote user device114, such as a smartphone. The input image may also be selected from a video stored on a digital video disc (DVD), selected from a video accessed from broadcast streaming media, or selected from media downloaded from the Internet, or the like. The video when initially received may be in an encoded format, such as the moving picture experts group (MPEG) format. In another embodiment, the image is a single image that when initially received may be in an encoded format, such as the joint photographic experts group (JPEG) format. Encoded formats are decoded at step151. A presently preferred fast segmentation method is a graph based segmentation approach that sorts neighbor pixel vectors by their differences and identifies regions according to minimum-area and region-value (intensity or color) thresholds. Segments are combined if they are adjacent and similar to each other. Segments are classified as separate if they are not adjacent or if the segments are not similar to each other even if the segments are adjacent. For example, an image of a capital “T” has a horizontal segment and a vertical segment. The horizontal segment would be classified as not similar to the vertical segment even though the two segments are adjacent, they are not similar. Next, at step160, each of the segments is analyzed for different properties. For example, a first property includes text like properties determined using stroke width transform (SWT) analysis to generate stroke statistics and heuristics. A stroke-width transform analyzes an image to create a second image in which (a) background pixels are 0-valued and (b) every pixel in a foreground stroke or region has a value equal to an estimated minimum width of the region or stroke through that pixel. From this, a connected-component analysis is done, which labels each separate foreground stroke or region with a different integer label and can compute a minimum bounding box for said stroke or region. By using the shapes, positions, and sizes of the thus listed strokes/regions, and values computed from the pixels within each stroke/region area in the original and transform images, text-like strokes can be discriminated from other image features and groups of text-like strokes, candidates for letters and words, discriminated from isolated or spurious strokes that have no significance. A second property determined at step160includes segment statistics such as density and use of a color space. A color space is a means of uniquely specifying a color within a region. There are a number of color spaces in common usage depending on the particular industry and/or application involved. For example, humans normally determine color by parameters such as brightness, hue, and colorfulness. 
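A hedged, much-simplified Python sketch of the graph based segmentation idea described above follows: neighboring-pixel differences are sorted, regions are merged when the difference is below a region-value threshold, and very small regions are then absorbed to respect a minimum-area constraint. The fixed thresholds and the absorption rule are illustrative simplifications, not the presently preferred implementation.

    import numpy as np

    def segment(gray: np.ndarray, value_thresh: float = 10.0, min_area: int = 20):
        """Union-find over sorted neighbor-pixel differences on a 2-D gray image."""
        h, w = gray.shape
        parent = np.arange(h * w)

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path compression
                x = parent[x]
            return x

        def union(a, b):
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[rb] = ra

        # Edges between 4-connected neighbors, weighted by intensity difference.
        edges = []
        for y in range(h):
            for x in range(w):
                if x + 1 < w:
                    edges.append((abs(float(gray[y, x]) - float(gray[y, x + 1])),
                                  y * w + x, y * w + x + 1))
                if y + 1 < h:
                    edges.append((abs(float(gray[y, x]) - float(gray[y + 1, x])),
                                  y * w + x, (y + 1) * w + x))
        for wgt, a, b in sorted(edges):
            if wgt <= value_thresh:       # merge adjacent, similar pixels
                union(a, b)

        labels = np.array([find(i) for i in range(h * w)]).reshape(h, w)
        # Absorb regions below the minimum area into a neighboring region.
        ids, counts = np.unique(labels, return_counts=True)
        small = set(ids[counts < min_area])
        for y in range(h):
            for x in range(1, w):
                if labels[y, x] in small:
                    labels[y, x] = labels[y, x - 1]
        return labels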
On computers, it is more common to describe a color by three components, normally red, green, and blue. A third property includes spectral saliency determined by using a discrete cosine transform (DCT) of each local region. A fourth property includes spatial multi-scale saliency determined from the DCT results by calculating similarity for a hue, saturation, value (HSV) representation of an image histogram and for a gradient orientation histogram. At step161, segments are classified into segments that are logo-like or segments that are non-logo like. Segments, selected according to the properties determined at step160, are compared to ground truth segments. Ground truth segments include sets of logo-like and sets of non-logo like segments that are used to train the classifier at step161. Ground truth logo-like segments are segments that match segments in actual logos. At step162, classified segments that are identified as logo-like segments are further segmented into characters using, for example, contour analysis of connected components, stroke width transform analysis and stroke density analysis, including analysis of a number of horizontal and vertical strokes and number of loops, stroke transition analysis, and use of stroke heuristics to segment the image into characters. At step163, an iterative step is performed for connected and touching letters to segment the characters using stroke transition analysis and stroke heuristics. At step170, the segmented characters are recognized using one or more text classifiers with one or more optical character recognition (OCR) models. In the preferred embodiment, two classifiers are used to allow different types of fonts. For example, a first classifier is used for bold fonts and a second classifier is used for fonts with shadows. Further classifiers may be added for cursive bold fonts and another classifier for combination of all standard fonts. At step171, string matching is performed. String matching allows consideration of frequently occurring words with reduced weight in string match scoring for commonly occurring words or sub-strings. At step173, the output includes an optical character recognition (OCR) report, a score from the string matching per brand at segment and image level, and a likely matching logo. At step172, the classified segments from step161are sorted by their logo like properties and only the top k segments are selected for next stage processing via feature analysis, signature generation, and passing the classified segments for neural network classification. The parameter “k” is a predetermined number which is set to a specific number, such as 20, that represents a maximum number of logo-like segments that should be processed per image. Also, at step172, the regions are sorted by logo-like properties and selected segments are fingerprinted. One fingerprinting approach is a global method which uses gradients and trend and phase agreement to create a descriptor and then generate the signatures. Additional fingerprints generated are line context signatures of detected keypoint regions. Line-context signatures are derived from line-context descriptors and a line-context descriptor is a set of parameter values organized in consecutive order and derived from lines and edges detected as passing through an area centered on a keypoint. Signatures are generated for original detected keypoints, as well as, for extended or combined keypoint regions. 
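The following Python sketch illustrates one plausible reading of the DCT based spectral saliency property described above, in the spirit of sign-of-DCT "image signature" saliency; the smoothing parameter and the normalization are assumptions and not necessarily the exact computation performed at step160.

    import numpy as np
    from scipy.fft import dctn, idctn
    from scipy.ndimage import gaussian_filter

    def spectral_saliency(region: np.ndarray, sigma: float = 3.0) -> np.ndarray:
        """region: 2-D float array (e.g. a grayscale local region).
        Keeps only the signs of the DCT coefficients, inverts, squares and
        smooths; returns a saliency map normalized to [0, 1]."""
        signs = np.sign(dctn(region, norm="ortho"))
        recon = idctn(signs, norm="ortho")
        sal = gaussian_filter(recon * recon, sigma=sigma)
        rng = sal.max() - sal.min()
        return (sal - sal.min()) / rng if rng > 0 else np.zeros_like(sal)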
At step174, outputs are generated, such as global and line context signatures for segments. FIG.2illustrates a flowchart of a process200for logo recognition including image processing, logo detection and recognition in accordance with an embodiment of the present invention. The process200describes a presently preferred embodiment to detect logos in images and video frames. An image is input at step201in a similar manner as described above with regard to step151. At step202, the input image is processed to identify likely locations of logos and objects. Multiple local regions are analyzed for different properties, including spectral saliency using a discrete cosine transform (DCT) of each local region and spatial multi-scale saliency of each local region determined by calculating similarity for a hue, saturation, value (HSV) histogram and for a gradient orientation histogram. Thus, likely locations for logos and objects are identified within the input image. At step203, logo and object saliency is determined according to segment statistics such as density, color space values and being text-like as determined by using a stroke width transform (SWT) and generating stroke statistics and heuristics. Optionally, saliency for each image region is determined with spatial multi-scale similarity by comparing the HSV histogram and the gradient orientation histograms for multiple scales at each likely logo region. At step204, segmentation is performed in the region of interest using a masked version of a graph cuts segmentation or any other accurate and fast segmentation method. At step205, a stroke width analysis is performed to establish bounding boxes, such as word boxes around likely logo character strings, for selected regions and likely logo segments for further logo recognition analysis. A high level process flow, shown below, illustrates steps for text recognition which include: Logo candidates→word boxes→character segmentation→character recognition (multiple classifiers)→string matching→logo recognition scores. With reference toFIG.2, at step210, character segmentation and character classification includes a first process to produce logo candidates. A second process is employed to analyze each of the logo candidates and for each logo candidate, produce a set of word boxes. A third process analyzes each of the word boxes and produces a set of character segmentations to delineate the characters in likely logo character strings. A fourth process uses multiple classifiers to analyze each set of character segmentations to detect character words. At step213, a fifth process uses string matching on the detected character words and provides logo match scoring. At step211, signatures are generated for likely logo segments identified in step205. At step214, segment fingerprints are searched against a two stage indexed search system which holds reference logo signatures. For any likely matches, geometric correlation is further performed. Alternatively, a stop word process is performed to eliminate common signatures, or visual words, from the reference signature index. The signature generation flow has information measuring steps, as well as uniqueness analysis steps to transfer only unique and more robust logo segment signatures for further processing. At step212, convolutional neural networks (CNNs) are used to classify the incoming likely logo segment into a likely logo.
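For the string matching and logo match scoring described above, the sketch below shows one hedged way to score recognized words against a brand list while giving commonly occurring words reduced weight. The brand names, stop-word list, weights, and similarity measure are illustrative assumptions.

    from difflib import SequenceMatcher

    COMMON_WORDS = {"the", "and", "new", "sale"}   # frequent words carry less weight

    def word_weight(word: str) -> float:
        return 0.2 if word.lower() in COMMON_WORDS else 1.0

    def brand_score(ocr_words, brand: str) -> float:
        """Average of weighted best similarities between recognized words
        and the tokens of one brand name."""
        score = 0.0
        for token in brand.lower().split():
            best = max((SequenceMatcher(None, w.lower(), token).ratio() * word_weight(w)
                        for w in ocr_words), default=0.0)
            score += best
        return score / max(len(brand.split()), 1)

    def match_brands(ocr_words, brands):
        return sorted(((b, brand_score(ocr_words, b)) for b in brands),
                      key=lambda t: t[1], reverse=True)

    # Example: match_brands(["acme", "sale"], ["Acme Motors", "Other Brand"])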
In one embodiment of logo classification or logo selection, regions with text, as determined by stroke width transform (SWT) analysis and use of stroke width heuristics, are preserved. At step215, decision logic is applied to results from steps212,213, and214to decide whether a likely match as reported by OCR analysis from step213, by logo segment signatures analysis from step214, or by use of CNNs in the classification analysis from step212, is correct. At step215, the likely matches can be further verified by specific logo feature matching or by use of a specific logo neural network. Iteration around selected regions, identified in steps202-205, is used to improve on a likely match or to eliminate false matching. Based upon the results from the decision logic and iterations of step215, a brand is recognized and the identified brand's associated placement in a location or locations are reported at step216. FIG.3illustrates a flowchart of a process300for spectral, spatial and stroke analysis for logo region detection in accordance with an embodiment of the present invention. The process300describes the steps that select and segment logo-like regions. An image, including red green blue (RGB) components, is input at step301. At step302, the input image is processed to identify local regions that likely contain logos and objects using a spectral saliency process. At step302, spectral saliency region selection is computed, for example, by performing a discrete cosine transform (DCT) of each local region. At step303, the image from step301is converted to hue, saturation, value (HSV) planes311, while at step304gradient images312are generated by using gradient edge filters. The gradient images312are generated at eight orientations, and an intensity image310is also generated. At step305, a stroke width transform is generated for the image from step301followed by text stroke analysis to generate text stroke images313. At step320, the HSV images311, gradient images312and intensity image310, and text stroke images313are used to generate histograms and localization data to select and segment logo-like regions. At step320, spatial and object like saliency is measured by calculating similarity for HSV histograms and for gradient orientation histograms at multiple scales, and using stroke statistics to determine textual saliency at multiple scales. In general, a “saliency” image is bright where a desired kind of image object is likely and dark otherwise. At step321, a refined segmentation is performed using the stroke width images313, segmentation with a graph method, and contour analysis of connected components. In image segmentation, a color or intensity image is divided into regions according to one or more criteria. The method generally iterates, either merging or splitting regions to produce a segmentation with fewer or more regions. The split/merge relations from one iteration to the next can be expressed as a graph. A connected component is a foreground shape in a usually binary image of shapes that does not touch any other shape, and which does not consist of separate regions. Mathematically, for any two pixels in the shape, there must be an unbroken path connecting the pixels that is completely within the shape. The contour of the shape is the set of pixels on the edge of the shape. For a binary image, these pixels completely define the shape, so it is intuitive and efficient to define and analyze a shape by its contour. 
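The multi-scale histogram comparison used for spatial saliency at step320might be sketched as follows: an 8-bin gradient orientation histogram is computed for a region and for a down-sampled copy of it, and the two are compared with histogram intersection. The bin count, the two-scale comparison, and the intersection metric are illustrative choices.

    import numpy as np

    def orientation_histogram(region: np.ndarray, bins: int = 8) -> np.ndarray:
        """Magnitude-weighted 8-bin histogram of gradient orientations."""
        gy, gx = np.gradient(region.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), np.pi)          # orientation in [0, pi)
        hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
        total = hist.sum()
        return hist / total if total > 0 else hist

    def histogram_intersection(h1: np.ndarray, h2: np.ndarray) -> float:
        return float(np.minimum(h1, h2).sum())           # 1.0 means identical

    def multiscale_similarity(region: np.ndarray) -> float:
        """Compare the region's histogram with that of a 2x down-sampled version."""
        coarse = region[::2, ::2]
        return histogram_intersection(orientation_histogram(region),
                                      orientation_histogram(coarse))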
Operations include finding a minimum bounding-box for the contour and approximating it by a set of line segments or curves. In general, contour analysis extracts geometric information from a set of contour pixels. At step322, segments for a particular logo selected from one of the logo-like regions are generated and at step323, a location and an image for the particular logo are generated for further processing. Once the particular logo is recognized, an indicator that identifies the recognized logos can be logged and stored as metadata associated with the content. FIG.4Aillustrates a flowchart of a process400for character segmentation in accordance with an embodiment of the present invention. An image segment is input at step401. The process400illustrates steps for character segmentation of the image segment input at step401using a first stroke analysis process402including steps405,406, and407and using contour analysis of connected components, also referred to as a connected contour analysis process403including steps412,413, and414. Within the image segment from step401, an image blob may be separated into more distinct characters based on an analysis of contours of image sections which make up the image blob. The contour based separation of characters generally requires alignment of a number of detected and separated contours, such as three isolated contours. The first stroke analysis process402also addresses fewer than three separated contours which occurs many times for connected and touching characters. The first stroke analysis process402also optimizes processing steps, such as avoiding three additional rotations and image inversions to detect light on dark and vice-versa. At step405, a stroke width transform (SWT) analysis generates stroke statistics and, at step406, stroke detection heuristics are suitably employed. A stroke-width transform analyzes an image to create a second image in which (a) background pixels are 0-valued and (b) every pixel in a foreground stroke or region has a value equal to an estimated minimum width of the region or stroke through that pixel. From this, at step407, a connected-component analysis is done, for example, which labels each separate foreground stroke or region with a different integer label and can compute a minimum bounding box for said stroke or region. The connected contour analysis process403provides contour based character segmentation which includes contour separation, at step412. Then at step413, the input image segment is analyzed to find two or more closely spaced but separated image blobs each having a potential character, then at step414, the three potential characters are analyzed to find potential text segments, including one or more words using word bounding boxes. Process403, including steps412,413, and414, is performed for three 90 degree rotations of the input image segment if only the connected contour analysis process403is used. Also, for each potential text segment having a set of likely words, it is determined whether a horizontal alignment slant is present in text alignment or a vertical slant is present in the text alignment. At step415, the process400corrects for these alignment slants. Further, at step415, the process400performs a vertical transition analysis and a stroke density analysis. For each detected contour, an estimate is made whether the image segment from step401comprises multiple characters. 
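As a hedged illustration of the contour analysis described above, the following OpenCV-based sketch extracts external contours from a binary image, computes a minimum bounding box for each, and approximates each contour by line segments; the approximation tolerance is an assumed parameter, and the two-value return of findContours corresponds to OpenCV 4.x.

    import cv2
    import numpy as np

    def analyze_contours(binary: np.ndarray):
        """binary: uint8 image with foreground pixels set to 255.
        Returns a list of (bounding_box, polygon_approximation) per shape."""
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        results = []
        for c in contours:
            box = cv2.boundingRect(c)                      # (x, y, w, h)
            eps = 0.01 * cv2.arcLength(c, True)            # tolerance for the fit
            poly = cv2.approxPolyDP(c, eps, True)          # line-segment approximation
            results.append((box, poly))
        return results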
This estimate is made with heuristics of character height, width, transition in stroke density, and overall segment character geometry statistics. The first stroke analysis process402makes a significant improvement over the connected contour analysis process403of connected components for finding characters that are connected to each other and do not have three closely spaced but separated image blobs. A second SWT analysis, including steps418-421, provides improvements to the character segmentation results provided by the first stroke analysis process402and this is an iterated step to partition difficult joined characters. The stroke detection heuristics406is also utilized to select from a set of text classifiers so that improved matching occurs. At step418, two sets of character recognition models, also referred to as text classifiers, are described such as a first text classifier using a convolutional neural networks (CNN) and a second text classifier using a character recognition neural network (CRNN), to allow more robust text matching. Training of the two text classifiers is performed, for example, with two methods. A first method utilizes characters of various font types that are placed in training sets of character images for the first text classifier, the CNN. A second method utilizes characters of logos of the likely brands in training sets of character images for the second text classifier, the CRNN. In order to scale to large logo sets, and to be able to add new logos associated with new brands very quickly, a combination of the first text classifier and the second text classifier is employed. At step418, the first text classifier, the CNN, produces a first set of character strings and the second text classifier, the CRNN, produces a second set of character strings. Both of the two sets of character strings likely contain characters that are used in the brand logos that are likely to be encountered. At step419, string matching is performed for the two sets of character strings produced by step418against a set of logos for brand name products. Further, at step420rescoring is performed on how well each character string matches to a particular logo. Such rescoring can be used to reduce the score for frequently occurring words that are not likely to be included in a logo in order to reduce false positives. Strings that represent potential matches to frequent words are required to match visually or as determined by a neural network classifier and also the font types are to be declared a match. In step421, the process400produces a recognized string including likely word scores. SWT heuristics are used to segment characters and classify text. For example, an SWT analysis is applied to an input image to produce stroke statistics for a potential set of letters. The SWT analysis results are then evaluated for height, width, number of strokes, strokes traversed in a vertical direction, and strokes traversed in a horizontal direction to determine characters of a text segment in the input image. The SWT stroke statistics results are also evaluated with heuristics to segment characters. For example, a typical character width to height ratio is used that ranges from 0.15 of height of a thin letter, such as “I” to 1.5 of height for a wide letter, such as “W”. Stroke width median values and stroke median separation values are used to refine the above range using heuristic rules. 
For example, if a median stroke has a measured value S, and a median stroke separation has a measured value P, then a minimum letter width is considered to have a width S and a minimum letter separation is considered to have a separation S+P. Also, for example, a maximum letter width may be set to a width 3S+2P and a maximum letter separation may be set to a separation 3S+3P. It is appreciated that different fonts and font sizes may utilize different heuristics in this evaluation. Heuristics on the nature of contours are used to estimate stroke characteristics of the potential characters in the input image. For example, in many fonts, the letter "I" consists of one stroke, the letter "D" consists of two strokes, and more complex letters consist of three or more strokes. In a first step, arcs returning to a vertical stroke are combined, such as are contained in the letters "b" and "d"; a vertical stroke may also be present in the letter "a", while a vertical stroke would not likely be present in the letter "e". Also, an order of the strokes is identified; for example, the vertical stroke in a "b" is determined first for scanning left to right, while the vertical stroke in a "d" is the second stroke determined for scanning left to right. In a second step, arcs at 45 degrees (as in N, M, Z, W) are connected, and arcs are split at a 90 degree turn from the start point. In a third step, some error is tolerated by allowing overlapping boxes. Steps for stroke width transform (SWT) analysis, such as used inFIG.4B, are described next. Elements of the SWT analysis include, for example, detecting strokes and their respective widths. An initial value of each of the elements of the SWT analysis is set to a very large number to represent the effects of the number being infinite (∞). In order to recover strokes, edges in the image are computed using a Canny edge detector. After edges are detected in the input image, a gradient direction dp of every pixel p along a detected edge is considered. If the pixel p lies on a stroke boundary, then dp must be roughly perpendicular to the stroke orientation. A ray, as defined by p+dp*n, can be evaluated until another edge pixel q is found. If pixel q is found on the other side of the stroke boundary and its gradient direction dq is roughly opposite to dp (the test must be flexible to allow for shadow type fonts), then the ray cuts across the stroke and each pixel "s" along the ray from p to q is assigned a width equal to the distance between p and q, unless it already has a lower value. If dq is not roughly opposite to dp, the ray is discarded. An overlapping bounding box (BB) algorithm is described next. Characters are suitably represented to allow efficient detection of rotated characters, through use of a permutation of the feature vector. Characters having orientations of zero degrees (0°), 45°, and 90° are able to be detected, and regions that overlap are suppressed, which may be identified by use of a selected color, based on confidence in the classification. In another embodiment, strokes are efficiently detected by convolving the gradient field with a set of oriented bar filters. The detected strokes induce the set of rectangles to be classified, which reduces the number of rectangles by three orders of magnitude when compared to the standard sliding-window methods. FIG.4Billustrates a flowchart of a process450for character recognition in accordance with an embodiment of the present invention. An image segment is input at step451.
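The stroke width transform steps and the letter heuristics described in the preceding paragraphs might be sketched as follows; the ray-length cap, the gradient-opposition tolerance, the dark-on-light assumption, and the heuristic thresholds are illustrative and not the values used in the described processes.

    import numpy as np
    import cv2

    def stroke_width_transform(gray: np.ndarray, dark_on_light: bool = True):
        """Simplified SWT: every pixel on a stroke receives an estimate of the
        minimum stroke width through it; background pixels stay at infinity."""
        edges = cv2.Canny(gray, 100, 200)
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
        mag = np.hypot(gx, gy) + 1e-9
        dx, dy = gx / mag, gy / mag
        if dark_on_light:
            dx, dy = -dx, -dy          # march into the stroke interior
        h, w = gray.shape
        swt = np.full((h, w), np.inf)
        ys, xs = np.nonzero(edges)
        for y0, x0 in zip(ys, xs):
            ray = [(x0, y0)]
            x, y = float(x0), float(y0)
            for _ in range(30):                          # illustrative ray cap
                x += dx[y0, x0]; y += dy[y0, x0]
                xi, yi = int(round(x)), int(round(y))
                if not (0 <= xi < w and 0 <= yi < h):
                    break
                ray.append((xi, yi))
                if edges[yi, xi]:
                    # require a roughly opposite gradient on the far edge
                    if dx[y0, x0] * dx[yi, xi] + dy[y0, x0] * dy[yi, xi] < -0.5:
                        width = np.hypot(xi - x0, yi - y0)
                        for rx, ry in ray:
                            swt[ry, rx] = min(swt[ry, rx], width)
                    break
        return swt

    def plausible_letter_box(width, height, stroke_median_S, separation_median_P):
        """Heuristic from the text above: letter width roughly between S and
        3*S + 2*P, and a width/height ratio between 0.15 and 1.5."""
        return (stroke_median_S <= width <= 3 * stroke_median_S + 2 * separation_median_P
                and 0.15 * height <= width <= 1.5 * height)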
The process450illustrates steps for character segmentation of the image segment input at step451using a second stroke analysis452including steps455,456, and457and using a second contour analysis of connected components453including steps462,463, and464. The second stroke analysis452including steps455,456, and457follows the description of the first stroke analysis process402including steps405,406, and407ofFIG.4Aabove. Also, the second contour analysis of connected components453including steps462,463, and464follows the description of the connected contour analysis process403of connected components including steps412,413, and414ofFIG.4Aabove. The steps for the second contour analysis of connected components453include, a step462for contour separation, a step463that searches for two or more closely spaced but separated image blobs, and then in step464, searches for potential text segments. Further, for each set of likely words, a determination is made whether there is a horizontal slant in text alignment or a vertical slant, and if such alignment slant or slants are found, a correction is made for these detected slants. The corrections remove any vertical slant from each set of likely words. Further, a vertical transition and stroke density analysis are performed. For each contour, an estimate is made whether the contour is of multiple characters. This estimate is determined with heuristics of character height, width, transition in stroke density, and overall segment character geometry statistics. A second stroke width transform (SWT) analysis step455makes a significant improvement over the second contour analysis of connected components453for finding characters that are connected to each other and do not have two or more closely spaced but separated image blobs. A third SWT analysis including steps468-470provides improvements to the accurate character segmentation results provided by the second SWT analysis step455and this is an iterated step to partition difficult joined characters and improve confidence in accurate detection, wherein the number of iterations depend on complexity of the joined characters. The stroke detection heuristics step456is also utilized to select from a set of text classifiers so that optimal matching occurs. At step468, two sets of character recognition models, such as text classifiers are selected to allow more robust text matching. Training of the two text classifiers is performed, for example, with two methods. A first method utilizes characters of various font types that are placed in training sets of characters for the first text classifier. A second method utilizes characters of logos of the likely brands in training sets of characters for the second text classifier. It is noted that images of characters are used in the training sets. At step468, appropriate character recognition models are selected, and matched to available and trained neural networks. At step469, string recognition models including convolutional neural networks with multiple character classifiers are selected as suitable selected character recognition models. At step470, an N-gram matcher is used to detect likely logos, in conjunction with use of a logo dictionary of likely brand characters and words. The logo dictionary is preferably a small searchable database. The likely brand, if detected, is returned to the media recognition system100, as shown inFIG.1A, along with string, location and scores. 
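One hedged way to realize the N-gram matcher and logo dictionary of step470is sketched below: the recognized string and each dictionary entry are reduced to their sets of substrings of length up to N and compared with a Jaccard-style overlap. The dictionary contents and the choice of N=3 are illustrative.

    def ngrams(word: str, n: int = 3) -> set:
        """All substrings of length 1..n of a word, lower-cased."""
        w = word.lower()
        return {w[i:i + k] for k in range(1, n + 1) for i in range(len(w) - k + 1)}

    LOGO_DICTIONARY = {"acme", "example", "brandx"}     # hypothetical brand words

    def best_logo_match(recognized: str, n: int = 3):
        q = ngrams(recognized, n)
        best, best_score = None, 0.0
        for brand in LOGO_DICTIONARY:
            b = ngrams(brand, n)
            score = len(q & b) / max(len(q | b), 1)
            if score > best_score:
                best, best_score = brand, score
        return best, best_score

    # Example: best_logo_match("acne") still returns 'acme' for a near-miss OCR string.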
FIG.5illustrates a flowchart of a process500to generate a line context descriptor in accordance with an embodiment of the present invention. The line context descriptor generated in process500emphasizes the geometric location of contiguous and prominent line segments at a few scales within a region of an image around a detected keypoint. At step501, an image segment is input and at step502, the image segment is processed by an interest region detector. At step504, the interest region detector returns a detected keypoint with attributes of location x,y and multi-scales Sx, Sy. At step505, an interest region is established around the keypoint, in the form of a rectangle or ellipse, for example, according to the shape of the image region around the keypoint. Other interest region shapes may be used such as a square, a circle, a triangle, a hexagon, and the like. At step506, the image segment is processed to detect edges and contours. Next, at step508, a list of line segments is calculated. At step509, local segments relevant to each keypoint region according to step504, such as the ellipse or rectangle, are listed. At step515, results from steps505and509are received and overlapping local segments and a region unit, such as a square, are normalized. At step517, each segment angle is calculated. At step518, a dominant angle calculation is performed for the region around the detected keypoint. At step519, each of the calculated segment angles relative to the dominant angle orientation are then calculated. At step527, the average length for the region unit is calculated. At step528, the segments at each scale in the x direction Sx and a different scale in the y direction Sy are calculated. At step529, segments at uniformly spaced points in the region are calculated. At step530, a 3D histogram of distance, segment angle, and scale is calculated based on results received from steps519and529to generate a line context descriptor. At step532, a threshold is applied to the line context descriptor to produce a signature. Regarding the edge contour analysis at step506, to detect edges at different scales, a multi-scale edge detector, such as a multi-scale Canny edge detector or the like, is used with Gaussian derivatives applied at several pre-selected scales in both the x-direction and the y-direction. Two phases are used to remove unstable edges. A first phase applies a Laplacian operator. The edges determined from the first phase results that do not attain a distinctive extremum over scales are removed in the second phase. The edge pixels are linked to connected curves at different scales. Several cases are considered for representing curves with line segments. For example, a curve is fitted by multiple line segments. Two line segments, having a small gap less than a pre-specified gap threshold in between them, are merged into one larger line segment even if they have different scales. For a line segment to survive at a higher scale, the segments must belong to that scale or a higher scale. For each keypoint, line segments are searched for in the neighborhood of each keypoint. The neighborhood of a keypoint is called a context of the keypoint, and also may be referred to as a context of a feature associated with the keypoint. Line segments lying inside or partially inside the context are called context segments. An initial scale a provides an estimate of the size of a searching area. 
All the segments in scale level a and lower scales are included in the context as long as part of the segment is within distance kσ, where k is a pre-specified positive value, 0<k≤10, for example. Segments with very small lengths, less than a pre-specified value, are removed from the image and from further evaluation. The line context descriptor is based on multiple sampled points as a representation of segments in the context. Each sample point has 4 parameters: 1) the distance r to the keypoint; 2) the angle α∈[0, . . . , 360) between the direction from the keypoint to the sample point and a reference direction, which is the keypoint dominant orientation (the keypoint dominant orientation is determined from a majority of pixels aligned in a particular orientation in the keypoint neighborhood); 3) the angle β∈(−180, . . . , 180) between the reference direction and the orientation of the underlying segment; and 4) the underlying segment scale a. After sampling, all the sample points will be used to form the keypoint line context descriptor. The four parameters for each sample point are used to compute a line context descriptor for each keypoint with a coarse histogram of the sample points at relative coordinates of the line segment sample points. The histogram uses a normalized distance bin to record the sample points in reference to a coordinate system for the neighborhood of a keypoint to vote for the relative distances and thereby weight the sample points. The histogram is generated by binning sample points on selected edges according to angle and distance with reference to a dominant orientation of the selected edges. The accumulated weights from all sample points form a 3D descriptor. The scale σ and one level lower are good estimations for most cases. FIG.6illustrates a flowchart of a process600to generate a descriptor region by extending an affine detector for logo and object detection in accordance with an embodiment of the present invention. The process600generates a combined keypoint region and an associated descriptor. The combined keypoint region combines relevant and separated neighboring keypoints to generate a new keypoint region. At step601, an image is input and at step602, the input image is processed by a fast interest region detector to determine x and y coordinates of keypoints in associated interest regions. At step603, an accurate affine and Gaussian Hessian detector is applied to identify a plurality of keypoints. One presently preferred process is described in U.S. Pat. No. 8,189,945 issued May 29, 2012 entitled "Digital Video Content Fingerprinting Based on Scale Invariant Interest Region Detection with an Array of Anisotropic Filters" which is assigned to the assignee of the present application and incorporated by reference herein in its entirety. The process of step603uses an array of filters. The coordinates x and y are 2D coordinates of the keypoint from step602and Sx and Sy are scale values in each dimension, such that the array of filters is used to generate the x, y, Sx, Sy values in a 4D space. For fast and accurate calculation of the affine keypoint with Gaussian filters, a localized set of filters is calculated, then the peak Hessian is detected, followed by interpolation to calculate the location (x,y) and scales Sx, Sy. At step605, all neighboring keypoints are compared for distance from each other and difference in scales in the x direction and in the y direction.
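Returning to the line context descriptor of process500, the following sketch shows one way the sample point parameters described above might be binned into the 3D histogram of distance, segment angle, and scale and then thresholded into a signature. The bin counts, the omission of the angle α (which here only fixes the sample point position), and the mean-value threshold are illustrative assumptions.

    import numpy as np

    def line_context_descriptor(samples, r_max, n_r=4, n_beta=8, n_scale=3):
        """samples: iterable of (r, alpha_deg, beta_deg, scale_idx) tuples,
        as defined for each sample point in the text above."""
        hist = np.zeros((n_r, n_beta, n_scale))
        for r, _alpha, beta, scale_idx in samples:
            ri = min(int(n_r * r / max(r_max, 1e-9)), n_r - 1)     # normalized distance
            bi = min(int(n_beta * ((beta + 180.0) / 360.0)), n_beta - 1)
            si = min(int(scale_idx), n_scale - 1)
            hist[ri, bi, si] += 1.0
        total = hist.sum()
        return hist / total if total > 0 else hist

    def descriptor_to_signature(descriptor: np.ndarray) -> np.ndarray:
        """Threshold at the mean bin value to obtain a binary signature."""
        return (descriptor > descriptor.mean()).astype(np.uint8).ravel()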
Neighboring keypoints with closest distance to each other, scale agreement, and similar Hessian peak values are combined in a combined neighborhood keypoint. A new region, such as a rectangle or an ellipse, is formed around the combined neighborhood keypoint. At step610, the combined neighborhood keypoint is used in addition to the original keypoints to describe the segment image. At step611, a descriptor grid image is formed. At step612, a gradient calculation and a phase agreement calculation for each grid point are completed. Also, at step612, the results of the gradient calculation and phase agreement calculation are normalized and the descriptor is formed as a binary representation of the results, which produces a grid based descriptor. At step614, the grid based descriptor is further thresholded to generate a signature. Further details on a method to generate a global feature descriptor and signature are provided by U.S. Pat. No. 8,195,689 issued Jun. 5, 2012 entitled "Media Fingerprinting and Identification System" which is assigned to the assignee of the present application and incorporated by reference herein in its entirety. FIG.7illustrates a flowchart of a process700for synthetic image generation for training neural networks for logo recognition in accordance with an embodiment of the present invention. Since it is expected that a logo would be distorted in some manner by the time it is displayed, logos in training sets are also distorted as might be expected. Anticipated distortions include effects due to zooming, changes due to differences in display formats such as for cell phone displays, for tablet and laptop displays, and for displays in home entertainment systems, and the like. At step701, a set of expected logo images that are undistorted are input, at step702, a set of background images are input, and at step703, a set of scene images are input. Generally, logos are placed on specific backgrounds. However, since the backgrounds may vary, various backgrounds are included in the training sets based on the type of material used in the background, such as different fabrics, metals, cloths, wall colors, wallpaper, lighting, effects due to aging, and the like. At step704, the received logo images are warped and cropped as might be expected. Other distortion effects may also be applied depending upon a priori knowledge of anticipated effects that might occur on selected logos. At step705, the distorted logo image is placed on backgrounds appropriate for that logo, creating a set of background distorted logo images. For example, some logos would be placed only on apparel backgrounds, while other logos would be placed only on metallic car backgrounds. At step710, specific warping and distortions are performed on the set of background distorted logo images, creating a distorted logo image with multiple distortions applied to the background and original logo. At step711, the distorted logo image is inserted by blending the distorted logo image into a scene image with selected blend options. At step712, color transformations, brightness, and gray image versions are generated. These additional distortions are added as a separate step to encompass all the previous distortion additions. Also at step712, a further blending of the logo image with the scene is performed.
At step714, after the color and brightness distortions have been applied on the distorted logo image at step712, the distorted logo image is cropped to select regions around the logo to generate a training logo image. In order to generate synthetic training images for an image classifier including CNNs (convolutional neural networks), appropriate logo backgrounds, such as banners, shirts, cars, sails, and the like, and appropriate scene images for backgrounds are selected. For the selected logo, the logo background, and the scene image, appropriate types of distortions are chosen. Also, consideration is given to how a chosen type of distortion is to be applied. For example, types of distortions include selected color distortions and gray distortions. Also included are distortions due to blending, which merge a selected logo with the selected logo background colors using, for example, <10% blending, and distortions caused by affine transforms, taking into account that some affine transforms are not that effective since the logo has already been cropped, warped, and resized. Additional distortions include distortions due to perspective transforms, and distortions due to rotations of up to, for example, +/−12%, and the like. An alternate sequence of steps to generate a synthetic logo image segment is as follows. At step701, a logo is selected; at step704, a logo size is chosen; at step705, an impose point to insert the logo is chosen and the logo image is added to a background selected at step702; at step710, the logo image on the background is distorted using functions to morph, blur, and/or warp the selected logo. It is noted that in step704, distortions may be applied to the logo without the background. At this point, a selected logo image has been distorted and placed on a selected logo background (logo_bg), referred to as a logo_bg image. At step711, the logo_bg image is inserted on a scene image selected at step703. At step712, selected distortions that are applied to the logo_bg image with the scene include color distortions such as a 50% color shuffle, a 50% gray conversion, and a selected color scale, color bias, np clip, and color invert. Color bias includes red, green, and blue bias; np clip is saturation clipping; color invert is where colors are inverted; and the like. At step711, up to a maximum 20% blend of the logo_bg image with the selected scene is used to represent a situation where the background bleeds into the logo image. At step712, the image is distorted as described above. At step714, crop points are chosen and the distorted logo image is prepared for training. It is noted that multiple variations of a logo may exist related to the same product. In such a case, multiple logo representations are used in varying combinations based on many factors, such as the image area available to include one or more of the logo representations. FIG.8Aillustrates a flowchart of a process800for optimizing multi-scale CNNs for logo recognition in accordance with an embodiment of the present invention. At step801, an image segment is input. At step802, the input image segment is resized to a 64×64 Y image which is a luminance image. At step803, the input image segment is resized to a 32×32 RGB image. The 32×32 grid size is an appropriate pre-selected grid for the intended application of processing the RGB image. At step804, a convolution neural network (CNN) with 4 convolutional layers (4 Conv) and 2 fully connected layers (2 FC) processes the 64×64 Y image.
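As a hedged illustration of the synthetic training image generation of process700described above, the following Pillow-based sketch distorts a logo, places it on a background patch, blends the result into a scene by at most 20%, applies brightness distortion with an optional gray version, and crops around the logo. File paths, the distortion ranges, and the square logo resize are illustrative assumptions and only stand in for the warp, morph, and color operations discussed in the text.

    import random
    from PIL import Image, ImageEnhance

    def synth_training_image(logo_path, background_path, scene_path, out_size=(128, 128)):
        logo = Image.open(logo_path).convert("RGBA")
        background = Image.open(background_path).convert("RGBA")
        scene = Image.open(scene_path).convert("RGBA")

        # 1. Distort the logo: small rotation and resize (stand-ins for warp/crop).
        logo = logo.rotate(random.uniform(-12, 12), expand=True)
        logo = logo.resize((random.randint(40, 80),) * 2)

        # 2. Place the distorted logo on a background patch appropriate for it.
        bg = background.resize((160, 160))
        bg.paste(logo, (random.randint(0, 80), random.randint(0, 80)), logo)

        # 3. Blend the logo-plus-background into the scene (<= 20% scene bleed).
        scene = scene.resize(bg.size)
        composite = Image.blend(bg, scene, alpha=random.uniform(0.0, 0.2)).convert("RGB")

        # 4. Brightness distortion and an optional gray version.
        composite = ImageEnhance.Brightness(composite).enhance(random.uniform(0.7, 1.3))
        if random.random() < 0.5:
            composite = composite.convert("L").convert("RGB")

        # 5. Crop around the logo region to form the training image.
        return composite.crop((10, 10, 150, 150)).resize(out_size)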
FIG.8Aillustrates a flowchart of a process800for optimizing multi-scale CNNs for logo recognition in accordance with an embodiment of the present invention. At step801, an image segment is input. At step802, the input image segment is resized to a 64×64 Y image which is a luminance image. At step803, the input image segment is resized to a 32×32 RGB image. The 32×32 grid size is an appropriate pre-selected grid for the intended application of processing the RGB image. At step804, a convolution neural network (CNN) with 4 convolutional layers (4 Conv) and 2 fully connected layers (2 FC) processes the 64×64 Y image. In step806, classification of the input image segment as processed in step804occurs according to specific logo classes. Similarly, at step805, a CNN with 3 convolutional layers (3 Conv) and 2 fully connected layers (2 FC) processes the 32×32 RGB image. In step807, classification of the input image segment as processed in step805occurs according to specific logo classes. At steps810and811, detected pre-specified classes and probabilities of detecting the pre-specified classes are returned to the calling program to be used for subsequent processing. The above process800uses an optimized and complementary system. Since the Y luminance image includes most of the useful information, the Y luminance image provides significant accuracy with a 64×64 image as input. The RGB components of the input image segment also provide information that is useful. Accordingly, the 32×32 image grid is considered suitable for recognition. The combined features associated with luminance and RGB processing are classified separately in the embodiment ofFIG.8A. In another embodiment, the last stages of the fully connected layers of each input branch are combined for classification training. FIG.8Billustrates a flowchart of a process850for optimizing a multi-scale CNN for logo recognition in accordance with an embodiment of the present invention. At step851, an image segment is input. At step852, the input image segment is resized to a 64×64 Y image which is a luminance image. At step853, the input image segment is resized to a 32×32 RGB image. At step854, a convolution neural network (CNN) with four convolutional layers (4 Conv) and one fully connected layer (1 FC) processes the 64×64 Y image. Similarly, at step855, a CNN with three convolutional layers (3 Conv) and one fully connected layer (1 FC) processes the 32×32 RGB image. At step857, the outputs of these networks are combined and fully connected. At step859, the output of the CNN of step857is classified according to pre-specified logo classes. At step861, the detected pre-specified classes and probability of detecting the pre-specified classes are returned to the calling program to be used for subsequent processing. FIG.8Cillustrates a flowchart of a process870for logo text recognition using a CNN and an N gram classifier in accordance with an embodiment of the present invention. At step871, an image segment is input. At step872, the input image segment is resized to a 32×100 Y image which is a luminance image. For logos, it was determined experimentally that a grid size of 32×100 provided better accuracy than a 64×64 grid, especially for text based logos. At step873, the input image segment is resized to a 16×50 RGB image. For logos, it was determined experimentally that a grid size of 16×50 provided better accuracy than a 32×32 grid, especially for text based logos. At step874, a convolution neural network (CNN) with three convolutional layers (3 Conv) and one fully connected layer (1 FC) processes the 32×100 Y image. Similarly, at step875, a CNN with two convolutional layers (2 Conv) and one fully connected layer (1 FC) processes the 16×50 RGB image. At step877, the outputs of these networks are combined and fully connected. At step879, the output of the CNN of step877is classified by an N-way classification process and an N-gram string matching process according to pre-specified logo classes. At step881, the detected N-gram string and probability of the detected classes are returned to the calling program to be used for subsequent processing.
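For illustration, a two-branch network in the style of FIG.8B (a 64×64 Y branch with four convolutional layers and a 32×32 RGB branch with three convolutional layers, joined at a shared fully connected stage) can be sketched as below. This is a minimal PyTorch sketch; the channel widths, kernel sizes, and pooling choices are assumptions and are not specified by the patent.

```python
import torch
import torch.nn as nn

class TwoScaleLogoNet(nn.Module):
    """Illustrative two-branch network: 4-conv Y branch plus 3-conv RGB branch, combined FC."""
    def __init__(self, num_classes: int):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.ReLU(inplace=True), nn.MaxPool2d(2))
        # Four conv layers on the 1-channel 64x64 Y image -> 4x4 feature map.
        self.y_branch = nn.Sequential(block(1, 16), block(16, 32),
                                      block(32, 64), block(64, 64))
        # Three conv layers on the 3-channel 32x32 RGB image -> 4x4 feature map.
        self.rgb_branch = nn.Sequential(block(3, 16), block(16, 32),
                                        block(32, 64))
        self.y_fc = nn.Linear(64 * 4 * 4, 128)       # one FC per branch
        self.rgb_fc = nn.Linear(64 * 4 * 4, 128)
        self.joint_fc = nn.Linear(256, num_classes)  # combined, fully connected classifier

    def forward(self, y_img, rgb_img):
        fy = self.y_fc(self.y_branch(y_img).flatten(1))
        fc = self.rgb_fc(self.rgb_branch(rgb_img).flatten(1))
        logits = self.joint_fc(torch.cat([fy, fc], dim=1))
        return logits.softmax(dim=1)  # probabilities over pre-specified logo classes

if __name__ == "__main__":
    net = TwoScaleLogoNet(num_classes=10)
    y = torch.randn(2, 1, 64, 64)
    rgb = torch.randn(2, 3, 32, 32)
    print(net(y, rgb).shape)  # torch.Size([2, 10])
```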
The above process870uses an optimized and complementary system. Since the Y luminance image includes most of the useful information, the Y luminance image provides significant accuracy with a 32×100 image as input. The RGB components of the input image segment also provide information that is useful. Accordingly, the 16×50 image grid is considered suitable for recognition. An N-gram is a sequence of N items from a sequence of text or speech. The items can be phonemes, syllables, letters, words, or the like. The N-gram string matching process utilized in step879, also known as N-gram logistic training, comprises beginning with selecting a value for N and a word to be evaluated. In more detail, an N-gram set $G_N$ of a word w is the set of substrings s of length up to N letters:

$$G_N(w) = \{\, s : s \subset w \wedge |s| \le N \,\} \qquad \text{(equation 1)}$$

with $G_N = \bigcup_{w \in W} G_N(w)$ the set of all such grams in the language. For example, for N=3 and w=spires, $G_3(\mathrm{spires})$ = {s, p, i, r, e, s, sp, pi, ir, re, es, spi, pir, ire, res}. Given w, the system predicts a vector using the same base CNN, and a connected layer with $|G_N|$ neurons to represent the encoding vector. The $G_N$ scores of the fully connected layer are probabilities of an N-gram being present in the image segment. The CNNs of steps874,875, and877together are therefore learning to recognize the presence of each N-gram somewhere within the input image. The training problem becomes that of $|G_N|$ separate binary classification tasks (a true positive match versus no match), and the logistic regression loss is back-propagated with respect to each N-gram class independently, which represents a logistic regression. A logistic loss function is defined as:

$$V(f(\vec{x}), y) = \frac{1}{\ln 2}\,\ln\!\left(1 + e^{-y f(\vec{x})}\right)$$

This function displays a similar convergence rate to a hinge loss function, and since it is continuous, gradient descent methods can be utilized. However, the logistic loss function does not assign zero penalty to any points. Instead, functions which correctly classify points with high confidence, that is, high values of $|f(\vec{x})|$, are penalized less. This structure leads the logistic loss function to be sensitive to outliers in the data. The logistic loss function holds some balance between the computational attractiveness of the square loss function and the direct applicability of the hinge loss function. To jointly train a range of N-grams, some occurring frequently and some rarely, the gradients for each N-gram class are scaled by the inverse frequency of the N-gram class appearance in the training word corpus. FIG.9illustrates a flowchart of a process900including detection logic for logo recognition in accordance with an embodiment of the present invention. At step901, a video input is received and pre-processed to select frames and image regions within the selected frames for likely logo locations. At step902, the likely logo locations are further processed into image segments. At step904, the image segments are input to be processed by multiple recognition methods. For example, at step904, the input image segments are applied to a first convolutional neural network (CNN) classifier and a first set of probability scores for pre-specified logos is produced. At step906, the input image segments are applied to a second CNN classifier and a second set of probability scores for the pre-specified logos is produced. At step908, the input image segments are analyzed to generate a set of features which are then matched to known features resulting in a third set of probability scores for the pre-specified logos.
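Before continuing with the detection logic of FIG.9, the N-gram construction, the logistic loss of equation form above, and the inverse-frequency gradient scaling can be sketched directly. The small corpus and the use of a Python set (which collapses the duplicate letter entries of the worked example) are illustrative assumptions.

```python
import math
from collections import Counter

def ngrams(word: str, n: int) -> set:
    """G_N(w): all substrings of w of length 1..N (equation 1); returned as a set here."""
    return {word[i:i + k] for k in range(1, n + 1)
            for i in range(len(word) - k + 1)}

def logistic_loss(score: float, label: int) -> float:
    """V(f(x), y) = (1 / ln 2) * ln(1 + exp(-y * f(x))), with y in {-1, +1}."""
    return math.log1p(math.exp(-label * score)) / math.log(2)

# Per-class gradient scaling by inverse N-gram frequency in a (toy) training word corpus.
corpus = ["spires", "spire", "fires", "hire"]
N = 3
counts = Counter(g for w in corpus for g in ngrams(w, N))
inv_freq = {g: 1.0 / c for g, c in counts.items()}

print(sorted(ngrams("spires", N)))
print(logistic_loss(score=2.0, label=+1), logistic_loss(score=2.0, label=-1))
print(inv_freq["sp"], inv_freq["ire"])
```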
At step910, optical character recognition is used to generate candidate character strings which are matched to a set of words known to be used in logos resulting in a fourth set of probability scores for the pre-specified logos and text segments. If the above steps904,906,908, and910generate a high confidence match with known input segments, then at step916, decision logic selects the high confidence match and passes this result to step920, representing a matched brand, a location on an image from which the input image segment was selected, and the individual sets of scores. In case a high confidence match is not obtained, such as having the steps904,906,908, and910produce inconsistent results or produce generally weak scores from each step, then two additional verification methods are used. At step912, a per logo specific neural network (NN) or classifier trained to detect that pre-specified and specific logo evaluates the input image segments. Also, at step914, a per logo feature matcher evaluates the input image segments. The input image segments from step902, are further evaluated in the neighborhoods of the likely matches to produce new overlapping segments or combined segments or new segments. The outputs from steps912and914are then considered along with outputs from the steps904,906,908, and910at step916, by the decision logic. A high confidence match, if obtained, is output from step916as a match to step920. Since neural networks and feature matching approaches generally improve their classification results through learning using training images, a continuous training approach is taken to improve the decision logic at step916. At step920, ground truth (GT) logo-like segments that match segments in actual logos are input to automatically verify matches identified at step910. At step920, a user could also check the match results to verify the matches are of good quality. The above ground truth logo-like segments are used to train a support vector machine (SVM) or an ensemble classifier at step930. The results of step930are provided as feedback to step910and evaluated by the decision logic directly or fed back as a weighted input. If the evaluation of the step930results are of sufficient quality, as determined, for example, by whether the results exceed a match probability threshold, a high confidence match is produced as a final result output to post processing step935. It is also noted that some identified characters may have a high match probability and other identified characters may have a lower match probability. Such situations are indicated at step935in the final result output. The number of iterations of steps916,920, and930are also taken into account against a pre-specified iteration limit to determine at step935, the final result output. At step935, the post processing refines a predicted match to a user's profile restrictions and outputs a final result. Once a logo is recognized, an indicator that identifies the recognized logos can be logged and stored as metadata associated with the content. With every high confidence match, as identified from step930and that is fed back to step916, the neural networks, of steps904,905, and912, and feature classifiers, of steps908and914, learn and improve the accuracy of their internal processes. Thus, the identified process of continuously training is adopted into the flow of process900. The continuous training system may be enhanced by allowing undetected or new logo images and relevant data to be added into the system. 
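A highly simplified view of the step916decision logic, accepting a logo only when enough of the independent recognizers agree with high confidence and otherwise deferring to the per-logo verifiers of steps912and914, is sketched below. The thresholds, vote-counting rule, and dictionary format are illustrative assumptions rather than the patented logic.

```python
def decide(scores_by_method: dict, high_conf: float = 0.9, agree: int = 2):
    """Accept a high-confidence consensus match; otherwise request per-logo verification."""
    votes = {}
    for method, scores in scores_by_method.items():
        best_logo, best_p = max(scores.items(), key=lambda kv: kv[1])
        if best_p >= high_conf:
            votes.setdefault(best_logo, []).append((method, best_p))
    for logo, supporters in votes.items():
        if len(supporters) >= agree:
            return {"match": logo, "supporters": supporters, "verify": False}
    # Inconsistent or weak results: fall back to per-logo NN / feature matching.
    return {"match": None, "supporters": [], "verify": True}

example = {
    "cnn_1":    {"brandA": 0.95, "brandB": 0.03},
    "cnn_2":    {"brandA": 0.91, "brandB": 0.07},
    "features": {"brandA": 0.60, "brandB": 0.30},
    "ocr":      {"brandA": 0.55, "brandB": 0.20},
}
print(decide(example))
```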
Another embodiment of a continuously learning system allows undetected or new logo images to be added into the system at each step. The new input logo images and relevant data should be added to existing brands or new brands at the following steps: i) adding to feature matching references after stop word processing; ii) training image generation for CNNs for logo classifiers at steps904and906; iii) training the segment based OCR recognizer at step910; iv) adding the new text strings to match for character based OCR in step910; v) training the per logo NN recognizer or classifier in steps912and914; vi) training the text string weighting to reflect more frequent words; vii) training to reflect the association or correlation with the topic of associated content; viii) training with false positive examples at synthetic image generation and at decision SVM logic; and ix) training with missing true positive examples at synthetic image generation and decision SVM logic. FIG.10illustrates a flowchart of a process1000for tracking and mapping of a product brand and logo to a 3 dimensional (3D) physical location of an indoor or outdoor event or retail display in accordance with an embodiment of the present invention. The process1000is useful for broadcast TV and TV screen applications in video personalization. At step1002, a received identified match logo is tracked and the logo and associated product is segmented and measured to determine a brand. At step1004, a location where the product and logo are placed is identified. At step1006, the logo is classified as being for a wearable product, located on a banner, or on a fixture. At step1008, the product and brand are mapped to a 3D physical location at an indoor or outdoor event or retail display. Those of skill in the art will appreciate from the present disclosure additional, alternative systems and methods to associate multimedia tags with user comments and user selected multimedia snippets for efficient storage and sharing of tagged items between users, based on television program audio and video content fingerprinting, in accordance with the disclosed principles of the present invention. Thus, while particular embodiments and applications of the present invention have been illustrated and described, it is to be understood that the invention is not limited to the precise construction and components disclosed herein and that various modifications, changes and variations which will be apparent to those of ordinary skill in the art may be made in the arrangement, operation and details of the method and apparatus of the present invention disclosed herein without departing from the spirit and scope of the invention. | 75,153
11861889 | DESCRIPTION OF EMBODIMENTS Hereinafter, modes for carrying out the present invention will be described with reference to the drawings. The present invention is not limited to the embodiments described below and includes appropriately modified examples within a scope obvious to those skilled in the art. FIG.1shows an analysis device according to a first aspect of the present invention. As shown inFIG.1, the analysis device includes a light source1; a light irradiation region3irradiated with light from the light source1; a light-receiving unit7configured to receive scattered light (including Raman scattering), transmitted light, fluorescence, or electromagnetic waves from an observed object5located in the light irradiation region3and convert the received light or electromagnetic waves into an electrical signal; a storage unit9configured to receive and record the electrical signal from the light-receiving unit7; an analysis unit11configured to analyze the electrical signal related to the scattered light, the transmitted light, the fluorescence, or the electromagnetic waves recorded by the storage unit9and record an analysis result; and an optical system control unit13configured to optimize the light irradiation region3using machine learning or the like on the basis of the analysis result. The analysis device preferably further includes a light-receiving system control unit27configured to receive the electrical signal from the light-receiving unit7and optimize a light-receiving region25which is a region where light is radiated to the light-receiving unit7. In this analysis device, preferably, the light-receiving system control unit27optimizes the light-receiving region25using machine learning. Also, the analysis device can rapidly and accurately perform analysis even when the optical system control unit13is not present and only the light-receiving system control unit27is provided. Such an analysis device includes the light source1; the light irradiation region3irradiated with light from the light source1; the light-receiving unit7configured to receive scattered light, transmitted light, fluorescence, or electromagnetic waves from the observed object5located in the light irradiation region3and convert the received light or electromagnetic waves into an electrical signal; the storage unit9configured to receive and record the electrical signal from the light-receiving unit7; the analysis unit11configured to analyze the electrical signal related to the scattered light, the transmitted light, the fluorescence, or the electromagnetic waves recorded by the storage unit9and record an analysis result; and the optical system control unit27configured to receive the electrical signal from the light-receiving unit7and to optimize the light-receiving region25which is a region where light is radiated to the light-receiving unit7. The outline of the present invention will be described below. High-Accuracy and High-Sensitivity Cell Classification Method not Biased by Human Knowledge By creating a phenotype from a large amount of cell information including cell morphology, nuclear morphology, molecular localization, and molecular information generated by advanced optical technology using machine learning, objective and accurate optimum cell classification which as much as possible does not include a human knowledge bias is performed. 
It is also possible to analyze classification results of the machines from the viewpoint of humans and biology/genetics and interactively evaluate the classification results of the machines by educating the machines and the like. It is also possible to improve the sensitivity to specific object cells by educating the machines. FIG.2shows a result of classifying GMI temporal single-cell signals generated from single-cell images using machine learning. For example, a group of GMI temporal cell signals generated by a single-cell group in a large number of complex samples (for example, blood) is first classified using unsupervised machine learning (a cell image group classified as the same type on the left ofFIG.2) and a typical representative example of each group (a template image on the right ofFIG.2) is presented to the human. On the basis of the templates, the human selects cells according to a purpose (education for a machine). On the basis of a selection result, the machine can correct a determination criterion and classify object cells with higher sensitivity and higher speed. FIG.3shows a result of classifying GMI temporal single-cell signals generated from single-cell images using machine learning. Because they are randomly captured images, a case in which the cell is not necessarily a single cell is also included. In each group, the left image shows the original cell image of the GMI temporal single-cell signals belonging to the same classification, and the right image shows a representative template cell image in the same classification generated in a GMI reconstruction process from an average signal of the GMI temporal single-cell signals belonging to the same classification. Considering application to flow cytometry, it is also possible to classify a single cell or a plurality of cells and provide an effective approach for reducing a false positive rate. For example, in current flow cytometry in which cells are classified by measuring the total number of fluorescence molecules, a main false positive result indicates a single cell, a plurality of cells, incorporated foreign matter, or the like. The present invention can assign additional spatial information to existing flow cytometry information and reduce the false positive rate. Also, generation of a representative template image in the same classification is useful for practical use because it is possible to confirm whether or not classification based on the machine learning conforms to a user's intention. Introduction of the above-described machine learning can be applied not only to the integration of flow cytometry technology and high-speed imaging technology such as GMI, but also to the integration of flow cytometry technology and nonlinear molecular spectroscopy technology (Raman spectroscopy, stimulated Raman scattering spectroscopy, or coherent anti-Stokes Raman scattering spectroscopy). In this case, machine learning of a temporal signal of a scattered spectrum other than an image or a temporal waveform is performed, an analysis time is significantly shortened without involving a Fourier transform, and classification is performed without a human knowledge bias. It is also possible to construct an interactive system. For example, cells are classified using unsupervised machine learning, template spectra are generated, the human performs education while viewing a template, and cell classification can be performed according to a purpose with higher accuracy on the basis of supervised machine learning.
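As a minimal sketch of the unsupervised classification and template generation described above, the following Python code clusters 1-D temporal waveforms with a small hand-written k-means and takes the mean waveform of each cluster as the representative template signal. The synthetic waveforms and the choice of plain k-means are assumptions for illustration.

```python
import numpy as np

def kmeans_waveforms(signals, k, iters=50, seed=0):
    """Cluster 1-D GMI temporal waveforms (rows of `signals`); return labels and per-cluster mean templates."""
    rng = np.random.default_rng(seed)
    centers = signals[rng.choice(len(signals), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(signals[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = signals[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)  # template = average signal of the cluster
    return labels, centers

# Toy example: two waveform families standing in for two cell types.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
group_a = np.sin(6 * np.pi * t) + 0.1 * rng.standard_normal((50, t.size))
group_b = np.sin(10 * np.pi * t) + 0.1 * rng.standard_normal((50, t.size))
signals = np.vstack([group_a, group_b])

labels, templates = kmeans_waveforms(signals, k=2)
print(np.bincount(labels), templates.shape)
```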
High-Speed Imaging and Analysis Method of Obtaining Cell Space Information without Capturing Cell “Image” For example, space information of an imaging object (which cannot be recognized by the human eye) is effectively compressed and included in a temporal waveform signal obtained in a process of high-speed imaging technology using a single-pixel detection element in the GMI method. A process of causing the machine to learn such one-dimensional temporal waveform data is equivalent to a process of causing the machine to learn a two-dimensional image. Thus, a processing speed can be dramatically improved when no image reconstruction process is performed and the information and classification accuracy will not be degraded. It is possible to process cell space information with high speed, high accuracy, and highly sensitivity without reconstructing an “image.” For example, the unsupervised cell classification shown inFIGS.2and3also uses compressed temporal waveform signals. The compressed temporal signals can be used directly without involving the image reconstruction process in highly accurate and highly sensitive cell classification released from a human bias. Optical Imaging Method that Automatically Performs Optimization According to Object For example, machine learning is performed between a GMI image and a conventional camera image to dynamically correct an optical structure in which GMI is used. Also, it is possible to apply a cytometry result (including human recognition of whether specific information of the observed object is desired or the like) to an optical imaging method. Introduction of Automatic Machine Learning into Optical Imaging Process As shown inFIG.4, characteristics of the observed object differ greatly (bacteria, blood, and the like) according to a purpose or a researcher. By introducing automatic machine learning into an optical imaging process, the required optical structure is automatically optimized using a light spatial modulator or the like. There are parameters such as an overall shape (a rectangle, a rhombus, or an ellipse), a size, a density of a bright part of a random pattern, a size of each bright part of a pattern, and the like in the optical structure. Using the machine learning, this parameter group is optimized to improve accuracy while speeding up optical imaging and information processing. By introducing machine learning for the observed object of the next generation flow cytometry for generating a large amount of multi-dimensional information, a. large-quantity/multi-dimensional data can be processed, b. fast analysis/classification processing is possible, c. high accuracy is possible, d. human individual differences, human fatigue, and the like do not intervene, and e. it is possible to accurately find characteristics of cells which could not be recognized by limited human knowledge and perception. It is possible to execute high-accuracy and high-speed analysis and classification from spectral data which cannot be recognized by the human eye not only when imaging is performed for spatial information of cells according to fluorescence imaging or the like but also in molecular spectroscopy. Processing can be speeded up without degrading the quality of information by processing a temporal waveform signal obtained by effectively compressing cell spatial information in the optical imaging method instead of two-dimensional information of the cell image. 
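To illustrate that the compressed one-dimensional temporal waveform can be used for learning directly, without any image reconstruction step, the sketch below trains a plain logistic-regression classifier on raw synthetic waveforms. The two waveform classes, the learning rate, and the iteration count are illustrative assumptions.

```python
import numpy as np

# Train a linear classifier directly on compressed 1-D temporal waveforms (no reconstruction).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 300)
class0 = np.sin(8 * np.pi * t) + 0.2 * rng.standard_normal((100, t.size))
class1 = np.sin(12 * np.pi * t) + 0.2 * rng.standard_normal((100, t.size))
X = np.vstack([class0, class1])
y = np.array([0] * 100 + [1] * 100)

# Plain logistic regression by gradient descent (no external ML library).
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * float(np.mean(p - y))

accuracy = float(np.mean(((X @ w + b) > 0).astype(int) == y))
print("training accuracy on raw waveforms:", accuracy)
```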
By optimizing the optical imaging method in accordance with the observed object, it is possible to accurately collect object information while effectively compressing the object information and it is possible to speed up optical imaging and information processing without reducing the accuracy of cytometry. Also, by applying the cytometry result (including human recognition of whether specific information of the observed object is desired or the like) to the optical imaging method, it is possible to make a modification suitable for the purpose and increase the sensitivity of the cytometry. Each element in the analysis device of the present invention will be described below. Optical System The light source1and the light irradiation region3irradiated with the light from the light source1form an optical system for irradiating the observed object with light. The optical system may appropriately include an optical element including a mirror or a lens (not shown), a spatial light modulator, and a filter. The optical system may be an optical system (a system) having a structured illumination pattern having a plurality of regions with different optical characteristics. An example of the optical system may be a group of optical elements having a light source and a filter for receiving the light from the light source and forming a structured illumination pattern. Another example of the optical system is a light source group (or a light source group and an optical element group including optical elements) having a plurality of light sources for configuring an illumination pattern. For example, light from the light source passes through a filter having a pattern of optical characteristics and is radiated to an object to be measured with a pattern of light. The light source may be continuous light or pulsed light, but continuous light is preferable. The light source may include a single light source or may include a plurality of light sources regularly arranged (for example, the light source may include a plurality of light sources arranged at equal intervals in a vertical direction and a horizontal direction). In this case, it is preferable that one or both of the intensity and the wavelength can be controlled in the plurality of light sources. The light source may be white light or monochromatic light. Although examples of optical characteristics are characteristics related to one or more of an intensity of light, a wavelength of light, and polarization (e.g., transmittance), the present invention is not limited thereto. An example of a structured illumination pattern having a plurality of regions having different optical characteristics includes a plurality of regions having a first intensity of light and a plurality of regions having a second intensity of light. Examples of the plurality of regions having different optical characteristics have portions with different optical characteristics randomly scattered in a certain region. Also, in another example of the plurality of regions having different optical characteristics, a plurality of regions partitioned in a lattice shape are present and the plurality of regions include at least a region having a first intensity of light and a region having a second intensity of light. 
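A minimal sketch of the kind of two-level (bright/dark) structured illumination pattern described above is given below: a random binary pattern with a chosen bright-part density, a bright-spot size, and an overall footprint shape. The parameter names and the elliptical-mask option are illustrative assumptions about how such pattern parameters could be encoded.

```python
import numpy as np

def random_illumination_pattern(height=64, width=192, bright_fraction=0.3,
                                spot=2, shape="ellipse", seed=0):
    """Random binary illumination pattern: bright parts randomly scattered at a given density."""
    rng = np.random.default_rng(seed)
    coarse = (rng.random((height // spot, width // spot)) < bright_fraction)
    pattern = np.kron(coarse, np.ones((spot, spot), dtype=bool))  # each bright part is spot x spot

    if shape == "ellipse":  # restrict the overall footprint to an ellipse instead of a rectangle
        yy, xx = np.mgrid[0:height, 0:width]
        cy, cx = (height - 1) / 2, (width - 1) / 2
        mask = ((yy - cy) / (height / 2)) ** 2 + ((xx - cx) / (width / 2)) ** 2 <= 1.0
        pattern &= mask
    return pattern.astype(np.uint8)

H = random_illumination_pattern()
print(H.shape, H.mean())  # achieved density of bright pixels
```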
For example, the structured illumination pattern having a plurality of regions having different optical characteristics can be achieved by adjusting the intensity and frequency of each light source included in the plurality of light sources and can be obtained by radiating light from the light sources to a transparent film on which a pattern is printed. Preferably, the structured illumination pattern is radiated to the observed object. When the observed object5is mounted on, for example, a specific stage or moves on a specific stage, a region irradiated with light from the light source in the stage is the light irradiation region3. Normally, the observed object5is located in the light irradiation region3or passes through the light irradiation region3. Various types of observed objects5can be designated as the observed object according to a field of application. Although examples of the observed object are cells, body fluids, and the eyeball (which may be a moving eyeball), the present invention is not limited thereto. Light-Receiving Unit (Imaging Unit) The light-receiving unit7is a detection element configured to receive scattered light (including Raman scattering), transmitted light, fluorescence, or electromagnetic waves (hereinafter also simply referred to as an “optical signal”) from the observed object5located in the light irradiation region3and convert the optical signal into an electrical signal. If the electromagnetic waves are received, it is possible to perform analysis based on various types of spectroscopic technologies. For example, the light-receiving unit7may include an optical element such as a light spatial modulator or may be a light-receiving unit capable of appropriately adjusting an optical signal from the observed object5. If a region where optical signals from the observed object5located in the light irradiation region3reach the light-receiving unit7is a light-receiving region, the light-receiving region may be controlled by these optical elements. In other words, the light-receiving unit7may be a structured detection system having a plurality of regions having different optical characteristics. The light-receiving unit7may be configured to include a light spatial modulator or an optical element using a film partially coated or painted with a material that changes transmittance such as aluminum, silver, or lead for an optical element. In other words, the light-receiving unit7may be configured by arranging the above-described optical element between the observed object5to which the uniform illumination is radiated and the light-receiving unit7, or the like. If a region where optical signals from the observed object5located in the light irradiation region3reach the light-receiving unit7is set as a light-receiving region, the light-receiving region may be controlled by these optical elements. The light-receiving unit7includes a light-receiving device (an imaging device) and preferably includes one or a few pixel detection elements. Although examples of one or a few pixel detection elements are a photomultiplier tube and a multichannel plate photomultiplier tube, the present invention is not limited thereto. Because a few pixel detecting elements are compact and operable at a high speed in parallel, pixel detection elements are preferably used in the present invention. Examples of a single-pixel detection element are disclosed in Japan Patent Nos. 4679507 and 3444509. 
Examples of the light-receiving device include one or a few light-receiving devices such as a photomultiplier tube (PMT), a line type PMT element, an avalanche photodiode (APD), and a photodetector (PD) or a CCD camera and a CMOS sensor. The light-receiving unit7may have a plurality of types of detection devices. If reconstruction of an image is required for optimization, for example, one of the detection devices may be a detection system based on GMI and another detection device may be a normal camera. In this case, as shown inFIG.4, an image derived from the GMI and an image derived from the camera are compared and it is only necessary for an optical system or a detection system to perform adjustment so that a difference between a reconstructed image derived from the GMI and the image derived from the camera is reduced. Storage Unit The storage unit is an element connected to an element such as a light-receiving unit to exchange information with the connected element and configured to record the information. When the light-receiving unit includes a storage device such as a memory or a hard disk, these function as the storage unit. Also, if the light-receiving unit is connected to a computer, a server or the like connected to the computer functions as a storage unit in addition to the storage device (a memory, a hard disk, or the like) of the computer. The storage unit receives an electrical signal from the light-receiving unit7and records the received electrical signal. The analysis unit recognizes and classifies electrical signals related to scattered light, transmitted light, fluorescence, or electromagnetic waves recorded by the storage unit9and records results of recognition and classification. Using machine learning for analysis, cell data such as GMI compressed temporal signals which cannot be read by the human eye can be recognized and classified. That is, it is preferable that the analysis unit can store a program that performs machine learning and performs the machine learning on given information. Although specific details of the recognition and classification are described in the examples, the analysis unit, for example, recognizes the type of class to which the observed object belongs. For example, in the recognition using a k-means method, a class in which a distance between an electrical signal pattern serving as a template of each class obtained in advance and a newly obtained electrical signal pattern is minimized is designated as a class of a newly obtained electrical signal pattern. Moreover, electrical signals to be observed are stored, and a pattern group of stored electrical signals is classified. In this classification, classification is performed so that the distance of each electrical signal pattern from the average electrical signal pattern of its class is minimized. Also, a new electrical signal pattern is classified on the basis of this stored data and classification data. Further, if necessary, the stored data and classification data are updated on the basis of a new electrical signal pattern. Updating the classification data indicates that a new electrical signal pattern is used to calculate average data and intermediate value data of each class. For example, the classification data is updated by adding a new electrical signal pattern c to (a+b)/2 which is an average value of electrical signal patterns a and b and obtaining (a+b+c)/3.
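The nearest-template recognition and the running-mean template update just described (for example, (a+b)/2 followed by (a+b+c)/3) can be sketched as follows. The class names and the Euclidean distance are illustrative assumptions.

```python
import numpy as np

class TemplateClassifier:
    """Assign a new electrical-signal pattern to the class with the closest average template,
    then fold the new pattern into that class's running mean, e.g. (a+b)/2 -> (a+b+c)/3."""
    def __init__(self):
        self.sums, self.counts = {}, {}

    def add(self, label, pattern):
        self.sums[label] = self.sums.get(label, 0) + np.asarray(pattern, float)
        self.counts[label] = self.counts.get(label, 0) + 1

    def template(self, label):
        return self.sums[label] / self.counts[label]

    def classify(self, pattern, update=True):
        pattern = np.asarray(pattern, float)
        label = min(self.sums, key=lambda c: np.linalg.norm(self.template(c) - pattern))
        if update:  # update the stored classification data with the new pattern
            self.add(label, pattern)
        return label

clf = TemplateClassifier()
clf.add("cell_type_1", [1.0, 0.0, 0.2]); clf.add("cell_type_1", [0.8, 0.1, 0.3])
clf.add("cell_type_2", [0.0, 1.0, 0.9])
print(clf.classify([0.9, 0.05, 0.25]), clf.template("cell_type_1"))
```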
The analysis unit may include, for example, a temporal signal information acquisition unit configured to receive an optical signal during a fixed period and acquire temporal signal information of the optical signal and a partial signal separation unit configured to separate partial temporal signal information in a partial region of an observed object from the temporal signal information. For example, if temporal signal information when the observed object is a contaminant which is not a cell or the like is stored and partial temporal signal information of a certain observation part is classified as a pattern classified as partial temporal signal information of a contaminant or the like which is not a cell, it is possible to ascertain that there is no cell in the observation part. Because it is possible to ascertain a region where there is no cell without reconstructing an image, the speed of processing is increased, control is performed so that no light is radiated to part thereof as will be described below or temporal signal information is not adopted thereafter, and therefore, it is possible to reduce an amount of information, reduce the number of errors, and speed up processing. According to a field of application, the analysis unit may further include a partial image reconstruction unit configured to extract or reconstruct information about an image of each part of the observed object (an intensity of emitted light or the like) from the obtained partial temporal signal information of the observed object. Also, the analysis unit may include an image reconstruction unit for reconstructing an image related to the observed object using an image of each part of the observed object reconstructed by the partial image reconstruction unit. Although this case is preferable because the human can perform a verification operation, analysis and classification becomes time-consuming because the image of the observed object is reconstructed once. A detected signal includes information of the detected intensity for each change over time. The temporal signal information acquisition unit acquires temporal signal information of an optical signal. In an example of the temporal signal information acquisition unit, the light-receiving unit of one or a few pixel detection elements receives the detected signal received, detected, and stored for a fixed time as the temporal signal information. The temporal signal information acquired by the temporal signal information acquisition unit may be appropriately stored in the storage unit. Also, the temporal signal information acquired by the temporal signal information acquisition unit may be sent to the partial signal separation unit so that the temporal signal information is used in an arithmetic process of the partial signal separation unit. The partial signal separation unit is an element for separating the partial temporal signal information in the partial region of the observed object from the temporal signal information. The temporal signal information includes a detected signal derived from each part of the observed object. Thus, the partial signal separation unit separates partial temporal signal information which is temporal signal information in each partial region of the observed object from the temporal signal information. 
At this time, the partial signal separation unit reads information H about the stored illumination pattern and separates the partial temporal signal information using the information H about the read illumination pattern and the temporal signal information. That is, because there is a change corresponding to the information H about the illumination pattern, the temporal signal information can be separated into the partial temporal signal information using the information H about the illumination pattern. The partial temporal signal information which is temporal signal information in each partial region of the observed object from the temporal signal information may be appropriately stored in the storage unit. Also, according to a field of application, the partial temporal signal information may be sent to the partial image reconstruction unit for an arithmetic process of the partial image reconstruction unit. The partial image reconstruction unit is an element for extracting or reconstructing information about an image of each part of the observed object (the intensity of emitted light or the like) from the partial temporal signal information. Because the partial temporal signal information is temporal signal information in each partial region, information f about an intensity of light in each region can be obtained. The information about the image of each part of the observed object (the intensity of emitted light or the like) may be appropriately stored in the storage unit. Also, the information about the image of each part of the observed object (the intensity of emitted light or the like) may be sent to the image reconstruction unit for the arithmetic process of the image reconstruction unit. In this case, for example, because the observed object can be analyzed before the image is reconstructed, it is possible to optimize the light source system and the detection system rapidly and also obtain the image of the observed object. The image reconstruction unit is an element for reconstructing an image related to an observed object using images of parts of the observed object reconstructed by the partial image reconstruction unit. Because the image of each part of the observed object is an image of each region of the observed object, the image related to the observed object can be reconstructed by adjusting the image. The analysis device preferably optimizes a classification algorithm of the analysis unit11using machine learning. That is, the analysis unit11includes a classification algorithm for performing various types of analysis. This classification algorithm is optimized using machine learning. The classification algorithm includes an algorithm for making a classification using the classification of the observed object described above or the classification of a signal when the observed object is not present. An example of the analysis is a process of ascertaining a characteristic optical signal component of the observed object, setting a threshold value to be used in a classification operation, or setting a condition for optimizing the optical system and the detection system. The machine learning is well-known as disclosed in, for example, Japanese Patent No. 5574407, Japanese Patent No. 5464244, and Japanese Patent No. 5418386. An example of the machine learning is learning using an Ada Boosting algorithm. 
For example, the machine learning may be a process of obtaining optical signals of a plurality of objects among observed objects and learning characteristics of the observed object from the obtained optical signals. By performing machine learning, it becomes possible to detect the presence or absence of specific cells extremely efficiently and rapidly. The object of the machine learning is not limited to an image; for example, it is only necessary to detect a vibration that is not imaged, as in a case in which Raman spectroscopy is used, and to use the detected signal as an analysis object. For example, the analysis unit can analyze optical signals of a plurality of observed objects using machine learning and perform analysis such as the classification/recognition of the optical signals of the observed objects. In this analysis device, preferably, the analysis unit11analyzes an observed object without reconstructing an image of the observed object with an electrical signal related to scattered light, transmitted light, fluorescence or electromagnetic waves. For example, the analysis unit11analyzes the observed object using the above-described temporal signal information, partial temporal signal information, or GMI. In this example, it is possible to analyze whether or not an observed object is a specific object and analyze information about the observation target including a size and a position of the observation target by recognizing a pattern or the like using machine learning for a plurality of objects and collating the temporal signal information, the partial temporal signal information, or the GMI with the pattern or the like. Optical System Control Unit The optical system control unit13is an element for optimizing the light source1or the light irradiation region3on the basis of an analysis result. A control signal (a control command) for controlling the light-receiving unit7, the light source1, or the optical element constituting the optical system may be requested in the analysis unit described above, or may be requested in the optical system control unit13. If the control command is requested in the analysis unit, the optical system control unit13can optimize a light source system by controlling the light source1or the optical element constituting the optical system in accordance with the control signal (the control command) requested by the analysis unit11. An example of the optical system control unit13is a computer connected so that information can be exchanged with a control unit configured to control the light-receiving unit7, the light source1, or the optical element constituting the optical system. In this computer, a program is installed so that a specific arithmetic process and an input/output can be performed. An example of this program is a program for performing machine learning. In an image capturing mechanism such as GMI, it is preferable that the optical system control unit13perform processing with the temporal waveform electrical signal (GMI) as it is without reconstructing the image of the observed object on the basis of the electrical signal from the light-receiving unit7. According to a field of application, a function of reconstructing the image of the observed object may be provided. In this case, the image quality can be verified.
In the spectrum temporal waveform recording mechanism such as Raman spectroscopic measurement, it is preferable that the optical system control unit13process the temporal waveform electrical signal as it is without performing a Fourier transform on the spectrum and analyzing a molecular spectrum in the frequency domain on the basis of the electrical signal from the light-receiving unit7. However, according to a field of application, the optical system control unit13may analyze the molecular spectrum in the frequency domain by performing a Fourier transform on an electromagnetic wave spectrum. In this analysis device, preferably, the optical system control unit13optimizes the light source1or the light irradiation region3using machine learning. An example of optimization of the light source1is to adjust an intensity of light of the light source1. In this analysis device, preferably, the light from the light source1has a plurality of light regions21, and the optical system control unit13controls an optical structure of the plurality of light regions21. In this analysis device, preferably, the optical system control unit13analyzes a region of presence of the observed object5on the basis of the electrical signal and performs control for limiting the light irradiation region3. In this analysis device, preferably, the optical system control unit13analyzes a density of the observed object5on the basis of the electrical signal to obtain coarseness/fineness information of the observed object and controls the light source1or the light irradiation region3on the basis of the coarseness/fineness information. Light-Receiving System Control Unit The analysis device preferably further includes the light-receiving system control unit27configured to receive an electrical signal from the light-receiving unit7and optimize the light-receiving region25which is a region where light is radiated to the light-receiving unit7. In the light-receiving system control unit27, the analysis unit described above may perform analysis of the light-receiving system. That is, for example, the analysis unit adopts a program for machine learning, and classifies a received optical signal of part of the light-receiving unit where no useful information is obtained. If the received optical signal of a certain part of the light-receiving unit is classified as this classification, for example, processing is performed so that information from this part is not used for analysis. Accordingly, it is possible to reduce the analysis load and to perform the process rapidly. In the analysis device, preferably, the light-receiving system control unit27optimizes the light-receiving region25using machine learning. In this preferred form, the light source1or the light-receiving region25may be optimized using a technique similar to optimization of the optical system by the optical system control unit13. An example of the light-receiving system control unit27is a computer connected so that information can be exchanged with the control device configured to control the light-receiving unit7, the light source1, or the optical elements constituting the light-receiving system and the light-receiving region25. That is, the present description discloses optimization of the optical system and optimization of the light-receiving system and discloses optimization of only the light-receiving system. The analysis device may include various types of elements included in a well-known analysis device.
An example of such an element is a relative position control mechanism. Next, an operation example of the imaging device of the present invention will be described. FIG.5is a conceptual diagram showing that the observed object passes through patterned illumination. As shown inFIG.5, the observed object5is moved by a relative position control mechanism and passes through patterned illumination of the optical system. This patterned illumination optical structure exhibits its intensity distribution in a matrix of H(x, y). This observed object5has fluorescence molecules indicated by optical spatial information, for example, F1to F4. Depending on the intensity of the received light, these fluorescence molecules either do not emit fluorescence or emit fluorescence whose intensity varies. That is, in this example, the fluorescence molecule denoted by F2first emits fluorescence and an intensity of emitted fluorescence is affected by the patterned illumination through which the observed object5passes. The light from the observed object5may be appropriately focused according to a lens or the like. Then, the light from the observed object5is transmitted to one or a few pixel detection elements. In the example ofFIG.5, a traveling direction of the observed object is set as an x-axis, and a y-axis is provided in a direction perpendicular to the x-axis which is in the same plane as that of the x-axis. In this example, F1and F2are observed as fluorescence on y1which is the same y coordinate (F1and F2are denoted by H(x, y1)). Also, F3and F4are observed as fluorescence on y2which is the same y coordinate (F3and F4are denoted by H(x, y2)). FIG.6is a conceptual diagram showing a state of fluorescence emitted by the observed object shown inFIG.5. As shown inFIG.6, the fluorescence is emitted from each fluorescence molecule. For example, because F1and F2experience the same illumination pattern, they are considered to have similar time response patterns or output patterns. On the other hand, the intensity of emitted light is considered to be different between F1and F2. Thus, intensities of emitted light of F1and F2can be approximated so that they are products of F1and F2which are coefficients specific to each light emitting molecule and H(x, y1) which is a time response pattern common to the coordinate y1. The same is true for F3and F4. FIG.7is a conceptual diagram showing a detected signal when fluorescence emitted by the observed object shown inFIG.5is detected. This detected signal is observed as a sum signal of fluorescence signals shown inFIG.6. Then, this signal includes a time-varying pattern H(x, yn) of a plurality of intensities. Then, it is possible to obtain each coordinate and a fluorescence coefficient (an intensity of fluorescence) at each coordinate from each intensity (G(t)) of the detected signal and H(x, yn). FIG.8is a conceptual diagram showing a position and an intensity of fluorescence of fluorescence molecules obtained from an intensity of a detected signal. As shown inFIG.8, fluorescence coefficients (intensities of fluorescence) F1to F4can be obtained from the detected signal G(t). The above-described principle will be described in more detail.FIG.9is a diagram showing the principle of image reconstruction. For example, F(1) and F(2) are assumed to be in-object coordinates. Then, at time 1, light of a first pattern is radiated to F(1) and is not radiated to F(2). At time 2, light of a second pattern is radiated to F(1) and light of the first pattern is radiated to F(2).
At time 3, no light is radiated to F(1) and the light of the second pattern is radiated to F(2). Then, the detected signal G(t) is as follows. G(1)=F(1)H(1), G(2)=F(1)H(2)+F(2)H(1), and G(3)=F(2)H(2). When the above is solved, F(1) and F(2) can be analyzed. Using this principle, the coordinates F(1) to F(n) can be obtained by performing analysis in a similar manner even if there are more in-object coordinates. Next, if the object is two-dimensional, internal coordinates of the observed object are assumed to be F(x, y). On the other hand, pattern illumination is also assumed to have coordinates. Assuming that there are n internal coordinates of the observed object in the x-axis direction and n in the y-axis direction, the number of unknowns of F(x, y) is n×n. F(x, y) (0≤x≤n and 0≤y≤n) can be reconstructed by measuring the signal in the same manner as above and analyzing the obtained signal G(t). FIG.10is a diagram showing an example of the image reconstruction process. In this example, the image is represented as f (an object position information vector) in a matrix equation. Then, patterned illumination is represented by H(X, y), and X is represented by a variable which varies with time. Also, an intensity of a detected signal is represented as g (a measured signal vector). Then, these can be represented as g=Hf. As shown inFIG.10, it is only necessary to multiply an inverse matrix H−1 of H from the left in order to obtain f. On the other hand, H may be too large to easily obtain the inverse matrix H−1 of H. In this case, for example, a transposed matrix HT of H may be used instead of the inverse matrix, and an initial estimated value finit of f can be obtained as a result of multiplying the transposed matrix HT of H by g. Thereafter, the image of the observed object can be reconstructed by optimizing f starting from the initial estimated value finit of f.
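A minimal numerical sketch of this g=Hf reconstruction is shown below: the initial estimate is taken as HTg and then refined. Gradient descent on the squared residual is used here only as a stand-in for the optimization step, which the description above does not tie to a particular solver; the measurement matrix and object are synthetic.

```python
import numpy as np

def reconstruct(H, g, iters=200, lr=1e-3):
    """Recover f from g = Hf: start from f_init = H^T g, refine by gradient descent on ||g - Hf||^2."""
    f = H.T @ g                          # initial estimated value f_init of f
    for _ in range(iters):
        f -= lr * (H.T @ (H @ f - g))    # illustrative refinement step
    return f

rng = np.random.default_rng(0)
n_pixels, n_samples = 36, 120
H = (rng.random((n_samples, n_pixels)) < 0.3).astype(float)  # random bright/dark measurement matrix
f_true = np.zeros(n_pixels); f_true[[5, 12, 30]] = [1.0, 0.5, 0.8]
g = H @ f_true                                               # measured signal vector
f_rec = reconstruct(H, g)
print(np.round(f_rec[[5, 12, 30]], 2))
```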
Shape of Optical Structure Adjusted to Shape of Observed Object Observed objects in cytometry such as cells are spherical in many cases. At this time, the overall shape of the optical structure need not be rectangular. For example, bright parts at the four corners of an optical structure are changed to dark parts and the dark parts enlarged to an extent at which the quality does not deteriorate. If the quality drops, it is only necessary to repeat a cycle in which a few new points are added to the four corners. During cell imaging, a dense structure of cytoplasm, nuclei, or the like and a sparse structure of a cell membrane, localized molecules, or a specific chromosome label (a FISH method or the like) exist. Basically, in GMI, a sparse optical structure is desirable for a dense object structure, and it is desirable to design a dense optical structure for a sparse object structure. First, the density of the object is recognized on the basis of the GMI electrical signal. For example, when a ratio of the value of the object signal integrated over time to the product of the maximum value of the object signal and the time width of the object signal (that is, the value of the object signal integrated over time divided by the product of the maximum value of the object signal and the time width of the object signal) is greater than or equal to a fixed value, the object is dense. Also, when the ratio is less than the fixed value, the object is sparse. This value is adjusted according to a sample or an object structure. On the basis of this recognition, a more sparse or dense optical structure is designed. For example, the occupancy of a random bright part relative to the whole structure is increased or decreased and a new random pattern is created (the current GMI optical structure uses a DMD/optical mask and the structure is simply random; thus, two values of brightness and darkness are present and a bright part is specified as any % of the whole and randomly scattered). Finally, the above-described cycle is repeated until the above-described ratio (the ratio of the value of the object signal integrated over time to the product of the maximum value of the object signal and the time width of the object signal) falls in a certain fixed range (depending upon the object). Intensity of S/N of Object Signal According to the object, an intensity of an optical signal is very weak and an S/N ratio is low. If the S/N ratio is low, highly accurate cell information processing and cytometry may not be able to be performed. One technique for increasing the S/N ratio in GMI is to perform a plurality of binning operations on pixels in a spatial light modulator and set the pixels subjected to the binning operations as unit pixels of the GMI optical structure. Thereby, an intensity of light of the unit pixel of the GMI optical structure can be increased and the S/N can be improved. Also, in Raman spectroscopy, one technique for increasing the S/N ratio is to reduce light radiation to a part which does not pass through the object. Thereby, the intensity of noise light can be reduced and S/N can be improved. A simplest binning technique is a method of binning the same number of pixels in vertical and horizontal directions such as 2*2 and 3*3. However, this increases the structural size of the spatial modulator (corresponding to the real size of the GMI optical structure on the sample), and the throughput, an amount of information, and the quality deteriorate. Binning is done in a horizontally elongated rectangle, for example, 1 pixel in length*2 pixels in width. Then, because the spatial resolution in the GMI is determined by the size of the vertical pixels, the spatial resolution does not deteriorate. However, the number of horizontal pixels (corresponding to an actual horizontal width of the GMI optical structure on the sample) of the spatial modulator becomes large and the throughput is sacrificed. Binning is done in a vertical rectangle, 2 pixels in length*1 pixel in width. Then, because the spatial resolution in the GMI is determined according to a size of the vertical pixels, the spatial resolution deteriorates. However, the number of horizontal pixels (corresponding to the actual horizontal width of the GMI optical structure on the sample) of the spatial modulator does not change and the throughput does not deteriorate.
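The density recognition and pattern adjustment cycle described above can be sketched as follows: the ratio of the time-integrated object signal to (maximum value × time width) is computed, and the bright-part occupancy of the random pattern is nudged toward a sparser structure for a dense object or a denser structure for a sparse object. The thresholds and step size are illustrative assumptions, since the text notes that the threshold is tuned per sample.

```python
import numpy as np

def density_ratio(signal, dt=1.0):
    """(integral of the object signal over time) / (max value of the signal * time width of the signal)."""
    width = signal.size * dt
    return float(signal.sum() * dt / (signal.max() * width))

def adjust_bright_fraction(signal, bright_fraction, lo=0.35, hi=0.65, step=0.05):
    """One iteration of the cycle: dense object -> sparser pattern; sparse object -> denser pattern."""
    r = density_ratio(signal)
    if r >= hi:
        bright_fraction = max(0.05, bright_fraction - step)
    elif r <= lo:
        bright_fraction = min(0.95, bright_fraction + step)
    return r, bright_fraction

t = np.linspace(0, 1, 500)
dense_like = np.clip(np.sin(np.pi * t), 0, None) ** 0.5   # broad, plateau-like object signal
sparse_like = np.exp(-((t - 0.5) / 0.02) ** 2)            # narrow spike
print(adjust_bright_fraction(dense_like, 0.30))
print(adjust_bright_fraction(sparse_like, 0.30))
```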
However, the number of horizontal pixels (corresponding to the actual horizontal width of the GMI optical structure on the sample) of the spatial modulator does not change and the throughput does not deteriorate. During cell imaging, a complex structure such as a cytoskeletal structure, a virus infection path, or a cell signal network is provided. It is difficult to design an optimum optical structure for such a complex structure. At this time, on the basis of the GMI electrical signal, the optical structure can be automatically optimized using machine learning, for example, for the purpose of improving the quality of the reconstructed image. Also, in optimization of the optical structure using machine learning, optimization including the above-described example can be achieved, for example, by setting an objective function to improvement of the imaging throughput, reduction of an amount of calculation for the electrical signal, reduction of an amount of image information, improvement of the image quality, improvement of sensitivity to a target feature quantity (a nucleus/cytoplasm ratio, a cell size, a chromosome aggregation image, the number of chromosomes, or the like), improvement of S/N of a signal, improvement of recognition accuracy, or the like. For example, during cell imaging based on GMT, the optical structure can be optimized with a well-known machine learning and optimization algorithm. Well-known machine learning and optimization algorithms include evolutionary algorithms and simulated annealing. For example, it is possible to achieve the above-described objective and optimize an optical structure using machine learning by setting minimization of an area of the illumination region, maximization of image quality or recognition accuracy, maximization of S/N, or the like as an objective function of the optimization algorithm. FIG.11is a conceptual diagram showing an example of a process of optimizing the illumination pattern. In this example, an optical signal (a GMI signal, a ghost motion imaging signal, and g) is obtained using the above-described imaging device, and an image (F) obtained by reconstructing the observed object5is obtained. On the other hand, the imaging unit images the same observed object5and obtains a captured image (C) of the observed object. Then, the optical system control unit compares the reconstructed image (F) with the captured image (C). In this comparison, for example, after the reconstructed image (F) and the captured image (C) are adjusted to have the same size, it is only necessary to obtain a sum of contrast differences of colors or intensities included in pixels (or to obtain a sum of absolute values of the differences or a sum of squares of the differences) and set the obtained value as a comparison value (c). Then, the optical system control unit appropriately changes the illumination pattern and obtains the comparison value again using the same observed object as the previous observed object5or using the same type of observed object. In this manner, after a plurality of combinations of illumination patterns and comparison values are obtained, it is only necessary to determine an optimum illumination pattern in consideration of an amount of information of the illumination pattern. Also, on the basis of the reconstructed image (F), the size of the observed object may be ascertained and the illumination pattern may be controlled so that the illumination pattern becomes an illumination pattern corresponding to the ascertained size. 
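As a concrete illustration of the comparison value (c) introduced above, the following sketch resamples the captured image (C) onto the grid of the reconstructed image (F) and sums the squared per-pixel differences. The nearest-neighbour resizing and the choice of squared differences are assumptions of the sketch; the optimum illumination pattern would then be the one that minimizes (c) in consideration of the amount of information of the pattern.

```python
import numpy as np

def comparison_value(reconstructed, captured):
    """Comparison value (c): resample the captured image (C) onto the grid of the
    reconstructed image (F) and return the sum of squared per-pixel differences."""
    rows = np.linspace(0, captured.shape[0] - 1, reconstructed.shape[0]).round().astype(int)
    cols = np.linspace(0, captured.shape[1] - 1, reconstructed.shape[1]).round().astype(int)
    resized = captured[np.ix_(rows, cols)]
    return float(np.sum((reconstructed.astype(float) - resized) ** 2))

# Example: compare a 32x32 reconstruction against a 256x256 camera image.
rng = np.random.default_rng(0)
c = comparison_value(rng.random((32, 32)), rng.random((256, 256)))
```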
For example, it is only necessary to obtain reconstructed images for one or more observed objects, analyze the size of a necessary irradiation region from these images, and control the illumination pattern so that the illumination pattern has the analyzed size. In this case, the optical system control unit controls the optical system so that the illumination pattern becomes an obtained optimized illumination pattern. The light source system adjusts the illumination pattern in accordance with a control signal from the optical system control unit. In this manner, it is possible to obtain an optimized illumination pattern. FIG.12is a drawing of a captured image showing an example in which an illumination pattern is actually optimized for various observed objects. In this example, image reconstruction is first performed a plurality of times to ascertain a range of the size of the observed object. This can be easily implemented by analyzing a region where a reconstructed image exists. Then, a size of the illumination pattern capable of covering a region necessary for imaging the observed object is obtained, the illumination pattern included in the size is varied to obtain the comparison value (s), and an optimum illumination pattern is found from the comparison value (s) of the illumination pattern. Using this optimized illumination pattern, the amount of information can be significantly reduced, the amount of processing required for reconstructing subsequent images can be significantly reduced, and high-speed imaging can be performed. In a preferred example of this imaging device, the size of the observed object is first ascertained on the basis of the above-described method, and the size of the irradiation pattern is adjusted. Thereafter, the pattern itself in the illumination pattern across the adjusted size (region) is changed to obtain the comparison value (e) in a plurality of patterns. Moreover, it is only necessary to obtain an optimum illumination pattern by comparing with the comparison value (c). The comparison value (c) may be the sum of squares of the difference values of the pixels. FIG.13is a conceptual diagram showing an example of a process of optimizing an illumination pattern. In this example, the illumination pattern is optimized on the basis of an optical signal (a GMI image) without reconstructing the image. This example further includes a detection system configured to image the observed object5, an estimated signal calculation unit configured to obtain an estimated signal gcobtained by estimating the optical signal detected by one or a few pixel detection elements on the basis of a captured image of the observed object imaged by the detection system, and an arithmetic unit configured to change the illumination pattern while comparing optical signals g detected by one or a few pixel detection elements and the estimated signal gcestimated by the estimated signal calculation unit. In this example, for example, the detection system images an observed object and obtains a captured image. Then, image analysis is performed on the captured image, and a composition or tissue of each part is ascertained. Then, light information corresponding to each composition or tissue is obtained from a table storing information about light (e.g., fluorescence) emitted from each composition and tissue when light irradiation is performed or the like. In this manner, it is possible to ascertain a type of light response when light is radiated to the observed object. 
Then, it is possible to estimate the optical signal g on the basis of the captured image. This is the estimated signal gcestimated by the estimated signal calculation unit. The optical signal g is, for example, a spectrum as shown inFIG.7. Then, it is only necessary for the second illumination pattern control unit107to obtain the evaluation value (8) for the estimated signal gcand the optical signal g after adjustment is performed so that a corresponding positional relationship between the estimated signal gcand the optical signal g is correct by adjusting relative intensities of the estimated signal gcand the optical signal g to the same degree and further performing automatic matching of the shape of the spectrum (performing adjustment so that an amount of overlapping is maximized). For example, s may be a sum of differences between the relative intensities of the estimated signal gcand the optical signal g per unit time (or an absolute value of a difference or a square of the difference). Alternatively, for example, the estimated signal gcand the optical signal g may be converted into a new coordinate domain to achieve a difference in a relative intensity or the like in the domain. It is only necessary to obtain the evaluation value (ε) on the basis of various illumination patterns and optimize the illumination pattern using the evaluation value (ε). FIG.14is a conceptual diagram showing an example of a process of optimizing the illumination pattern. This example is used, for example, when information about an observed object (for example, a pattern of an image obtained by reconstructing the observed object in an image reconstruction unit) has already been stored. That is, this example further includes a control unit configured to change the illumination pattern using an image (F) of the observed object reconstructed by the image reconstruction unit. In this example, for example, the control unit performs pattern authentication with the image (F) of the reconstructed observed object and a pattern of the image stored in advance. Because the pattern authentication technology is well-known, pattern authentication can be implemented by installing a well-known pattern authentication program in a computer. This example can be effectively used, for example, when an object (an accepted item or a rejected item) is selected or when inspecting for the presence or absence of an object. Also, this example can be used for the purpose of automatically measuring the number of cells contained in a specific region. That is, a preferred example of the imaging device further includes an observed object determination unit configured to classify an observed object using the image of the observed object reconstructed by the image reconstruction unit. FIG.15is a conceptual diagram showing an example of a process of optimizing the illumination pattern. This example is used, for example, when information about the observed object (for example, the pattern of the optical signal g) has already been stored. This example further includes a control unit configured to change the illumination pattern using the optical signal g detected by one or a few pixel detection elements. In this example, for example, the control unit performs pattern authentication with a pattern of the optical signal g detected by one or a small number of pixel detection elements and the optical signal g stored in advance. 
Because the pattern authentication technology is well-known, pattern authentication can be implemented by installing a well-known pattern authentication program in a computer. Alternatively, for example, the two signals g may be converted into a new coordinate domain to achieve a difference in a relative intensity or the like within the domain. Also, a preferred example of the imaging device further includes a reconstructed image classification unit configured to classify the reconstructed image of the observed object using a plurality of images of the observed object reconstructed by the image reconstruction unit. The image (F) classified by the reconstructed image classification unit is used by the control unit or/and the determination unit. A preferred usage form of this analysis device is a flow cytometer having any one of the above-described analysis devices. This flow cytometer has a flow cell including the light irradiation region3. The flow cytometer preferably includes a sorting unit configured to recognize an observed object on the basis of an analysis result of the analysis unit11and sort the observed object5. More specifically, when the observed object5is a target object and when the observed object5is not a target object, the target object can be sorted by making a path after passing through the sorting unit different. In the flow cytometer, the target object may be analyzed in advance and information indicating the target object (a threshold value or the like) may be stored in the storage unit. Also, an object including a large number of target objects may be observed, the classification unit may recognize the object as the observed object, and classification information about the target object may be subjected to a machine learning algorithm. This classification information is, for example, a characteristic peak included in various spectra. When the observed object moves through the flow cell and reaches the light irradiation region, light from the light source is radiated to the observed object. Then, the light-receiving unit receives an optical signal from the observed object. The analysis unit analyzes the optical signal. At that time, the analysis unit reads the classification information of the target object stored in the storage unit and determines whether the observed object is a target object by comparing the classification information with the optical signal. If the analysis unit determines that the observed observed object is the target object, the analysis unit sends a control signal corresponding to the observed object serving as the target object to the classification unit. The classification unit receiving the control signal adjusts the path and guides the observed object to the path of the target object. In this manner, it is possible to recognize the target object and an object other than the target object and classify the objects. At this time, it is only necessary to measure a plurality of observed objects and appropriately optimize the optical system or the detection system in the analysis unit. Then, the classification can be rapidly performed with appropriate accuracy. Example 1 Hereinafter, the present invention will be specifically described with reference to examples. <Example 1-1> (Supervised Machine Learning, Decompression/Conversion of Image into Temporal Signal Using Optical Structure, Classifier Formation, and Classification) A computer used in present example included a 2.8 GHz Intel Core i7 processor and 16 GB of memory. 
First, as an object sample for learning, a total of 1100 images including 1000 images for learning and 100 images for classification accuracy measurement were provided for a face image group and a non-face image group in which each image had 19 pixels in length and 19 pixels in width (the image source of FIG. 16 is the Center for Biological and Computational Learning at MIT). Within the computer, noise (S/N=30 dB) was applied to the above-described image group, the above-described GMI process passing through an optical structure was virtually executed, and a temporal signal was generated. The optical structure was a patterned illumination optical structure or a detection optical structure in the experimental system, the optical structure used here was 19 pixels in width and 343 pixels in length (FIG. 17), and the temporal signal was generated for (1×19²) pixels (which are the same in number as those of the original image). A face or non-face label was attached to all waveform signals, and a classifier was formed by learning 1000 waveforms for learning using a linear classification type support vector machine technique. As a sample for classification accuracy measurement, a temporal signal was provided using the same optical structure on the basis of 100 new face and non-face images. The label was removed from this sample, the formed classifier performed automatic classification, and a correct answer rate of the label (face or non-face) was measured. On the other hand, a classifier was formed by giving noise (S/N=30 dB) to the same image sample, attaching a face or non-face label thereto, and learning 1000 images for learning using a support vector machine technique. As a sample for classification accuracy measurement, 100 new face and non-face images were similarly provided. The label was removed from this sample, the formed classifier performed automatic classification, and a correct answer rate of the label (face or non-face) was measured. The result was that the classification accuracy for a face or non-face temporal signal (the number of correct answers for face and non-face/total number×100) in the classifier learning the temporal signal sample was 87% and the classification accuracy for the face or non-face of the image in the classifier learning the image sample was 82%. According to this result, it was found that, even if a temporal signal generated by passing an image through the optical structure was learned and classified, it was possible to obtain a classification result at least equivalent to that of learning and classifying the original image. Also, the time required for learning 1000 samples and classifying 100 samples did not differ between the image case and the temporal signal case.
<Example 1-2> (Supervised Machine Learning, Compression/Conversion of Image into Temporal Signal Using Optical Structure, Classifier Formation, and Classification)
A computer used in the present example included a 2.8 GHz Intel Core i7 processor and 16 GB of memory. First, as an object sample for learning, a total of 1100 images including 1000 images for learning and 100 images for classification accuracy measurement were provided for a face image group and a non-face image group in which each image had 19 pixels in length and 19 pixels in width (FIG. 16). Within the computer, noise (S/N=30 dB) was applied to the above-described image group, the above-described GMI process passing through an optical structure was virtually executed, and a temporal signal (e.g., a GMI waveform) was generated.
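A minimal sketch of the supervised workflow shared by Examples 1-1 and 1-2 — virtually passing each 19×19 image through an optical structure to obtain a temporal signal and training a linear-classification support vector machine on the labeled waveforms — is given below. The column-wise convolution model, the random binary structure, and the random stand-in images and labels are assumptions of the sketch (the actual examples used the MIT CBCL face/non-face images).

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

def to_waveform(image, structure):
    """Virtually pass an image through an optical structure of the same width:
    convolve each image column with the matching structure column and sum,
    giving a temporal signal of length (image rows + structure rows - 1)."""
    signal = np.zeros(image.shape[0] + structure.shape[0] - 1)
    for col in range(image.shape[1]):
        signal += np.convolve(image[:, col], structure[:, col])
    return signal

rng = np.random.default_rng(0)
structure = (rng.random((343, 19)) < 0.5).astype(float)       # 343 x 19 bright/dark structure

# Random stand-ins for the 19x19 face / non-face images and their labels.
train_img, y_train = rng.random((1000, 19, 19)), rng.integers(0, 2, 1000)
test_img, y_test = rng.random((100, 19, 19)), rng.integers(0, 2, 100)

X_train = np.stack([to_waveform(im, structure) for im in train_img])
X_test = np.stack([to_waveform(im, structure) for im in test_img])

clf = LinearSVC().fit(X_train, y_train)                        # linear-classification SVM
print("correct answer rate:", accuracy_score(y_test, clf.predict(X_test)))
```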
The optical structure was a patterned illumination optical structure or a detection optical structure in the experimental system, and the optical structure used here had 19 pixels in width and 50 pixels in length (FIG.18), and the temporal signal was generated in 68 pixels and 81% of a total number of original image pixels were compressed. A face or non-face label was attached to all compressed temporal signals and a classifier was formed by learning 1000 waveforms for learning using the support vector machine technique. As a sample for classification accuracy measurement, a compressed temporal signal was provided using a similar optical structure on the basis of 100 new face and non-face images. The label was removed from this sample, the formed classifier performed automatic classification, and a correct answer rate of the label (face or non-face) was measured. On the other hand, a classifier was formed by giving noise (S/N=30 dB) to the same image sample, attaching a face or non-face label thereto, and learning 1000 images for learning using a linear classification type support vector machine technique. As a sample for classification accuracy measurement, 100 new face and non-face images were similarly provided. The label was removed from this sample, the formed classifier performed automatic classification, and a correct answer rate of the label (face or non-face) was measured. The result was that the classification accuracy for a compressed temporal waveform signal of the face or non-face (the number of correct answers for face and non-face/total number×100) in the classifier learning the compressed temporal waveform signal sample was 75% and the classification accuracy for the face or non-face of the image in the classifier learning the image sample was 82%. According to this result, it was found that the classification accuracy using machine learning can also maintain equivalent accuracy according to optical image compression through the optical structure. Also, a time taken to learn 1000 temporal waveform samples was 399 seconds and a time taken to learn 1000 image samples was 475 seconds. According to this result, it was found that it is possible to shorten the time by 16% as compared with the original image in the case of the compressed time signal in the learning of the same sample. Furthermore, a time taken to classify 100 compressed temporal samples was 0.0065 seconds, and a time taken to classify 100 image samples was 0.0147 seconds. According to this result, it was found that it is possible to shorten the time by 56% compared with the original image classification in the case of compressed time signal classification in the classification of the same sample. Example 2 Example 2 (Cell Classification by Unsupervised Machine Learning) A computer used in this embodiment included a 2.8 GHz Intel Core i7 processor and 16 GB of memory. For a sample, viable cell staining was performed using calcein AM for a single-cell group generated by dispersing mouse spleen. A fluorescence-labeled single cell solution as described above was spread on a glass slide and a large number of fluorescence images of a single-cell group were captured by an sCMOS camera (Flash 4.0 manufactured by Hamamatsu Photonics K.K.) using a fluorescence microscope. 
This image data was read within the computer, the position of a single cell was specified by software (imagej), and a single-cell periphery was partitioned using 70 pixels in length and width to cut out 2165 samples of a large number of single-cell image group samples (FIG.19). This single-cell image group included images containing single cells having different sizes and images including a plurality of cells or objects other than cells. Within the computer, noise (S/N=30 dB) was applied to the above-described image group, the above-described GMI process passing through an optical structure was virtually executed, a temporal signal (e.g., a GMI waveform) was generated, and a temporal waveform group of virtual flow cytometry cells was provided (FIG.20). The optical structure used here had 70 pixels in length and 400 pixels in width, and the temporal signal had 470 pixels (FIG.21). (On the other hand, noise (S/N=30 dB) was applied to the same image sample, and a virtual flow cytometry cell image was provided.) Single-cell temporal waveform samples provided as described above were classified using unsupervised machine learning classification using software (matlab). Specifically, the single-cell temporal waveform samples were classified into 20 types of cell groups using a k-means technique. A representative (average) temporal signal was generated from the cell temporal waveform group included in each same classification group and a cell image was generated on the basis of this temporal signal (FIGS.22and23). As a result, it is can be seen that single cells, a plurality of cells, waste, a cell size, or the like are correctly classified. These are one of the greatest error sources in conventional cytometry and it was showed that correct classification is also made in unsupervised machine learning using a compressed temporal waveform signal through an optical structure. Example 3 Example 3 (Optimization of Optical Structure by Supervised Machine Learning) In the present example, samples similar to those used in Example 1 were used. As an initial optical structure, a random structure was provided with 80 pixels in length and 20 pixels in width and samples of the temporal waveform signal group were provided through the image samples of Example 1. As in Example 1, learning was performed using a linear classification type support vector machine technique, and the classification accuracy (the number of correct answers for face and non-face/total number×100) was obtained. This accuracy of classification was set as an objective function, and the optical structure was optimized using machine learning in order to maximize the objective function (FIG.24). Specifically, a genetic algorithm was used. The number of individuals was 200, the number of generations was 16,000, roulette selection was used for selection, and uniform crossover was used for crossover. As a result, as in Example 1, evaluation was performed using image samples unused for optimization. The classification accuracy was 65% in the initial random optical structure, the classification accuracy was 75% in the optical structure after optimization, and an improvement in classification accuracy of 10% was exhibited. Next, an example of the analysis unit11will be described with reference toFIG.25. FIG.25shows an example of an analysis system200. The analysis system200includes a flow cytometer300, the analysis unit11, and a computer400. The flow cytometer300observes and sorts the observed object5. 
The flow cytometer300outputs an optical signal related to the observed object5to the analysis unit11. The analysis unit11classifies the observed object5from the flow cytometer300on the basis of an optical signal related to the observed object5. The computer400mechanically learns the optical signal related to the observed object5observed by the flow cytometer300. The computer400changes a classification algorithm of the classification unit101on the basis of a machine learning result. The flow cytometer300includes a light-receiving unit7and a sorting unit33. The light-receiving unit7receives the optical signal from the observed object5and converts the received optical signal into an electrical signal ES. The light-receiving unit7outputs the electrical signal ES obtained through the conversion to a signal input unit100. The sorting unit33sorts the observed object5on the basis of a signal classification result R indicating the result of analyzing the electrical signal ES in the analysis unit11. The computer400includes a storage unit9and a machine learning unit401. The storage unit9stores an input signal SS. The machine learning unit401performs machine learning on the optical signal stored in the storage unit9. In this example, the analysis unit11includes the signal input unit100and a classification unit101. The classification unit101includes a logic circuit capable of changing a logic circuit configuration. The logic circuit may be a programmable logic device such as a field-programmable gate array (FPGA), or an application specific integrated circuit (ASIC). The machine learning unit401changes the classification algorithm of the classification unit101included in the analysis unit11on the basis of the machine learning result. In this example, the machine learning unit401changes the logic circuit configuration of the classification unit101on the basis of the machine learning result. Specifically, the machine learning unit401configures a classification logic LP, which is a logic circuit configuration of a classification algorithm suitable for the observed object5, on the basis of the machine learning result, and changes the logic circuit. The signal input unit100acquires the electrical signal ES from the light-receiving unit7. The signal input unit100outputs the electrical signal ES acquired from the light-receiving unit7as an input signal SS to the storage unit9and the classification unit101. The signal input unit100may remove noise of the electrical signal ES by applying a filter to the electrical signal ES. The noise is, for example, high-frequency noise, shot noise, and the like. By removing the noise of the electrical signal ES, the signal input unit100can stabilize a trigger position at which the electrical signal ES starts to be acquired as the input signal SS. The signal input unit100can output a signal suitable for machine learning as the input signal SS by stabilizing the trigger position. Also, the signal input unit100may distinguish whether the observed object5is a single cell or a plurality of cells and whether the observed object5is waste and distinguish a cell size of the observed object5and the like, and determine whether or not to output the signal as the input signal SS. The above-described filter is changed in accordance with the observed object5. The filter removes noise by making the electrical signal ES have a gentle waveform. 
Specifically, the filter is a filter for performing comparison with a threshold value of the electrical signal ES, a filter for performing a moving average operation on the electrical signal ES and comparing a value obtained through the moving average operation with a threshold value, a filter for differentiating the value obtained through the moving average operation on the electrical signal ES and comparing the differentiated value with a threshold value, or the like. The classification unit 101 acquires the input signal SS from the signal input unit 100. The classification unit 101 classifies the observed object 5 observed by the flow cytometer 300 on the basis of the input signal SS acquired from the signal input unit 100. The classification unit 101 classifies the input signal SS through the logic circuit, thereby determining the observed object 5. By classifying the observed object 5 through the logic circuit, the classification unit 101 can classify the observed object 5 at a higher speed than a general-purpose computer. As described above, the light-receiving unit 7 receives scattered light, transmitted light, fluorescence, or electromagnetic waves from the observed object located in the light irradiation region irradiated with the light from the light source, and converts the received light or electromagnetic waves into an electrical signal. The analysis unit 11 analyzes the observed object 5 on the basis of a signal extracted on the basis of a time axis of the electrical signal ES output from the light-receiving unit 7. Also, the analysis unit 11 includes the signal input unit 100. The signal input unit 100 filters the electrical signal ES output by the flow cytometer 300. The signal input unit 100 filters the electrical signal ES to output a signal with reduced noise as the input signal SS to the classification unit 101 and the storage unit 9. The machine learning unit 401 can perform machine learning on the basis of the input signal SS with reduced noise and can improve the accuracy of classification of the observed object 5. Also, the signal input unit 100 may include a logic circuit. When the signal input unit 100 includes a logic circuit, the filter configuration may be changed on the basis of the machine learning result. Also, the analysis unit 11 includes the classification unit 101. Because the classification unit 101 includes the logic circuit, the classification unit 101 can classify the observed object 5 in a shorter time than computation with a general-purpose computer. Next, the support vector machine technique, which is an example of the classification algorithm of the analysis unit 11 described above, will be described with reference to FIG. 26. FIG. 26 is a diagram showing an example of a discriminant calculation circuit of the analysis unit 11. A discriminant of the support vector machine technique can be represented by Equation (1). Classification is made on the basis of the sign of the result of Equation (1).
[Math. 1]
f(x) = b + \sum_{j}^{N_{SV}} \alpha_j Y_j \exp\left[-\sum_{k}^{N_{SL}} \left(\frac{\hat{X}_{jk} - \hat{x}_k}{K}\right)^2\right]   (1)
b included in Equation (1) is a constant. Here, it is possible to adjust a classification condition of the support vector machine technique by changing b included in Equation (1). For example, if b included in Equation (1) is changed so that the classification condition becomes strict, a false positive rate can be minimized. α and Y included in Equation (1) are values obtained using machine learning. Here, an element included in Equation (1) in which X is marked with a circumflex (hat symbol) thereabove will be described as X(hat).
X(hat) included in Equation (1) can be represented by Equation (2).
[Math. 2]
\hat{X}_{jk} = \frac{X_{jk} - \mu_k}{\sigma_k}   (2)
X included in Equation (2) is a matrix obtained using machine learning. X(hat)jk included in Equation (2) is a value obtained by normalizing the matrix X obtained using machine learning.
[Math. 3]
\hat{x}_k = \frac{x_k - \mu_k}{\sigma_k}   (3)
x included in Equation (3) is data input to the analysis unit 11. In this example, the data input to the analysis unit 11 is a signal extracted on the basis of the time axis of the electrical signal output from the light-receiving unit 7. An element included in Equation (3) in which x is marked with a circumflex (hat symbol) thereabove will be described as x(hat). In Equation (3), x(hat)k is a value obtained by normalizing x. Here, if the above-described Equation (1) is implemented as a logic circuit, the logic circuit scale becomes enormous and may not fit within the logic circuit size of an FPGA or PLD. Therefore, a logic circuit based on Equation (4) is mounted on the classification unit 101.
[Math. 4]
f(x) = b + \sum_{j}^{N_{SV}} \beta_j \exp\left[\tilde{K}\sum_{k}^{N_{SL}} \left(\tilde{X}_{jk} - x_k \tilde{\sigma}_k\right)^2\right]   (4)
An element included in Equation (4) in which K is marked with a ˜ (tilde symbol) thereabove will be described as K(tilde). An element included in Equation (4) in which X is marked with a ˜ (tilde symbol) thereabove will be described as X(tilde). An element included in Equation (4) in which σ is marked with a ˜ (tilde symbol) thereabove will be described as σ(tilde). βj, K(tilde), X(tilde)jk, and σ(tilde)k included in Equation (4) can be represented by Equations (5). The machine learning unit 401 provided in the computer 400 calculates Equations (5) in advance. The calculation result is incorporated in the logic circuit included in the analysis unit 11. b and K(tilde) included in Equation (4) are constants, βj and σ(tilde)k are vectors, and X(tilde)jk is a matrix.
[Math. 5]
\beta_j = \alpha_j Y_j, \quad \tilde{K} = -\frac{1}{K^2}, \quad \tilde{X}_{jk} = \hat{X}_{jk} + \frac{\mu_k}{\sigma_k}, \quad \tilde{\sigma}_k = \frac{1}{\sigma_k}   (5)
FIG. 26(a) shows a discriminant calculation circuit of the above-described Equation (4). The calculation time is shortened by calculating the addition over k included in Equation (4) in parallel. By shortening the calculation time, the analysis unit 11 can shorten the time required for classification of the observed object 5. FIG. 26(b) shows a discriminant calculation circuit for calculating the above-described Equation (4) at a higher speed. In the discriminant calculation circuit shown in FIG. 26(b), in addition to the configuration in which the above-described addition over k is subjected to parallel processing, the addition over j included in Equation (4) is also subjected to parallel processing. As a result, the analysis unit 11 can classify the observed object 5 at a higher speed than with the discriminant calculation circuit shown in FIG. 26(a). Although a method of implementing the support vector machine technique in the discriminant calculation circuit has been described above, the classification algorithm of the classification unit 101 is not limited thereto (a numerical sketch of Equations (4) and (5) is given below). Next, an example in which the analysis device performs machine learning on the observed object 5 observed in flow cytometry will be described. FIG. 27 is a diagram showing conventional cytometry. Conventional cytometry has a problem in that it is difficult to observe the observed object at a high speed and to change the measurement method for each measurement purpose. FIG. 28 is a diagram showing an example of a processing method for observing the observed object of the GMI method that solves the above-described problem.
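As a numerical illustration of the discriminant of Equation (4) with the constants of Equations (5) computed in advance, the following sketch vectorizes the sums over k and j, mirroring the parallel adders of the calculation circuits of FIG. 26. The random values stand in for a trained support vector machine and are not actual learned parameters.

```python
import numpy as np

def precompute(alpha, Y, X_hat, mu, sigma, K):
    """Equations (5): fold the constants so that the discriminant of Equation (4)
    needs no normalization of the incoming signal x at run time."""
    beta = alpha * Y                    # beta_j = alpha_j * Y_j
    K_tilde = -1.0 / K**2               # K~ = -1/K^2
    X_tilde = X_hat + mu / sigma        # X~_jk = X^_jk + mu_k / sigma_k
    sigma_tilde = 1.0 / sigma           # sigma~_k = 1 / sigma_k
    return beta, K_tilde, X_tilde, sigma_tilde

def discriminant(x, b, beta, K_tilde, X_tilde, sigma_tilde):
    """Equation (4): f(x) = b + sum_j beta_j * exp(K~ * sum_k (X~_jk - x_k*sigma~_k)^2)."""
    d = X_tilde - x * sigma_tilde                       # shape (N_SV, N_SL)
    return b + np.sum(beta * np.exp(K_tilde * np.sum(d**2, axis=1)))

# Toy usage with random values standing in for a trained support vector machine.
rng = np.random.default_rng(0)
N_SV, N_SL = 8, 16
args = precompute(rng.random(N_SV), rng.choice([-1.0, 1.0], N_SV),
                  rng.standard_normal((N_SV, N_SL)), rng.random(N_SL),
                  rng.random(N_SL) + 0.1, K=2.0)
print(np.sign(discriminant(rng.standard_normal(N_SL), 0.0, *args)))   # classification by sign
```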
In the GMT method, pattern illumination is radiated to an observed object such as a cell moving along a flow path. The observed object irradiated with the pattern illumination emits electromagnetic waves. The electromagnetic waves emitted from the observed object are detected. Also, the pattern illumination radiated to the cells may be illumination for radiating uniform light. If the observed object is irradiated with the uniform light, the GMI method causes the electromagnetic waves emitted from the observed object to be transmitted through a pattern structure having a plurality of regions having different electromagnetic wave transmission characteristics. In the GMI method, the electromagnetic waves transmitted through the pattern structure are detected. FIG.29shows a concept thereof. A time required for image reconstruction and feature quantity extraction and analysis from the image is shortened by directly applying machine learning to the temporal waveform signal and a processing speed is significantly shortened by analyzing compressed small data as it is. FromFIG.30, specific implementation examples in cell classification are shown.FIG.30shows three types of cells specifically used. Miapaca and MCf7 have similar sizes and similar characteristics and k562 has a smaller size. All are dyed green by dead cell staining (LIVE/DEAD Fixable Green Dead Cell Stain Kit, for 488 nm excitation, Thermo Fisher scientific) and classified using machine learning. Only MCF7 is subjected to nuclear staining (DAPI) in blue and this is used for verification of classification accuracy thereafter. FIG.30shows a method of forming a classifier. Miapaca, MCF7, and k562 are separately moved along a flow path, and a temporal waveform signal is generated during imaging based on a GMI method. Threshold value processing is performed on the generated signal and each cell type label is attached thereto. A waveform signal group with this cell type label is incorporated into the computer and a classifier for classifying the waveform signal group is formed. As a classifier formation method, a support vector machine method is applied. Next, different cell types (here, MCF7 and Miapaca) shown inFIG.31are experimentally mixed, cell classification is performed on the temporal waveform signal generated according to GMI using a previously provided classifier, and verification of the classification result is performed according to a total of DAPI signal intensities. FIG.32shows a result thereof. A concentration of MCF7 using machine learning classification for a temporal waveform of a green fluorescence signal is shown with respect to a concentration of MCF7 according to DAPI (blue, correct answer) classification when the concentration of MCF7 in a mixed liquor is changed and a correct answer is shown with high accuracy (>87%). When MCF7 and Miapaca are compared in a green fluorescence image, it is difficult to perform classification with the human eye and the usefulness of machine learning is obvious. High-speed and high-accuracy cell classification implementation has been demonstrated. Further,FIGS.33to35show an example in which a fluid system is implemented in flow cytometry implementation. 
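A minimal sketch of the classifier-formation step just described for FIG. 30 — threshold-value processing of the temporal signal measured separately for each cell type, attachment of the cell-type label, and support vector machine training — is shown below. The trigger logic, window length, and random stand-in signals are assumptions of the sketch.

```python
import numpy as np
from sklearn.svm import SVC

def extract_waveforms(signal, threshold, window):
    """Threshold-value processing: each time the continuous GMI signal exceeds the
    threshold, cut out a fixed-length window as one cell's temporal waveform."""
    waveforms, t = [], 0
    while t < len(signal) - window:
        if signal[t] > threshold:
            waveforms.append(signal[t:t + window])
            t += window                     # move past this event before re-arming
        else:
            t += 1
    return np.array(waveforms)

rng = np.random.default_rng(0)
# Signals measured separately for MIAPaCa, MCF7 and K562 (random stand-ins here).
streams = {"MIAPaCa": rng.random(20000), "MCF7": rng.random(20000), "K562": rng.random(20000)}

X, y = [], []
for label, s in streams.items():
    w = extract_waveforms(s, threshold=0.95, window=100)
    X.append(w)
    y += [label] * len(w)

clf = SVC().fit(np.concatenate(X), y)       # classifier formed by the support vector machine method
```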
Here, although the above-described cell classification accuracy when the fluid experimental system is changed was verified, no correct answer was given according to an intensity of a DAPI signal and a temporal waveform group with a cell type label was incorporated into the computer and was classified (without teaching the correct answer to the machine) and accuracy was verified. As shown inFIG.8, it is generally known that it is possible to align cells at the same width (fluctuation) on the same stream line using microfluidic engineering (a flow focusing method). InFIGS.34and35, an extent to which classification accuracy is improved with respect to the number of temporal waveforms used for learning when a fluid system is changed is plotted. Also, a cell temporal waveform to be learned uses randomly mixed cells. In an ideal fluid experiment system, the cells flow through the same streamline and generate a very uniform GMI temporal waveform. At this time, as shown inFIG.34(a), the classification accuracy sharply increases and reaches about 98%. On the other hand, as shown inFIG.34(b), when a flow focus is relaxed and a width is given to the streamline, the increase in accuracy becomes moderately gentle and the accuracy achieved also slightly decreases, but an accuracy of more than 95% can be still achieved. However, in practical application of fluid experiments, there are vibrations of the flow path, instability of the optical system, and the like and a robust classification method is required. At this time, if learning is performed with the waveform signal when the flow focus is relaxed and a waveform signal with an enhanced follow focus is classified, a classification accuracy of about 90% can be robustly obtained as shown inFIG.35(a)and the accuracy is also stable. On the other hand, as shown inFIG.35(b), when learning is performed in a waveform signal when the flow focus is enhanced and a waveform signal with a relaxed flow focus is classified, the classification accuracy does not reach 90% and the accuracy is also unstable. From this, it was shown that generalization of machine learning can be implemented and practicality can be improved by performing learning with a greater breadth of data. Also, in the experiment shown inFIG.7described above, data with the enhanced flow focus and data with the relaxed flow focus are mixed and used for learning. In other words, when the flow focus is enhanced and tested on the basis of teacher information obtained through learning of after the flow focus is relaxed, the most robust classification accuracy can be obtained. On the other hand, when the flow focus is enhanced and tested on the basis of the teacher information obtained through learning of after the flow focus is enhanced, the most accurate classification accuracy can be obtained if the conditions are uniform. Also, when testing is performed on the basis of teacher information obtained by combining data learned by relaxing the flow focus and data learned by enhancing the flow focus, robust and accurate classification accuracy can be obtained. In other words, the analysis device mechanically learns the observed object5in accordance with a flow line width adjusted by the flow path width adjusting unit provided in the flow cytometer300. In the following description, the flow line width will be also described as a flow path width. 
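As the following paragraphs formalize, combining waveforms learned under a relaxed flow focus with waveforms learned under an enhanced flow focus yields classification that is both robust and accurate. A minimal sketch of such combined training is given below; the array shapes and random stand-in data are assumptions of the sketch.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Labeled waveforms recorded with the flow focus enhanced and with it relaxed
# (random stand-ins for the measured GMI temporal waveforms).
X_enhanced, y_enhanced = rng.random((500, 470)), rng.integers(0, 3, 500)
X_relaxed, y_relaxed = rng.random((500, 470)), rng.integers(0, 3, 500)

# Teacher information obtained by combining both conditions.
clf = SVC().fit(np.concatenate([X_enhanced, X_relaxed]),
                np.concatenate([y_enhanced, y_relaxed]))
```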
The analysis device can perform more accurate and robust classification by analyzing the observed object5on the basis of teaching information obtained by combining mechanically learned data in a state in which the flow line width is wider than a diameter of the observed object5and mechanically learned data in a state in which the flow line width is a flow line width according to the diameter of the observed object5. An analysis device of the present embodiment includes a flow path along which an observed object is able to move; a light-emitting unit configured to emit light radiated to a light irradiation region of the flow path; a pattern structure unit having a plurality of regions whose light transmission characteristics are different; a flow path width control unit configured to variably control a movable flow path width of the observed object which moves along the flow path; a detection unit configured to detect electromagnetic waves emitted from the observed object irradiated with the light on the basis of a region and relative movement within the flow path of the light and the observed object by radiating the light to the observed object of the light irradiation region; an acquisition unit configured to acquire a change in an intensity of the electromagnetic waves detected by the detection unit over time as an observed result signal indicating a state of the observed object when the light is radiated to the observed object; a teacher information generation unit configured to generate teacher information indicating a criterion for classifying the state of the observed object using machine learning on the basis of the observed result signal acquired by the acquisition unit and a flow path width when the observed result signal is acquired; and an estimation unit configured to estimate the state of the observed object which moves along the flow path on the basis of the observed result signal acquired by the acquisition unit and the teacher information generated by the teacher information generation unit. Also, in the analysis device, the flow path width control unit provided in the analysis device may control the flow path width so that the flow path width becomes a first flow path width which is a width according to a diameter of the observed object, the teacher information generation unit may generate first teacher information based on a first observed result signal detected by the detection unit as the teacher information in the first flow path width controlled by the flow path width control unit, and the estimation unit may estimate the state of the observed object which moves along the flow path on the basis of the first teacher information generated by the teacher information generation unit and the observed result signal acquired by the acquisition unit. 
Also, in the analysis device, the flow path width control unit provided in the analysis device may control the flow path width so that the flow path becomes a second flow path width which is a width based on the diameter of the observed object and is wider than the first flow path width, the teacher information generation unit may further generate second teacher information based on a second observed result signal detected by the detection unit as teacher information in the second flow path width controlled by the flow path width control unit, and the estimation unit may estimate the state of the observed object which moves along the flow path on the basis of the first teacher information generated by the teacher information generation unit, the second teacher information generated by the teacher information generation unit, and the observed result signal acquired by the acquisition unit. Also, in the analysis device, the flow path width control unit provided in the analysis device may control the flow path width so that the flow path width becomes the first flow path width which is the width based on the diameter of the observed object and has a narrower width than the second flow path width, the teach information generation unit may further generate the first teacher information based on the first observed result signal detected by the detection unit in the first flow path width controlled by the flow path width control unit, and the estimation unit may estimate the state of the observed object which moves along the flow path on the basis of the first teacher information generated by the teacher information generation unit, the second teacher information generated by the teacher information generation unit, and the observed result signal acquired by the acquisition unit. REFERENCE SIGNS LIST 1Light source3Light irradiation region5Observed object7Light-receiving unit9Storage unit11Analysis unit13Optical system control unit21Plurality of light regions25Light-receiving region31Flow cell33Sorting unit200Analysis system300Flow cytometer400Computer401Machine learning unit | 90,888 |
11861890 | DETAILED DESCRIPTION A style-based generative network architecture enables scale-specific control of the synthesized output. A style-based generator system includes a mapping network and a synthesis network. Conceptually, in an embodiment, feature maps (containing spatially varying information representing content of the output data, where each feature map is one channel of intermediate activations) generated by different layers of the synthesis network are modified based on style control signals provided by the mapping network. The style control signals for different layers of the synthesis network may be generated from the same or different latent codes. A latent code may be a random N-dimensional vector drawn from e.g. a Gaussian distribution. The style control signals for different layers of the synthesis network may be generated from the same or different mapping networks. Additionally, spatial noise may be injected into each layer of the synthesis network. FIG.1Aillustrates a block diagram of a style-based generator system100, in accordance with an embodiment. The style-based generator system100includes a mapping neural network110, a style conversion unit115, and a synthesis neural network140. After the synthesis neural network140is trained, the synthesis neural network140may be deployed without the mapping neural network110when the intermediate latent code(s) and/or the style signals produced by the style conversion unit115are pre-computed. In an embodiment, additional style conversion units115may be included to convert the intermediate latent code generated by the mapping neural network110into a second style signal or to convert a different intermediate latent code into the second style signal. One or more additional mapping neural networks110may be included in the style-based generator system100to generate additional intermediate latent codes from the latent code or additional latent codes. The style-based generator system100may be implemented by a program, custom circuitry, or by a combination of custom circuitry and a program. For example, the style-based generator system100may be implemented using a GPU (graphics processing unit), CPU (central processing unit), or any processor capable of performing the operations described herein. Furthermore, persons of ordinary skill in the art will understand that any system that performs the operations of the style-based generator system100is within the scope and spirit of embodiments of the present invention. Conventionally, a latent code is provided to a generator through an input layer, such as the first layer of a feedforward neural network. In contrast, in an embodiment, instead of receiving the latent code, the synthesis neural network140starts from a learned constant and the latent code is input to the mapping neural network110. In an embodiment, the first intermediate data is the learned constant. Given a latent code z in the input latent space Z, a non-linear mapping network ƒ: Z→W first produces intermediate latent code w∈W. The mapping neural network110may be configured to implement the non-linear mapping network. In an embodiment, the dimensions of input and output activations in the input latent space Z and the intermediate latent space W are equal (e.g.,512). In an embodiment, the mapping function ƒ is implemented using an 8-layer MLP (multilayer perceptron, i.e., a neural network consisting of only fully-connected layers). 
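A minimal sketch of such a mapping network f: Z → W as an 8-layer MLP with 512-dimensional activations is shown below. The normalization of the input latent code and the leaky-ReLU activations are illustrative choices for the sketch, not the exact trained configuration.

```python
import torch
from torch import nn

class MappingNetwork(nn.Module):
    """Non-linear mapping f: Z -> W as an 8-layer MLP (fully-connected layers only)."""
    def __init__(self, dim=512, num_layers=8):
        super().__init__()
        layers = []
        for _ in range(num_layers):
            layers += [nn.Linear(dim, dim), nn.LeakyReLU(0.2)]
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        # Normalize the input latent code, then map it to the intermediate latent space W.
        z = z / torch.sqrt(torch.mean(z ** 2, dim=1, keepdim=True) + 1e-8)
        return self.net(z)

w = MappingNetwork()(torch.randn(4, 512))   # four intermediate latent codes w
```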
While the conventional generator only feeds the latent code though the input layer of the generator, the mapping neural network110instead maps the input latent code z to the intermediate latent space W to produce the intermediate latent code w. The style conversion unit115converts the intermediate latent code w into a first style signal. One or more intermediate latent codes w are converted into spatially invariant styles including the first style signal and a second style signal. In contrast with conventional style transfer techniques, the spatially invariant styles are computed from a vector, namely the intermediate latent code w, instead of from an example image. The one or more intermediate latent codes w may be generated by one or more mapping neural networks110for one or more respective latent codes z. The synthesis neural network140processes the first intermediate data (e.g., a learned constant encoded as a feature map) according to the style signals, for example, increasing density of the first intermediate data from 4×4 to 8×8 and continuing until the output data density is reached. In an embodiment, the style conversion unit115performs an affine transformation. The style conversion unit115may be trained to learn the affine transformation during training of the synthesis neural network140. The first style signal controls operations at a first layer120of the synthesis neural network140to produce modified first intermediate data. In an embodiment, the first style signal controls an adaptive instance normalization (AdaIN) operation within the first layer120of the synthesis network140. In an embodiment, the AdaIN operation receives a set of content feature maps and a style signal and modifies the first-order statistics (i.e., the “style”) of the content feature maps to match first-order statistics defined by the style signal. The modified first intermediate data output by the first layer120is processed by processing layer(s)125to generate second intermediate data. In an embodiment, the processing layer(s)125include a 3×3 convolution layer. In an embodiment, the processing layer(s)125include a 3×3 convolution layer followed by an AdaIN operation that receives an additional style signal, not explicitly shown inFIG.1A. The second intermediate data is input to a second layer130of the synthesis neural network140. The second style signal controls operations at the second layer130to produce modified second intermediate data. In an embodiment, the first style signal modifies a first attribute encoded in the first intermediate data and the second style signal modifies a second attribute encoded in the first intermediate data and the second intermediate data. For example, the first intermediate data is coarse data compared with the second intermediate data and the first style is transferred to coarse feature maps at the first layer120while the second style is transferred to higher density feature maps at the second layer130. In an embodiment, the second layer130up-samples the second intermediate data and includes a 3×3 convolution layer followed by an AdaIN operation. In an embodiment, the second style signal controls an AdaIN operation within the second layer130of the synthesis network140. The modified second intermediate data output by the second layer130is processed by processing layer(s)135to generate output data including content corresponding to the second intermediate data. In an embodiment, multiple (e.g., 32, 48, 64, 96, etc.) 
channels of features in the modified second intermediate data are converted into the output data that is encoded as color channels (e.g., red, green, blue). In an embodiment, the processing layer(s) 135 include a 3×3 convolution layer. In an embodiment, the output data is an image including first attributes corresponding to a first scale and second attributes corresponding to a second scale, where the first scale is coarser compared with the second scale. The first scale may correspond to a scale of the feature maps processed by the first layer 120 and the second scale may correspond to a scale of the feature maps processed by the second layer 130. Accordingly, the first style signal modifies the first attributes at the first scale and the second style signal modifies the second attributes at the second scale. More illustrative information will now be set forth regarding various optional architectures and features with which the foregoing framework may be implemented, per the desires of the user. It should be strongly noted that the following information is set forth for illustrative purposes and should not be construed as limiting in any manner. Any of the following features may be optionally incorporated with or without the exclusion of other features described. FIG. 1B illustrates images generated by the style-based generator system 100, in accordance with an embodiment. The images are generated in 1024² resolution. In other embodiments, the images can be generated at a different resolution. Two different latent codes are used to control the styles of images generated by the style-based generator system 100. Specifically, a first portion of the styles is produced by the mapping neural network 110 and a style conversion unit 115 from the "destination" latent codes in the leftmost column. A second portion of the styles is produced by the same or an additional mapping neural network 110 and a corresponding style conversion unit 115 from the "source" latent codes in the top row. The style-based generator system 100 starts from a learned constant input at the synthesis neural network 140 and adjusts the "style" of the image at each convolution layer based on the latent code, therefore directly controlling the strength of image attributes, encoded in feature maps, at different scales. In other words, a given set of styles from the "source" data is copied to the "destination" data. Thus, the copied styles (coarse, middle, or fine) are taken from the "source" data while all the other styles are kept the same as in the "destination" data. The first portion of the styles (destination) is applied by the synthesis neural network 140 to process the learned constant, with a first subset of the first portion of the styles replaced with a corresponding second subset of the second portion of the styles (source). In an embodiment, the learned constant is a 4×4×512 constant tensor. In the second, third, and fourth rows of images in FIG. 1B, the second portion of the styles (source) replaces the first portion of the styles (destination) at coarse layers of the synthesis neural network 140. In an embodiment, the coarse layers correspond to coarse spatial densities 4²-8². In an embodiment, high-level attributes such as pose, general hair style, face shape, and eyeglasses are copied from the source, while other attributes, such as all colors (eyes, hair, lighting) and finer facial features of the destination, are retained.
In the fifth and sixth rows of images in FIG. 1B, the second portion of the styles (source) replaces the first portion of the styles (destination) at middle layers of the synthesis neural network 140. In an embodiment, the middle layers correspond to spatial densities of 16²-32². Smaller-scale facial features, hair style, and eyes open/closed are inherited from the source, while the pose, general face shape, and eyeglasses from the destination are preserved. Finally, in the last row of images in FIG. 1B, the second portion of the styles (source) replaces the first portion of the styles (destination) at high density (fine) layers of the synthesis neural network 140. In an embodiment, the fine layers correspond to spatial densities of 64²-1024². Using the styles from the second portion of the styles (source) for the fine layers inherits the color scheme and microstructure from the source while preserving the pose and general face shape from the destination. The architecture of the style-based generator system 100 enables control of the image synthesis via scale-specific modifications to the styles. The mapping network 110 and the affine transformations performed by the style conversion unit 115 can be viewed as a way to draw samples for each style from a learned distribution, and the synthesis network 140 provides a mechanism to generate a novel image based on a collection of styles. The effects of each style are localized in the synthesis network 140, i.e., modifying a specific subset of the styles can be expected to affect only certain attributes of the image. Using style signals from at least two different latent codes, as shown in FIG. 1B, is referred to as style mixing or mixing regularization. Style mixing during training decorrelates neighboring styles and enables more fine-grained control over the generated imagery. In an embodiment, during training a given percentage of images are generated using two random latent codes instead of one. When generating such an image, a random location (e.g., crossover point) in the synthesis neural network 140 may be selected where processing switches from using style signals generated using a first latent code to style signals generated using a second latent code. In an embodiment, two latent codes z1, z2 are processed by the mapping neural network 110, and the corresponding intermediate latent codes w1, w2 control the styles so that w1 applies before the crossover point and w2 after the crossover point (a minimal sketch of this mixing is given after TABLE 1 below). The mixing regularization technique prevents the synthesis neural network 140 from assuming that adjacent styles are correlated. TABLE 1 shows how enabling mixing regularization during training may improve localization of the styles considerably, indicated by improved (lower is better) Fréchet inception distances (FIDs) in scenarios where multiple latent codes are mixed at test time. The images shown in FIG. 1B are examples of images synthesized by mixing two latent codes at various scales. Each subset of styles controls meaningful high-level attributes of the image.

TABLE 1
FIDs for different mixing regularization ratios
                                  Number of latent codes (test time)
Mixing ratio (training time)        1        2        3        4
0%                                4.42     8.22    12.88    17.41
50%                               4.41     6.10     8.71    11.61
90%                               4.40     5.11     6.88     9.03
100%                              4.83     5.17     6.63     8.40

The mixing ratio indicates the percentage of training examples for which mixing regularization is enabled. A maximum of four different latent codes were randomly selected during test time and the crossover points between the different latent codes were also randomly selected.
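A minimal sketch of the mixing regularization just described — styles taken from the intermediate latent code w1 before a randomly chosen crossover point and from w2 after it — is given below. Representing the per-layer styles as rows of a (layers × 512) tensor, and the layer count shown, are assumptions of the sketch.

```python
import torch

def mix_styles(w1, w2, crossover):
    """Style mixing: layers before the crossover point use w1, layers after it use w2."""
    w = w1.clone()
    w[crossover:] = w2[crossover:]
    return w

num_layers, dim = 18, 512                        # e.g. 18 style inputs for a 1024x1024 output
w1 = torch.randn(num_layers, dim)                # from latent code z1 via the mapping network
w2 = torch.randn(num_layers, dim)                # from latent code z2
crossover = torch.randint(1, num_layers, (1,)).item()   # random crossover point
w_mixed = mix_styles(w1, w2, crossover)
```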
Mixing regularization improves the tolerance to these adverse operations significantly. As confirmed by the FIDs, the average quality of the images generated by the style-based generator system100is high, and even accessories such as eyeglasses and hats are successfully synthesized. For the images shown inFIG.1B, sampling from the extreme regions of W is avoided by using the so-called truncation trick that can be performed in W instead of Z. Note that the style-based generator system100may be implemented to enable application of the truncation selectively to low resolutions only, so that high-resolution details are not affected. Considering the distribution of training data, areas of low density are poorly represented and thus likely to be difficult for the style-based generator system100to learn. Non-uniform distributions of training data present a significant open problem in all generative modeling techniques. However, it is known that drawing latent vectors from a truncated or otherwise shrunk sampling space tends to improve average image quality, although some amount of variation is lost. In an embodiment, to improve training of the style-based generator system100, a center of mass of W is computed as w̄ = E_{z~P(z)}[f(z)]. In the case of one dataset of human faces (e.g., FFHQ, Flickr-Faces-HQ), the point represents a sort of an average face (ψ=0). The deviation of a given w is scaled down from the center as w′ = w̄ + ψ(w − w̄), where ψ<1. In conventional generative modeling systems, only a subset of the neural networks are amenable to such truncation, even when orthogonal regularization is used. In contrast, truncation in W space seems to work reliably even without changes to the loss function. FIG.1Cillustrates a flowchart of a method150for style-based generation, in accordance with an embodiment. The method150may be performed by a program, custom circuitry, or by a combination of custom circuitry and a program. For example, the method150may be executed by a GPU (graphics processing unit), CPU (central processing unit), or any processor capable of performing the operations of the style-based generator system100. Furthermore, persons of ordinary skill in the art will understand that any system that performs method150is within the scope and spirit of embodiments of the present invention. At step155, the mapping neural network110processes a latent code defined in an input space, to produce an intermediate latent code defined in an intermediate latent space. At step160, the intermediate latent code is converted into a first style signal by the style conversion unit115. At step165, the first style signal is applied at a first layer120of the synthesis neural network140to modify the first intermediate data according to the first style signal to produce modified first intermediate data. At step170, the modified first intermediate data is processed by the processing layer(s)125to produce the second intermediate data. At step175, a second style signal is applied at the second layer130of the synthesis neural network140to modify the second intermediate data according to the second style signal to produce modified second intermediate data. At step180, the modified second intermediate data is processed by the processing layer(s)135to produce output data including content corresponding to the second intermediate data. There are various definitions for disentanglement, but a common goal is a latent space that consists of linear subspaces, each of which controls one factor of variation.
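As a concrete illustration of the truncation trick described above (w̄ = E_{z~P(z)}[f(z)], w′ = w̄ + ψ(w − w̄)), the following sketch estimates the center of mass of W and applies truncation only to the low-resolution styles. The sample count, ψ value, and layer cutoff are assumptions, not values fixed by the patent.

```python
import numpy as np

def w_center_of_mass(mapping_fn, latent_dim=512, num_samples=10_000, rng=None):
    """Estimate w_bar = E_{z ~ P(z)}[f(z)] by averaging mapped latent samples."""
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal((num_samples, latent_dim)).astype(np.float32)
    return np.mean([mapping_fn(zi) for zi in z], axis=0)

def truncate(w, w_bar, psi=0.7):
    """Scale the deviation of w from the center: w' = w_bar + psi * (w - w_bar)."""
    return w_bar + psi * (w - w_bar)

def truncate_styles(styles, w_bar, psi=0.7, cutoff=8):
    """Apply truncation only to the first `cutoff` (low-resolution) layers,
    leaving high-resolution details unaffected."""
    return [truncate(w, w_bar, psi) if i < cutoff else w
            for i, w in enumerate(styles)]
```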
However, the sampling probability of each combination of factors in the latent space Z needs to match the corresponding density in the training data. A major benefit of the style-based generator system100is that the intermediate latent space W does not have to support sampling according to any fixed distribution; the sampling density for the style-based generator system100is induced by the learned piecewise continuous mapping ƒ(z). The mapping can be adapted to “unwarp” W so that the factors of variation become more linear. The style-based generator system100may naturally tend to unwarp W, as it should be easier to generate realistic images based on a disentangled representation than based on an entangled representation. As such, the training may yield a less entangled W in an unsupervised setting, i.e., when the factors of variation are not known in advance. FIG.2Aillustrates a block diagram of the mapping neural network110shown inFIG.1A, in accordance with an embodiment. A distribution of the training data may be missing a combination of attributes, such as, children wearing glasses. A distribution of the factors of variation in the combination of glasses and age becomes more linear in the intermediate latent space W compared with the latent space Z. In an embodiment, the mapping neural network110includes a normalization layer205and multiple fully-connected layers210. In an embodiment, eight fully-connected layers210are coupled in sequence to produce the intermediate latent code. Parameters (e.g., weights) of the mapping neural network110are learned during training and the parameters are used to process the input latent codes when the style-based generator system100is deployed to generate the output data. In an embodiment, the mapping neural network110generates one or more intermediate latent codes that are used by the synthesis neural network140at a later time to generate the output data. There are many attributes in human portraits that can be regarded as stochastic, such as the exact placement of hairs, stubble, freckles, or skin pores. Any of these can be randomized without affecting a perception of the image as long as the randomizations follow the correct distribution. The artificial omission of noise when generating images leads to images with a featureless “painterly” look. In particular, when generating human portraits, coarse noise may cause large-scale curling of hair and appearance of larger background features, while the fine noise may bring out the finer curls of hair, finer background detail, and skin pores. A conventional generator may only generate stochastic variation based on the input to the neural network, as provided through the input layer. During the training, the conventional generator may be forced to learn to generate spatially-varying pseudorandom numbers from earlier activations whenever the pseudorandom numbers are needed. In other words, pseudorandom number generation is not intentionally built into the conventional generator. Instead, the generation of pseudorandom numbers emerges on its own during training in order for the conventional generator to satisfy the training objective. Generating the pseudorandom numbers consumes neural network capacity and hiding the periodicity of generated signal is difficult—and not always successful, as evidenced by commonly seen repetitive patterns in generated images. In contrast, style-based generator system100may be configured to avoid these limitations by adding per-pixel noise after each convolution. 
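A sketch of a mapping network in the spirit of the normalization layer 205 followed by eight fully-connected layers 210, assuming PyTorch; the pixel-wise normalization, leaky-ReLU activations, and 512-dimensional latent code are assumptions where the text above does not pin them down.

```python
import torch
import torch.nn as nn

class PixelNorm(nn.Module):
    """Normalize the latent code to unit average magnitude (assumed form of layer 205)."""
    def forward(self, x):
        return x * torch.rsqrt(torch.mean(x * x, dim=1, keepdim=True) + 1e-8)

class MappingNetwork(nn.Module):
    """Latent code z -> intermediate latent code w via eight fully-connected layers."""
    def __init__(self, latent_dim=512, num_layers=8):
        super().__init__()
        layers = [PixelNorm()]
        for _ in range(num_layers):
            layers += [nn.Linear(latent_dim, latent_dim), nn.LeakyReLU(0.2)]
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)

# Example: map a batch of latent codes to intermediate latent codes.
w = MappingNetwork()(torch.randn(4, 512))
print(w.shape)   # torch.Size([4, 512])
```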
In an embodiment, the style-based generator system100is configured with a direct means to generate stochastic detail by introducing explicit noise inputs. In an embodiment, the noise inputs are single-channel images consisting of uncorrelated Gaussian noise, and a dedicated noise image is input to one or more layers of the synthesis network140. The noise image may be broadcast to all feature maps using learned per-feature scaling factors and then added to the output of the corresponding convolution. FIG.2Billustrates a block diagram of the synthesis neural network140shown inFIG.1A, in accordance with an embodiment. The synthesis neural network140includes a first processing block200and a second processing block230. In an embodiment, the processing block200processes 4×4 resolution feature maps and the processing block230processes 8×8 resolution feature maps. One or more additional processing blocks may be included in the synthesis neural network140after the processing blocks200and230, before them, and/or between them. The first processing block200receives the first intermediate data, first spatial noise, and second spatial noise. In an embodiment, the first spatial noise is scaled by a learned per-channel scaling factor before being combined with (e.g., added to) the first intermediate data. In an embodiment, the first spatial noise, second spatial noise, third spatial noise, and fourth spatial noise are independent per-pixel Gaussian noise. The first processing block200also receives the first style signal and the second style signal. As previously explained, the style signals may be obtained by processing the intermediate latent code according to a learned affine transform. Learned affine transformations specialize w to styles y = (y_s, y_b) that control adaptive instance normalization (AdaIN) operations implemented by the modules220in the synthesis neural network140. Compared to more general feature transforms, AdaIN is particularly well suited for implementation in the style-based generator system100due to its efficiency and compact representation. The AdaIN operation is defined as

AdaIN(x_i, y) = y_{s,i} · (x_i − μ(x_i)) / σ(x_i) + y_{b,i}    (1)

where each feature map x_i is normalized separately, and then scaled and biased using the corresponding scalar components from style y. Thus, the dimensionality of y is twice the number of feature maps compared to the input of the layer. In an embodiment, a dimension of the style signal is a multiple of a number of feature maps in the layer at which the style signal is applied. In contrast with conventional style transfer, the spatially invariant style y is computed from vector w instead of an example image. The effects of each style signal are localized in the synthesis neural network140, i.e., modifying a specific subset of the style signals can be expected to affect only certain attributes of an image represented by the output data. To see the reason for the localization, consider how the AdaIN operation (Eq. 1) implemented by the module220first normalizes each channel to zero mean and unit variance, and only then applies scales and biases based on the style signal. The new per-channel statistics, as dictated by the style, modify the relative importance of features for the subsequent convolution operation, but the new per-channel statistics do not depend on the original statistics because of the normalization. Thus, each style signal controls only a pre-defined number of convolution(s)225before being overridden by the next AdaIN operation.
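A minimal sketch of the AdaIN operation of Equation (1), assuming PyTorch and 4-D feature maps (N, C, H, W); the learned affine transform that produces y = (y_s, y_b) from w is modeled here as a single linear layer, which is an assumption for illustration.

```python
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    """Adaptive instance normalization controlled by a style vector w (Eq. 1)."""
    def __init__(self, num_channels, w_dim=512):
        super().__init__()
        # Learned affine transform: w -> (y_s, y_b), twice the number of feature maps.
        self.affine = nn.Linear(w_dim, 2 * num_channels)

    def forward(self, x, w):
        # Per-channel mean and standard deviation over the spatial dimensions.
        mu = x.mean(dim=(2, 3), keepdim=True)
        sigma = x.std(dim=(2, 3), keepdim=True) + 1e-8
        y_s, y_b = self.affine(w).chunk(2, dim=1)       # scale and bias components
        y_s = y_s[:, :, None, None]
        y_b = y_b[:, :, None, None]
        return y_s * (x - mu) / sigma + y_b

# Example: style-modulate an 8-channel 4x4 feature map.
adain = AdaIN(num_channels=8)
out = adain(torch.randn(2, 8, 4, 4), torch.randn(2, 512))
print(out.shape)   # torch.Size([2, 8, 4, 4])
```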
In an embodiment, scaled spatial noise is added to the features after each convolution and before processing by another module225. Each module220may be followed by a convolution layer225. In an embodiment, the convolution layer225applies a 3×3 convolution kernel to the input. Within the processing block200, second intermediate data output by the convolution layer225is combined with the second spatial noise and input to a second module220that applies the second style signal to generate an output of the processing block200. In an embodiment, the second spatial noise is scaled by a learned per-channel scaling factor before being combined with (e.g., added to) the second intermediate data. The processing block230receives feature maps output by the processing block200and the feature maps are up-sampled by an up-sampling layer235. In an embodiment, 4×4 feature maps are up-sampled by the up-sampling layer235to produce 8×8 feature maps. The up-sampled feature maps are input to another convolution layer225to produce third intermediate data. Within the processing block230, the third intermediate data is combined with the third spatial noise and input to a third module220that applies the third style signal via an AdaIN operation. In an embodiment, the third spatial noise is scaled by a learned per-channel scaling factor before being combined with (e.g., added to) the third intermediate data. The output of the third module220is processed by another convolution layer225to produce fourth intermediate data. The fourth intermediate data is combined with the fourth spatial noise and input to a fourth module220that applies the fourth style signal via an AdaIN operation. In an embodiment, the fourth spatial noise is scaled by a learned per-channel scaling factor before being combined with (e.g., added to) the fourth intermediate data. In an embodiment, a resolution of the output data is 1024² and the synthesis neural network140includes 18 layers, two for each power-of-two resolution (4²-1024²). The output of the last layer of the synthesis neural network140may be converted to RGB using a separate 1×1 convolution. In an embodiment, the synthesis neural network140has a total of 26.2M trainable parameters, compared to 23.1M in a conventional generator with the same number of layers and feature maps. Introducing spatial noise affects only the stochastic aspects of the output data, leaving the overall composition and high-level attributes such as identity intact. Separate noise inputs to the synthesis neural network140enable the application of stochastic variation to different subsets of layers. Applying a spatial noise input to a particular layer of the synthesis neural network140leads to stochastic variation at a scale that matches the scale of the particular layer. The effect of noise appears tightly localized in the synthesis neural network140. At any point in the synthesis neural network140, there is pressure to introduce new content as soon as possible, and the easiest way for the synthesis neural network140to create stochastic variation is to rely on the spatial noise inputs. A fresh set of spatial noise is available for each layer in the synthesis neural network140, and thus there is no incentive to generate the stochastic effects from earlier activations, leading to a localized effect. Therefore, the noise affects only inconsequential stochastic variation (differently combed hair, beard, etc.). In contrast, changes to the style signals have global effects (changing pose, identity, etc.).
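A sketch of one synthesis processing block in the spirit of FIG. 2B, assuming PyTorch: upsample, 3×3 convolution, addition of per-pixel noise scaled by a learned per-channel factor, then AdaIN, repeated twice. The layer ordering, channel counts, and the compact AdaIN definition below are assumptions for illustration, not the patent's reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaIN(nn.Module):
    """Adaptive instance normalization (Eq. 1), as in the earlier sketch."""
    def __init__(self, num_channels, w_dim=512):
        super().__init__()
        self.affine = nn.Linear(w_dim, 2 * num_channels)

    def forward(self, x, w):
        mu = x.mean(dim=(2, 3), keepdim=True)
        sigma = x.std(dim=(2, 3), keepdim=True) + 1e-8
        y_s, y_b = self.affine(w).chunk(2, dim=1)
        return y_s[:, :, None, None] * (x - mu) / sigma + y_b[:, :, None, None]

class NoiseInjection(nn.Module):
    """Add single-channel Gaussian noise scaled by a learned per-channel factor."""
    def __init__(self, num_channels):
        super().__init__()
        self.scale = nn.Parameter(torch.zeros(1, num_channels, 1, 1))

    def forward(self, x, noise=None):
        if noise is None:
            n, _, h, w = x.shape
            noise = torch.randn(n, 1, h, w, device=x.device)
        return x + self.scale * noise

class SynthesisBlock(nn.Module):
    """Upsample -> conv -> +noise -> AdaIN -> conv -> +noise -> AdaIN."""
    def __init__(self, in_ch, out_ch, w_dim=512):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.noise1 = NoiseInjection(out_ch)
        self.noise2 = NoiseInjection(out_ch)
        self.adain1 = AdaIN(out_ch, w_dim)
        self.adain2 = AdaIN(out_ch, w_dim)

    def forward(self, x, w):
        x = F.interpolate(x, scale_factor=2, mode="nearest")
        x = self.adain1(self.noise1(self.conv1(x)), w)
        x = self.adain2(self.noise2(self.conv2(x)), w)
        return x

# Example: grow 4x4 features to 8x8 under the control of a style vector w.
block = SynthesisBlock(512, 256)
out = block(torch.randn(2, 512, 4, 4), torch.randn(2, 512))
print(out.shape)   # torch.Size([2, 256, 8, 8])
```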
In the synthesis neural network140, when the output data is an image, the style signals affect the entire image because complete feature maps are scaled and biased with the same values. Therefore, global effects such as pose, lighting, or background style can be controlled coherently. Meanwhile, the spatial noise is added independently to each pixel and is thus ideally suited for controlling stochastic variation. If the synthesis neural network140tried to control, e.g., pose using the noise, that would lead to spatially inconsistent decisions that would be penalized during training. Thus, the synthesis neural network140learns to use the global and local channels appropriately, without explicit guidance. FIG.2Cillustrates a flowchart of a method250for applying spatial noise using the style-based generator system100, in accordance with an embodiment. The method250may be performed by a program, custom circuitry, or by a combination of custom circuitry and a program. For example, the method250may be executed by a GPU (graphics processing unit), CPU (central processing unit), or any processor capable of performing the operations of the style-based generator system100. Furthermore, persons of ordinary skill in the art will understand that any system that performs method250is within the scope and spirit of embodiments of the present invention. At step255, a first set of spatial noise is applied at a first layer of the synthesis neural network140to generate the first intermediate data comprising content corresponding to source data that is modified based on the first set of spatial noise. In an embodiment, the source data is the first intermediate data and the first layer is a layer including the module220and/or the convolution layer225. At step258, the modified first intermediate data is processed by the processing layer(s)225to produce the second intermediate data. At step260, a second set of spatial noise is applied at a second layer of the synthesis neural network140to generate second intermediate data comprising content corresponding to the first intermediate data that is modified based on the second set of spatial noise. In an embodiment, the first intermediate data is modified by at least the module220to produce the second intermediate data. At step265, the second intermediate data is processed to produce output data including content corresponding to the second intermediate data. In an embodiment, the second intermediate data is processed by another module220and the block230to produce the output data. Noise may be injected into the layers of the synthesis neural network140to cause synthesis of stochastic variations at a scale corresponding to the layer. Importantly, the noise should be injected during both training and generation. Additionally, during generation, the strength of the noise may be modified to further control the “look” of the output data. Providing style signals instead of directly inputting the latent code into the synthesis neural network140in combination with noise injected directly into the synthesis neural network140, leads to automatic, unsupervised separation of high-level attributes (e.g., pose, identity) from stochastic variation (e.g., freckles, hair) in the generated images, and enables intuitive scale-specific mixing and interpolation operations. In particular, the style signals directly adjust the strength of image attributes at different scales in the synthesis neural network140. 
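Because the noise inputs are explicit, their strength can be dialed per layer at generation time, as noted above. A small sketch under the assumption of 18 noise inputs whose resolutions double every two layers; the per-layer strength schedule is purely illustrative.

```python
import numpy as np

def scaled_noise(shape, strength, rng=None):
    """Per-pixel Gaussian noise whose magnitude is modulated by `strength`;
    strength = 0 suppresses stochastic variation at that layer entirely,
    strength = 1 matches the distribution used during training."""
    rng = rng or np.random.default_rng()
    return strength * rng.standard_normal(shape).astype(np.float32)

# Example: keep full-strength noise at coarse/middle layers, remove it at fine
# layers, suppressing hair- and freckle-scale variation while leaving the
# overall composition unchanged.
per_layer_strength = [1.0] * 8 + [0.0] * 10          # 18 layers assumed
noise_inputs = [scaled_noise((1, 1, 4 * 2 ** (i // 2), 4 * 2 ** (i // 2)), s)
                for i, s in enumerate(per_layer_strength)]
print([n.shape[-1] for n in noise_inputs])           # 4, 4, 8, 8, ..., 1024, 1024
```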
During generation, the style signals can be used to modify selected image attributes. Additionally, during training, the mapping neural network110may be configured to perform style mixing regularization to improve localization of the styles. The mapping neural network110embeds the input latent code into the intermediate latent space, which has a profound effect on how the factors of variation are represented in the synthesis neural network140. The input latent space follows the probability density of the training data, and this likely leads to some degree of unavoidable entanglement. The intermediate latent space is free from that restriction and is therefore allowed to be disentangled. Compared to a conventional generator architecture, the style-based generator system100admits a more linear, less entangled representation of different factors of variation. In an embodiment, replacing a conventional generator with the style-based generator may not require modifying any other component of the training framework (loss function, discriminator, optimization method, or the like). The style-based generative neural network100may be trained using e.g. the GAN (generative adversarial networks), VAE (variational autoencoder) framework, flow-based framework, or the like.FIG.2Dillustrates a block diagram of the GAN270training framework, in accordance with an embodiment. The GAN270may be implemented by a program, custom circuitry, or by a combination of custom circuitry and a program. For example, the GAN270may be implemented using a GPU, CPU, or any processor capable of performing the operations described herein. Furthermore, persons of ordinary skill in the art will understand that any system that performs the operations of the GAN270is within the scope and spirit of embodiments of the present invention. The GAN270includes a generator, such as the style-based generator system100, a discriminator (neural network)275, and a training loss unit280. The topologies of both the generator110and discriminator275may be modified during training. The GAN270may operate in an unsupervised setting or in a conditional setting. The style-based generator system100receives input data (e.g., at least one latent code and/or noise inputs) and produces output data. Depending on the task, the output data may be an image, audio, video, or other types of data (configuration setting). The discriminator275is an adaptive loss function that is used during training of the style-based generator system100. The style-based generator system100and discriminator275are trained simultaneously using a training dataset that includes example output data that the output data produced by the style-based generator system100should be consistent with. The style-based generator system100generates output data in response to the input data and the discriminator275determines if the output data appears similar to the example output data included in the training data. Based on the determination, parameters of the discriminator275and/or the style-based generative neural network100are adjusted. In the unsupervised setting, the discriminator275outputs a continuous value indicating how closely the output data matches the example output data. For example, in an embodiment, the discriminator275outputs a first training stimulus (e.g., high value) when the output data is determined to match the example output data and a second training stimulus (e.g., low value) when the output data is determined to not match the example output data. 
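A compact sketch of the adversarial training loop described above, assuming PyTorch and the common non-saturating logistic GAN losses; the tiny fully-connected networks below are placeholders standing in for the style-based generator system 100 and the discriminator 275, not the patent's architectures.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, data_dim = 64, 128

# Placeholder networks; in the framework above these roles are played by the
# style-based generator system 100 and the discriminator 275.
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_batch):
    # Discriminator: push its output high for example (real) data, low for generated data.
    z = torch.randn(real_batch.size(0), latent_dim)
    fake = generator(z).detach()
    d_loss = F.softplus(-discriminator(real_batch)).mean() + F.softplus(discriminator(fake)).mean()
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: adjusted based on the discriminator's output (the training loss role).
    z = torch.randn(real_batch.size(0), latent_dim)
    g_loss = F.softplus(-discriminator(generator(z))).mean()
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Example: one step on a random stand-in for a batch of training data.
print(training_step(torch.randn(16, data_dim)))
```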
The training loss unit280adjusts parameters (weights) of the GAN270based on the output of the discriminator275. When the style-based generator system100is trained for a specific task, such as generating images of human faces, the discriminator outputs a high value when the output data is an image of a human face. The output data generated by the style-based generator system100is not required to be identical to the example output data for the discriminator275to determine the output data matches the example output data. In the context of the following description, the discriminator275determines that the output data matches the example output data when the output data is similar to any of the example output data. In the conditional setting, the input of the style-based generative neural network100may include other data, such as an image, a classification label, segmentation contours, and other (additional) types of data (distribution, audio, etc.). The additional data may be specified in addition to the random latent code, or the additional data may replace the random latent code altogether. The training dataset may include input/output data pairs, and the task of the discriminator275may be to determine if the output of the style-based generative neural network100appears consistent with the input, based on the example input/output pairs that the discriminator275has seen in the training data. In an embodiment, the style-based generative neural network100may be trained using a progressive growing technique. In one embodiment, the mapping neural network110and/or the synthesis neural network140are initially implemented as a generator neural network portion of a GAN and trained using a progressive growing technique, as described in Karras et al., “Progressive Growing of GANs for Improved Quality, Stability, and Variation,” Sixth International Conference on Learning Representations (ICLR), (Apr. 30, 2018), which is herein incorporated by reference in its entirety. Parallel Processing Architecture FIG.3illustrates a parallel processing unit (PPU)300, in accordance with an embodiment. In an embodiment, the PPU300is a multi-threaded processor that is implemented on one or more integrated circuit devices. The PPU300is a latency hiding architecture designed to process many threads in parallel. A thread (i.e., a thread of execution) is an instantiation of a set of instructions configured to be executed by the PPU300. In an embodiment, the PPU300is a graphics processing unit (GPU) configured to implement a graphics rendering pipeline for processing three-dimensional (3D) graphics data in order to generate two-dimensional (2D) image data for display on a display device such as a liquid crystal display (LCD) device. In another embodiment, the PPU300is configured to implement the neural network system100. In other embodiments, the PPU300may be utilized for performing general-purpose computations. While one exemplary parallel processor is provided herein for illustrative purposes, it should be strongly noted that such processor is set forth for illustrative purposes only, and that any processor may be employed to supplement and/or substitute for the same. One or more PPUs300may be configured to accelerate thousands of High Performance Computing (HPC), data center, and machine learning applications. 
The PPU300may be configured to accelerate numerous deep learning systems and applications including autonomous vehicle platforms, deep learning, high-accuracy speech, image, and text recognition systems, intelligent video analytics, molecular simulations, drug discovery, disease diagnosis, weather forecasting, big data analytics, astronomy, molecular dynamics simulation, financial modeling, robotics, factory automation, real-time language translation, online search optimizations, and personalized user recommendations, and the like. As shown inFIG.3, the PPU300includes an Input/Output (I/O) unit305, a front end unit315, a scheduler unit320, a work distribution unit325, a hub330, a crossbar (Xbar)370, one or more general processing clusters (GPCs)350, and one or more partition units380. The PPU300may be connected to a host processor or other PPUs300via one or more high-speed NVLink310interconnect. The PPU300may be connected to a host processor or other peripheral devices via an interconnect302. The PPU300may also be connected to a local memory304comprising a number of memory devices. In an embodiment, the local memory may comprise a number of dynamic random access memory (DRAM) devices. The DRAM devices may be configured as a high-bandwidth memory (HBM) subsystem, with multiple DRAM dies stacked within each device. The NVLink310interconnect enables systems to scale and include one or more PPUs300combined with one or more CPUs, supports cache coherence between the PPUs300and CPUs, and CPU mastering. Data and/or commands may be transmitted by the NVLink310through the hub330to/from other units of the PPU300such as one or more copy engines, a video encoder, a video decoder, a power management unit, etc. (not explicitly shown). The NVLink310is described in more detail in conjunction withFIG.5B. The I/O unit305is configured to transmit and receive communications (i.e., commands, data, etc.) from a host processor (not shown) over the interconnect302. The I/O unit305may communicate with the host processor directly via the interconnect302or through one or more intermediate devices such as a memory bridge. In an embodiment, the I/O unit305may communicate with one or more other processors, such as one or more the PPUs300via the interconnect302. In an embodiment, the I/O unit305implements a Peripheral Component Interconnect Express (PCIe) interface for communications over a PCIe bus and the interconnect302is a PCIe bus. In alternative embodiments, the I/O unit305may implement other types of well-known interfaces for communicating with external devices. The I/O unit305decodes packets received via the interconnect302. In an embodiment, the packets represent commands configured to cause the PPU300to perform various operations. The I/O unit305transmits the decoded commands to various other units of the PPU300as the commands may specify. For example, some commands may be transmitted to the front end unit315. Other commands may be transmitted to the hub330or other units of the PPU300such as one or more copy engines, a video encoder, a video decoder, a power management unit, etc. (not explicitly shown). In other words, the I/O unit305is configured to route communications between and among the various logical units of the PPU300. In an embodiment, a program executed by the host processor encodes a command stream in a buffer that provides workloads to the PPU300for processing. A workload may comprise several instructions and data to be processed by those instructions. 
The buffer is a region in a memory that is accessible (i.e., read/write) by both the host processor and the PPU300. For example, the I/O unit305may be configured to access the buffer in a system memory connected to the interconnect302via memory requests transmitted over the interconnect302. In an embodiment, the host processor writes the command stream to the buffer and then transmits a pointer to the start of the command stream to the PPU300. The front end unit315receives pointers to one or more command streams. The front end unit315manages the one or more streams, reading commands from the streams and forwarding commands to the various units of the PPU300. The front end unit315is coupled to a scheduler unit320that configures the various GPCs350to process tasks defined by the one or more streams. The scheduler unit320is configured to track state information related to the various tasks managed by the scheduler unit320. The state may indicate which GPC350a task is assigned to, whether the task is active or inactive, a priority level associated with the task, and so forth. The scheduler unit320manages the execution of a plurality of tasks on the one or more GPCs350. The scheduler unit320is coupled to a work distribution unit325that is configured to dispatch tasks for execution on the GPCs350. The work distribution unit325may track a number of scheduled tasks received from the scheduler unit320. In an embodiment, the work distribution unit325manages a pending task pool and an active task pool for each of the GPCs350. The pending task pool may comprise a number of slots (e.g., 32 slots) that contain tasks assigned to be processed by a particular GPC350. The active task pool may comprise a number of slots (e.g., 4 slots) for tasks that are actively being processed by the GPCs350. As a GPC350finishes the execution of a task, that task is evicted from the active task pool for the GPC350and one of the other tasks from the pending task pool is selected and scheduled for execution on the GPC350. If an active task has been idle on the GPC350, such as while waiting for a data dependency to be resolved, then the active task may be evicted from the GPC350and returned to the pending task pool while another task in the pending task pool is selected and scheduled for execution on the GPC350. The work distribution unit325communicates with the one or more GPCs350via XBar370. The XBar370is an interconnect network that couples many of the units of the PPU300to other units of the PPU300. For example, the XBar370may be configured to couple the work distribution unit325to a particular GPC350. Although not shown explicitly, one or more other units of the PPU300may also be connected to the XBar370via the hub330. The tasks are managed by the scheduler unit320and dispatched to a GPC350by the work distribution unit325. The GPC350is configured to process the task and generate results. The results may be consumed by other tasks within the GPC350, routed to a different GPC350via the XBar370, or stored in the memory304. The results can be written to the memory304via the partition units380, which implement a memory interface for reading and writing data to/from the memory304. The results can be transmitted to another PPU300or CPU via the NVLink310. In an embodiment, the PPU300includes a number U of partition units380that is equal to the number of separate and distinct memory devices of the memory304coupled to the PPU300. A partition unit380will be described in more detail below in conjunction withFIG.4B. 
In an embodiment, a host processor executes a driver kernel that implements an application programming interface (API) that enables one or more applications executing on the host processor to schedule operations for execution on the PPU300. In an embodiment, multiple compute applications are simultaneously executed by the PPU300and the PPU300provides isolation, quality of service (QoS), and independent address spaces for the multiple compute applications. An application may generate instructions (i.e., API calls) that cause the driver kernel to generate one or more tasks for execution by the PPU300. The driver kernel outputs tasks to one or more streams being processed by the PPU300. Each task may comprise one or more groups of related threads, referred to herein as a warp. In an embodiment, a warp comprises 32 related threads that may be executed in parallel. Cooperating threads may refer to a plurality of threads including instructions to perform the task and that may exchange data through shared memory. Threads and cooperating threads are described in more detail in conjunction withFIG.5A. FIG.4Aillustrates a GPC350of the PPU300ofFIG.3, in accordance with an embodiment. As shown inFIG.4A, each GPC350includes a number of hardware units for processing tasks. In an embodiment, each GPC350includes a pipeline manager410, a pre-raster operations unit (PROP)415, a raster engine425, a work distribution crossbar (WDX)480, a memory management unit (MMU)490, and one or more Data Processing Clusters (DPCs)420. It will be appreciated that the GPC350ofFIG.4Amay include other hardware units in lieu of or in addition to the units shown inFIG.4A. In an embodiment, the operation of the GPC350is controlled by the pipeline manager410. The pipeline manager410manages the configuration of the one or more DPCs420for processing tasks allocated to the GPC350. In an embodiment, the pipeline manager410may configure at least one of the one or more DPCs420to implement at least a portion of a graphics rendering pipeline. For example, a DPC420may be configured to execute a vertex shader program on the programmable streaming multiprocessor (SM)440. The pipeline manager410may also be configured to route packets received from the work distribution unit325to the appropriate logical units within the GPC350. For example, some packets may be routed to fixed function hardware units in the PROP415and/or raster engine425while other packets may be routed to the DPCs420for processing by the primitive engine435or the SM440. In an embodiment, the pipeline manager410may configure at least one of the one or more DPCs420to implement a neural network model and/or a computing pipeline. The PROP unit415is configured to route data generated by the raster engine425and the DPCs420to a Raster Operations (ROP) unit, described in more detail in conjunction withFIG.4B. The PROP unit415may also be configured to perform optimizations for color blending, organize pixel data, perform address translations, and the like. The raster engine425includes a number of fixed function hardware units configured to perform various raster operations. In an embodiment, the raster engine425includes a setup engine, a coarse raster engine, a culling engine, a clipping engine, a fine raster engine, and a tile coalescing engine. The setup engine receives transformed vertices and generates plane equations associated with the geometric primitive defined by the vertices. 
The plane equations are transmitted to the coarse raster engine to generate coverage information (e.g., an x,y coverage mask for a tile) for the primitive. The output of the coarse raster engine is transmitted to the culling engine where fragments associated with the primitive that fail a z-test are culled, and transmitted to a clipping engine where fragments lying outside a viewing frustum are clipped. Those fragments that survive clipping and culling may be passed to the fine raster engine to generate attributes for the pixel fragments based on the plane equations generated by the setup engine. The output of the raster engine425comprises fragments to be processed, for example, by a fragment shader implemented within a DPC420. Each DPC420included in the GPC350includes an M-Pipe Controller (MPC)430, a primitive engine435, and one or more SMs440. The MPC430controls the operation of the DPC420, routing packets received from the pipeline manager410to the appropriate units in the DPC420. For example, packets associated with a vertex may be routed to the primitive engine435, which is configured to fetch vertex attributes associated with the vertex from the memory304. In contrast, packets associated with a shader program may be transmitted to the SM440. The SM440comprises a programmable streaming processor that is configured to process tasks represented by a number of threads. Each SM440is multi-threaded and configured to execute a plurality of threads (e.g., 32 threads) from a particular group of threads concurrently. In an embodiment, the SM440implements a SIMD (Single-Instruction, Multiple-Data) architecture where each thread in a group of threads (i.e., a warp) is configured to process a different set of data based on the same set of instructions. All threads in the group of threads execute the same instructions. In another embodiment, the SM440implements a SIMT (Single-Instruction, Multiple Thread) architecture where each thread in a group of threads is configured to process a different set of data based on the same set of instructions, but where individual threads in the group of threads are allowed to diverge during execution. In an embodiment, a program counter, call stack, and execution state is maintained for each warp, enabling concurrency between warps and serial execution within warps when threads within the warp diverge. In another embodiment, a program counter, call stack, and execution state is maintained for each individual thread, enabling equal concurrency between all threads, within and between warps. When execution state is maintained for each individual thread, threads executing the same instructions may be converged and executed in parallel for maximum efficiency. The SM440will be described in more detail below in conjunction withFIG.5A. The MMU490provides an interface between the GPC350and the partition unit380. The MMU490may provide translation of virtual addresses into physical addresses, memory protection, and arbitration of memory requests. In an embodiment, the MMU490provides one or more translation lookaside buffers (TLBs) for performing translation of virtual addresses into physical addresses in the memory304. FIG.4Billustrates a memory partition unit380of the PPU300ofFIG.3, in accordance with an embodiment. As shown inFIG.4B, the memory partition unit380includes a Raster Operations (ROP) unit450, a level two (L2) cache460, and a memory interface470. The memory interface470is coupled to the memory304. 
Memory interface470may implement 32, 64, 128, 1024-bit data buses, or the like, for high-speed data transfer. In an embodiment, the PPU300incorporates U memory interfaces470, one memory interface470per pair of partition units380, where each pair of partition units380is connected to a corresponding memory device of the memory304. For example, PPU300may be connected to up to Y memory devices304, such as high bandwidth memory stacks or graphics double-data-rate, version 5, synchronous dynamic random access memory, or other types of persistent storage. In an embodiment, the memory interface470implements an HBM2 memory interface and Y equals half U. In an embodiment, the HBM2 memory stacks are located on the same physical package as the PPU300, providing substantial power and area savings compared with conventional GDDR5 SDRAM systems. In an embodiment, each HBM2 stack includes four memory dies and Y equals 4, with HBM2 stack including two 128-bit channels per die for a total of 8 channels and a data bus width of 1024 bits. In an embodiment, the memory304supports Single-Error Correcting Double-Error Detecting (SECDED) Error Correction Code (ECC) to protect data. ECC provides higher reliability for compute applications that are sensitive to data corruption. Reliability is especially important in large-scale cluster computing environments where PPUs300process very large datasets and/or run applications for extended periods. In an embodiment, the PPU300implements a multi-level memory hierarchy. In an embodiment, the memory partition unit380supports a unified memory to provide a single unified virtual address space for CPU and PPU300memory, enabling data sharing between virtual memory systems. In an embodiment the frequency of accesses by a PPU300to memory located on other processors is traced to ensure that memory pages are moved to the physical memory of the PPU300that is accessing the pages more frequently. In an embodiment, the NVLink310supports address translation services allowing the PPU300to directly access a CPU's page tables and providing full access to CPU memory by the PPU300. In an embodiment, copy engines transfer data between multiple PPUs300or between PPUs300and CPUs. The copy engines can generate page faults for addresses that are not mapped into the page tables. The memory partition unit380can then service the page faults, mapping the addresses into the page table, after which the copy engine can perform the transfer. In a conventional system, memory is pinned (i.e., non-pageable) for multiple copy engine operations between multiple processors, substantially reducing the available memory. With hardware page faulting, addresses can be passed to the copy engines without worrying if the memory pages are resident, and the copy process is transparent. Data from the memory304or other system memory may be fetched by the memory partition unit380and stored in the L2 cache460, which is located on-chip and is shared between the various GPCs350. As shown, each memory partition unit380includes a portion of the L2 cache460associated with a corresponding memory304. Lower level caches may then be implemented in various units within the GPCs350. For example, each of the SMs440may implement a level one (L1) cache. The L1 cache is private memory that is dedicated to a particular SM440. Data from the L2 cache460may be fetched and stored in each of the L1 caches for processing in the functional units of the SMs440. The L2 cache460is coupled to the memory interface470and the XBar370. 
The ROP unit450performs graphics raster operations related to pixel color, such as color compression, pixel blending, and the like. The ROP unit450also implements depth testing in conjunction with the raster engine425, receiving a depth for a sample location associated with a pixel fragment from the culling engine of the raster engine425. The depth is tested against a corresponding depth in a depth buffer for a sample location associated with the fragment. If the fragment passes the depth test for the sample location, then the ROP unit450updates the depth buffer and transmits a result of the depth test to the raster engine425. It will be appreciated that the number of partition units380may be different than the number of GPCs350and, therefore, each ROP unit450may be coupled to each of the GPCs350. The ROP unit450tracks packets received from the different GPCs350and determines which GPC350that a result generated by the ROP unit450is routed to through the Xbar370. Although the ROP unit450is included within the memory partition unit380inFIG.4B, in other embodiment, the ROP unit450may be outside of the memory partition unit380. For example, the ROP unit450may reside in the GPC350or another unit. FIG.5Aillustrates the streaming multi-processor440ofFIG.4A, in accordance with an embodiment. As shown inFIG.5A, the SM440includes an instruction cache505, one or more scheduler units510, a register file520, one or more processing cores550, one or more special function units (SFUs)552, one or more load/store units (LSUs)554, an interconnect network580, a shared memory/L1 cache570. As described above, the work distribution unit325dispatches tasks for execution on the GPCs350of the PPU300. The tasks are allocated to a particular DPC420within a GPC350and, if the task is associated with a shader program, the task may be allocated to an SM440. The scheduler unit510receives the tasks from the work distribution unit325and manages instruction scheduling for one or more thread blocks assigned to the SM440. The scheduler unit510schedules thread blocks for execution as warps of parallel threads, where each thread block is allocated at least one warp. In an embodiment, each warp executes 32 threads. The scheduler unit510may manage a plurality of different thread blocks, allocating the warps to the different thread blocks and then dispatching instructions from the plurality of different cooperative groups to the various functional units (i.e., cores550, SFUs552, and LSUs554) during each clock cycle. Cooperative Groups is a programming model for organizing groups of communicating threads that allows developers to express the granularity at which threads are communicating, enabling the expression of richer, more efficient parallel decompositions. Cooperative launch APIs support synchronization amongst thread blocks for the execution of parallel algorithms. Conventional programming models provide a single, simple construct for synchronizing cooperating threads: a barrier across all threads of a thread block (i.e., the syncthreads( ) function). However, programmers would often like to define groups of threads at smaller than thread block granularities and synchronize within the defined groups to enable greater performance, design flexibility, and software reuse in the form of collective group-wide function interfaces. 
Cooperative Groups enables programmers to define groups of threads explicitly at sub-block (i.e., as small as a single thread) and multi-block granularities, and to perform collective operations such as synchronization on the threads in a cooperative group. The programming model supports clean composition across software boundaries, so that libraries and utility functions can synchronize safely within their local context without having to make assumptions about convergence. Cooperative Groups primitives enable new patterns of cooperative parallelism, including producer-consumer parallelism, opportunistic parallelism, and global synchronization across an entire grid of thread blocks. A dispatch unit515is configured to transmit instructions to one or more of the functional units. In the embodiment, the scheduler unit510includes two dispatch units515that enable two different instructions from the same warp to be dispatched during each clock cycle. In alternative embodiments, each scheduler unit510may include a single dispatch unit515or additional dispatch units515. Each SM440includes a register file520that provides a set of registers for the functional units of the SM440. In an embodiment, the register file520is divided between each of the functional units such that each functional unit is allocated a dedicated portion of the register file520. In another embodiment, the register file520is divided between the different warps being executed by the SM440. The register file520provides temporary storage for operands connected to the data paths of the functional units. Each SM440comprises L processing cores550. In an embodiment, the SM440includes a large number (e.g., 128, etc.) of distinct processing cores550. Each core550may include a fully-pipelined, single-precision, double-precision, and/or mixed precision processing unit that includes a floating point arithmetic logic unit and an integer arithmetic logic unit. In an embodiment, the floating point arithmetic logic units implement the IEEE 754-2008 standard for floating point arithmetic. In an embodiment, the cores550include 64 single-precision (32-bit) floating point cores, 64 integer cores, 32 double-precision (64-bit) floating point cores, and 8 tensor cores. Tensor cores configured to perform matrix operations, and, in an embodiment, one or more tensor cores are included in the cores550. In particular, the tensor cores are configured to perform deep learning matrix arithmetic, such as convolution operations for neural network training and inferencing. In an embodiment, each tensor core operates on a 4×4 matrix and performs a matrix multiply and accumulate operation D=A×B+C, where A, B, C, and D are 4×4 matrices. In an embodiment, the matrix multiply inputs A and B are 16-bit floating point matrices, while the accumulation matrices C and D may be 16-bit floating point or 32-bit floating point matrices. Tensor Cores operate on 16-bit floating point input data with 32-bit floating point accumulation. The 16-bit floating point multiply requires 64 operations and results in a full precision product that is then accumulated using 32-bit floating point addition with the other intermediate products for a 4×4×4 matrix multiply. In practice, Tensor Cores are used to perform much larger two-dimensional or higher dimensional matrix operations, built up from these smaller elements. 
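The multiply-accumulate D = A×B + C described above can be mimicked in a few lines to show the mixed-precision behavior (16-bit inputs, 32-bit accumulation). This is a NumPy illustration of the arithmetic only, not tensor-core or CUDA code.

```python
import numpy as np

rng = np.random.default_rng(0)

# 4x4 matrices: multiply inputs A and B in 16-bit floating point,
# accumulation matrix C in 32-bit floating point.
A = rng.standard_normal((4, 4)).astype(np.float16)
B = rng.standard_normal((4, 4)).astype(np.float16)
C = rng.standard_normal((4, 4)).astype(np.float32)

# Full-precision products of the 16-bit inputs, accumulated in 32-bit floating point.
D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D.dtype, D.shape)   # float32 (4, 4)
```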
An API, such as CUDA 9 C++ API, exposes specialized matrix load, matrix multiply and accumulate, and matrix store operations to efficiently use Tensor Cores from a CUDA-C++ program. At the CUDA level, the warp-level interface assumes 16×16 size matrices spanning all 32 threads of the warp. Each SM440also comprises M SFUs552that perform special functions (e.g., attribute evaluation, reciprocal square root, and the like). In an embodiment, the SFUs552may include a tree traversal unit configured to traverse a hierarchical tree data structure. In an embodiment, the SFUs552may include a texture unit configured to perform texture map filtering operations. In an embodiment, the texture units are configured to load texture maps (e.g., a 2D array of texels) from the memory304and sample the texture maps to produce sampled texture values for use in shader programs executed by the SM440. In an embodiment, the texture maps are stored in the shared memory/L1 cache570. The texture units implement texture operations such as filtering operations using mip-maps (i.e., texture maps of varying levels of detail). In an embodiment, each SM440includes two texture units. Each SM440also comprises N LSUs554that implement load and store operations between the shared memory/L1 cache570and the register file520. Each SM440includes an interconnect network580that connects each of the functional units to the register file520and the LSU554to the register file520, shared memory/L1 cache570. In an embodiment, the interconnect network580is a crossbar that can be configured to connect any of the functional units to any of the registers in the register file520and connect the LSUs554to the register file and memory locations in shared memory/L1 cache570. The shared memory/L1 cache570is an array of on-chip memory that allows for data storage and communication between the SM440and the primitive engine435and between threads in the SM440. In an embodiment, the shared memory/L1 cache570comprises 128 KB of storage capacity and is in the path from the SM440to the partition unit380. The shared memory/L1 cache570can be used to cache reads and writes. One or more of the shared memory/L1 cache570, L2 cache460, and memory304are backing stores. Combining data cache and shared memory functionality into a single memory block provides the best overall performance for both types of memory accesses. The capacity is usable as a cache by programs that do not use shared memory. For example, if shared memory is configured to use half of the capacity, texture and load/store operations can use the remaining capacity. Integration within the shared memory/L1 cache570enables the shared memory/L1 cache570to function as a high-throughput conduit for streaming data while simultaneously providing high-bandwidth and low-latency access to frequently reused data. When configured for general purpose parallel computation, a simpler configuration can be used compared with graphics processing. Specifically, the fixed function graphics processing units shown inFIG.3are bypassed, creating a much simpler programming model. In the general purpose parallel computation configuration, the work distribution unit325assigns and distributes blocks of threads directly to the DPCs420.
The threads in a block execute the same program, using a unique thread ID in the calculation to ensure each thread generates unique results, using the SM440to execute the program and perform calculations, shared memory/L1 cache570to communicate between threads, and the LSU554to read and write global memory through the shared memory/L1 cache570and the memory partition unit380. When configured for general purpose parallel computation, the SM440can also write commands that the scheduler unit320can use to launch new work on the DPCs420. The PPU300may be included in a desktop computer, a laptop computer, a tablet computer, servers, supercomputers, a smart-phone (e.g., a wireless, hand-held device), personal digital assistant (PDA), a digital camera, a vehicle, a head mounted display, a hand-held electronic device, and the like. In an embodiment, the PPU300is embodied on a single semiconductor substrate. In another embodiment, the PPU300is included in a system-on-a-chip (SoC) along with one or more other devices such as additional PPUs300, the memory304, a reduced instruction set computer (RISC) CPU, a memory management unit (MMU), a digital-to-analog converter (DAC), and the like. In an embodiment, the PPU300may be included on a graphics card that includes one or more memory devices. The graphics card may be configured to interface with a PCIe slot on a motherboard of a desktop computer. In yet another embodiment, the PPU300may be an integrated graphics processing unit (iGPU) or parallel processor included in the chipset of the motherboard. Exemplary Computing System Systems with multiple GPUs and CPUs are used in a variety of industries as developers expose and leverage more parallelism in applications such as artificial intelligence computing. High-performance GPU-accelerated systems with tens to many thousands of compute nodes are deployed in data centers, research facilities, and supercomputers to solve ever larger problems. As the number of processing devices within the high-performance systems increases, the communication and data transfer mechanisms need to scale to support the increased bandwidth. FIG.5Bis a conceptual diagram of a processing system500implemented using the PPU300ofFIG.3, in accordance with an embodiment. The exemplary system565may be configured to implement the method150shown inFIG.1Cand/or the method250shown inFIG.2C. The processing system500includes a CPU530, switch510, and multiple PPUs300, and respective memories304. The NVLink310provides high-speed communication links between each of the PPUs300. Although a particular number of NVLink310and interconnect302connections are illustrated inFIG.5B, the number of connections to each PPU300and the CPU530may vary. The switch510interfaces between the interconnect302and the CPU530. The PPUs300, memories304, and NVLinks310may be situated on a single semiconductor platform to form a parallel processing module525. In an embodiment, the switch510supports two or more protocols to interface between various different connections and/or links. In another embodiment (not shown), the NVLink310provides one or more high-speed communication links between each of the PPUs300and the CPU530and the switch510interfaces between the interconnect302and each of the PPUs300. The PPUs300, memories304, and interconnect302may be situated on a single semiconductor platform to form a parallel processing module525. 
In yet another embodiment (not shown), the interconnect302provides one or more communication links between each of the PPUs300and the CPU530and the switch510interfaces between each of the PPUs300using the NVLink310to provide one or more high-speed communication links between the PPUs300. In another embodiment (not shown), the NVLink310provides one or more high-speed communication links between the PPUs300and the CPU530through the switch510. In yet another embodiment (not shown), the interconnect302provides one or more communication links between each of the PPUs300directly. One or more of the NVLink310high-speed communication links may be implemented as a physical NVLink interconnect or either an on-chip or on-die interconnect using the same protocol as the NVLink310. In the context of the present description, a single semiconductor platform may refer to a sole unitary semiconductor-based integrated circuit fabricated on a die or chip. It should be noted that the term single semiconductor platform may also refer to multi-chip modules with increased connectivity which simulate on-chip operation and make substantial improvements over utilizing a conventional bus implementation. Of course, the various circuits or devices may also be situated separately or in various combinations of semiconductor platforms per the desires of the user. Alternately, the parallel processing module525may be implemented as a circuit board substrate and each of the PPUs300and/or memories304may be packaged devices. In an embodiment, the CPU530, switch510, and the parallel processing module525are situated on a single semiconductor platform. In an embodiment, the signaling rate of each NVLink310is 20 to 25 Gigabits/second and each PPU300includes six NVLink310interfaces (as shown inFIG.5B, five NVLink310interfaces are included for each PPU300). Each NVLink310provides a data transfer rate of 25 Gigabytes/second in each direction, with six links providing 300 Gigabytes/second. The NVLinks310can be used exclusively for PPU-to-PPU communication as shown inFIG.5B, or some combination of PPU-to-PPU and PPU-to-CPU, when the CPU530also includes one or more NVLink310interfaces. In an embodiment, the NVLink310allows direct load/store/atomic access from the CPU530to each PPU's300memory304. In an embodiment, the NVLink310supports coherency operations, allowing data read from the memories304to be stored in the cache hierarchy of the CPU530, reducing cache access latency for the CPU530. In an embodiment, the NVLink310includes support for Address Translation Services (ATS), allowing the PPU300to directly access page tables within the CPU530. One or more of the NVLinks310may also be configured to operate in a low-power mode. FIG.5Cillustrates an exemplary system565in which the various architecture and/or functionality of the various previous embodiments may be implemented. The exemplary system565may be configured to implement the method150shown inFIG.1Cand/or the method250shown inFIG.2C. As shown, a system565is provided including at least one central processing unit530that is connected to a communication bus575. The communication bus575may be implemented using any suitable protocol, such as PCI (Peripheral Component Interconnect), PCI-Express, AGP (Accelerated Graphics Port), HyperTransport, or any other bus or point-to-point communication protocol(s). The system565also includes a main memory540. Control logic (software) and data are stored in the main memory540which may take the form of random access memory (RAM). 
The system565also includes input devices560, the parallel processing system525, and display devices545, i.e. a conventional CRT (cathode ray tube), LCD (liquid crystal display), LED (light emitting diode), plasma display or the like. User input may be received from the input devices560, e.g., keyboard, mouse, touchpad, microphone, and the like. Each of the foregoing modules and/or devices may even be situated on a single semiconductor platform to form the system565. Alternately, the various modules may also be situated separately or in various combinations of semiconductor platforms per the desires of the user. Further, the system565may be coupled to a network (e.g., a telecommunications network, local area network (LAN), wireless network, wide area network (WAN) such as the Internet, peer-to-peer network, cable network, or the like) through a network interface535for communication purposes. The system565may also include a secondary storage (not shown). The secondary storage610includes, for example, a hard disk drive and/or a removable storage drive, representing a floppy disk drive, a magnetic tape drive, a compact disk drive, digital versatile disk (DVD) drive, recording device, universal serial bus (USB) flash memory. The removable storage drive reads from and/or writes to a removable storage unit in a well-known manner. Computer programs, or computer control logic algorithms, may be stored in the main memory540and/or the secondary storage. Such computer programs, when executed, enable the system565to perform various functions. The memory540, the storage, and/or any other storage are possible examples of computer-readable media. The architecture and/or functionality of the various previous figures may be implemented in the context of a general computer system, a circuit board system, a game console system dedicated for entertainment purposes, an application-specific system, and/or any other desired system. For example, the system565may take the form of a desktop computer, a laptop computer, a tablet computer, servers, supercomputers, a smart-phone (e.g., a wireless, hand-held device), personal digital assistant (PDA), a digital camera, a vehicle, a head mounted display, a hand-held electronic device, a mobile phone device, a television, workstation, game consoles, embedded system, and/or any other type of logic. While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. Machine Learning Deep neural networks (DNNs) developed on processors, such as the PPU300have been used for diverse use cases, from self-driving cars to faster drug development, from automatic image captioning in online image databases to smart real-time language translation in video chat applications. Deep learning is a technique that models the neural learning process of the human brain, continually learning, continually getting smarter, and delivering more accurate results more quickly over time. A child is initially taught by an adult to correctly identify and classify various shapes, eventually being able to identify shapes without any coaching. 
Similarly, a deep learning or neural learning system needs to be trained in object recognition and classification for it to get smarter and more efficient at identifying basic objects, occluded objects, etc., while also assigning context to objects. At the simplest level, neurons in the human brain look at various inputs that are received, importance levels are assigned to each of these inputs, and output is passed on to other neurons to act upon. An artificial neuron or perceptron is the most basic model of a neural network. In one example, a perceptron may receive one or more inputs that represent various features of an object that the perceptron is being trained to recognize and classify, and each of these features is assigned a certain weight based on the importance of that feature in defining the shape of an object. A deep neural network (DNN) model includes multiple layers of many connected nodes (e.g., perceptrons, Boltzmann machines, radial basis functions, convolutional layers, etc.) that can be trained with enormous amounts of input data to quickly solve complex problems with high accuracy. In one example, a first layer of the DNN model breaks down an input image of an automobile into various sections and looks for basic patterns such as lines and angles. The second layer assembles the lines to look for higher level patterns such as wheels, windshields, and mirrors. The next layer identifies the type of vehicle, and the final few layers generate a label for the input image, identifying the model of a specific automobile brand. Once the DNN is trained, the DNN can be deployed and used to identify and classify objects or patterns in a process known as inference. Examples of inference (the process through which a DNN extracts useful information from a given input) include identifying handwritten numbers on checks deposited into ATM machines, identifying images of friends in photos, delivering movie recommendations to over fifty million users, identifying and classifying different types of automobiles, pedestrians, and road hazards in driverless cars, or translating human speech in real-time. During training, data flows through the DNN in a forward propagation phase until a prediction is produced that indicates a label corresponding to the input. If the neural network does not correctly label the input, then errors between the correct label and the predicted label are analyzed, and the weights are adjusted for each feature during a backward propagation phase until the DNN correctly labels the input and other inputs in a training dataset. Training complex neural networks requires massive amounts of parallel computing performance, including floating-point multiplications and additions that are supported by the PPU300. Inferencing is less compute-intensive than training, being a latency-sensitive process where a trained neural network is applied to new inputs it has not seen before to classify images, translate speech, and generally infer new information. Neural networks rely heavily on matrix math operations, and complex multi-layered networks require tremendous amounts of floating-point performance and bandwidth for both efficiency and speed. With thousands of processing cores, optimized for matrix math operations, and delivering tens to hundreds of TFLOPS of performance, the PPU300is a computing platform capable of delivering performance required for deep neural network-based artificial intelligence and machine learning applications. 
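As a rough illustration of the perceptron and training behavior described above, the following minimal Python sketch shows a single artificial neuron computing a weighted sum of its inputs and nudging its weights when the prediction misses the label; the function names and the simple step activation are illustrative assumptions rather than part of the described embodiments.

def perceptron(inputs, weights, bias):
    # Weighted sum of the input features followed by a simple step activation;
    # each weight reflects the assigned importance of its feature.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0.0 else 0

def train_step(inputs, label, weights, bias, learning_rate=0.1):
    # One simplified forward/backward pass: predict, measure the error, and
    # adjust the weights so the next prediction moves toward the correct label.
    prediction = perceptron(inputs, weights, bias)
    error = label - prediction
    new_weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
    new_bias = bias + learning_rate * error
    return new_weights, new_bias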
It is noted that the techniques described herein may be embodied in executable instructions stored in a computer readable medium for use by or in connection with a processor-based instruction execution machine, system, apparatus, or device. It will be appreciated by those skilled in the art that, for some embodiments, various types of computer-readable media can be included for storing data. As used herein, a “computer-readable medium” includes one or more of any suitable media for storing the executable instructions of a computer program such that the instruction execution machine, system, apparatus, or device may read (or fetch) the instructions from the computer-readable medium and execute the instructions for carrying out the described embodiments. Suitable storage formats include one or more of an electronic, magnetic, optical, and electromagnetic format. A non-exhaustive list of conventional exemplary computer-readable medium includes: a portable computer diskette; a random-access memory (RAM); a read-only memory (ROM); an erasable programmable read only memory (EPROM); a flash memory device; and optical storage devices, including a portable compact disc (CD), a portable digital video disc (DVD), and the like. It should be understood that the arrangement of components illustrated in the attached Figures are for illustrative purposes and that other arrangements are possible. For example, one or more of the elements described herein may be realized, in whole or in part, as an electronic hardware component. Other elements may be implemented in software, hardware, or a combination of software and hardware. Moreover, some or all of these other elements may be combined, some may be omitted altogether, and additional components may be added while still achieving the functionality described herein. Thus, the subject matter described herein may be embodied in many different variations, and all such variations are contemplated to be within the scope of the claims. To facilitate an understanding of the subject matter described herein, many aspects are described in terms of sequences of actions. It will be recognized by those skilled in the art that the various actions may be performed by specialized circuits or circuitry, by program instructions being executed by one or more processors, or by a combination of both. The description herein of any sequence of actions is not intended to imply that the specific order described for performing that sequence must be followed. All methods described herein may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of the terms “a” and “an” and “the” and similar references in the context of describing the subject matter (particularly in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The use of the term “at least one” followed by a list of one or more items (for example, “at least one of A and B”) is to be construed to mean one item selected from the listed items (A or B) or any combination of two or more of the listed items (A and B), unless otherwise indicated herein or clearly contradicted by context. Furthermore, the foregoing description is for the purpose of illustration only, and not for the purpose of limitation, as the scope of protection sought is defined by the claims as set forth hereinafter together with any equivalents thereof. 
The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illustrate the subject matter and does not pose a limitation on the scope of the subject matter unless otherwise claimed. The use of the term “based on” and other like phrases indicating a condition for bringing about a result, both in the claims and in the written description, is not intended to foreclose any other conditions that bring about that result. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention as claimed. | 83,936 |
11861891 | The figures are not to scale. Wherever possible, the same reference numbers will be used throughout the drawing(s) and accompanying written description to refer to the same or like parts. DETAILED DESCRIPTION Electrical and/or computing devices have increased significantly as technology advances. Such devices include hardware, software, and/or firmware to perform particular function(s). If an error occurs in the hardware, software, and/or firmware, such devices may not be able to perform the particular function(s) or may have poor performance. Such inability to perform and/or poor performance may affect the results of a system. For example, an autopilot computing device may gather raw image data from a sensor and transmit the raw image data to an image processing sub-system to process the image data and/or render an image. Once the autopilot computing device receives the rendered image, the autopilot computing device may make navigation decisions based on further analysis of the rendered image. To ensure proper navigation decision-making, safety protocols (e.g., safety hooks) may be in place to ensure that electrical and/or electronical systems are functional (e.g., to reduce risk) and that proper data is available for the decision-making process. In some examples, safety protocols may be used in conjunction with signal processing sub-systems. Signal processing sub-systems may be utilized in conjunction with a processor to perform signal processing protocols due to the efficiency of such sub-systems. Signal processing sub-systems include hardware (e.g., memory, electrical circuits and/or components, etc.) to process an input signal. For example, signal processing sub-systems include hardware accelerators to perform specific signal processing tasks corresponding to a signal processing protocol (e.g., a signal processing protocol is broken into tasks executed by the multiple hardware accelerators). The hardware accelerators include local memory (e.g., L1 memory) to store the input data and the signal processing sub-system includes local/on chip memory (e.g., L2/L3 memory) to store and transfer the input data between the multiple hardware accelerators. Such signal processing sub-systems may be subject to hardware errors, such as when a memory register inadvertently switches its stored logic value. For example, a soft error in memory may occur when nuclear particles collide with transistors inside memory, causing bits to inadvertently flip. Some hardware errors may correspond to critical errors while other hardware errors may correspond to non-critical errors. For example, in an imaging processing sub-system, a critical error may corrupt a large area of an image, while a non-critical error may corrupt single isolated pixel values. Conventional safety protocols to ensure error protection and/or identification for signal processing sub-systems include providing memory protection to the local memory of the hardware accelerators. Such memory protection includes implementation of additional storage for check bits for each memory address, resulting in increased cost and silicon area required for such additional storage. Examples disclosed herein reduce the cost and silicon area corresponding to such conventional safety protocols by providing protected memory and unprotected memory for the hardware accelerators and allocating some data to be stored in the protected memory and other data to be stored in the unprotected memory. 
For example, critical data (e.g., data that, if corrupted, corresponds to a critical error) may be stored in protected memory and non-critical data (e.g., data that, if corrupted, corresponds to a non-critical error) may be stored in unprotected memory. In this manner, if a hardware error occurs, the error will have a minor effect on the output while significantly reducing the cost and area corresponding to full memory protection. To further reduce the effect of an error on the output of a signal processing sub-system, examples disclosed herein further include filtering the output data to reduce and/or eliminate corrupted data corresponding to the error. For example, an image processing sub-system may include one or more pixel values, stored in the unprotected memory, that have been corrupted during image processing. Examples disclosed herein improve the image by filtering the pixel values to adjust the corrupted pixel values based on neighboring pixel values. Accordingly, examples disclosed herein increase the quality of the output data based on the final filtering step. Examples disclosed herein further include an interface protection protocol to identify errors caused by the hardware accelerators of a signal processing sub-system. Such interface protection protocols include a host device adding redundant bits to the control data of an input signal to be processed. In this manner, the signal processing sub-system can determine the redundant bits from the control signal and transmit the determined redundant bits back to the host device. In response, the host device can determine that a hardware error occurred when the generated redundant bits do not match the received determined redundant bits. FIG.1illustrates an example signal processing sub-system102providing an efficient safety mechanism. The example ofFIG.1includes an example signal processing chain104, example hardware accelerators106a-(n−1),118, example combinatory logic circuits (e.g., combo logic)108a-n, example memory-mapped registers (MMRs)110a-n, example flip flops112a-n, example unprotected memory114a-n, example protected memory116a-n, an example filtering hardware accelerator118, an example outlier filter120, example local/on-chip memory122, an example host device124, an example signal processing sub-system interface125, example interface protectors126a-n, an example multiplexer (MUX)128, and an example de-multiplexer (DE-MUX)130. The example signal processing sub-system102ofFIG.1is a circuit that processes a signal from the example host device124via the example input interface131. The example signal processing sub-system102includes circuitry to perform a specific function. For example, the signal processing sub-system102may be an image processing sub-system to receive raw pixel data and process the data to generate an image. The example signal processing sub-system102performs a specified function using the example signal processing pipeline104. The example signal processing pipeline104ofFIG.1includes the example HWAs106a-(n−1),118to perform a signal processing protocol. When the example signal processing sub-system102is enabled and receives an input signal, the input signal is processed, in series, by the example HWAs106a-(n−1),118.
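A minimal sketch of the serial processing just described, assuming each HWA task is modeled as a Python callable; the function and variable names are hypothetical and not drawn from the figures.

def run_pipeline(input_signal, hwa_stages):
    # The signal enters the first stage and the output of each stage feeds the
    # next, mirroring how the input signal is processed in series by the HWAs.
    data = input_signal
    for stage in hwa_stages:
        data = stage(data)
    return data

# Hypothetical usage: processed = run_pipeline(raw_pixels, [demosaic, denoise, outlier_filter])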
The HWAs106a-(n−1),118correspond to a particular task, such that when the input signal is received from the example host device124via the input interface131(e.g., a bus that inputs data from the host device124to the signal processing sub-system102), the input signal is processed by the HWA106a-(n−1),118. Once the signal processing pipeline104processes the input signal, the processed signal is transmitted back to the example host device124via the example output interface132. The example HWAs106a-(n−1),118ofFIG.1each include the example combo logic108a-n, the example MMR110a-n, the example flip flops112a-n, the example unprotected memory114a-nand the example protected memory116a-n. The example combo logic108a-nincludes hardware to perform a particular task based on the instructions from the example host device124. The example MMRs110a-nare configuration registers that define pseudo-static configuration of the example flip flops112a-ncorresponding to how the combo logic108a-noperates (e.g., defining the particular task performed by the combo logic108a-n). When an input signal is received by a HWA (e.g., the first example HWA106a), the data of the input signal is stored in the example memories114a,116aofFIG.1. Once stored, the example combo logic108aperforms the predefined function (e.g., based on the configuration of the example MMR110a) based on the stored data. The input signal may include various data. For example, if the input signal corresponds to image data, the input data may include pixel data (e.g., a value corresponding to each pixel), motion vectors, stereo disparity, look up table information, filter coefficients, statistics, protocol information, etc. As described above, memory protection (e.g., Parity protection, error correcting code (ECC) protection, etc.) requires additional resources and board space, adding cost and size to an IC. For example, including ECC protocols require extra memory to store computed values during a write data operation and comparing to what is output when the data is read. Accordingly, to conserve board space and reduce cost, critical data is stored in the example protected memory116a-n, which includes memory protection against memory failures (e.g., added memory registers to facilitate parity protocols, ECC protocols, etc.), while other data is stored in the unprotected memory114a-n, which does not include memory protection against memory failures. Critical data corresponds to data whose failure (e.g., one or more bit failure) results in corruption of a large portion of the output (e.g., more than a threshold amount of data corruption). For example, in an imaging processing system, a failure corresponding to critical data may result in a large area of the image being unviewable (e.g., entirely black, white, or otherwise distorted). Such failures result in post processing failures (e.g., object detection failures). In such an example, pixel data corresponding to a pixel corruption may result in a single pixel being unviewable, which may have limited or no effect on post processing failures. Accordingly, the example protected memory116a-nstores critical data and the example unprotected memory114a-nstores non-critical data (e.g., data whose failure corresponds to less than a threshold amount of corruption). Lookup tables, protocol information, filter coefficients, and/or statistical memory within HWA may be predefined as critical data and data buffers (e.g., pixel data, motion vectors, and/or stereo disparity) may be predefined as non-critical data. 
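The classification above suggests a simple allocation policy. The sketch below is one possible Python rendering under the assumption that buffers arrive tagged with a kind, with the set of critical kinds taken from the examples in this paragraph; none of the names come from the figures.

CRITICAL_KINDS = {"lookup_table", "protocol_info", "filter_coefficients", "statistics"}

def allocate(buffers, protected_memory, unprotected_memory):
    # Route each tagged buffer: kinds whose corruption would exceed the
    # criticality threshold go to parity/ECC protected memory, the rest
    # (e.g., pixel data, motion vectors, stereo disparity) stay unprotected.
    for kind, data in buffers.items():
        if kind in CRITICAL_KINDS:
            protected_memory[kind] = data
        else:
            unprotected_memory[kind] = data
    return protected_memory, unprotected_memory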
In some examples, a user and/or manufacturer may determine which data should be stored in the unprotected114a-nand/or the protected memory116a-n. The example filtering HWA118ofFIG.1is a HWA structured to filter the processed input signal prior to being output to the example host device124. Because non-critical data is stored in the example unprotected memory114a-nofFIG.1, individual data values may be corrupted. For example, if the example signal processing sub-system102is an image processing sub-system, individual pixel value may be corrupted. Accordingly, the example filtering HWA118may be implemented to filter the processed image, thereby increasing the quality of the output image by filtering out corrupted pixel values (e.g., adjusting the corrupted pixel value to a preset value corresponding to a better image quality). The example filtering HWA118includes the example outlier filter120to adjust outlier values, thereby increasing the overall data quality. For example, the outlier filter120may compare a pixel value to neighboring pixels (e.g., pixels within a threshold distance to a selected pixel). In some examples, the outlier filter120may determine outlier pixels based on the comparison and adjust the value of the pixel based on the neighboring pixel values. In this manner, corrupted pixel values are identified and adjusted to generate a higher quality image. The example filtering HWA118is the last HWA in the example signal processing pipeline104to filter out the output data prior to returning to the example host device124. The example outlier filter120is further described below in conjunction withFIG.4. The example local/on-chip memory122ofFIG.1is a common pool of memory that receives data and transmits the data in series to the example HWAs106a-(n−1),118. For example, the example local/on-chip memory122receives and stores the input data from the example host device124. The example local/on-chip memory122transmits the stored input data to the first example HWA106aand receives and stores the output of the first example HWA106a. The example local/on-chip memory122continues to store and transmit the stored data throughout the example signal processing pipeline104until the example filtering HWA118transmits the output (e.g., the processed and filtered signal). Once the output is received, the example local/on-chip memory122transmits the output data to the example host device124using the example output interface132(e.g., a bus to output data from the signal processing sub-system102to the host device124). The example local/on-chip memory122may be L2 cache (e.g., implemented by SRAM) and/or L3 cache. The example host device124ofFIG.1is a processing unit that transmits input data to the example signal processing sub-system102and receives the output data corresponding to the input data after being processed. For example, the host device124may be implemented as part of an auto-driving and/or semi auto-driving system on a vehicle (e.g., car, plane, train, board, etc.). In such an example, the host device124may receive image data from a sensor on the vehicle and transmit the data to the example signal processing sub-system102to be processed. Once processed, the example host device124may perform analytics and/or operate based on the processed data. For example, the host device124may utilize a processed image to make determinations regarding how to control the vehicle. The example host device124ofFIG.1includes the example signal processing sub-system interface125. 
The example signal processing sub-system interface125ofFIG.1interfaces with the example signal processing sub-system102to initiate signal processing of data. For example, the signal processing sub-system interface125may transmit control/configuration instructions to the MMRs110a-nto configure the example HWAs106a-(n−1),118to perform signal processing. Additionally, the example signal processing sub-system interface125may initiate a signal processing scheme by transmitting the input data to the example signal processing sub-system102. The example signal processing sub-system interface125receives the processed signal via the output interface132. In some examples, the signal processing sub-system interface125may include (e.g., generate and add) redundant bits in the control signal of the input data. In such examples, the interface protectors126a-nmay calculate the redundant bits from the control signal and transmit the redundant bits back to the example signal processing sub-system interface125. Each interface protector126a-ncorresponds to a hardware component of the example signal processing sub-system102that is outputting the signal. For example, the first example interface protector126acorresponds to an error of the example local/on-chip memory122, the second example interface protector126bcorresponds to an error of the first example HWA106a, etc. Additionally or alternatively, any number of the example interface protectors126a-nmay be used at any part of the example signal processing sub-system102. The example signal processing sub-system interface125ofFIG.1compares the generated redundant bits to the received redundant bits to determine if an HWA is faulty. For example, if the redundant bits from the second interface protector126bcorresponding to the first HWA106ado not match the generated redundant bits, the example signal processing sub-system interface125determines that the first HWA106ais faulty. An example of the example interface protectors126a-nis further described below in conjunction withFIG.6. In some examples, the signal processing sub-system interface125performs a hardware test (e.g., a golden test) of the example signal processing sub-system102by controlling the example MUX128and the example De-MUX130to input a known test input with a known test output. In this manner, the example signal processing sub-system interface125can compare the known test output to a received output to determine the status of the example signal processing sub-system102and/or make adjustments to the example signal processing sub-system102. The example signal processing sub-system interface125is further described below in conjunction withFIG.5. Although the example signal processing sub-system102ofFIG.1includes three safety hook protections (e.g., Parity/ECC protection of the example protected memory116a-n, interface protection using the example interface protectors126a-n, and golden test input and output using the example test input/output interface), other safety hook protections may additionally be utilized. For example, the example flip flops112a-nmay be implemented with dice flops for critical registers, firewall and magic pattern protection may be implemented on the example MMR interface for unwanted writes, watch-dog timer protection may be implemented in the example combo logic108a-n, etc. FIG.2illustrates an alternative example signal processing sub-system200providing an efficient safety mechanism.
The example ofFIG.2includes the example signal processing chain104, the example hardware accelerators106a-(n−1), the example combo logic108a-n−1, the example memory-mapped registers (MMRs)110a-n−1, the example flip flops112a-n−1, the example unprotected memory114a-n−1, the example protected memory116a-n−1, the example outlier filter120, the example local/on-chip memory122, the example host device124, the example signal processing sub-system interface125, the example interface protectors126a-n−1, the example multiplexer (MUX)128, and the example de-multiplexer (DE-MUX)130. Because many types of signal processing sub-systems already include an HWA that performs a filtering operation, the alternative example signal processing sub-system200ofFIG.2rearranges the example HWAs106a-(n−1) such that an HWA that already corresponds to filtering is moved to the end of the example signal processing pipeline104, thereby allowing a final filtering protocol using an HWA already dedicated to filtering (e.g., combining an outlier filter operation with a pre-existing filter operation), without requiring an extra HWA (e.g., the example filtering HWA118ofFIG.1). In this manner, the alternative example signal processing sub-system200conserves space and reduces cost by utilizing the combo HWA106afor a combination of its signal processing purpose and post-process filtering. For example, in a visual/imaging sub-system (VISS), a defective pixel correction HWA may be moved to the end of the example signal processing pipeline104. In a Vision Processing Acceleration (VPAC) sub-system (e.g., used for image processing automotive chips), a noise filter HWA may be moved to the end of the example signal processing pipeline104. In a depth and motion processing accelerator (DMPAC) sub-system (e.g., used for image processing automotive chips), a post-processing median filter HWA may be moved to the end of the optical flow and stereo disparity processing in the example signal processing pipeline104. Although the example ofFIG.2illustrates the first example HWA106aas being a filtering HWA, any of the example HWAs106a-(n−1) may be a filtering HWA. FIG.3illustrates an alternative example signal processing sub-system300providing an efficient safety mechanism. The example ofFIG.3includes the example signal processing chain104, the example hardware accelerators106a-(n−1), the example combo logic108a-n−1, the example memory-mapped registers (MMRs)110a-n−1, the example flip flops112a-n−1, the example unprotected memory114a-n−1, the example protected memory116a-n−1, the example outlier filter120, the example local/on-chip memory122, the example host device124, the example signal processing sub-system interface125, the example interface protectors126a-n−1, the example multiplexer (MUX)128, and the example de-multiplexer (DE-MUX)130. In the illustrated example ofFIG.3, the nth example HWA106n(e.g., the last HWA in the example signal processing pipeline104) has been structured (e.g., using the example MMR110n) to (A) perform the last step of the signal processing (e.g., using the example combo logic108n-1) and (B) filter the output signal (e.g., using the example outlier filter120). In this manner, the nth example HWA106nhas a dual purpose to conserve space and reduce the cost of including the example filtering HWA118ofFIG.1in the example signal processing pipeline104. FIG.4is a block diagram of the example outlier filter120ofFIGS.1-3.
The example outlier filter120includes an example receiver400, an example neighbor array determiner402, an example data processor404, an example data adjuster406, and an example transmitter408. The example outlier filter120is described in conjunction with the example signal processing sub-system102,200,300ofFIGS.1-3. Alternatively, the example outlier filter120ofFIGS.1-3may be utilized with any type of signal processing sub-system. The example receiver400ofFIG.4receives the data from the example unprotected memory114a-n. For example, in an image processing sub-system, the example receiver400may receive pixel data from the example unprotected memory114a-n. The pixel data includes pixel values for pixels that correspond to an image, for example. Each pixel value corresponds to a color and/or brightness. The example neighbor array determiner402ofFIG.4generates neighbor arrays for one or more predefined pixels for pixel comparison. For example, the neighbor array determiner402may generate neighboring arrays of predefined pixels of an image for such pixel value comparisons. The neighbor array determiner402may generate an N×M array corresponding to a predefined pixel. For example, the neighbor array determiner402may generate a 3×3 array of pixels for a predefined pixel, where the predefined pixel is the center of the 3×3 array. In such examples, different dimension arrays may be generated for different pixels and/or the predefined pixel may be in a different location. For example, because a corner pixel cannot be the center of a neighbor array, a 2×2 neighbor array may be generated for a predefined corner pixel, where the corner pixel is the corner of the neighbor array. In some examples, the neighbor array determiner402may generate N×M arrays for a limited number of predefined pixels of an image (e.g., such that the image is broken up into multiple non-overlapping neighbor arrays that together include every pixel of the image). In some examples, the neighbor array determiner402may generate an N×M array for all pixels of an image, where N and M may vary from pixel to pixel. The example data processor404ofFIG.4processes one or more predefined pixels based on a comparison of the predefined pixels to the corresponding neighboring array to determine if a predefined pixel is an outlier. In some examples, the data processor404determines if the average difference (e.g., Hamming distance) between the predefined pixel value and the neighboring pixel values is more than a threshold. For example, if the predefined pixel has a value of 1, and the neighbor array pixels have values of 10, 9, 11, 8, 10, 6, 10, and 11, the average distance with respect to the predefined pixel is approximately 8.4 (e.g., ((10-1)+(9-1)+(11-1)+(8-1)+(10-1)+(6-1)+(10-1)+(11-1))/8). In such an example, if the threshold corresponds to 6, then the data processor404determines that the predefined pixel is faulty (e.g., an outlier). In some examples, the data processor404finds the minimum and maximum pixel values of the neighbor array pixels and, if the predefined pixel value is more than a maximum threshold or less than a minimum threshold, the example data processor404determines that the predefined pixel is faulty (e.g., an outlier). Using the above example, because the predefined pixel has a value of 1 and the minimum value of a pixel in the neighbor array is 6, the example data processor404determines that the predefined pixel is an outlier. The example data adjuster406ofFIG.4adjusts the predefined pixel values to filter the image.
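A minimal sketch of the detection step described above, assuming the image is a plain Python list of rows; the helper names are illustrative, and edge pixels simply receive smaller neighbor arrays, as in the 2×2 corner case.

def neighbor_values(image, row, col):
    # Gather the values adjacent to (row, col), clipping at the image border.
    values = []
    for r in range(max(0, row - 1), min(len(image), row + 2)):
        for c in range(max(0, col - 1), min(len(image[0]), col + 2)):
            if (r, c) != (row, col):
                values.append(image[r][c])
    return values

def average_difference(pixel_value, neighbors):
    # Average absolute difference between the predefined pixel and its neighbors.
    return sum(abs(n - pixel_value) for n in neighbors) / len(neighbors)

# Worked example from the text: ((10-1)+(9-1)+...+(11-1))/8 = 8.375, roughly 8.4,
# which exceeds a threshold of 6, so the pixel is flagged as an outlier.
print(average_difference(1, [10, 9, 11, 8, 10, 6, 10, 11]) > 6)   # True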
In some examples, the data adjuster406adjusts predefined pixel values that are outliers. For example, if the example data processor404determines that the predefined pixel value is an outlier because the average distance to the predefined pixel value with respect to the neighbor pixel values is greater than a threshold, the example data adjuster406adjusts the predefined pixel value to the average pixel value of the neighborhood array (e.g., which may be calculated by the data processor404based on instructions from the data adjuster406). In another example, if the example data processor404determines that the predefined pixel value is an outlier because the predefined pixel value is lower than the minimum value of the neighbor array or higher than the maximum value of the neighbor array, the example data adjuster406adjusts the predefined pixel value to the minimum or maximum value of the neighbor array. Once the example data adjuster406adjusts the values of the predefined outlier pixels, the filter protocol is complete. In some examples, the data adjuster406adjusts the value of the predefined pixel value regardless whether the value is an outlier or not. For example, the data adjuster406may instruct the data processor404to determine the median of all pixel values of the neighbor array and adjust the value of the predefined pixel to be the median value. In such examples, once the values of the predefined pixels have been adjusted, the filtering protocol is complete. Once complete, the example transmitter408transmits the filtered data to the example local/on-chip memory122ofFIG.2. FIG.5is a block diagram of the example signal processing sub-system interface125ofFIGS.1-3. The example signal processing sub-system interface125includes an example transmitter500, an example receiver502, an example data comparator504, and an example alerter506. The example transmitter500ofFIG.5transmits input signals/test signals to the example signal processing sub-system102,200,300ofFIGS.1-3via the input interface/test interface. Additionally, the example transmitter500may transmit a select signal to the example MUX128and/or the example De-MUX130when transmitting an input signal or a test signal. In some examples, the transmitter500adds redundant bits to the input signal that is transmitted to the example signal processing sub-systems102,200,300. The example transmitter500may add the redundant bits using a parity check protocol, a cyclic redundancy check protocol, a checksum protocol, etc. Additionally, the example transmitter500transmits control/configure signals to configure the example MMRs110a-nof the example HWAs106a-(n−1),118ofFIGS.1-3. The example transmitter500may transmit the received data (e.g., input data that has been processed and filtered by the example signal processing sub-system102,200,300) and/or alerts to other components of the example host device124for further processing. The example receiver502ofFIG.5receives the output (e.g., the signal processed input) and/or a test output signal via the output interface132/test output interface. Additionally, the example receiver502receives redundant bit calculations from the example interface protectors126a-n. The example data comparator504ofFIG.5compares known data to received data to identify hardware failures. 
For example, when the transmitter500transmits input data with known redundant bits added and the receiver502receives the redundant bits from the example interface protectors126a-n, the example data comparator504compares the added (e.g., known) redundant bits to the received redundant bits. If the received redundant bits match the added redundant bits, the example data comparator504determines that no hardware error has occurred. If one or more of the received redundant bits do not match the added redundant bits, the example data comparator504determines that a hardware error has occurred. Additionally, the example data comparator504determines where the hardware error has occurred. For example, if the received redundant bits begin to mismatch the added redundant bits starting at a first interface protector location, the example data comparator504may determine that the error occurred at the HWA preceding the interface protector that corresponds to a mismatch. In some examples, to conserve resources, the example data comparator504may only compare the received redundant bits from the last interface protector126n(e.g., the interface protector that determines the redundancy bits after the input signal has been fully processed and filtered). In this manner, if the comparison results in a match, the example data comparators504assumes that no error has occurred. If the comparison results in a mismatch, the example data comparator504may then compare the other received redundancy bits from the other interface protectors126a-n, to identify where the fault occurred. In some examples, the data comparator504compares known test data to a received test output of the example signal processing sub-system102,200,300to determine if the example signal processing sub-system102,200,300is faulty or not (e.g., based on whether the test output data matches the known output data). The example alerter506ofFIG.5generates an alert to the other components of the example host device124when the example data comparator504determines that an error has occurred. The alert may be a signal/trigger that causes other components of the example host device124to perform a function. In some examples, the alert may be transmitted to a user via a user interface. FIG.6is a block diagram of the example interface protector126aofFIGS.1-3. The example interface protector126aincludes an example redundant bit determiner600. Although the example ofFIG.6is described in conjunction with the example interface protector126aofFIG.1, the example ofFIG.6may be described in conjunction with any of the example interface protectors126a-n. The example redundant bit determiner600ofFIG.6receives the control signal to determine the redundant bits from the example control signal. As explained above, an input signal may include a control signal and a data signal. The control signal may include address information, protocol information, etc., while the data signal includes the pixel values. The example redundant bit determiner600determines redundant bits using the control signal, because errors in the control signal correspond to more critical errors in the output signal. However, in some examples, the example redundant bit determiner600may determine redundancy bits from the data signal and/or from a combination of the control signal and the data signal. The example redundant bit determiner600may determine the redundant bits using a parity check protocol, a cyclic redundancy check protocol, a checksum protocol, etc. 
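As a hedged illustration of one of the protocols named above (a per-word parity check; CRC or checksum variants would follow the same pattern), the redundant-bit computation might look like the following sketch, where control_words is a hypothetical list of integer control values.

def redundant_bits(control_words):
    # One even-parity bit per control word: 1 if the word holds an odd number
    # of set bits, 0 otherwise. Both the host and each interface protector can
    # run the same computation over the control signal they observe.
    return [bin(word).count("1") % 2 for word in control_words]

# Hypothetical usage: the host stores redundant_bits(sent_control_words) and later
# compares it against the bits each interface protector reports back.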
Once the example redundant bit determiner600determines the redundant bits from the control signal, the redundant bit determiner600transmits the determined redundant bits to the example host device124for further processing (e.g., to compare the determined redundant bits with the known redundant bits to identify a hardware error). In some examples, to conserve resources, the redundant bit determiner600generates a signature packet (e.g., a parity packet) based on the redundant bits and transmits the signature packet to the example host device124for processing. While an example manner of implementing the example outlier filter120ofFIGS.1-3is illustrated inFIG.4, an example manner of implementing the example signal processing sub-system interface125ofFIGS.1-3is illustrated inFIG.5, and an example manner of implementing the example interface protector126a-nofFIGS.1-3is illustrated inFIG.6, one or more of the elements, processes and/or devices illustrated inFIGS.4-6may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way. Further, the example receiver400, the example neighbor array determiner402, the example data processor404, the example data adjuster406, the example transmitter408, the example transmitter500, the example receiver502, the example comparator504, the example alerter506, the example redundant bit determiner600, and/or, more generally, the example outlier filter120, signal processing sub-system interface125, and/or interface protector126a-nofFIGS.4-6may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware. Thus, for example, any of the example receiver400, the example neighbor array determiner402, the example data processor404, the example data adjuster406, the example transmitter408, the example transmitter500, the example receiver502, the example comparator504, the example alerter506, the example redundant bit determiner600, and/or, more generally, the example outlier filter120, signal processing sub-system interface125, and/or interface protector126a-nofFIGS.4-6could be implemented by one or more analog or digital circuit(s), logic circuits, programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)). When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the example receiver400, the example neighbor array determiner402, the example data processor404, the example data adjuster406, the example transmitter408, the example transmitter500, the example receiver502, the example comparator504, the example alerter506, the example redundant bit determiner600, and/or, more generally, the example outlier filter120, signal processing sub-system interface125, and/or interface protector126a-nofFIGS.4-6is/are hereby expressly defined to include a non-transitory computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. including the software and/or firmware. Further still, the example the example outlier filter120, signal processing sub-system interface125, and/or interface protector126a-nofFIGS.4-6may include one or more elements, processes and/or devices in addition to, or instead of, those illustrated inFIGS.4-6, and/or may include more than one of any or all of the illustrated elements, processes and devices. 
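Complementing the sketch above, the host-side comparison described in conjunction with FIG. 5 could be approximated as follows; packing the parity bits into a single signature value mirrors the signature (e.g., parity) packet idea, and ordering the protector reports by pipeline position is an assumption made for illustration.

def signature(bits):
    # Fold a list of redundant bits into one integer so a single value can be
    # returned to the host instead of the full bit list.
    packed = 0
    for bit in bits:
        packed = (packed << 1) | bit
    return packed

def locate_fault(expected_signature, reported_signatures):
    # Compare the host-generated signature against each interface protector's
    # report, in pipeline order; the first mismatch points at the hardware
    # element feeding that protector. Returns None when everything matches.
    for position, reported in enumerate(reported_signatures):
        if reported != expected_signature:
            return position
    return None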
Flowcharts representative of example machine readable instructions for implementing the example outlier filter120, signal processing sub-system interface125, and/or interface protector126a-nofFIGS.4-6are shown inFIGS.7-11. In this example, the machine readable instructions comprise a program for execution by a processor such as the processor1312shown in the example processor platform1300discussed below in connection withFIG.13. The program may be embodied in software stored on a non-transitory computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a digital versatile disk (DVD), a Blu-ray disk, or a memory associated with the processor1312, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor1312and/or embodied in firmware or dedicated hardware. Further, although the example program is described with reference to the flowcharts illustrated inFIGS.7-11, many other methods of implementing the example outlier filter120, signal processing sub-system interface125, and/or interface protector126a-nofFIGS.4-6may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined. Additionally or alternatively, any or all of the blocks may be implemented by one or more hardware circuits (e.g., discrete and/or integrated analog and/or digital circuitry, a Field Programmable Gate Array (FPGA), an Application Specific Integrated circuit (ASIC), a comparator, an operational-amplifier (op-amp), a logic circuit, etc.) structured to perform the corresponding operation without executing software or firmware. As mentioned above, the example processes ofFIGS.7-11may be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. “Including” and “comprising” (and all forms and tenses thereof) are used herein to be open ended terms. Thus, whenever a claim lists anything following any form of “include” or “comprise” (e.g., comprises, includes, comprising, including, etc.), it is to be understood that additional elements, terms, etc. may be present without falling outside the scope of the corresponding claim. As used herein, when the phrase “at least” is used as the transition term in a preamble of a claim, it is open-ended in the same manner as the term “comprising” and “including” are open ended. FIG.7is an example flowchart700representative of example machine readable instructions that may be executed by the example outlier filter120ofFIG.4to filter outlier pixel values corresponding to an image. Although the instructions ofFIG.7are described in conjunction with the example outlier filter120ofFIGS.1-4, the example instructions may be utilized by any type of outlier filter in any type of hardware accelerator. 
At block702, the example receiver400receives pixel data from the example unprotected memory114a-nofFIGS.1-4. The pixel data includes a location (e.g., within an array corresponding to a picture) and value (e.g., brightness and/or color) of the pixels. At block704, the example data processor404selects a pixel from the pixel data. When the example outlier filter120filters pixel data, the example outlier filter120may filter each individual pixel or may filter certain predefined pixels based on a location of the predefined pixel. Accordingly, the selected pixel may be a predefined pixel based on a predefined location and/or may be a first pixel of the pixel data (e.g., a pixel corresponding to the top-right corner of the image). At block706, the example neighbor array determiner402determines a neighbor array of neighboring pixels for the selected pixel. For example, if the pixel is the top-left corner pixel, the example neighbor array determiner402may determine a neighbor array based on the pixels that are next to (e.g., to the right of, below, and diagonal to) the selected pixel, thereby generating a 2×2 pixel array. In another example, if the pixel is not a corner or side pixel, the example neighbor array determiner402may determine a neighbor array based on the pixels surrounding (e.g., to the right of, below, above, to the left of, and diagonal to) the selected pixel, thereby generating a 3×3 array where the selected pixel is the center of the array. Alternatively, the example neighbor array determiner402may determine a pixel array based on any dimensions. At block708, the example data processor404determines an average difference between the values of the selected pixel and the neighboring pixels of the neighbor array. For example, if the selected pixel has a value of 1, and the neighbor array pixels have values of 10, 9, 11, 8, 10, 6, 10, and 11, the average distance with respect to the selected pixel is approximately 8.4 (e.g., ((10-1)+(9-1)+(11-1)+(8-1)+(10-1)+(6-1)+(10-1)+(11-1))/8). At block710, the example data processor404determines if the average difference is greater than a threshold (e.g., defined by a user and/or manufacturer). Using the above example, if the threshold corresponds to a value of 5, for example, the example data processor404determines that the average difference is greater than the threshold. In another example, if the threshold corresponds to a value of 5 and the average distance is 4.2, the example data processor404determines that the average distance is less than the threshold. If the example data processor404determines that the average distance is not greater than the threshold (block710: NO), the process continues to block716. If the example data processor404determines that the average distance is greater than the threshold (block710: YES), the example data adjuster406calculates the average value of the neighbor array (e.g., with or without the selected pixel) (block712). At block714, the example data adjuster406replaces the value of the selected pixel with the average value. For example, if the average value of the neighbor array is 8, the example data adjuster406may replace the outlier value of the selected pixel with the value 8 (e.g., the average). At block716, the example data processor404determines if there is a subsequent pixel to filter. For example, if there are 100 predefined pixels to filter and only one has been filtered, the example outlier filter120will select a subsequent predefined pixel and continue to filter until all 100 predefined pixels have been filtered.
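The per-pixel flow of blocks 702-716 described above can be condensed into the following sketch, assuming pixels is a dict mapping pixel locations to values and neighbors_of is a hypothetical helper returning the neighbor-array values for a location; neither name comes from the figures.

def filter_by_average(pixels, neighbors_of, threshold):
    # For each selected pixel: build its neighbor array, compute the average
    # difference, and replace outliers with the neighbor-array average.
    filtered = dict(pixels)
    for location, value in pixels.items():
        neighbors = neighbors_of(location)                                   # block 706
        avg_diff = sum(abs(n - value) for n in neighbors) / len(neighbors)   # block 708
        if avg_diff > threshold:                                             # block 710
            filtered[location] = sum(neighbors) / len(neighbors)             # blocks 712-714
    return filtered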
If the example data processor404determines that there is a subsequent pixel to filter (block716: YES), the example data processor404selects the subsequent pixel (block718) and the process returns to block706. If the example data processor404determines that there is not a subsequent pixel to filter (block716: NO), the example transmitter408outputs the filtered pixel data to the example local/on-chip memory122(block720). FIG.8is an example flowchart800representative of example machine readable instructions that may be executed by the example outlier filter120ofFIG.4to filter outlier pixel values corresponding to an image. Although the instructions ofFIG.8are described in conjunction with the example outlier filter120ofFIGS.1-4, the example instructions may be utilized by any type of outlier filter in any type of hardware accelerator. At block802, the example receiver400receives pixel data from the example unprotected memory114a-nofFIGS.1-4. The pixel data includes a location (e.g., within an array corresponding to a picture) and value (e.g., brightness and/or color) of the pixels. At block804, the example data processor404selects a pixel from the pixel data. When the example outlier filter120filters pixel data, the example outlier filter120may filter each individual pixel or may filter certain predefined pixels based on a location of the predefined pixel. Accordingly, the selected pixel may be a predefined pixel based on a predefined location and/or may be a first pixel of the pixel data (e.g., a pixel corresponding to the top-right corner of the image). At block806, the example neighbor array determiner402determines a neighbor array of neighboring pixels for the selected pixel. For example, if the pixel is the top-left corner pixel, the example neighbor array determiner402may determine a neighbor array based on the pixels that are next to (e.g., to the right of, below, and diagonal to) the selected pixel, thereby generating a 2×2 pixel array. In another example, if the pixel is not a corner or side pixel, the example neighbor array determiner402may determine a neighbor array based on the pixels surrounding (e.g., to the right of, below, above, to the left of, and diagonal to) the selected pixel, thereby generating a 3×3 array where the selected pixel is the center of the array. Alternatively, the example neighbor array determiner402may determine a pixel array based on any dimensions. At block808, the example data processor404determines a minimum and a maximum value of the neighboring pixels in the neighbor array. For example, if the neighbor array pixels have values of 10, 9, 11, 8, 10, 6, 10, and 11, the example data processor404determines that the minimum pixel value is 6 and the maximum pixel value is 11. At block810, the example data processor404determines if the value of the selected pixel is above the maximum value. Using the above example, if the selected pixel value is 14, for example, the example data processor404determines that the value of the selected pixel is above the maximum value of 11. In another example, if the selected pixel value is 10, for example, the example data processor404determines that the value of the selected pixel is not above the maximum value of 11. If the example data processor404determines that the value of the selected pixel is not above the maximum pixel value (block810: NO), the process continues to block814.
If the example data processor404determines that the value of the selected pixel is above the maximum value (block810: YES), the example data adjuster406replaces the value of the selected pixel with the maximum value of the neighbor array (block812). For example, if the selected pixel value is 14 and the maximum neighbor pixel value is 11, the example data adjuster406replaces the selected pixel value of 14 with the maximum value of 11. At block814, the example data processor404determines if the value of the selected pixel is below the minimum value. If the example data processor404determines that the value of the selected pixel is not below the minimum pixel value (block814: NO), the process continues to block818. If the example data processor404determines that the value of the selected pixel is below the minimum value (block814: YES), the example data adjuster406replaces the value of the selected pixel with the minimum value of the neighbor array (block816). At block818, the example data processor404determines if there is a subsequent pixel to filter. For example, if there are 100 predefined pixels to filter and only one has been filtered, the example outlier filter120will select a subsequent predefined pixel and continue to filter until all 100 predefined pixels have been filtered. If the example data processor404determines that there is a subsequent pixel to filter (block818: YES), the example data processor404selects the subsequent pixel (block820) and the process returns to block806. If the example data processor404determines that there is not a subsequent pixel to filter (block818: NO), the example transmitter408outputs the filtered pixel data to the example local/on-chip memory122(block822). FIG.9is an example flowchart900representative of example machine readable instructions that may be executed by the example outlier filter120ofFIG.4to filter outlier pixel values corresponding to an image. Although the instructions ofFIG.9are described in conjunction with the example outlier filter120ofFIGS.1-4, the example instructions may be utilized by any type of outlier filter in any type of hardware accelerator. At block902, the example receiver400receives pixel data from the example unprotected memory114a-nofFIGS.1-4. The pixel data includes a location (e.g., within an array corresponding to a picture) and value (e.g., brightness and/or color) of the pixels. At block904, the example data processor404selects a pixel from the pixel data. When the example outlier filter120filters pixel data, the example outlier filter120may filter each individual pixel or may filter certain predefined pixels based on a location of the predefined pixel. Accordingly, the selected pixel may be a predefined pixel based on a predefined location and/or may be a first pixel of the pixel data (e.g., a pixel corresponding to the top-right corner of the image). At block906, the example neighbor array determiner402determines a neighbor array of neighboring pixels for the selected pixel. For example, if the pixel is the top-left corner pixel, the example neighbor array determiner402may determine a neighbor array based on the pixels that are next to (e.g., to the right of, below, and diagonal to) the selected pixel, thereby generating a 2×2 pixel array.
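The clamping performed in blocks 808-816 above amounts to bounding each selected pixel by the minimum and maximum of its neighbor array; a minimal sketch, using the same hypothetical pixels/neighbors_of representation as the earlier sketch, follows.

def clamp_to_neighbors(pixels, neighbors_of):
    # Replace values above the neighbor-array maximum with that maximum
    # (block 812) and values below the minimum with that minimum (block 816).
    filtered = dict(pixels)
    for location, value in pixels.items():
        neighbors = neighbors_of(location)
        low, high = min(neighbors), max(neighbors)
        if value > high:
            filtered[location] = high
        elif value < low:
            filtered[location] = low
    return filtered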
In another example, if the pixel is not a corner or side pixel, the example neighbor array determiner402may determine a neighbor array based on the surrounding (e.g., to the right of, below, above, to the left of, and diagonal to) the selected pixel, thereby generating a 3×3 array where the selected pixel is the center of the array. Alternatively, the example neighbor array determiner402may determine a pixel array based on any dimensions. At block908, the example data processor404determines a median value of the neighbor array including the selected pixel. For example, if the selected pixel has a value of 1 and the neighbor array pixels have values of 10, 9, 11, 8, 10, 6, 10, and 11, the example data processor404determines the median value to be 10. At block910, the example data adjuster406replaces the selected pixel with the median value. Using the above example, the example data adjuster406replaces the selected pixel value of 1 with the median pixel value of 10. At block912, the example data processor404determines if there is a subsequent pixel to filter. For example, if there are 100 predefined pixels to filter and only one has been filtered, the example outlier filter120will select a subsequent predefined pixel and continue to filter until all 100 predefined pixels have been filtered. If the example data processor404determines that there is a subsequent pixel to filter (block912: YES), the example data processor404selects the subsequent pixel (block914) and the process returns to block906. If the example data processor404determines that there is not a subsequent pixel to filter (block912: NO), the example transmitter408outputs the filtered pixel data to the example local/on chip memory122(block916). FIG.10is an example flowchart1000representative of example machine readable instructions that may be executed by the example signal processing sub-system interface125ofFIG.5to generate an alert based on a hardware failure of the example signal processing sub-system102,200,300ofFIGS.1-3. Although the instructions ofFIG.10are described in conjunction with the example signal processing sub-system interface125ofFIGS.1-3and/or5, the example instructions may be utilized by any type of signal processing sub-system interface within any type of host device. At block1002, the example transmitter500transmits instructions to the example MMRs110a-nof the example HWAs106a-(n−1),118to initiate the example combo logic108a-nfor a particular signal processing protocol. At block1004, the example transmitter500transmits an input signal into the example signal processing sub-system102,200,300with redundant bits included in the control signal. For example, the host device124may include or communicate with an image capturing sensor. The sensor may capture raw image data that may require image processing to render. Accordingly, the example transmitter500may transmit the raw image data along with control data as an input signal to the example signal processing sub-system102,200,300. Additionally, the example transmitter500generates and includes redundant bits in the control signal to identify hardware errors corresponding to the example signal processing sub-system102. The example transmitter500may add the redundant bits using a parity check protocol, a cyclic redundancy check protocol, a checksum protocol, etc. At block1006, the example receiver502receives redundant bits from the example interface protectors126a-n.
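For illustration only, a minimal sketch of the median-based filtering of FIG. 9 is shown below under the same assumptions as the previous sketch (NumPy arrays, example values chosen to mirror the description above); it is not the disclosed hardware implementation.

import numpy as np

def median_filter(image):
    filtered = image.copy()
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            # Neighbor array including the selected pixel (analogous to blocks 906-910).
            window = image[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            filtered[r, c] = np.median(window)
    return filtered

# Using the example above: a selected value of 1 surrounded by
# 10, 9, 11, 8, 10, 6, 10, 11 is replaced with the median value of 10.
pixels = np.array([[10,  9, 11],
                   [ 8,  1, 10],
                   [ 6, 10, 11]])
print(median_filter(pixels))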
For example, the receiver502may receive the redundant bits directly from each of the interface protectors126a-nvia dedicated hardware data lines coupled between respective interface protectors126a-nand the host device124. As described above in conjunction withFIG.6, the interface protectors126a-nprocess the control signal to determine the embedded redundant bits and transmit the determined redundant bits back to the example signal processing sub-system interface125. If the received redundant bits do not match the redundant bits generated by the example transmitter500, a hardware error has occurred. At block1008, the example data comparator504compares the generated redundant bits (e.g., the redundant bits added to the control signal by the transmitter500) to the received redundant bits from the final example interface protector126n(e.g., corresponding to the signal processed and filtered output). If the generated redundant bits are the same as the received redundant bits, then no error has occurred. However, if the generated redundant bits are different from the received redundant bits, then an unintentional change of data occurred corresponding to a hardware error of the example HWAs106a-(n−1),118. At block1010, the example data comparator504determines if the received redundant bits from the final example interface protector126ncorrespond to the generated redundant bits. If the example data comparator504determines that the received redundant bits from the final example interface protector126ncorrespond to the generated redundant bits (block1010: YES), the process ends. If the example data comparator504determines that the received redundant bits from the final example interface protector126ndo not correspond to the generated redundant bits (block1010: NO), the example data comparator504compares the generated redundant bits to the received redundant bits of the example interface protectors126a-n(block1012). At block1014, the example data comparator504determines where the error occurred based on the comparison. For example, the data comparator504may determine at which interface protector126a-nthe received redundant bits begin to differ from the generated redundant bits (e.g., where the first discrepancy occurred). As described above, because each of the interface protectors126a-ncorresponds to a different hardware element of the example signal processing sub-system102,200,300, the example data comparator504can identify which hardware element caused the error by determining which interface protector126a-ncorresponds to the first discrepancy. At block1016, the example alerter506generates a failure alert corresponding to the faulty hardware element (e.g., HWA106a-(n−1),118and/or local/on-chip memory122) based on where the error occurred. At block1018, the example transmitter500transmits the alert to the host device124for further processing and/or decision making. FIG.11is an example flowchart1100representative of example machine readable instructions that may be executed by the example interface protector126aofFIG.6to determine redundant bits from a control signal. Although the instructions ofFIG.11are described in conjunction with the example interface protector126aofFIGS.1-3and/or6, the example instructions may be utilized by any type of interface protector (e.g., the example interface protectors126a-n, or any other interface protector) within any type of signal processing sub-system.
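For illustration only, the following sketch approximates the host-side comparison of FIG. 10 (blocks 1008-1016): the generated redundant bits are compared with the redundant bits returned by each interface protector, and the first stage at which they differ identifies the faulty hardware element. The single even-parity bit and the stage names below are assumptions standing in for the parity/CRC/checksum protocols named above, not the disclosed circuitry.

def parity_bits(control_word: int, width: int = 32) -> int:
    """Even parity of a control word (a one-bit stand-in for the redundant bits)."""
    return bin(control_word & ((1 << width) - 1)).count("1") & 1

def locate_fault(generated_bits, received_bits_per_stage, stage_names):
    """Return the name of the first stage whose returned redundant bits differ
    from the generated redundant bits, or None if all stages match."""
    for name, received in zip(stage_names, received_bits_per_stage):
        if received != generated_bits:
            return name   # first discrepancy -> faulty hardware element
    return None

# Example: the host generates parity for the control signal; the third
# interface protector returns a mismatching value, so the third stage is flagged.
control_signal = 0x12345678
generated = parity_bits(control_signal)
received = [generated, generated, generated ^ 1, generated ^ 1]
stages = ["stage 0", "stage 1", "stage 2", "outlier filter stage"]
print("failure alert for:", locate_fault(generated, received, stages))   # -> stage 2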
At block1102, the example redundant bit determiner600receives a control signal and data signal corresponding to an input signal transmitted by the example host device124. As described above, the data signal includes individual data values, while the control signal includes address information, protocol information, etc. Additionally, the control signal includes redundant bits that were added to the control signal. At block1104, the example redundant bit determiner600determines the redundant bits from the received control signal. The example redundant bit determiner600may determine the redundant bits using a parity check protocol, a cyclic redundancy check protocol, a checksum protocol, etc. At block1106, the example redundant bit determiner600transmits the determined redundant bits to the example host device124. FIGS.12A-Cillustrate three example images that have been processed by a signal processing sub-system. The first example image1200corresponds to a critical error that has occurred in a signal processing sub-system that does not include the example protected memory116a-nin the example HWAs106a-(n−1),108a-nofFIGS.1-3. The second example image1202corresponds to an image that has been processed with the example unprotected memory114a-nand the example protected memory116a-nin the example HWAs106a-(n−1),108a-nprior to being filtered by a HWA that includes the example outlier filter120ofFIGS.1-3. The third example image1204corresponds to an image that has been processed with the example unprotected memory114a-nand the example protected memory116a-nin the example HWAs106a-(n−1),108a-nand filtered by the example outlier filter120ofFIGS.1-3. The first example image1200ofFIG.12Acorresponds to rendered image data that has been processed by a signal processing sub-system that does not include the example protected memory116a-n, where a critical hardware error occurred. As shown in the first example image1200, the critical hardware error corresponds to the entire image being blacked-out, or otherwise unusable. The second example image1202ofFIG.12Bcorresponds to rendered image data that has been processed by the example signal processing sub-system102,200,300, before the example outlier filter120has filtered the image data. As shown in the second example image1202, by including the example protected memory116a-nfor storage of critical data, the second example image1202is useable for further processing, even when an error occurs, because the critical data is protected. However, because the raw pixel data is unprotected (e.g., stored in the example unprotected memory114a-n), individual pixel values may be corrupted, thereby causing pixel inaccuracies in the second example image1202. The third example image1204ofFIG.12Ccorresponds to the rendered image data that has been processed and filtered by the example signal processing sub-system102,200,300. As shown in the third example image1204, the corrupted pixel values of the second example image1202are reduced/eliminated through the filtering process. Accordingly, the third example image1204is the highest quality image, thereby providing optimal image data to the example host device124for further processing. FIG.13is a block diagram of an example processor platform1300capable of executing the instructions ofFIGS.7-11to implement the example outlier filter120, signal processing sub-system interface125, and/or the example interface protector ofFIGS.4-6.
The processor platform1300can be, for example, a server, a personal computer, a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), a personal digital assistant (PDA), an Internet appliance, a DVD player, a CD player, a digital video recorder, a Blu-ray player, a gaming console, a personal video recorder, a set top box, a vehicle computing system, or any other type of computing device. The processor platform1300of the illustrated example includes a processor1312. The processor1312of the illustrated example is hardware. For example, the processor1312can be implemented by one or more integrated circuits, logic circuits, microprocessors or controllers from any desired family or manufacturer. The hardware processor may be a semiconductor based (e.g., silicon based) device. In this example, the processor1312implements the example receiver400, the example neighbor array determiner402, the example data processor404, the example data adjuster406, the example transmitter408, the example transmitter500, the example receiver502, the example data comparator504, the alerter506, and/or the example redundant bit determiner600ofFIGS.4-6. The processor1312of the illustrated example includes a local memory1313(e.g., a cache). The example local memory1313may implement the example unprotected memory114a-n, and the example protected memory116a-nofFIGS.1-3. The processor1312of the illustrated example is in communication with a main memory including a volatile memory1314and a non-volatile memory1316via a bus1318. The volatile memory1314may be implemented by Synchronous Dynamic Random Access Memory (SDRAM), Dynamic Random Access Memory (DRAM), RAMBUS Dynamic Random Access Memory (RDRAM) and/or any other type of random access memory device. The non-volatile memory1316may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory1314,1316is controlled by a memory controller. The example volatile memory1314and/or the example non-volatile memory1316may be used to implement the example local/on-chip memory122ofFIGS.1-3. The processor platform1300of the illustrated example also includes an interface circuit1320. The interface circuit1320may be implemented by any type of interface standard, such as an Ethernet interface, a universal serial bus (USB), and/or a PCI express interface. In the illustrated example, one or more input devices1322are connected to the interface circuit1320. The input device(s)1322permit(s) a user to enter data and/or commands into the processor1312. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint and/or a voice recognition system. One or more output devices1324are also connected to the interface circuit1320of the illustrated example. The output devices1324can be implemented, for example, by display devices (e.g., a light emitting diode (LED), an organic light emitting diode (OLED), a liquid crystal display, a cathode ray tube display (CRT), a touchscreen, a tactile output device, a printer and/or speakers). The interface circuit1320of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip and/or a graphics driver processor. 
The interface circuit1320of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem and/or network interface card to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network1326(e.g., an Ethernet connection, a digital subscriber line (DSL), a telephone line, coaxial cable, a cellular telephone system, etc.). The processor platform1300of the illustrated example also includes one or more mass storage devices1328for storing software and/or data. Examples of such mass storage devices1328include floppy disk drives, hard disk drives, compact disk drives, Blu-ray disk drives, RAID systems, and digital versatile disk (DVD) drives. The coded instructions1332ofFIGS.7-11may be stored in the mass storage device1328, in the volatile memory1314, in the non-volatile memory1316, and/or on a removable tangible computer readable storage medium such as a CD or DVD. From the foregoing, it will be appreciated that example methods, apparatus and articles of manufacture have been disclosed that provide an efficient safety mechanism for signal processing hardware. Conventional memory protection techniques include providing memory protection for all data being processed, which increases cost and silicon area (e.g., integrated circuit space). Examples disclosed herein reduce the cost and silicon area corresponding to such conventional safety protocols by providing protected memory and unprotected memory for the hardware accelerators and allocating some data to be stored in the protected memory and other data to be stored in the unprotected memory. Examples disclosed herein provide 75% more silicon area savings than conventional techniques without reducing overall signal processing output. Although certain example methods, apparatus and articles of manufacture have been disclosed herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent. | 64,186 |
11861892 | DETAILED DESCRIPTION From the foregoing, it will be appreciated that specific embodiments of the invention have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Accordingly, the invention is not limited except as by the appended claims. Example Implementation of an Unmanned Aerial Vehicle FIG.1Ashows an example configuration of an unmanned aerial vehicle (UAV)100within which certain techniques described herein may be applied. As shown inFIG.1A, UAV100may be configured as a rotor-based aircraft (e.g., a “quadcopter”). The example UAV100includes propulsion and control actuators110(e.g., powered rotors or aerodynamic control surfaces) for maintaining controlled flight, various sensors for automated navigation and flight control112, and one or more image capture devices114and115for capturing images (including video) of the surrounding physical environment while in flight. Although not shown inFIG.1A, UAV100may also include other sensors (e.g., for capturing audio) and means for communicating with other devices (e.g., a mobile device104) via a wireless communication channel116. In the example depicted inFIG.1A, the image capture devices114and/or115are depicted capturing an object102in the physical environment that happens to be a person. In some cases, the image capture devices may be configured to capture images for display to users (e.g., as an aerial video platform) and/or, as described above, may also be configured for capturing images for use in autonomous navigation. In other words, the UAV100may autonomously (i.e., without direct human control) navigate the physical environment, for example, by processing images captured by any one or more image capture devices. While in autonomous flight, UAV100can also capture images using any one or more image capture devices that can be displayed in real time and or recorded for later display at other devices (e.g., mobile device104). FIG.1Ashows an example configuration of a UAV100with multiple image capture devices configured for different purposes. In the example configuration shown inFIG.1, the UAV100includes multiple image capture devices114arranged about a perimeter of the UAV100. The image capture device114may be configured to capture images for use by a visual navigation system in guiding autonomous flight by the UAV100and/or a tracking system for tracking other objects in the physical environment (e.g., as described with respect toFIG.1B). Specifically, the example configuration of UAV100depicted inFIG.1Aincludes an array of multiple stereoscopic image capture devices114placed around a perimeter of the UAV100so as to provide stereoscopic image capture up to a full 360 degrees around the UAV100. In addition to the array of image capture devices114, the UAV100depicted inFIG.1Aalso includes another image capture device115configured to capture images that are to be displayed but not necessarily used for navigation. In some embodiments, the image capture device115may be similar to the image capture devices114except in how captured images are utilized. However, in other embodiments, the image capture devices115and114may be configured differently to suit their respective roles. In many cases, it is generally preferable to capture images that are intended to be viewed at as high a resolution as possible given certain hardware and software constraints. 
On the other hand, if used for visual navigation and/or object tracking, lower resolution images may be preferable in certain contexts to reduce processing load and provide more robust motion planning capabilities. Accordingly, in some embodiments, the image capture device115may be configured to capture relatively high resolution (e.g., 3840×2160) color images while the image capture devices114may be configured to capture relatively low resolution (e.g., 320×240) grayscale images. As will be described in more detail, the UAV100can be configured to track one or more objects such as a human subject102through the physical environment based on images received via the image capture devices114and/or115. Further the UAV100can be configured to track image capture of such objects, for example, for filming purposes. In some embodiments, the image capture device115is coupled to the body of the UAV100via an adjustable mechanism that allows for one or more degrees of freedom of motion relative to a body of the UAV100. The UAV100may be configured to automatically adjust an orientation of the image capture device115so as to track image capture of an object (e.g., human subject102) as both the UAV100and object are in motion through the physical environment. In some embodiments, this adjustable mechanism may include a mechanical gimbal mechanism that rotates an attached image capture device about one or more axes. In some embodiments, the gimbal mechanism may be configured as a hybrid mechanical-digital gimbal system coupling the image capture device115to the body of the UAV100. In a hybrid mechanical-digital gimbal system, orientation of the image capture device115about one or more axes may be adjusted by mechanical means, while orientation about other axes may be adjusted by digital means. For example, a mechanical gimbal mechanism may handle adjustments in the pitch of the image capture device115, while adjustments in the roll and yaw are accomplished digitally by transforming (e.g., rotating, panning, etc.) the captured images so as to effectively provide at least three degrees of freedom in the motion of the image capture device115relative to the UAV100. FIG.1Bis a block diagram that illustrates an example navigation system120that may be implemented as part of the example UAV100described with respect toFIG.1A. The navigation system120may include any combination of hardware and/or software. For example, in some embodiments, the navigation system120and associated subsystems, may be implemented as instructions stored in memory and executable by one or more processors. As shown inFIG.1B, the example navigation system120includes a motion planning system130for autonomously maneuvering the UAV100through a physical environment and a tracking system140for tracking one or more objects in the physical environment. The tracking subsystem140may include one or more subsystems such as an object detection subsystem142, an instance segmentation subsystem144, an identity recognition subsystem146, and any other subsystems148. The purposes of such subsystems will be described in more detail later. Note that the arrangement of systems shown inFIG.1Bis an example provided for illustrative purposes and is not to be construed as limiting. For example, in some embodiments, the tracking system140may be completely separate from the navigation system120. Further, the subsystems making up the navigation system120may not be logically separated as shown inFIG.1B. 
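For illustration only, the digital portion of the hybrid mechanical-digital gimbal described above might operate along the following lines, with the mechanical stage handling pitch while roll is removed by counter-rotating the captured frame; the SciPy-based rotation and the synthetic frame are assumptions made for this sketch and do not represent the actual gimbal implementation.

import numpy as np
from scipy import ndimage

def digitally_stabilize_roll(frame: np.ndarray, roll_deg: float) -> np.ndarray:
    """Counter-rotate the image so the horizon stays level despite vehicle roll."""
    return ndimage.rotate(frame, angle=-roll_deg, reshape=False, order=1)

# Example: a synthetic 240x320 frame captured while the vehicle rolls 12 degrees.
frame = np.zeros((240, 320))
frame[118:122, :] = 1.0                    # a bright horizontal stripe ("horizon")
stabilized = digitally_stabilize_roll(frame, roll_deg=12.0)
print(stabilized.shape)                    # (240, 320): same frame, counter-rotated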
In some embodiments, the motion planning system130, operating separately or in conjunction with the tracking system140, is configured to generate a planned trajectory through the 3D space of a physical environment based, for example, on images received from image capture devices114and/or115, data from other sensors112(e.g., IMU, GPS, proximity sensors, etc.), one or more control inputs170from external sources (e.g., from a remote user, navigation application, etc.), and/or one or more specified navigation objectives. Navigation objectives may include, for example, avoiding collision with other objects and/or maneuvering to follow a particular object (e.g., an object tracked by tracking system140). In some embodiments, the generated planned trajectory is continuously or continually (i.e., at regular or irregular intervals) updated based on new perception inputs (e.g., newly captured images) received as the UAV100autonomously navigates the physical environment. In some embodiments, the navigation system120may generate control commands configured to cause the UAV100to maneuver along the planned trajectory generated by the motion planning system130. For example, the control commands may be configured to control one or more control actuators110(e.g., rotors and/or control surfaces) to cause the UAV100to maneuver along the planned 3D trajectory. Alternatively, a planned trajectory generated by the motion planning system130may be output to a separate flight controller system160that is configured to process trajectory information and generate appropriate control commands configured to control the one or more control actuators110. As will be described in more detail, the tracking system140, operating separately or in conjunction with the motion planning system130, is configured to track one or more objects in the physical environment based, for example, on images received from image capture devices114and/or115, data from other sensors112(e.g., IMU, GPS, proximity sensors, etc.), one or more control inputs170from external sources (e.g., from a remote user, navigation application, etc.), and/or one or more specified tracking objectives. A tracking objective may include, for example, a designation by a user to track a particular detected object in the physical environment or a standing objective to track objects of a particular classification (e.g., people). As alluded to above, the tracking system140may communicate with the motion planning system130, for example, to maneuver the UAV100based on measured, estimated, and/or predicted positions, orientations, and/or trajectories of objects in the physical environment. For example, the tracking system140may communicate a navigation objective to the motion planning system130to maintain a particular separation distance to a tracked object that is in motion. In some embodiments, the tracking system140, operating separately or in conjunction with the motion planning system130, is further configured to generate control commands configured to cause a mechanism to adjust an orientation of any image capture devices114/115relative to the body of the UAV100based on the tracking of one or more objects. Such a mechanism may include a mechanical gimbal or a hybrid digital mechanical gimbal, as previously described.
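For illustration only, a navigation objective such as the separation-distance objective described above could be reduced to a velocity command along the line of sight to the tracked object; the proportional-control form and the gain below are assumptions chosen for simplicity rather than the behavior of the actual motion planning system.

import numpy as np

def separation_velocity_command(uav_pos, target_pos, desired_separation, gain=0.8):
    """Velocity command that closes (or opens) the gap toward the desired
    separation distance along the line of sight to the tracked object."""
    offset = target_pos - uav_pos
    distance = np.linalg.norm(offset)
    line_of_sight = offset / distance
    return gain * (distance - desired_separation) * line_of_sight

# Example: the tracked subject is 8 m ahead and the objective is a 5 m gap,
# so the command moves the vehicle toward the subject along the line of sight.
cmd = separation_velocity_command(np.array([0.0, 0.0, 2.0]),
                                  np.array([8.0, 0.0, 2.0]),
                                  desired_separation=5.0)
print(cmd)   # -> [2.4, 0.0, 0.0]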
For example, while tracking an object in motion relative to the UAV100, the tracking system140may generate control commands configured to adjust an orientation of an image capture device115so as to keep the tracked object centered in the field of view (FOV) of the image capture device115while the UAV100is in motion. Similarly, the tracking system140may generate commands or output data to a digital image processor (e.g., that is part of a hybrid digital-mechanical gimbal) to transform images captured by the image capture device115to keep the tracked object centered in the FOV of the image capture device115while the UAV100is in motion. The UAV100shown inFIG.1Aand the associated navigation system120shown inFIG.1Bare examples provided for illustrative purposes. A UAV100in accordance with the present teachings may include more or fewer components than are shown. Further, the example UAV100depicted inFIG.1Aand associated navigation system120depicted inFIG.1Bmay include or be part of one or more of the components of the example UAV system1300described with respect toFIG.13and/or the example computer processing system1400described with respect toFIG.14. For example, the aforementioned navigation system120and associated tracking system140may include or be part of the UAV system1300and/or processing system1400. While the introduced techniques for object tracking are described in the context of an aerial vehicle such as the UAV100depicted inFIG.1A, such techniques are not limited to this context. The described techniques may similarly be applied to detect, identify, and track objects using image capture devices mounted to other types of vehicles (e.g., fixed-wing aircraft, automobiles, watercraft, etc.), hand-held image capture devices (e.g., mobile devices with integrated cameras), or to stationary image capture devices (e.g., building mounted security cameras). Object Tracking Overview A UAV100can be configured to track one or more objects, for example, to enable intelligent autonomous flight. The term “objects” in this context can include any type of physical object occurring in the physical world. Objects can include dynamic objects such as a people, animals, and other vehicles. Objects can also include static objects such as landscape features, buildings, and furniture. Further, certain descriptions herein may refer to a “subject” (e.g., human subject102). The terms “subject” as used herein may simply refer to an object being tracked using any of the disclosed techniques. The terms “object” and “subject” may therefore be used interchangeably. A tracking system140associated with a UAV100can be configured to track one or more physical objects based on images of the objects captured by image capture devices (e.g., image capture devices114and/or115) onboard the UAV100. While a tracking system140can be configured to operate based only on input from image capture devices, the tracking system140can also be configured to incorporate other types of information to aid in the tracking. For example, various other techniques for measuring, estimating, and/or predicting the relative positions and/or orientations of the UAV100and/or other objects are described with respect toFIGS.10-12. In some embodiments, a tracking system140can be configured to fuse information pertaining to two primary categories: semantics and three-dimensional (3D) geometry. 
As images are received, the tracking system140may extract semantic information regarding certain objects captured in the images based on an analysis of the pixels in the images. Semantic information regarding a captured object can include information such as an object's category (i.e., class), location, shape, size, scale, pixel segmentation, orientation, inter-class appearance, activity, and pose. In an example embodiment, the tracking system140may identify general locations and categories of objects based on captured images and then determine or infer additional more detailed information about individual instances of objects based on further processing. Such a process may be performed as a sequence of discrete operations, a series of parallel operations, or as a single operation. For example,FIG.2shows an example image220captured by a UAV in flight through a physical environment. As shown inFIG.2, the example image220includes captures of two physical objects, specifically, two people present in the physical environment. The example image220may represent a single frame in a series of frames of video captured by the UAV. As previously alluded to, a tracking system140may first identify general locations of the captured objects in the image220. For example, pixel map230shows two dots corresponding to the general locations of the captured objects in the image. These general locations may be represented as image coordinates. The tracking system140may further process the captured image220to determine information about the individual instances of the captured objects. For example, pixel map240shows a result of additional processing of image220identifying pixels corresponding to the individual object instances (i.e., people in this case). Semantic cues can be used to locate and identify objects in captured images as well as associate identified objects occurring in multiple images. For example, as previously mentioned, the captured image220depicted inFIG.2may represent a single frame in a sequence of frames of a captured video. Using semantic cues, a tracking system140may associate regions of pixels captured in multiple images as corresponding to the same physical object occurring in the physical environment. Additional details regarding semantic algorithms that can be employed are described later in this disclosure. In some embodiments, a tracking system140can be configured to utilize 3D geometry of identified objects to associate semantic information regarding the objects based on images captured from multiple views in the physical environment. Images captured from multiple views may include images captured by multiple image capture devices having different positions and/or orientations at a single time instant. For example, each of the image capture devices114shown mounted to a UAV100inFIG.1include cameras at slightly offset positions (to achieve stereoscopic capture). Further, even if not individually configured for stereoscopic image capture, the multiple image capture devices114may be arranged at different positions relative to the UAV100, for example, as shown inFIG.1. Images captured from multiple views may also include images captured by an image captured device at multiple time instants as the image capture device moves through the physical environment. For example, any of the image capture devices114and/or115mounted to UAV100will individually capture images from multiple views as the UAV100moves through the physical environment. 
Using an online visual-inertial state estimation system, a tracking system140can determine or estimate a trajectory of the UAV100as it moves through the physical environment. Thus, the tracking system140can associate semantic information in captured images, such as locations of detected objects, with information about the 3D trajectory of the objects, using the known or estimated 3D trajectory of the UAV100. For example,FIG.3Ashows a trajectory310of a UAV100moving through a physical environment. As the UAV100moves along trajectory310, the one or more image capture devices (e.g., devices114and/or115) captured images of the physical environment at multiple views312a-n. Included in the images at multiple views312a-nare captures of an object such as a human subject102. By processing the captured images at multiple views312a-n, a trajectory320of the object can also be resolved. Object detections in captured images create rays from a center position of a capturing camera to the object along which the object lies, with some uncertainty. The tracking system140can compute depth measurements for these detections, creating a plane parallel to a focal plane of a camera along which the object lies, with some uncertainty. These depth measurements can be computed by a stereo vision algorithm operating on pixels corresponding with the object between two or more camera images at different views. The depth computation can look specifically at pixels that are labeled to be part of an object of interest (e.g., a subject102). The combination of these rays and planes over time can be fused into an accurate prediction of the 3D position and velocity trajectory of the object over time. For example,FIG.3Bshows a visual representation of a predicted trajectory of a subject102based on images captured from a UAV100. While a tracking system140can be configured to rely exclusively on visual data from image capture devices onboard a UAV100, data from other sensors (e.g. sensors on the object, on the UAV100, or in the environment) can be incorporated into this framework when available. Additional sensors may include GPS, IMU, barometer, magnetometer, and cameras at other devices such as a mobile device104. For example, a GPS signal from a mobile device104held by a person can provide rough position measurements of the person that are fused with the visual information from image capture devices onboard the UAV100. An IMU sensor at the UAV100and/or a mobile device104can provide acceleration and angular velocity information, a barometer can provide relative altitude, and a magnetometer can provide heading information. Images captured by cameras at a mobile device104held by a person can be fused with images from cameras onboard the UAV100to estimate relative pose between the UAV100and the person by identifying common features captured in the images. Various other techniques for measuring, estimating, and/or predicting the relative positions and/or orientations of the UAV100and/or other objects are described with respect toFIGS.10-12. In some embodiments, data from various sensors are input into a spatiotemporal factor graph to probabilistically minimize total measurement error using non-linear optimization.FIG.4shows a diagrammatic representation of an example spatiotemporal factor graph400that can be used to estimate a 3D trajectory of an object (e.g., including pose and velocity over time). 
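For illustration only, the following sketch shows how a detected object's pixel location and a stereo depth measurement, as described above, could be back-projected into a 3D position estimate suitable for fusion over time; the pinhole camera model, intrinsics, and baseline are assumed values for this example and are not parameters of the disclosed system.

import numpy as np

FX = FY = 400.0          # assumed focal lengths in pixels
CX, CY = 160.0, 120.0    # assumed principal point (320x240 navigation camera)
BASELINE = 0.10          # assumed stereo baseline in meters

def depth_from_disparity(disparity_px: float) -> float:
    return FX * BASELINE / disparity_px

def backproject(u: float, v: float, depth: float) -> np.ndarray:
    """Pixel (u, v) at the given depth -> 3D point in the camera frame."""
    return np.array([(u - CX) * depth / FX,
                     (v - CY) * depth / FY,
                     depth])

def to_world(point_cam: np.ndarray, R_wc: np.ndarray, t_wc: np.ndarray) -> np.ndarray:
    """Transform a camera-frame point into the world frame using the
    estimated camera pose (rotation R_wc, translation t_wc)."""
    return R_wc @ point_cam + t_wc

# Example: an object detected at pixel (200, 130) with a 20-pixel disparity.
z = depth_from_disparity(20.0)             # 2.0 m
p_cam = backproject(200.0, 130.0, z)
p_world = to_world(p_cam, np.eye(3), np.array([0.0, 0.0, 1.5]))
print(p_world)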
In the example spatiotemporal factor graph400depicted inFIG.4, variable values such as the pose and velocity (represented as nodes402and404, respectively) are connected by one or more motion model processes (represented as nodes406along connecting edges). For example, an estimate or prediction for the pose of the UAV100and/or other object at time step 1 (i.e., variable X(1)) may be calculated by inputting estimated pose and velocity at a prior time step (i.e., variables X(0) and V(0)) as well as various perception inputs such as stereo depth measurements and camera image measurements via one or more motion models. A spatiotemporal factor model can be combined with an outlier rejection mechanism wherein measurements deviating too far from an estimated distribution are thrown out. In order to estimate a 3D trajectory from measurements at multiple time instants, one or more motion models (or process models) are used to connect the estimated variables between each time step in the factor graph. Such motion models can include any one of constant velocity, zero velocity, decaying velocity, and decaying acceleration. Applied motion models may be based on a classification of a type of object being tracked and/or learned using machine learning techniques. For example, a cyclist is likely to make wide turns at speed, but is not expected to move sideways. Conversely, a small animal such as a dog may exhibit a more unpredictable motion pattern. In some embodiments, a tracking system140can generate an intelligent initial estimate for where a tracked object will appear in a subsequently captured image based on a predicted 3D trajectory of the object.FIG.5shows a diagram that illustrates this concept. As shown inFIG.5, a UAV100is moving along a trajectory510while capturing images of the surrounding physical environment, including images of a human subject102. As the UAV100moves along the trajectory510, multiple images (e.g., frames of video) are captured from one or more mounted image capture devices114/115.FIG.5shows a first FOV of an image capture device at a first pose540and a second FOV of the image capture device at a second pose542. In this example, the first pose540may represent a previous pose of the image capture device at a time instant t(0) while the second pose542may represent a current pose of the image capture device at a time instant t(1). At time instant t(0), the image capture device captures an image of the human subject102at a first 3D position560in the physical environment. This first position560may be the last known position of the human subject102. Given the first pose540of the image capture device, the human subject102while at the first 3D position560appears at a first image position550in the captured image. An initial estimate for a second (or current) image position552can therefore be made based on projecting a last known 3D trajectory520aof the human subject102forward in time using one or more motion models associated with the object. For example, predicted trajectory520bshown inFIG.5represents this projection of the 3D trajectory520aforward in time. A second 3D position562(at time t(1)) of the human subject102along this predicted trajectory520bcan then be calculated based on an amount of time elapsed from t(0) to t(1). This second 3D position562can then be projected into the image plane of the image capture device at the second pose542to estimate the second image position552that will correspond to the human subject102.
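For illustration only, the initial-estimate idea of FIG. 5 can be sketched as propagating the last known 3D position forward under a constant-velocity motion model and projecting it into the image plane at the camera's current pose; the intrinsics, poses, and example positions below are assumptions, not values from the disclosure.

import numpy as np

K = np.array([[400.0,   0.0, 160.0],
              [  0.0, 400.0, 120.0],
              [  0.0,   0.0,   1.0]])     # assumed camera intrinsics

def predict_position(p0, v0, dt):
    """Constant-velocity motion model: p(t1) = p(t0) + v * dt."""
    return p0 + v0 * dt

def project(point_world, R_cw, t_cw):
    """Project a world-frame point into pixel coordinates at the current pose
    (R_cw, t_cw map world coordinates into the camera frame)."""
    p_cam = R_cw @ point_world + t_cw
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

# Example: subject last seen at (1, 0, 4) m moving at 1 m/s along x; 0.1 s later
# the camera is still at the origin looking down +z (identity pose).
p1 = predict_position(np.array([1.0, 0.0, 4.0]), np.array([1.0, 0.0, 0.0]), 0.1)
print(project(p1, np.eye(3), np.zeros(3)))   # initial estimate of the new image position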
Generating such an initial estimate for the position of a tracked object in a newly captured image narrows down the search space for tracking and enables a more robust tracking system, particularly in the case of a UAV100and/or tracked object that exhibits rapid changes in position and/or orientation. In some embodiments, the tracking system140can take advantage of two or more types of image capture devices onboard the UAV100. For example, as previously described with respect toFIG.1, the UAV100may include image capture device114configured for visual navigation as well as an image captured device115for capturing images that are to be viewed. The image capture devices114may be configured for low-latency, low-resolution, and high FOV, while the image capture device115may be configured for high resolution. An array of image capture devices114about a perimeter of the UAV100can provide low-latency information about objects up to 360 degrees around the UAV100and can be used to compute depth using stereo vision algorithms. Conversely, the other image capture device115can provide more detailed images (e.g., high resolution, color, etc.) in a limited FOV. Combining information from both types of image capture devices114and115can be beneficial for object tracking purposes in a number of ways. First, the high-resolution color information602from an image capture device115can be fused with depth information604from the image capture devices114to create a 3D representation606of a tracked object, for example, as shown inFIG.6. Second, the low-latency of the image capture devices114can enable more accurate detection of objects and estimation of object trajectories. Such estimates can be further improved and/or corrected based on images received from a high-latency, high resolution image capture device115. The image data from the image capture devices114can either be fused with the image data from the image capture device115, or can be used purely as an initial estimate. By using the image capture devices114, a tracking system140can achieve tracking of objects up to a full 360 degrees around the UAV100. The tracking system140can fuse measurements from any of the image capture devices114or115when estimating a relative position and/or orientation of a tracked object as the positions and orientations of the image capture devices114and115change over time. The tracking system140can also orient the image capture device115to get more accurate tracking of specific objects of interest, fluidly incorporating information from both image capture modalities. Using knowledge of where all objects in the scene are, the UAV100can exhibit more intelligent autonomous flight. As previously discussed, the high-resolution image capture device115may be mounted to an adjustable mechanism such as a gimbal that allows for one or more degrees of freedom of motion relative to the body of the UAV100. Such a configuration is useful in stabilizing image capture as well as tracking objects of particular interest. An active gimbal mechanism configured to adjust an orientation of a higher-resolution image capture device115relative to the UAV100so as to track a position of an object in the physical environment may allow for visual tracking at greater distances than may be possible through use of the lower-resolution image capture devices114alone. Implementation of an active gimbal mechanism may involve estimating the orientation of one or more components of the gimbal mechanism at any given time. 
Such estimations may be based on any of hardware sensors coupled to the gimbal mechanism (e.g., accelerometers, rotary encoders, etc.), visual information from the image capture devices114/115, or a fusion based on any combination thereof. Detecting Objects for Tracking A tracking system140may include an object detection system142for detecting and tracking various objects. Given one or more classes of objects (e.g., humans, buildings, cars, animals, etc.), the object detection system142may identify instances of the various classes of objects occurring in captured images of the physical environment. Outputs by the object detection system142can be parameterized in a few different ways. In some embodiments, the object detection system142processes received images and outputs a dense per-pixel segmentation, where each pixel is associated with a value corresponding to either an object class label (e.g., human, building, car, animal, etc.) and/or a likelihood of belonging to that object class. For example,FIG.7shows a visualization704of a dense per-pixel segmentation of a captured image702where pixels corresponding to detected objects710a-bclassified as humans are set apart from all other pixels in the image702. Another parameterization may include resolving the image location of a detected object to a particular image coordinate (e.g., as shown at map230inFIG.2), for example, based on a centroid of the representation of the object in a received image. In some embodiments, the object detection system142can utilize a deep convolutional neural network for object detection. For example, the input may be a digital image (e.g., image702), and the output may be a tensor with the same spatial dimension. Each slice of the output tensor may represent a dense segmentation prediction, where each pixel's value is proportional to the likelihood of that pixel belonging to the class of object corresponding to the slice. For example, the visualization704shown inFIG.7may represent a particular slice of the aforementioned tensor where each pixel's value is proportional to the likelihood that the pixel corresponds with a human. In addition, the same deep convolutional neural network can also predict the centroid locations for each detected instance, as described in the following section. Instance Segmentation A tracking system140may also include an instance segmentation system144for distinguishing between individual instances of objects detected by the object detection system142. In some embodiments, the process of distinguishing individual instances of detected objects may include processing digital images captured by the UAV100to identify pixels belonging to one of a plurality of instances of a class of physical objects present in the physical environment and captured in the digital images. As previously described with respect toFIG.7, a dense per-pixel segmentation algorithm can classify certain pixels in an image as corresponding to one or more classes of objects. This segmentation process output may allow a tracking system140to distinguish between the objects represented in an image and the rest of the image (i.e., a background). For example, the visualization704distinguishes pixels that correspond to humans (e.g., included in region712) from pixels that do not correspond to humans (e.g., included in region730). However, this segmentation process does not necessarily distinguish between individual instances of the detected objects.
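For illustration only, a dense per-pixel segmentation output of the kind described above can be reduced to detections by thresholding a class slice of the output tensor and computing a centroid image coordinate; the array shapes, threshold, and toy likelihood values below are assumptions rather than the actual network output format.

import numpy as np

def class_mask(output_tensor, class_index, threshold=0.5):
    """output_tensor: (num_classes, H, W) of per-pixel class likelihoods."""
    return output_tensor[class_index] > threshold

def centroid(mask):
    """Centroid image coordinate (row, col) of all pixels in the mask."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    return rows.mean(), cols.mean()

# Example with a toy 'human' likelihood slice.
likelihoods = np.zeros((2, 6, 8))
likelihoods[1, 2:5, 3:6] = 0.9        # a blob of pixels likely to belong to a human
mask = class_mask(likelihoods, class_index=1)
print(centroid(mask))                 # -> (3.0, 4.0)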
A human viewing the visualization704may conclude that the pixels corresponding to humans in the detected image actually correspond to two separate humans; however, without further analysis, a tracking system140may be unable to make this distinction. Effective object tracking may involve distinguishing pixels that correspond to distinct instances of detected objects. This process is known as "instance segmentation."FIG.8shows an example visualization804of an instance segmentation output based on a captured image802. Similar to the dense per-pixel segmentation process described with respect toFIG.7, the output represented by visualization804distinguishes pixels (e.g., included in regions812a-c) that correspond to detected objects810a-cof a particular class of objects (in this case humans) from pixels that do not correspond to such objects (e.g., included in region830). Notably, the instance segmentation process goes a step further to distinguish pixels corresponding to individual instances of the detected objects from each other. For example, pixels in region812acorrespond to a detected instance of a human810a, pixels in region812bcorrespond to a detected instance of a human810b, and pixels in region812ccorrespond to a detected instance of a human810c. Distinguishing between instances of detected objects may be based on an analysis, by the instance segmentation system144, of pixels corresponding to detected objects. For example, a grouping method may be applied by the instance segmentation system144to associate pixels corresponding to a particular class of object to a particular instance of that class by selecting pixels that are substantially similar to certain other pixels corresponding to that instance, pixels that are spatially clustered, pixel clusters that fit an appearance-based model for the object class, etc. Again, this process may involve applying a deep convolutional neural network to distinguish individual instances of detected objects. Identity Recognition Instance segmentation may associate pixels corresponding to particular instances of objects; however, such associations may not be temporally consistent. Consider again the example described with respect toFIG.8. As illustrated inFIG.8, a tracking system140has identified three instances of a certain class of objects (i.e., humans) by applying an instance segmentation process to a captured image802of the physical environment. This example captured image802may represent only one frame in a sequence of frames of captured video. When a second frame is received, the tracking system140may not be able to recognize newly identified object instances as corresponding to the same three people810a-cas captured in image802. To address this issue, the tracking system140can include an identity recognition system146. An identity recognition system146may process received inputs (e.g., captured images) to learn the appearances of instances of certain objects (e.g., of particular people). Specifically, the identity recognition system146may apply a machine-learning appearance-based model to digital images captured by one or more image capture devices114/115associated with a UAV100. Instance segmentations identified based on processing of captured images can then be compared against such appearance-based models to resolve unique identities for one or more of the detected objects. Identity recognition can be useful for various different tasks related to object tracking.
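For illustration only, the spatial-clustering style of grouping mentioned above for instance segmentation can be approximated by splitting a class mask into connected components, with each component treated as a separate instance (analogous to regions 812a-c); this simple stand-in does not represent the deep-network-based instance segmentation of the disclosure.

import numpy as np
from scipy import ndimage

def instance_masks(class_mask):
    """Split a binary class mask into one boolean mask per connected instance."""
    labeled, num_instances = ndimage.label(class_mask)
    return [labeled == i for i in range(1, num_instances + 1)]

# Example: two disjoint blobs of 'human' pixels become two instances.
mask = np.zeros((6, 10), dtype=bool)
mask[1:4, 1:3] = True       # first person
mask[2:5, 6:9] = True       # second person
instances = instance_masks(mask)
print(len(instances))       # -> 2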
As previously alluded to, recognizing the unique identities of detected objects allows for temporal consistency. Further, identity recognition can enable the tracking of multiple different objects (as will be described in more detail). Identity recognition may also facilitate object persistence that enables re-acquisition of previously tracked objects that fell out of view due to limited FOV of the image capture devices, motion of the object, and/or occlusion by another object. Identity recognition can also be applied to perform certain identity-specific behaviors or actions, such as recording video when a particular person is in view. In some embodiments, an identity recognition process may employ a deep convolutional neural network to learn one or more effective appearance-based models for certain objects. In some embodiments, the neural network can be trained to learn a distance metric that returns a low distance value for image crops belonging to the same instance of an object (e.g., a person), and a high distance value otherwise. In some embodiments, an identity recognition process may also include learning appearances of individual instances of objects such as people. When tracking humans, a tracking system140may be configured to associate identities of the humans, either through user-input data or external data sources such as images associated with individuals available on social media. Such data can be combined with detailed facial recognition processes based on images received from any of the one or more image capture devices114/115onboard the UAV100. In some embodiments, an identity recognition process may focus on one or more key individuals. For example, a tracking system140associated with a UAV100may specifically focus on learning the identity of a designated owner of the UAV100and retain and/or improve its knowledge between flights for tracking, navigation, and/or other purposes such as access control. Multi-Object Tracking In some embodiments, a tracking system140may be configured to focus tracking on a specific object detected in images captured by the one or more image capture devices114/115of a UAV100. In such a single-object tracking approach, an identified object (e.g., a person) is designated for tracking while all other objects (e.g., other people, trees, buildings, landscape features, etc.) are treated as distractors and ignored. While useful in some contexts, a single-object tracking approach may have some disadvantages. For example, an overlap in trajectory, from the point of view of an image capture device, of a tracked object and a distractor object may lead to an inadvertent switch in the object being tracked such that the tracking system140begins tracking the distractor instead. Similarly, spatially close false positives by an object detector can also lead to inadvertent switches in tracking. A multi-object tracking approach addresses these shortcomings, and introduces a few additional benefits. In some embodiments, a unique track is associated with each object detected in the images captured by the one or more image capture devices114/115. In some cases, it may not be practical, from a computing standpoint, to associate a unique track with every single object that is captured in the images. For example, a given image may include hundreds of objects, including minor features such as rocks or leaves of trees. Instead, unique tracks may be associated with certain classes of objects that may be of interest from a tracking standpoint.
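For illustration only, the learned distance metric described above for identity recognition can be sketched as comparing embedding vectors of image crops against a threshold, with a low distance indicating the same identity and a high distance indicating different identities; the embedding vectors and the threshold below are made up for the example, and a trained model would produce and calibrate them.

import numpy as np

def embedding_distance(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.linalg.norm(a - b))

def same_identity(a, b, threshold=0.8):
    """Threshold value is an assumption; a learned metric would calibrate it."""
    return embedding_distance(a, b) < threshold

# Example with made-up embedding vectors for three image crops.
owner_crop     = np.array([0.9, 0.1, 0.3])
new_crop_same  = np.array([0.85, 0.15, 0.35])
new_crop_other = np.array([0.1, 0.9, 0.7])
print(same_identity(owner_crop, new_crop_same))    # True  (re-acquired)
print(same_identity(owner_crop, new_crop_other))   # False (distractor)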
For example, the tracking system140may be configured to associate a unique track with every object detected that belongs to a class that is generally mobile (e.g., people, animals, vehicles, etc.). Each unique track may include an estimate for the spatial location and movement of the object being tracked (e.g., using the spatiotemporal factor graph described earlier) as well as its appearance (e.g., using the identity recognition feature). Instead of pooling together all other distractors (i.e., as may be performed in a single object tracking approach), the tracking system140can learn to distinguish between the multiple individual tracked objects. By doing so, the tracking system140may render inadvertent identity switches less likely. Similarly, false positives by the object detector can be more robustly rejected as they will tend to not be consistent with any of the unique tracks. An aspect to consider when performing multi-object tracking includes the association problem. In other words, given a set of object detections based on captured images (including parameterization by 3D location and regions in the image corresponding to segmentation), an issue arises regarding how to associate each of the set of object detections with corresponding tracks. To address the association problem, the tracking system140can be configured to associate one of a plurality of detected objects with one of a plurality of estimated object tracks based on a relationship between a detected object and an estimated object track. Specifically, this process may involve computing a "cost" value for one or more pairs of object detections and estimated object tracks. The computed cost values can take into account, for example, the spatial distance between a current location (e.g., in 3D space and/or image space) of a given object detection and a current estimate of a given track (e.g., in 3D space and/or in image space), an uncertainty of the current estimate of the given track, a difference between a given detected object's appearance and a given track's appearance estimate, and/or any other factors that may tend to suggest an association between a given detected object and a given track. In some embodiments, multiple cost values are computed based on various different factors and fused into a single scalar value that can then be treated as a measure of how well a given detected object matches a given track. The aforementioned cost formulation can then be used to determine an optimal association between a detected object and a corresponding track by treating the cost formulation as an instance of a minimum cost perfect bipartite matching problem, which can be solved using, for example, the Hungarian algorithm. Object State Estimation In some embodiments, effective object tracking by a tracking system140may be improved by incorporating information regarding a state of an object. For example, a detected object such as a human may be associated with any one or more defined states. A state in this context may include an activity by the object such as sitting, standing, walking, running, or jumping. In some embodiments, one or more perception inputs (e.g., visual inputs from image capture devices114/115) may be used to estimate one or more parameters associated with detected objects. The estimated parameters may include an activity type, motion capabilities, trajectory heading, contextual location (e.g., indoors vs.
outdoors), interaction with other detected objects (e.g., two people walking together, a dog on a leash held by a person, a trailer pulled by a car, etc.), and any other semantic attributes. Generally, object state estimation may be applied to estimate one or more parameters associated with a state of a detected object based on perception inputs (e.g., images of the detected object captured by one or more image capture devices114/115onboard a UAV100or sensor data from any other sensors onboard the UAV100). The estimated parameters may then be applied to assist in predicting the motion of the detected object and thereby assist in tracking the detected object. For example, future trajectory estimates may differ for a detected human depending on whether the detected human is walking, running, jumping, riding a bicycle, riding in a car, etc. In some embodiments, deep convolutional neural networks may be applied to generate the parameter estimates based on multiple data sources (e.g., the perception inputs) to assist in generating future trajectory estimates and thereby assist in tracking.
Predicting Future Trajectories of Detected Objects
As previously alluded to, a tracking system140may be configured to estimate (i.e., predict) a future trajectory of a detected object based on past trajectory measurements and/or estimates, current perception inputs, motion models, and any other information (e.g., object state estimates). Predicting a future trajectory of a detected object is particularly useful for autonomous navigation by the UAV100. Effective autonomous navigation by the UAV100may depend on anticipation of future conditions just as much as current conditions in the physical environment. Through a motion planning process, a navigation system of the UAV100may generate control commands configured to cause the UAV100to maneuver, for example, to avoid a collision, maintain separation with a tracked object in motion, and/or satisfy any other navigation objectives. Predicting a future trajectory of a detected object is generally a relatively difficult problem to solve. The problem can be simplified for objects that are in motion according to a known and predictable motion model. For example, an object in free fall is expected to continue along a previous trajectory while accelerating at a rate based on a known gravitational constant and other known factors (e.g., wind resistance). In such cases, the problem of generating a prediction of a future trajectory can be simplified to merely propagating past and current motion according to a known or predictable motion model associated with the object. Objects may of course deviate from a predicted trajectory generated based on such assumptions for a number of reasons (e.g., due to collision with another object). However, the predicted trajectories may still be useful for motion planning and/or tracking purposes. Dynamic objects, such as people and animals, present a more difficult challenge when predicting future trajectories because the motion of such objects is generally based on the environment and their own free will. To address such challenges, a tracking system140may be configured to take accurate measurements of the current position and motion of an object and use differentiated velocities and/or accelerations to predict a trajectory a short time (e.g., seconds) into the future and continually update such predictions as new measurements are taken.
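A minimal sketch of this differentiated-velocity approach, assuming recent 3D position measurements are available as a NumPy array and using an illustrative constant-acceleration motion model with placeholder time constants, might look as follows; in practice the prediction would be recomputed as each new measurement arrives.

# Minimal sketch of short-horizon trajectory prediction from differentiated
# position measurements. A constant-acceleration motion model is assumed
# for illustration; the 0.1 s step and 2 s horizon are placeholder values.
import numpy as np

def predict_trajectory(positions: np.ndarray, dt: float = 0.1,
                       horizon_s: float = 2.0) -> np.ndarray:
    """positions: (N, 3) array of the most recent 3D position measurements
    (N >= 3), sampled every dt seconds. Returns predicted future positions."""
    velocity = (positions[-1] - positions[-2]) / dt       # differentiated velocity
    prev_velocity = (positions[-2] - positions[-3]) / dt
    acceleration = (velocity - prev_velocity) / dt        # differentiated acceleration
    steps = int(horizon_s / dt)
    t = (np.arange(1, steps + 1) * dt)[:, None]
    # Constant-acceleration propagation: p + v*t + 0.5*a*t^2
    return positions[-1] + velocity * t + 0.5 * acceleration * t ** 2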
Further, the tracking system140may also use semantic information gathered from an analysis of captured images as cues to aid in generating predicted trajectories. For example, a tracking system140may determine that a detected object is a person on a bicycle traveling along a road. With this semantic information, the tracking system140may form an assumption that the tracked object is likely to continue along a trajectory that roughly coincides with a path of the road. As another related example, the tracking system140may determine that the person has begun turning the handlebars of the bicycle to the left. With this semantic information, the tracking system140may form an assumption that the tracked object will likely turn to the left before receiving any positional measurements that expose this motion. Another example, particularly relevant to autonomous objects such as people or animals, is to assume that the object will tend to avoid collisions with other objects. For example, the tracking system140may determine that a tracked object is a person heading on a trajectory that will lead to a collision with another object such as a light pole. With this semantic information, the tracking system140may form an assumption that the tracked object is likely to alter its current trajectory at some point before the collision occurs. A person having ordinary skill will recognize that these are only examples of how semantic information may be utilized as a cue to guide prediction of future trajectories for certain objects.
Frame-to-Frame Tracking
In addition to performing an object detection process in one or more captured images per time frame, the tracking system140may also be configured to perform a frame-to-frame tracking process, for example, to detect motion of a particular set or region of pixels in images at subsequent time frames (e.g., video frames). Such a process may involve applying a mean-shift algorithm, a correlation filter, and/or a deep network. In some embodiments, frame-to-frame tracking may be applied by a system that is separate from the object detection system142, wherein results from the frame-to-frame tracking are fused into a spatiotemporal factor graph. Alternatively, or in addition, an object detection system142may perform frame-to-frame tracking if, for example, the system has sufficient available computing resources (e.g., memory). For example, an object detection system142may apply frame-to-frame tracking through recurrence in a deep network and/or by passing in multiple images at a time. A frame-to-frame tracking process and object detection process can also be configured to complement each other, with one resetting the other when a failure occurs.
Disparity Segmentation
As previously discussed, the object detection system142may be configured to process images (e.g., the raw pixel data) received from one or more image capture devices114/115onboard a UAV100. Alternatively, or in addition, the object detection system142may also be configured to operate by processing disparity images. A “disparity image” may generally be understood as an image representative of a disparity between two or more corresponding images. For example, a stereo pair of images (e.g., left image and right image) captured by a stereoscopic image capture device will exhibit an inherent offset due to the slight difference in position of the two or more cameras associated with the stereoscopic image capture device.
Despite the offset, at least some of the objects appearing in one image should also appear in the other image; however, the image locations of pixels corresponding to such objects will differ. By matching pixels in one image with corresponding pixels in the other and calculating the distance between these corresponding pixels, a disparity image can be generated with pixel values that are based on the distance calculations. Such a disparity image will tend to highlight regions of an image that correspond to objects in the physical environment since the pixels corresponding to the object will have similar disparities due to the object's 3D location in space. Accordingly, a disparity image, which may have been generated by processing two or more images according to a separate stereo algorithm, may provide useful cues to guide an object detection system142in detecting objects in the physical environment. In many situations, particularly where harsh lighting is present, a disparity image may actually provide stronger cues about the location of objects than an image captured from the image capture devices114/115. As mentioned, disparity images may be computed with a separate stereo algorithm. Alternatively, or in addition, disparity images may be output as part of the same deep network applied by the object detection system142. Disparity images may be used for object detection separately from the images received from the image capture devices114/115, or they may be combined into a single network for joint inference.
Amodal Segmentation
In general, an object detection system142and/or an associated instance segmentation system144may be primarily concerned with determining which pixels in a given image correspond to each object instance. However, these systems may not consider portions of a given object that are not actually captured in a given image. For example, pixels that would otherwise correspond with an occluded portion of an object (e.g., a person partially occluded by a tree) may not be labeled as corresponding to the object. This can be disadvantageous for object detection, instance segmentation, and/or identity recognition because the size and shape of the object may appear in the captured image to be distorted due to the occlusion. To address this issue, the object detection system142and/or instance segmentation system144may be configured to infer a segmentation of an object instance in a captured image even if that object instance is occluded by other object instances. The object detection system142and/or instance segmentation system144may additionally be configured to determine which of the pixels associated with an object instance correspond with an occluded portion of that object instance. This process is generally referred to as “amodal segmentation” in that the segmentation process takes into consideration the whole of a physical object even if parts of the physical object are not necessarily perceived, for example, in received images captured by the image capture devices114/115. Amodal segmentation may be particularly advantageous when performing identity recognition and in a tracking system140configured for multi-object tracking.
Object Permanence
Loss of visual contact is to be expected when tracking an object in motion through a physical environment.
A tracking system140based primarily on visual inputs (e.g., images captured by image capture devices114/115) may lose a track on an object when visual contact is lost (e.g., due to occlusion by another object or by the object leaving a FOV of an image capture device114/115). In such cases, the tracking system140may become uncertain of the object's location and thereby declare the object lost. Human pilots generally do not have this issue, particularly in the case of momentary occlusions, due to the notion of object permanence. Object permanence assumes that, given certain physical constraints of matter, an object cannot suddenly disappear or instantly teleport to another location. Based on this assumption, if all escape paths from an occluded volume would have been clearly visible, then an object that has disappeared from view is likely to remain within that occluded volume. This situation is most clear when there is a single occluding object (e.g., a boulder) on flat ground with free space all around. If a tracked object in motion suddenly disappears in the captured image at a location of another object (e.g., the boulder), then it can be assumed that the object remains at a position occluded by the other object and that the tracked object will emerge along one of one or more possible escape paths. In some embodiments, the tracking system140may be configured to implement an algorithm that bounds the growth of uncertainty in the tracked object's location given this concept. In other words, when visual contact with a tracked object is lost at a particular position, the tracking system140can bound the uncertainty in the object's position to the last observed position and one or more possible escape paths given a last observed trajectory. A possible implementation of this concept may include generating, by the tracking system140, an occupancy map that is carved out using stereo depth estimates and the object segmentations, with a particle filter maintained over possible escape paths.
Augmented Reality Applications Based on Object Tracking
In some embodiments, information regarding objects in the physical environment gathered and/or generated by a tracking system140can be utilized to generate and display “augmentations” to tracked objects, for example, via associated display devices. Devices configured for augmented reality (AR devices) can deliver to a user a direct or indirect view of a physical environment which includes objects that are augmented (or supplemented) by computer-generated sensory outputs such as sound, video, graphics, or any other data that may augment (or supplement) a user's perception of the physical environment. For example, data gathered or generated by a tracking system140regarding a tracked object in the physical environment can be displayed to a user in the form of graphical overlays via an AR device while the UAV100is in flight through the physical environment and actively tracking the object and/or as an augmentation to video recorded by the UAV100after the flight has completed. Examples of AR devices that may be utilized to implement such functionality include smartphones, tablet computers, laptops, head-mounted display devices (e.g., Microsoft HoloLens™, Google Glass™), virtual retinal display devices, heads-up display (HUD) devices in vehicles, etc. For example, the previously mentioned mobile device104may be configured as an AR device. Note that for illustrative simplicity the term AR device is used herein to describe any type of device capable of presenting augmentations (visible, audible, tactile, etc.) to a user.
The term “AR device” shall be understood to also include devices not commonly referred to as AR devices such as virtual reality (VR) headset devices (e.g., Oculus Rift™). FIG.9shows an example view900of a physical environment910as presented at a display of an AR device. The view900of the physical environment910shown inFIG.9may be generated based on images captured by one or more image capture devices114/115of a UAV100and be displayed to a user via the AR device in real time or near real time as the UAV100is flying through the physical environment capturing the images. As shown inFIG.9, one or more augmentations may be presented to the user in the form of augmenting graphical overlays920a,922a,924a,926a, and920bassociated with objects (e.g., bikers940aand940b) in the physical environment910. For example, in an embodiment, the aforementioned augmenting graphical overlays may be generated and composited with video captured by UAV100as the UAV100tracks biker940a. The composite including the captured video and the augmenting graphical overlays may be displayed to the user via a display of the AR device (e.g., a smartphone). In other embodiments, the AR device may include a transparent display (e.g., a head mounted display) through which the user can view the surrounding physical environment910. The transparent display may comprise a waveguide element made of a light-transmissive material through which projected images of one or more of the aforementioned augmenting graphical overlays are propagated and directed at the eyes of the user such that the projected images appear to the user to overlay the user's view of the physical environment910and correspond with particular objects or points in the physical environment. In some embodiments augmentations may include labels with information associated with objects detected in the physical environment910. For example,FIG.9illustrates a scenario in which UAV100has detected and is tracking a first biker940aand a second biker940b. In response, one or more augmenting graphical overlays associated with the tracked objects may be displayed via the AR device at points corresponding to the locations of the bikers940a-bas they appear in the captured image. In some embodiments, augmentations may indicate specific object instances that are tracked by UAV100. In the illustrative example provided inFIG.9, such augmentations are presented as augmenting graphical overlays920a-bin the form of boxes that surround the specific object instances940a-b(respectively). This is just an example provided for illustrative purposes. Indications of object instances may be presented using other types of augmentations (visual or otherwise). For example, object instances and their segmentations may alternatively be visually displayed similar to the segmentation map804described with respect toFIG.8. In some embodiments, augmentations may include identifying information associated with detected objects. For example, augmenting graphical overlays922a-binclude names of the tracked bikers940a-b(respectively). Further, augmenting graphical overlay922aincludes a picture of biker940a. Recall that the identities of tracked individuals may have been resolved by the tracking system140as part of an identity recognition process. In some embodiments, information such as the picture of the biker940amay be automatically pulled from an external source such as a social media platform (e.g., Facebook™, Twitter™, Instagram™, etc.). 
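By way of a hedged illustration only, the following sketch composites a simple augmenting graphical overlay (a bounding box and an identity label) onto a captured frame using OpenCV; the box coordinates and label are placeholders standing in for outputs of the tracking and identity recognition processes described above.

# Hedged sketch of compositing an augmenting overlay onto a captured frame,
# assuming OpenCV (cv2) is available. The box and name values are
# placeholders for outputs of the tracking/identity recognition processes.
import cv2
import numpy as np

def draw_track_overlay(frame: np.ndarray, box: tuple, name: str) -> np.ndarray:
    """box is (x1, y1, x2, y2) in pixel coordinates of the tracked object."""
    x1, y1, x2, y2 = box
    out = frame.copy()
    cv2.rectangle(out, (x1, y1), (x2, y2), (0, 255, 0), 2)        # box around the tracked instance
    cv2.putText(out, name, (x1, max(y1 - 10, 0)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)    # identity label above the box
    return out

# Example usage with a placeholder frame and track:
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
augmented = draw_track_overlay(frame, (400, 200, 520, 420), "Biker A")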
Although not shown inFIG.9, augmentations may also include avatars associated with identified people. Avatars may include 3D graphical reconstructions of the tracked person (e.g., based on captured images and other sensor data), generative “bitmoji” from instance segmentations, or any other type of generated graphics representative of tracked objects. In some embodiments, augmentations may include information regarding an activity or state of the tracked object. For example, augmenting graphical overlay922aincludes information regarding the speed, distance traveled, and current heading of biker940a. Other information regarding the activity of a tracked object may similarly be displayed. In some embodiments, augmentations may include visual effects that track or interact with tracked objects. For example,FIG.9shows an augmenting graphical overlay924ain the form of a projection of a 3D trajectory (e.g., current, past, and/or future) associated with biker940a. In some embodiments, trajectories of multiple tracked objects may be presented as augmentations. Although not shown inFIG.9, augmentations may also include other visual effects such as halos, fireballs, dropped shadows, ghosting, multi-frame snapshots, etc. Semantic knowledge of objects in the physical environment may also enable new AR user interaction paradigms. In other words, certain augmentations may be interactive and allow a user to control certain aspects of the flight of the UAV100and/or image capture by the UAV100. Illustrative examples of interactive augmentations may include an interactive follow button that appears above moving objects. For example, in the scenario depicted inFIG.9, a UAV is tracking the motion of both bikers940aand940b, but is actively following (i.e., at a substantially constant separation distance) the first biker940a. This is indicated in the augmenting graphical overlay922athat states “currently following.” Note that a corresponding overlay922bassociated with the second biker940bincludes an interactive element (e.g., a “push to follow” button) that, when pressed by a user, would cause the UAV100to stop following biker940aand begin following biker940b. Similarly, overlay922aincludes an interactive element (e.g., a “cancel” button) that, when pressed by a user, would cause the UAV100to stop following biker940a. In such a situation, the UAV100may revert to some default autonomous navigation objective, for example, following the path the bikers are traveling on but not any one biker in particular. Other similar interactive augmentations may also be implemented. For example, although not shown inFIG.9, users may inspect certain objects, for example, by interacting with the visual depictions of the objects as presented by the AR device. For example, if the AR device includes a touch screen display, a user may cause the UAV100to follow the object simply by touching a region of the screen corresponding to the displayed object. This may also be applied to static objects that are not in motion. For example, by interacting with a region of the screen of an AR device corresponding to the displayed path950, an AR interface may display information regarding the path (e.g., source, destination, length, material, map overlay, etc.) or may cause the UAV to travel along the path at a particular altitude. The size and geometry of detected objects may be taken into consideration when presenting augmentations.
For example, in some embodiments, an interactive control element may be displayed as a ring about a detected object in an AR display. For example,FIG.9shows a control element926ashown as a ring that appears to encircle the first biker940a. The control element926amay respond to user interactions to control an angle at which UAV100captures images of the biker940a. For example, in a touch screen display context, a user may swipe their finger over the control element926ato cause the UAV100to revolve about the biker940a(e.g., at a substantially constant range) even as the biker940ais in motion. Other similar interactive elements may be implemented to allow the user to zoom the captured image in or out, pan from side to side, etc.
Example Localization Systems
A navigation system120of a UAV100may employ any number of other systems and techniques for localization.FIG.10shows an illustration of an example localization system1000that may be utilized to guide autonomous navigation of a vehicle such as UAV100. In some embodiments, the positions and/or orientations of the UAV100and various other physical objects in the physical environment can be estimated using any one or more of the subsystems illustrated inFIG.10. By tracking changes in the positions and/or orientations over time (continuously or at regular or irregular time intervals (i.e., continually)), the motions (e.g., velocity, acceleration, etc.) of UAV100and other objects may also be estimated. Accordingly, any systems described herein for determining position and/or orientation may similarly be employed for estimating motion. As shown inFIG.10, the example localization system1000may include the UAV100, a global positioning system (GPS) comprising multiple GPS satellites1002, a cellular system comprising multiple cellular antennae1004(with access to sources of localization data1006), a Wi-Fi system comprising multiple Wi-Fi access points1008(with access to sources of localization data1006), and/or a mobile device104operated by a user1. Satellite-based positioning systems such as GPS can provide effective global position estimates (within a few meters) of any device equipped with a receiver. For example, as shown inFIG.10, signals received at a UAV100from satellites of a GPS system1002can be utilized to estimate a global position of the UAV100. Similarly, positions relative to other devices (e.g., a mobile device104) can be determined by communicating (e.g., over a wireless communication link116) and comparing the global positions of the other devices. Localization techniques can also be applied in the context of various communications systems that are configured to transmit communications signals wirelessly. For example, various localization techniques can be applied to estimate a position of UAV100based on signals transmitted between the UAV100and any of cellular antennae1004of a cellular system or Wi-Fi access points1008,1010of a Wi-Fi system. Known positioning techniques that can be implemented include, for example, time of arrival (ToA), time difference of arrival (TDoA), round trip time (RTT), angle of arrival (AoA), and received signal strength (RSS). Moreover, hybrid positioning systems implementing multiple techniques such as TDoA and AoA, ToA and RSS, or TDoA and RSS can be used to improve the accuracy. Some Wi-Fi standards, such as 802.11ac, allow for RF signal beamforming (i.e., directional signal transmission using phase-shifted antenna arrays) from transmitting Wi-Fi routers.
Beamforming may be accomplished through the transmission of RF signals at different phases from spatially distributed antennas (a “phased antenna array”) such that constructive interference may occur at certain angles while destructive interference may occur at others, thereby resulting in a targeted directional RF signal field. Such a targeted field is illustrated conceptually inFIG.10by dotted lines1012emanating from Wi-Fi routers1010. An inertial measurement unit (IMU) may be used to estimate position and/or orientation of a device. An IMU is a device that measures a vehicle's angular velocity and linear acceleration. These measurements can be fused with other sources of information (e.g., those discussed above) to accurately infer velocity, orientation, and sensor calibrations. As described herein, a UAV100may include one or more IMUs. Using a method commonly referred to as “dead reckoning,” an IMU (or associated systems) may estimate a current position based on previously measured positions using measured accelerations and the time elapsed from the previously measured positions. While effective to an extent, the accuracy achieved through dead reckoning based on measurements from an IMU quickly degrades due to the cumulative effect of errors in each predicted current position. Errors are further compounded by the fact that each predicted position is based on a calculated integral of the measured velocity. To counter such effects, an embodiment utilizing an IMU for localization may incorporate localization data from other sources (e.g., the GPS, Wi-Fi, and cellular systems described above) to continually update the last known position and/or orientation of the object. Further, a nonlinear estimation algorithm (one embodiment being an “extended Kalman filter”) may be applied to a series of measured positions and/or orientations to produce a real-time optimized prediction of the current position and/or orientation based on assumed uncertainties in the observed data. Kalman filters are commonly applied in the area of aircraft navigation, guidance, and controls. Computer vision may be used to estimate the position and/or orientation of a capturing camera (and by extension a device to which the camera is coupled) as well as other objects in the physical environment. The term “computer vision” in this context may generally refer to any method of acquiring, processing, analyzing and “understanding” captured images. Computer vision may be used to estimate position and/or orientation using a number of different methods. For example, in some embodiments, raw image data from one or more image capture devices (onboard or remote from the UAV100) may be received and processed to correct for certain variables (e.g., differences in camera orientation and/or intrinsic parameters (e.g., lens variations)). As previously discussed with respect toFIG.1A, the UAV100may include two or more image capture devices114/115. By comparing images captured from two or more vantage points (e.g., at different time steps from an image capture device in motion), a system employing computer vision may calculate estimates for the position and/or orientation of a vehicle on which the image capture device is mounted (e.g., UAV100) and/or of captured objects in the physical environment (e.g., a tree, building, etc.). Computer vision can be applied to estimate position and/or orientation using a process referred to as “visual odometry.”FIG.11illustrates the working concept behind visual odometry at a high level.
A plurality of images are captured in sequence as an image capture device moves through space. Due to the movement of the image capture device, the images captured of the surrounding physical environment change from frame to frame. InFIG.11, this is illustrated by initial image capture FOV1152and a subsequent image capture FOV1154captured as the image capture device has moved from a first position to a second position over a period of time. In both images, the image capture device may capture real world physical objects, for example, the house1180and/or the person1102. Computer vision techniques are applied to the sequence of images to detect and match features of physical objects captured in the FOV of the image capture device. For example, a system employing computer vision may search for correspondences in the pixels of digital images that have overlapping FOV. The correspondences may be identified using a number of different methods such as correlation-based and feature-based methods. As shown inFIG.11, features such as the head of a human subject1102or the corner of the chimney on the house1180can be identified, matched, and thereby tracked. By incorporating sensor data from an IMU (or accelerometer(s) or gyroscope(s)) associated with the image capture device with the tracked features of the captured images, estimations may be made for the position and/or orientation of the image capture device relative to the objects1180,1102captured in the images. Further, these estimates can be used to calibrate various other systems, for example, through estimating differences in camera orientation and/or intrinsic parameters (e.g., lens variations) or IMU biases and/or orientation. Visual odometry may be applied at both the UAV100and any other computing device such as a mobile device104to estimate the position and/or orientation of the UAV100and/or other objects. Further, by communicating the estimates between the systems (e.g., via a wireless communication link116), estimates may be calculated for the respective positions and/or orientations relative to each other. Position and/or orientation estimates based in part on sensor data from an onboard IMU may introduce error propagation issues. As previously stated, optimization techniques may be applied to such estimates to counter uncertainties. In some embodiments, a nonlinear estimation algorithm (one embodiment being an “extended Kalman filter”) may be applied to a series of measured positions and/or orientations to produce a real-time optimized prediction of the current position and/or orientation based on assumed uncertainties in the observed data. Such estimation algorithms can be similarly applied to produce smooth motion estimations. In some embodiments, data received from sensors onboard UAV100can be processed to generate a 3D map of the surrounding physical environment while estimating the relative positions and/or orientations of the UAV100and/or other objects within the physical environment. This process is sometimes referred to as simultaneous localization and mapping (SLAM). In such embodiments, using computer vision processing, a system in accordance with the present teachings can search for dense correspondences between images with overlapping FOV (e.g., images taken during sequential time steps and/or stereoscopic images taken at the same time step). The system can then use the dense correspondences to estimate a depth or distance to each pixel represented in each image.
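As a minimal sketch of this correspondence-to-depth step, assuming the per-pixel disparities between matched stereo image locations have already been computed, and using placeholder calibration values for focal length and stereo baseline, depth can be recovered as follows.

# Minimal sketch of converting pixel correspondences between a stereo image
# pair into depth estimates. The focal length and baseline are placeholder
# values; real values would come from camera calibration.
import numpy as np

def depth_from_disparity(disparity_px: np.ndarray,
                         focal_length_px: float = 700.0,
                         baseline_m: float = 0.1) -> np.ndarray:
    """disparity_px: per-pixel horizontal offset (in pixels) between matched
    left/right image locations. Returns per-pixel depth in meters."""
    disparity = np.where(disparity_px > 0, disparity_px, np.nan)  # invalid matches -> NaN
    return (focal_length_px * baseline_m) / disparity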
These depth estimates can then be used to continually update a generated 3D model of the physical environment taking into account motion estimates for the image capture device (i.e., UAV100) through the physical environment. In some embodiments, a 3D model of the surrounding physical environment may be generated as a 3D occupancy map that includes multiple voxels, with each voxel corresponding to a 3D volume of space in the physical environment that is at least partially occupied by a physical object. For example,FIG.12shows an example view of a 3D occupancy map1202of a physical environment including multiple cubical voxels. Each of the voxels in the 3D occupancy map1202corresponds to a space in the physical environment that is at least partially occupied by a physical object. A navigation system120of a UAV100can be configured to navigate the physical environment by planning a 3D trajectory1220through the 3D occupancy map1202that avoids the voxels. In some embodiments, this 3D trajectory1220planned using the 3D occupancy map1202can be optimized by applying an image space motion planning process. In such an embodiment, the planned 3D trajectory1220of the UAV100is projected into an image space of captured images for analysis relative to certain identified high-cost regions (e.g., regions having invalid depth estimates). Computer vision may also be applied using sensing technologies other than cameras, such as LIDAR. For example, a UAV100equipped with LIDAR may emit one or more laser beams in a scan up to 360 degrees around the UAV100. Light received by the UAV100as the laser beams reflect off physical objects in the surrounding physical world may be analyzed to construct a real-time 3D computer model of the surrounding physical world. Depth sensing through the use of LIDAR may in some embodiments augment depth sensing through pixel correspondence as described earlier. Further, images captured by cameras (e.g., as described earlier) may be combined with the laser-constructed 3D models to form textured 3D models that may be further analyzed in real time or near real time for physical object recognition (e.g., by using computer vision algorithms). The computer vision-aided localization techniques described above may calculate the position and/or orientation of objects in the physical world in addition to the position and/or orientation of the UAV100. The estimated positions and/or orientations of these objects may then be fed into a motion planning system130of the navigation system120to plan paths that avoid obstacles while satisfying certain navigation objectives (e.g., travel to a particular location, follow a tracked object, etc.). In addition, in some embodiments, a navigation system120may incorporate data from proximity sensors (e.g., electromagnetic, acoustic, and/or optics-based) to estimate obstacle positions with more accuracy. Further refinement may be possible with the use of stereoscopic computer vision with multiple cameras, as described earlier. The localization system1000ofFIG.10(including all of the associated subsystems as previously described) is only one example of a system configured to estimate positions and/or orientations of a UAV100and other objects in the physical environment. A localization system1000may include more or fewer components than shown, may combine two or more components, or may have a different configuration or arrangement of the components.
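Returning briefly to the voxel occupancy-map representation described above, the following hedged sketch illustrates one possible (assumed, not prescribed) way to quantize depth points into occupied voxels and to check a candidate trajectory against them; the 0.5 m voxel size is illustrative only.

# Hedged sketch of a voxel occupancy map: depth points are quantized into a
# voxel grid, and a candidate 3D trajectory is checked against occupied
# voxels. The 0.5 m voxel size is an illustrative assumption.
import numpy as np

VOXEL_SIZE_M = 0.5

def build_occupancy(points_3d: np.ndarray) -> set:
    """points_3d: (N, 3) array of 3D points from depth estimation.
    Returns the set of occupied voxel indices."""
    return set(map(tuple, np.floor(points_3d / VOXEL_SIZE_M).astype(int)))

def trajectory_is_clear(waypoints_3d: np.ndarray, occupied: set) -> bool:
    """True if no waypoint of the planned trajectory falls in an occupied voxel."""
    voxels = map(tuple, np.floor(waypoints_3d / VOXEL_SIZE_M).astype(int))
    return all(v not in occupied for v in voxels)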
Some of the various components shown inFIG.10may be implemented in hardware, software, or a combination of both hardware and software, including one or more signal processing and/or application specific integrated circuits.
Unmanned Aerial Vehicle—Example System
A UAV100, according to the present teachings, may be implemented as any type of UAV. A UAV, sometimes referred to as a drone, is generally defined as any aircraft capable of controlled flight without a human pilot onboard. UAVs may be controlled autonomously by onboard computer processors or via remote control by a remotely located human pilot. Similar to an airplane, UAVs may utilize fixed aerodynamic surfaces along with a propulsion system (e.g., propeller, jet, etc.) to achieve lift. Alternatively, similar to helicopters, UAVs may directly use a propulsion system (e.g., propeller, jet, etc.) to counter gravitational forces and achieve lift. Propulsion-driven lift (as in the case of helicopters) offers significant advantages in certain implementations, for example, as a mobile filming platform, because it allows for controlled motion along all axes. Multi-rotor helicopters, in particular quadcopters, have emerged as a popular UAV configuration. A quadcopter (also known as a quadrotor helicopter or quadrotor) is a multi-rotor helicopter that is lifted and propelled by four rotors. Unlike most helicopters, quadcopters use two sets of two fixed-pitch propellers. A first set of rotors turns clockwise, while a second set of rotors turns counter-clockwise. In turning opposite directions, a first set of rotors may counter the angular torque caused by the rotation of the other set, thereby stabilizing flight. Flight control is achieved through variation in the angular velocity of each of the four fixed-pitch rotors. By varying the angular velocity of each of the rotors, a quadcopter may perform precise adjustments in its position (e.g., adjustments in altitude and level flight left, right, forward and backward) and orientation, including pitch (rotation about a first lateral axis), roll (rotation about a second lateral axis), and yaw (rotation about a vertical axis). For example, if all four rotors are spinning (two clockwise, and two counter-clockwise) at the same angular velocity, the net aerodynamic torque about the vertical yaw axis is zero. Provided the four rotors spin at sufficient angular velocity to provide a vertical thrust equal to the force of gravity, the quadcopter can maintain a hover. An adjustment in yaw may be induced by varying the angular velocity of a subset of the four rotors, thereby mismatching the cumulative aerodynamic torque of the four rotors. Similarly, an adjustment in pitch and/or roll may be induced by varying the angular velocity of a subset of the four rotors but in a balanced fashion such that lift is increased on one side of the craft and decreased on the other side of the craft. An adjustment in altitude from hover may be induced by applying a balanced variation in all four rotors, thereby increasing or decreasing the vertical thrust. Positional adjustments left, right, forward, and backward may be induced through combined pitch/roll maneuvers with balanced applied vertical thrust. For example, to move forward on a horizontal plane, the quadcopter would vary the angular velocity of a subset of its four rotors in order to perform a pitch forward maneuver. While pitching forward, the total vertical thrust may be increased by increasing the angular velocity of all the rotors.
Due to the forward-pitched orientation, the acceleration caused by the vertical thrust maneuver will have a horizontal component and will therefore accelerate the craft forward on a horizontal plane. FIG.13shows a diagram of an example UAV system1300including various functional system components that may be part of a UAV100, according to some embodiments. UAV system1300may include one or more means for propulsion (e.g., rotors1302and motor(s)1304), one or more electronic speed controllers1306, a flight controller1308, a peripheral interface1310, processor(s)1312, a memory controller1314, a memory1316(which may include one or more computer readable storage media), a power module1318, a GPS module1320, a communications interface1322, audio circuitry1324, an accelerometer1326(including subcomponents such as gyroscopes), an inertial measurement unit (IMU)1328, a proximity sensor1330, an optical sensor controller1332and associated optical sensor(s)1334, a mobile device interface controller1336with associated interface device(s)1338, and any other input controllers1340and input device(s)1342, for example, display controllers with associated display device(s). These components may communicate over one or more communication buses or signal lines as represented by the arrows inFIG.13. UAV system1300is only one example of a system that may be part of a UAV100. A UAV100may include more or fewer components than shown in system1300, may combine two or more components as functional units, or may have a different configuration or arrangement of the components. Some of the various components of system1300shown inFIG.13may be implemented in hardware, software or a combination of both hardware and software, including one or more signal processing and/or application specific integrated circuits. Also, UAV100may include an off-the-shelf UAV (e.g., a currently available remote-controlled quadcopter) coupled with a modular add-on device (for example, one including components within outline1390) to perform the innovative functions described in this disclosure. As described earlier, the means for propulsion1302-1304may comprise fixed-pitch rotors. The means for propulsion may also include variable-pitch rotors (for example, using a gimbal mechanism), a variable-pitch jet engine, or any other mode of propulsion having the effect of providing force. The means for propulsion1302-1304may include a means for varying the applied thrust, for example, via an electronic speed controller1306varying the speed of each fixed-pitch rotor. Flight controller1308may include a combination of hardware and/or software configured to receive input data (e.g., sensor data from image capture devices1334and/or generated trajectories from an autonomous navigation system120), interpret the data, and output control commands to the propulsion systems1302-1306and/or aerodynamic surfaces (e.g., fixed wing control surfaces) of the UAV100. Alternatively or in addition, a flight controller1308may be configured to receive control commands generated by another component or device (e.g., processors1312and/or a separate computing device), interpret those control commands, and generate control signals to the propulsion systems1302-1306and/or aerodynamic surfaces (e.g., fixed wing control surfaces) of the UAV100. In some embodiments, the previously mentioned navigation system120of the UAV100may comprise the flight controller1308and/or any one or more of the other components of system1300.
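As a hedged illustration of the rotor-speed variation scheme described above, the following minimal sketch maps collective thrust and attitude commands onto four fixed-pitch rotors; the "+"-style rotor layout and sign conventions are illustrative assumptions and are not specified by this disclosure.

# Hedged sketch of rotor-speed mixing for a "+"-style quadcopter. Assumed
# (illustrative) layout: rotor 0 front, 1 right, 2 back, 3 left, with
# rotors 0/2 turning clockwise and 1/3 counter-clockwise.
import numpy as np

# Rows: collective thrust, roll, pitch, yaw. Columns: rotors 0..3.
MIXER = np.array([[1.0,  1.0,  1.0,  1.0],   # thrust: all rotors equally
                  [0.0, -1.0,  0.0,  1.0],   # roll: left/right differential
                  [1.0,  0.0, -1.0,  0.0],   # pitch: front/back differential
                  [1.0, -1.0,  1.0, -1.0]])  # yaw: unbalance CW vs. CCW torque

def mix(thrust: float, roll: float, pitch: float, yaw: float) -> np.ndarray:
    """Map high-level commands to per-rotor angular-velocity adjustments."""
    return np.array([thrust, roll, pitch, yaw]) @ MIXER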
Alternatively, the flight controller1308shown inFIG.13may exist as a component separate from the navigation system120, for example, similar to the flight controller160shown inFIG.1B. Memory1316may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to memory1316by other components of system1300, such as the processors1312and the peripherals interface1310, may be controlled by the memory controller1314. The peripherals interface1310may couple the input and output peripherals of system1300to the processor(s)1312and memory1316. The one or more processors1312run or execute various software programs and/or sets of instructions stored in memory1316to perform various functions for the UAV100and to process data. In some embodiments, processors1312may include general central processing units (CPUs), specialized processing units such as graphical processing units (GPUs) particularly suited to parallel processing applications, or any combination thereof. In some embodiments, the peripherals interface1310, the processor(s)1312, and the memory controller1314may be implemented on a single integrated chip. In some other embodiments, they may be implemented on separate chips. The network communications interface1322may facilitate transmission and reception of communications signals, often in the form of electromagnetic signals. The transmission and reception of electromagnetic communications signals may be carried out over physical media such as copper wire cabling or fiber optic cabling, or may be carried out wirelessly, for example, via a radiofrequency (RF) transceiver. In some embodiments, the network communications interface may include RF circuitry. In such embodiments, RF circuitry may convert electrical signals to/from electromagnetic signals and communicate with communications networks and other communications devices via the electromagnetic signals. The RF circuitry may include well-known circuitry for performing these functions, including, but not limited to, an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a CODEC chipset, a subscriber identity module (SIM) card, memory, and so forth. The RF circuitry may facilitate transmission and receipt of data over communications networks (including public, private, local, and wide area). For example, communication may be over a wide area network (WAN), a local area network (LAN), or a network of networks such as the Internet. Communication may be facilitated over wired transmission media (e.g., via Ethernet) or wirelessly. Wireless communication may be over a wireless cellular telephone network, a wireless local area network (LAN) and/or a metropolitan area network (MAN), and other modes of wireless communication. The wireless communication may use any of a plurality of communications standards, protocols, and technologies, including, but not limited to, Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), high-speed downlink packet access (HSDPA), wideband code division multiple access (W-CDMA), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11n and/or IEEE 802.11ac), voice over Internet Protocol (VoIP), Wi-MAX, or any other suitable communication protocols.
The audio circuitry1324, including the speaker and microphone1350, may provide an audio interface between the surrounding environment and the UAV100. The audio circuitry1324may receive audio data from the peripherals interface1310, convert the audio data to an electrical signal, and transmit the electrical signal to the speaker1350. The speaker1350may convert the electrical signal to human-audible sound waves. The audio circuitry1324may also receive electrical signals converted by the microphone1350from sound waves. The audio circuitry1324may convert the electrical signal to audio data and transmit the audio data to the peripherals interface1310for processing. Audio data may be retrieved from and/or transmitted to memory1316and/or the network communications interface1322by the peripherals interface1310. The I/O subsystem1360may couple input/output peripherals of UAV100, such as an optical sensor system1334, the mobile device interface1338, and other input/control devices1342, to the peripherals interface1310. The I/O subsystem1360may include an optical sensor controller1332, a mobile device interface controller1336, and other input controller(s)1340for other input or control devices. The one or more input controllers1340receive/send electrical signals from/to other input or control devices1342. The other input/control devices1342may include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, touch screen displays, slider switches, joysticks, click wheels, and so forth. A touch screen display may be used to implement virtual or soft buttons and one or more soft keyboards. A touch-sensitive touch screen display may provide an input interface and an output interface between the UAV100and a user. A display controller may receive and/or send electrical signals from/to the touch screen. The touch screen may display visual output to a user. The visual output may include graphics, text, icons, video, and any combination thereof (collectively termed “graphics”). In some embodiments, some or all of the visual output may correspond to user-interface objects, further details of which are described below. A touch sensitive display system may have a touch-sensitive surface, sensor or set of sensors that accepts input from the user based on haptic and/or tactile contact. The touch sensitive display system and the display controller (along with any associated modules and/or sets of instructions in memory1316) may detect contact (and any movement or breaking of the contact) on the touch screen and convert the detected contact into interaction with user-interface objects (e.g., one or more soft keys or images) that are displayed on the touch screen. In an exemplary embodiment, a point of contact between a touch screen and the user corresponds to a finger of the user. The touch screen may use LCD (liquid crystal display) technology, or LPD (light emitting polymer display) technology, although other display technologies may be used in other embodiments. The touch screen and the display controller may detect contact and any movement or breaking thereof using any of a plurality of touch sensing technologies now known or later developed, including, but not limited to, capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with a touch screen. 
The mobile device interface device1338along with mobile device interface controller1336may facilitate the transmission of data between a UAV100and other computing devices such as a mobile device104. According to some embodiments, communications interface1322may facilitate the transmission of data between UAV100and a mobile device104(for example, where data is transferred over a Wi-Fi network). UAV system1300also includes a power system1318for powering the various components. The power system1318may include a power management system, one or more power sources (e.g., battery, alternating current (AC), etc.), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light-emitting diode (LED)), and any other components associated with the generation, management, and distribution of power in a computerized device. UAV system1300may also include one or more image capture devices1334. Image capture devices1334may be the same as the image capture device114/115of UAV100described with respect toFIG.1A.FIG.13shows an image capture device1334coupled to an image capture controller1332in I/O subsystem1360. The image capture device1334may include one or more optical sensors. For example, image capture device1334may include a charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) phototransistors. The optical sensors of image capture devices1334receive light from the environment, projected through one or more lenses (the combination of an optical sensor and a lens can be referred to as a “camera”), and convert the light to data representing an image. In conjunction with an imaging module located in memory1316, the image capture device1334may capture images (including still images and/or video). In some embodiments, an image capture device1334may include a single fixed camera. In other embodiments, an image capture device1334may include a single adjustable camera (adjustable using a gimbal mechanism with one or more axes of motion). In some embodiments, an image capture device1334may include a camera with a wide-angle lens providing a wider FOV. In some embodiments, an image capture device1334may include an array of multiple cameras providing up to a full 360-degree view in all directions. In some embodiments, an image capture device1334may include two or more cameras (of any type as described herein) placed next to each other in order to provide stereoscopic vision. In some embodiments, an image capture device1334may include multiple cameras of any combination as described above. In some embodiments, the cameras of an image capture device1334may be arranged such that at least two cameras are provided with overlapping FOV at multiple angles around the UAV100, thereby allowing for stereoscopic (i.e., 3D) image/video capture and depth recovery (e.g., through computer vision algorithms) at multiple angles around UAV100. For example, UAV100may include four sets of two cameras each positioned so as to provide a stereoscopic view at multiple angles around the UAV100. In some embodiments, a UAV100may include some cameras dedicated for image capture of a subject and other cameras dedicated for image capture for visual navigation (e.g., through visual inertial odometry). UAV system1300may also include one or more proximity sensors1330.FIG.13shows a proximity sensor1330coupled to the peripherals interface1310. Alternately, the proximity sensor1330may be coupled to an input controller1340in the I/O subsystem1360.
Proximity sensors1330may generally include remote sensing technology for proximity detection, range measurement, target identification, etc. For example, proximity sensors1330may include radar, sonar, and LIDAR. UAV system1300may also include one or more accelerometers1326.FIG.13shows an accelerometer1326coupled to the peripherals interface1310. Alternately, the accelerometer1326may be coupled to an input controller1340in the I/O subsystem1360. UAV system1300may include one or more inertial measurement units (IMUs)1328. An IMU1328may measure and report the UAV's velocity, acceleration, orientation, and gravitational forces using a combination of gyroscopes and accelerometers (e.g., accelerometer1326). UAV system1300may include a global positioning system (GPS) receiver1320.FIG.13shows a GPS receiver1320coupled to the peripherals interface1310. Alternately, the GPS receiver1320may be coupled to an input controller1340in the I/O subsystem1360. The GPS receiver1320may receive signals from GPS satellites in orbit around the earth, calculate a distance to each of the GPS satellites (through the use of GPS software), and thereby pinpoint a current global position of UAV100. In some embodiments, the software components stored in memory1316may include an operating system, a communication module (or set of instructions), a flight control module (or set of instructions), a localization module (or set of instructions), a computer vision module, a graphics module (or set of instructions), and other applications (or sets of instructions). For clarity, one or more modules and/or applications may not be shown inFIG.13. An operating system (e.g., Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components. A communications module may facilitate communication with other devices over one or more external ports1344and may also include various software components for handling data transmission via the network communications interface1322. The external port1344(e.g., Universal Serial Bus (USB), FIREWIRE, etc.) may be adapted for coupling directly to other devices or indirectly over a network (e.g., the Internet, wireless LAN, etc.). A graphics module may include various software components for processing, rendering, and displaying graphics data. As used herein, the term “graphics” may include any object that can be displayed to a user, including, without limitation, text, still images, videos, animations, icons (such as user-interface objects including soft keys), and the like. The graphics module in conjunction with a graphics processing unit (GPU)1312may process, in real time or near real time, graphics data captured by optical sensor(s)1334and/or proximity sensors1330. A computer vision module, which may be a component of a graphics module, provides analysis and recognition of graphics data. For example, while UAV100is in flight, the computer vision module along with a graphics module (if separate), GPU1312, and image capture device(s)1334and/or proximity sensors1330may recognize and track the captured image of an object located on the ground.
The computer vision module may further communicate with a localization/navigation module and flight control module to update a position and/or orientation of the UAV100and to provide course corrections to fly along a planned trajectory through a physical environment. A localization/navigation module may determine the location and/or orientation of UAV100and provide this information for use in various modules and applications (e.g., to a flight control module in order to generate commands for use by the flight controller1308). Image capture device(s)1334, in conjunction with an image capture device controller1332and a graphics module, may be used to capture images (including still images and video) and store them into memory1316. Each of the above-identified modules and applications corresponds to a set of instructions for performing one or more functions described above. These modules (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and, thus, various subsets of these modules may be combined or otherwise re-arranged in various embodiments. In some embodiments, memory1316may store a subset of the modules and data structures identified above. Furthermore, memory1316may store additional modules and data structures not described above.
Example Computer Processing System
FIG.14is a block diagram illustrating an example of a processing system1400in which at least some operations described in this disclosure can be implemented. The example processing system1400may be part of any of the aforementioned devices including, but not limited to, UAV100and/or mobile device104. The processing system1400may include one or more central processing units (“processors”)1402, main memory1406, non-volatile memory1410, network adapter1412(e.g., network interfaces), display1418, input/output devices1420, control device1422(e.g., keyboard and pointing devices), drive unit1424including a storage medium1426, and signal generation device1430that are communicatively connected to a bus1416. The bus1416is illustrated as an abstraction that represents any one or more separate physical buses, point-to-point connections, or both, connected by appropriate bridges, adapters, or controllers. The bus1416, therefore, can include, for example, a system bus, a Peripheral Component Interconnect (PCI) bus or PCI-Express bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), an Inter-Integrated Circuit (I2C) bus, or an Institute of Electrical and Electronics Engineers (IEEE) standard1394bus (also called “Firewire”). A bus may also be responsible for relaying data packets (e.g., via full or half duplex wires) between components of the network appliance, such as the switching fabric, network port(s), tool port(s), etc. In various embodiments, the processing system1400may be a server computer, a client computer, a personal computer (PC), a user device, a tablet PC, a laptop computer, a personal digital assistant (PDA), a cellular telephone, an iPhone, an iPad, a Blackberry, a processor, a telephone, a web appliance, a network router, switch or bridge, a console, a hand-held console, a (hand-held) gaming device, a music player, any portable, mobile, hand-held device, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by the computing system.
While the main memory1406, non-volatile memory1410, and storage medium1426(also called a “machine-readable medium”) are shown to be a single medium, the terms “machine-readable medium” and “storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store one or more sets of instructions1428. The terms “machine-readable medium” and “storage medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the computing system and that cause the computing system to perform any one or more of the methodologies of the presently disclosed embodiments. In general, the routines executed to implement the embodiments of the disclosure may be implemented as part of an operating system or a specific application, component, program, object, module, or sequence of instructions referred to as “computer programs.” The computer programs typically comprise one or more instructions (e.g., instructions1404,1408,1428) set at various times in various memory and storage devices in a computer, and that, when read and executed by one or more processing units or processors1402, cause the processing system1400to perform operations to execute elements involving the various aspects of the disclosure. Moreover, while embodiments have been described in the context of fully functioning computers and computer systems, those skilled in the art will appreciate that the various embodiments are capable of being distributed as a program product in a variety of forms, and that the disclosure applies equally regardless of the particular type of machine or computer-readable media used to actually effect the distribution. Further examples of machine-readable storage media, machine-readable media, or computer-readable (storage) media include recordable type media such as volatile and non-volatile memory devices1410, floppy and other removable disks, hard disk drives, optical disks (e.g., Compact Disk Read-Only Memory (CD ROMS), Digital Versatile Disks (DVDs)), and transmission type media such as digital and analog communication links. The network adapter1412enables the processing system1400to mediate data in a network1414with an entity that is external to the processing system1400, such as a network appliance, through any known and/or convenient communications protocol supported by the processing system1400and the external entity. The network adapter1412can include one or more of a network adaptor card, a wireless network interface card, a router, an access point, a wireless router, a switch, a multilayer switch, a protocol converter, a gateway, a bridge, bridge router, a hub, a digital media receiver, and/or a repeater. The network adapter1412can include a firewall which can, in some embodiments, govern and/or manage permission to access/proxy data in a computer network, and track varying levels of trust between different machines and/or applications. The firewall can be any number of modules having any combination of hardware and/or software components able to enforce a predetermined set of access rights between a particular set of machines and applications, machines and machines, and/or applications and applications, for example, to regulate the flow of traffic and resource sharing between these varying entities. 
The firewall may additionally manage and/or have access to an access control list which details permissions including, for example, the access and operation rights of an object by an individual, a machine, and/or an application, and the circumstances under which the permission rights stand. As indicated above, the techniques introduced here may be implemented by, for example, programmable circuitry (e.g., one or more microprocessors), programmed with software and/or firmware, entirely in special-purpose hardwired (i.e., non-programmable) circuitry, or in a combination of such forms. Special-purpose circuitry can be in the form of, for example, one or more application-specific integrated circuits (ASICs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), etc. Note that any of the embodiments described above can be combined with another embodiment, except to the extent that it may be stated otherwise above or to the extent that any such embodiments might be mutually exclusive in function and/or structure. Although the present invention has been described with reference to specific exemplary embodiments, it will be recognized that the invention is not limited to the embodiments described, but can be practiced with modification and alteration within the spirit and scope of the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative sense rather than a restrictive sense. | 105,337 |
11861893 | DETAILED DESCRIPTION According to one embodiment, a reading support system includes a processing device. The processing device includes an extractor and a type determiner. The extractor extracts a plurality of numeral regions from a candidate region. The candidate region is a candidate of a region in which a meter is imaged. The numeral regions respectively include a plurality of characters of the meter. The type determiner determines a type of the meter based on positions of the numeral regions. Various embodiments are described below with reference to the accompanying drawings. In the specification and drawings, components similar to those described previously or illustrated in an antecedent drawing are marked with like reference numerals, and a detailed description is omitted as appropriate. First Embodiment FIG.1is a block diagram illustrating the configuration of a reading support system according to a first embodiment. FIGS.2A to2Dare schematic views illustrating meters. FIGS.3A to12describe processing according to the reading support system according to the first embodiment. The reading support system1is used when reading a value (an indication) shown by a meter from an image including the meter. The type of the object meter is arbitrary. For example, the reading support system1may be used to read the indications of round meters M1and M2such as those illustrated inFIGS.2A and2B. A round meter includes a pointer In rotating with some point as the center, multiple graduations Sc marked around the center point, and numerals Nu marked to correspond to at least a portion of the multiple graduations Sc. The graduations Sc are arranged in a circular configuration or a circular arc-like configuration. The round meter shows a value by the pointer In indicating a designated graduation Sc by one of the pointer In or the graduations Sc rotating along the arrangement direction of the graduations Sc. The reading support system1also can be used to read the indication of a vertical meter M3such as that illustrated inFIG.2Cor a horizontal meter M4such as that illustrated inFIG.2D. Vertical meters and horizontal meters include the pointer In, the multiple graduations Sc arranged in one direction, and the numerals Nu marked to correspond to at least a portion of the multiple graduations Sc. Vertical meters and horizontal meters show a value by the pointer In indicating a designated graduation Sc by one of the pointer In or the graduations Sc moving along the arrangement direction of the graduations Sc. As illustrated inFIG.1, the reading support system1according to the first embodiment includes a processing device10and a memory device20. The processing device10includes an extractor14. In the example ofFIG.1, the processing device10further includes an acceptor11, an extractor12, a corrector13, a reader15, and an outputter16. For example, an external imaging device acquires a static image by imaging the meter. The imaging device transmits the acquired image to the processing device10. Or, the imaging device may store the image in the memory device20. The processing device10acquires the image by accessing the memory device20. A video image may be imaged by the imaging device. For example, the imaging device cuts out a static image from the video image and transmits the static image to the processing device10or the memory device20. An object other than the meter may be imaged in the image. The acceptor11accepts the image input to the processing device10. 
The acceptor11outputs the image to the extractor12.FIG.3Ais an example of the image input to the processing device10. From the input image, the extractor12extracts a candidate of the region in which the meter is imaged. Here, the image that is imaged by the imaging device and input to the processing device10is called the input image. A portion of the input image that is a candidate of the region in which the meter is imaged is called a candidate region. Multiple candidate regions may be output from the extractor12. As one specific example, the extractor12includes a contour extractor12aand a selector12b. For example, the contour extractor12aextracts contours (edges) included in the input image based on the brightness difference or the luminance difference of the input image. The contour extractor12aalso may perform processing of the input image as appropriate when extracting the contours. For example, the contour extractor12amay convert the input image into grayscale, subsequently binarize the image, and extract the contours from the binary image. The selector12bextracts a region surrounded with a contour from the input image. For example, the selector12bcompares the maximum length in some one direction and the surface area to preset thresholds for the regions. The selector12bselects, as candidate regions, the regions for which the maximum length and the surface area are respectively greater than the thresholds. Thereby, the regions that have surface areas that are too small, regions having shapes much different from a meter, etc., are excluded.FIG.3Billustrates a candidate region CR1extracted from an input image II illustrated inFIG.3A. The selector12boutputs the candidate region to the corrector13. The corrector13performs a projective transformation of the candidate region as appropriate. Typically, the outer edge of the meter or the outer edge of the display panel of the meter is circular or rectangular. When the candidate region is trapezoidal, parallelogram-shaped, etc., the corrector13performs a projective transformation of the candidate region so that the outer edge of the candidate region is rectangular. When the candidate region is elliptical or oval, the corrector13performs a projective transformation of the candidate region so that the outer edge of the candidate region is circular. The distortion of the candidate region is corrected by the projective transformation. The corrector13outputs the corrected candidate region to the extractor14. The extractor14extracts a numeral region, a scale region, and a pointer region from the candidate region. Specifically, the extractor14includes a numeral region extractor14a, a type determiner14b, a scale region extractor14c, and a pointer region extractor14d. The numeral region extractor14aextracts a character candidate, which is a candidate of a region in which a character is imaged, from the candidate region. The character candidate is a portion of the candidate region. The character candidate includes a numeral, an alphabet character, etc. Or, an object other than a character such as adhered matter on the meter, noise of the image, etc., may be included in the character candidate. For example, scene character recognition technology or the like is used to extract the character candidate. The size of the extracted character candidate is determined based on the size of the candidate region. The numeral region extractor14aperforms the following processing for each of the character candidates. 
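Before turning to that per-candidate processing, the candidate-region selection by the contour extractor12aand the selector12bcan be pictured with the hedged sketch below: the input is binarized, external contours are found, and only regions whose bounding-box length and contour area exceed thresholds are kept. It assumes OpenCV 4 and a BGR input image; the function name and the threshold values are placeholders, not values taken from this description.

    import cv2

    def extract_candidate_regions(input_image, min_len=80, min_area=5000):
        # Rough analogue of the contour extractor and selector: binarize,
        # find external contours, keep regions that are large enough.
        # min_len and min_area are placeholder thresholds.
        gray = cv2.cvtColor(input_image, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        candidates = []
        for contour in contours:
            x, y, w, h = cv2.boundingRect(contour)
            if max(w, h) > min_len and cv2.contourArea(contour) > min_area:
                candidates.append(input_image[y:y + h, x:x + w])  # one candidate region
        return candidates
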
First, the numeral region extractor14acalculates a match rate between the character candidate and a numeral for each of the character candidates. For example, the numeral region extractor14acalculates, as the match rate, the similarity of a feature between the character candidate and a preregistered image of a numeral. Then, the numeral region extractor14aperforms a preset angular rotation of the character candidate. The numeral region extractor14arecalculates the match rate between the character candidate and the numeral for each of the rotated character candidates. Thereafter, the rotation of the character candidate and the calculation of the match rate are repeated until the rotation count reaches a prescribed number or the total rotation angle reaches a prescribed threshold. InFIGS.4A and4B, the horizontal axis is a rotation angle A. The vertical axis is a match rate R. Results such as those illustrated inFIGS.4A and4Bare obtained by repeating the rotation of the character candidate and the calculation of the match rate. FIG.4Aillustrates a result when a numeral is included in the character candidate. The numeral region extractor14acalculates the proportion of the change of the match rate with respect to the change of the rotation angle from the result. For example, the numeral region extractor14adetermines that a numeral is included in the character candidate when the proportion is greater than a threshold. In the example ofFIG.4A, the numeral region extractor14adetermines that a numeral is included in the character candidate from the result of the cross-hatched portion. FIG.4Billustrates a result when a numeral is not included in the character candidate. When a numeral is not included in the character candidate, the proportion of the change of the match rate with respect to the change of the rotation angle is small as illustrated inFIG.4B. Based on this result, the numeral region extractor14aexcludes the character candidates that do not include a numeral from the multiple character candidates. The numeral region extractor14adetermines the minimum area surrounding the character candidate for each of the character candidates determined to include a numeral. The numeral region extractor14amay rotate the candidate region based on the result of the rotation angle and the match rate. For example, when a numeral is determined to be included in the character candidate, the numeral region extractor14arecords the angle at which the maximum value of the match rate is obtained. When the determination is completed for all of the character candidates, the numeral region extractor14acalculates the average value of the angles. The numeral region extractor14arotates the candidate region by the calculated average value. By the processing described above, for example, the minimum areas that surround the numerals are obtained as illustrated inFIG.3Cfrom the image of the candidate region CR1illustrated inFIG.3B. These minimum areas each are extracted as numeral regions R1from the candidate region CR1. The numeral region extractor14aoutputs, to the type determiner14b, the extracted multiple numeral regions and the positions of the numeral regions in the candidate region. At this time, information that relates to the numeral regions may be output from the numeral region extractor14ato the corrector13. 
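The rotation-and-match-rate test described above can be pictured with the following sketch, which rotates a character candidate in fixed angular steps, records the best template-match score against preregistered numeral images at each step, and flags the candidate as a numeral when the score changes steeply with angle. The template images, the step size, and the thresholds are assumptions for illustration; the description does not specify the match-rate computation beyond a similarity of features.

    import cv2
    import numpy as np

    def numeral_match_curve(candidate, templates, step_deg=10, steps=9):
        # Best template-match score of one character candidate at each rotation
        # angle; candidate and templates are single-channel uint8 images.
        h, w = candidate.shape[:2]
        scores = []
        for k in range(steps + 1):
            rot = cv2.getRotationMatrix2D((w / 2, h / 2), k * step_deg, 1.0)
            rotated = cv2.warpAffine(candidate, rot, (w, h))
            best = 0.0
            for template in templates:
                resized = cv2.resize(template, (w, h))
                result = cv2.matchTemplate(rotated, resized, cv2.TM_CCOEFF_NORMED)
                best = max(best, float(result.max()))
            scores.append(best)
        return np.array(scores)

    def looks_like_numeral(scores, step_deg=10, slope_threshold=0.01):
        # A numeral is assumed when the match rate changes steeply with angle.
        slope = np.abs(np.diff(scores)) / step_deg
        return float(slope.max()) > slope_threshold
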
For example, the numeral region extractor14aacquires a length L1and a length L2for at least a portion of the extracted numeral regions as illustrated inFIG.3Cand outputs the length L1and the length L2to the corrector13. For example, the corrector13calculates the distortion of the candidate region based on the ratio of the length L1and the length L2and re-performs a projective transformation of the candidate region to correct the distortion. In such a case, the processing by the numeral region extractor14ais re-performed for the candidate region of the projective transformation. Numeral regions that have less distortion can be extracted thereby. A scale region and a pointer region that have less distortion can be extracted in the subsequent processing. The reading accuracy of the indication of the meter can be increased when the distortions of the numeral region, the scale region, and the pointer region are small. The type determiner14bdetermines the type of the meter imaged in the input image based on the positions of the numeral regions. For example, when the multiple numeral regions R1are arranged in a curve in the circumferential direction as illustrated inFIG.3C, the type determiner14bdetermines that the meter imaged in the input image is a round meter. When the numerals are arranged along one direction, the type determiner14bdetermines that the meter imaged in the input image is a vertical meter or a horizontal meter. The type determiner14boutputs the type of the meter to the scale region extractor14c. The scale region extractor14cextracts the scale region from the candidate region. For example, the processing by the scale region extractor14cchanges according to the result of the determination by the type determiner14b. Specifically, when the type of the meter is determined to be round, the scale region extractor14cperforms a polar transformation of the candidate region. For example, when the candidate region is circular, the center of the circle is used as the center of the polar coordinate system. When the candidate region is rectangular, the intersection of the diagonal lines is used as the center of the polar coordinate system. The polar transformation of the candidate region is performed after setting the center of the polar coordinate system. When the meter is round, the graduations are arranged in the circumferential direction in the candidate region. In the candidate region after the polar transformation, the graduations are arranged in substantially one direction. When the type of the meter is vertical or horizontal, a polar transformation of the candidate region is not performed. This is because the graduations are already arranged in one direction in vertical meters and horizontal meters.FIG.5Aillustrates a result of a polar transformation of the candidate region CR1ofFIG.3B. For example, the scale region extractor14cbinarizes a candidate region CR2after the polar transformation illustrated inFIG.5A. As illustrated inFIG.5B, a binarized candidate region CR3is obtained thereby. The scale region extractor14cextracts contours from the binarized candidate region CR3. As illustrated inFIG.5C, a candidate region CR4in which the contours are enhanced is obtained thereby. The scale region extractor14csets multiple first subregions SRA as illustrated inFIG.6Afor the image in which the contours are extracted. The multiple first subregions SRA are arranged in a first direction D1. 
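As an aside on the type determination performed by the type determiner14b, one possible realization, assumed here purely for illustration, is to fit a straight line to the centroids of the numeral regions and compare the perpendicular residual with the overall spread: the numerals of a round meter lie on an arc and leave a large residual, while those of a vertical or horizontal meter are nearly collinear. The 5% ratio is an arbitrary cutoff, and at least three numeral regions are needed for the fit to be meaningful.

    import numpy as np

    def classify_meter(numeral_centers, curvature_ratio=0.05):
        # Fit a straight line to the numeral-region centroids (N >= 3 points);
        # a large perpendicular residual relative to the spread suggests an arc,
        # i.e. a round meter.  The 5% ratio is an arbitrary cutoff.
        pts = np.asarray(numeral_centers, dtype=float)
        pts = pts - pts.mean(axis=0)
        _, singular_values, _ = np.linalg.svd(pts, full_matrices=False)
        spread, residual = singular_values[0], singular_values[1]
        return "round" if residual > curvature_ratio * spread else "linear"
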
The length of the first subregion SRA in a second direction D2perpendicular to the first direction D1is, for example, equal to the length in the second direction D2of the candidate region CR4. For example, the number of the first subregions SRA that are set is determined based on the size of the candidate region. In the example ofFIG.6A, ten first subregions SRA1to SRA10are set. When the meter is round, the first direction D1corresponds to the diametrical direction before the polar transformation. The diametrical direction is the direction from the center of rotation of the meter or the graduation toward the graduations. The second direction D2corresponds to the circumferential direction. The circumferential direction is the direction in which the graduations are arranged. The scale region extractor14ccounts the number of horizontal lines in each of the first subregions SRA. Here, a horizontal line refers to a line extending in the first direction D1. FIG.6Bis an enlarged image of a portion P1illustrated inFIG.6A.FIG.6Cis an enlarged image of a portion P2illustrated inFIG.6A. Here, the black and white of the image ofFIG.6Aare inverted for convenience of description inFIGS.6B and6C. For example, as illustrated inFIGS.6B and6C, the scale region extractor14csets masks Ma in each of the first subregions SRA. For example, the masks Ma are set so that the length in the second direction D2of the regions not covered with the masks Ma has a specified value. The scale region extractor14cconfirms whether or not contours are in the regions not covered with the masks Ma. When a contour exists, the scale region extractor14cmeasures the length in the first direction D1of the contour. When the length in the first direction D1is greater than a prescribed threshold, the contour is determined to be a horizontal line. For example, in the example illustrated inFIG.6B, a contour E1between the masks Ma is not determined to be a horizontal line. In the example illustrated inFIG.6C, contours E2and E3between the masks Ma are determined to be horizontal lines. The scale region extractor14ccounts the number of horizontal lines in each of the first subregions SRA while changing the positions of the masks Ma. The scale region extractor14ctotals the number of horizontal lines in each of the first subregions SRA. For example, the result illustrated inFIG.6Dis obtained from the image illustrated inFIG.6A.FIG.6Dillustrates a total Sum of the number of horizontal lines at each of multiple points in the first direction D1. From this result, the scale region extractor14cdetermines the first subregions in which the total numbers of the horizontal lines are greater than the prescribed threshold to be areas (scale areas) in which the graduations of the meter exist. In the example ofFIG.6D, the scale region extractor14cdetermines the positions of the first subregions SRA8to SRA10shown by cross hatching to be the scale area. The scale region extractor14cextracts the scale area from the candidate region based on the determination result. For example, by this processing, a scale area SA illustrated inFIG.7Ais extracted from the image illustrated inFIG.5A. For example, as illustrated inFIG.7B, the scale region extractor14csets multiple second subregions SRB in the scale area SA in which the contours are extracted. The multiple second subregions SRB are arranged in the second direction D2. In the example ofFIG.7B, nine second subregions SRB1to SRB9are set. 
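The counting just described can be approximated by the simplified sketch below: contours of the polar-transformed edge image that are sufficiently long and elongated along D1 are treated as horizontal lines, each is assigned to the first subregion (here a strip along the image x-axis) containing its center, and strips whose totals exceed a threshold are flagged as the scale area. The axis convention, the elongation test, and the thresholds are assumptions; the description itself counts lines inside masked windows rather than per contour.

    import cv2
    import numpy as np

    def scale_area_strips(edge_image, n_strips=10, min_line_len=15, count_threshold=20):
        # Count contours elongated along D1 (here the x-axis, i.e. the radius
        # after the polar transform) per strip, and flag strips whose totals
        # exceed a threshold as the scale area.  Thresholds are illustrative.
        w = edge_image.shape[1]
        totals = np.zeros(n_strips, dtype=int)
        contours, _ = cv2.findContours(edge_image, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
        for contour in contours:
            x, y, cw, ch = cv2.boundingRect(contour)
            if cw >= min_line_len and cw >= 2 * ch:  # long and flat: a "horizontal line"
                strip = min((x + cw // 2) * n_strips // w, n_strips - 1)
                totals[strip] += 1                   # counted once, at its center strip
        return totals, totals > count_threshold
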
Similarly to the totaling calculation of the number of horizontal lines of each of the first subregions SRA described above, the scale region extractor14ctotals the number of horizontal lines for each of the second subregions SRB. For example, the result illustrated inFIG.7Cis obtained from the image illustrated inFIG.7B.FIG.7Cillustrates the total Sum of the number of horizontal lines for each of the second subregions SRB. From this result, the scale region extractor14cdetermines the second subregions in which the total numbers of the horizontal lines are greater than the threshold to be regions (scale regions) in which graduations of the meter exist. In the example ofFIG.7C, the scale region extractor14cdetermines the positions of the second subregions SRB1to SRB6and SRB9shown by cross hatching to be the scale region of the meter. The scale region extractor14cextracts the scale region from the scale area based on this result. By the processing described above, the region of the candidate region in which the graduations exist is designated. The scale region extractor14cextracts the scale region from the candidate region based on this result. For example, as illustrated inFIG.7D, a scale region R2is extracted from the candidate region CR1rotated by the numeral region extractor14a. The scale region extractor14coutputs the extracted scale region to the pointer region extractor14d. In the case where a polar transformation of the candidate region has been performed, the scale region extractor14calso may perform the following processing. The scale region extractor14cextracts the graduations from the scale region. For example, the scale region extractor14cidentifies the horizontal lines arranged in the second direction D2in the scale region to be graduations. As illustrated inFIG.8, the scale region extractor14ccalculates a distance D between the graduation Sc and the first direction D1end portion of the candidate region after the polar transformation for each of the graduations Sc. The distances are substantially equal when the center of the polar coordinate system matches the center of the actual round meter. Fluctuation of the distance indicates that the center of the polar coordinate system does not match the center of the actual round meter. For example, the scale region extractor14cacquires the maximum value of the distances, the minimum value of the distances, the position of the horizontal line at which the distance is the maximum, and the position of the horizontal line at which the distance is the minimum. At the position at which the distance is the minimum, the center of the actual round meter is more distant in the first direction D1than the center of the polar coordinate system. At the position at which the distance is the maximum, the center of the actual round meter is more proximate in the first direction D1than the center of the polar coordinate system. Based on such information, the scale region extractor14ccorrects the center of the polar coordinate system and re-performs a polar transformation of the candidate region. The scale region extractor14cre-extracts the scale region for the candidate region of the new polar transformation. The scale region can be more accurately extracted thereby. The reading accuracy of the indication of the meter can be increased. The processing of the scale region extractor14cdescribed inFIGS.5A to8is for a round meter. Processing similar to that described above is performed even when the meter is vertical or horizontal. 
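The center correction based on the fluctuation of the distance D can be realized, for example, by a small least-squares fit: when the assumed polar center is offset by a small vector from the true meter center, the tick radius varies approximately as r0 + a·cos θ + b·sin θ over the angle, and the fitted (a, b) points back toward the true center. This is only one possible realization under a small-offset assumption, not the exact correction used by the scale region extractor14c, and the angle convention must match that of the polar transform.

    import numpy as np

    def estimate_center_offset(angles_rad, tick_radii):
        # Fit r(theta) = r0 + a*cos(theta) + b*sin(theta) to the measured tick
        # radii.  For a small offset p of the assumed polar center from the true
        # meter center, r(theta) ~ R - p_x*cos(theta) - p_y*sin(theta), so the
        # fitted (a, b) equals -p: shifting the assumed center by (a, b) moves it
        # toward the true center (angles measured in the image coordinate system).
        theta = np.asarray(angles_rad, dtype=float)
        radii = np.asarray(tick_radii, dtype=float)
        design = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
        (_, a, b), *_ = np.linalg.lstsq(design, radii, rcond=None)
        return a, b
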
In other words, the scale region extractor14csets the multiple first subregions for a candidate region in which the graduations are arranged in one direction. The scale region extractor14cextracts the scale area from the candidate region based on the total number of the horizontal lines for each of the first subregions. The scale region extractor14csets the multiple second subregions for the scale area. The scale region extractor14cextracts the scale region from the scale area based on the total number of the horizontal lines for each of the second subregions. The pointer region extractor14dsets a detection region for detecting the pointer in the candidate region. When the meter is determined to be round by the type determiner14b, the pointer region extractor14dsets a circular pointer region. When the meter is determined to be vertical or horizontal, the pointer region extractor14dsets a rectangular pointer region. The pointer region extractor14ddetermines the position of the pointer based on information obtained from the detection region. The pointer region extractor14dextracts the pointer region based on the determination result of the pointer position in the detection regions while changing the size of the detection region. An example of the processing by the pointer region extractor14dwhen the meter is round will now be described. First, the pointer region extractor14dsets a circular detection region. The center of the detection region is set to the center of the candidate region. First, the diameter of the circle of the detection region is set to a predetermined value. The pointer region extractor14dperforms a polar transformation of the detection region. For example, as illustrated inFIG.9A, a circular detection region DR1is set in the candidate region.FIG.9Billustrates the result of performing the polar transformation of the detection region DR1illustrated inFIG.9Aand binarizing. The pointer region extractor14dcalculates the total of the luminances of the multiple pixels arranged in the first direction D1at each of multiple points in the second direction D2for the detection region of the polar transformation. By performing this processing for the binarized detection region, the number of white pixels arranged in the first direction D1is calculated at each of the multiple points in the second direction D2.FIG.9Cillustrates the relationship between the position P in the second direction D2and the total Sum of the luminances for the detection region illustrated inFIG.9B. For example, the pointer region extractor14ddetermines that the pointer exists at the second direction D2position at which the total of the luminances is a minimum. When the minimum values of the totals of the luminances are the same at multiple points, the pointer region extractor14dcompares a distribution range threshold and the distribution range of the positions at which the luminances are minimum values. When the distribution range is not more than the distribution range threshold, the pointer region extractor14duses the average position as the position of the pointer. As an example, the second direction D2distance that corresponds to an angle of 10 degrees in the polar coordinate system is set as the distribution range threshold. The pointer region extractor14ddetermines the position of the pointer to be undetected when the distribution range is greater than the distribution range threshold. 
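A compact sketch of the luminance-sum search is given below: the circular detection region is polar-transformed, the binarized luminance is summed along D1 (the radius) for each D2 position (the angle), and the angle with the smallest sum is taken as the pointer. A dark pointer on a bright dial and OpenCV's cv2.warpPolar are assumed; only the 10-degree tie window follows the example in the text, and the remaining constants and the function name are illustrative.

    import cv2
    import numpy as np

    def pointer_angle(candidate, center, radius, spread_threshold_deg=10):
        # Polar-transform a circular detection region, sum the binarized
        # luminance along the radius (D1) for each angle (D2), and take the
        # angle with the smallest sum as the pointer.  Assumes a dark pointer
        # on a bright dial; returns None when the minima are spread too widely.
        gray = cv2.cvtColor(candidate, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
        polar = cv2.warpPolar(binary, (int(radius), 360), center, radius,
                              cv2.WARP_POLAR_LINEAR)   # rows = angle, columns = radius
        sums = polar.astype(np.float32).sum(axis=1)    # total luminance per angle
        minima = np.flatnonzero(sums == sums.min())
        if np.ptp(minima) > spread_threshold_deg:      # the 10-degree window of the text
            return None                                # treated as "undetected"
        return float(minima.mean())                    # one row per degree here
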
When the pointer position is determined in the detection region that is set, the pointer region extractor14dchanges the size of the detection region. For example, when a small detection region is set first, the pointer region extractor14dincreases the detection region.FIG.10Aillustrates another example of the detection region. The diameter of a detection region DR2illustrated inFIG.10Ais greater than the diameter of the detection region DR1illustrated inFIG.9A.FIG.10Billustrates a result of performing a polar transformation of the detection region DR2illustrated inFIG.10Aand binarizing. Similarly for the detection region DR2illustrated inFIG.10B, the pointer region extractor14dcalculates the total of the luminances of the multiple pixels arranged in the first direction D1at each of multiple points in the second direction D2.FIG.10Cillustrates the relationship between the position P in the second direction D2and the total Sum of the luminances for the detection region illustrated inFIG.10B. The pointer region extractor14drepeats the modification of the size of the detection region and the determination of the pointer position in the detection region described above. This processing is repeated until the size of the detection region satisfies a prescribed condition. For example, this processing is repeated until the detection region reaches the scale region. In a typical round meter, at least a portion of the pointer exists inward of the scale region. If the detection region reaches the scale region, at least a portion of the pointer exists inside the detection region. Or, the processing described above may be repeated until the size of the detection region reaches a designated value calculated based on the size of the candidate region. FIG.11illustrates a result obtained by repeating the modification of the size of the detection region and the determination of the pointer position in the detection region. InFIG.11, the horizontal axis is a size S of the detection region, and the vertical axis is the position P of the pointer in the second direction D2. When the meter is round, the size of the detection region corresponds to the diameter (the radius or the diameter). The position in the second direction D2corresponds to the angle. From the result of the change of the size of the detection region and the pointer position, the pointer region extractor14ddetermines a first range and a second range for the size of the detection region. The change of the pointer position is small when the size is within the first range. The change of the pointer position is large when the size is within the second range. For example, the pointer region extractor14dcalculates the proportion of the change of the pointer position with respect to the change of the size at each of multiple points of the horizontal axis for the graph illustrated inFIG.11. The pointer region extractor14dextracts, as the first range, a continuous portion in which the proportion is not more than the first threshold. The pointer region extractor14dextracts, as the second range, a continuous portion in which the proportion is greater than the second threshold. The second threshold is greater than the first threshold. FIG.11illustrates an example of a first range Ra1and a second range Ra2. Typically, in a round meter, one end of the pointer is proximate to the graduations, and the other end of the pointer protrudes to the opposite side of the center of rotation. 
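The determination of the first and second ranges can be sketched as follows, using simple per-step slope thresholds on the recorded (size, pointer angle) pairs and taking the longest run of each kind; the two threshold values and that choice are assumptions, since the description only requires comparing the proportion of change against a first and a second threshold.

    import numpy as np

    def stable_and_unstable_ranges(sizes, angles, stable_slope=0.5, unstable_slope=2.0):
        # slope = |change of pointer angle| / |change of detection-region size|.
        # The longest run with slope <= stable_slope approximates the first range,
        # the longest run with slope > unstable_slope the second range.  The two
        # thresholds are assumptions; sizes must be strictly increasing.
        sizes = np.asarray(sizes, dtype=float)
        angles = np.asarray(angles, dtype=float)
        slope = np.abs(np.diff(angles)) / np.diff(sizes)

        def longest_run(mask):
            best, start, best_span = None, None, 0
            for i, flag in enumerate(np.append(mask, False)):
                if flag and start is None:
                    start = i
                elif not flag and start is not None:
                    if i - start > best_span:
                        best, best_span = (start, i), i - start
                    start = None
            return best

        def run_to_sizes(run):
            return None if run is None else (float(sizes[run[0]]), float(sizes[run[1]]))

        first = run_to_sizes(longest_run(slope <= stable_slope))
        second = run_to_sizes(longest_run(slope > unstable_slope))
        return first, second
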
When the detection region is small as illustrated inFIGS.9B and9C, the total value of the luminances is large at both the position at which the one end of the pointer exists and the position at which the other end exists. Therefore, it is not easy to identify the one end of the pointer with high accuracy. As a result, the change of the pointer position is large as in the second range Ra2illustrated inFIG.11. On the other hand, when the detection region is large as illustrated inFIGS.10B and10C, the total of the luminances at the position at which the one end of the pointer exists is greater than the total of the luminances at the position at which the other end exists. Thereby, the one end of the pointer can be discriminated from the other end, and the change of the pointer position is reduced. The pointer region extractor14ddetermines the size of the pointer region based on the upper limit of the size (the diameter) of the first range. Also, the pointer region extractor14dmay determine the size of the pointer region based on the upper limit of the size in the second range or the lower limit of the size (the diameter) in the first range. For example, when the meter is round, a circular-ring shaped pointer region R3is extracted as illustrated inFIG.12. An inner diameter IR of the pointer region R3is set based on the upper limit of the size in the second range or the lower limit of the size in the first range. An outer diameter OR of the pointer region R3is set based on the upper limit of the length in the first range. Namely, the pointer region is set so that the length does not include a detection region within the second range. The accuracy of the reading of the indication can be increased thereby. By the processing described above, the numeral region, the scale region, and the pointer region are extracted from the candidate region. The extractor14outputs the extracted regions to the reader15. The reader15reads the indication of the meter by using the numeral region, the scale region, and the pointer region extracted by the extractor14. Specifically, the reader15includes a graduation recognizer15a, a numeral recognizer15b, a pointer recognizer15c, a graduation joiner15d, and a calculator15e. The graduation recognizer15arecognizes the graduations of the meter based on the luminance difference in the scale region extracted by the extractor14. For example, the graduation recognizer15asets a reference line and calculates the angles between the reference line and the graduations. The numeral recognizer15brecognizes a numeral in the numeral region extracted by the extractor14. The pointer recognizer15cdetects the angle between the reference line and the pointer based on information of the pointer region extracted by the extractor14. The graduation joiner15dassociates the graduations recognized by the graduation recognizer15aand the numerals recognized by the numeral recognizer15b. The calculator15ecalculates the indication of the meter based on the angles of the graduations, correspondence information between the graduations and the numerals, and the angle of the pointer. The reader15transmits the calculated indication to the outputter16. For example, the outputter16outputs information based on the calculated indication to an external output device. For example, the information includes the indication that is read. The information may include a result calculated based on the indication that is read. 
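The calculator15ecan be pictured as a simple interpolation once the graduation joiner15dhas paired graduation angles with numerals: find the two labeled graduations that bracket the pointer angle and interpolate linearly between their values. The helper below is a sketch under that assumption; its name is illustrative, and pointer angles outside the labeled span are simply clamped to the nearest labeled graduation.

    import bisect

    def read_indication(pointer_angle_deg, labeled_graduations):
        # labeled_graduations: (angle_deg, value) pairs from the graduation
        # joiner, assumed sorted by angle; angles outside the labeled span are
        # clamped to the first or last graduation for simplicity.
        angles = [angle for angle, _ in labeled_graduations]
        values = [value for _, value in labeled_graduations]
        if pointer_angle_deg <= angles[0]:
            return values[0]
        if pointer_angle_deg >= angles[-1]:
            return values[-1]
        i = bisect.bisect_right(angles, pointer_angle_deg)
        a0, a1 = angles[i - 1], angles[i]
        v0, v1 = values[i - 1], values[i]
        return v0 + (v1 - v0) * (pointer_angle_deg - a0) / (a1 - a0)

    # Example: graduations at 45, 135, 225, 315 degrees labeled 0, 10, 20, 30:
    # read_indication(180.0, [(45, 0), (135, 10), (225, 20), (315, 30)]) -> 15.0
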
The outputter16may calculate another value based on the multiple indications that are read and may output the calculation result. The outputter16also may output information such as the time of the reading, etc. Or, the outputter16may output a file including the information such as the indication numeral that is read, the time of the reading, etc., in a prescribed format such as CSV, etc. The outputter16may transmit the data to an external server by using FTP (File Transfer Protocol), etc. Or, the outputter16may insert the data into an external database server by performing database communication and using ODBC (Open Database Connectivity), etc. The processing device10includes, for example, a processing circuit made of a central processing unit. The memory device20includes, for example, at least one of a hard disk drive (HDD), a network-attached hard disk (NAS), an embedded multimedia card (eMMC), a solid-state drive (SSD), or a solid-state hybrid drive (SSHD). The processing device10and the memory device20are connected by a wired or wireless technique. Or, the processing device10and the memory device20may be connected to each other via a network. FIGS.13to16are flowcharts illustrating the processing according to the reading support system according to the first embodiment. As illustrated inFIG.13, the acceptor11accepts the input image (step S11). The contour extractor12aextracts contours from the input image (step S12a). The selector12bselects, as candidate regions, a portion of the regions surrounded with the contours that satisfy a condition (step S12b). The corrector13corrects the candidate region by performing a projective transformation (step S13). The numeral region extractor14aextracts multiple numeral regions from the candidate region (step S14a). The type determiner14bdetermines the type of the meter based on the positions of the multiple numeral regions (step S14b). The scale region extractor14cextracts the scale region from the candidate region (step S14c). The pointer region extractor14dextracts the pointer region from the candidate region (step S14d). The reader15reads the indication of the pointer (step S15). The outputter16outputs information based on the reading result (step S16). FIG.14is a flowchart specifically illustrating the processing of step S14aperformed by the numeral region extractor14a. The numeral region extractor14aextracts character candidates from the candidate region (step S14a1). The numeral region extractor14arotates one of the multiple character candidates (step S14a2). The numeral region extractor14acalculates a match rate for the rotated character candidate (step S14a3). The numeral region extractor14arecords the calculated match rate (step S14a4). The numeral region extractor14adetermines whether or not the rotation count of the character candidate is not more than a threshold (step S14a5). Steps S14a2to S14a4are repeated when the rotation count is not more than the threshold. When the rotation count is greater than the threshold, the numeral region extractor14adetermines whether or not the change amount of the match rate is not more than a threshold (step S14a6). The flow proceeds to step S14a8when the change amount of the match rate is not more than the threshold. When the change amount of the match rate is greater than the threshold, the numeral region extractor14adetermines a minimum area surrounding the numeral included in the character candidate (step S14a7). 
The numeral region extractor14adetermines whether or not there is another character candidate for which the numeral recognition has not been tried (step S14a8). When there is another character candidate, the numeral region extractor14aperforms step S14a2for the other character candidate. When there is no other character candidate, the numeral region extractor14aextracts the minimum areas determined up to that point as the numeral regions from the candidate region (step S14a9). FIG.15is a flowchart specifically illustrating the processing of step S14cperformed by the scale region extractor14c. The scale region extractor14crefers to the type of the meter determined by the type determiner14band determines whether or not the meter is round (step S14c1). The flow proceeds to step S14c3when the meter is not round. When the meter is round, the scale region extractor14cperforms a polar transformation of the candidate region (step S14c2). The scale region extractor14csets multiple subregion columns for the candidate region (step S14c3). The scale region extractor14cdetects horizontal lines included in the candidate region (step S14c4). The scale region extractor14ccalculates the total of the numbers of horizontal lines in the subregion columns (step S14c5). The scale region extractor14cmay perform steps S14c6and S14c7. The scale region extractor14cuses the maximum value of the totals of the numbers of horizontal lines as a score, and determines whether or not the score is greater than a threshold (step S14c6). When the score is not more than the threshold, there is a possibility that the horizontal lines cannot be appropriately detected. The scale region extractor14cmodifies the detection condition of the horizontal lines and re-performs step S14c4. When the score is greater than the threshold, the scale region extractor14cdetermines whether or not the resolution is not more than a threshold (step S14c7). For example, the resolution is represented by the proportion of the size of one first subregion to the size of the entire extraction region. When the resolution is greater than the threshold, the scale region extractor14cmodifies the setting condition of the first subregion and re-performs step S14c3. When the resolution is not more than the threshold, the scale region extractor14cextracts the scale area from the candidate region based on the total of the numbers of horizontal lines in each of the first subregions (step S14c8). The scale region extractor14csets multiple second subregions for the scale area (step S14c9). The scale region extractor14cdetects horizontal lines included in the scale area (step S14c10). The scale region extractor14ccalculates the total of the numbers of horizontal lines in the second subregion (step S14c11). The scale region extractor14cmay perform steps S14c12and S14c13. The scale region extractor14cuses the maximum value of the totals of the numbers of horizontal lines as a score, and determines whether or not the score is greater than a threshold (step S14c12). When the score is not more than the threshold, there is a possibility that the horizontal lines cannot be appropriately detected. The scale region extractor14cmodifies the detection condition of the horizontal lines and re-performs step S14c10. When the score is greater than the threshold, the scale region extractor14cdetermines whether or not the resolution is not more than the threshold (step S14c13). 
When the resolution is greater than the threshold, the scale region extractor14cmodifies the setting condition of the second subregion and re-performs step S14c9. When the resolution is not more than the threshold, the scale region extractor14cextracts the scale region from the scale area based on the total of the numbers of horizontal lines in each of the second subregions (step S14c14). FIG.16is a flowchart specifically illustrating the processing of step S14dperformed by the pointer region extractor14d. The pointer region extractor14dsets a detection region in the candidate region (step S14d1). The pointer region extractor14ddetermines the position of the pointer in the detection region that is set (step S14d2). The pointer region extractor14drecords the determined position of the pointer (step S14d3). The pointer region extractor14ddetermines whether or not the detection region satisfies a condition (step S14d4). For example, the pointer region extractor14ddetermines whether or not the detection region reaches (overlaps) the scale region. When the detection region does not satisfy the condition, the pointer region extractor14dmodifies the length in a designated direction of the detection region (step S14d5). The pointer region extractor14dre-performs step S14d2based on the detection region having the modified length. When the detection region satisfies the condition, the pointer region extractor14dextracts the pointer region from the candidate region based on the relationship between the change of the length and the change of the pointer position (step S14d6). Effects of the first embodiment will now be described. When reading the indication of the meter from the image, processing that corresponds to the type of the meter is performed on the image. This is because the arrangement of the graduations is different according to the type of the meter as illustrated inFIGS.2A to2D. By performing the processing corresponding to the type of the meter, the accuracy of the reading of the indication can be increased. For example, a method in which the type of the meter to be imaged is preregistered may be considered to perform the processing corresponding to the type of the meter. The processing device10determines the processing to be performed by referring to the registered type of the meter. However, in this method, the indication cannot be appropriately read if the type of the meter is not preregistered. When sequentially reading indications from images of multiple mutually-different meters, etc., it is necessary to associate the meters imaged in the images and the types of the meters, and a long period of time is necessary for the setting beforehand. In the reading support system1according to the first embodiment, first, the numeral region extractor14aextracts numeral regions including multiple numerals of the meter from the candidate region. Then, the type determiner14bdetermines the type of the meter based on the positions of the multiple numeral regions. In other words, according to the reading support system1according to the first embodiment, the type of the meter is determined automatically from the image. By using the reading support system1, it is unnecessary for the user to preregister the type of the meter for reading the indication. Other effects of the first embodiment will now be described. The numeral region, the scale region, and the pointer region are extracted from the candidate region when reading the indication of the meter from the image. 
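For completeness of the pointer-related flow of FIG.16described above, the loop of steps S14d1to S14d5can be summarized by the small driver below, which repeatedly measures the pointer position while growing the detection region until a stopping size is reached (for example, where the region would reach the scale region); the resulting (size, angle) series is what the range analysis sketched earlier operates on. The callable argument, the fixed step, and the function name are illustrative choices rather than details of the disclosure.

    def scan_detection_sizes(measure_angle, start_radius, stop_radius, step):
        # Repeat the measurement while growing the detection region (S14d1-S14d5).
        # measure_angle is a callable taking a radius and returning the pointer
        # angle in degrees or None, e.g. a closure around the pointer_angle()
        # sketch above; stop_radius would typically be where the detection
        # region reaches the scale region.
        sizes, angles = [], []
        radius = start_radius
        while radius <= stop_radius:
            angle = measure_angle(radius)
            if angle is not None:       # skip "undetected" results
                sizes.append(radius)
                angles.append(angle)
            radius += step
        return sizes, angles
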
Then, the indication of the meter is read based on these extracted regions. At this time, it is desirable for the numeral region, the scale region, or the pointer region to be more appropriately extracted. By more appropriately extracting the numeral region, the scale region, or the pointer region, the indication can be read with higher accuracy. For example, there is a method in which the position of the meter where the graduations exist is preregistered, and the scale region is extracted from the candidate region based on the registered information and the luminance of the image. However, in this method, the scale region cannot be appropriately extracted when the actual scale region is different from the registered position. When sequentially reading the indications from the images of multiple mutually-different meters, etc., it is necessary to associate the meters of the images and the positions of the graduations in the meters, and a long period of time is necessary for the setting beforehand. In the reading support system1according to the first embodiment, the scale region extractor14cperforms the following processing. The scale region extractor14csets multiple first subregions in the second direction in the candidate region so that multiple first subregions are arranged in the first direction. The scale region extractor14cdetects the number of line segments extending in the second direction for each of the first subregions. The scale region extractor14cextracts a portion in the second direction of the candidate region as the scale area in which the graduations of the meter exist based on the detected numbers of the line segments. According to the processing, it is unnecessary to preregister the position of the scale region of the meter, and the area in which graduations exist can be extracted more appropriately from the candidate region. Therefore, it is unnecessary for the user to preregister the positions of the graduations of the meters for reading the indication. By more appropriately extracting the scale area, the accuracy of the reading of the indication using the scale area can be increased. In the reading support system1, the scale region extractor14calso performs the following processing. The scale region extractor14csets multiple second subregions in the first direction for the scale area so that the multiple second subregions are arranged in the second direction. The scale region extractor14cdetects the number of line segments extending in the second direction for each of the second subregions. Based on the detected numbers of the line segments, the scale region extractor14cextracts a portion in the first direction of the scale area as the scale region in which the graduations of the meter exist. By performing such processing after extracting the scale area, the region in which the graduations exist can be extracted with higher accuracy from the scale area. For example, by using the scale region to read the indication, the accuracy of the reading can be increased. In the reading support system1according to the first embodiment, the pointer region extractor14dperforms the following processing. When the type of the meter is round, the pointer region extractor14dsets a circular detection region for detecting the pointer of the meter in the candidate region. 
The pointer region extractor14dchanges the size of the detection region and determines the angle of the pointer in the detection region of each size, and extracts the pointer region in which the pointer exists from the candidate region based on the result of the determination. According to the processing, it is unnecessary to preregister the position of the pointer region of the meter, and the pointer region can be extracted more appropriately from the candidate region. In particular, the pointer region extractor14ddetermines the first and second ranges for the size from the result of the change of the angle with respect to the change of the size. Then, the pointer region extractor14dextracts a circular-ring shaped pointer region having an outer diameter based on the upper limit of the first range and an inner diameter based on the upper limit of the second range. According to this processing, the regions that have little contribution to the recognition of the pointer can be excluded from the pointer region. The accuracy of the reading can be increased by using the pointer region extracted by this processing to read the indication. Second Embodiment FIG.17is a block diagram illustrating a configuration of a reading support system according to a second embodiment. The reading support system2according to the second embodiment further includes an imaging device30. The imaging device30generates an image by imaging the meter. The imaging device30transmits the generated image to the processing device10. Or, the imaging device30may store the image in the memory device20. The processing device10accesses the memory device20and refers to the stored image. When the imaging device30acquires a video image, the imaging device30extracts a static image from the video image and transmits the static image to the processing device10. The imaging device30includes, for example, a camera. The processing device10transmits, to an output device40, information based on characters that are identified and read. The output device40outputs the information received from the processing device10so that the user can recognize the information. The output device40includes, for example, at least one of a monitor, a printer, or a speaker. For example, the processing device10, the memory device20, the imaging device30, and the output device40are connected to each other by a wired or wireless technique. Or, these devices may be connected to each other via a network. Or, two or more of the processing device10, the memory device20, the imaging device30, or the output device40may be embedded in one device. For example, the processing device10may be embedded in an integral body with the image processor of the imaging device30, etc. Third Embodiment FIG.18is a block diagram illustrating a configuration of a reading support system according to a third embodiment. The reading support system3according to the third embodiment further includes a moving body50. The moving body50moves through a prescribed area. A meter is provided inside the area through which the moving body50moves. The moving body50is, for example, an automated guided vehicle (AGV). The moving body50may be a flying object such as a drone, etc. The moving body50may be an independent walking robot. The moving body50may be an unmanned forklift, crane, or the like that performs a prescribed operation. For example, the processing device10and the imaging device30are mounted to the moving body50. 
The processing device10may be provided separately from the moving body50and may be connected to the moving body50via a network. When the moving body50moves to a position where the meter is imageable, the imaging device30generates an image by imaging the meter. As illustrated inFIG.18, the reading support system3may further include an acquisition device60. The acquisition device60is mounted to the moving body50. For example, an identifier that includes unique identification information corresponding to the meter is provided. The acquisition device60acquires the identification information of the identifier. As illustrated inFIG.18, the reading support system3may further include a control device70. The control device70controls the moving body50. The moving body50moves through the prescribed area based on a command transmitted from the control device70. The control device70may be mounted to the moving body50or may be provided separately from the moving body50. The control device70includes, for example, a processing circuit made of a central processing unit. One processing circuit may function as both the processing device10and the control device70. For example, the identifier is a radio frequency (RF) tag including ID information. The identifier emits an electromagnetic field or a radio wave including the ID information. The acquisition device60acquires the ID information by receiving the electromagnetic field or the radio wave emitted from the identifier. Or, the identifier may be a one-dimensional or two-dimensional barcode. The acquisition device60may be a barcode reader. The acquisition device60acquires the identification information of the barcode by reading the barcode. As illustrated inFIG.18, the processing device10may further include an associator17. For example, when acquiring the identification information, the acquisition device60transmits the identification information to the processing device10. The associator17associates the transmitted identification information and the characters that are read. The associated information is stored in the memory device20. FIG.19is a schematic view describing an operation of the reading support system according to the third embodiment. For example, the moving body50is a moving body moving along a prescribed trajectory T. The imaging device30and the acquisition device60are mounted to the moving body50. The processing device10may be mounted to the moving body50or may be provided separately from the moving body50. The trajectory T is provided so that the moving body50passes in front of meters M11and M12. For example, the moving body50moves along the trajectory T and decelerates or stops when arriving at a position where the meter M11or M12is imageable by the imaging device30. For example, when decelerating or stopping, the moving body50transmits an imaging command to the imaging device30. Or, the imaging command may be transmitted to the imaging device30from the control device70. When receiving the command, the imaging device30images the meter M11or M12while the moving body50has decelerated or stopped. Or, the moving body50moves along the trajectory T at a speed such that the imaging device30can image the meter M11or M12without blur. When the position where the meter M11or M12is imageable by the imaging device30is reached, the imaging command is transmitted from the moving body50or the control device described above. When receiving the command, the imaging device30images the meter M11or M12. 
When the image has been generated by imaging, the imaging device30transmits the image to the processing device10mounted to the moving body50or provided separately from the moving body50. An identifier ID1is provided at the meter M11vicinity. An identifier ID2is provided at the meter M12vicinity. For example, the acquisition device60acquires the identification information of the identifier ID1or ID2while the moving body50has decelerated or stopped. For example, the moving body50moves in front of the meter M11. The imaging device30generates an image by imaging the meter M11. The processing device10identifies the characters displayed by the meter M11from the image. The acquisition device60acquires the identification information of the identifier ID1corresponding to the meter M11. The processing device10associates the identification information and the identified characters. The processing of the processing device10is particularly favorable when the imaging device30is mounted to the moving body50and sequentially images multiple meters as illustrated inFIG.19. Because the type of the meter is determined automatically by the numeral region extractor14aand the type determiner14b, it is unnecessary for the user to preregister the type of the meter. There is also a possibility that fluctuation of the image may occur according to the state of the moving body50. The numeral region, the scale region, and the pointer region are extracted automatically from the candidate region by the numeral region extractor14a, the scale region extractor14c, and the pointer region extractor14daccording to the processing by the processing device10even when fluctuation occurs in the image. Therefore, the accuracy of the reading of the indication can be increased. FIG.20is a block diagram illustrating a hardware configuration of the reading support systems according to the embodiments. For example, the processing device10of the reading support systems1to3is a computer and includes ROM (Read Only Memory)10a, RAM (Random Access Memory)10b, a CPU (Central Processing Unit)10c, and a HDD (Hard Disk Drive)10d. The ROM10astores programs controlling the operations of the computer. The ROM10astores programs necessary for causing the computer to function as the controller10. The RAM10bfunctions as a memory region where the programs stored in the ROM10aare loaded. The CPU10cincludes a processing circuit. The CPU10creads a control program stored in the ROM10aand controls the operation of the computer according to the control program. The CPU10cloads various data obtained by the operation of the computer into the RAM10b. The HDD10dstores information necessary for reading and information obtained in the reading process. For example, the HDD10dfunctions as the memory device20illustrated inFIG.1. Instead of the HDD10d, the controller10may include an eMMC (embedded Multi Media Card), a SSD (Solid State Drive), a SSHD (Solid State Hybrid Drive), etc. An input device10eand an output device10fmay be connected to the controller10. The user uses the input device10eto input information to the controller10. The input device10eincludes at least one of a mouse, a keyboard, a microphone (audio input), or a touchpad. Information that is transmitted from the controller10is output to the output device10f. The output device10fincludes at least one of a monitor, a speaker, a printer, or a projector. A device such as a touch panel that functions as both the input device10eand the output device10fmay be used. 
A hardware configuration similar toFIG.20is applicable also to the control device70of the reading support system3. Or, one computer may function as the processing device10and the control device70in the reading support system3. The processing and the functions of the processing device10and the control device70may be realized by collaboration between more computers. The embodiments may include the following configurations.Configuration 1A moving body moving through a prescribed area, the moving body comprising:an imaging device acquiring an image by imaging a meter; anda processing device receiving an input of the image,the processing device includingan extractor extracting a candidate region from the image, the candidate region being a candidate of a region in which a meter is imaged, anda scale region extractor that:sets multiple subregion columns in a second direction perpendicular to a first direction for a candidate region, the candidate region being a candidate of a region in which a meter is imaged, each of the subregion columns including a plurality of subregions arranged in the first direction;detects a number of line segments extending in the second direction for each of the subregions; andextracts, based on the detected numbers of the line segments, a portion in the second direction of the candidate region as a scale area in which a graduation of the meter exists.Configuration 2The moving body according to Configuration 1, whereinthe scale region extractor:sets multiple subregion rows in the first direction for the scale area, each of the subregion rows including a plurality of second subregions arranged in the second direction;detects a number of line segments extending in the second direction for each of the second subregions; andextracts, based on the detected numbers of the line segments, a portion in the first direction of the scale area as a scale region in which a graduation of the meter exists.Configuration 3A reading support method, comprising:extracting a plurality of characters from a candidate region, the candidate region being a candidate of a region in which a meter is imaged;calculating match rates between a numeral and each of the plurality of characters while rotating the plurality of characters;determining, as a plurality of numerals of the meter, at least a portion of the plurality of characters of which the match rate is not less than a prescribed threshold;extracting a plurality of numeral regions respectively including the plurality of numerals from the candidate region; anddetermining a type of the meter based on positions of the plurality of numeral regions.Configuration 4A reading support method, comprising:setting a plurality of subregion columns in a second direction perpendicular to a first direction for a candidate region, the candidate region being a candidate of a region in which a meter is imaged, each of the subregion columns including a plurality of first subregions arranged in the first direction;detecting a number of line segments extending in the second direction for each of the first subregions; andextracting, based on the detected numbers of the line segments, a portion in the second direction of the candidate region as a scale area in which a graduation of the meter exists.Configuration 5A storage medium storing a program causing a processing device to function as:a numeral region extractor extracting a plurality of numeral regions from a candidate region, the candidate region being a candidate of a region in which a meter is imaged, the 
plurality of numeral regions respectively including a plurality of numerals of the meter; anda type determiner determining a type of the meter based on positions of the plurality of numeral regions.Configuration 6A storage medium storing a program causing a processing device to function as a scale region extractor that:sets a plurality of subregion columns in a second direction perpendicular to a first direction for a candidate region, the candidate region being a candidate of a region in which a meter is imaged, each of the subregion columns including a plurality of first subregions arranged in the first direction;detects a number of line segments extending in the second direction for each of the first subregions; andextracting, based on the detected numbers of the line segments, a portion in the second direction of the candidate region as a scale area in which a graduation of the meter exists. By using the reading support system, the reading support method, and the moving body according to the embodiments described above, the numerals displayed by the meter can be read with higher accuracy. Similarly, by using a program for causing a computer to operate as the reading support system, the numerals displayed by the meter can be read by the computer with higher accuracy. For example, the processing of the various data recited above is executed based on a program (software). For example, the processing of the various information recited above is performed by a computer storing the program and reading the program. The processing of the various information recited above may be recorded in a magnetic disk (a flexible disk, a hard disk, etc.), an optical disk (CD-ROM, CD-R, CD-RW, DVD-ROM, DVD±R, DVD±RW, etc.), semiconductor memory, or another recording medium as a program that can be executed by a computer. For example, the information that is recorded in the recording medium can be read by a computer (or an embedded system). The recording format (the storage format) of the recording medium is arbitrary. For example, the computer reads the program from the recording medium and causes a CPU to execute the instructions recited in the program based on the program. The acquisition (or the reading) of the program by the computer may be performed via a network. The processing device and the control device according to the embodiments include one or multiple devices (e.g., personal computers, etc.). The processing device and the control device according to the embodiments may include multiple devices connected by a network. While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the invention. The above embodiments can be practiced in combination with each other. | 62,287 |
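As an illustration of the scale-region extraction summarized in Configurations 2, 4, and 6 above, the following minimal sketch divides a candidate region into a grid of subregions, counts candidate line segments extending in the second direction in each subregion, and keeps the band richest in such segments as the scale area. The segment counter is a naive dark-run heuristic standing in for whatever detector an embodiment actually uses; the grid sizes, thresholds, and function names are assumptions.

```python
import numpy as np

def count_segments(sub, dark_thresh=128, min_len=5):
    """Naive proxy for 'line segments extending in the second direction':
    count columns of the subregion containing at least min_len dark pixels."""
    dark = sub < dark_thresh
    runs = dark.sum(axis=0)            # dark pixels per column
    return int((runs >= min_len).sum())

def extract_scale_band(candidate, n_rows=8, n_cols=8):
    """Split the candidate region (2-D grayscale array) into an n_rows x n_cols
    grid of subregions, count segments in each, and return the row band
    (portion in the second direction) with the highest total count."""
    h, w = candidate.shape
    counts = np.zeros((n_rows, n_cols), dtype=int)
    for i in range(n_rows):
        for j in range(n_cols):
            sub = candidate[i * h // n_rows:(i + 1) * h // n_rows,
                            j * w // n_cols:(j + 1) * w // n_cols]
            counts[i, j] = count_segments(sub)
    band = int(np.argmax(counts.sum(axis=1)))  # row band richest in tick marks
    return candidate[band * h // n_rows:(band + 1) * h // n_rows, :], counts
```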
11861894 | DETAILED DESCRIPTION OF THE INVENTION The inventor has conceived, and reduced to practice, a target custody platform comprising a data acquisition engine, a data analysis engine, a machine learning engine, and a data presentation layer configured to task a plurality of satellites for imagery data wherein the imagery data and metadata are used in conjunction with other types of data including identification data and weather data as inputs into one or more machine and/or deep learning algorithms configured to predict the likelihood that a target of interest will travel along a projected path. One or more different aspects may be described in the present application. Further, for one or more of the aspects described herein, numerous alternative arrangements may be described; it should be appreciated that these are presented for illustrative purposes only and are not limiting of the aspects contained herein or the claims presented herein in any way. One or more of the arrangements may be widely applicable to numerous aspects, as may be readily apparent from the disclosure. In general, arrangements are described in sufficient detail to enable those skilled in the art to practice one or more of the aspects, and it should be appreciated that other arrangements may be utilized and that structural, logical, software, electrical and other changes may be made without departing from the scope of the particular aspects. Particular features of one or more of the aspects described herein may be described with reference to one or more particular aspects or figures that form a part of the present disclosure, and in which are shown, by way of illustration, specific arrangements of one or more of the aspects. It should be appreciated, however, that such features are not limited to usage in the one or more particular aspects or figures with reference to which they are described. The present disclosure is neither a literal description of all arrangements of one or more of the aspects nor a listing of features of one or more of the aspects that must be present in all arrangements. Headings of sections provided in this patent application and the title of this patent application are for convenience only, and are not to be taken as limiting the disclosure in any way. Devices that are in communication with each other need not be in continuous communication with each other, unless expressly specified otherwise. In addition, devices that are in communication with each other may communicate directly or indirectly through one or more communication means or intermediaries, logical or physical. A description of an aspect with several components in communication with each other does not imply that all such components are required. To the contrary, a variety of optional components may be described to illustrate a wide variety of possible aspects and in order to more fully illustrate one or more aspects. Similarly, although process steps, method steps, algorithms or the like may be described in a sequential order, such processes, methods and algorithms may generally be configured to work in alternate orders, unless specifically stated to the contrary. In other words, any sequence or order of steps that may be described in this patent application does not, in and of itself, indicate a requirement that the steps be performed in that order. The steps of described processes may be performed in any order practical. 
Further, some steps may be performed simultaneously despite being described or implied as occurring non-simultaneously (e.g., because one step is described after the other step). Moreover, the illustration of a process by its depiction in a drawing does not imply that the illustrated process is exclusive of other variations and modifications thereto, does not imply that the illustrated process or any of its steps are necessary to one or more of the aspects, and does not imply that the illustrated process is preferred. Also, steps are generally described once per aspect, but this does not mean they must occur once, or that they may only occur once each time a process, method, or algorithm is carried out or executed. Some steps may be omitted in some aspects or some occurrences, or some steps may be executed more than once in a given aspect or occurrence. When a single device or article is described herein, it will be readily apparent that more than one device or article may be used in place of a single device or article. Similarly, where more than one device or article is described herein, it will be readily apparent that a single device or article may be used in place of the more than one device or article. The functionality or the features of a device may be alternatively embodied by one or more other devices that are not explicitly described as having such functionality or features. Thus, other aspects need not include the device itself. Techniques and mechanisms described or referenced herein will sometimes be described in singular form for clarity. However, it should be appreciated that particular aspects may include multiple iterations of a technique or multiple instantiations of a mechanism unless noted otherwise. Process descriptions or blocks in figures should be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process. Alternate implementations are included within the scope of various aspects in which, for example, functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those having ordinary skill in the art. Conceptual Architecture FIG.1is a block diagram illustrating an exemplary system architecture for a target custody platform100, according to an embodiment. According to the embodiment, target custody platform100comprises a data acquisition engine110, a data analytics engine200, a data presentation layer120, one or more databases130, and a machine learning engine300. In some implementations, target custody platform may be configured to operate on a computing device such as, for example, a server. In some implementations, target custody platform100may be configured as a cloud-based system and/or service accessible via a suitable network connection such as, for example, the Internet via a purpose-built software application or web application/interface. In some embodiments, platform100and one or more of its components may be operating on a single computing device or multiple computing devices communicatively coupled via a suitable network connection known to those skilled in the arts (i.e., local area network, wireless network, etc.). 
According to various embodiments, target custody platform100can provide improved target tracking by leveraging machine learning and artificial intelligence for modeling target navigation channels (i.e., trajectory) with confidence intervals. In some implementations, the modeling methods can include an initial understanding of the confidence surrounding the expected trajectory, or navigation channels, of a target by category. Data acquisition engine110of target custody platform100can be configured to receive, retrieve, or otherwise obtain a plurality of data and information such as the exemplary information sources150. Target custody platform100may interface with various information sources150using a suitable network connection. In some embodiments, data acquisition engine110may utilize various mechanisms, systems, or schemes for acquiring data. For example, data acquisition engine110may utilize one or more application programming interfaces (APIs) or API connectors configured to obtain data from a plurality of information sources150. In one embodiment configured for tracking sailing vessels, an API may be used to query automated identification system (AIS) data152from a database which stores AIS data. As another example, an API may be used to acquire cloud cover data for a given location from one or more weather data sources such as, for example, National Oceanic and Atmospheric Administration (NOAA) national weather service database. In other embodiments, a web crawler may be implemented and configured to crawl websites which store relevant data. Data obtained by platform100can be stored in database(s)130. Database(s)130may comprise one or more data storage devices and implement one or more data storage systems such as, for example, relational databases, non-relational databases, graph databases, object-oriented databases, centralized databases, distributed databases, and/or the like. In various embodiments, target custody platform100may receive a target of interest as an input. In response, data acquisition engine110may task a plurality of satellites151with obtaining imagery data based on a projected path of the target of interest. According to some embodiments, the number and type of satellites tasked by custody tracking platform100may be at least partially based on the target of interest, a desired level of risk (i.e., the minimum amount of computed confidence), and derived features such as a predicted cone of trajectory. The imagery data may be stored in database(s)130and used by data analytics engine200to determine one or more satellite footprints. According to the embodiment, data analytics engine200configured to provide various data processing and data analysis functions. In some embodiments, data analytics engine200obtains data from database(s)130and determines, computes, derives, or otherwise calculates one or more characteristics and/or features associated with a target of interest. According to some embodiments, the characteristics and/or features can include, but are not limited to, a channel cone, satellite footprints, target size (e.g., ship size determined by AIS data), number of possible footprints, cover area of a cone slice, cone/footprint convergence percentage, computed average ground slice distance, weather attributes, and/or the like. In some embodiments, the target of interest is a ship or sailing vessel. 
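A minimal sketch of the kind of API-based acquisition described above is shown below. The endpoints, query parameters, and response fields are placeholders, not the interfaces of any real AIS or weather provider, and the requests library is used only as one common way to issue HTTP queries.

```python
import requests

AIS_API = "https://example.com/ais"          # placeholder endpoint, not a real service
WEATHER_API = "https://example.com/weather"  # placeholder endpoint

def fetch_ais(mmsi: str, api_key: str) -> dict:
    """Query an AIS provider for the latest position report of a vessel."""
    r = requests.get(AIS_API, params={"mmsi": mmsi, "key": api_key}, timeout=30)
    r.raise_for_status()
    return r.json()

def fetch_cloud_cover(lat: float, lon: float) -> float:
    """Query a weather source for cloud-cover percentage at a location."""
    r = requests.get(WEATHER_API, params={"lat": lat, "lon": lon}, timeout=30)
    r.raise_for_status()
    return float(r.json().get("cloud_cover_pct", 0.0))
```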
In other embodiments, the target of interest may be a truck, an airplane, a satellite, or any other object that can transit across a geographic region of the globe. Calculated features may be stored in database(s)130. Calculated features determined by data analytics engine200can be fed as inputs into machine learning engine300which can utilize one or more machine and/or deep learning algorithms to train a model configured to determine a target of interest's likely trajectory using a computed confidence score. According to the embodiment, machine learning engine300is configured to train, maintain, and deploy one or more machine and/or deep learning models to provide predictive target tracking capabilities. Machine learning engine300may receive, retrieve, or otherwise obtain a plurality of data which can be sourced from various information sources150including, but not limited to, satellite imagery data151, data sourced from governmental and non-governmental organization databases, and “big data.” For example, AIS data152and two-line element (TLE) data153from national and commercial sources (e.g., produced by NORAD, etc.) are a few such sources. Machine learning engine may use some or all of the obtained data to develop a model for tracking a target of interest. According to the embodiment, a data presentation layer120is present and configured to provide a user interface for interacting with target custody platform100. Data presentation layer120may be able to receive user queries and return an appropriate response, for example, by responding to a query for information by locating and retrieving the relevant information from database130, or, as another example, by responding to an input target of interest by displaying, via a graphical user interface, a projection of the target vessel on a map with the cone of trajectory and its computed confidence levels displayed, as is described in more detail inFIG.4below. The presentation layer120may represent a front-end user interface for platform100and may be implemented as a web application or bespoke software application stored and operating on a computing device which utilizes the backend components of platform100to provide visual representations of queried data and predicted navigation channels. Data presentation layer120may obtain various data and implement one or more systems for visualizing and/or displaying the data. As a simple example, data may be obtained and then a graphing engine may be used to format the data for display as one or more various types of graphs (e.g., bar graphs, histograms, infographics, pie charts, etc.). FIG.2is a block diagram illustrating an exemplary aspect of target custody platform100, the data analytics engine200. According to the embodiment, data analytics engine200is configured to provide data processing and analysis functions to platform100and may comprise one or more modules configured to compute, derive, and/or calculate one or more characteristics and/or features associated with the target of interest. The first module that may be present is a cone calculator module201configured to process various information to determine a cone of trajectory for the target of interest. In a use case involving sailing vessels, data analytics engine200may process AIS data152to glean the location, heading, identity, and characteristics of a ship to produce a cone of trajectory. 
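For illustration, a very rough cone of trajectory of the kind the cone calculator module produces can be built from the last reported position, heading, and speed by sweeping the reachable distance across a heading uncertainty. The spread angle, flat-earth degree approximation, and function name below are assumptions made purely for the sketch.

```python
import math

def cone_of_trajectory(lat, lon, heading_deg, speed_kts, hours,
                       spread_deg=30.0, n_arc=9):
    """Rough cone of trajectory: a wedge of positions reachable after `hours`,
    centered on the reported heading and widened by +/- spread_deg.
    Flat-earth degree approximation, for illustration only."""
    reach_nm = speed_kts * hours          # distance reachable in nautical miles
    deg_per_nm = 1.0 / 60.0               # ~1 nm per minute of latitude
    pts = [(lat, lon)]                    # cone apex at the last AIS fix
    for k in range(n_arc):
        brg = math.radians(heading_deg - spread_deg
                           + k * (2 * spread_deg) / (n_arc - 1))
        d = reach_nm * deg_per_nm
        pts.append((lat + d * math.cos(brg),
                    lon + d * math.sin(brg) / max(math.cos(math.radians(lat)), 1e-6)))
    return pts                            # polygon vertices: apex plus outer arc

print(cone_of_trajectory(36.0, -5.5, heading_deg=90.0, speed_kts=12.0, hours=3.0)[:3])
```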
A second module that may be present is a footprint analyzer202configured to receive a plurality of data from various data sources, including (but not limited to) TLE data153, and build satellite footprints to project where the satellites will be at given points in time. TLE data is a data format encoding a list of orbital elements of an Earth-orbiting object for a given point in time, the epoch. Using a suitable prediction formula, the state (e.g., position and velocity) at any point in the past or future can be estimated to some accuracy. A satellite footprint is the ground area over which its transponders offer coverage and determines the satellite dish diameter required to receive each transponder's signals. Footprint analyzer202may further be configured to compare intersections between satellites (via the footprints) and the produced cone of trajectory and then calculate the number of possible footprints based on the section of cones available. Additional information that may be associated with or obtained from satellites can include, but is not limited to, the number of unique sensors, sensor types, sensor phenomenology, sensor ground sample distance (GSD), and sensor grazing angle. Yet another module that may be present is a size calculator203configured to determine the size of the target of interest. For example, in the use case of tracking sailing vessels, size calculator203may acquire ship dimensions (e.g., length and width) from obtained AIS data152. In some embodiments, targets of interest may need to be filtered by size. An intersection analyzer module204may also be present and configured to calculate the intersection over union between the area of the cone and the area of the access windows for each cone slice. The intersection over union can be defined as the area of satellite access window overlap with the cone of trajectory divided by the total area that the access window and cone occupy together. First, intersection analyzer module204may calculate the cover area of a single slice taken from the produced cone of trajectory. Next, intersection analyzer204may then calculate the percentage of the cone slice that the footprints (i.e., access windows) cover. This information may be used to compute the intersection over union between cone and satellite footprints. According to various embodiments, image analyzer205may be configured to receive a plurality of imagery data and metadata from a plurality of satellite systems151. Metadata can include, but is not limited to, the number of unique sensors, sensor types, sensor phenomenology, sensor GSD, and sensor grazing angle. Image analyzer205can be configured to calculate the average GSD of the satellite images per slice of the cone of trajectory. Data analysis engine200uses these modules to derive characteristics (e.g., features) of the target of interest including, but not limited to, producing a cone of trajectory, building a plurality of satellite footprints using various data sources, determining target of interest size, calculating the number of possible footprints based on the section of cones available, calculating the cover area of a cone slice, calculating the percentage of the cone that the footprints cover, determining the average GSD per slice, and other data processing and analysis tasks that may be required to provide target custody capabilities. The derived characteristics and/or features may be stored in database(s)130and used as inputs by machine learning engine300. 
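A minimal sketch of the intersection-over-union and coverage calculations is shown below, assuming the cone slice and the satellite access windows are available as 2-D polygons. The shapely library is used here only as one convenient geometry toolkit; the patent does not prescribe a particular implementation.

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union

def footprint_iou(cone_slice: Polygon, footprints: list[Polygon]) -> float:
    """Intersection over union between one cone-of-trajectory slice and the
    merged satellite access windows that touch it."""
    windows = unary_union(footprints)
    inter = cone_slice.intersection(windows).area
    union = cone_slice.union(windows).area
    return inter / union if union else 0.0

def slice_coverage_pct(cone_slice: Polygon, footprints: list[Polygon]) -> float:
    """Percentage of the cone slice covered by at least one access window."""
    windows = unary_union(footprints)
    return 100.0 * cone_slice.intersection(windows).area / cone_slice.area
```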
FIG.3is a block diagram illustrating an exemplary aspect of target custody platform100, the machine learning engine300. According to the embodiment, machine learning engine may comprise a model training stage comprising a data preprocessor302, one or more machine and/or deep learning algorithms303, training output304, and a parametric optimizer305, and a model deployment stage comprising a deployed and fully trained model310configured to make predictions on live data311. At the model training stage, a plurality of training data301may be received at machine learning engine300. In some embodiments, the plurality of training data may be obtained from one or more database(s)130and/or directly from various information sources150via data acquisition engine110. In a use case directed to the tracking of ships at sea, a plurality of training data may be sourced from AIS databases. The training dataset may comprise a plurality of information including features and quantities derived, computed, or otherwise calculated by data analytics engine200. For example, the training dataset and input data301may comprise variables such as the target size, target type (e.g., type of ship, type of airplane, type of truck, etc.), the total number of satellite access windows, access window intersection over union, number of sensors, sensor types, sensor phenomenology, average GSD, average grazing angle, average cloud coverage/cloud free percentage, and/or the like. Data preprocessor302may receive the input data and perform various data preprocessing tasks on the input data to format the data for further processing. For example, data preprocessing can include, but is not limited to, tasks related to data cleansing, data deduplication, data normalization, data transformation, handling missing values, feature extraction and selection, mismatch handling, and/or the like. Data preprocessor302may also be configured to create a training dataset and a test set from the plurality of input data301. For example, a training dataset may comprise 85% of the preprocessed input data and the test dataset may comprise the remaining 15% of the data. The preprocessed training dataset may be fed as input into one or more machine and/or deep learning algorithms303in order to train a predictive model for target custody tracking. During model training, training output304is produced and used to measure the accuracy and usefulness of the predictive outputs. During this process, a parametric optimizer305may be used to perform algorithmic tuning between model training iterations. Model parameters and hyperparameters can include, but are not limited to, bias, train-test split ratio, learning rate in optimization algorithms (e.g., gradient descent), choice of optimization algorithm (e.g., gradient descent, stochastic gradient descent, or Adam optimizer, etc.), choice of activation function in a neural network layer (e.g., Sigmoid, ReLu, Tanh, etc.), the choice of cost or loss function the model will use, number of hidden layers in a neural network, number of activation units in each layer, the drop-out rate in a neural network, number of iterations (epochs) in training the model, number of clusters in a clustering task, kernel or filter size in convolutional layers, pooling size, batch size, the coefficients (or weights) of linear or logistic regression models, cluster centroids, and/or the like. Parameters and hyperparameters may be tuned and then applied to the next round of model training. 
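The 85/15 split and the parametric tuning loop described above could look roughly like the following scikit-learn sketch. The stand-in random data, the choice of logistic regression as the estimator, and the parameter grid are illustrative assumptions, not the platform's configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# X: rows of derived per-slice features (coverage, IoU, GSD, cloud-free pct, ...);
# y: 1 if the target was actually reacquired in that slice, else 0 (labels assumed).
X, y = np.random.rand(400, 8), np.random.randint(0, 2, 400)   # stand-in data

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.15, random_state=0)                     # 85/15 split

pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression(max_iter=1000))])
search = GridSearchCV(pipe, {"clf__C": [0.1, 1.0, 10.0]}, cv=5)  # simple tuning loop
search.fit(X_train, y_train)
print("held-out accuracy:", search.score(X_test, y_test))
```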
In this way, the training stage provides a machine learning training loop. The test dataset can be used to test the accuracy of the model outputs. If the training model is making predictions that satisfy a certain criterion (e.g., baseline behavior, etc.), then it can be moved to the model deployment stage as a fully trained and deployed model310in a production environment making predictions based on live input data311. The deployed model can output a computed score315indicating a confidence interval associated with a cone of trajectory for a target of interest. Further, model predictions made by the deployed model can be used as feedback and applied to model training in the training stage, wherein the model is continuously learning over time using both training data and live data and predictions. A model and training database306is present and configured to store training/test datasets and developed models. Database306may also store previous versions of models. According to some embodiments, the one or more machine and/or deep learning models may comprise any suitable algorithm known to those with skill in the art including, but not limited to: supervised learning algorithms such as: regression (e.g., linear, polynomial, logistic, etc.), decision tree, random forest, k-nearest neighbor, support vector machines, Naïve-Bayes algorithm; unsupervised learning algorithms such as clustering algorithms, hidden Markov models, singular value decomposition, and/or the like. Alternatively, or additionally, algorithms303may comprise a deep learning algorithm such as neural networks (e.g., recurrent, convolutional, long short-term memory networks, etc.). In some implementations, a neural network may be trained using preprocessed training data comprising, at least in part, one or more of the features/variables and quantities determined by data analysis engine200or acquired from various information sources (e.g., AIS data152). In such implementations, the neural network may consist of multiple layers of nodes: the input layer, the hidden layer, and the output layer. The input layer receives the input data and passes those data values into the next layer. The hidden layer or layers have complex functions that create predictors. A set of nodes in the hidden layer called neurons represents math functions that modify the input layer. The output layer collects the predictions made in the hidden layer to produce the model's final prediction. The output layer may produce a weighted score that represents a confidence level associated with a slice of a cone of trajectory per a given time frame (e.g., per hour, per half-hour, etc.). For example, at the input layer the neural network may receive a plurality of variables such as, for example, the target size, target type (e.g., type of ship, type of airplane, type of truck, etc.), the total number of satellite access windows, access window intersection over union, number of sensors, sensor types, sensor phenomenology, average GSD, average grazing angle, average cloud coverage/cloud free percentage. These variables are passed into the hidden layer where weighting factors are applied to the variable nodes and the neural network outputs a computed score per hour of cone. 
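A minimal PyTorch sketch of such a network, with per-slice features in and a sigmoid-bounded confidence out, is shown below. The layer sizes, feature count, and class name are assumptions made only to make the example concrete.

```python
import torch
from torch import nn

class SliceScorer(nn.Module):
    """Tiny MLP: derived per-slice features in, confidence in [0, 1] out."""
    def __init__(self, n_features: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, 1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)

model = SliceScorer()
features = torch.rand(4, 8)          # 4 cone slices, 8 features each (stand-in data)
print(model(features).squeeze(-1))   # one confidence value per slice
```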
In some implementations, the output may be a value between 0 and 1, inclusive, which represents a confidence value indicative of the model's confidence that a given slice of cone for a target of interest accurately represents the target's most likely navigation trajectory per some given period of time (e.g., per 60 minutes, 30 minutes, etc.). In some implementations, algorithms303may comprise a random forest model trained using preprocessed training data comprising, at least in part, one or more of the features/variables and quantities determined by data analysis engine200or acquired from various information sources (e.g., AIS data152, TLE data153). The random forest method combines the output of multiple decision trees to reach a single result and is well suited for classification and regression problems. In embodiments utilizing random forest algorithms, the preprocessed dataset may be split into multiple subsets using a bagging, or bootstrap aggregation, technique wherein a random subset of the entire training dataset is selected. Each individual decision tree is generated from a different randomly selected subset drawn with replacement, known as row sampling. This step of row sampling with replacement is referred to as bootstrap. Each decision tree is trained independently and generates output. The final output may be based on majority (or averaging) voting after combining the results of all models via aggregation. Examples of hyperparameters for random forest models that may be tuned via parametric optimizer305can include (but are not limited to) the number of trees the algorithm builds before aggregating the predictions, maximum number of features the random forest considers when splitting a node, minimum number of leaves required to split an internal node, how to split the node in each tree, and maximum leaf nodes in each tree, and/or the like. FIG.4is an example of a target of interest405with its associated cone of trajectory400split into cone slices401a-dand displaying metadata402as well as the computed average score403for the entire cone. According to the embodiment, a legend404may be present which indicates the color of each cone slice and what confidence score it has. Each cone slice401a-dmay be color coded wherein the color indicates the confidence score for that slice of the cone of trajectory for the target of interest. The target of interest405is represented as a diamond but may take on any shape or color according to the embodiment. Metadata402can include information about the date and time associated with the displayed target of interest. This information may be sent to and displayed by data presentation layer120of target custody platform100. In some embodiments, the information displayed inFIG.4may be overlaid on top of a map of the region. For example, if the target of interest was a ship, then the information may be displayed on a map of the body of water the ship is sailing on and any nearby land masses, islands, etc. Detailed Description of Exemplary Aspects FIG.5is a flow diagram illustrating an exemplary method500for modeling target of interest navigation channels with confidence intervals, according to an embodiment. In this exemplary use case, the target of interest is a sailing vessel. According to the embodiment, the method can utilize automated identification system (AIS) emissions as the seed for developing a target tracking model. 
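For illustration, a scikit-learn random forest exposing the hyperparameters named above might be configured as follows. The specific values are placeholders rather than tuned settings, and a regressor is used only because the score is continuous.

```python
from sklearn.ensemble import RandomForestRegressor

# Hyperparameters mirror those named in the text; the values are illustrative only.
rf = RandomForestRegressor(
    n_estimators=200,       # number of trees built before aggregating predictions
    max_features="sqrt",    # max features considered when splitting a node
    min_samples_leaf=3,     # minimum samples required at a leaf
    max_leaf_nodes=64,      # maximum leaf nodes per tree
    bootstrap=True,         # row sampling with replacement (bagging)
    random_state=0)
# rf.fit(X_train, y_train); scores = rf.predict(X_slices)  # averaged tree outputs
```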
In some implementations, feasibilities were run for all major Commercial Imagery vendors, and image footprints were calculated from the imagery data obtained from satellites. Image footprint metadata that are geo-temporally correlated to a sector of the trajectory are aggregated to produce a likelihood score. In some implementations, likelihood is derived based on the intended target size, percentage of the sector covered by imagery, number of images, average ground sample distance (GSD), and percentage of cloud cover. According to the embodiment, the process begins at step501when target custody platform100processes a plurality of obtained data to determine at least one of a location, heading, identity, and characteristics of a ship to produce a cone of trajectory (COT). In some implementations, the obtained data may comprise at least automatic identification system data. In this step, obtained AIS data may be parsed and processed to identify the features associated with one or more ships described above. As a next step502, data analytics engine200can use the plurality of obtained data to build one or more satellite footprints to project where the satellites will be at given points in time and then compare intersections between satellites and the COT. As a next step503, the size of the ship associated with the COT is determined using data parsed from the obtained AIS data. The ship dimensions of length and width can be sourced directly from AIS data and used to determine the ship size. As a next step504, data analytics engine200can calculate the number of possible footprints based on the section of cones available. In various implementations, this calculation may be set to a time interval, for example, calculating the number of footprints per hour or half-hour. After the number of footprints has been calculated over a specified time interval, the next step505performed by data analytics engine200is to calculate the cover area of a cone slice. Further, data analytics engine200can use the calculated area of a cone slice as an input for calculating the percentage of the cone that the footprints cover at step506. At a next step507, platform100calculates the average ground sample distance (GSD) of received satellite images per slice. At the next step508, data analytics engine200determines cloud cover percentage based on obtained cloud cover data for a given location. As a next step509, the quantities and characteristics determined, calculated, and/or computed in previous steps may be pre-processed and normalized prior to being used as inputs into one or more machine and/or deep learning algorithms configured to apply weighting factors510to each of the normalized values. As a last step511, the trained one or more machine and/or deep learning models may produce as output, responsive to the input of the normalized values, a likelihood score per hour of cone. Hardware Architecture Generally, the techniques disclosed herein may be implemented on hardware or a combination of software and hardware. For example, they may be implemented in an operating system kernel, in a separate user process, in a library package bound into network applications, on a specially constructed machine, on an application-specific integrated circuit (ASIC), or on a network interface card. 
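Steps 509 through 511 above (normalize the per-slice quantities, apply weighting factors, output a likelihood per hour of cone) reduce, in their simplest reading, to a weighted combination of min-max-normalized features. The sketch below is that simplest reading only: the feature names, bounds, and weights are assumptions, and in the embodiments the weighting is learned by the model rather than hand-set.

```python
def slice_likelihood(features: dict, weights: dict, bounds: dict) -> float:
    """Min-max normalize each per-slice quantity and combine with weighting
    factors into one likelihood score for that hour of the cone. Features are
    assumed to be oriented so that larger values support custody (e.g., pass
    cloud-FREE percentage rather than cloud cover)."""
    num, den = 0.0, 0.0
    for name, value in features.items():
        lo, hi = bounds[name]
        norm = 0.0 if hi <= lo else min(max((value - lo) / (hi - lo), 0.0), 1.0)
        num += weights[name] * norm
        den += weights[name]
    return num / den if den else 0.0

# Illustrative values only.
feats  = {"coverage_pct": 62.0, "n_images": 5, "cloud_free_pct": 80.0}
bounds = {"coverage_pct": (0, 100), "n_images": (0, 10), "cloud_free_pct": (0, 100)}
wts    = {"coverage_pct": 1.0, "n_images": 0.5, "cloud_free_pct": 0.75}
print(round(slice_likelihood(feats, wts, bounds), 3))
```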
Software/hardware hybrid implementations of at least some of the aspects disclosed herein may be implemented on a programmable network-resident machine (which should be understood to include intermittently connected network-aware machines) selectively activated or reconfigured by a computer program stored in memory. Such network devices may have multiple network interfaces that may be configured or designed to utilize different types of network communication protocols. A general architecture for some of these machines may be described herein in order to illustrate one or more exemplary means by which a given unit of functionality may be implemented. According to specific aspects, at least some of the features or functionalities of the various aspects disclosed herein may be implemented on one or more general-purpose computers associated with one or more networks, such as for example an end-user computer system, a client computer, a network server or other server system, a mobile computing device (e.g., tablet computing device, mobile phone, smartphone, laptop, or other appropriate computing device), a consumer electronic device, a music player, or any other suitable electronic device, router, switch, or other suitable device, or any combination thereof. In at least some aspects, at least some of the features or functionalities of the various aspects disclosed herein may be implemented in one or more virtualized computing environments (e.g., network computing clouds, virtual machines hosted on one or more physical computing machines, or other appropriate virtual environments). Referring now toFIG.6, there is shown a block diagram depicting an exemplary computing device10suitable for implementing at least a portion of the features or functionalities disclosed herein. Computing device10may be, for example, any one of the computing machines listed in the previous paragraph, or indeed any other electronic device capable of executing software- or hardware-based instructions according to one or more programs stored in memory. Computing device10may be configured to communicate with a plurality of other computing devices, such as clients or servers, over communications networks such as a wide area network a metropolitan area network, a local area network, a wireless network, the Internet, or any other network, using known protocols for such communication, whether wireless or wired. In one aspect, computing device10includes one or more central processing units (CPU)12, one or more interfaces15, and one or more busses14(such as a peripheral component interconnect (PCI) bus). When acting under the control of appropriate software or firmware, CPU12may be responsible for implementing specific functions associated with the functions of a specifically configured computing device or machine. For example, in at least one aspect, a computing device10may be configured or designed to function as a server system utilizing CPU12, local memory11and/or remote memory16, and interface(s)15. In at least one aspect, CPU12may be caused to perform one or more of the different types of functions and/or operations under the control of software modules or components, which for example, may include an operating system and any appropriate applications software, drivers, and the like. CPU12may include one or more processors13such as, for example, a processor from one of the Intel, ARM, Qualcomm, and AMD families of microprocessors. 
In some aspects, processors13may include specially designed hardware such as application-specific integrated circuits (ASICs), electrically erasable programmable read-only memories (EEPROMs), field-programmable gate arrays (FPGAs), and so forth, for controlling operations of computing device10. In a particular aspect, a local memory11(such as non-volatile random-access memory (RAM) and/or read-only memory (ROM), including for example one or more levels of cached memory) may also form part of CPU12. However, there are many different ways in which memory may be coupled to system10. Memory11may be used for a variety of purposes such as, for example, caching and/or storing data, programming instructions, and the like. It should be further appreciated that CPU12may be one of a variety of system-on-a-chip (SOC) type hardware that may include additional hardware such as memory or graphics processing chips, such as a QUALCOMM SNAPDRAGON™ or SAMSUNG EXYNOS™ CPU as are becoming increasingly common in the art, such as for use in mobile devices or integrated devices. As used herein, the term “processor” is not limited merely to those integrated circuits referred to in the art as a processor, a mobile processor, or a microprocessor, but broadly refers to a microcontroller, a microcomputer, a programmable logic controller, an application-specific integrated circuit, and any other programmable circuit. In one aspect, interfaces15are provided as network interface cards (NICs). Generally, NICs control the sending and receiving of data packets over a computer network; other types of interfaces may for example support other peripherals used with computing device10. Among the interfaces that may be provided are Ethernet interfaces, frame relay interfaces, cable interfaces, DSL interfaces, token ring interfaces, graphics interfaces, and the like. In addition, various types of interfaces may be provided such as, for example, universal serial bus (USB), Serial, Ethernet, FIREWIRE™ THUNDERBOLT™, PCI, parallel, radio frequency (RF), BLUETOOTH™, near-field communications (e.g., using near-field magnetics), 802.11 (WiFi), frame relay, TCP/IP, ISDN, fast Ethernet interfaces, Gigabit Ethernet interfaces, Serial ATA (SATA) or external SATA (ESATA) interfaces, high-definition multimedia interface (HDMI), digital visual interface (DVI), analog or digital audio interfaces, asynchronous transfer mode (ATM) interfaces, high-speed serial interface (HSSI) interfaces, Point of Sale (POS) interfaces, fiber data distributed interfaces (FDDIs), and the like. Generally, such interfaces15may include physical ports appropriate for communication with appropriate media. In some cases, they may also include an independent processor (such as a dedicated audio or video processor, as is common in the art for high-fidelity A/V hardware interfaces) and, in some instances, volatile and/or non-volatile memory (e.g., RAM). Although the system shown inFIG.6illustrates one specific architecture for a computing device10for implementing one or more of the aspects described herein, it is by no means the only device architecture on which at least a portion of the features and techniques described herein may be implemented. For example, architectures having one or any number of processors13may be used, and such processors13may be present in a single device or distributed among any number of devices. 
In one aspect, a single processor13handles communications as well as routing computations, while in other aspects a separate dedicated communications processor may be provided. In various aspects, different types of features or functionalities may be implemented in a system according to the aspect that includes a client device (such as a tablet device or smartphone running client software) and server systems (such as a server system described in more detail below). Regardless of network device configuration, the system of an aspect may employ one or more memories or memory modules (such as, for example, remote memory block16and local memory11) configured to store data, program instructions for the general-purpose network operations, or other information relating to the functionality of the aspects described herein (or any combinations of the above). Program instructions may control execution of or comprise an operating system and/or one or more applications, for example. Memory16or memories11,16may also be configured to store data structures, configuration data, encryption data, historical system operations information, or any other specific or generic non-program information described herein. Because such information and program instructions may be employed to implement one or more systems or methods described herein, at least some network device aspects may include nontransitory machine-readable storage media, which, for example, may be configured or designed to store program instructions, state information, and the like for performing various operations described herein. Examples of such nontransitory machine-readable storage media include, but are not limited to, magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM disks; magneto-optical media such as optical disks, and hardware devices that are specially configured to store and perform program instructions, such as read-only memory devices (ROM), flash memory (as is common in mobile devices and integrated systems), solid state drives (SSD) and “hybrid SSD” storage drives that may combine physical components of solid state and hard disk drives in a single hardware device (as are becoming increasingly common in the art with regard to personal computers), memristor memory, random access memory (RAM), and the like. It should be appreciated that such storage means may be integral and non-removable (such as RAM hardware modules that may be soldered onto a motherboard or otherwise integrated into an electronic device), or they may be removable such as swappable flash memory modules (such as “thumb drives” or other removable media designed for rapidly exchanging physical storage devices), “hot-swappable” hard disk drives or solid state drives, removable optical storage discs, or other such removable media, and that such integral and removable storage media may be utilized interchangeably. Examples of program instructions include both object code, such as may be produced by a compiler, machine code, such as may be produced by an assembler or a linker, byte code, such as may be generated by for example a JAVA™ compiler and may be executed using a Java virtual machine or equivalent, or files containing higher level code that may be executed by the computer using an interpreter (for example, scripts written in Python, Perl, Ruby, Groovy, or any other scripting language). In some aspects, systems may be implemented on a standalone computing system. 
Referring now toFIG.7, there is shown a block diagram depicting a typical exemplary architecture of one or more aspects or components thereof on a standalone computing system. Computing device20includes processors21that may run software that carry out one or more functions or applications of aspects, such as for example a client application24. Processors21may carry out computing instructions under control of an operating system22such as, for example, a version of MICROSOFT WINDOWS™ operating system, APPLE macOS™ or iOS™ operating systems, some variety of the Linux operating system, ANDROID™ operating system, or the like. In many cases, one or more shared services23may be operable in system20and may be useful for providing common services to client applications24. Services23may for example be WINDOWS™ services, user-space common services in a Linux environment, or any other type of common service architecture used with operating system21. Input devices28may be of any type suitable for receiving user input, including for example a keyboard, touchscreen, microphone (for example, for voice input), mouse, touchpad, trackball, or any combination thereof. Output devices27may be of any type suitable for providing output to one or more users, whether remote or local to system20, and may include for example one or more screens for visual output, speakers, printers, or any combination thereof. Memory25may be random-access memory having any structure and architecture known in the art, for use by processors21, for example to run software. Storage devices26may be any magnetic, optical, mechanical, memristor, or electrical storage device for storage of data in digital form (such as those described above, referring toFIG.6). Examples of storage devices26include flash memory, magnetic hard drive, CD-ROM, and/or the like. In some aspects, systems may be implemented on a distributed computing network, such as one having any number of clients and/or servers. Referring now toFIG.8, there is shown a block diagram depicting an exemplary architecture30for implementing at least a portion of a system according to one aspect on a distributed computing network. According to the aspect, any number of clients33may be provided. Each client33may run software for implementing client-side portions of a system; clients may comprise a system20such as that illustrated inFIG.7. In addition, any number of servers32may be provided for handling requests received from one or more clients33. Clients33and servers32may communicate with one another via one or more electronic networks31, which may be in various aspects any of the Internet, a wide area network, a mobile telephony network (such as CDMA or GSM cellular networks), a wireless network (such as WiFi, WiMAX, LTE, and so forth), or a local area network (or indeed any network topology known in the art; the aspect does not prefer any one network topology over any other). Networks31may be implemented using any known network protocols, including for example wired and/or wireless protocols. In addition, in some aspects, servers32may call external services37when needed to obtain additional information, or to refer to additional data concerning a particular call. Communications with external services37may take place, for example, via one or more networks31. In various aspects, external services37may comprise web-enabled services or functionality related to or installed on the hardware device itself. 
For example, in one aspect where client applications24are implemented on a smartphone or other electronic device, client applications24may obtain information stored in a server system32in the cloud or on an external service37deployed on one or more of a particular enterprise's or user's premises. In addition to local storage on servers32, remote storage38may be accessible through the network(s)31. In some aspects, clients33or servers32(or both) may make use of one or more specialized services or appliances that may be deployed locally or remotely across one or more networks31. For example, one or more databases34in either local or remote storage38may be used or referred to by one or more aspects. It should be understood by one having ordinary skill in the art that databases in storage34may be arranged in a wide variety of architectures and using a wide variety of data access and manipulation means. For example, in various aspects one or more databases in storage34may comprise a relational database system using a structured query language (SQL), while others may comprise an alternative data storage technology such as those referred to in the art as “NoSQL” (for example, HADOOP CASSANDRA™, GOOGLE BIGTABLE™, and so forth). In some aspects, variant database architectures such as column-oriented databases, in-memory databases, clustered databases, distributed databases, or even flat file data repositories may be used according to the aspect. It will be appreciated by one having ordinary skill in the art that any combination of known or future database technologies may be used as appropriate, unless a specific database technology or a specific arrangement of components is specified for a particular aspect described herein. Moreover, it should be appreciated that the term “database” as used herein may refer to a physical database machine, a cluster of machines acting as a single database system, or a logical database within an overall database management system. Unless a specific meaning is specified for a given use of the term “database”, it should be construed to mean any of these senses of the word, all of which are understood as a plain meaning of the term “database” by those having ordinary skill in the art. Similarly, some aspects may make use of one or more security systems36and configuration systems35. Security and configuration management are common information technology (IT) and web functions, and some amount of each are generally associated with any IT or web systems. It should be understood by one having ordinary skill in the art that any configuration or security subsystems known in the art now or in the future may be used in conjunction with aspects without limitation, unless a specific security36or configuration system35or approach is specifically required by the description of any specific aspect. FIG.9shows an exemplary overview of a computer system40as may be used in any of the various locations throughout the system. It is exemplary of any computer that may execute code to process data. Various modifications and changes may be made to computer system40without departing from the broader scope of the system and method disclosed herein. Central processor unit (CPU)41is connected to bus42, to which bus is also connected memory43, nonvolatile memory44, display47, input/output (I/O) unit48, and network interface card (NIC)53. 
I/O unit48may, typically, be connected to peripherals such as a keyboard49, pointing device50, hard disk52, real-time clock51, a camera57, and other peripheral devices. NIC53connects to network54, which may be the Internet or a local network, which local network may or may not have connections to the Internet. The system may be connected to other computing devices through the network via a router wireless local area network56, or any other network connection. Also shown as part of system is power supply unit45connected, in this example, to a main alternating current (AC) supply46. Not shown are batteries that could be present, and many other devices and modifications that are well known but are not applicable to the specific novel functions of the current system and method disclosed herein. It should be appreciated that some or all components illustrated may be combined, such as in various integrated applications, for example Qualcomm or Samsung system-on-a-chip (SOC) devices, or whenever it may be appropriate to combine multiple capabilities or functions into a single hardware device (for instance, in mobile devices such as smartphones, video game consoles, in-vehicle computer systems such as navigation or multimedia systems in automobiles, or other integrated hardware devices). In various aspects, functionality for implementing systems or methods of various aspects may be distributed among any number of client and/or server components. For example, various software modules may be implemented for performing various functions in connection with the system of any particular aspect, and such modules may be variously implemented to run on server and/or client components. The skilled person will be aware of a range of possible modifications of the various aspects described above. Accordingly, the present invention is defined by the claims and their equivalents. | 49,691 |
11861895 | DETAILED DESCRIPTION The following detailed description of various embodiments herein makes reference to the accompanying drawings, which show various embodiments by way of illustration. While these various embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure, it should be understood that other embodiments may be realized and that changes may be made without departing from the scope of the disclosure. Thus, the detailed description herein is presented for purposes of illustration only and not of limitation. Furthermore, any reference to singular includes plural embodiments, and any reference to more than one component or step may include a singular embodiment or step. Also, any reference to attached, fixed, connected, or the like may include permanent, removable, temporary, partial, full or any other possible attachment option. Additionally, any reference to without contact (or similar phrases) may also include reduced contact or minimal contact. It should also be understood that unless specifically stated otherwise, references to “a,” “an” or “the” may include one or more than one and that reference to an item in the singular may also include the item in the plural. Further, all ranges may include upper and lower values and all ranges and ratio limits disclosed herein may be combined. With reference now toFIG.1, a helicopter100flying at a height H above the ground G is illustrated. In accordance with various embodiments, the helicopter100is equipped with a helicopter search light102, which is mounted to a front and bottom portion of the helicopter100. The helicopter search light102comprises a lighting arrangement having an adjustable light output. In various embodiments, for example, the lighting arrangement of the helicopter search light102may have two modes of operation: a floodlight mode and a spotlight mode. The spotlight mode is sometimes called a search mode or a pencil mode. When operated in the spotlight mode, a narrow beam of light104, as schematically illustrated by the dashed lines inFIG.1, is emitted from the helicopter search light102. The pilot may thus inspect the ground within an area A about a center position P where a main light emission direction106meets the ground. In the spotlight mode, the light emitted by the helicopter search light102is bundled along the main light emission direction106. As a result, the ground is brightly illuminated within the area A, which is located about the center position P, allowing for a close and thorough inspection of the ground G or of an object on the ground G that is within the area A. When operated in the floodlight mode, a wide beam of light108, as schematically illustrated by the solid lines inFIG.1, is emitted from the helicopter search light102. As illustrated, the cone of light resulting from the floodlight mode is much broader than the cone of light resulting from the spotlight mode, with both cones of light still defining a main light emission direction106at the centers of the cones. In various embodiments, the cone of light resulting from the floodlight mode may have an opening angle of about one-hundred degrees (100°), which is indicated by the angle sweep110shown inFIG.1. When using the floodlight mode, the pilot may inspect a larger portion of the environment outside the helicopter100than when using the spotlight mode. 
However, when using the floodlight mode, the lighting power of the helicopter search light102is distributed over a larger angular region than when in the spotlight mode and, thus, the luminance of the ground G is considerably less than when using the spotlight mode. Consequently, the floodlight mode is typically employed when the helicopter100is flying at a height H not greater than about twenty to thirty meters (20-30 m), which may be considered relatively close to the ground G. Referring now toFIGS.2A,2B and2C, top and side views of a helicopter search light202, similar to the helicopter search light102described above, are illustrated. In various embodiments, the helicopter search light202includes a light head220having a housing in the form of a cylindrical side wall222. The light head220further includes a first plurality of light sources224and a second plurality of light sources226that are spaced apart and arranged in a pattern (e.g., a circular pattern or a hexagonal pattern where the number of light sources equals six, as illustrated) within the light head220. Each of the first plurality of light sources224is associated with a first optical system228and each of the second plurality of light sources is associated with a second optical system230. In various embodiments, each of the first plurality of light sources224that are associated with the first optical system228are of identical design and positioned at the corners of a first equilateral hexagon, which is indicated by dashed lines inFIGS.2A and2B. Similarly, each of the second plurality of light sources226that are associated with the second optical system230are of identical design and positioned at the corners of a second equilateral hexagon that is positioned radially outward of the first equilateral hexagon. As illustrated, each of the second plurality of light sources226is packed between adjacent pairs of the first plurality of light sources224and the cylindrical side wall222of the light head220. In various embodiments, the first plurality of light sources224associated with the first optical system228is configured for providing the spotlight mode of operation, while the second plurality of light sources226associated with the second optical system230is configured for providing the floodlight mode of operation. As will be appreciated, the disclosure contemplates other arrangements of the various pluralities of light sources within the light head220. For example, as illustrated inFIG.2B, the light head220may include an auxiliary light source232centrally positioned within the first plurality of light sources224. In various embodiments, the auxiliary light source232may be associated with either of the first optical system228or the second optical system230, or it may be associated with a third optical system that is separate from the first and second optical systems. Referring now primarily toFIG.2C, a cross-sectional side view of the light head220, taken along the cross-sectional plane S indicated inFIG.2B, is illustrated. The light head220has a light emission side LE, depicted as the top side of the cross-sectional plane S, and a heat discharge side HD, depicted as the bottom side of the cross-sectional plane S. At the heat discharge side, the light head220includes a cooling rib structure235configured to provide a heat sink for the first and second pluralities of light sources.
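As a rough illustration of the geometry described above, the radius of the ground area illuminated by a light cone can be estimated from the opening angle and the flight height H. The short Python sketch below is only a worked example using the figures stated above (an approximately one-hundred degree (100°) floodlight cone and a height of about twenty-five meters (25 m)); the ten degree (10°) spotlight angle is an assumed value for comparison and is not specified in this disclosure.

import math

def illuminated_radius(height_m: float, opening_angle_deg: float) -> float:
    # Radius on the ground of a light cone with the given full opening angle.
    return height_m * math.tan(math.radians(opening_angle_deg / 2.0))

# Floodlight mode: a ~100 degree cone from 25 m illuminates a circle of roughly 30 m radius.
print(round(illuminated_radius(25.0, 100.0), 1))  # approximately 29.8
# Spotlight mode (assumed 10 degree cone): the same height yields only about a 2.2 m radius.
print(round(illuminated_radius(25.0, 10.0), 1))   # approximately 2.2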
As further illustrated, each of the first plurality of light sources224comprises a collimating lens or a collimating reflector configured to generally focus the light in a beam for the spotlight mode of operation. Conversely, each of the second plurality of light sources226does not include a collimator, and thus is configured to emit light in more of a spherical or cone-shaped manner for the floodlight mode of operation. In various embodiments, each of the first plurality of light sources224and each of the second plurality of light sources226comprise light emitting diodes (LEDs) that are configured to emit white light in the visible light range—e.g., light that is visible to the human eye. In various embodiments, one or more of the first plurality of light sources224or the second plurality of light sources226may also be configured for emitting infrared or ultraviolet light. Still referring toFIGS.2A-2C, the light head220includes a viewing system240. In various embodiments, the viewing system240may be positioned at a central location within the light head220, as illustrated atFIG.2A, or the viewing system240may be positioned at a peripheral location (e.g., attached to the cylindrical side wall222) as illustrated atFIG.2B. In various embodiments, the viewing system240comprises a video-camera configured to capture and transmit images at a rate on the order of thirty frames per second (30 fps) or greater. Further, in various embodiments, the light head220includes a controller234(seeFIG.2C), which may be integrated within the light head220, as illustrated, or positioned within the helicopter. The controller234typically includes a switching circuit that is electrically coupled to an electrical power source, as well as to each of the first plurality of light sources224and each of the second plurality of light sources226. The switching circuit allows for selectively switching the first plurality of light sources224and the second plurality of light sources226on and off and for selectively switching between the spotlight mode and the floodlight mode or a combination of the two modes. In various embodiments, the controller234is also configured to operate the viewing system240—e.g., to turn the viewing system240on and off or to select various viewing parameters, such as, for example, focal length and frame speed. Further, in various embodiments, the controller234is configured to adjust or rotate the field of view of the viewing system240, which, in various embodiments, is on the order of at least seventy degrees (70°) in the horizontal and at least sixty degrees (60°) in the vertical. Rotating the field of view may be accomplished, for example, by reorienting or rotating the main light emission direction106of the helicopter search light to which the viewing system240is attached or by reorienting or rotating the viewing system240independent of the helicopter search light. In various embodiments, the controller234may include a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or some other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof. Referring now toFIG.3, a system300for detecting and tracking an object from a helicopter, such as the helicopter100described above with reference toFIG.1, is illustrated.
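The switching behavior described for the controller234can be summarized in a short sketch. The Python class and method names below (SearchLightController, set_mode, and so on) are illustrative assumptions introduced for explanation only; they do not correspond to the actual switching circuit or firmware of the light head220.

from enum import Enum

class LightMode(Enum):
    OFF = 0
    SPOTLIGHT = 1    # first plurality of light sources (collimated)
    FLOODLIGHT = 2   # second plurality of light sources (no collimator)
    COMBINED = 3     # both pluralities on at once

class SearchLightController:
    # Illustrative model of the switching circuit and viewing system control.
    def __init__(self, spot_leds, flood_leds, viewing_system):
        self.spot_leds = spot_leds
        self.flood_leds = flood_leds
        self.viewing_system = viewing_system
        self.mode = LightMode.OFF

    def set_mode(self, mode: LightMode) -> None:
        # Selectively switch each group of LEDs on or off for the requested mode.
        self.spot_leds.set_on(mode in (LightMode.SPOTLIGHT, LightMode.COMBINED))
        self.flood_leds.set_on(mode in (LightMode.FLOODLIGHT, LightMode.COMBINED))
        self.mode = mode

    def configure_camera(self, powered: bool, frames_per_second: int = 30) -> None:
        # The controller also operates the viewing system (power, frame rate, etc.).
        self.viewing_system.set_power(powered)
        self.viewing_system.set_frame_rate(frames_per_second)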
In various embodiments, the system includes a helicopter search light302, similar to the helicopter search light202described above with reference toFIGS.2A-2C, and a viewing system340, similar to the viewing system240described above with reference toFIGS.2A and2B. In various embodiments, the viewing system340is configured to capture and transmit a plurality of images or a video stream of images of a region of interest or one or more objects within the region of interest. The system300further includes an object detection module350that includes a pattern database, an object tracking module304, an alert module306, a display module308and an object selection module310. In various embodiments, the object detection module350is configured to detect anomalous or suspicious behavior based on data stored within the pattern database. By way of example, while the helicopter is traveling along or proximate a highway or an interstate, the helicopter search light302and the viewing system340may be aimed in the general direction of the highway or the interstate. Data obtained from the viewing system340is transmitted to the object detection module350and analyzed against various anomalous scenarios stored within the pattern database. If, for example, a vehicle is traveling on the highway or the interstate in the wrong direction or if a person is walking across the highway or the interstate and in danger of being struck by a vehicle, the object detection module will provide a signal to the alert module306, which will then alert the pilot (e.g., via an audible or visual signal) of the anomalous behavior or scenario. At the same time, a visual depiction of the anomalous behavior or scenario is displayed via the display module308. The pilot then has the choice to select for tracking the object exhibiting the anomalous behavior or scenario via an input to the object selection module310. If the pilot elects to track the object, then the object tracking module304is activated and continuously moves the helicopter search light302and the viewing system340such that they both remain aimed at the object or objects being tracked. Referring now toFIGS.4A and4B, flowcharts are provided that more fully describe operation of a system400for detecting and tracking an object from a helicopter, similar to the system300described above with reference toFIG.3. At step402, a viewing system, such as, for example, a video-camera is turned on and oriented toward an area of interest. At step404, the viewing system captures and transmits images of the area of interest at a rate equal to at least thirty frames per second (30 fps). As noted above, in various embodiments, a field of view of the viewing system is on the order of at least seventy degrees (70°) in the horizontal and at least sixty degrees (60°) in the vertical. At step406, the images are acquired and transmitted to a crew display unit408and to a control module410. If the control module410is set to manual, then the helicopter crew decides whether or not to detect any anomalous or suspicious activity occurring within the area of interest at step412. If a decision is made to detect anomalous or suspicious activities, then the images are transmitted to an object detection module450, similar to the object detection module350described above with reference toFIG.3. If the control module410is set to automatic, then the images are transmitted to the object detection module450without input from the helicopter crew. 
The object detection module450, as discussed further below, includes a pattern database452and a processor454configured to process the images and to make comparisons of the images against data within the pattern database452. At step414, features of the objects detected within the area of interest (e.g., an automobile or a human) are identified and extracted. At step416, a determination is made as to whether the object is stationary or moving. If the object is stationary, then a decision is made whether or not to track the object at step418. If the decision is to not track the object, then the system400returns to the object detection module450and continues as described above. If the decision is to track the object, then an object tracking module420is activated. The object tracking module, similar to the object tracking module304described above with reference toFIG.3, controls movement of a helicopter search light such that it remains aimed at the object being tracked. In similar fashion, if the object is moving, then a decision is made whether or not to track the object at step424. If the decision is to not track the object, then the system400returns to the object detection module450and continues as described above. If the decision is to track the object, then the helicopter search light is focused on the object at step426and the object tracking module420is activated and controls movement of the helicopter search light and the viewing system, at step422, such that both remain aimed at the object being tracked. At the same time, because the object is moving, the system continues operation of the viewing system at step428. The various steps repeat sequentially until the helicopter crew halts tracking. A user may also manually input various modes of operation at step430via an input module. Referring more particularly toFIG.4B, and with continued reference toFIG.4A, further details of the object detection module450are provided. In various embodiments, the object detection module450receives an input video stream at step456. The input video stream is provided, for example, following the image acquisition at step406inFIG.4A. The input video stream is analyzed at step458for any objects displaying anomalous or suspicious behavior. In various embodiments, the detection may be made by comparing the input video stream against the pattern database452. Any objects detected are further analyzed at step460to determine whether the objects are known to currently not be anomalous or suspicious. If known to currently not be anomalous or suspicious, the objects are rescanned and reanalyzed, at step464, following receipt of subsequent images received from the input video stream. If, on the other hand, the objects detected are not known to currently not be anomalous or suspicious, then the objects are further analyzed at step462. If this further analysis determines an object is acting in an anomalous or suspicious manner, then the system400proceeds with tracking at step466as described above with reference toFIG.4A. If, on the other hand, the further analysis determines an object is not acting in an anomalous or suspicious manner, then the objects are rescanned and reanalyzed, at step464. Operation of the object detection module450is based on various machine learning models or deep learning models configured to detect the presence of anomalous or suspicious behavior or activity.
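The decision flow ofFIGS.4A and4Bcan be condensed into a single per-frame loop, sketched below in Python. The collaborating objects and their method names (detector, pattern_db, tracker, and so on) are placeholders assumed for illustration; they are not elements recited by this disclosure.

def process_frame(frame, detector, pattern_db, tracker, alert, display, crew_input):
    # One pass of the detection and tracking decision flow for a single video frame.
    for candidate in detector.detect(frame, pattern_db):      # compare against the pattern database
        if pattern_db.is_known_benign(candidate):
            continue                                          # rescan on subsequent frames
        if not detector.confirm_anomalous(candidate, pattern_db):
            continue                                          # further analysis finds nothing anomalous
        alert.notify(candidate)                               # audible or visual alert to the crew
        display.show(frame, candidate)                        # visual depiction of the behavior
        if crew_input.wants_tracking(candidate):
            tracker.follow(candidate)                         # keep the search light and camera aimed at it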
The various machine learning models may comprise, for example, a Viola-Jones object detection model, a scale-invariant feature transformation model, or a histogram of oriented gradients model. The various deep learning models may comprise, for example, a You Only Look Once (YOLO) model, any of the class of region proposal models (e.g., R-CNN, Fast R-CNN, Faster R-CNN or Cascade R-CNN) or various neural network models, including, for example, a single-shot refinement neural network for object detection model. The resulting system is thus self-learning, meaning the pattern database452is continually updated through each operation. Initial operation of the system may employ pre-defined image data sets compiled from various sources (e.g., photographs taken from online sources). The pre-defined image data sets may be categorized with reference to different geographic regions, such as, for example, an urban residential area, a rural area, a forest, a water body, a highway, an international boundary or border, etc. With each use of the system, the image data sets may be updated for the different geographic regions based on where the system is being operated. During or prior to operation, inputs to the system (e.g., via the user input at step430) may include selection of a data set corresponding to a specific geographic region (e.g., a highway) and selection of a specific type of object being considered for detection and tracking (e.g., a human crossing or an automobile traveling on the highway). Selection of the geographic region and the object being considered may be referred to as a geographic region identifier and an object identifier, respectively, both of which may be entered into the system via an input module at step430. Additionally, the object detection module450may be configured to include a lookup table (e.g., within the pattern database) for each object marked for tracking, thereby enabling a resumption of tracking in the event an object is lost from the current field of view of the viewing system (e.g., a human or an automobile becomes positioned under a bridge or within a tunnel for a period of time). In such embodiments, the system may be configured to continue tracking various other objects until the object lost from the current field of view reappears, at which point all objects may be tracked. By way of examples, the system may be used to detect and track a vehicle moving in an incorrect direction or in a suspicious manner (e.g., weaving in and out of lanes or traveling at an excessive rate of speed) on a highway. More specifically, the helicopter crew may start the system (e.g., start the video-camera) and input a highway and an automobile as operating modes. The captured images are transmitted to the object detection module450to detect the automobile exhibiting anomalous or suspicious behavior. Once detected, the helicopter crew is alerted and a decision is made whether to track the automobile. If the decision is made to track the automobile, additional characteristics (e.g., color and model of the automobile and travel direction) are stored within the pattern database, either permanently or temporarily. In a similar example, the system may be used to detect and track one or more humans exhibiting suspicious activities. The helicopter crew may input a geographic region (e.g., a highway) and a human as operating modes. 
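The lookup table described above for resuming an interrupted track could be as simple as a keyed record of the objects marked for tracking, together with the geographic region identifier, the object identifier, and the stored characteristics. The data structure below is an assumption sketched for illustration, not the pattern database actually used.

from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    object_id: str
    object_type: str                                  # object identifier, e.g. "automobile" or "human"
    region: str                                       # geographic region identifier, e.g. "highway"
    attributes: dict = field(default_factory=dict)    # e.g. color, model, travel direction
    visible: bool = True                              # False while hidden (bridge, tunnel, etc.)

class TrackingLookupTable:
    def __init__(self):
        self._objects = {}

    def mark_for_tracking(self, obj: TrackedObject) -> None:
        self._objects[obj.object_id] = obj

    def mark_lost(self, object_id: str) -> None:
        self._objects[object_id].visible = False      # continue tracking the other objects

    def resume_if_reappeared(self, object_id: str) -> TrackedObject:
        # When the object re-enters the field of view, tracking resumes from the stored record.
        obj = self._objects[object_id]
        obj.visible = True
        return obj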
The captured images are transmitted to the object detection module450to detect one or more humans exhibiting anomalous or suspicious behavior (e.g., crossing or walking along a highway). Once detected, the helicopter crew is alerted and a decision is made whether to track the one or more humans. If the decision is made to track the one or more humans, additional characteristics (e.g., color of clothes and physical features) are stored within the pattern database, either permanently or temporarily. Similar examples may be made with respect to other geographic regions (e.g., international borders to detect illegal crossings or urban areas to monitor the movements of individuals exhibiting illegal behavior). Examples of relatively stationary objects include large groups of individuals or accident sites. For example, where large numbers of individuals are present (e.g., large protests), appropriate operational modes may be selected according to geographic region and the movement of individuals or groups of individuals may be tracked for suspicious behavior within the larger group of individuals (e.g., groups of individuals approaching police or property under protection). Similarly, the system may be used to detect and track various elements at the scenes of accidents, where objects such as fire, smoke, fire trucks or ambulances may be detected, thereby alerting the helicopter crew of a potential accident site. The above disclosure provides a method for detecting and tracking an object that is exhibiting an anomalous behavior from a helicopter and a system for accomplishing the same, the anomalous behavior typically being exhibited by, for example, humans or automobiles, whether stationary or moving. Various benefits of the disclosure include a reduction of crew required to operate a helicopter and a reduction in distractions to the crew while operating the helicopter. The disclosure also provides a low-cost solution for operating a helicopter search light and an easily mountable viewing system for use in conjunction with the helicopter search light. The systems disclosed herein may be retrofitted to existing helicopters without extensive modifications to existing hardware or software and may be readily upgraded as improvements to machine learning models or deep learning models are made. The system and methods described herein may be described in terms of functional block components, screen shots, optional selections, and various processing steps. It should be appreciated that such functional blocks may be realized by any number of hardware or software components configured to perform the specified functions. For example, the system may employ various integrated circuit components, e.g., memory elements, processing elements, logic elements, look-up tables, and the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices. Similarly, the software elements of the system may be implemented with any programming or scripting language such as C, C++, C#, JAVA®, VBScript, COBOL, MICROSOFT® Active Server Pages, assembly, PERL®, PHP, PYTHON®, Visual Basic, SQL Stored Procedures, PL/SQL, any UNIX® shell script, and extensible markup language (XML) with the various algorithms being implemented with any combination of data structures, objects, processes, routines or other programming elements.
Further, it should be noted that the system may employ any number of conventional techniques for data transmission, signaling, data processing, network control, and the like. The various system components discussed herein may also include one or more of the following: a host server or other computing systems including a processor for processing digital data; a memory coupled to the processor for storing digital data; an input digitizer coupled to the processor for inputting digital data; an application program stored in the memory and accessible by the processor for directing processing of digital data by the processor; a display device coupled to the processor and memory for displaying information derived from digital data processed by the processor; and a plurality of databases. Various databases used herein may include: client data; merchant data; financial institution data; or like data useful in the operation of the system. As those skilled in the art will appreciate, users computer may include an operating system (e.g., WINDOWS®, UNIX®, LINUX®, SOLARIS®, MACOS®, etc.) as well as various conventional support software and drivers typically associated with computers. Benefits, other advantages, and solutions to problems have been described herein with regard to specific embodiments. Furthermore, the connecting lines shown in the various figures contained herein are intended to represent exemplary functional relationships or physical couplings between the various elements. It should be noted that many alternative or additional functional relationships or physical connections may be present in a practical system. However, the benefits, advantages, solutions to problems, and any elements that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as critical, required, or essential features or elements of the disclosure. The scope of the disclosure is accordingly to be limited by nothing other than the appended claims, in which reference to an element in the singular is not intended to mean “one and only one” unless explicitly so stated, but rather “one or more.” Moreover, where a phrase similar to “at least one of A, B, or C” is used in the claims, it is intended that the phrase be interpreted to mean that A alone may be present in an embodiment, B alone may be present in an embodiment, C alone may be present in an embodiment, or that any combination of the elements A, B and C may be present in a single embodiment; for example, A and B, A and C, B and C, or A and B and C. Different cross-hatching is used throughout the figures to denote different parts but not necessarily to denote the same or different materials. Systems, methods and apparatus are provided herein. In the detailed description herein, references to “one embodiment,” “an embodiment,” “various embodiments,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. 
After reading the description, it will be apparent to one skilled in the relevant art(s) how to implement the disclosure in alternative embodiments. Numbers, percentages, or other values stated herein are intended to include that value, and also other values that are about or approximately equal to the stated value, as would be appreciated by one of ordinary skill in the art encompassed by various embodiments of the present disclosure. A stated value should therefore be interpreted broadly enough to encompass values that are at least close enough to the stated value to perform a desired function or achieve a desired result. The stated values include at least the variation to be expected in a suitable industrial process, and may include values that are within 10%, within 5%, within 1%, within 0.1%, or within 0.01% of a stated value. Additionally, the terms “substantially,” “about” or “approximately” as used herein represent an amount close to the stated amount that still performs a desired function or achieves a desired result. For example, the term “substantially,” “about” or “approximately” may refer to an amount that is within 10% of, within 5% of, within 1% of, within 0.1% of, and within 0.01% of a stated amount or value. In various embodiments, system program instructions or controller instructions may be loaded onto a tangible, non-transitory, computer-readable medium (also referred to herein as a tangible, non-transitory, memory) having instructions stored thereon that, in response to execution by a controller, cause the controller to perform various operations. The term “non-transitory” is to be understood to remove only propagating transitory signals per se from the claim scope and does not relinquish rights to all standard computer-readable media that are not only propagating transitory signals per se. Stated another way, the meaning of the term “non-transitory computer-readable medium” and “non-transitory computer-readable storage medium” should be construed to exclude only those types of transitory computer-readable media that were found by In Re Nuijten to fall outside the scope of patentable subject matter under 35 U.S.C. § 101. Furthermore, no element, component, or method step in the present disclosure is intended to be dedicated to the public regardless of whether the element, component, or method step is explicitly recited in the claims. No claim element herein is to be construed under the provisions of 35 U.S.C. 112(f) unless the element is expressly recited using the phrase “means for.” As used herein, the terms “comprises,” “comprising,” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Finally, it should be understood that any of the above described concepts can be used alone or in combination with any or all of the other above described concepts. Although various embodiments have been disclosed and described, one of ordinary skill in this art would recognize that certain modifications would come within the scope of this disclosure. Accordingly, the description is not intended to be exhaustive or to limit the principles described or illustrated herein to any precise form. Many modifications and variations are possible in light of the above teaching. | 30,919 |
11861896 | DETAILED DESCRIPTION Autonomous navigation functions of a UAV conventionally rely upon various onboard sensors, which generate data based on the UAV and/or the environment in which the UAV is operating. The data is generally processed at the UAV to determine one or more aspects of functionality for the UAV, including, for example, how and where the UAV will be flown, whether to capture images and what to focus those images on, whether to follow a subject or a defined flight path, or the like. This processing typically accounts for various environmental and UAV constraints, such as locations of obstacles (e.g., objects) within the environment in which the UAV is operating, indications of whether those obstacles are stationary or mobile, speed capabilities of the UAV, and other external factors which operate against the UAV in-flight. One common source of sensor data used for UAV navigation is cameras onboard the UAV. For example, one or more cameras coupled to the UAV may continuously or otherwise periodically collect data used to generate images that, when processed by a vision-based navigation system of the UAV, instruct the autonomous navigation functions of the UAV. Conventionally, onboard cameras used for vision-based navigation have infrared filters to prevent infrared data from being collected or to otherwise limit the amount of infrared data that is collected. That is, infrared data may negatively affect the quality of images and therefore may interfere with image processing for autonomous navigation functionality. Accordingly, the filtering of infrared data from images may enhance such functionality and also result in higher quality images output to a connected device display for consumption by an operator of the UAV. However, such conventional vision-based navigation approaches which rely upon infrared filtering are not optimized for all flight situations and may thus in some cases inhibit autonomous navigation functionality of a UAV. One example of such a situation is where a UAV is being flown in an environment with low or no light, such as outside at nighttime or inside a room that is not illuminated. In such a situation, the UAV must rely upon lights onboard the UAV or lights external to the UAV. In some cases, an inability to accurately perceive the environment in which the UAV is located may force the operator of the UAV to disable obstacle avoidance for autonomous vision-based navigation and manually navigate the UAV. In other cases, it may result in a complete inability of the UAV to autonomously navigate the environment (e.g., flight, takeoff, and/or landing) or damage to the UAV, damage to other property in the environment, and/or injury to anyone nearby the UAV. Implementations of this disclosure address problems such as these using autonomous aerial navigation in low-light and no-light conditions. A UAV as disclosed herein is configured for vision-based navigation while in day mode or night mode and includes one or more onboard cameras which collect image data including infrared data. A learning model usable for depth estimation in an infrared domain as disclosed herein is trained using images simulated to include infrared data. When the UAV is determined to be in night mode, the UAV uses the learning model to perform obstacle avoidance for autonomous vision-based navigation. When the UAV is determined to be in day mode, images produced based on image data including infrared data are used for autonomous vision-based navigation.
In some cases, the images used for navigation while the UAV is in day mode may be filtered to remove infrared data therefrom, for example, using a software process or a physical mechanism. In some implementations, the UAV includes one or more blocking mechanisms for preventing or limiting glare otherwise resulting from the exposure of an onboard camera to light (e.g., infrared light) illuminated by a light source onboard the UAV. As used herein, night mode refers to an arrangement of configurations, settings, functions, and/or other aspects of a UAV based on low-light or no-light conditions of an environment in which the UAV is operating. Similarly, and also as used herein, day mode refers to an arrangement of configurations, settings, functions, and/or other aspects of a UAV based on light conditions of an environment in which the UAV is operating sufficient for typical vision-based navigation functionality. Whether a UAV is in night mode or day mode, and when to switch therebetween, is thus based on an amount of light within the environment of the UAV. For example, a UAV may be in night mode when there is insufficient light for navigation using the onboard cameras, and the UAV may otherwise be in day mode. However, in view of potential differences in operating capabilities of UAVs, manufacturing qualities of UAV components, and variations in amounts of light which may be present both in different locations and at different times, the quality of a condition being a low-light condition or a no-light condition may refer to conditions specific to a subject UAV rather than generic conditions that could potentially otherwise apply to multiple types or classes of UAV. To describe some implementations in greater detail, reference is first made to examples of hardware and software structures used to implement autonomous aerial navigation in low-light and no-light conditions.FIG.1is an illustration of an example of a UAV system100. The system100includes a UAV102, a controller104, a dock106, and a server108. The UAV102is a vehicle which may be controlled autonomously by one or more onboard processing aspects or remotely controlled by an operator, for example, using the controller104. The UAV102may be implemented as one of a number of types of unmanned vehicle configured for aerial operation. For example, the UAV102may be a vehicle commonly referred to as a drone, but may otherwise be an aircraft configured for flight without a human operator present therein. In particular, the UAV102may be a multi-rotor vehicle. For example, the UAV102may be lifted and propelled by four fixed-pitch rotors in which positional adjustments in-flight may be achieved by varying the angular velocity of each of those rotors. The controller104is a device configured to control at least some operations associated with the UAV102. The controller104may communicate with the UAV102via a wireless communications link (e.g., via a Wi-Fi network, a Bluetooth link, a ZigBee link, or another network or link) to receive video or images and/or to issue commands (e.g., take off, land, follow, manual controls, and/or commands related to conducting an autonomous or semi-autonomous navigation of the UAV102). The controller104may be or include a specialized device. Alternatively, the controller104may be or include a mobile device, for example, a smartphone, tablet, laptop, or other device capable of running software configured to communicate with and at least partially control the UAV102.
The dock106is a structure which may be used for takeoff and/or landing operations of the UAV102. In particular, the dock106may include one or more fiducials usable by the UAV102for autonomous takeoff and landing operations. For example, the fiducials may generally include markings which may be detected using one or more sensors of the UAV102to guide the UAV102from or to a specific position on or in the dock106. In some implementations, the dock106may further include components for controlling and/or otherwise providing the UAV102with flight patterns or flight pattern information and/or components for charging a battery of the UAV102while the UAV102is on or in the dock106. The server108is a remote computing device from which information usable for operation of the UAV102may be received and/or to which information obtained at the UAV102may be transmitted. For example, signals including information usable for updating aspects of the UAV102may be received from the server108. The server108may communicate with the UAV102over a network, for example, the Internet, a local area network, a wide area network, or another public or private network. Although not illustrated for simplicity, the server108may, alternatively or additionally, communicate with the dock106over the same or a different network, for example, the Internet, a local area network, a wide area network, or another public or private network. For example, the communication may include flight patterns or other flight pattern information. In some implementations, the system100may include one or more additional components not shown inFIG.1. In some implementations, one or more components shown inFIG.1may be omitted from the system100, for example, the server108. An example illustration of a UAV200, which may, for example, be the UAV102shown inFIG.1, is shown inFIGS.2A-C.FIG.2Ais an illustration of an example of the UAV200as seen from above. The UAV200includes a propulsion mechanism202including some number of propellers (e.g., four) and motors configured to spin the propellers. For example, the UAV200may be a quad-copter drone. The UAV200includes image sensors, including a high-resolution image sensor204. This image sensor204may, for example, be mounted on a gimbal to support steady, low-blur image capture and object tracking. The UAV200also includes image sensors206,208, and210that are spaced out around the top of the UAV200and covered by respective fisheye lenses to provide a wide field of view and support stereoscopic computer vision. The image sensors206,208, and210generally have a resolution which is lower than a resolution of the image sensor204. The UAV200also includes other internal hardware, for example, a processing apparatus (not shown). In some implementations, the processing apparatus is configured to automatically fold the propellers when entering a dock (e.g., the dock106shown inFIG.1), which may allow the dock to have a smaller footprint than the area swept out by the propellers of the propulsion mechanism202. FIG.2Bis an illustration of an example of the UAV200as seen from below. From this perspective, three more image sensors212,214, and216arranged on the bottom of the UAV200may be seen. These image sensors212,214, and216may also be covered by respective fisheye lenses to provide a generally wide field of view and support stereoscopic computer vision. The various image sensors of the UAV200may enable visual inertial odometry (VIO) for high resolution localization and obstacle detection and avoidance.
For example, the image sensors may be used to capture images including infrared data which may be processed for day or night mode navigation of the UAV200. The UAV200also includes a battery in battery pack220attached on the bottom of the UAV200, with conducting contacts218to enable battery charging. The bottom surface of the battery pack220may be a bottom surface of the UAV200. In some implementations, the UAV200may include one or more light blocking mechanisms for reducing or eliminating glare at an image sensor otherwise introduced by a light source.FIG.2Cis an illustration of an example of a portion of the UAV200including such a light blocking mechanism222. The light blocking mechanism222includes a number of protrusions (e.g., four) coupled to a portion of an arm of the UAV200. Openings224represent locations at which light sources may be coupled. The light sources may, for example, be infrared light emitting diode (LED) elements. In the example shown, two infrared LEDs may be coupled to the arm of the UAV200. In at least some cases, the infrared LEDs may be omnidirectional. Openings226represent locations at which cameras may be coupled. The cameras may, for example, be cameras configured to collect image data including infrared data. In at least some cases, the cameras may have fisheye lenses. Thus, the cameras which may be coupled to the arm of the UAV200within the openings226may be cameras which do not use or have infrared filtering. In operation, without the light blocking mechanism222, the light sources coupled to the openings224may shine directly into image sensors of the cameras coupled to the openings226. This direct shining may introduce glare negatively affecting both the ability of the cameras to be used for vision-based navigation functionality of the UAV200and the quality of images generated based on the data collected using the cameras. The protrusions of the light blocking mechanism222thus operate to block light from the light sources coupled to the openings224from interfering with the cameras coupled to the openings226, for example, by reducing or eliminating glare otherwise caused by the light sources directly reaching the image sensors of those cameras. In some implementations, a software infrared light filter may be used in addition to or in lieu of the light blocking mechanism222. FIG.3is an illustration of an example of a controller300for a UAV, which may, for example, be the UAV102shown inFIG.1. The controller300may, for example, be the controller104shown inFIG.1. The controller300may provide a user interface for controlling the UAV and reviewing data (e.g., images) received from the UAV. The controller300includes a touchscreen302, a left joystick304, and a right joystick306. In the example as shown, the touchscreen302is part of a mobile device308(e.g., a smartphone) that connects to a controller attachment310, which, in addition to providing additional control surfaces including the left joystick304and the right joystick306, may provide range extending communication capabilities for longer distance communication with the UAV.
The UAV may be configured for autonomous landing on the landing surface402. The landing surface402has a funnel geometry shaped to fit a bottom surface of the UAV at a base of the funnel. The tapered sides of the funnel may help to mechanically guide the bottom surface of the UAV into a centered position over the base of the funnel during a landing. For example, corners at the base of the funnel may serve to prevent the aerial vehicle from rotating on the landing surface402after the bottom surface of the aerial vehicle has settled into the base of the funnel shape of the landing surface402. For example, the fiducial404may include an asymmetric pattern that enables robust detection and determination of a pose (i.e., a position and an orientation) of the fiducial404relative to the UAV based on an image of the fiducial404, for example, captured with an image sensor of the UAV. The conducting contacts406are contacts of a battery charger on the landing surface402, positioned at the bottom of the funnel. The dock400includes a charger configured to charge a battery of the UAV while the UAV is on the landing surface402. For example, a battery pack of the UAV (e.g., the battery pack220shown inFIG.2) may be shaped to fit on the landing surface402at the bottom of the funnel shape. As the UAV makes its final approach to the landing surface402, the bottom of the battery pack will contact the landing surface and be mechanically guided by the tapered sides of the funnel to a centered location at the bottom of the funnel. When the landing is complete, the conducting contacts of the battery pack may come into contact with the conducting contacts406on the landing surface402, making electrical connections to enable charging of the battery of the UAV. The dock400may include a charger configured to charge the battery while the UAV is on the landing surface402. The box408is configured to enclose the landing surface402in a first arrangement and expose the landing surface402in a second arrangement. The dock400may be configured to transition from the first arrangement to the second arrangement automatically by performing steps including opening the door410of the box408and extending the retractable arm412to move the landing surface402from inside the box408to outside of the box408. The landing surface402is positioned at an end of the retractable arm412. When the retractable arm412is extended, the landing surface402is positioned away from the box408of the dock400, which may reduce or prevent propeller wash from the propellers of a UAV during a landing, thus simplifying the landing operation. The retractable arm412may include aerodynamic cowling for redirecting propeller wash to further mitigate the problems of propeller wash during landing. The retractable arm supports the landing surface402and enables the landing surface402to be positioned outside the box408, to facilitate takeoff and landing of a UAV, or inside the box408, for storage and/or servicing of a UAV. In some implementations, the dock400includes a second, auxiliary fiducial414on an outer surface of the box408. The root fiducial404and the auxiliary fiducial414may be detected and used for visual localization of the UAV in relation to the dock400to enable a precise landing on a small landing surface402. For example, the fiducial404may be a root fiducial, and the auxiliary fiducial414is larger than the root fiducial404to facilitate visual localization from farther distances as a UAV approaches the dock400.
For example, the area of the auxiliary fiducial414may be 25 times the area of the root fiducial404. For example, the auxiliary fiducial414may include an asymmetric pattern that enables robust detection and determination of a pose (i.e., a position and an orientation) of the auxiliary fiducial414relative to the UAV based on an image of the auxiliary fiducial414captured with an image sensor of the UAV. Although not illustrated, in some implementations, the dock400can include one or more network interfaces for communicating with remote systems over a network, for example, the Internet, a local area network, a wide area network, or another public or private network. The communication may include flight patterns or other flight pattern information. Additionally, the dock400can include one or more wireless interfaces for communicating with UAVs, for example, for controlling and/or otherwise providing the UAVs with flight patterns or flight pattern information. FIG.5is a block diagram of an example of a hardware configuration of a UAV500, which may, for example, be the UAV102shown inFIG.1. The UAV500includes a processing apparatus502, a data storage device504, a sensor interface506, a communications interface508, a propulsion control interface510, a user interface512, and an interconnect514through which the processing apparatus502may access the other components. The processing apparatus502is operable to execute instructions that have been stored in the data storage device504or elsewhere. The processing apparatus502is a processor with random access memory (RAM) for temporarily storing instructions read from the data storage device504or elsewhere while the instructions are being executed. The processing apparatus502may include a single processor or multiple processors each having single or multiple processing cores. Alternatively, the processing apparatus502may include another type of device, or multiple devices, capable of manipulating or processing data. The data storage device504is a non-volatile information storage device, for example, a solid-state drive, a read-only memory device (ROM), an optical disc, a magnetic disc, or another suitable type of storage device such as a non-transitory computer readable memory. The data storage device504may include another type of device, or multiple devices, capable of storing data for retrieval or processing by the processing apparatus502. The processing apparatus502may access and manipulate data stored in the data storage device504via the interconnect514, which may, for example, be a bus or a wired or wireless network (e.g., a vehicle area network). The sensor interface506is configured to control and/or receive data from one or more sensors of the UAV500. The data may refer, for example, to one or more of temperature measurements, pressure measurements, global positioning system (GPS) data, acceleration measurements, angular rate measurements, magnetic flux measurements, a visible spectrum image, an infrared image, an image including infrared data and visible spectrum data, and/or other sensor output. For example, the one or more sensors from which the data is generated may include single or multiple of one or more of an image sensor, an accelerometer, a gyroscope, a geolocation sensor, a barometer, and/or another sensor. In some implementations, the sensor interface506may implement a serial port protocol (e.g., I2C or SPI) for communications with one or more sensor devices over conductors.
In some implementations, the sensor interface506may include a wireless interface for communicating with one or more sensor groups via low-power, short-range communications techniques (e.g., using a vehicle area network protocol). The communications interface508facilitates communication with one or more other devices, for example, a paired dock (e.g., the dock106), a controller (e.g., the controller104), or another device, for example, a user computing device (e.g., a smartphone, tablet, or other device). The communications interface508may include a wireless interface and/or a wired interface. For example, the wireless interface may facilitate communication via a Wi-Fi network, a Bluetooth link, a ZigBee link, or another network or link. In another example, the wired interface may facilitate communication via a serial port (e.g., RS-232 or USB). The communications interface508further facilitates communication via a network, which may, for example, be the Internet, a local area network, a wide area network, or another public or private network. The propulsion control interface510is used by the processing apparatus to control a propulsion system of the UAV500(e.g., including one or more propellers driven by electric motors). For example, the propulsion control interface510may include circuitry for converting digital control signals from the processing apparatus502to analog control signals for actuators (e.g., electric motors driving respective propellers). In some implementations, the propulsion control interface510may implement a serial port protocol (e.g., I2C or SPI) for communications with the processing apparatus502. In some implementations, the propulsion control interface510may include a wireless interface for communicating with one or more motors via low-power, short-range communications (e.g., a vehicle area network protocol). The user interface512allows input and output of information from/to a user. In some implementations, the user interface512can include a display, which can be a liquid crystal display (LCD), a light emitting diode (LED) display (e.g., an OLED display), or another suitable display. In some such implementations, the user interface512may be or include a touchscreen. In some implementations, the user interface512may include one or more buttons. In some implementations, the user interface512may include a positional input device, such as a touchpad, touchscreen, or the like, or another suitable human or machine interface device. In some implementations, the UAV500may include one or more additional components not shown inFIG.5. In some implementations, one or more components shown inFIG.5may be omitted from the UAV500, for example, the user interface512. FIG.6is a block diagram of example software functionality of a UAV system, which may, for example, be the system100shown inFIG.1. In particular, the software functionality is represented as onboard software600running at a UAV, for example, the UAV102shown inFIG.1. The onboard software600includes a mode detection tool602, an autonomous navigation tool604, a model update tool606, and an image filtering tool608. The mode detection tool602configures the UAV for operation in either a day mode or a night mode. The mode detection tool602configures the UAV for day mode operation where a determination is made that an amount of light within the environment in which the UAV is located is sufficient for vision-based navigation of the UAV without use of light sources onboard the UAV. 
The determination as to whether the amount of light within the environment in which the UAV is located is sufficient for vision-based navigation may be based on one or more of a threshold defined for one or more cameras used for the vision-based navigation, an exposure setting for those one or more cameras, a measurement of light within the environment using another sensor onboard the UAV or another sensor the output of which is reportable to the UAV system, or the like. For example, determining whether to configure the UAV in a day mode configuration or the night mode configuration based on an amount of light within the environment in which the UAV is operating can include measuring an intensity of light within the environment in which the UAV is operating, and automatically configuring the UAV in one of a day mode configuration or a night mode configuration based on the intensity of light, wherein the UAV is automatically configured in the day mode configuration based on the intensity of light meeting a threshold or in the night mode configuration based on the intensity of light not meeting the threshold. The determination may be made prior to takeoff. Alternatively, the determination may be made after some or all takeoff operations have been performed. The mode detection tool602determines which of day mode or night mode applies at a given time so that configurations of that determined mode may be applied for the operation of the UAV. In particular, when a determination is made to use day mode configurations, onboard light sources (e.g., infrared LEDs) of the UAV may be temporarily disabled to prevent unnecessary or otherwise undesirable illumination. For example, temporarily and selectively disabling infrared LEDs may limit an amount of infrared light which is collected by the image sensors of the cameras used for the vision-based navigation of the UAV in day mode. Other configuration changes to the UAV may also be made as a result of switching from day mode to night mode or from night mode to day mode. The autonomous navigation tool604includes functionality for enabling autonomous flight of the UAV. Regardless of whether the UAV is in day mode or night mode, autonomous flight functionality of the UAV generally includes switching between the use of cameras for vision-based navigation and the use of a global navigation satellite system (GNSS) and an inertial measurement unit (IMU) onboard the UAV for position-based navigation. In particular, autonomous flight of the UAV may use position-based navigation where objects within an environment in which the UAV is operating are determined to be at least some distance away from the UAV, and autonomous flight of the UAV may instead use vision-based navigation where those objects are determined to be less than that distance away from the UAV. With position-based navigation, the UAV may receive a series of location signals through a GNSS receiver. The received GNSS signals may be indicative of locations of the UAV within a world frame of reference. The UAV may use the location signals from the GNSS receiver to determine a location and velocity of the UAV. The UAV may determine an acceleration signal and an orientation signal within a navigation frame of reference based on acceleration signals from one or more accelerometers and angular rate signals from one or more gyroscopes, such as those associated with the IMU onboard the UAV.
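A minimal sketch of the thresholding just described for the mode detection tool602follows; the threshold value, units, and function names are assumptions chosen for illustration and are not values defined by this disclosure.

DAY_MODE = "day"
NIGHT_MODE = "night"

def select_mode(measured_intensity: float, day_threshold: float = 10.0) -> str:
    # Day mode when the measured light intensity meets the threshold, night mode otherwise.
    return DAY_MODE if measured_intensity >= day_threshold else NIGHT_MODE

def apply_mode(uav, mode: str) -> None:
    if mode == DAY_MODE:
        uav.infrared_leds.disable()        # avoid unnecessary or undesirable illumination
        uav.set_infrared_filtering(True)   # filter infrared data out of images used for navigation
    else:
        uav.infrared_leds.enable()         # illuminate the environment for the onboard cameras
        uav.set_infrared_filtering(False)  # keep infrared data for depth estimation in the infrared domain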
With vision-based navigation, one or more onboard cameras of the UAV may continuously or otherwise periodically collect data usable to generate images. The images may be processed in real-time or substantially in real-time to identify objects within the environment in which the UAV is operated and to determine a relative position of the UAV with respect to those objects. Depth estimation may be performed to determine the relative position of the UAV with respect to an object. Performing depth estimation includes modeling depth values for various pixels of the images generated based on the data collected using the onboard cameras. A depth value may, for example, be modeled according to RGB inputs collected for a subject pixel. Based on the depth estimation values and output from the onboard IMU, the trajectory of the UAV toward a detected object may be evaluated to enable the UAV to avoid object collision. The manner by which autonomous flight functionality is achieved using vision-based navigation or position-based navigation depends upon whether the UAV is in day mode or night mode. As described above with respect toFIG.2C, the UAV may include one or more cameras which do not have or use infrared filters. These onboard cameras thus collect image data which includes infrared data. However, as has been noted, infrared data can obscure the ultimate look of an image and thus may interfere with conventional image processing for vision-based navigation. Thus, when the UAV is in day mode, infrared data may be filtered out of the images used for vision-based navigation, for example, as described below with respect to the image filtering tool608. The infrared filtered images may then be processed using RGB-based depth estimation as described above. When the UAV is in night mode, and thus while infrared LEDs onboard the UAV are used to illuminate the environment in which the UAV is operating, the cameras will collect infrared data and a different technique for depth estimation in the infrared domain is used. In particular, in night mode, the autonomous navigation tool604uses intelligence for low-light and no-light depth estimation within the infrared domain. The intelligence may be an algorithm, a learning model, or other aspect configured to take in some input in the form of image data including infrared data and generate some output usable by or for the vision-based navigation functionality of the UAV. It is further noted that, due to the limited range of infrared LEDs, illumination reflections received by the onboard cameras of the UAV based on infrared light may result in the vision-based navigation functionality of the UAV being less reliable at some ranges than if that functionality otherwise used non-infrared light. Thus, in night mode, the distance representing the threshold at which vision-based navigation is used may be less than the distance used in day mode. The model update tool606includes functionality related to the updating of a learning model as the intelligence used by the autonomous navigation tool604for vision-based navigation of the UAV using infrared data in night mode. The model update tool606maintains a copy of the learning model at the UAV and applies updates to the learning model based on changes made at a server at which the learning model is trained. For example, the model update tool606may receive updates to the learning model from the server, such as over a network.
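The switching between position-based and vision-based navigation, with a shorter switchover distance in night mode, can be sketched as follows; the specific distances are placeholders invented for illustration and are not part of this disclosure.

def choose_navigation_source(nearest_obstacle_m: float, mode: str) -> str:
    # Pick vision-based or position-based (GNSS/IMU) navigation for the current control cycle.
    # Night mode assumes a shorter switchover range because infrared LED illumination limits
    # how far the cameras can reliably see.
    switchover_m = 20.0 if mode == "day" else 8.0
    return "vision" if nearest_obstacle_m < switchover_m else "position"

# Example: an obstacle 10 m away triggers vision-based navigation in day mode,
# but position-based navigation in night mode under these assumed distances.
print(choose_navigation_source(10.0, "day"))    # vision
print(choose_navigation_source(10.0, "night"))  # position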
In some implementations, the model update tool606may further select, determine, or identify one or more images captured by the onboard cameras of the UAV to use for training the learning model. For example, the images may be images captured without infrared data or from which infrared data has been filtered out. The learning model may be or include one or more of a neural network (e.g., a convolutional neural network, recurrent neural network, or other neural network), decision tree, support vector machine, Bayesian network, genetic algorithm, deep learning system separate from a neural network, or other learning model. The learning model applies intelligence to identify complex patterns in the input and to leverage those patterns to produce output and refine systemic understanding of how to process the input to produce the output. In implementations where the intelligence is an algorithm or other aspect, the model update tool606uses functionality as described above for updating the algorithm or other aspect. The image filtering tool608filters images generated using collected image data which includes infrared data to remove the infrared data therefrom. Because night mode operation of the UAV includes the use of infrared data, the image filtering tool608may include or otherwise refer to functionality performed for images generated while the UAV is in day mode. Thus, when the UAV is in day mode and an image is generated using image data collected from one or more onboard cameras of the UAV, that image data is processed using the image filtering tool608to prepare the image data for use in vision-based navigation for the UAV. Filtering the image data to remove infrared data therefrom includes modifying the appearance of the image data, which may have a somewhat pinker tonal appearance than image data collected using a camera which has or uses an infrared filter, to reduce or eliminate those pink tones. Those pink tones skew the perceptible quality of images and thus may negatively impact the functionality of day mode vision-based navigation and/or the overall appearance and quality of output presented to the operator of the UAV. The filter applied by the image filtering tool608may be modeled based on software infrared filters which may be used for cameras. Alternatively, the filter applied by the image filtering tool608may be modeled using a learning model or other intelligence trained for infrared data removal. In some implementations, the image filtering tool608may be omitted. For example, the UAV may include both cameras which have or use infrared filters and cameras which do not have or use infrared filters. A camera which has or uses an infrared filter may use a software process for infrared filtering, a mechanical component for infrared filtering, or both. The cameras which have or use the infrared filters may be used for vision-based navigation of the UAV while the UAV is in day mode, and the cameras which do not have or use infrared filters may be used for vision-based navigation of the UAV while the UAV is in night mode. In another example, the autonomous navigation tool604and other aspects disclosed herein may operate against images that include both visible and infrared light. FIG.7is a block diagram of an example of UAV navigation using night mode obstacle avoidance intelligence. At least some of the operations shown and described with respect toFIG.7may, for example, be performed by or using the autonomous navigation tool604shown inFIG.6. 
Input700representing input which can be collected by a camera of a UAV, for example, the UAV102shown inFIG.1, is collected and processed using an image processing tool702to produce an image704. The input700may, for example, include image data including infrared data measured using an image sensor of an onboard camera of the UAV. The image processing tool702represents software usable to produce the image704from the input700. The image704is produced based on the infrared data of the input700and thus includes infrared aspects. However, in some implementations, the image704may be produced based on data other than infrared data. For example, the input700may include data measured from visible light and/or another form of light other than infrared light. The image704is provided to an obstacle avoidance tool706, which uses a learning model708trained for depth estimation for night mode images to detect objects within the image704. The training of the learning model708is described below with respect toFIG.8. The learning model708takes the image704as input and indicates a detection of a number of objects within the image704as the output. Where objects are detected, the obstacle avoidance tool706uses the indication output by the learning model708to determine a flight operation to prevent a collision by the UAV with the detected obstacle. The flight operation includes or refers to a maneuver for the UAV which changes a current path of the UAV to prevent the UAV from colliding with the detected object. In some implementations, other intelligence may be used in place of the learning model708. For example, the obstacle avoidance tool706may use an algorithm or other intelligence aspect configured to take the image704as input and indicate a detection of a number of objects within the image704as the output. The obstacle avoidance tool706outputs a control signal710including a command configured to cause the flight operation for preventing the collision by the UAV with the detected obstacle. The control signal710is received and processed by a propulsion control tool712of the UAV. The propulsion control tool712is configured to interface with one or more components associated with a propulsion system of the UAV to implement the flight operation associated with the control signal710. The output of the propulsion control tool712is a flight operation714performed or performable by the UAV. FIG.8is a block diagram of an example of a learning model800trained for night mode obstacle avoidance. The learning model800may, for example, be the learning model708used for night mode obstacle avoidance intelligence. The learning model800is trained using training samples802produced by processing input image data804. The input image data804are images generated based on image data collected by a camera having or using an infrared filter or otherwise after infrared data has been removed therefrom. The training samples802are images resulting from the processing of the input image data804to represent the images of the input image data804as if they had been generated in night mode without the use of infrared filtering. The training of the learning model800thus is to prepare an intelligence for vision-based navigation of the UAV in night mode. The learning model800may be trained at a server of a UAV system, for example, the server108shown inFIG.1. To produce the training samples802from the input image data804, a first copy of the input image data804is first processed using an infrared reflection mask simulation tool806. 
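Before the training discussion continues, the FIG.7 runtime flow described above (image, depth estimation, obstacle check, control signal, propulsion) can be summarized in a short sketch. The depth_model and propulsion objects stand in for the learning model708and the propulsion control tool712and are assumed interfaces, as are the clearance distance and the crude left/right steering rule.

    import numpy as np

    def navigate_step(raw_frame: np.ndarray, depth_model, propulsion,
                      min_clearance_m: float = 2.0) -> None:
        """One iteration of the FIG.7 pipeline: image -> depth/objects ->
        avoidance command -> propulsion control (a schematic sketch)."""
        # Assumed interface: the model returns a per-pixel depth map in meters.
        depth_map = depth_model.predict(raw_frame)

        # Treat any region closer than the clearance distance as an obstacle.
        obstacle_mask = depth_map < min_clearance_m
        if obstacle_mask.any():
            h, w = depth_map.shape
            left_blocked = int(obstacle_mask[:, : w // 2].sum())
            right_blocked = int(obstacle_mask[:, w // 2:].sum())
            # Crude stand-in for the avoidance planner: turn away from the
            # more heavily blocked half of the image.
            command = "yaw_right" if left_blocked >= right_blocked else "yaw_left"
        else:
            command = "hold_course"

        # Control signal handed to the propulsion control tool (assumed API).
        propulsion.execute(command)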
The infrared reflection mask simulation tool806simulates a reflection of infrared data from onboard infrared LEDs of the UAV to understand how that reflection could have interacted with exposure features of the camera or cameras which collected the input image data804. The output of the infrared reflection mask simulation tool806may thus be a determination of a range of the simulated infrared illumination within the environment depicted by the input image data804. At the same time as the infrared reflection mask simulation tool806is processing the first copy of the input image data804, or before or after such processing, a second copy of the input image data804is processed by a range-based darkening tool808. The range-based darkening tool808darkens RGB values within parts of the input image data804. The parts to darken are determined based on an expected range of infrared illumination. Thus, parts of the input image data804which are within the determined range (e.g., from the point of origin, being the camera of the UAV) are not processed by the range-based darkening tool808, and the remaining parts of the input image data804(e.g., parts beyond the determined range) are darkened. Darkening those parts may include applying a darkening filter to darken RGB values of pixels, remove brightness, and/or otherwise darken the respective input image data804. The output of the infrared reflection mask simulation tool806and the output of the range-based darkening tool808are then received as input to an image blending tool810. The image blending tool810blends those outputs, which are images modified either by an infrared reflection mask or by darkening, to produce a blended image which includes both the infrared reflection mask adjustment values and the darkened values. Blending the output of the infrared reflection mask simulation tool and the output of the range-based darkening tool808may include combining a first image representing the output of the infrared reflection mask simulation tool806and a second image representing the output of the range-based darkening tool808. The output of the image blending tool810is then received at and processed by a noise augmentation tool812. The noise augmentation tool812introduces camera noises to the image produced by the image blending tool810to cause the image to appear as if it had been produced using a camera. The camera noises include artifacts typically introduced by the image capture process using a camera, for example, based on light exposure and other factors. The training samples802are the output of the noise augmentation tool812. As a result of the processing performed by the infrared reflection mask simulation tool806, the range-based darkening tool808, the image blending tool810, and the noise augmentation tool812, the training samples802represent image data simulated to include infrared data and which may have effectively been collected at a UAV during night mode. The training samples802are then used to train the learning model800for depth estimation. The learning model800, once trained, or after updates, may be transmitted to a UAV for use in automated navigation, for example, using the model update tool606shown inFIG.6. FIG.9is a block diagram of an example of UAV navigation in day mode by filtering infrared data from images. At least some of the operations shown and described with respect toFIG.9may, for example, be performed by or using the image filtering tool608shown inFIG.6. 
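The FIG.8 synthesis chain described above (infrared reflection mask simulation, range-based darkening, blending, and noise augmentation) might be sketched as follows. The inverse-square falloff model, the assumed illumination range, the blend weights, and the Gaussian noise level are illustrative assumptions; the disclosure does not prescribe particular formulas.

    import numpy as np

    def synthesize_night_sample(rgb: np.ndarray, depth_m: np.ndarray,
                                ir_range_m: float = 8.0,
                                noise_sigma: float = 0.02) -> np.ndarray:
        """Turn a daylight image (HxWx3 float in [0, 1]) plus a depth map (HxW,
        meters) into a simulated night-mode training sample."""
        # (1) Infrared reflection mask: approximate LED illumination falloff
        #     with distance (inverse-square, normalized) as one possible model.
        falloff = 1.0 / np.maximum(depth_m, 0.5) ** 2
        ir_mask = np.clip(falloff / falloff.max(), 0.0, 1.0)[..., None]
        masked = rgb * ir_mask

        # (2) Range-based darkening: pixels beyond the assumed illumination
        #     range are darkened; pixels within range are left unchanged.
        beyond = (depth_m > ir_range_m)[..., None]
        darkened = np.where(beyond, rgb * 0.05, rgb)

        # (3) Blend the two intermediate images.
        blended = 0.5 * masked + 0.5 * darkened

        # (4) Noise augmentation: add camera-like sensor noise.
        noisy = blended + np.random.normal(0.0, noise_sigma, blended.shape)
        return np.clip(noisy, 0.0, 1.0)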
An image900is produced by image processing functionality of a UAV (e.g., the image processing tool702shown inFIG.7) based on image data including infrared data collected by a camera of the UAV. The image900may, for example, be the image704shown inFIG.7and thus includes infrared data. A day mode check tool902checks whether the UAV is operating in day mode or night mode. Where the day mode check tool902determines that the UAV is operating in night mode, the remaining operations shown and described with respect toFIG.9are bypassed and the image900is further processed for autonomous vision-based navigation without filtering. Where the day mode check tool902determines that the UAV is operating in day mode, an image filtering tool904performs filtering against the image900based on calibrations906to produce a filtered image908. The filtering performed by the image filtering tool904reduces or otherwise entirely removes infrared data from the image900. Thus, the filtered image908represents the image900with less or otherwise without infrared data. The image filtering tool904may, for example, apply a filter for removing pink tones within the image900resulting from the collection of infrared data and use of same to produce the image900. The calibrations include or refer to settings used for the filtering performed by the image filtering tool904. In some implementations, the calibrations may be defined based on one or more configurations of the camera used to collect the image data processed to produce the image900. The filtered image908is thereafter used as input to an obstacle avoidance tool910, which processes the filtered image to detect a number of objects within an environment in which the UAV is operating. Autonomous vision-based navigation in day mode is then facilitated based on the output of the obstacle avoidance tool910. To further describe some implementations in greater detail, reference is next made to examples of techniques for autonomous aerial navigation in low-light and no-light conditions, for example, as described with respect toFIGS.1-9.FIG.10is a flowchart of an example of a technique1000for night mode obstacle avoidance using a learning model trained using infrared data.FIG.11is a flowchart of an example of a technique1100for training a learning model by synthetic generation and simulation of infrared data.FIG.12is a flowchart of an example of a technique1200for filtering infrared data from images processed during day mode operations of a UAV. The techniques1000,1100, and/or1200can be executed using computing devices, such as the systems, hardware, and software described with respect toFIGS.1-9. The techniques1000,1100, and/or1200can be performed, for example, by executing a machine-readable program or other computer-executable instructions, such as routines, instructions, programs, or other code. The steps, or operations, of the techniques1000,1100, and/or1200or another technique, method, process, or algorithm described in connection with the implementations disclosed herein can be implemented directly in hardware, firmware, software executed by hardware, circuitry, or a combination thereof. For simplicity of explanation, the techniques1000,1100, and1200are each depicted and described herein as a series of steps or operations. However, the steps or operations in accordance with this disclosure can occur in various orders and/or concurrently. Additionally, other steps or operations not presented and described herein may be used. 
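Returning to the FIG.9 flow, a minimal sketch of the day-mode gate and the pink-cast suppression described above is given below. Representing the calibrations906as simple per-channel gains is an assumption made only for illustration.

    import numpy as np

    def filter_for_day_mode(image: np.ndarray, mode: str,
                            calibration_gains=(0.92, 1.0, 0.92)) -> np.ndarray:
        """Mirror the FIG.9 flow: bypass filtering in night mode, otherwise
        attenuate the pink cast that unfiltered infrared adds to RGB data.

        image is assumed to be float RGB in [0, 1]; the per-channel gains are
        an assumed stand-in for camera-specific calibrations."""
        if mode != "day":
            return image  # night mode: the infrared content is wanted downstream
        gains = np.asarray(calibration_gains, dtype=image.dtype)
        filtered = image * gains  # attenuate red/blue relative to green
        return np.clip(filtered, 0.0, 1.0)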
Furthermore, not all illustrated steps or operations may be required to implement a technique in accordance with the disclosed subject matter. Referring first toFIG.10, the flowchart of the example of the technique1000for night mode obstacle avoidance using a learning model trained using infrared data is shown. At1002, a UAV is detected to be in a night mode configuration based on an amount of light within an environment in which the UAV is operating. At1004, an onboard light source of the UAV is caused to emit an infrared light based on the night mode configuration of the UAV. At1006, an image is produced from image data collected using an onboard camera of the UAV while the onboard light source emits the infrared light, in which the image data includes infrared data. At1008, an object is detected within the environment in which the UAV is operating by processing the image using a learning model trained for depth estimation of infrared images. At1010, a flight operation for the UAV to perform to avoid a collision with the object is determined. At1012, the UAV is caused to perform the flight operation. In some implementations, the technique1000may be performed to cause a performance of a flight operation based on light other than infrared light emitted from an onboard light source of the UAV. For example, an onboard light source of the UAV may be equipped or otherwise configured to emit visible light and/or another form of light other than infrared light. In such a case, an image may be produced from image data collected using the onboard camera of the UAV while the onboard light source of the UAV emits that visible light and/or other form of light, an object may be detected within the environment in which the UAV is operating based on the image, and the flight operation to be performed to avoid a collision with that object may be determined. Referring next toFIG.11, the flowchart of the example of the technique1100for training a learning model by synthetic generation and simulation of infrared data is shown. At1102, input image data is received or accessed. For example, the input image data may be received from a UAV including a camera used to collect the input image data. In another example, the input image data may be accessed from a memory which stores the input image data. At1104, infrared reflection mask simulation is performed against a first copy of input image data to produce a first image including infrared data. At1106, range-based darkening is performed against a second copy of the input image data to produce a second image including darkened RGB color data. At1108, the first image and the second image are combined to produce a combined image including the infrared data and the darkened RGB color data. At1110, camera noise is introduced within the combined image to produce training data. At1112, the learning model is trained using the training data. Referring finally toFIG.12, the flowchart of the example of the technique1200for filtering infrared data from images processed during day mode operations of a UAV is shown. At1202, an image is produced from image data collected using an onboard camera of a UAV, wherein the image data includes infrared data. At1204, the UAV is detected to be in a day mode configuration based on an amount of light within an environment in which the unmanned aerial vehicle is operating. 
At1206, at least some of the infrared data is removed from the image based on the day mode configuration and calibrations associated with the onboard camera to produce a filtered image. At1208, an object is detected within the environment in which the UAV is operating based on the filtered image. At1210, a flight operation for the UAV to perform to avoid a collision with the object is determined. At1212, the UAV is caused to perform the flight operation. The implementations of this disclosure can be described in terms of functional block components and various processing operations. Such functional block components can be realized by a number of hardware or software components that perform the specified functions. For example, the disclosed implementations can employ various integrated circuit components (e.g., memory elements, processing elements, logic elements, look-up tables, and the like), which can carry out a variety of functions under the control of one or more microprocessors or other control devices. Similarly, where the elements of the disclosed implementations are implemented using software programming or software elements, the systems and techniques can be implemented with a programming or scripting language, such as C, C++, Java, JavaScript, assembler, or the like, with the various algorithms being implemented with a combination of data structures, objects, processes, routines, or other programming elements. Functional aspects can be implemented in algorithms that execute on one or more processors. Furthermore, the implementations of the systems and techniques disclosed herein could employ a number of conventional techniques for electronics configuration, signal processing or control, data processing, and the like. The words “mechanism” and “component” are used broadly and are not limited to mechanical or physical implementations, but can include software routines in conjunction with processors, etc. Likewise, the terms “system” or “tool” as used herein and in the figures, but in any event based on their context, may be understood as corresponding to a functional unit implemented using software, hardware (e.g., an integrated circuit, such as an ASIC), or a combination of software and hardware. In certain contexts, such systems or mechanisms may be understood to be a processor-implemented software system or processor-implemented software mechanism that is part of or callable by an executable program, which may itself be wholly or partly composed of such linked systems or mechanisms. Implementations or portions of implementations of the above disclosure can take the form of a computer program product accessible from, for example, a computer-usable or computer-readable medium. A computer-usable or computer-readable medium can be a device that can, for example, tangibly contain, store, communicate, or transport a program or data structure for use by or in connection with a processor. The medium can be, for example, an electronic, magnetic, optical, electromagnetic, or semiconductor device. Other suitable mediums are also available. Such computer-usable or computer-readable media can be referred to as non-transitory memory or media, and can include volatile memory or non-volatile memory that can change over time. 
A memory of an apparatus described herein, unless otherwise specified, does not have to be physically contained by the apparatus, but is one that can be accessed remotely by the apparatus, and does not have to be contiguous with other memory that might be physically contained by the apparatus. While the disclosure has been described in connection with certain implementations, it is to be understood that the disclosure is not to be limited to the disclosed implementations but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures as is permitted under the law. | 53,009 |
11861897 | In the figures:1. End cover;2. Dark box body;3. Spatial frequency domain imaging apparatus;4. Telescopic section;5. Opening-and-closing apparatus;201. Cylindrical dark box shell;202. Threaded hole;203. Annular boss;204: Cooling fan;205. Observation port cover;301. Light source module;302. Collimating lens;303. Dichroic mirror;304. Digital micromirror apparatus control board;305. Achromatic lens;306. First linear polarizer;307. Second linear polarizer;308. Camera;309. Reflector;310. Light source module controller;311. Fixed ear plate;312. Digital micromirror apparatus;313. Laser diode;314. TEC refrigeration sheet;315. Heat insulation ring;316. Radiator;317. Square box housing;501. Sector skeleton;502. Middle section;503. Support bracket;504. Light-shielding cloth;505. Opening-and-closing skeleton. DETAILED DESCRIPTION OF THE EMBODIMENTS The following further describes the present disclosure with reference to the accompanying drawings and specific embodiments, but the protection scope of the present disclosure is not limited thereto. The following clearly and completely describes the technical solutions of the present disclosure with reference to the accompanying drawings. Apparently, the described embodiments are merely some rather than all of the embodiments of the present disclosure. Based on the embodiments of the present disclosure, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present disclosure. As shown inFIG.1andFIG.2, a portable apparatus for detecting early crop diseases based on spatial frequency domain imaging of the present disclosure includes the end cover1, the dark box body2, the spatial frequency domain imaging apparatus3, two telescopic sections4, and two opening-and-closing apparatuses5. The spatial frequency domain imaging apparatus3is mounted on the annular boss203inside the dark box body2. The end cover1is mounted on top of the dark box body2. The telescopic section4is mounted at the bottom of the dark box body2. The opening-and-closing apparatus5is mounted at the bottom of the telescopic section4. In this embodiment, one telescopic section4and one opening-and-closing apparatus5are continuously mounted at the bottom of the opening-and-closing apparatus5in the same order, or the telescopic section4and the opening-and-closing apparatus5are continuously added based on an actual requirement. The telescopic section4is used for crops with different heights. As shown inFIG.3,FIG.4, andFIG.5, the spatial frequency domain imaging apparatus3includes the square box housing317, three light source modules301, three collimating lenses302, two dichroic mirrors303, the digital micromirror apparatus control board304, the achromatic lens305, the first linear polarizer306, the second linear polarizer307, the camera308, the reflector309, the light source module controller310, fixed ear plates311, and the digital micromirror apparatus312. The three light source modules301are connected to the light source module controller310, and both the digital micromirror apparatus312and the camera308are connected to a computer. Two light source modules301are mounted on one side of the square box housing317, and the collimating lenses302, the dichroic mirrors303, the digital micromirror apparatus312, the achromatic lens305, and the reflector309are all mounted on a support bracket inside the square box housing317. Two collimating lenses302are respectively opposite to the two light source modules301. 
Centers of the two dichroic mirrors303are located on axes of the two collimating lenses302. The third light source module301is mounted on another side of the square box housing317which is adjacent to the two light source modules301(to be specific, light of the third light source module301is perpendicular to light of the two light source modules301). The third collimating lens302is mounted on an inner side of the third light source module301. The third collimating lens302is opposite to the third light source module301, and an axis of the third collimating lens302runs through centers of the two dichroic mirrors303. A center of the digital micromirror apparatus312is located on a straight line running through the centers of the two dichroic mirrors303. The center of the digital micromirror apparatus312is further located on an axis of the achromatic lens305. A center of the reflector309is located on the axis of the achromatic lens305, and the center of the reflector309is further located on an axis of the first linear polarizer306. The first linear polarizer306is mounted in a hole at the bottom of the square box housing317. The light source module controller310is mounted on the square box housing317. The digital micromirror apparatus control board304is fixed to the bottom of the square box housing317through a threaded connection. The second linear polarizer307is mounted in front of a lens of the camera308. The camera308is vertically mounted at the bottom of the square box housing317, and a lens extends out of a reserved hole of the square box housing317and shoots downward. As shown inFIG.6, the light source module301is controlled by the light source module controller310to emit light, the light passes through the collimating lens302and the dichroic mirror303onto the digital micromirror apparatus312; the digital micromirror apparatus312is controlled by the computer to reflect structured light with a sine stripe pattern; the structured light passes through the achromatic lens305; after being reflected by the reflector309, the structured light passes downward through the first linear polarizer306onto a crop to be detected; the reflected light on a surface of the crop to be detected passes through the second linear polarizer307, and is received by the camera308; and the camera308transmits acquired image data to the computer for subsequent processing. A polarization angle of the first linear polarizer306and the second linear polarizer307is 90 degrees, to weaken specular reflection on the surface of the sample to be detected, and improve an image demodulation effect. Mounting positions of the three light source modules301are reserved on the square box housing317and the light source modules301with three different wavelengths can be mounted. Under control of the light source module controller310, an image of a sample tissue illuminated by structured light with different wavelengths or a mixture of several wavelengths may be collected to determine a wavelength with optimal performance for the sample. Image switching of the digital micromirror apparatus312is controlled by the computer to be in a same period as acquisition of the camera308. As shown inFIG.7, the light source module301includes the laser diode313, the TEC refrigeration sheet314, the heat insulation ring315, and the radiator316. The laser diode313is pasted on a heat absorption surface of the TEC refrigeration sheet314by using a heat conductive adhesive. 
An annular heat insulation ring315is sleeved outside the two for alleviating impact of heat dissipated by the laser diode313on remaining devices. The heat insulation ring315and a heat dissipation surface of the TEC refrigeration sheet314are mounted on the radiator316. Four corners of the radiator316are provided with holes, for mounting the light source module301on the square box housing317. The TEC performs refrigeration by using a Peltier effect of a semiconductor. When a direct current power supply passes through a couple consisting of two semiconductor materials, a phenomenon that one end absorbs heat and the other end dissipates heat occurs. Heat of the TEC refrigeration sheet314can be transferred from one side to the other side by using this phenomenon, to reduce a temperature of the laser diode313, thereby prolonging its service life and ensuring light source stability. As shown inFIG.8, the dark box body2includes the dark box shell201, threaded holes202, the annular boss203, the cooling fan204, and the observation port cover205. The end cover1is mounted on a top end of the dark box body2. The annular boss203is located on an upper end inside the dark box shell201. The annular boss203is provided with three mounting threaded holes202corresponding to the fixed ear plates311that are integrated in the spatial frequency imaging domain apparatus3, to ensure that the spatial frequency domain imaging apparatus3is mounted at a fixed position. The dark box shell201is provided with a square through hole close to the light source module301, for mounting the cooling fan204to cool the light source module. An observation port is provided on a lower part of the dark box body2for manual observation of a height for capturing images of the crop. The observation port cover205is connected to the dark box shell201through a rotating shaft. When a suitable height for capturing images is determined, the observation port can be closed to form a darkroom environment. The bottom of the cylindrical dark box body2is connected to the telescopic section4. The telescopic section is made of a flexible material, and the height for capturing images can be changed within a particular range. As shown inFIG.9, the opening-and-closing apparatus5includes sector skeletons501, middle sections502, support brackets503, the light-shielding cloth504, and opening-and-closing skeletons505. The middle section502is annular, and is located at a center of the opening-and-closing apparatus5. The middle section502is made of an elastic rubber material. The middle section502is internally connected to the light-shielding cloth504. The middle section502is externally connected to the sector skeleton501. The light-shielding cloth504is laid between sector skeletons501. A part of the sector skeleton501is a sealed sector structure and is used as the support bracket503. The support bracket503is used to connect to another component, and the middle section502is embedded into the support bracket503. Each of two ends of the middle section502is provided with one long opening-and-closing skeleton505, for manually opening and closing an entire sector. The internal light-shielding cloth504is provided with an opening at the opening-and-closing skeleton505. When the two opening-and-closing skeletons505are closed, the internal light-shielding cloth504can adhere to a stem of the crop tightly, to prevent light from passing through. 
A guide rail for movement of the opening-and-closing skeleton505is provided inside an outer circumference of the sector skeleton501. A manner of connection between the opening-and-closing apparatus5, the telescopic section4, and the cylindrical dark box body2, and a manner of wiring between the light source module301, the digital micromirror apparatus312, the camera308, and the cooling fan204and the outside are not drawn in the drawings of the present disclosure. A detection method for early crop diseases based on spatial frequency domain imaging includes the following steps: the opening-and-closing skeleton505is rotated to open the light-shielding cloth504, and a crop to be detected is input into the dark box body2from the bottom; as shown inFIG.10, the observation port cover205is opened to observe a shooting distance of the crop sample to be detected, and the shooting distance of the crop sample is adjusted to a suitable distance before the observation port cover205is closed; as shown inFIG.11, all the opening-and-closing apparatuses5are closed, a suitable projection optical wavelength is selected by using the light source module controller310, then the spatial frequency domain imaging apparatus3is controlled by the computer to project structured light of sine grey scale patterns with different spatial frequencies (each spatial frequency adopts three phases: 0, 2 π/3, and 4 π/3) to the crop sample to be detected; after the sine gray pattern is switched each time, the camera308is controlled to acquire a diffuse reflection image of a surface of the crop sample once; after all the diffuse reflection images are captured, all the opening-and-closing apparatuses5are opened and the crop sample is replaced with a reference whiteboard whose height is the same as the crop sample, and the foregoing operations are repeated; uniformity correction is performed on the diffuse reflection image, the image is demodulated, and an alternating current component is extracted; and the alternating current component image is input to a trained disease detection model, and whether the crop to be detected has a disease is determined. Further, specific steps of the image demodulation are as follows. First, uniformity correction is performed by using the following formula: R′ = (I − I_dark) / (I_white − I_dark). In the formula, R′ is an image diffuse reflection intensity after correction, I_dark is a dark field image intensity, and I_white is a reference whiteboard image intensity under illumination of planar light. Next, demodulation is performed by using the following formula, to obtain a diffuse reflection amplitude envelope curve of the sample: M_AC(x, f_x) = (√2/3)·√[(I_1 − I_2)² + (I_2 − I_3)² + (I_3 − I_1)²]. In the formula, I_1, I_2, and I_3 are respectively reflection intensities of pixels of the diffuse reflection image of the sample to be detected in the three phases of each spatial frequency. Finally, the alternating current component of the image is calculated by using the following formula: I_AC(x, f_x) = M_AC(x, f_x)·cos(2πf_x·x + α). In the formula, f_x is a spatial frequency of a light source, and α is a spatial phase of the light source. 
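A worked sketch of the correction and demodulation just described, for one spatial frequency, is shown below using NumPy; the array names are illustrative, and the images are assumed to be floating-point arrays of matching size.

    import numpy as np

    def uniformity_correct(I, I_dark, I_white):
        """R' = (I - I_dark) / (I_white - I_dark), applied per pixel."""
        return (I - I_dark) / np.clip(I_white - I_dark, 1e-6, None)

    def demodulate_ac(I1, I2, I3):
        """Three-phase demodulation (phases 0, 2*pi/3, 4*pi/3):
        M_AC = (sqrt(2)/3) * sqrt((I1-I2)**2 + (I2-I3)**2 + (I3-I1)**2)."""
        return (np.sqrt(2.0) / 3.0) * np.sqrt(
            (I1 - I2) ** 2 + (I2 - I3) ** 2 + (I3 - I1) ** 2)

    # Example for one spatial frequency: correct each phase image against the
    # dark-field and reference-whiteboard images, then extract the AC amplitude.
    # I_phase0, I_phase1, I_phase2, I_dark, I_white are HxW arrays (not shown here).
    # corrected = [uniformity_correct(I, I_dark, I_white)
    #              for I in (I_phase0, I_phase1, I_phase2)]
    # M_ac = demodulate_ac(*corrected)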
Further, specific steps of obtaining the trained disease detection model are as follows: (1) features of the alternating current component of the diffuse reflection image are extracted by using a SURF algorithm; (2) the extracted features are clustered by using a K-means algorithm; (3) a bag of words is constructed, all the features of the alternating current component image are classified into different categories, and then statistics are collected on a frequency of each category of features; and (4) the bag of words of each picture is used as a feature vector, a category of the picture is used as a label, and training is performed by using an SVM to obtain the disease detection model. The embodiments are preferred embodiments of the present disclosure, but the present disclosure is not limited to the foregoing implementations. Without departing from the essential content of the present disclosure, any obvious improvement, replacement, or modification that can be made by a person skilled in the art belongs to the protection scope of the present disclosure. | 14,649 
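A compact sketch of the bag-of-words classifier training described in steps (1) to (4) above is given below. ORB is substituted for SURF because SURF is only available in the non-free OpenCV contrib build, and the vocabulary size and SVM settings are assumptions rather than values from the disclosure.

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def train_disease_detector(images, labels, vocab_size=100):
        """images: list of 8-bit grayscale AC-component images; labels: list of ints."""
        detector = cv2.ORB_create()  # stand-in for SURF (non-free in OpenCV)
        per_image_desc = []
        for img in images:
            _, desc = detector.detectAndCompute(img, None)
            per_image_desc.append(desc if desc is not None else np.empty((0, 32), np.uint8))

        # (2) Cluster all descriptors into a visual vocabulary.
        all_desc = np.vstack([d for d in per_image_desc if len(d)]).astype(np.float32)
        kmeans = KMeans(n_clusters=vocab_size, n_init=10).fit(all_desc)

        # (3) Bag of words: histogram of visual-word frequencies per image.
        def bow_histogram(desc):
            hist = np.zeros(vocab_size, dtype=np.float32)
            if len(desc):
                words = kmeans.predict(desc.astype(np.float32))
                for w in words:
                    hist[w] += 1.0
                hist /= hist.sum()
            return hist

        features = np.array([bow_histogram(d) for d in per_image_desc])

        # (4) Train an SVM on the bag-of-words feature vectors.
        svm = SVC(kernel="rbf").fit(features, labels)
        return detector, kmeans, svm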
11861898 | DETAILED DESCRIPTION In improvements disclosed herein, a video recording (optionally including audio) of a service call for performing maintenance on a medical imaging device is leveraged to provide the basis for authoring augmented reality (AR) content for an AR-based service manual. The service call may be recorded using a regular camera or, to provide depth information, a stereo camera and/or a range camera. If a single (non-range) camera is used then depth information may be extracted from different vantage points as the technician moves around. The camera may be head-mounted so as to provide the “first person” view of the service person. During the service call, a pointer may optionally be used to mark locations of interest (LOIs), and/or the service person may provide running verbal commentary for recordation. An authoring system is used to convert the recorded video of the service call to AR content for supporting future service calls directed to the same or similar maintenance task. The authoring system may use computer vision (CV) technology such as a Simultaneous Location and Mapping (SLAM) algorithm to simultaneously map the location of the service person and the salient environment (e.g. the serviced component and perhaps neighboring components). The extracted map can be used to align the component in the recorded video with computer-aided design (CAD) drawings of the component, and to spatially position the LOIs in three-dimensional space. A human editor then may add verbal explanation overlay (or this may be done using text-to-speech transcription of a written service manual). The human editor may also add overlays showing LOI markers as appropriate, and optionally may add other AR content assistive for the service technician such as part numbers, pop-up windows showing CAD drawings, CAD animations, or other visuals, colorized highlighting of key parts, and/or so forth. To facilitate use in the field, various stop points may be added to the AR content, and/or the AR content may be segmented into episodes that can be played back in different order (i.e. non-linear playback) to allow for handling of service calls in which steps may be performed in different order. To use the authored AR-based service manual content, the service technician wears a heads-up display (HUD) having a transparent display (i.e. see-through display) which presents the AR overlay on a transparent screen of the HUD so as to be superimposed on the service person's actual view of the component under service. The HUD further includes a camera that captures the technician's view, and applies a SLAM algorithm to align the AR content with this real-time video feed so as to match the AR content overlay (and audio playback, if provided) with the real-time view of the service person. Optionally, the service person can select a preview mode in which the pre-recorded video of the service call used to author the AR content is displayed, with the superimposed AR content. In this way, the service person can see a preview of how a step of the maintenance task is performed. Advantageously the AR processing is unchanged in this preview mode, except for: (i) performing the SLAM processing to align the AR content with the preview video rather than the live video feed, and (ii) displaying the prerecorded video content on the transparent display as an underlay. 
In a further contemplated variant, the live video feed may be processed to identify a particular component or sub-component so as to automatically retrieve and present the appropriate AR content. In the illustrative embodiments, the HUD employs a see-through display, which advantageously is transparent except for the AR content overlay so as to enable the service person to directly observe the component in real time. In a variant embodiment, the HUD employs an opaque display on which the live video feed is displayed as an underlay of the AR content (known as video augmented reality)—in this variant the service person indirectly “sees” the component being serviced by virtue of viewing the real-time video feed display. As another contemplated variant, instead of a HUD, the AR content could be displayed on the display screen of a cellphone or other mobile device, leveraging the built-in camera provided with most cellphones and other mobile devices to generate the real-time video feed. In this case the device operates in video augmented reality by displaying the live video feed on the cellphone display as an underlay. The disclosed approaches can also be used for updating an AR-based service manual. For example, if the service person encounters an unexpected problem, say due to use of a substitute part in the component of the particular deployed medical imaging device being serviced, then the live feed provided by the camera of the HUD may be used in conjunction with the authoring system to produce updated AR content appropriate for this substitute part. With reference toFIG.1, a system for recording a service call is illustrated. The subject of the illustrative service call is an imaging device10, which in this illustrative example is a hybrid imaging device including a transmission computed tomography (CT) imaging gantry12and a positron emission tomography (PET) imaging gantry14, that is, the illustrative imaging device is a CT/PET imaging device10. More generally, the subject of the service call could be any type of imaging device, e.g. a standalone PET or standalone CT imaging device, a magnetic resonance imaging (MRI) device, a gamma camera for single photon emission computed tomography (SPECT) imaging, an image guided therapy (iGT) device, or so forth. Even more generally, the disclosed AR based servicing guidance devices and methods may be applied to any type of system or device servicing performed in the field, e.g. may be applied to a radiation therapy delivery device, a research device such as a magnet, a cryogenic system, a factory robotic system, a processing furnace, or so forth. The service call is recorded using a mobile camera18capable of recording video, and preferably having a convenient form factor. In the illustrative example, the mobile camera18includes a stereo camera20mounted on eyeglasses21. The stereo camera20includes a left-eye camera22and a right-eye camera24, and thus advantageously acquires stereo video so as to simulate binocular human vision. By mounting the stereo camera20on the eyeglasses21, the mobile camera18provides a “first person” view: if the eyeglasses21are worn by a service person who performs (or participates in performing) the servicing then the recorded video of the service call is advantageously from the viewpoint or vantage point of the service person. Instead of the illustrative mobile camera18employing eyeglasses21as the structural support, another eyewear form factor could be used, e.g. 
goggles, or a camera mounted on a headband or helmet or the like could provide a similar vantage for recording the video of the service call. By using the stereo camera20the recorded video is advantageously binocular in nature and can provide depth information for extracting three-dimensional (3D) information. However, in an alternative embodiment a monocular camera is employed—in this case computer vision can generally extract 3D information based on different vantage points provided by way of natural movement of the camera (due to movement of the service person's head during servicing). As another contemplated variant, a conventional optical camera may be augmented by a range camera to provide depth information. A difficulty with the mobile camera18with the illustrative eyewear-mounted camera20is that it may provide limited support structure for mounting an electronic processor—that is, it may be difficult to integrate onto the eyeglasses21a microprocessor or microcontroller with sufficient processing capacity to handle the video generation and optional processing. This is addressed in the illustrative embodiment by having the mobile camera18in wireless communication with a mobile device30, such as an illustrative cellular telephone (cellphone, sometimes referred to as a smartphone when equipped to execute application programs or “apps”) or a tablet computer. The wireless connection32may, by way of non-limiting illustration, be a Bluetooth™ wireless connection, WiFi connection, or other short- or intermediate-range wireless connection. A wired connection is also contemplated, e.g. a USB cable may physically connect the mobile camera18with the mobile device30. The illustrative mobile device30includes typical components such as an opaque display34disposed on a front side of the mobile device30, and a rear-facing camera36(occluded from view inFIG.1and hence illustrated in phantom) arranged on an opposite backside of the mobile device30. As is typical for such mobile devices, the display34is preferably a touch-sensitive display, e.g. a capacitive touch-sensitive display, or a surface-acoustic wave (SAW) display, so that a user can interact by touch with the mobile device30to run application programs (“apps”), provide inputs, operate a “soft” keyboard, and/or so forth. Recorded video40is transmitted via the wireless connection32from the eyewear-based mobile camera18to the mobile device30. The mobile device30typically also includes a microphone38, which may be used to provide an audio component for the recorded video40, or alternatively a microphone may be mounted on the eyeglasses21(not shown); as another alternative, the recorded video40may have no recorded sound (video-only). During the recording of the service call, the mobile device30may optionally be performing other tasks, e.g. by running other apps. For example, the mobile device30in the form of a cellphone may be used to telephonically discuss the service call with a remote expert42(e.g. a human expert with specialized knowledge about the particular service being performed, and/or about the particular device10or component of that device undergoing service). As will be described next, the recorded video40provides the basis for authoring augmented vision (AR) content for use during a subsequent service call. 
To this end, as diagrammatically shown inFIG.1the recorded video40is transferred to an AR content authoring device50in the form of a computer52executing instructions read from a non-transitory storage medium (not shown—such as a hard disk drive, RAID, or other magnetic storage medium; a flash memory, solid state drive, or other electronic storage medium; an optical disk or other optical storage medium; various combinations thereof; or so forth) to perform an AR content authoring method54. The AR content authoring may, for example, entail spatially registering computer aided design (CAD) drawings56with the recorded video40using computer vision (CV) processing such as Simultaneous Location and Mapping (SLAM) processing to align the AR content (e.g. part number annotations taken from the CAD drawings56, picture-in-picture (PIP) windows showing CAD drawings or portions thereof or animations generated from the CAD drawings56, and/or so forth) with the recorded video40. It is to be appreciated thatFIG.1merely shows one illustrative arrangement for capturing the video40of the service call. As already mentioned, other types of eyewear or headwear (e.g. goggles, headband, helmet, et cetera) may serve as the support structure for the mobile camera18so as to provide a first-person view of the servicing process; additionally, as already mentioned the stereo camera20may be replaced by a camera of another type, e.g. a monocular camera, a camera-plus-range camera combination, and/or so forth. In other embodiments, the recorded video of the service call may not be from the first-person view of the service person. For example, in an alternative embodiment, the mobile camera comprises the mobile device30with its rear-facing camera36arranged on the backside of the mobile device30serving as the camera for recording the service call. In this case, the vantage will not be first-person, but the service person (or an assistant) can hold the mobile device30so as to direct the rear-facing camera36appropriately to record the entire service process or only portions of the service process of interest (e.g. difficult portions). To assist in authoring the AR content, in some embodiments the service person performing the servicing captured by the recorded video40of the service call may actively point to a location of interest (LOI)60using a pointer62or other distinctive pointing mechanism (which could in some examples merely be the service person's finger or hand). In other embodiments, the LOIs are labeled after the recording of the recorded video40, e.g. by a user operating one or more user input devices (e.g. an illustrative keyboard64and/or mouse66) of the computer52to mark the LOIs in frames of the recorded video40. With reference now toFIG.2, an illustrative embodiment of the AR content authoring device50implemented by the computer52executing instructions read from a non-transitory storage medium (not shown) is described. The AR authoring process54receives as input the recorded video40and information from which AR content is to be generated, such as the CAD drawings56(which may optionally include data such as part numbers of components, 3D renderings, animations, and/or so forth). In an operation70, various components of the imaging device10(seeFIG.1; more generally, various components of the system or device undergoing servicing) are mapped to the recorded video40using computer vision (CV) processing. 
In a preferred embodiment, the mapping operation70is performed using Simultaneous Location and Mapping (SLAM) processing which employs Bayesian inference or another machine learning technique to optimally map the location of the service person (assuming first-person video; or, the location of the mobile device30if the recorded video40is acquired by the mobile device30) and reference points on the component or device10to the content of a sequence of observations provided by successive frames of the recorded video40. Optionally, a priori known information such as the spatial layout of the component reference points provided by the CAD drawings56may be used to improve the mapping70. The SLAM processing task may be formulated mathematically by the probabilistic formulation P(c_1(t), c_2(t), . . . , c_n(t), p(t) | f_1, . . . , f_t) where c_1(t), c_2(t), . . . , c_n(t) are the locations of the reference points of the component or device10at a time t (where without loss of generality n reference points are assumed), p(t) is the location of the service person at time t, and f_1, . . . , f_t are the frames of the recorded video40up to the time t. Various SLAM algorithms known for robotic vision mapping, self-driving vehicle navigation technology, or so forth are suitably applied to implement the mapping operation70. In an operation72, the LOIs are received from a user via the user interfacing device(s)64,66(e.g., by displaying a video frame from the recorded video40and providing for the user to click on an LOI using the mouse66) or are extracted automatically from the recorded video40during the SLAM processing70. This latter approach entails performing SLAM or other CV processing by the computer52to detect a user operation (e.g. pointing using the pointer62, seeFIG.1) captured by the recorded video40which indicates at least one LOI (e.g. illustrative LOI60indicated inFIG.1) in the recorded video40. This indicated LOI60then becomes one of the reference points (c_LOI(t)) to be mapped in the SLAM processing70. In an operation74, AR content is aligned with the recorded video40using the results of the SLAM processing70and LOI designations72. For example, an annotation such as a part number annotation, a CAD drawing or CAD drawing portion shown as a PIP window, or a CAD animation shown as a PIP window, may be added at the LOI or closely proximate to the LOI, so as to “label” the LOI with the part number or CAD information. In a more advanced approach, a wire frame drawing of a key part or part combination or assembly may be extracted from the CAD drawings56and overlaid as the AR content, again aligned using the mapping output by the SLAM processing70and LOI designations72with the actual image of the part(s) shown in the recorded video40. The thusly authored AR content may optionally be color-coded (e.g. using different colors to distinguish different parts of a parts assembly) or otherwise highlighted. AR content in the form of verbal narration may also be added. In this case, the AR content is assumed to have a temporal sequence aligning with the time sequence of frames making up the recorded video40, and the narration is added to be synced in time with when various servicing tasks are performed in the video. In an optional operation76, the AR content may be segmented in time by adding stop points and/or segmenting the AR content into self-contained episodes. 
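Full SLAM is beyond a short example, but the heart of operations70and74, registering known reference points of the component (for example, taken from the CAD drawings56) to a video frame so that AR annotations can be anchored, can be sketched as a perspective-n-point solve. The point correspondences and camera intrinsics are assumed inputs here; the disclosure does not specify how they are obtained.

    import cv2
    import numpy as np

    def register_frame(object_points_3d: np.ndarray,   # Nx3 reference points (component frame, e.g. from CAD)
                       image_points_2d: np.ndarray,    # Nx2 detections of those points in the frame
                       camera_matrix: np.ndarray,      # 3x3 intrinsics of the recording camera
                       dist_coeffs=None):
        """Estimate the camera pose relative to the component for one frame."""
        if dist_coeffs is None:
            dist_coeffs = np.zeros(5)
        ok, rvec, tvec = cv2.solvePnP(
            object_points_3d.astype(np.float32),
            image_points_2d.astype(np.float32),
            camera_matrix.astype(np.float32),
            dist_coeffs.astype(np.float32),
        )
        if not ok:
            raise RuntimeError("pose estimation failed for this frame")
        return rvec, tvec  # pose used to place LOI markers and CAD overlays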
For example, if the servicing involves removing a first assembly from the device10in order to reveal a second assembly that requires servicing, then the removal of the first assembly may be one episode, while the servicing of the revealed second assembly may be a separate and distinct episode. Stop points may be added to allow for stopping the AR content time progression during manual operations; for example, if a part needs to be oiled before installation, it may make sense to add a stop point during the oiling process. The resulting authored AR content forms a maintenance procedure AR library component80. In some embodiments, as will be described later herein, the AR library component80may include or have access to the recorded video40which is stored (optionally after segmentation and/or adding stop points analogously to the AR content authoring operation76) as preview video41as diagrammatically indicated inFIG.2. This preview video41, if provided, is used to provide a preview of a servicing procedure including superimposed AR content upon request by the service person. As part of an overall AR maintenance procedure library, the AR library component80may optionally be variously linked in an operation82with other AR library components to provide AR support for a sensible sequence of servicing operations. For example, consider a process of replacing the x-ray tube of the CT gantry12ofFIG.1. This servicing entails opening the CT gantry12to access the x-ray tube, removing the x-ray tube, perhaps performing some setup or configuration of the new x-ray tube, installing the new x-ray tube, and then re-assembling the CT gantry12. Each of these steps may, in one approach, be supported by a separate AR library component, and these components are then suitably linked together. Thus, the service person may initially load and view the CT gantry opening AR content library component—when this is completed the next library component (e.g. configuring the new x-ray tube) may be automatically linked and invoked, and so forth. In a variant embodiment, the linking operation82may be automated on the basis of video content, e.g. during playback of the AR content, the live video feed (see discussion later herein referencingFIG.3) may be analyzed using CV processing to detect the particular model, type, or other specific component under servicing, and may then call up the correct AR library component for that particular model, type, or other specific component. With reference toFIG.3, the AR library component80generated by the AR content authoring described with reference toFIG.2is suitably presented to a service person performing a subsequent service call (that is, a service call performed subsequently to the service call that was recorded to generate the recorded video40as described with reference toFIG.1). The AR content is presented using a suitable mobile device, such as an illustrative head-up display (HUD)90in the form of eyeglasses21with the stereo camera20mounted thereon. The HUD90is thus seen to be similar to the eyewear-based mobile camera18already described with reference toFIG.1; however, the HUD90further includes a transparent display, in the illustrative case comprising a left transparent display92mounted in the left eye frame of the eyeglasses21, and a right transparent display94mounted in the right eye frame of the eyeglasses21. The HUD90thus constitutes a see-through AR display in which the user (e.g. 
service person) sees the actual scene directly by viewing through the transparent display92,94while AR content may be superimposed on this actually viewed scene by displaying the AR content on the transparent display92,94. The illustrative eyeglasses-based HUD90is an example, and numerous other HUD designs are contemplated. In a variant embodiment, the HUD employs goggles with a single lens extending across both left and right eyes—in this embodiment the transparent display may be a single transparent display likewise extending across both left and right eyes. A see-through AR display has substantial advantages in that the service person directly sees the actual scene (e.g. actually sees the component being serviced) so as to have the maximal visual acuity provided by the person's vision. (If the service person is nearsighted or otherwise requires prescription eyeglasses or contacts, then the service person suitably either wears the prescription contacts in conjunction with using the HUD90, or optionally may have the glass forming the transparent display92,94modified to incorporate the ocular prescription). However, it is contemplated in an alternative embodiment to employ a video AR display. In this alternative embodiment, the transparent display92,94is replaced by an opaque display which displays a video feed of the actual scene captured by the stereo camera20. The displayed video feed then serves as an underlay of the displayed AR content. This approach using a video AR display has the disadvantage that generally the video display will be of coarser resolution and/or may have other optical degradation compared with the direct view of the actual scene provided by the illustrative see-through HUD90. To provide AR content for supporting the service procedure, a live video feed96is communicated to the mobile device30(e.g. cellphone or tablet computer) via Bluetooth™ or another short- to intermediate-range wireless communication protocol (or, alternatively, via a wired USB or other wired connection) and the mobile device30in turn relays the live video feed96to a server computer100, e.g. via a cellular communication protocol such as 4G, or via a WiFi link to an Internet Service Provider (ISP) and/or hospital electronic network, or so forth. The server computer100executes instructions stored on a non-transitory storage medium (not shown—such as a hard disk drive, RAID, or other magnetic storage medium; a flash memory, solid state drive, or other electronic storage medium; an optical disk or other optical storage medium; various combinations thereof; or so forth) to perform alignment102to map AR content of the AR library component80to the live video feed96and to locate the agent (in this embodiment the HUD90). The alignment102may be performed by SLAM processing analogously to the mapping70(seeFIG.2) previously described. The AR content that is mapped to the live video feed96suitably includes the AR content defined in the AR library component80, e.g. part number annotations, annotated picture-in-picture (PIP) windows displaying CAD drawings or portions thereof and/or CAD animations, accompanying verbal narration, and/or so forth. For example, the aligned AR content may include a marker aligned with a LOI identified in the live video feed96by the SLAM processing. 
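Given a camera pose produced by the alignment102, a LOI marker and a text annotation of the kind described above might be drawn onto an overlay frame as follows (an OpenCV-based sketch; the pose, intrinsics, and label text are assumed inputs, and on a see-through HUD the frame would be a transparent overlay buffer rather than the camera image).

    import cv2
    import numpy as np

    def draw_loi_annotation(frame: np.ndarray, loi_xyz: np.ndarray,
                            rvec, tvec, camera_matrix, dist_coeffs,
                            label: str = "part no. (illustrative)") -> np.ndarray:
        """Project a 3D location of interest into the frame and overlay AR content."""
        pts, _ = cv2.projectPoints(loi_xyz.reshape(1, 3).astype(np.float32),
                                   rvec, tvec, camera_matrix, dist_coeffs)
        u, v = map(int, pts.reshape(2))
        cv2.drawMarker(frame, (u, v), (0, 255, 0),
                       markerType=cv2.MARKER_CROSS, markerSize=24, thickness=2)
        cv2.putText(frame, label, (u + 10, v - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
        return frame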
As another example, the aligned AR content may include an annotation (such as a part number annotation, a CAD drawing or CAD drawing portion, or a CAD animation) aligned with a LOI identified in the live video96feed by the CV processing102. The resulting AR content104is transmitted to the mobile device30via the 4G, WiFi/ISP/Internet or other communication pathway, and in turn is transmitted from the mobile device30to the HUD90via the Bluetooth™ or other short- to intermediate-range wireless communication. At the HUD90, the AR content104is displayed on the transparent display92,94as opaque or translucent text and/or images which the service person visually perceives as being superimposed on the view of the actual scene seen by the service person when looking through the transparent display92,94. (In the alternative embodiment in which a video AR display is employed, the live video feed96is displayed as an underlay over which the AR content104is superimposed). The HUD90optionally includes a user interface for controlling. In one illustrative embodiment, this is implemented by way of an AR user interface (UI) application program (“AR UI app”)106that may be run on the mobile device30. Via the AR UI app106, the service person can perform various control operations, such as: turn the AR content on or off (the latter being useful, for example, if the service person is confident that he or she does not need the assistance of the AR content, and/or if the AR content is occluding the view of the component being serviced); selecting to execute a particular AR content episode; pause the AR content (which, again, is generally presented as a time sequence synced with the live video feed96); adjust the brightness, transparency, and/or other display characteristics of the AR content; turn audio narration on or off; and/or so forth. In some embodiments, the AR UI app106may provide one or more mechanisms for the service person to interact with the AR content—for example, if the HUD90includes gaze tracking technology then a suitable control mechanism may be to direct gaze at an AR content element (e.g. PIP window, part number annotation, or so forth) and then speak a command which is detected by the microphone38and processed by speech recognition to interpret the spoken command. Rather than via spoken command, the user command may instead be input via soft controls (buttons, switches, soft keyboard, et cetera) displayed on the touch-sensitive display34of the mobile device30. These are merely illustrative user interfacing capabilities and control input mechanisms; more generally, substantially any type of user interfacing capability and/or control input mechanism suitably used for controlling presentation of AR content may be employed. With continuing reference toFIG.3, in some embodiments a servicing preview may be provided. As already noted, the preview video41may be provided for this purpose, which comprises the recorded video40used as the basis for authoring the AR content80(as previously described with reference toFIG.2) with some optional processing (e.g. insertion of stop points and/or segmentation into episodes, cf. operation76ofFIG.2). To employ the preview mode, the user request preview in an operation110, e.g. using the AR UI app106. 
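The control operations listed above can be thought of as a small piece of state shared between the AR UI app106and whatever renders the AR content. The sketch below is a hypothetical serialization of that state; the field names and the JSON message format are assumptions for illustration only, not a protocol defined by the patent.

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class ARControlState:
    ar_enabled: bool = True           # turn the AR content on or off
    episode_id: Optional[str] = None  # which AR content episode to execute
    paused: bool = False              # pause the time-synced AR content
    brightness: float = 1.0           # relative brightness of the AR overlay
    transparency: float = 0.5         # opacity of the superimposed AR content
    narration_on: bool = True         # audio narration on or off

def control_message(state: ARControlState) -> str:
    """Serialize the control state as a message for the AR renderer."""
    return json.dumps({"type": "ar_control", "payload": asdict(state)})

# Example: the service person pauses the AR content and mutes the narration.
message = control_message(ARControlState(paused=True, narration_on=False))
```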
In a responsive operation112, the server computer100is instructed to switch to preview mode (e.g., command sent along the same 4G or other communication pathway used to convey the live video feed96to the server100; optionally when switching to preview mode this communication of the live video feed to the server is turned off). The command112causes the server to perform a modification114to the AR alignment process102. This modification includes substituting the preview video41for the live video feed96, so that the AR content is aligned with the preview video41, and adding the preview video41as an underlay of the AR content. This combination of the AR content aligned to the preview video together with the underlay preview video then becomes the AR content104that is transmitted back to the mobile device30. At the mobile device30, in an operation116the preview video with superimposed AR content is displayed on the display34of the mobile device (while, at the same time, the AR content feed to the HUD90is turned off). The service person can then view the preview on the display34of the mobile device30without being hindered by extraneous AR content showing on the transparent display92,94. Display of the preview on the display34of the mobile device30is generally preferable since this content is no longer being spatially aligned with the live video feed from the stereo camera20; however, in another embodiment it is contemplated to display the preview on the transparent display92,94. The illustrative embodiment ofFIG.3employs the server computer100to perform the computationally complex SLAM processing102, which is beneficial if the mobile device30and the HUD90have limited processing capability. Likewise, using the mobile device30running the AR UI app106to provide user interfacing leverages the touch-sensitive display34and optionally other features (e.g. microphone38) of the mobile device to provide user-friendly interfacing. In alternative embodiments, these operations/processing may be otherwise distributed. In one contemplated embodiment, all processing and user interfacing is integrated with the HUD, which in this contemplated embodiment includes a flash memory or the like to store the AR library component80(or downloads this content from a cloud server) so as to provide a single-unit AR based servicing guidance device. As another variant, the operation102may be performed at the mobile device30, using the AR library component80either downloaded in its entirety to the mobile device30or streamed from a cloud server. These again are merely illustrative variants. Moreover, the illustrative HUD90may be replaced by another mobile device presenting the AR content. For example, the mobile device30, e.g. cellphone or tablet computer, may serve to present the AR content (in this variant embodiment the HUD90is omitted). To use the mobile device30to present AR content supporting a service call, the service person points the rear-facing camera36at the component being serviced, and the live video feed96is recorded by the rear-facing camera36. The server computer processes the live video feed via operation102as already described to generate the AR content, and transmits it back to the mobile device30which displays the live video feed96with the AR content superimposed (or, alternatively, the superimposition of the AR content on the live video feed96is performed at the server computer100which then transmits back the AR content with the underlay live video feed combined). 
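A minimal sketch of this preview-mode switch on the server side follows. The frame-source selection and the decision to include an underlay are the only points illustrated; the function names and returned dictionary layout are assumptions, and the alignment itself is taken as already computed.

```python
def select_frame_source(mode, live_feed, preview_video):
    """Pick the frame source the AR alignment runs against.

    mode          : "live" or "preview"
    live_feed     : iterator of frames from the HUD's stereo camera
    preview_video : iterator of frames of the stored preview video
    """
    return preview_video if mode == "preview" else live_feed

def package_output(mode, frame, aligned_ar):
    """Assemble the AR content sent back to the mobile device.

    In preview mode the preview frame is included as an underlay beneath the
    aligned AR content; in live mode only the AR content is returned, since
    the see-through HUD already shows the real scene.
    """
    if mode == "preview":
        return {"underlay": frame, "ar": aligned_ar}
    return {"underlay": None, "ar": aligned_ar}
```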
Advantageously, the service person can view the display34of the mobile device30with the rear-facing camera36pointed at the component being serviced, so that the mobile device30operates as a video AR display. With comparative reference toFIGS.1and3, it will be appreciated that in some embodiments the HUD90used in the AR content support could also be used as the mobile camera18for recording the recorded video40ofFIG.1, that is, the HUD90could be substituted for the mobile camera18ofFIG.1. Such interoperability has a valuable advantage: The live video feed96captured by the HUD90during a service call can optionally be used as the recorded video40ofFIG.1, to serve as a basis for performing further AR authoring as perFIG.2to generate updated or additional AR content. For example, in the previous example of replacing an x-ray tube of the CT gantry12, suppose the service person, using the HUD90to provide AR content, has a new model x-ray tube that is different from the model that was installed in the service call that was recorded to create the AR content. In this case, the service person can record the live video feed96as he or she installs the new-model x-ray tube. This may involve obtaining telephonic advice from a remote expert42as previously described with reference toFIG.1. The recorded live video feed96of the installation of the new-model x-ray tube then becomes the recorded video40serving as the basis for performing the AR content authoring ofFIG.2. Thereafter, when the new-model x-ray tube is installed in subsequent service calls the newly authored AR content appropriate for the new-model x-ray tube can be used to provide the AR-based guidance. In performing the update AR content authoring as perFIG.2, in the operation82a link can be added that, for a given subsequent service call, loads either the AR content for supporting installation of the old-model tube or the AR content supporting installation of the new-model tube, depending upon which model x-ray tube is being installed. This link may be manual (e.g. the user inputs the x-ray tube model number via the AR UI106to load the correct AR support content) or automated (e.g., when the service person first gazes upon the x-ray tube the model number or a model-distinguishing feature is detected via CV processing and used to link in the correct AR content for the x-ray tube being installed. In the illustrative embodiments, the AR content authoring method54ofFIG.2processes recorded video40of a service call performed to repair or maintain a medical imaging device in order to generate the AR content80for subsequent use in delivery of AR-based service instructions as described with reference toFIG.3. However, the AR content authoring method54ofFIG.2may additionally or alternatively be applied to process recorded video40for other purposes, such as to create an annotated AR log entry to reflect the maintenance or repair action performed on the imaging device10. The authored AR content can be used as a record of maintenance checks, so as to verify maintenance and provide a tool for correlating the requirements of planned maintenance. In maintenance or repair logging applications, the AR content log entry typically comprises the recorded video40together with AR content annotations created in the operations72,74, and the AR content log entry (including the recorded video40and AR annotations) is suitably stored in the electronic service log of the imaging device10. 
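The linking of AR library components by detected or manually entered model could be as simple as a keyed lookup. The registry below is a hypothetical sketch; the model identifiers and component names are placeholders rather than values from the patent.

```python
# Hypothetical registry mapping a detected (or manually entered) x-ray tube
# model identifier to the AR library component authored for that model.
AR_LIBRARY = {
    "xray-tube-old-model": "ar_component_tube_old",
    "xray-tube-new-model": "ar_component_tube_new",  # newly authored content
}

def select_ar_component(model_id, default=None):
    """Return the AR library component linked to the given model identifier.

    The identifier may come from manual entry via the AR UI or from CV-based
    detection of a model-distinguishing feature in the live video feed.
    """
    return AR_LIBRARY.get(model_id, default)
```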
The LOI designation operations72may be performed by the service person immediately after completion of the service call to identify locations he or she serviced, and the operation74may entail the service person adding AR content such as part number annotations, verbal and/or textual (optionally speech-to-text) explanations of the service performed and/or any unusual aspects of that service, inspection observations made by the service person serving as augmentation to the recorded video of the inspected components, and/or so forth. In these embodiments, the user operations72,74may, for example, be performed on a tablet computer, cellphone, or other portable electronic device carried by the service person. Some annotated AR content may be automatically generated, e.g. adding timestamp information, identification of service personnel (alternatively this could be manually added AR content), CV-based automatic identification of components or other objects appearing in the recorded video40, and/or so forth. The invention has been described with reference to the preferred embodiments. Modifications and alterations may occur to others upon reading and understanding the preceding detailed description. It is intended that the invention be construed as including all such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof. | 35,537 |
11861899 | DETAILED DESCRIPTION Reference will now be made in detail to exemplary embodiments, discussed with regards to the accompanying drawings. In some instances, the same reference numbers will be used throughout the drawings and the following description to refer to the same or like parts. Unless otherwise defined, technical and/or scientific terms have the meaning commonly understood by one of ordinary skill in the art. The disclosed embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosed embodiments. It is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the disclosed embodiments. For example, unless otherwise indicated, method steps disclosed in the figures can be rearranged, combined, or divided without departing from the envisioned embodiments. Similarly, additional steps may be added or steps may be removed without departing from the envisioned embodiments. Thus, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting. The disclosed embodiments can enable the display of augmented reality content (“AR content”) relative to an object in a current image. A client device and a remote server can interact to position the AR content in the current image. The remote server can perform computationally intensive operations, while the client device performs time-sensitive operations. The remote server can identify the object in a reference image, determine the position of the AR content relative to the object, and determine interest points in the reference image. The client device can then determine the placement of the AR content in a current image based, at least in part, on the reference image and interest points in the reference image. During typical operation, the client device may update the position of the AR content multiple times before the remote server generates another reference image. The device can use a platform-independent browser environment to provide images to the remote server and to receive the AR content from the remote server. The disclosed embodiment provide multiple technical improvements over existing AR systems. AR content can be placed in a current image relative to an arbitrarily positioned object, rather than relative to a predetermined vertical or horizontal plane, improving the realism of the AR experience. Information about the identified object can be used to refine the positioning of AR content, improving accuracy. Computationally complex calculations can be offloaded to the remote server, speeding the delivery of AR content. Furthermore, the remote server can interact with the device as-needed, increasing the scalability of the overall AR content delivery system. The disclosed embodiments provide AR content using a platform-independent browser environment. Accordingly, the disclosed embodiments can provide AR content to more users than systems that rely on specific hardware or Application Programming Interfaces offered by particular device manufactures. Furthermore, the object can be used as a trigger for displaying the AR content, in place of express triggers such as QR codes or the like, which may appear artificial to users and therefore may diminish immersion in the AR experience. FIG.1Adepicts a view of exemplary AR content102in a coordinate system103, consistent with disclosed embodiments. 
In this non-limiting example, AR content102is a sphere, though more sophisticated AR content can be envisioned. The center of the sphere is depicted as being at the origin of coordinate system103, though other relationships between AR content102and coordinate system103can be envisioned. Though depicted inFIG.1Aas a three-dimensional object, AR content102can also be a two-dimensional object, such as a label or banner. Coordinate system103can be express or implicit. As a non-limiting example, AR content102can be developed using a tool that allows objects to be placed in a virtual development environment. This virtual development environment can display coordinate axes showing the position and orientation of such objects. Alternatively, the virtual development environment can allow users to manipulate objects without expressly depicting coordinate axis. A pose of AR content102can be specified with regards to object101. The pose of AR content102can include the position and orientation of AR content102. In some embodiments, AR content102and object101can both have positions and orientations specified with regards to coordinate system103. For example, a center of AR content102can be at location [0, 0, 0] and a center of object101can be at location [a 0, −1, 0] in coordinate system103. In various embodiments, a difference in position and orientation between AR content102and object101can be specified. For example, a center of AR content102can be specified as a distance above a center of object101or above a point in a plane containing object101. In some embodiments, object101can be a planar object, such as a billboard, magazine or book page, box, painting, wall, playing card, counter, floor, or the like. In various embodiments, object101can be a non-planar object, such as a beer bottle, car, body part (e.g., a face or part of a face), or the like. As shown inFIG.1A, augmented reality content102is displayed a distance107above an upper surface109of object101. FIG.1Bdepicts a computing device103configured to display the AR content ofFIG.1Arelative to object101, consistent with disclosed embodiments. In some embodiments, computing device103can include a camera or be communicatively connected to a camera device (e.g., a webcam, digital video camera, action camera, or the like). For example, computing device103can include a camera capable of acquiring single images, sequences of images, or videos. As an additional example, computing device103can be configured to communicate with a webcam using a wired (e.g., USB or the like) or wireless (e.g., WIFI, Bluetooth, or the like) connection. In various embodiments, computing device103can include a display or be communicatively connected to a display device. For example, computing device103can have a built-in display or can be configured to communicate with a display device (e.g., a television, computer monitor, a remote computing device having a built-in monitor, or the like) using a wired (e.g., HMDI, DVI, Ethernet, or the like) or wireless (e.g., WIFI, Bluetooth, or the like) connection. In some embodiments, computing device103can be a mobile device, such as a wearable device (e.g., a smartwatch, headset, or the like), smartphone, tablet, laptop, digital video camera, action camera, or the like. Computing device103can be configured to acquire an image104of object101, consistent with disclosed embodiments. Computing device103can be configured to acquire image104using the camera of computing device103or a camera communicatively connected to computing device103. 
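Returning to the placement example of FIG.1A, the relative pose can be expressed with homogeneous transforms. The sketch below, assuming NumPy, composes the object's pose with an offset along the object's local up-axis to obtain the AR content's pose; the specific numbers mirror the example above and are otherwise illustrative.

```python
import numpy as np

def pose_matrix(R, t):
    """Build a 4x4 homogeneous pose from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Pose of the object in the shared coordinate system, with its center at
# [0, -1, 0] as in the example above (identity rotation assumed).
T_object = pose_matrix(np.eye(3), np.array([0.0, -1.0, 0.0]))

# AR content specified relative to the object: a fixed distance above the
# object's upper surface, modeled here as an offset along the local +y axis.
distance_above = 1.0
T_content_rel = pose_matrix(np.eye(3), np.array([0.0, distance_above, 0.0]))

# Pose of the AR content in the shared coordinate system.
T_content = T_object @ T_content_rel
center = T_content[:3, 3]   # -> [0, 0, 0], matching the sphere at the origin
```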
Image104can have a perspective, a representation of the three-dimension world as projected onto a plane of the two-dimensional image104. In some embodiments, image104can be acquired as a single image. In various embodiments, image104can be obtained from a stream of video data. Image104can include a projection of object101into the two-dimensional image104. Consistent with disclosed embodiments, computing device103can be configured to identify object101in image104. Computing device103can be configured to determine a correct placement of AR content102in image104. Determining the correct placement of AR content102in image104can include determining an overall transformation from a pose of AR content102in coordinate system103to a pose of AR content102in image104. Consistent with disclosed embodiments, this overall transformation can be divided into two or more component transformations. The overall transformation can be a function of the two or more component transformations. For example, the overall transformation can be a product of the two or more component transformations. The two or more component transformations can include a transformation from the pose of AR content102into a projection105, consistent with disclosed embodiments. Projection105can be a perspective view of object101, an isometric view of object101, or the like. As shown inFIG.1B, projection105can be a perspective top-down view onto the upper surface109of object101. The transformation can determine the position and orientation of AR content102in projection105. As projection105is a top-down perspective view, AR content102would appear over upper surface109of object101in projection105. If projection105were a side view of object101, AR content102would appear beside upper surface109(e.g., as inFIG.1A). The two or more component transformations can further include a transformation from projection105to a perspective of a reference image (not shown inFIG.1B). The reference image may have been acquired as a single image, or may have been obtained from a video stream. In some embodiments, the reference image may be obtained by the camera of computing device103, or the camera communicatively connected to computing device103. In various embodiments, the reference image may have been obtained by another camera. In some embodiments, the transformation can be a homography (e.g., a Euclidean or projective homography). The two or more component transformations can further include a transformation from the perspective of the reference image to the perspective of image104. In some embodiments, the transformation can be a homography (e.g., a Euclidean or projective homography). In some embodiments, this transformation can be determined by matching interest points in the reference image with points in image104. Such matching can be performed using a motion detection algorithm. These interest points may be, but need not be, points associated with object101. For example, the interest points can include points in the foreground or background of image104, apart from object101, such as the corners of other objects in the image. In some embodiments, AR content102can be placed into image104to create modified image111. In some embodiments, the overall transformation can be applied to the coordinates of AR content102to determine a location of these coordinates in the perspective of image104. In this manner, AR content102can be mapped to the perspective of image104. 
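For a planar object, each component transformation can be represented as a 3x3 homography and the overall transformation obtained as their matrix product. The sketch below assumes OpenCV and NumPy; the variable names for the component homographies are illustrative.

```python
import cv2
import numpy as np

def overall_homography(H_content_to_proj, H_proj_to_ref, H_ref_to_current):
    """Compose the component transformations (applied right to left):
    AR-content coordinates -> projection -> reference perspective -> current image."""
    return H_ref_to_current @ H_proj_to_ref @ H_content_to_proj

def map_points(points_xy, H):
    """Map Nx2 points through a homography with OpenCV."""
    pts = np.asarray(points_xy, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```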
After such mapping, in some embodiments, additional operations can be performed to ensure that AR content102is correctly rendered in modified image111. For example, device103can be configured to determine which surfaces of AR content102are visible from the perspective of image104(e.g., some surfaces of AR content102may be obscured by other surfaces of AR content102, or by surfaces of other objects displayed in image104). FIG.2depicts an exemplary method for determining interest points (e.g., interest points201and202) in an image (e.g., image203), consistent with disclosed embodiments. Interest points can be points that have a well-defined position in the image and can be detected in multiple similar images of the same environment, under differing conditions (e.g., lighting, focal planes, or the like) and from differing perspectives. For example, corners of objects, line endings, points of maximal curvature, and isolated points of local intensity maxima or minima can serve as points of interest. Interest points in an image can be detected using an interest point detection algorithm, such as Features from Accelerated Segment Test (FAST), Harris, Maximally stable extremal regions (MSER), or the like. In some embodiments, each interest point can be associated with a pixel patch in the image. For example, when the interest points are detected using FAST, each interest point may be associated with a circle of pixels. The FAST algorithm may have analyzed these pixels to determine whether the pixel in the center of the circle can be classified as a corner. As an additional example, when the MSER algorithm is used to identify a connected component, the pixel patch can be the blob of pixels making up the connected component. As would be appreciated by those of skill in the art, the envisioned embodiments are not limited to any particular method of identifying interest points in the image. In some embodiments, an interest point can be represented by a feature descriptor vector (e.g., vector204, vector207). Scale-invariant feature transform (SIFT), Speeded Up Robust Features (SURF), Binary Robust Invariant Scalable Keypoint (BRISK), and Fast Retina Keypoint (FREAK) are examples of known methods for generating feature descriptor vectors. A feature descriptor vector can be a fixed-size vector of floating point numbers or bits that characterizes the pixel patch (e.g., a 64-dimensional or 128-dimensional floating point vector, or a 512-dimensional bit vector). The vector can be generated by sampling the pixel patches, arranged within the image according to pixel grid205, within a descriptor sampling grid206having an orientation208, and the vector values can be chosen such that a distance between two vectors representing two pixel patches correlates with a degree of similarity (e.g., in luminance/brightness) between the two pixel patches. FIG.3depicts an exemplary method300for computing a transformation in perspective between image302and one or more other images (e.g., image301). In some embodiments, the two or more images can be acquired by the same camera. For example, method300can include comparing two or more images captured by a user device at different times, or two or more images captured by different cameras, or any combination thereof. Method300can include determining points of interest (e.g., interest points303aand305a) in a selected one of the two or more images (e.g., image301). In some embodiments, the selected image can be a reference image.
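As one concrete realization of the interest point and descriptor computation described for FIG.2, the sketch below detects FAST corners and computes BRISK binary descriptors with OpenCV; the image path is a placeholder and the detector/descriptor pairing is an illustrative choice among the algorithms named above.

```python
import cv2

# Placeholder path; any grayscale view of the reference scene would do.
gray = cv2.cvtColor(cv2.imread("reference.jpg"), cv2.COLOR_BGR2GRAY)

# Detect interest points (corners) with FAST, one of the detectors named above.
fast = cv2.FastFeatureDetector_create(threshold=25)
keypoints = fast.detect(gray, None)

# Describe the pixel patch around each interest point with a fixed-size binary
# feature descriptor (BRISK); Hamming distance between descriptors then
# measures patch similarity.
brisk = cv2.BRISK_create()
keypoints, descriptors = brisk.compute(gray, keypoints)
```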
For example, the selected image can be the first image obtained in a sequence of images and the remaining images can be obtained later. The interest points can be identified as described above with regards toFIG.2. Method300can include matching at least some of the points in the reference image to corresponding points in image302. For example, as shown inFIG.3, interest point303acan match with interest point303b. Likewise, interest point305acan match with interest point305b. In some embodiments, matching can be performed between the pixel patches. For example, a motion detection algorithm (e.g., Extracted Points Motion Detection, or the like) can be used to detect a pixel patch in image302that matches a pixel patch associated with an interest point in the reference image. In various embodiments, matching can be performed between feature descriptors determined from pixel patches. For example, feature descriptors can be determined for an interest point in the reference image and for pixel patches in image302. The feature descriptor for the interest point in the reference image can be compared to the feature descriptors for the pixel patches in image302to determine a match. In some embodiments, the match can be the best match according to a metric dependent on the similarity of the feature descriptors. In various embodiments, a match can be sought for at least some, or all, interest points in the reference image. However, a match may not be identified for all interest points for which a match is sought. For example, changes in the position of the camera between image302and the reference image may cause a point of interest in the reference image to move outside of image302. As an additional example, changes in the environment may obscure or alter an interest point in the reference image (e.g., a person walking in front of a prior point of interest). Method300can include determining whether pairs of matching interest points can be used to generate a transformation between the reference image and image302, consistent with disclosed embodiments. This determination can depend on the relative positions of the interest points in the images (e.g., the position of the interest point in the reference image and the position of the matching point in image302). In some embodiments, when the relative position of the interest points in the images satisfies a distance criterion, the matching points are not used to generate the transformation. The distance criterion can be a threshold. For example, when a difference between a location of an interest point in the reference image and a location of the interest point in image302exceeds a distance threshold, the interest point and matching point may not be used to generate the transformation. Such points may be discarded to avoid poor matches. For example, an interest point in the reference image may erroneously match to a point in image302. Such an erroneous match may be far from the original location of the interest point in the reference image. Accordingly, discarding matches that are greater than a threshold distance can avoid using such erroneous matches in determining the transformation. For example, as shown inFIG.3, interest point305acan erroneously match to point305b. Including these erroneously matching points in the determination of the transformation from image301to image302could decrease the accuracy of the transformation. Method300can include determining a transformation between the reference image (e.g., image301) and image302. 
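The matching and distance-based filtering just described might look like the following sketch, assuming OpenCV binary descriptors (Hamming distance) and an illustrative displacement threshold.

```python
import cv2
import numpy as np

def match_interest_points(kp_ref, des_ref, kp_cur, des_cur, max_displacement=80.0):
    """Match binary descriptors and discard implausibly large displacements.

    Returns two Nx2 arrays of corresponding point locations (reference, current).
    """
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_ref, des_cur)

    src, dst = [], []
    for m in matches:
        p_ref = np.array(kp_ref[m.queryIdx].pt)
        p_cur = np.array(kp_cur[m.trainIdx].pt)
        # Distance criterion: drop pairs that moved farther than the threshold,
        # which are likely erroneous matches (like points 305a/305b above).
        if np.linalg.norm(p_ref - p_cur) <= max_displacement:
            src.append(p_ref)
            dst.append(p_cur)
    return np.float32(src), np.float32(dst)
```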
In some embodiments, the transformation can be determined by estimating a projective homography matrix between the reference image and image302. Methods for estimate such a matrix are described, as a non-limiting example, in “Pose estimation for augmented reality: a hands-on survey” by Marchand, Eric, Hideaki Uchiyama, and Fabien Spindler, and incorporated herein by reference. In some embodiments, the projective homography matrix can encode information about changes in position and orientation between the reference image and image302. In some embodiments, the computing device can have inertial measurement sensors. Inertial measurement data can be used in combination with the image data to estimate changes in position and orientation of the camera between the acquisition of the reference image and the acquisition of image302. These estimate changes in position and orientation can be used in determining the transformation from the reference image to image302, according to known methods. In various embodiments, image302can be compared to multiple previous reference images. Interest points in these multiple reference images can be matched to points in image302. These matching points can be used to estimate changes in position and orientation between the reference images and image302, enabling a more precise determination of the current position and orientation of the camera. FIG.4depicts an exemplary method400for displaying AR content on a device relative to an object, consistent with disclosed embodiments. Method400can be performed by a computing device420and a server430. In some embodiments, method400can be performed using a platform-independent browser environment. As described herein, method400can include steps of capturing an image, determine points of interests in the image, identifying an object in the image, and determining a transformation from a coordinate system of the AR content to current perspective of a camera associated with the computing device. In step301, the computing device (e.g., computing device103) can capture an image (e.g., image104). The device can be a smartphone, tablet, or a similar device with image capture functionality. The image can be a picture, a frame of a video feed, or another like representation of an object of interest (e.g., object101). The image may also be of an entire area of view, or it may be only a certain portion of the area of view. In some embodiments, a web application running on the computing device can be configured to initialize a camera feed. The camera feed can be configured to acquire images from a camera associated with the computing device. In various embodiments, the computing device can also be configured to acquire device position and orientation information using the DeviceMotion API, or a similar application. After acquiring the image, the device can transfer the image to an identification server. In some embodiments, the device can be configured to use WebRTC, or a similar application, to communicate with the identification server. The transfer can take place via, for example, a wireless link, a local area network (LAN), or another method for sending electromagnetic or optical signals that carry digital data streams representing various types of information. In some embodiments, the identification server can be an image reference database, a feature index database, or a web-based server that provides an image recognition service. 
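The projective homography estimation described above for method300is commonly performed with RANSAC so that any remaining mismatches are rejected as outliers; a minimal OpenCV sketch follows, with the 5.0 pixel reprojection threshold being an illustrative value.

```python
import cv2
import numpy as np

def estimate_homography(src_pts, dst_pts):
    """Estimate the projective homography mapping reference-image points to
    current-image points; RANSAC rejects remaining outlier matches."""
    src = np.float32(src_pts).reshape(-1, 1, 2)
    dst = np.float32(dst_pts).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, inlier_mask

# Example use: warp the reference-image corners into the current image to see
# where the reference view lies in the new frame.
# corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
# projected = cv2.perspectiveTransform(corners, H)
```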
It is to be understood that the identification server could be one server or a collection of servers. In step402, the identification server, after receiving the image, can determine points of interest (e.g., interest points303aand305a) in the image, as described above with regards toFIG.2. The identification server can then transfer the points of interest to the device. In some embodiments, the identification server can transfer the points of interest to the device using the wireless link, a local area network (LAN), or another method used by the device to transfer the image to the identification server. In step403, the identification server can identify the object in the image (e.g., reference object101). The object can be a thing of interest, such as a billboard, an item of clothing, or a scene that the user would like to learn more information about. The object can be multidimensional or planar. In some embodiments, the identification server can identify the object by using object recognition and pattern matching algorithms. For example, the identification server can use methods such as the Viola-Jones method, described in “Rapid Object Detection Using a Boosted Cascade of Simple Features” by Paul Viola and Michael Jones, which performs a cascade of predefined scan operations in order to assess the probability of the presence of a certain shape in the image, together with a classification algorithm to identify the object. As an additional example, the identification server can use one or more machine learning methods, such as convolutional neural networks, decision trees, or the like, to identify and localize the object within the image. Such machine learning methods may have a training phase and a test phase, during which the training data may be used to identify objects. Such methods are computationally intensive and may rely on the careful pre-selection of training images. As a further example, the identification server can use attributes or features displayed by the object for image-based detection and recognition. In such embodiments, characteristics can be extracted from a set of training images of the object, and then the system can detect whether there are corresponding characteristics among either a set of snapshots, or between a snapshot and a training set of images. As can be appreciated from the foregoing, the disclosed embodiments are not limited to a particular manner of identifying the object in the image. In some embodiments, a model of the object can be available to the identification server. The model can be a matching representation of the object, and it can be specified with respect to a specific coordinate system. The model can be multidimensional or planar. For example, the model of a billboard could be a 3D representation of the billboard defined from the point-of-view facing the front of the billboard. In step404, when the identified object matches a model of the object that is available to the identification server, the identification server can identify AR content (e.g., augmented reality content102) that corresponds to the model of the object. The AR content can be computer-generated content to be overlaid on a real-world environment, and it can be specified with respect to a specific coordinate system. For example, the pose (or orientation) of the AR content could be specified with respect to a planar surface of the object.
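As one hedged illustration of the identification step403(and not the patent's prescribed method), a pretrained convolutional detector could be used to localize candidate objects; the sketch below assumes a recent torchvision release and a placeholder image path.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# A pretrained detector stands in here for the identification step; the text
# leaves the recognition method open (Viola-Jones, CNNs, feature matching),
# so this is only one possible realization.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("snapshot.jpg").convert("RGB"))  # placeholder path
with torch.no_grad():
    detections = model([image])[0]

# Keep confident detections; each has a bounding box, a class label, and a score.
keep = detections["scores"] > 0.8
boxes = detections["boxes"][keep]
labels = detections["labels"][keep]
```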
As an exemplary scenario, referencingFIG.1A, an AR sphere could be defined a certain distance away from the center of a surface of a billboard. In step405, the identification server can determine a first transformation for displaying the AR content in the image relative to the identified object, which it can then relay to the device. The first transformation can represent a set of relationships between the locations of points within the AR content's coordinate system and the locations of the corresponding points within the image's coordinate system, such that the AR content's pose is correct when overlaid on the image. Continuing with the example above, referencingFIG.1B, the identification server can determine a transformation such that the AR sphere is positioned correctly when a user device captures an image of the billboard. The first transformation can also be the product of a second transformation and a third transformation. The second transformation could represent, for example, a set of relationships between the locations of points within the AR content's coordinate system and the locations of the corresponding points within the object's coordinate system. Similarly, the third transformation could represent, for example, a set of relationships between the locations of points within the object's coordinate system and the locations of the corresponding points within the image's coordinate system. The first, second, and third transformations can be a Euclidean or projective homography, and they can each be determined using, at least in part, the image and a model of the identified object. The identification server can then transfer the first transformation to the device using, for example, the same method used by the device to transfer the image to the identification server. In step406, the device can capture a second image. The device can capture the second image in substantially the same manner as the (first) image as described above. For example, the second image can be captured at a later time using the same camera as the first image (though the reference object or the camera may have changed relative positions and orientations in the interim) In various embodiments, the device can capture the second image using a different camera. In step407, the device can determine a second transformation for displaying the AR content in the second image relative to the identified object. The second transformation can represent a set of relationships between the locations of points within the AR content's coordinate system and the locations of the corresponding points within the second image's coordinate system. The device can determine the second transformation by using, at least in part, the first transformation and/or the points of interest received by the device from the identification sever. The device can also determine the second transformation by estimating a third transformation from a perspective of the first image to a perspective of the second image. For example, the device could estimate the third transformation at least in part by matching interest points in the second image to a subset of the interest points in the first image. In an exemplary scenario, the device could estimate the third transformation using, at least in part, data acquired by one or more of the device's Inertial Measurement Units (IMU) and using positional data between the first and second image to make the estimation. 
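For a planar object such as the billboard, the server-side transformations can be recovered from correspondences between the object model's corners and their detected image locations. The sketch below uses OpenCV's solvePnP and projectPoints; the corner coordinates, camera intrinsics, and anchor offset are illustrative numbers only.

```python
import cv2
import numpy as np

# Known 3D corners of the planar object (e.g. a billboard) in the object's own
# coordinate system, and their detected 2D locations in the reference image.
# K is the camera intrinsic matrix; all values are illustrative.
object_corners = np.float32([[-1, -0.5, 0], [1, -0.5, 0], [1, 0.5, 0], [-1, 0.5, 0]])
image_corners = np.float32([[210, 340], [590, 330], [600, 520], [220, 535]])
K = np.float32([[800, 0, 320], [0, 800, 240], [0, 0, 1]])

# Pose of the object in the view of the reference image (object -> image).
ok, rvec, tvec = cv2.solvePnP(object_corners, image_corners, K, None)

# AR anchor expressed in object coordinates: a point offset in front of the
# billboard's surface (AR content -> object, the other component transform).
ar_anchor_obj = np.float32([[0.0, 0.0, 0.3]])

# Project the anchor into the image; this pixel is where the AR content
# would be drawn in the reference perspective.
anchor_px, _ = cv2.projectPoints(ar_anchor_obj, rvec, tvec, K, None)
```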
In some embodiments, the second and the third transformation can be a Euclidean or projective homography. In cases where more than one image has been previously captured, the device can determine the second transformation by matching, for each one of a set of previously acquired images, a subset of interest points in the one of the previously acquired images with interest points in the second image. In step408, the device can display the augmented reality content in the second image in a pose relative to the object using, at least in part, the augmented reality content and/or the second transformation. The device can continue to capture subsequent images, can determine subsequent transformations, and can display the AR content in the correct pose relative to the object using the subsequent transformations until a disrupting event, such as loss of tracking, occurs. Tracking can be lost when, for example, the person who operates the device shifts the device's direction substantially, or the device captures a new object that had not been identified before. Once the system is again capable of tracking the thing of interest, the method can begin anew starting at step301. In some embodiments, the device can be configured to determine the second transformation on a background thread using a WebWorker API, or similar API. Meanwhile, in the main thread, the user interface (UI) and the AR overlays can be drawn while descriptor matching, motion detection calculation and all other calculations are taking place in background threads using WebWorkers. When using IMU data, the positional data can be calculated in the main thread using IMU sensor readings. FIG.5depicts an exemplary system500with which embodiments described herein can be implemented, consistent with embodiments of the present disclosure. System500can include a client device510, a network530, and a server540. Client device510can include one or more processors512, a memory device514, a storage device516, a display517, a network interface518, a camera519(or other image generation device), and an accelerometer522(or other orientation determination device), all of which can communicate with each other via a bus520. In some embodiments, display517can preferably be a touchscreen. The I/O devices can include a microphone and any other devices that can acquire and/or output a signal. Through network530, client device510can exchange data with a server540. Server540can also include one or more processors542, a memory device544, a storage device546, and a network interface548, all of which can communicate with each other via a bus550. Both memories514and544can be a random access memory (RAM) or other volatile storage devices for storing information and instructions to be executed by, respectively, processors512and542. Memories514and544can also be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processors512and542. Such instructions, after being stored in non-transitory storage media accessible to processors512and514(e.g., storage devices516and546), can render computer systems510and540into special-purpose machines that are customized to perform the operations specified in the instructions. 
The instructions can be organized into different software modules, which can include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, fields, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. In general, the word “module,” as used herein, can refer to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java, Lua, C or C++. A software module can be compiled and linked into an executable program, installed in a dynamic link library, or written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software modules can be callable from other modules or from themselves, and/or can be invoked in response to detected events or interrupts. Software modules configured for execution on computing devices can be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and can be originally stored in a compressed or installable format that requires installation, decompression, or decryption prior to execution). Such software code can be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device. Software instructions can be embedded in firmware, such as an EPROM. It will be further appreciated that hardware modules can be comprised of connected logic units, such as gates and flip-flops, and/or can be comprised of programmable units, such as programmable gate arrays or processors. The modules or computing device functionality described herein can be preferably implemented as software modules but can be represented in hardware or firmware. Generally, the modules described herein can refer to logical modules that can be combined with other modules or divided into sub-modules despite their physical organization or storage. Client device510and server540can implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs client device510and server540to be a special-purpose machine. According to some embodiments, the operations, functionalities, and techniques and other features described herein can be performed by client device540and server540in response to processors512and542executing one or more sequences of one or more instructions contained in, respectively, memories514and544. Such instructions can be read into memories514and544from another storage medium, such as storage devices516and546. Execution of the sequences of instructions contained in memories514and544can cause respectively processors512and542to perform the process steps described herein. In alternative embodiments, hard-wired circuitry can be used in place of or in combination with software instructions. The term “non-transitory media” as used herein can refer to any non-transitory media for storing data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media can comprise non-volatile media and/or volatile media. 
Non-volatile media can include, for example, optical or magnetic devices, such as storage devices516and546. Volatile media can include dynamic memory, such as memories514and544. Common forms of non-transitory media can include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, and EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same. Network interfaces518and548can provide a two-way data communication coupling to network530. For example, network interfaces518and548can be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, network interfaces518and548can be a local area network (LAN) card to provide a data communication connection to a compatible LAN. Wireless links can also be implemented. In any such implementation, network interfaces518and548can send and receive electrical, electromagnetic or optical signals that carry digital data streams representing various types of information, and which can provide the data stream to storage devices516and546. Processors512and542can then convert the data into a different form (e.g., by executing software instructions to compress or decompress the data), and can then store the converted data into the storage devices (e.g., storage devices516and546) and/or transmit the converted data via network interfaces518and548over network530. According to some embodiments, the operations, techniques, and/or components described herein can be implemented by an electronic device, which can include one or more special-purpose computing devices. The special-purpose computing devices can be hard-wired to perform the operations, techniques, and/or components described herein, or can include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the operations, techniques and/or components described herein, or can include one or more hardware processors programmed to perform such features of the present disclosure pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices can also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the technique and other features of the present disclosure. The special-purpose computing devices can be desktop computer systems, portable computer systems, handheld devices, networking devices, or any other device that can incorporate hard-wired and/or program logic to implement the techniques and other features of the present disclosure. The one or more special-purpose computing devices can be generally controlled and coordinated by operating system software, such as iOS, Android, Blackberry, Chrome OS, Windows XP, Windows Vista, Windows 7, Windows 8, Windows Server, Windows CE, Unix, Linux, SunOS, Solaris, VxWorks, or other compatible operating systems. In other embodiments, the computing device can be controlled by a proprietary operating system. 
Operating systems can control and schedule computer processes for execution, perform memory management, provide file system, networking, I/O services, and provide a user interface functionality, such as a graphical user interface (“GUI”), among other things. Other embodiments will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed embodiments disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosed embodiments being indicated by the following claims. Furthermore, although aspects of the disclosed embodiments are described as being associated with data stored in memory and other tangible computer-readable storage mediums, one skilled in the art will appreciate that these aspects can also be stored on and executed from many types of tangible computer-readable media, such as secondary storage devices, like hard disks, floppy disks, or CD-ROM, or other forms of RAM or ROM. Accordingly, the disclosed embodiments are not limited to the above-described examples, but instead are defined by the appended claims in light of their full scope of equivalents. Moreover, while illustrative embodiments have been described herein, the scope includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations or alterations based on the present disclosure. The elements in the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification or during the prosecution of the application, which examples are to be construed as non-exclusive. Further, the steps of the disclosed methods can be modified in any manner, including by reordering steps or inserting or deleting steps. Furthermore, as used herein the term “or” encompasses all possible combinations, unless specifically stated otherwise or infeasible. For example, if it is stated that a component may include A or B, then, unless specifically stated otherwise or infeasible, the component may include A, or B, or A and B. As a second example, if it is stated that a component may include A, B, or C, then, unless specifically stated otherwise or infeasible, the component may include A, or B, or C, or A and B, or A and C, or B and C, or A and B and C. Similarly, the use of a plural term does not necessarily denote a plurality and the indefinite articles “a” and “an” do not necessary denote a single item, unless specifically stated otherwise or infeasible. It is intended, therefore, that the specification and examples be considered as example only, with a true scope and spirit being indicated by the following claims and their full scope of equivalents. | 40,824 |
11861900 | TECHNICAL DESCRIPTION According to various embodiments, techniques and mechanisms described herein may be used to identify and represent damage to an object such as a vehicle. The damage detection techniques may be employed by untrained individuals. For example, an individual may collect multi-view data of an object, and the system may detect the damage automatically. According to various embodiments, various types of damage may be detected. For a vehicle, such data may include, but is not limited to: scratches, dents, flat tires, cracked glass, broken glass, or other such damage. In some implementations, a user may be guided to collect multi-view data in a manner that reflects the damage detection process. For example, when the system detects that damage may be present, the system may guide the user to take additional images of the portion of the object that is damaged. According to various embodiments, techniques and mechanisms described herein may be used to create damage estimates that are consistent over multiple captures. In this way, damage estimates may be constructed in a manner that is independent of the individual wielding the camera and does not depend on the individual's expertise. In this way, the system can automatically detect damage in an instant, without requiring human intervention. Although various techniques and mechanisms are described herein by way of example with reference to detecting damage to vehicles, these techniques and mechanisms are widely applicable to detecting damage to a range of objects. Such objects may include, but are not limited to: houses, apartments, hotel rooms, real property, personal property, equipment, jewelry, furniture, offices, people, and animals. FIG.1illustrates a method100for damage detection, performed in accordance with one or more embodiments. According to various embodiments, the method100may be performed at a mobile computing device such as a smart phone. The smart phone may be in communication with a remote server. Alternately, or additionally, some or all of the method100may be performed at a remote computing device such as a server. The method100may be used to detect damage to any of various types of objects. However, for the purpose of illustration, many examples discussed herein will be described with reference to vehicles. At102, multi-view data of an object is captured. According to various embodiments, the multi-view data may include images captured from different viewpoints. For example, a user may walk around a vehicle and capture images from different angles. In some configurations, the multi-view data may include data from various types of sensors. For example, the multi-view data may include data from more than one camera. As another example, the multi-view data may include data from a depth sensor. As another example, the multi-view data may include data collected from an inertial measurement unit (IMU). IMU data may include position information, acceleration information, rotation information, or other such data collected from one or more accelerometers or gyroscopes. In particular embodiments, the multi-view data may be aggregated to construct a multi-view representation. Additional details regarding multi-view data and damage detection are discussed in U.S. Pat. No. 10,950,033, “DAMAGE DETECTION FROM MULTI-VIEW VISUAL DATA”, by Holzer et al., filed Nov. 22, 2019, which is hereby incorporated by reference in its entirety and for all purposes. 
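The multi-view data described above might be bundled per viewpoint as in the hypothetical structure below; the field names and the use of dataclasses are assumptions for illustration rather than a format defined by the patent.

```python
from dataclasses import dataclass, field
from typing import List, Optional
import numpy as np

@dataclass
class ViewSample:
    """One sample of multi-view data captured while moving around the object."""
    image: np.ndarray                          # HxWx3 color image from a camera
    depth: Optional[np.ndarray] = None         # optional HxW depth map
    acceleration: Optional[np.ndarray] = None  # 3-vector from an accelerometer
    rotation: Optional[np.ndarray] = None      # orientation reading from a gyroscope

@dataclass
class MultiViewCapture:
    """Aggregated multi-view representation of a single object."""
    object_id: str
    samples: List[ViewSample] = field(default_factory=list)
```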
At104, damage to the object is detected based on the captured multi-view data. In some implementations, the damage may be detected by evaluating some or all of the multi-view data with a neural network, by comparing some or all of the multi-view data with reference data, and/or any other relevant operations for damage detection. Additional details regarding damage detection are discussed throughout the application. At106, a representation of the detected damage is stored on a storage medium or transmitted via a network. According to various embodiments, the representation may include some or all of a variety of information. For example, the representation may include an estimated dollar value. As another example, the representation may include a visual depiction of the damage. As still another example, a list of damaged parts may be provided. Alternatively, or additionally, the damaged parts may be highlighted in a 3D CAD model. In some embodiments, a visual depiction of the damage may include an image of actual damage. For example, once the damage is identified at104, one or more portions of the multi-view data that include images of the damaged portion of the object may be selected and/or cropped. In some implementations, a visual depiction of the damage may include an abstract rendering of the damage. An abstract rendering may include a heatmap that shows the probability and/or severity of damage using a color scale. Alternatively, or additionally, an abstract rendering may represent damage using a top-down view or other transformation. By presenting damage on a visual transformation of the object, damage (or lack thereof) to different sides of the object may be presented in a standardized manner. FIG.2illustrates a method200of damage detection data capture, performed in accordance with one or more embodiments. According to various embodiments, the method200may be performed at a mobile computing device such as a smart phone. The smart phone may be in communication with a remote server. The method200may be used to detect damage to any of various types of objects. However, for the purpose of illustration, many examples discussed herein will be described with reference to vehicles. A request to capture input data for damage detection for an object is received at202. In some implementations, the request to capture input data may be received at a mobile computing device such as a smart phone. In particular embodiments, the object may be a vehicle such as a car, truck, or sports utility vehicle. An object model for damage detection is determined at204. According to various embodiments, the object model may include reference data for use in evaluating damage and/or collecting images of an object. For example, the object model may include one or more reference images of similar objects for comparison. As another example, the object model may include a trained neural network. As yet another example, the object model may include one or more reference images of the same object captured at an earlier point in time. As yet another example, the object model may include a 3D model (such as a CAD model) or a 3D mesh reconstruction of the corresponding vehicle. In some embodiments, the object model may be determined based on user input. For example, the user may identify a vehicle in general or a car, truck, or sports utility vehicle in particular as the object type. In some implementations, the object model may be determined automatically based on data captured as part of the method200. 
In this case, the object model may be determined after the capturing of one or more images at206. At206, an image of the object is captured. According to various embodiments, capturing the image of the object may involve receiving data from one or more of various sensors. Such sensors may include, but are not limited to, one or more cameras, depth sensors, accelerometers, and/or gyroscopes. The sensor data may include, but is not limited to, visual data, motion data, and/or orientation data. In some configurations, more than one image of the object may be captured. Alternatively, or additionally, video footage may be captured. According to various embodiments, a camera or other sensor located at a computing device may be communicably coupled with the computing device in any of various ways. For example, in the case of a mobile phone or laptop, the camera may be physically located within the computing device. As another example, in some configurations a camera or other sensor may be connected to the computing device via a cable. As still another example, a camera or other sensor may be in communication with the computing device via a wired or wireless communication link. According to various embodiments, as used herein the term “depth sensor” may be used to refer to any of a variety of sensor types that may be used to determine depth information. For example, a depth sensor may include a projector and camera operating in infrared light frequencies. As another example, a depth sensor may include a projector and camera operating in visible light frequencies. For instance, a line-laser or light pattern projector may project a visible light pattern onto an object or surface, which may then be detected by a visible light camera. One or more features of the captured image or images are extracted at208. In some implementations, extracting one or more features of the object may involve constructing a multi-view capture that presents the object from different viewpoints. If a multi-view capture has already been constructed, then the multi-view capture may be updated based on the new image or images captured at206. Alternatively, or additionally, feature extraction may involve performing one or more operations such as object recognition, component identification, orientation detection, or other such steps. At210, the extracted features are compared with the object model. According to various embodiments, comparing the extracted features to the object model may involve making any comparison suitable for determining whether the captured image or images are sufficient for performing damage comparison. Such operations may include, but are not limited to: applying a neural network to the captured image or images, comparing the captured image or images to one or more reference images, and/or performing any of the operations discussed with respect toFIGS.3and4. A determination is made at212as to whether to capture an additional image of the object. In some implementations, the determination may be made at least in part based on an analysis of the one or more images that have already been captured. In some embodiments, a preliminary damage analysis may be implemented using as input the one or more images that have been captured. If the damage analysis is inconclusive, then an additional image may be captured. Techniques for conducting damage analysis are discussed in additional detail with respect to the methods300and400shown inFIGS.3and4. 
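One way the determination at 212 and the guidance at 214 could be driven is by simple per-side coverage bookkeeping, as in the hedged sketch below. The REQUIRED_SIDES list, the minimum-views threshold, and the message format are illustrative assumptions rather than requirements of the method.

from typing import Dict, Optional

REQUIRED_SIDES = ["front", "back", "left", "right"]

def next_side_to_capture(side_view_counts: Dict[str, int],
                         min_views_per_side: int = 2) -> Optional[str]:
    """Return the first side of the object that still needs more viewpoints,
    or None once every side has at least min_views_per_side images."""
    for side in REQUIRED_SIDES:
        if side_view_counts.get(side, 0) < min_views_per_side:
            return side
    return None

# Usage: the right side has already been photographed three times.
counts = {"right": 3}
while (side := next_side_to_capture(counts)) is not None:
    print(f"Capture another image of the {side} of the object.")  # guidance shown to the user
    counts[side] = counts.get(side, 0) + 1                        # assume the user complied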
In some embodiments, the system may analyze the captured image or images to determine whether a sufficient portion of the object has been captured in sufficient detail to support damage analysis. For example, the system may analyze the capture image or images to determine whether the object is depicted from all sides. As another example, the system may analyze the capture image or images to determine whether each panel or portion of the object is shown in a sufficient amount of detail. As yet another example, the system may analyze the capture image or images to determine whether each panel or portion of the object is shown from a sufficient number of viewpoints. If the determination is made to capture an additional image, then at214image collection guidance for capturing the additional image is determined. In some implementations, the image collection guidance may include any suitable instructions for capturing an additional image that may assist in changing the determination made at212. Such guidance may include an indication to capture an additional image from a targeted viewpoint, to capture an additional image of a designated portion of the object, or to capture an additional image at a different level of clarity or detail. For example, if possible damage is detected, then feedback may be provided to capture additional detail at the damaged location. At216, image collection feedback is provided. According to various embodiments, the image collection feedback may include any suitable instructions or information for assisting a user in collecting additional images. Such guidance may include, but is not limited to, instructions to collect an image at a targeted camera position, orientation, or zoom level. Alternatively, or additionally, a user may be presented with instructions to capture a designated number of images or an image of a designated portion of the object. For example, a user may be presented with a graphical guide to assist the user in capturing an additional image from a target perspective. As another example, a user may be presented with written or verbal instructions to guide the user in capturing an additional image. When it is determined to not capture an additional image of the object, then at218the captured image or images are stored. In some implementations, the captured images may be stored on a storage device and used to perform damage detection, as discussed with respect to the methods300and400inFIGS.3and4. Alternatively, or additionally, the images may be transmitted to a remote location via a network interface. FIG.3illustrates a method300for component-level damage detection, performed in accordance with one or more embodiments. According to various embodiments, the method300may be performed at a mobile computing device such as a smart phone. The smart phone may be in communication with a remote server. The method300may be used to detect damage to any of various types of objects. However, for the purpose of illustration, many examples discussed herein will be described with reference to vehicles. A skeleton is extracted from input data at302. According to various embodiments, a skeleton may refer to a three-dimensional or two-dimensional mesh. The input data may include visual data collected as discussed with respect to the method300shown inFIG.3. Alternatively, or additionally, the input data may include previously collected visual data, such as visual data collected without the use of recording guidance. 
In some implementations, the input data may include one or more images of the object captured from different perspectives. Alternatively, or additionally, the input data may include video data of the object. In addition to visual data, the input data may also include other types of data, such as IMU data. According to various embodiments, skeleton detection may involve one or more of a variety of techniques. Such techniques may include, but are not limited to: 2D skeleton detection using machine learning, 3D pose estimation, and 3D reconstruction of a skeleton from one or more 2D skeletons and/or poses. Calibration image data associated with the object is identified at304. According to various embodiments, the calibration image data may include one or more reference images of similar objects or of the same object at an earlier point in time. Alternatively, or additionally, the calibration image data may include a neural network used to identify damage to the object. A skeleton component is selected for damage detection at306. In some implementations, a skeleton component may represent a panel of the object. In the case of a vehicle, for example, a skeleton component may represent a door panel, a window, or a headlight. Skeleton components may be selected in any suitable order, such as sequentially, randomly, in parallel, or by location on the object. According to various embodiments, when a skeleton component is selected for damage detection, a multi-view capture of the skeleton component may be constructed. Constructing a multi-view capture of the skeleton component may involve identifying different images in the input data that capture the skeleton component from different viewpoints. The identified images may then be selected, cropped, and combined to produce a multi-view capture specific to the skeleton component. A viewpoint of the skeleton component is selected for damage detection at304. In some implementations, each viewpoint included in the multi-view capture of the skeleton component may be analyzed independently. Alternatively, or additionally, more than one viewpoint may be analyzed simultaneously, for instance by providing the different viewpoints as input data to a machine learning model trained to identify damage to the object. In particular embodiments, the input data may include other types of data, such as 3D visual data or data captured using a depth sensor or other type of sensor. According to various embodiments, one or more alternatives to skeleton analysis at302-310may be used. For example, an object part (e.g., vehicle component) detector may be used to directly estimate the object parts. As another example, an algorithm such as a neural network may be used to map an input image to a top-down view of an object such as a vehicle (and vice versa) in which the components are defined. As yet another example, an algorithm such as a neural network that classifies the pixels of an input image as a specific component can be used to identify the components. As still another example, component-level detectors may be used to identify specific components of the object. As yet another alternative, a 3D reconstruction of the vehicle may be computed and a component classification algorithm may be run on that 3D model. The resulting classification can then be back-projected into each image. As still another alternative, a 3D reconstruction of the vehicle can be computed and fitted to an existing 3D CAD model of the vehicle in order to identify the single components. 
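The component-specific multi-view capture described above, in which images showing a selected skeleton component are identified, cropped, and combined, might look roughly like the following sketch. The bounding-box representation and the component names are assumptions for illustration, and the detector that produces the boxes is not shown.

from typing import Dict, List, Tuple
import numpy as np

def component_multiview(images: List[np.ndarray],
                        component_boxes: List[Dict[str, Tuple[int, int, int, int]]],
                        component: str) -> List[np.ndarray]:
    """Collect crops of one skeleton component (e.g. 'front_door') from every
    image in which it was located, forming a component-specific multi-view capture."""
    crops = []
    for image, boxes in zip(images, component_boxes):
        if component not in boxes:
            continue                      # this viewpoint does not show the component
        x0, y0, x1, y1 = boxes[component]
        crops.append(image[y0:y1, x0:x1])
    return crops

# Usage with synthetic data: two frames, only the first shows the front door.
frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(2)]
boxes = [{"front_door": (100, 120, 300, 360)}, {"hood": (50, 60, 400, 200)}]
door_views = component_multiview(frames, boxes, "front_door")
print(len(door_views), door_views[0].shape)  # 1 crop of shape (240, 200, 3)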
At310, the calibration image data is compared with the selected viewpoint to detect damage to the selected skeleton component. According to various embodiments, the comparison may involve applying a neural network to the input data. Alternatively, or additionally, an image comparison between the selected viewpoint and one or more reference images of the object captured at an earlier point in time may be performed. A determination is made at312as to whether to select an additional viewpoint for analysis. According to various embodiments, additional viewpoints may be selected until all available viewpoints are analyzed. Alternatively, viewpoints may be selected until the probability of damage to the selected skeleton component has been identified to a designated degree of certainty. Damage detection results for the selected skeleton component are aggregated at314. According to various embodiments, damage detection results from different viewpoints may be combined into a single damage detection result per panel, resulting in a damage result for the skeleton component. For example, a heatmap may be created that shows the probability and/or severity of damage to a vehicle panel such as a vehicle door. According to various embodiments, various types of aggregation approaches may be used. For example, results determined at310for different viewpoints may be averaged. As another example, different results may be used to “vote” on a common representation such as a top-down view. Then, damage may be reported if the votes are sufficiently consistent for the panel or object portion. A determination is made at316as to whether to select an additional skeleton component for analysis. In some implementations, additional skeleton components may be selected until all available skeleton components are analyzed. Damage detection results for the object are aggregated at314. According to various embodiments, damage detection results for different components may be aggregated into a single damage detection result for the object as a whole. For example, creating the aggregated damage results may involve creating a top-down view. As another example, creating the aggregated damage results may involve identifying standardized or appropriate viewpoints of portions of the object identified as damaged. As yet another example, creating the aggregated damage results may involve tagging damaged portions in a multi-view representation. As still another example, creating the aggregated damage results may involve overlaying a heatmap on a multi-view representation. As yet another example, creating the aggregated damage results may involve selecting affected parts and presenting them to the user. Presenting may be done as a list, as highlighted elements in a 3D CAD model, or in any other suitable fashion. In particular embodiments, techniques and mechanisms described herein may involve a human providing additional input. For example, a human may review damage results, resolve inconclusive damage detection results, or select damage result images to include in a presentation view. As another example, human review may be used to train one or more neural networks to ensure that the results computed are correct and are adjusted as necessary. FIG.4illustrates an object-level damage detection method400, performed in accordance with one or more embodiments. The method400may be performed at a mobile computing device such as a smart phone. The smart phone may be in communication with a remote server.
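The per-panel aggregation described above for the results determined at310, whether by averaging or by voting on a common representation, can be sketched as follows. The thresholds and the returned fields are illustrative assumptions, not values prescribed by the method.

import numpy as np

def aggregate_viewpoint_scores(scores, vote_threshold: float = 0.5,
                               min_agreement: float = 0.6) -> dict:
    """Aggregate per-viewpoint damage probabilities for one panel into a single
    result, using both approaches mentioned above: simple averaging and
    voting with a consistency requirement."""
    scores = np.asarray(scores, dtype=float)
    votes = scores >= vote_threshold
    return {
        "mean_probability": float(scores.mean()),
        "vote_fraction": float(votes.mean()),
        "damage_reported": bool(votes.mean() >= min_agreement),
    }

# Usage: four viewpoints of a door panel, three of which suggest damage.
print(aggregate_viewpoint_scores([0.8, 0.7, 0.9, 0.2]))
# {'mean_probability': 0.65, 'vote_fraction': 0.75, 'damage_reported': True}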
The method400may be used to detect damage to any of various types of objects. Evaluation image data associated with the object is identified at402. According to various embodiments, the evaluation image data may include single images captured from different viewpoints. As discussed herein, the single images may be aggregated into a multi-view capture, which may include data other than images, such as IMU data. An object model associated with the object is identified at404. In some implementations, the object model may include a 2D or 3D standardized mesh, model, or abstracted representation of the object. For instance, the evaluation image data may be analyzed to determine the type of object that is represented. Then, a standardized model for that type of object may be retrieved. Alternatively, or additionally, a user may select an object type or object model to use. The object model may include a top-down view of the object. Calibration image data associated with the object is identified at406. According to various embodiments, the calibration image data may include one or more reference images. The reference images may include one or more images of the object captured at an earlier point in time. Alternatively, or additionally, the reference images may include one or more images of similar objects. For example, a reference image may include an image of the same type of car as the car in the images being analyzed. In some implementations, the calibration image data may include a neural network trained to identify damage. For instance, the calibration image data may be trained to analyze damage from the type of visual data included in the evaluation data. The calibration data is mapped to the object model at408. In some implementations, mapping the calibration data to the object model may involve mapping a perspective view of an object from the calibration images to a top-down view of the object. The evaluation image data is mapped to the object model at410. In some implementations, mapping the evaluation image data to the object model may involve determine a pixel-by-pixel correspondence between the pixels of the image data and the points in the object model. Performing such a mapping may involve determining the camera position and orientation for an image from IMU data associated with the image. In some embodiments, a dense per-pixel mapping between an image and the top-down view may be estimated at410. Alternatively, or additionally, location of center of an image may be estimated with respect to the top-down view. For example, a machine learning algorithm such as deep net may be used to map the image pixels to coordinates in the top-down view. As another example, joints of a 3D skeleton of the object may be estimated and used to define the mapping. As yet another example, component-level detectors may be used to identify specific components of the object. In some embodiments, the location of one or more object parts within the image may be estimated. Those locations may then be used to map data from the images to the top-down view. For example, object parts may be classified on a pixel-wise basis. As another example, the center location of object parts may be determined. As another example, the joints of a 3D skeleton of an object may be estimated and used to define the mapping. As yet another example, component-level detectors may be used for specific object components. In some implementations, images may be mapped in a batch via a neural network. 
For example, a neural network may receive as input a set of images of an object captured from different perspectives. The neural network may then detect damage to the object as a whole based on the set of input images. The mapped evaluation image data is compared to the mapped calibration image data at412to identify any differences. According to various embodiments, the data may be compared by running a neural network on a multi-view representation as a whole. Alternatively, or additionally, the evaluation and calibration image data may be compared on an image-by-image basis. If it is determined at414that differences are identified, then at416a representation of the identified differences is determined. According to various embodiments, the representation of the identified differences may involve a heatmap of the object as a whole. For example, a heatmap of a top-down view of a vehicle showing damage is illustrated inFIG.2. Alternatively, one or more components that are damaged may be isolated and presented individually. At418, a representation of the detected damage is stored on a storage medium or transmitted via a network. In some implementations, the representation may include an estimated dollar value. Alternatively, or additionally, the representation may include a visual depiction of the damage. Alternatively, or additionally, affected parts may be presented as a list and/or highlighted in a 3D CAD model. In particular embodiments, damage detection of an overall object representation may be combined with damage representation on one or more components of the object. For example, damage detection may be performed on a closeup of a component if an initial damage estimation indicates that damage to the component is likely. FIG.5illustrates a damage detection and presentation method500, performed in accordance with one or more embodiments. In some embodiments, the method500may be used to capture image data, perform damage detection, and present information in a user interface as shown and described herein. According to various embodiments, the method500may be used to guide a user through a capture process, for instance by showing information related to capture completeness and/or coverage levels. For example, damage detection may be performed at a remote location such as on a remote server or in a cloud computing environment. In such a configuration, the method500may be used to indicate whether enough images have been captured for a suitable analysis by a remote damage detection algorithm without requiring back and forth communication to capture additional data. As another example, the method500may be used to indicate whether enough images have been captured for a human to analyze the data remotely. A request to perform damage detection for an object is received at502. According to various embodiments, the request may be received at a mobile computing device that has a camera. For instance, the request may be received at an application configured to perform damage detection. An image of the object is captured at504. According to various embodiments, the image may be captured by pointing the camera at the object. The image may be captured manually or automatically. For example, an image may be captured by the application periodically, for instance at an interval of once per second. As another example, the user may provide user input indicating that an image should be captured. As yet another example, images may be captured continuously, for instance in a video feed.
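The comparison at412of mapped evaluation data against mapped calibration data, and the representation of identified differences at416, could be approximated by the simple difference map below. The noise floor and the use of absolute differences are assumptions made for illustration; as noted above, a neural network could replace this comparison entirely.

import numpy as np

def damage_difference_map(evaluation_top_down: np.ndarray,
                          calibration_top_down: np.ndarray,
                          noise_floor: float = 0.1) -> np.ndarray:
    """Compare evaluation and calibration data after both have been mapped to
    the same top-down view, keeping only differences above a small noise floor.
    The result can be rendered as a heatmap of identified differences."""
    diff = np.abs(evaluation_top_down - calibration_top_down)
    diff[diff < noise_floor] = 0.0
    return diff

# Usage: a calibration view of an undamaged vehicle vs. an evaluation view
# in which one region differs strongly (e.g. a dented panel).
calibration = np.zeros((16, 16))
evaluation = np.zeros((16, 16))
evaluation[4:8, 4:8] = 0.8
diff = damage_difference_map(evaluation, calibration)
print(float(diff.max()), int((diff > 0).sum()))  # 0.8, 16 affected cells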
In particular embodiments, images may be automatically captured in a way that is anticipated to increase one or more estimated parameter values. Examples of such estimated parameter values may include, but are not limited to: object coverage, damage estimate confidence, and image capture completeness. For example, object coverage may be determined based on the percentage of an object included in captured images. Accordingly, additional images of the object may be selected for capture to fill in one or more coverage gaps. As another example, damage estimate confidence may be based on characteristics such as light reflections, angles, and detail level associated with captured images. Accordingly, additional images of the object may be selected for capture to provide for improved or additional lighting, covered angles, and/or detail for particular portions of the object for which damage detection confidence levels do not yet meet or exceed a designated threshold. One or more components of the object are identified at506. According to various embodiments, components may be identified as discussed with respect to the methods200and300shown inFIGS.2and3. A coverage level for each of the components is determined at508. In some embodiments, the system may analyze the captured image or images to determine whether a sufficient portion of the object has been captured in sufficient detail to support damage analysis. For example, the system may analyze the captured image or images to determine whether the object is depicted with a sufficient level of detail from all sides. As another example, the system may analyze the captured image or images to determine whether each panel or portion of the object is shown in a sufficient amount of detail. As yet another example, the system may analyze the captured image or images to determine whether each panel or portion of the object is shown from a sufficient number of viewpoints. Damage to the identified components is detected at510. According to various embodiments, damage may be detected as discussed with respect to the method400shown inFIG.4. At512, a user interface is updated based on the captured image. According to various embodiments, updating the user interface may include any or all of a variety of operations, as shown inFIGS.7-20. For example, the captured image may be displayed in the user interface. As another example, a live camera view may be displayed in the user interface. In some embodiments, one or more identified components may be highlighted in the user interface. For instance, as shown inFIG.7,FIG.9, andFIG.11, a component overlay may be displayed overlain on the captured image or a live camera feed of the object. Such information may be presented using different colored panels (not shown). In some embodiments, one or more component coverage levels may be displayed in the user interface at704. For instance, a coverage level may be displayed as a status bar associated with a component, as shown inFIG.7. Alternatively, or additionally, coverage levels may be displayed via a color overlay, for instance by highlighting in green those objects that have adequate coverage and highlighting in red those components that have inadequate coverage. In some implementations, one or more coverage levels may be depicted in an object model. For instance, as shown inFIG.7, portions of an object represented in a captured image may be depicted as colored points or regions, with different colors corresponding to different components.
As shown inFIG.9, capturing successive images may then result in the object model becoming solid as all external portions of the object are captured in an image. In some embodiments, detected damage may be presented in a user interface. For example, detected damage may be presented as a heatmap. The heatmap may be shown as an overlay on an image of the object, on a live camera feed of the object, and/or on a model of the object such as a top-down view. For example, inFIG.15throughFIG.20, detected damage is shown as both a heatmap overlain on a live camera view and as a heatmap overlain on a top-down view of the object. A determination is made at514as to whether to capture an additional image of the object. According to various embodiments, additional visual data may be captured until user input is received indicating that image capture should be terminated. Alternatively, or additionally, visual data may be captured until a determination is made that a coverage level for the object exceeds a designated threshold. FIG.6illustrates a computer system600configured in accordance with one or more embodiments. For instance, the computer system600can be used to provide MVIDMRs according to various embodiments described above. According to various embodiments, a system600suitable for implementing particular embodiments includes a processor601, a memory603, an interface611, and a bus615(e.g., a PCI bus). The system600can include one or more sensors609, such as light sensors, accelerometers, gyroscopes, microphones, cameras including stereoscopic or structured light cameras. As described above, the accelerometers and gyroscopes may be incorporated in an IMU. The sensors can be used to detect movement of a device and determine a position of the device. Further, the sensors can be used to provide inputs into the system. For example, a microphone can be used to detect a sound or input a voice command. In the instance of the sensors including one or more cameras, the camera system can be configured to output native video data as a live video feed. The live video feed can be augmented and then output to a display, such as a display on a mobile device. The native video can include a series of frames as a function of time. The frame rate is often described as frames per second (fps). Each video frame can be an array of pixels with color or gray scale values for each pixel. For example, a pixel array size can be 512 by 512 pixels with three color values (red, green and blue) per pixel. The three color values can be represented by varying amounts of bits, such as 24, 30, 5, 40 bits, etc. per pixel. When more bits are assigned to representing the RGB color values for each pixel, a larger number of colors values are possible. However, the data associated with each image also increases. The number of possible colors can be referred to as the color depth. The video frames in the live video feed can be communicated to an image processing system that includes hardware and software components. The image processing system can include non-persistent memory, such as random-access memory (RAM) and video RAM (VRAM). In addition, processors, such as central processing units (CPUs) and graphical processing units (GPUs) for operating on video data and communication busses and interfaces for transporting video data can be provided. Further, hardware and/or software for performing transformations on the video data in a live video feed can be provided. 
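The relationship between resolution, color depth, and data volume mentioned above can be made concrete with a short calculation; the 512 by 512 frame size, 24-bit color depth, and 24 fps frame rate below simply echo the example figures in the text.

def frame_bytes(width: int, height: int, bits_per_pixel: int) -> int:
    """Raw size of one video frame in bytes for a given color depth."""
    return width * height * bits_per_pixel // 8

def stream_bandwidth_mbps(width: int, height: int,
                          bits_per_pixel: int, fps: int) -> float:
    """Uncompressed bandwidth of a live feed in megabits per second."""
    return width * height * bits_per_pixel * fps / 1_000_000

# A 512 x 512 frame at 24 bits per pixel (8 bits per RGB channel):
print(frame_bytes(512, 512, 24))                # 786432 bytes per frame
print(stream_bandwidth_mbps(512, 512, 24, 24))  # roughly 151 Mbit/s at 24 fps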
In particular embodiments, the video transformation components can include specialized hardware elements configured to perform functions necessary to generate a synthetic image derived from the native video data and then augmented with virtual data. In data encryption, specialized hardware elements can be used to perform a specific data transformation, i.e., data encryption associated with a specific algorithm. In a similar manner, specialized hardware elements can be provided to perform all or a portion of a specific video data transformation. These video transformation components can be separate from the GPU(s), which are specialized hardware elements configured to perform graphical operations. All or a portion of the specific transformation on a video frame can also be performed using software executed by the CPU. The processing system can be configured to receive a video frame with first RGB values at each pixel location and apply an operation to determine second RGB values at each pixel location. The second RGB values can be associated with a transformed video frame which includes synthetic data. After the synthetic image is generated, the native video frame and/or the synthetic image can be sent to a persistent memory, such as a flash memory or a hard drive, for storage. In addition, the synthetic image and/or native video data can be sent to a frame buffer for output on a display or displays associated with an output interface. For example, the display can be the display on a mobile device or a view finder on a camera. In general, the video transformations used to generate synthetic images can be applied to the native video data at its native resolution or at a different resolution. For example, the native video data can be a 512 by 512 array with RGB values represented by 24 bits and at a frame rate of 24 fps. In some embodiments, the video transformation can involve operating on the video data in its native resolution and outputting the transformed video data at the native frame rate at its native resolution. In other embodiments, to speed up the process, the video transformations may involve operating on video data and outputting transformed video data at resolutions, color depths and/or frame rates different than the native resolutions. For example, the native video data can be at a first video frame rate, such as 24 fps. But, the video transformations can be performed on every other frame and synthetic images can be output at a frame rate of 12 fps. Alternatively, the transformed video data can be interpolated from the 12 fps rate to the 24 fps rate by interpolating between two of the transformed video frames. In another example, prior to performing the video transformations, the resolution of the native video data can be reduced. For example, when the native resolution is 512 by 512 pixels, it can be interpolated to a 256 by 256 pixel array using a method such as pixel averaging and then the transformation can be applied to the 256 by 256 array. The transformed video data can be output and/or stored at the lower 256 by 256 resolution. Alternatively, the transformed video data, such as with a 256 by 256 resolution, can be interpolated to a higher resolution, such as its native resolution of 512 by 512, prior to output to the display and/or storage. The coarsening of the native video data prior to applying the video transformation can be used alone or in conjunction with a coarser frame rate. As mentioned above, the native video data can also have a color depth.
The color depth can also be coarsened prior to applying the transformations to the video data. For example, the color depth might be reduced from 40 bits to 24 bits prior to applying the transformation. As described above, native video data from a live video can be augmented with virtual data to create synthetic images and then output in real-time. In particular embodiments, real-time can be associated with a certain amount of latency, i.e., the time between when the native video data is captured and the time when the synthetic images including portions of the native video data and virtual data are output. In particular, the latency can be less than 100 milliseconds. In other embodiments, the latency can be less than 50 milliseconds. In other embodiments, the latency can be less than 30 milliseconds. In yet other embodiments, the latency can be less than 20 milliseconds. In yet other embodiments, the latency can be less than 10 milliseconds. The interface611may include separate input and output interfaces, or may be a unified interface supporting both operations. Examples of input and output interfaces can include displays, audio devices, cameras, touch screens, buttons and microphones. When acting under the control of appropriate software or firmware, the processor601is responsible for such tasks as optimization. Various specially configured devices can also be used in place of a processor601or in addition to processor601, such as graphical processor units (GPUs). The complete implementation can also be done in custom hardware. The interface611is typically configured to send and receive data packets or data segments over a network via one or more communication interfaces, such as wireless or wired communication interfaces. Particular examples of interfaces the device supports include Ethernet interfaces, frame relay interfaces, cable interfaces, DSL interfaces, token ring interfaces, and the like. In addition, various very high-speed interfaces may be provided such as fast Ethernet interfaces, Gigabit Ethernet interfaces, ATM interfaces, HSSI interfaces, POS interfaces, FDDI interfaces and the like. Generally, these interfaces may include ports appropriate for communication with the appropriate media. In some cases, they may also include an independent processor and, in some instances, volatile RAM. The independent processors may control such communications intensive tasks as packet switching, media control and management. According to various embodiments, the system600uses memory603to store data and program instructions and maintain a local side cache. The program instructions may control the operation of an operating system and/or one or more applications, for example. The memory or memories may also be configured to store received metadata and batch requested metadata. The system600can be integrated into a single device with a common housing. For example, system600can include a camera system, processing system, frame buffer, persistent memory, output interface, input interface and communication interface. In various embodiments, the single device can be a mobile device like a smart phone, an augmented reality and wearable device like Google Glass™ or a virtual reality headset that includes multiple cameras, like a Microsoft Hololens™. In other embodiments, the system600can be partially integrated. For example, the camera system can be a remote camera system. As another example, the display can be separate from the rest of the components like on a desktop PC.
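The reduced-resolution transformation pipeline described earlier, in which native frames are coarsened by pixel averaging, transformed, and then interpolated back up for display or storage, can be sketched as follows. The 2x factor, the nearest-neighbour upsampling, and the placeholder brightening transform are assumptions made only for illustration.

import numpy as np

def downsample_2x(frame: np.ndarray) -> np.ndarray:
    """Halve the resolution by 2x2 pixel averaging, as described above."""
    h, w, c = frame.shape
    return frame[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample_2x(frame: np.ndarray) -> np.ndarray:
    """Return to the native resolution by nearest-neighbour repetition."""
    return frame.repeat(2, axis=0).repeat(2, axis=1)

def transform_at_reduced_resolution(frame: np.ndarray, transform) -> np.ndarray:
    """Apply an expensive per-frame transformation on a coarsened copy of the
    native frame, then interpolate back up for display and/or storage."""
    small = downsample_2x(frame.astype(float))
    return upsample_2x(transform(small))

# Usage: a placeholder "synthetic image" transform that merely brightens the frame.
native = np.random.randint(0, 256, size=(512, 512, 3)).astype(float)
out = transform_at_reduced_resolution(native, lambda f: np.clip(f * 1.2, 0, 255))
print(out.shape)  # (512, 512, 3), transformed internally at 256 x 256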
In the case of a wearable system, like a head-mounted display, as described above, a virtual guide can be provided to help a user record a MVIDMR. In addition, a virtual guide can be provided to help teach a user how to view a MVIDMR in the wearable system. For example, the virtual guide can be provided in synthetic images output to head mounted display which indicate that the MVIDMR can be viewed from different angles in response to the user moving some manner in physical space, such as walking around the projected image. As another example, the virtual guide can be used to indicate a head motion of the user can allow for different viewing functions. In yet another example, a virtual guide might indicate a path that a hand could travel in front of the display to instantiate different viewing functions. FIGS.7-20illustrate images presented in a user interface and illustrating the collection and aggregation of information from visual data. In particular embodiments, the information may be used for detecting damage to an object. In various embodiments described herein, the object is referred to as being a vehicle. However, information about various types of objects may be captured. InFIG.7, a camera at a mobile computing device such as a mobile phone is pointed at the side of the vehicle. One or more images are captured. On the right, status bars corresponding to different portions of the vehicle are presented. In some embodiments, a status bar illustrates a confidence level in a damage detection estimate corresponding to the identified components. For example, because the camera is focused on the right side of the vehicle but has not yet focused on the left side of the vehicle, the confidence level for the right quarter panel is high, while the confidence level for the left quarter panel is zero. In some embodiments, a status bar illustrates a degree of coverage of an object or a portion of an object. For instance, a status bar may increase in value as image data of more of the object is captured. In particular embodiments, two or more status bars may be shown. For instance, one status bar may correspond to a confidence level, while another status bar may correspond to image data coverage. InFIG.7, different portions of the vehicle are shown with overlaying blocks of different colors. According to various embodiments, colors may be used to identify various types of information about the vehicle. For example, different colors may correspond with different vehicle components, different degrees of image data coverage, different degrees of image data completeness, and/or different confidence levels related to an estimated value such as detected damage. InFIG.8, a top-down view corresponding toFIG.7is shown, with color, as it may appear in an actual user interface. The top-down view again shows different colors corresponding to different components of the vehicle that have been captured in the visual data. As inFIG.6, only the right side of the vehicle has been captured in the visual data as of this time. FIGS.9,11, and13show subsequent views as the camera has been moved around the vehicle first toward the front, then along the right side, and then at the back.FIGS.10,12, and14show the corresponding top-down views illustrating the portions of the vehicle that have been captured. By the time the image shown inFIG.13has been captured, nearly all components of the vehicle have been captured with a high degree of confidence, as shown in the status bars inFIGS.13and14. 
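The per-component coverage levels reflected in the status bars of FIGS. 7-14 could be computed along the lines of the following sketch. The detail scores, thresholds, and component names are illustrative assumptions rather than the actual quantities used to drive the interface.

from typing import Dict, List

def component_coverage(observations: List[Dict[str, float]],
                       components: List[str],
                       min_viewpoints: int = 3,
                       min_detail: float = 0.5) -> Dict[str, float]:
    """Compute a 0-1 coverage level per component from per-image observations,
    where each observation maps a component name to a rough detail score
    (e.g. the fraction of the frame the component occupies). A component is
    fully covered once enough sufficiently detailed viewpoints exist."""
    coverage = {}
    for component in components:
        good_views = sum(1 for obs in observations
                         if obs.get(component, 0.0) >= min_detail)
        coverage[component] = min(1.0, good_views / min_viewpoints)
    return coverage

# Usage: the right quarter panel was seen well in three frames, the left in none.
obs = [{"right_quarter_panel": 0.7}, {"right_quarter_panel": 0.6},
       {"right_quarter_panel": 0.9, "hood": 0.4}]
print(component_coverage(obs, ["right_quarter_panel", "left_quarter_panel"]))
# {'right_quarter_panel': 1.0, 'left_quarter_panel': 0.0}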
InFIG.10, an abstract top-down diagram is shown at1002, with overlain regions such as1006illustrating captured areas. Coverage levels are then illustrated at1004, with larger and shaded bars indicating higher levels. InFIGS.12and14, coverage levels and overlain regions have increased due to the capture of additional images. FIGS.7,9,11, and13are particular images from a sequence of images captured as the camera is moved around the vehicle. Multi-view image data of a vehicle may include different or additional images. For instance, an image may be captured automatically at any suitable interval, such as once per second. FIGS.15-16illustrate a progression of images similar to those shown inFIGS.7-14around a different vehicle in a different user interface. InFIGS.15-18, the top-down view is presented in the lower right of the Figure. In addition, the status bars include percentages illustrating a confidence level associated with the captured data. In contrast toFIGS.7-14, the inset top-down view inFIGS.15-18illustrates detected damage rather than image data coverage. For example, inFIG.17, damage has been detected to the back and right sides of the vehicle. The detected damage is illustrated both on the top-down view as a heatmap and the image as overlain colored areas. FIG.19andFIG.20illustrate a progression of images similar to those shown inFIGS.15-18around a different vehicle. As inFIGS.15-18, confidence values for vehicle components increase as additional visual data is captured. Detected damage is shown on the inset heatmap. Any of the disclosed implementations may be embodied in various types of hardware, software, firmware, computer readable media, and combinations thereof. For example, some techniques disclosed herein may be implemented, at least in part, by computer-readable media that include program instructions, state information, etc., for configuring a computing system to perform various services and operations described herein. Examples of program instructions include both machine code, such as produced by a compiler, and higher-level code that may be executed via an interpreter. Instructions may be embodied in any suitable language such as, for example, Java, Python, C++, C, HTML, any other markup language, JavaScript, ActiveX, VBScript, or Perl. Examples of computer-readable media include, but are not limited to: magnetic media such as hard disks and magnetic tape; optical media such as flash memory, compact disk (CD) or digital versatile disk (DVD); magneto-optical media; and other hardware devices such as read-only memory (“ROM”) devices and random-access memory (“RAM”) devices. A computer-readable medium may be any combination of such storage devices. In the foregoing specification, various techniques and mechanisms may have been described in singular form for clarity. However, it should be noted that some embodiments include multiple iterations of a technique or multiple instantiations of a mechanism unless otherwise noted. For example, a system uses a processor in a variety of contexts but can use multiple processors while remaining within the scope of the present disclosure unless otherwise noted. Similarly, various techniques and mechanisms may have been described as including a connection between two entities. However, a connection does not necessarily mean a direct, unimpeded connection, as a variety of other entities (e.g., bridges, controllers, gateways, etc.) may reside between the two entities. 
In the foregoing specification, reference was made in detail to specific embodiments including one or more of the best modes contemplated by the inventors. While various implementations have been described herein, it should be understood that they have been presented by way of example only, and not limitation. For example, some techniques and mechanisms are described herein in the context of on-demand computing environments that include MTSs. However, the techniques disclosed herein apply to a wide variety of computing environments. Particular embodiments may be implemented without some or all of the specific details described herein. In other instances, well known process operations have not been described in detail in order to avoid unnecessarily obscuring the disclosed techniques. Accordingly, the breadth and scope of the present application should not be limited by any of the implementations described herein, but should be defined only in accordance with the claims and their equivalents. | 50,250 |
11861901 | DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS Generally, this disclosure enables various technologies for acting based on distance sensing and is now described more fully with reference toFIGS.1-40, in which some embodiments of this disclosure are shown. This disclosure may, however, be embodied in many different forms and should not be construed as necessarily being limited to only embodiments disclosed herein. Rather, these embodiments are provided so that this disclosure is thorough and complete, and fully conveys various concepts of this disclosure to skilled artisans. Note that various terminology used herein can imply direct or indirect, full or partial, temporary or permanent, action or inaction. For example, when an element is referred to as being “on,” “connected” or “coupled” to another element, then the element can be directly on, connected or coupled to the other element or intervening elements can be present, including indirect or direct variants. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present. Likewise, as used herein, a term “or” is intended to mean an inclusive “or” rather than an exclusive “or.” That is, unless specified otherwise, or clear from context, “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then “X employs A or B” is satisfied under any of the foregoing instances. In addition, features described with respect to certain embodiments may be combined in or with various other embodiments in any permutational or combinatory manner. Different aspects or elements of example embodiments, as disclosed herein, may be combined in a similar manner. The term “combination”, “combinatory,” or “combinations thereof” as used herein refers to all permutations and combinations of listed items preceding that term. For example, “A, B, C, or combinations thereof” is intended to include at least one of: A, B, C, AB, AC, BC, or ABC, and if order is important in a particular context, also BA, CA, CB, CBA, BCA, ACB, BAC, or CAB. Continuing with this example, expressly included are combinations that contain repeats of one or more item or term, such as BB, AAA, AB, BBC, AAABCCCC, CBBAAA, CABABB, and so forth. A skilled artisan will understand that typically there is no limit on a number of items or terms in any combination, unless otherwise apparent from the context. Similarly, as used herein, various singular forms “a,” “an” and “the” are intended to include various plural forms as well, unless context clearly indicates otherwise. For example, a term “a” or “an” shall mean “one or more,” even though a phrase “one or more” is also used herein. Moreover, terms “comprises,” “includes” or “comprising,” “including” when used in this specification, specify a presence of stated features, integers, steps, operations, elements, or components, but do not preclude a presence and/or addition of one or more other features, integers, steps, operations, elements, components, or groups thereof. Furthermore, when this disclosure states that something is “based on” something else, then such statement refers to a basis which may be based on one or more other things as well. 
In other words, unless expressly indicated otherwise, as used herein “based on” inclusively means “based at least in part on” or “based at least partially on.” Additionally, although terms first, second, and others can be used herein to describe various elements, components, regions, layers, or sections, these elements, components, regions, layers, or sections should not necessarily be limited by such terms. Rather, these terms are used to distinguish one element, component, region, layer, or section from another element, component, region, layer, or section. As such, a first element, component, region, layer, or section discussed below could be termed a second element, component, region, layer, or section without departing from this disclosure. Also, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in an art to which this disclosure belongs. As such, terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in a context of a relevant art and should not be interpreted in an idealized or overly formal sense unless expressly so defined herein. FIG.1shows a schematic diagram of an embodiment of a device with a distance sensing unit according to this disclosure. In particular, a system100includes a housing102, a processor,104, a memory106, a distance sensing unit (DSU)108, a DSU110, and an output device112. The housing102houses the processor104, the memory106, the DSU108, the DSU110, and the output device112. For example, the housing102can house externally, internally, or others, such as when the housing102is at least physically coupled to at least one of such components, such as via fastening, mating, interlocking, adhering, magnetizing, suctioning, stitching, stapling, nailing, or other forms of physical coupling. The housing102can be rigid, flexible, elastic, solid, perforated, or others. For example, the housing102can include a plastic, a metal, a rubber, a wood, a precious metal, a precious stone, a fabric, a rare-earth element, or others. For example, the housing102can include, be physically or electrically coupled to, be a component of, or embodied as a desktop, a laptop, a tablet, a smartphone, a joystick, a videogame console, a camera, a microphone, a speaker, a keyboard, a mouse, a touchpad, a trackpad, a sensor, a display, a printer, an additive or subtractive manufacturing machine, a wearable, a vehicle, a furniture item, a plumbing tool, a construction tool, a mat, a firearm/rifle, a laser pointer, a scope, a binocular, an electrical tool, a drill, an impact driver, a flashlight, an engine, an actuator, a solenoid, a toy, a pump, or others. For example, the wearable includes a head-mounted display (e.g., virtual reality headset, augmented reality headset), a watch, a wrist-worn activity tracker, a hat, a helmet, a earbud, a hearing aid, a headphone, an eyewear frame, an eye lens, a band, a garment, a shoe, a jewelry item, a medical device, an activity tracker, a swimsuit, a bathing suit, a snorkel, a scuba breathing apparatus, a swimming leg fin, a handcuff, an implant, or any other device that can be worn on or in a body (including hair) of an animal, such a human, a dog, a cat, a bird, a fish, or any other, whether domesticated or undomesticated, whether male or female, whether elderly, adult, teen, toddler, infant, or others. 
For example, the garment can include a jacket, a shirt, a tie, a belt, a band, a pair of shorts, a pair of pants, a sock, an undershirt, an underwear item, a bra, a jersey, a skirt, a dress, a blouse, a sweater, a scarf, a glove, a bandana, an elbow pad, a kneepad, a pajama, a robe, or others. For example, the jewelry item can include an earring, a necklace, a ring, a bracelet, a pin, a brooche, or others, whether worn on a body or clothing. For example, the shoe can include a dress shoe, a sneaker, a boot, a heeled shoe, a roller skate, a rollerblade, or others. In some embodiments, the memory106, the DSU108, the DSU110, and the output device112are supported via at least one a platform or a frame. For example, at least one of the platform or the frame can support externally, internally, or others, such as when at least one of the platform or the frame is at least physically coupled to at least one of such components, such as via fastening, mating, interlocking, adhering, magnetizing, suctioning, stitching, stapling, nailing, or other forms of physical coupling. At least one of the platform or the frame can be rigid, flexible, elastic, solid, perforated, or others. For example, at least one of the platform or the frame can include a plastic, a metal, a rubber, a wood, a precious metal, a precious stone, a fabric, a rare-earth element, or others. The processor104can include a single core or a multicore processor. The processor104can include a system-on-chip (SOC) or an application-specific-integrated-circuit (ASIC). The processor104is powered via an accumulator, such as a battery or others, whether the accumulator is housed or is not housed via the housing102. The processor104is in communication with the memory106, the DSU108, the DSU110, and the output device112. The memory106can include a read-only-memory (ROM), a random-access-memory (RAM), a hard disk drive, a flash memory, or others. The memory106is powered via an accumulator, such as a battery or others, whether the accumulator is housed or is not housed via the housing102. The output device112can include a light source, a sound source, a radio wave source, a vibration source, a display, a speaker, a printer, a transmitter, a transceiver, or others. The output device112is powered via an accumulator, such as a battery or others, whether the accumulator is housed or is not housed via the housing102. The DSU108can include at least one of a radar unit, a lidar unit, a sonar unit, or others, whether wired or wireless. For example, the radar unit can include a digital radar unit (DRU), as disclosed in U.S. Pat. No. 9,019,150, which is incorporated by reference herein for all purposes including any DSU or DRU systems, structures, environments, configurations, techniques, algorithms, or others. For example, the DRU unit can be embodied as disclosed in U.S. Pat. No. 9,019,150 column 7, line 33 through column 17, line 3; column 30, line 38 through column 32, line 30; and column 41, line 60 through column 44, line 46, along with and in light of FIGS. 2, 3A, 3B, 14, 27A, and 27B of that patent. For example, the DSU can be line-of-sight based or non-line-of-sight based. For example, the DSU can be based on a radio signal, an optical signal, a sound signal, or others modalities of sensing. Note that the system100can include more than one DSU108up to DSU n110. For example, the system100can include at least two, three, four, five, six, seven, eight, nine, ten, scores, tens, fifties, hundreds, thousands, millions, or more DSU108embodied as DSU n110. 
Therefore, in such configurations, at least two of the DSUs are not in sync with each other and do not interfere with each other, but are able to receive echoes or signals from each other, as explained in U.S. Pat. No. 9,019,150 referenced above and incorporated by reference herein for all purposes including any DSU or DRU systems, structures, environments, configurations, techniques, algorithms, or others. For example, the DSUs108-110ncan be identical to or different from each other in structure, function, operation, modality, positioning, materials, or others. FIG.2shows a schematic diagram of an embodiment of a defined area containing a device and an object according to this disclosure. In particular, a system200includes a housing202, an object204, and a defined area206. The defined area206contains the housing202and the object204. For example, the defined area206can include a physically fenced area, a digitally fenced area, a geo-fenced area, a building (residential/commercial), a garage, a basement, a vehicle (land/marine/aerial/satellite), an indoor area, an outdoor area, a mall, a school, a cubicle grid, a utility room, a walk-in refrigerator, a restaurant, a coffee shop, a subway or bus or train station, an airport, a barracks, a camp site, a house of worship, a gas station, an oil field, a refinery, a warehouse, a farm, a laboratory, a library, a long term storage facility, an industrial facility, a post office, a shipping hub or station, a supermarket, a retail store, a home improvement center, a parking lot, a toy store, a manufacturing plant, a processing plant, a pool, a hospital, a medical facility, an energy plant, a nuclear reactor, or others. The housing202can be the housing102or another object. The housing202can be mobile or stationary within the defined area206with respect to the object204or the defined area206. The housing202can be secured within the defined area206, such as via fastening, mating, interlocking, adhering, magnetizing, suctioning, stitching, stapling, nailing, or other forms of physical coupling to the defined area206or an object positioned within or extending into the defined area206. The object204can be secured within the defined area206, such as via fastening, mating, interlocking, adhering, magnetizing, suctioning, stitching, stapling, nailing, or other forms of physical coupling to the defined area206or another object positioned within or extending into the defined area206. The object204can be the housing102or another object. For example, the defined area206can be the object204. The object204can be mobile or stationary within the defined area206with respect to the housing202or the defined area206. The defined area206can be movable with respect to the housing202or the object204. The object204may form or be a part of a boundary of the defined area206. For example, the defined area206may be a fenced area and the object204may be a fence that forms the boundary. FIG.3shows a flowchart of an embodiment of a method for acting based on a position of a device within a defined area according to this disclosure. In particular, a method300includes the housing202positioned within the defined area206and the object204positioned within the defined area206. The housing202is embodied as the housing102. For example, the housing202is a head-mounted display or an eyewear unit, the object204is a furniture item, and the defined area206is a room. In block302, the processor104requests the DSU108to obtain a reading based on the object204.
The reading can be based on an echo received via the DSU108off the object204when the DSU108emitted a signal, which can be toward the object204. The reading can be based on a signal emitted via the object204. Once the DSU108obtains the reading, then that reading is available to the processor104. In block304, the processor104determines a position of the housing202relative to the object204within the defined area206based on the reading. For example, the position is determined based on a time of flight when the reading is based on the echo received via the DSU108. Note that the position can be estimated or refined, such when other positional information is available, such as via being predetermined. In block306, the processor104takes an action based on the position. The action can include reading a data structure, writing a datum to a data structure, modifying a datum within a data structure, deleting a datum in a data structure, causing an input device to take an action, causing an output device to take an action, causing a signal to be generated, causing a signal to be sent, causing a signal to be received, or others. For example, the input device can include a camera, a microphone, a user input interface, a touch-enabled display, a receiver, a transceiver, a sensor, a hardware server, or others. For example, the output device can include a display, a speaker, a vibrator, an actuator, a valve, a pump, a transmitter, a transceiver, a hardware server, or others. For example, the signal can be sent outside the defined area206or inside the defined area206. For example, the signal can be sent to or received from the object204, the defined area206, or another device, whether local to or remote to the housing202, the object204, or the defined area206. For example, the datum can include information about the position or others. For example, the action based on the position can include changing velocity based on a determined or measured ambient property, such as at least one of position over time (velocity), velocity over time (acceleration), object path, trajectory, or others. In block308, the processor104can cause a content to be output, such as via the output device112. The content can be based on the position or include information about the position. For example, the content can include an audio containing a warning message, a direction message, a navigational content, an instructional content, or others. In block308, the processor104can cause a content to be modified, such as the content stored on the memory106or the content output via the output device112. The content can be based on the position or include information about the position. For example, the content can include an graphic containing a warning message, a direction message, a navigational content, an instructional content, or others. For example, the information about the position can include at least one of position over time (velocity), velocity over time (acceleration), object path, trajectory, or others. In block312, the processor104can cause a map of the defined area206to be formed based on the position, which can be in real-time. The map can symbolically depict a perimeter or periphery of the defined area206and can symbolically depict the housing202or the object204within the perimeter or periphery. The map can be stored on the memory106or remote from the housing202, such as via the object204or external to the defined area206, such as via a server or others. The map, when formed, can be presented via the output device112. 
For example, the map can be displayed via the output device112. In block314, the processor104can cause a path of the housing202or the object204within the defined area206to be determined based on the position, which can be in real-time. For example, the path can be symbolically depicted over a map. The path can correspond to a path already traveled by the housing202or the object204within the defined area206. The path can correspond to a path that the housing202should travel relative to the object204within the defined area206or external to the defined area206. For example, the path can enable a user of the housing202to navigate to a specified or predetermined point within the defined area206or external to the defined area206. In block316, the content that is output via the output device112is an augmented reality content based on the position. For example, the augmented reality content can include at least one of images or sound based on the position. For example, the augmented reality content can be a navigational content, a warning content, a directional content, an instructional content, a videogame content, an immersive experience content, an educational content, a shopping content, or others. The augmented reality content can be modifiable based on the position in real-time. In block316, the content that is output via the output device112is a virtual reality content based on the position. For example, the virtual reality content can include at least one of images or sound based on the position. For example, the virtual reality content can be a navigational content, a warning content, a directional content, an instructional content, a videogame content, an immersive experience content, an educational content, a shopping content, or others. The virtual reality content can be modifiable based on the position in real-time. For example, when the housing202is a head-mounted display or an eyewear unit, the virtual reality content can help a wearer of the head-mounted display or the eyewear unit to track a position thereof (e.g., inside-out tracking, outside-in tracking, with markers, without markers) or to avoid obstacles, such as via minimize walking into an obstacle, such as the object204, the defined area206, or others. FIG.4shows a schematic diagram of an embodiment of a defined area containing an object according to this disclosure. In particular, a system400includes a defined area402, a plurality of DSUs404-410, an object412, and a processor414. The defined area402can be as the defined area206, as explained above, or others. The defined area402contains the DSUs404-410and the object412. The processor414is external to the defined area402, but can be internal to the defined area402. The DSUs404-410can be as the DSU108or110, as explained above, or others. The DSUs404-410can be identical to or different from each other in structure, function, operation, modality, positioning, materials, or others. The DSUs404-410are not in sync with each other and do not interfere with each other, but are able to receive echoes or signals from each other, as explained in U.S. Pat. No. 9,019,150 referenced above and incorporated by reference herein for all purposes including any DSU systems, structures, environments, configurations, techniques, algorithms, or others. For example, the DSU unit can be embodied as disclosed in U.S. Pat. No. 
9,019,150 column 7, line 33 through column 17, line 3; column 30, line 38 through column 32, line 30; and column 41, line 60 through column 44, line 46, along with and in light of FIGS. 2, 3A, 3B, 14, 27A, and 27B of that patent. At least one of the DSUs404-410can be secured within the defined area402, such as via fastening, mating, interlocking, adhering, magnetizing, suctioning, stitching, stapling, nailing, or other forms of physical coupling to the defined area402or another object positioned within or extending into the defined area402. The object412can be as the housing202, the object204, or others, as explained above. The object412can be secured within the defined area402, such as via fastening, mating, interlocking, adhering, magnetizing, suctioning, stitching, stapling, nailing, or other forms of physical coupling to the defined area402or another object positioned within or extending into the defined area402. The processor414can be a central processing unit (CPU) or can be embodied as the processor104as explained above. The processor414is in communication with the DSUs404-410, whether wired or wireless. The defined area402can be movable with respect to the object412or the DSUs404-410or the processor414. FIG.5shows a flowchart of an embodiment of a method of acting based on a position of an object within a defined area according to this disclosure. In particular, a method500is performed via the system400. In block502, the processor414requests at least one of the DSUs404-410to obtain a reading based on the object412within the defined area402. The reading can be based on an echo received via the at least one of the DSUs404-410off the object412when the at least one of the DSUs404-410emitted a signal, which can be toward the object412. The reading can be based on a signal emitted via the object412. Once the at least one of the DSUs404-410obtains the reading, then that reading is available to the processor414. In block504, the processor414determines a position of the object412relative to the at least one of the DSUs404-410within the defined area402based on the reading. For example, the position is determined based on a time of flight when the reading is based on the echo received via the at least one of the DSUs404-410. Note that the position can be estimated or refined, such as when other positional information is available, such as via being predetermined. In block506, the processor414takes an action based on the position. The action can include reading a data structure, writing a datum to a data structure, modifying a datum within a data structure, deleting a datum in a data structure, causing an input device to take an action, causing an output device to take an action, causing a signal to be generated, causing a signal to be sent, causing a signal to be received, or others. For example, the input device can include a camera, a microphone, a user input interface, a touch-enabled display, a receiver, a transceiver, a sensor, a hardware server, or others. For example, the output device can include a display, a speaker, a vibrator, an actuator, a valve, a pump, a transmitter, a transceiver, a hardware server, or others. For example, the signal can be sent outside the defined area402or inside the defined area402. For example, the signal can be sent to or received from the object412, the defined area402, or another device, whether local to or remote to the object412or the defined area402. For example, the datum can include information about the position or others.
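As an illustration only, and not as the disclosed implementation of the method500, the following Python sketch shows one assumed way of converting a round-trip echo time reported by a DSU into a separation distance and of taking a simple position-based action in the manner of block504and block506; the propagation speed, the warning threshold, and the function names are assumptions introduced for this sketch.

```python
# A minimal sketch (not the patented implementation): derive a range from a
# DSU time-of-flight reading and act on it. The propagation speed assumes a
# radio-frequency (radar) DSU; the threshold and message text are illustrative.

SPEED_OF_LIGHT_M_S = 299_792_458.0


def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Convert a round-trip echo time into a one-way separation distance."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2.0


def act_on_position(round_trip_seconds: float, warn_below_m: float = 1.0) -> str:
    """Take an action (here, emit a warning message) based on the position."""
    separation_m = range_from_time_of_flight(round_trip_seconds)
    if separation_m < warn_below_m:
        return f"WARNING: obstacle {separation_m:.2f} m ahead"
    return f"clear: nearest object {separation_m:.2f} m away"


# Example: a 20-nanosecond round trip corresponds to roughly 3 m of separation.
print(act_on_position(20e-9))
```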
In block508, the processor414can cause a path of the object412within the defined area402to be determined based on the position, which can be in real-time. For example, the path can be symbolically depicted over a map. The path can correspond to a path already traveled by the object412within the defined area402. The path can correspond to a path that the object412should travel relative to the defined area402within the defined area402or external of the defined area402. For example, the path can enable a user of the object412to navigate to a specified or predetermined point within the defined area402or external to the defined area402. In block510, the processor414can cause a map of the defined area402to be formed based on the position, which can be in real-time. The map can symbolically depict a perimeter or periphery of the defined area402and can symbolically depict the object412within the perimeter or periphery. The map can be stored local to or remote from the processor414or remote from or local to the object412or external to the defined area402, such as via a server or others. The map, when formed, can be presented via an output device in communication with the processor414, such as via the output device112. For example, the map can be displayed via the output device112. In block512, each of the DSUs404-410is a cluster of DSUs. As such, the processor414can cause the DSUs404-410to concurrently obtain a plurality of readings such that a plurality of positions of each of the DSUs404-410is determined relative to each other. For example, if the object412moves between a pair of DSUs within a cluster of DSUs or between a pair of clusters of DSUs, then the pair of DSUs or the pair of clusters of DSUs can each report its own readings and the processor414can determine a pair of positions of the pair of DSUs or the pair of clusters of DSUs relative to the object412. The processor can compare the readings of the DSUs404-410to determine the position of the DSUs404-410. The processor is programmed such that the DSUs404-410are observing the same object, although variations are possible, such as when the DSUs404-410are observing different objects, whether inclusive or exclusive of the object. If the processor414already knows where the object412is within the defined area402, then the processor414can determine the pair of positions of the pair of DSUs or the pair of clusters of DSUs relative to the defined area402. Likewise, if the object412moves along a pair of DSUs within a cluster of DSUs or between a pair of clusters of DSUs such that the pair of DSUs within the cluster of DSUs or the pair of clusters of DSUs are on a same side relative to the object412, then similar determination can be performed. Note that a cluster of DSUs effectively reduces a need for setup positional calibration, i.e., at least two DSU within a cluster work in a triangulation manner. Also, at least two DSU clusters can share data between each other or send data to the processor414, such as a server or others, such the processor414can learn position and orientation of the at least two clusters, which effectively reduces a need for setup positional calibration. Further, note that a set of data sourced from a DSU cluster can be fused or combined with a set of data sourced from an inertial measurement unit (IMU), which can included in the object412or the DSU cluster, in order to enhance or further refine the set of data sourced from the DSU cluster. 
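The following Python sketch illustrates one assumed way, not necessarily the triangulation technique of the referenced patent or of this disclosure, in which three range readings from DSUs of a cluster at known coordinates could be combined into a two-dimensional position of an object by linear least squares; the DSU coordinates, the ranges, and the use of NumPy are illustrative assumptions.

```python
# A minimal sketch, under assumed known DSU coordinates, of combining at least
# three cluster ranges into a 2-D object position by linear least squares (one
# common multilateration formulation).
import numpy as np


def multilaterate_2d(dsu_xy: np.ndarray, ranges_m: np.ndarray) -> np.ndarray:
    """Estimate (x, y) of an object from ranges to DSUs at known positions.

    dsu_xy   : shape (n, 2) array of DSU coordinates, n >= 3
    ranges_m : shape (n,) array of measured DSU-to-object distances
    """
    x0, y0 = dsu_xy[0]
    r0 = ranges_m[0]
    # Subtracting the first range equation from the others linearizes the problem.
    a = 2.0 * (dsu_xy[1:] - dsu_xy[0])
    b = (r0**2 - ranges_m[1:] ** 2
         + np.sum(dsu_xy[1:] ** 2, axis=1) - (x0**2 + y0**2))
    solution, *_ = np.linalg.lstsq(a, b, rcond=None)
    return solution


# Example with three DSUs of a cluster at assumed positions (meters).
dsus = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
true_object = np.array([2.0, 1.0])
measured = np.linalg.norm(dsus - true_object, axis=1)
print(multilaterate_2d(dsus, measured))  # approximately [2.0, 1.0]
```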
Moreover, note that a DSU cluster or a plurality of DSU clusters can be configured as or used as or form a mesh network for DSUs. Additionally, a server can receive a set of DSU data from the object412relative to the DSUs404-410or the defined area402to refine or update positioning of the DSUs404-410relative to the object412or the defined area402or the object412relative to the DSUs404-410or the defined area402or the defined area402relative to the DSUs404-410or the object412. Moreover, the object412can collect its own set of DSU data relative to the DSUs404-410or the defined area402to refine or update its own set of DSU data. Furthermore, the set of DSU data that the server receives from the object412relative to the DSUs404-410or the defined area402, or that the object412collects on its own relative to the DSUs404-410or the defined area402, can be used to identify a placement of uncooperative targets or to enable tracking of the object412when the object412is out-of-range with on-board sensors. The server can be programmed to track one object, such as the object412, such as when that object is coming closer to the DSUs404-410or moving away from the DSUs404-410, whether or not the object is on opposite sides or along the DSUs404-410. This functionality can be extended to the server tracking multiple objects, whether or not those objects are moving in different directions from each other, and this form of programming the server will speed up a process of tracking the objects. In some embodiments, the object does not need to be in motion, if there is more than one object. If the object412moves and only one DSU of the DSUs404-410measures the distance to the object412, and another DSU of the DSUs404-410does not measure the distance to the object412, then the processor414can be programmed to exclude a coverage area of the second DSU (the one that did not detect the object412) of the DSUs404-410from the area402. If the object412moves along a path or trajectory and one DSU of the DSUs404-410can detect the object412at one time and then another DSU of the DSUs404-410at another time, such as 2, 3, 4, 5, 6, 7, 8, 9, 10 seconds or more, then the path or trajectory can be extended or interpolated, and the positions of the DSUs (the first DSU that detects the object412and the second DSU that detects the object412at another time) of the DSUs404-410can be extracted. For example, if a land vehicle, such as a car, a motorcycle, a tractor, a robot, or others, is moving along a straight road or some other defined surface or path, then the land vehicle is the object412. Two or more DSUs of the DSUs404-410can be placed on a side of the road or embedded into the road or placed above the road, including hovering over the road, to observe the land vehicle or others traveling on the road. The two DSUs of the DSUs404-410need not observe the land vehicle concurrently or at an exact same moment, if the land vehicle moves or travels down the road or some other defined surface or path, such as a street or others, in a straight line at approximately constant speed. Successive passes of the land vehicle can help to refine one or more estimates of the position(s) of the at least two DSUs of the DSUs404-410. Note that the road or some other defined surface or path can be rectilinear or non-rectilinear, such as arcuate, sinusoidal, pulsating, zigzag, or others, whether symmetrical or asymmetrical, whether open-shaped or closed-shaped.
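Purely as an assumption-laden illustration of the constant-speed scenario above, and not as the disclosed method, the following Python sketch shows how successive passes of the land vehicle could refine an estimate of the along-road spacing between two road-side DSUs of the DSUs404-410; the speeds, detection times, and function names are hypothetical.

```python
# A minimal sketch: each pass implies a spacing (speed times detection-time
# gap), and successive passes are averaged to refine the estimate.
from statistics import mean


def spacing_from_pass(speed_m_s: float, t_first_s: float, t_second_s: float) -> float:
    """Along-road spacing implied by one pass of a vehicle at constant speed."""
    return speed_m_s * (t_second_s - t_first_s)


def refined_spacing(passes: list[tuple[float, float, float]]) -> float:
    """Average the spacing estimates over successive passes (speed, t1, t2)."""
    return mean(spacing_from_pass(v, t1, t2) for v, t1, t2 in passes)


# Three assumed passes: (vehicle speed in m/s, time seen by DSU A, time seen by DSU B).
passes = [(13.9, 0.0, 3.6), (14.2, 0.0, 3.5), (13.5, 0.0, 3.7)]
print(f"estimated DSU spacing: {refined_spacing(passes):.1f} m")  # roughly 50 m
```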
If one or more DSUs of the DSUs404-410are monitoring an area and the object412reports to the processor414that the object412is present or moving in any direction along any plane, but those DSUs of the DSUs404-410do not detect the object412, then the processor414can determine that the area monitored by those DSUs of the DSUs404-410is clear of obstacles. For example, if a drone needs to land in an area, and the DSUs404-410do not detect an object in a landing zone, then the drone is free to land. Note that the DSUs404-410can be hosted via the drone or not hosted via the drone, such as via being land-based or marine-based or aerial-based, such as on another vehicle or support structure of that respective type. For example, another vehicle can include a car, a robot, a boat, a helicopter, a quadcopter, a tower, a post, a frame, a platform, or others. FIG.6Ashows a schematic diagram of an embodiment of a plurality of distance sensing unit clusters tracking an object traveling along the distance sensing unit clusters according to this disclosure.FIG.6Bshows a schematic diagram of an embodiment of a plurality of distance sensing unit clusters tracking an object traveling between the distance sensing unit clusters according to this disclosure. In particular, each of a system600A and600B includes a first DSU cluster602, as explained above, a second DSU cluster604, as explained above, and an object606. The first DSU cluster602and the second DSU cluster604are in communication with a processor, such as the processor414, a server, or others. The object606can be the housing202, the object204, the object412, or others. As shown inFIG.6A, in the system600A, the first DSU cluster602and the second DSU cluster604are positioned on a same side relative to the object606, as explained above, regardless of whether the object606is moving relative to the first DSU cluster602and the second DSU cluster604or vice versa. As shown inFIG.6B, in the system600B, the object606is positioned between the first DSU cluster602and the second DSU cluster604, as explained above, regardless of whether the object606is moving relative to the first DSU cluster602and the second DSU cluster604or vice versa. Therefore, as perFIGS.6A and6B, if the object606is moving along a travel path608relative to the first DSU cluster602and the second DSU cluster604or vice versa, then the first DSU cluster602and the second DSU cluster604can concurrently obtain a plurality of readings based on the object606moving along the travel path608and send the readings to the processor. As such, the processor can learn a position of each of the first DSU cluster602and the second DSU cluster604relative to the object606, in real-time, while the object606is moving along the travel path608, due to data sharing via the processor. Note that at least one of the system600A or the system600B can occur internal to or external to a defined area or without a defined area, as explained above. Also, note that when the object606includes a DSU, as explained above, the object606can position itself relative to at least one of the first DSU cluster602or the second DSU cluster604, such as in the system600A and the system600B. This technique also works where the first DSU cluster602and the second DSU cluster604are not overlapping each other in coverage, as described above. In some embodiments, the processor414can cause a content to be output, as explained above. The content can be based on the position or include information about the position.
For example, the content can include an audio containing a warning message, a direction message, a navigational content, an instructional content, or others. The processor414can cause a content to be modified, as explained above. The content can be based on the position or include information about the position. For example, the content can include an graphic containing a warning message, a direction message, a navigational content, an instructional content, or others. The content can include an augmented reality content based on the position. For example, the augmented reality content can include at least one of images or sound based on the position. For example, the augmented reality content can be a navigational content, a warning content, a directional content, an instructional content, a videogame content, an immersive experience content, an educational content, a shopping content, or others. The augmented reality content can be modifiable based on the position in real-time. The content can include a virtual reality content based on the position. For example, the virtual reality content can include at least one of images or sound based on the position. For example, the augmented reality content can be a navigational content, a warning content, a directional content, an instructional content, a videogame content, an immersive experience content, an educational content, a shopping content, or others. The virtual reality content can be modifiable based on the position in real-time. For example, when the object412is a head-mounted display or an eyewear unit, the virtual reality content can help a wearer of head-mounted display or the eyewear unit to track a position thereof (e.g., inside-out tracking, outside-in tracking, with markers, without markers) or to avoid obstacles, such as via minimize walking into an obstacle, such as the defined area402or others. FIG.7shows a schematic diagram of an embodiment of a defined area containing a distance sensing unit in communication with a processor and an object with a distance sensing unit in communication with the processor according to this disclosure. In particular, a system700includes a defined area702, a processor704, a DSU706, and an object708. The defined area702contains the processor704, the DSU706, and the object708, as explained above. The defined area702can be as the defined area402, as explained above, or others. The processor704can be positioned external to the defined area702. The processor704can be as the processor414, as explained above. The DSU706can be as any of the DSUs404-410, as explained above, or others. The object708can be as the object412, as explained above or others. The object708hosts a DSU, as explained above. The processor704is in communication with the DSU706and the object708. At least one of the processor704, the DSU706, or the object708can be secured within the defined area702, such as via fastening, mating, interlocking, adhering, magnetizing, suctioning, stitching, stapling, nailing, or other forms of physical coupling to the defined area702or an object positioned within or extending into the defined area702. The defined area702can be moving with respect to at least one of the processor704, the DSU706, or the object708. FIG.8shows a flowchart of an embodiment of method of acting based on a plurality of readings from a plurality of distance sensing units according to this disclosure. In particular, a method800can be performed via the system700. 
In block802, the processor704obtains a first DSU reading from the DSU706based on the object708moving within the defined area702. The first DSU reading can be based on an echo received via the DSU706off the object708when the DSU706emitted a signal, which can be toward the object708. The first DSU reading can be based on a signal emitted via the object708. Once the DSU706obtains the first DSU reading, then that reading is available to the processor704. In block804, the processor704obtains a second DSU reading from the object708based on the object708moving within the defined area702. The second DSU reading can be based on an echo received via the DSU of the object708off the defined area702when the object708emitted a signal, which can be toward the defined area702. The second DSU reading can be based on a signal emitted via the DSU706. Once the DSU of the object708obtains the second DSU reading, then that reading is available to the processor704. In block806, the processor704take an action based on the first DSU reading and the second DSU reading. The action can include determining a position of the object708relative to the defined area702or the DSU706or a position of the defined area702or the DSU706relative to the object708. For example, the position can be determined based on a time of flight when at least one of the first DSU reading or the second DSU reading is based on the echo received via at least one of the DSU706or the DSU of the object708. Note that the position can be estimated or refined, such when other positional information is available, such as via being predetermined. The processor704can take an action based on the position. The action can include reading a data structure, writing a datum to a data structure, modifying a datum within a data structure, deleting a datum in a data structure, causing an input device to take an action, causing an output device to take an action, causing a signal to be generated, causing a signal to be sent, causing a signal to be received, or others. For example, the input device can include a camera, a microphone, a user input interface, a touch-enabled display, a receiver, a transceiver, a sensor, a hardware server, or others. For example, the output device can include a display, a speaker, a vibrator, an actuator, a valve, a pump, a transmitter, a transceiver, a hardware server, or others. For example, the signal can be sent outside the defined area702or inside the defined area702. For example, the signal can be sent to or received from the object708, the defined area702, or another device, whether local to or remote to the object708or the defined area702. For example, the datum can include information about the position or others. In block808, the processor704can cause a path of the object708within the defined area702to be determined based on the position, which can be in real-time. For example, the path can be symbolically depicted over a map. The path can correspond to a path already traveled by the object708within the defined area702. The path can correspond to a path that the object708should travel relative to the defined area702within the defined area702or external of the defined area702. For example, the path can enable a user of the object708to navigate to a specified or predetermined point within the defined area702or external to the defined area702. In block810, the processor704can cause a map of the defined area702to be formed based on the position, which can be in real-time. 
The map can symbolically depict a perimeter or periphery of the defined area702and can symbolically depict the object708within the perimeter or periphery. The map can be stored local to or remote from the processor704or remote from or local to the object708or external to the defined area702, such as via a server or others. The map, when formed, can be presented via an output device in communication with the processor704, such as via the output device112. For example, the map can be displayed via the output device112. In block812, the DSU706is included in a first cluster of DSUs, as explained above. As such, the processor704can determine a position of the first cluster of DSUs with respect to a second cluster of DSUs, as explained above. In some embodiments, the processor704can cause a content to be output, as explained above. The content can be based on the position or include information about the position. For example, the content can include an audio containing a warning message, a direction message, a navigational content, an instructional content, or others. The processor704can cause a content to be modified, as explained above. The content can be based on the position or include information about the position. For example, the content can include an graphic containing a warning message, a direction message, a navigational content, an instructional content, or others. The content can include an augmented reality content based on the position. For example, the augmented reality content can include at least one of images or sound based on the position. For example, the augmented reality content can be a navigational content, a warning content, a directional content, an instructional content, a videogame content, an immersive experience content, an educational content, a shopping content, or others. The augmented reality content can be modifiable based on the position in real-time. The content can include a virtual reality content based on the position. For example, the virtual reality content can include at least one of images or sound based on the position. For example, the augmented reality content can be a navigational content, a warning content, a directional content, an instructional content, a videogame content, an immersive experience content, an educational content, a shopping content, or others. The virtual reality content can be modifiable based on the position in real-time. For example, when the object708is a head-mounted display or an eyewear unit, the virtual reality content can help a wearer of the head-mounted display or the eyewear unit to track a position thereof (e.g., inside-out tracking, outside-in tracking, with markers, without markers) or to avoid obstacles, such as via minimize walking into an obstacle, such as the defined area702or others. FIG.9shows a schematic diagram of an embodiment of a defined area containing a plurality of distance sensing unit clusters tracking an object traveling along the distance sensing unit clusters within the defined area according to this disclosure.FIG.10shows a schematic diagram of an embodiment of a defined area containing a plurality of distance sensing unit clusters tracking an object traveling between the distance sensing unit clusters within the defined area according to this disclosure. In particular, each of a system900A and900B includes a first DSU cluster904, as explained above, a second DSU cluster906, as explained above, and an object908. 
The first DSU cluster904and the second DSU cluster906are in communication with a processor, such as the processor704, a server, or others. The object908can be the housing202, the object204, the object412, or others. The object908hosts a DSU. As shown inFIG.9, in the system900A, the first DSU cluster904and the second DSU cluster906are positioned on a same side relative to the object908, as explained above, regardless of whether the object908is moving relative to the first DSU cluster904and the second DSU cluster906or vice versa. As shown inFIG.10, in the system900B, the object908is positioned between the first DSU cluster904and the second DSU cluster906, as explained above, regardless of whether the object908is moving relative to the first DSU cluster904and the second DSU cluster906or vice versa. Therefore, as perFIGS.9and10, if the object908is moving along a travel path910relative to the first DSU cluster904and the second DSU cluster906or vice versa, then the first DSU cluster904and the second DSU cluster906can concurrently obtain a plurality of readings based on the object908moving along the travel path910and send the readings to the processor. As such, the processor can learn a position of each of the first DSU cluster904and the second DSU cluster906relative to the object908, in real-time, while the object908is moving along the travel path910, due to data sharing via the processor. Note that at least one of the system900A or the system900B can occur internal to or external to a defined area or without a defined area, as explained above. Also, note that since the object908includes the DSU, as explained above, the object908can position itself relative to at least one of the first DSU cluster904or the second DSU cluster906, such as in the system900A and the system900B. This technique also works where the first DSU cluster904and the second DSU cluster906are not overlapping each other in coverage, as described above. In some embodiments, an object can include a land vehicle, such as an automobile, a motorcycle, a bus, a truck, a skateboard, a moped, a scooter, a bicycle, a tank, a tractor, a rail car, a locomotive, or others, where the land vehicle hosts a DSU. The land vehicle can collect a set of data from the DSU and share the set of data, which can be in real-time, with a land vehicle infrastructure item, such as a gas station, a charging station, a toll station, a parking meter, a drive-through-commerce station, an emergency service vehicle, a vehicle, which can be via a V2V protocol, a garage, a parking spot, a hydrant, a street sign, a traffic light, a load cell, a road-based wireless induction charger, a fence, a sprinkler, or others. When the land vehicle infrastructure item also hosts a DSU, then that DSU can also collect a set of data and share that set of data, which can be in real-time, with the land vehicle, as explained above. Such configurations can detect discrepancies, such as objects that the land vehicle infrastructure item is not aware of or does not know enough about. Also, as explained above, the land vehicle with the DSU can detect and thereby track consumer communication units, whether internal or external to the land vehicle, such as Wi-Fi enabled devices, such as smartphones, tablets, wearables, infotainment units, video gaming consoles, toys, or others, in order to determine its position or a position of a consumer communication unit.
For example, the land vehicle with the DSU can track its position relative to a plurality of consumer communication units based on where the consumer communication units are typically positioned. As such, when density or frequency of the consumer communications units is increased or decreased from a typical amount, then the land vehicle with the DSU can take an action or avoid taking an action, such as changing speed, slowing down, accelerating, stopping, operating a component of the vehicle, such as a window, infotainment system, sound a horn, siren, or alarm, opening/closing a door, a trunk, a hood, turn on windshield wipers, turn on regular or high beam lights, activate/deactivate parking/brake, navigate on road, swerve, turn, or others. One or more embodiments of the subject matter described herein relate to distance and/or motion sensing systems and methods, such as radar and/or optical remote sensing systems and methods. Known radar systems transmit analog electromagnetic waves toward targets and receive echoes of the waves that reflect off the targets. Based on the distance between antennas that transmit the analog waves and the target objects, and/or movement of the target objects, the strength and/or frequency of the received echoes may change. The strength, frequency, and/or time-of-flight of the echoes may be used to derive the distance to the targets and/or movement of the targets. Some known radar systems are limited in the accuracy at which the systems can measure distances to the targets. For example, the resolution at which these systems may be able to calculate the distance to targets may be relatively large. Moreover, some of these systems may have circuitry, such as a transmit/receive switch, that controls when the systems transmit waves or receive echoes. The switch can require a nonzero period of time to allow the systems to switch from transmitting waves to receiving echoes. This period of time may prevent the systems from being used to measure distances to targets that are relatively close, as the transmitted waves may reflect off the targets back to the receiving antennas before the systems can switch from transmission to reception. Additionally, some known systems have energy leakage from the transmitting antenna to the receiving antenna. This energy leakage can interfere with and/or obscure the measurement of distances to the targets and/or the detection of motion. In one embodiment, a method (e.g., a method for measuring a separation distance to a target object) is provided. The method includes transmitting an electromagnetic first transmitted signal from a transmitting antenna toward a target object that is a separated from the transmitting antenna by a separation distance. The first transmitted signal includes a first transmit pattern representative of a first sequence of digital bits. The method also includes receiving a first echo of the first transmitted signal that is reflected off the target object, converting the first echo into a first digitized echo signal, and comparing a first receive pattern representative of a second sequence of digital bits to the first digitized echo signal to determine a time of flight of the first transmitted signal and the echo. In another embodiment, a system (e.g., a sensing system) is provided that includes a transmitter, a receiver, and a correlator device. 
The transmitter is configured to generate an electromagnetic first transmitted signal that is communicated from a transmitting antenna toward a target object that is a separated from the transmitting antenna by a separation distance. The first transmitted signal includes a first transmit pattern representative of a sequence of digital bits. The receiver is configured to generate a first digitized echo signal that is based on an echo of the first transmitted signal that is reflected off the target object. The correlator device is configured to compare a first receive pattern representative of a second sequence of digital bits to the first digitized echo signal to determine a time of flight of the first transmitted signal and the echo. In another embodiment, another method (e.g., for measuring a separation distance to a target object) is provided. The method includes transmitting a first transmitted signal having waveforms representative of a first transmit pattern of digital bits and generating a first digitized echo signal based on a first received echo of the first transmitted signal. The first digitized echo signal includes waveforms representative of a data stream of digital bits. The method also includes comparing a first receive pattern of digital bits to plural different subsets of the data stream of digital bits in the first digitized echo signal to identify a subset of interest that more closely matches the first receive pattern than one or more other subsets. The method further includes identifying a time of flight of the first transmitted signal and the first received echo based on a time delay between a start of the data stream in the first digitized echo signal and the subset of interest. In accordance with one or more embodiments of the presently described subject matter, systems and methods are provided for determining distances between a sensing apparatus and one or more targets. The distances may be determined by measuring times of flight of transmitted signals (e.g., radar, light, or other signals) that reflect off the targets. As one example, a signal that includes a known or designated transmit pattern (such as waveforms that represent a sequence of bits) is transmitted and echoes of this signal are received. This transmit pattern can be referred to as a coarse stage transmit pattern. The echoes may include information representative of the pattern in the transmitted signal. For example, the echoes may be received and digitized to identify a sequence or stream of data that is representative of noise, partial reflections of the transmitted signal off one or more objects other than the target, and reflections off the target. A coarse stage receive pattern can be compared to the digitized data stream that is based on the received echoes to determine a time of flight of the transmitted signal. The coarse stage receive pattern can be the same as the transmit pattern or differ from the transmit pattern by having a different length and/or sequence of bits (e.g., “0” and “1”). The coarse stage receive pattern is compared to different portions of the digitized data stream to determine which portion of the data stream more closely matches the receive pattern than one or more other portions. For example, the coarse stage receive pattern may be shifted (e.g., with respect to time) along the data stream to identify a portion of the data stream that matches the coarse stage receive pattern. 
A time delay between the start of the data stream and the matching portion of the coarse stage receive pattern may represent the time of flight of the transmitted signal. This measurement of the time of flight may be used to calculate a separation distance to the target. As described below, this process for measuring the time of flight may be referred to as coarse stage determination of the time of flight. The coarse stage determination may be performed once or several times in order to measure the time of flight. For example, a single “burst” of a transmitted signal may be used to measure the time of flight, or several “bursts” of transmitted signals (having the same or different transmit patterns) may be used. A fine stage determination may be performed in addition to or in place of the coarse stage determination. The fine stage determination can include transmitting one or more additional signals (e.g., “bursts”) toward the target and generating one or more baseband echo signals based on the received echoes of the signals. The additional signals may include a fine stage transmit pattern that is the same or different pattern as the coarse stage transmit pattern. The fine stage determination can use the time of flight measured by the coarse stage determination (or as input by an operator) and compare a fine stage receive pattern that is delayed by the measured time of flight to a corresponding portion of the data stream. For example, instead of shifting the fine stage receive pattern along all or a substantial portion of the baseband echo signal, the fine stage receive pattern (or a portion thereof) can be time shifted by an amount that is equal to or based on the time delay measured by the coarse stage determination. Alternatively, the fine stage receive pattern may be shifted along all or a substantial portion of the baseband echo signal. The time shifted fine stage receive pattern can be compared to the baseband echo signal to determine an amount of overlap or, alternatively, an amount of mismatch between the waveforms of the time-shifted fine stage receive pattern and the baseband echo signal. This amount of overlap or mismatch may be translated to an additional time delay. The additional time delay can be added with the time delay measured by the coarse stage determination to calculate a fine stage time delay. The fine stage time delay can then be used to calculate a time of flight and separation distance to the target. In one embodiment, an ultrafine stage determination may be performed in addition to or in place of the coarse stage determination and/or the fine stage determination. The ultrafine stage determination can involve a similar process as the fine stage determination, but using a different component of the receive pattern and/or the data stream. For example, the fine stage determination may examine the in-phase (I) component or channel of the receive pattern and the data stream to measure the overlap or mismatch between the receive pattern and the data stream. The ultrafine stage determination can use the quadrature (Q) component or channel of the receive pattern and the data stream to measure an additional amount of overlap or mismatch between the waveforms of the receive pattern and the data stream. Alternatively, the ultrafine stage determination may separately examine the I channel and Q channel of the receive pattern and the data stream. The use of I and Q channels or components is provided as one example embodiment. 
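As a non-limiting Python sketch of the coarse stage determination described above, the following example slides a receive pattern of bits along a digitized echo data stream, scores each offset by correlation, and converts the best-matching offset into a time of flight and a separation distance; the bit (chip) period, the noise model, and the pattern length are assumptions chosen for illustration rather than parameters of the disclosed system.

```python
# A minimal sketch of the coarse-stage idea: correlate a +/-1 receive pattern
# against each equal-length subset of the digitized data stream, pick the
# best-matching offset, and convert that offset into time of flight and range.
import numpy as np

SPEED_OF_LIGHT_M_S = 299_792_458.0


def coarse_stage_delay(data_stream: np.ndarray, receive_pattern: np.ndarray,
                       bit_period_s: float) -> tuple[float, float]:
    """Return (time_of_flight_s, separation_distance_m) from the best-matching subset."""
    n = len(receive_pattern)
    scores = [np.dot(data_stream[lag:lag + n], receive_pattern)
              for lag in range(len(data_stream) - n + 1)]
    best_lag = int(np.argmax(scores))
    time_of_flight = best_lag * bit_period_s
    return time_of_flight, SPEED_OF_LIGHT_M_S * time_of_flight / 2.0


# Build a toy digitized echo: the transmit pattern appears 40 bits into the
# stream, buried in noise (partial reflections are ignored for brevity).
rng = np.random.default_rng(0)
pattern = rng.choice([-1.0, 1.0], size=32)
stream = 0.3 * rng.standard_normal(200)
stream[40:72] += pattern
tof, dist = coarse_stage_delay(stream, pattern, bit_period_s=1e-9)  # 1 GHz chip rate assumed
print(f"time of flight ~{tof*1e9:.0f} ns, distance ~{dist:.1f} m")  # ~40 ns, ~6 m
```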
Alternatively, one or more other channels or components may be used. For example, a first component or channel and a second component or channel may be used, where the first and second components or channels are phase shifted relative to each other by an amount other than ninety degrees. The amounts of overlap or mismatch calculated by the ultrafine stage determination can be used to calculate an additional time delay that can be added to the time delays from the coarse stage and/or the fine stage to determine a time of flight and/or separation distance to the target. Alternatively or additionally, the amount of overlap or mismatch between the waveforms in the I channel and Q channel can be examined to resolve phases of the echoes in order to detect motion of the target. Alternatively or additionally, the ultrafine stage determination may involve a similar process as the coarse stage determination. For example, the coarse stage determination may examine the I channel of the receive pattern and the data stream to determine correlation values of different subsets of the data stream and, from those correlation values, determine a subset of interest and a corresponding time-of-flight, as described herein. The ultrafine stage determination can use the Q channel of the receive pattern and the data stream to determine correlation values of different subsets of the data stream and, from those correlation values, determine a subset of interest and a time-of-flight. The times-of-flight from the I channel and Q channel can be combined (e.g., averaged) to calculate a time of flight and/or separation distance to the target. The correlation values calculated by the ultrafine stage determination can be used to calculate an additional time delay that can be added to the time delays from the coarse stage and/or the fine stage to determine a time of flight and/or separation distance to the target. Alternatively or additionally, the correlation values of the waveforms in the I channel and Q channel can be examined to resolve phases of the echoes in order to calculate separation distance or motion of the target. The coarse, fine, and ultrafine stage determinations can be performed independently (e.g., without performing one or more of the other stages) and/or together. The fine and ultrafine stage determinations can be performed in parallel (e.g., with the fine stage determination examining the I channel and the ultrafine stage determination examining the Q channel) or sequentially (e.g., with the ultrafine stage determination examining both the I and Q channels). The coarse and ultrafine stage determinations can be performed in parallel (e.g., with the coarse stage determination examining the I channel and the ultrafine stage determination examining the Q channel) or sequentially (e.g., with the ultrafine stage determination examining both the I and Q channels). In one embodiment, a receive pattern mask may be applied to the digitized data stream to remove (e.g., mask off) or otherwise change one or more portions or segments of the data stream. The masked data stream can then be compared to the receive pattern of the corresponding stage determination (e.g., coarse stage, fine stage, or ultrafine stage) to measure the time of flight, as described herein. In one embodiment, the various patterns (e.g., the coarse stage transmit pattern, the fine stage transmit pattern, the coarse stage receive pattern, the fine stage receive pattern, and/or the receive pattern mask) may be the same. 
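The following Python sketch illustrates, under assumed ideal correlator outputs and an assumed 24 GHz carrier, how I channel and Q channel correlation values could be combined to resolve echo phase into an additional sub-bit time delay; it illustrates the general I/Q phase relationship rather than the specific fine stage or ultrafine stage algorithm.

```python
# A minimal sketch: resolve the echo carrier phase from I and Q correlation
# values and translate that phase into an additional time delay.
import math


def ultrafine_refinement(i_corr: float, q_corr: float, carrier_hz: float) -> float:
    """Translate resolved echo phase into an additional time delay in seconds."""
    phase_rad = math.atan2(q_corr, i_corr)           # phase of the echo carrier
    if phase_rad < 0.0:
        phase_rad += 2.0 * math.pi                   # keep the delay non-negative
    return phase_rad / (2.0 * math.pi * carrier_hz)  # one carrier cycle = 1/f seconds


# Example: equal I and Q correlations imply a 45-degree phase at an assumed
# 24 GHz carrier, i.e., one eighth of a ~41.7 ps carrier period.
extra_delay = ultrafine_refinement(0.7, 0.7, 24e9)
print(f"additional delay ~{extra_delay*1e12:.1f} ps")  # ~5.2 ps
```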
Alternatively, one or more (or all) of these patterns may differ from each other. For example, different ones of the patterns may include different sequences of bits and/or lengths of the sequences. The various patterns (e.g., the coarse stage transmit pattern, the fine stage transmit pattern, the coarse stage receive pattern, the fine stage receive pattern, and/or the receive pattern mask) that are used in the ultrafine stage may also differ from those used in the coarse or fine stages alone, and from each other. FIG.11is a schematic diagram of one embodiment of a sensing system100. The system100can be used to determine distances between a sensing apparatus102and one or more objects104and/or to identify movement of the one or more target objects104, where the target objects104may have positions that may change or that are not known. In one embodiment, the sensing apparatus102includes a radar system that transmits electromagnetic pulse sequences as transmitted signals106toward the target object104that are at least partially reflected as echoes108. Alternatively, the sensing apparatus102can include an optical sensing system, such as a Light Detection And Ranging (LIDAR) system, that transmits light toward the target object104as the transmitted signals106and receives reflections of the light off the target object104as the echoes108. In another embodiment, another method of transmission may be used, such as sonar, in order to transmit the transmitted signals106and receive the echoes108. A time of flight of the transmitted signals106and echoes108represents the time delay between transmission of the transmitted signals106and receipt of the echoes108off of the target object104. The time of flight can be proportional to a distance between the sensing apparatus102and the target object104. The sensing apparatus102can measure the time of flight of the transmitted signals106and echoes108and calculate a separation distance110between the sensing apparatus102and the target object104based on the time of flight. The sensing system100may include a control unit112(“External Control Unit” inFIG.1) that directs operations of the sensing apparatus102. The control unit112can include one or more logic-based hardware devices, such as one or more processors, controllers, and the like. The control unit112shown inFIG.11may represent the hardware (e.g., processors) and/or logic of the hardware (e.g., one or more sets of instructions for directing operations of the hardware that is stored on a tangible and non-transitory computer readable storage medium, such as computer software stored on a computer memory). The control unit112can be communicatively coupled (e.g., connected so as to communicate data signals) with the sensing apparatus102by one or more wired and/or wireless connections. The control unit112may be remotely located from the sensing apparatus102, such as by being disposed several meters away, in another room of a building, in another building, in another city block, in another city, in another county, state, or country (or other geographic boundary), and the like. In one embodiment, the control unit112can be communicatively coupled with several sensing assemblies102located in the same or different places. For example, several sensing assemblies102that are remotely located from each other may be communicatively coupled with a common control unit112. 
The control unit112can separately send control messages to each of the sensing assemblies102to individually activate (e.g., turn ON) or deactivate (e.g., turn OFF) the sensing assemblies102. In one embodiment, the control unit112may direct the sensing assembly102to take periodic measurements of the separation distance110and then deactivate for an idle time to conserve power. In one embodiment, the control unit112can direct the sensing apparatus102to activate (e.g., turn ON) and/or deactivate (e.g., turn OFF) to transmit transmitted signals106and receive echoes108and/or to measure the separation distances110. Alternatively, the control unit112may calculate the separation distance110based on the times of flight of the transmitted signals106and echoes108as measured by the sensing apparatus102and communicated to the control unit112. The control unit112can be communicatively coupled with an input device114, such as a keyboard, electronic mouse, touchscreen, microphone, stylus, and the like, and/or an output device116, such as a computer monitor, touchscreen (e.g., the same touchscreen as the input device114), speaker, light, and the like. The input device114may receive input data from an operator, such as commands to activate or deactivate the sensing apparatus102. The output device116may present information to the operator, such as the separation distances110and/or times of flight of the transmitted signals106and echoes108. The output device116may also connect to a communications network, such the internet. The form factor of the sensing assembly102may have a wide variety of different shapes, depending on the application or use of the system100. The sensing assembly102may be enclosed in a single enclosure1602, such as an outer housing. The shape of the enclosure1602may depend on factors including, but not limited to, needs for power supply (e.g., batteries and/or other power connections), environmental protection, and/or other communications devices (e.g., network devices to transmit measurements or transmit/receive other communications). In the illustrated embodiment, the basic shape of the sensing assembly102is a rectangular box. The size of the sensing assembly102can be relatively small, such as three inches by six inches by two inches (7.6 centimeters by 15.2 centimeters by 5.1 centimeters), 70 mm by 140 mm by 10 mm, or another size. Alternatively, the sensing assembly102may have one or more other dimensions. FIG.12is a schematic diagram of one embodiment of the sensing apparatus102. The sensing apparatus102may be a direct-sequence spread-spectrum radar device that uses a relatively high speed digital pulse sequence that directly modulates a carrier signal, which is then transmitted as the transmitted signals106toward a target object104. The echoes108may be correlated to the same pulse sequence in the transmitted signals106in order to determine the time of flight of the transmitted signals106and echoes108. This time of flight can then be used to calculate the separation distance110(shown inFIG.11). The sensing apparatus102includes a front end200and a back end202. The front end200may include the circuitry and/or other hardware that transmits the transmitted signals106and receives the reflected echoes108. 
The back end202may include the circuitry and/or other hardware that forms the pulse sequences for the transmitted signals106or generates control signals that direct the front end200to form the pulse sequences for inclusion in the transmitted signals106, and/or that processes (e.g., analyzes) the echoes108received by the front end200. Both the front end200and the back end202may be included in a common housing. For example (and as described below), the front end200and the back end202may be relatively close to each other (e.g., within a few centimeters or meters) and/or contained in the same housing. Alternatively, the front end200may be remotely located from the back end202. The components of the front end200and/or back end202are schematically shown as being connected by lines and/or arrows inFIG.12, which may be representative of conductive connections (e.g., wires, busses, and the like) and/or wireless connections (e.g., wireless networks). The front end200includes a transmitting antenna204and a receiving antenna206. The transmitting antenna204transmits the transmitted signals106toward the target object104and the receiving antenna206receives the echoes108that are at least partially reflected by the target object104. As one example, the transmitting antenna204may transmit radio frequency (RF) electromagnetic signals as the transmitted signals106, such as RF signals having a frequency of 24 gigahertz (“GHz”)±1.5 GHz. Alternatively, the transmitting antenna204may transmit other types of signals, such as light, and/or at another frequency. In the case of light transmission the antenna may be replaced by a laser or LED or other device. The receiver may be replaced by a photo detector or photodiode. A front end transmitter208(“RF Front-End,” “Transmitter, and/or “TX” inFIG.12) of the front end200is communicatively coupled with the transmitting antenna204. The front end transmitter208forms and provides the transmitted signal106to the transmitting antenna204so that the transmitting antenna204can communicate (e.g., transmit) the transmitted signal106. In the illustrated embodiment, the front end transmitter208includes mixers210A,210B and an amplifier212. Alternatively, the front end transmitter208may not include the amplifier212. The mixers210A,210B combine (e.g., modulate) a pulse sequence or pattern provided by the back end202with an oscillating signal216(e.g., a carrier signal) to form the transmitted signal106that is communicated by the transmitting antenna204. In one embodiment, the mixers210A,210B multiply pattern signals230A,230B (“Baseband signal” inFIG.12) received from one or more transmit (TX) pattern generators228A,228B by the oscillating signal216. The pattern signal230includes the pattern formed by the pattern code generator228. As described below, the pattern signal230can include several bits arranged in a known or designated sequence. An oscillating device214(“Oscillator” inFIG.12) of the front end200generates the oscillating signal216that is communicated to the mixers210A,210B. As one example, the oscillating device214may include or represent a voltage controlled oscillator (VCO) that generates the oscillating signal216based on a voltage signal that is input into the oscillating device214, such as by a power source (e.g., battery) disposed in the sensing apparatus102and/or as provided by the control unit112(shown inFIG.11). The amplifier212may increase the strength (e.g., gain) of the transmitted signal106. 
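As an illustration of the direct-sequence modulation performed by the mixers210A,210B as described above, the following Python sketch multiplies a pattern of bits, mapped to +1 and -1, by a carrier (binary phase-shift keying); the carrier frequency, chip rate, and sample rate are scaled-down assumptions rather than the 24 GHz hardware values, and the function name is hypothetical.

```python
# A minimal sketch of direct-sequence modulation: a digital pulse sequence,
# mapped to +/-1, directly multiplies (phase-keys) a carrier.
import numpy as np


def dsss_transmit_signal(bits: list[int], carrier_hz: float,
                         chip_rate_hz: float, sample_rate_hz: float) -> np.ndarray:
    """Return samples of a carrier whose phase is keyed by the bit pattern."""
    samples_per_chip = int(sample_rate_hz / chip_rate_hz)
    chips = np.repeat([1.0 if b else -1.0 for b in bits], samples_per_chip)
    t = np.arange(len(chips)) / sample_rate_hz
    return chips * np.cos(2.0 * np.pi * carrier_hz * t)


# Example: an 8-bit pattern at a 1 MHz chip rate on a 10 MHz carrier, sampled at 100 MHz.
tx = dsss_transmit_signal([1, 0, 1, 1, 0, 0, 1, 0],
                          carrier_hz=10e6, chip_rate_hz=1e6, sample_rate_hz=100e6)
print(tx.shape)  # (800,) samples, 100 per chip
```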
In the illustrated embodiment, the mixer210A receives an in-phase (I) component or channel of a pattern signal230A and mixes the I component or channel of the pattern signal230A with the oscillating signal216to form an I component or channel of the transmitted signal106. The mixer210B receives a quadrature (Q) component or channel of a pattern signal230B and mixes the Q component or channel of the pattern signal230B with the oscillating signal216to form a Q component or channel of the transmitted signal106. The transmitted signal106(e.g., one or both of the I and Q channels) is generated when the TX baseband signal230flows to the mixers210. A digital output gate250may be disposed between the TX pattern generator and the mixers210for added control of the TX baseband signal230. After a burst of one or more transmitted signals106is transmitted by the transmitting antenna204, the sensing assembly102may switch from a transmit mode (e.g., that involves transmission of the transmitted signals106) to a receive mode to receive the echoes108off the target object104. In one embodiment, the sensing assembly102may not receive or sense the echoes108when in the transmit mode and/or may not transmit the transmitted signals106when in the receive mode. When the sensing assembly102switches from the transmit mode to the receive mode, the digital output gate250can, in a relatively short amount of time, reduce the transmit signal106generated by the transmitter208to the point that it is eliminated (e.g., reduced to zero strength). For example, the gate250can include tri-state functionality and a differential high-pass filter (which is represented by the gate250). The baseband signal230passes through the filter before the baseband signal230reaches the upconversion mixer210. The gate250can be communicatively coupled with, and controlled by, the control unit112(shown inFIG.11) so that the control unit112can direct the filter of the gate250to enter into a tri-state (e.g., high-impedance) mode when the transmitted signal106(or burst of several transmitted signals106) is transmitted and the sensing assembly102is to switch over to receive the echoes108. The high-pass filter across the differential outputs of the gate250can reduce the input transmit signal106relatively quickly after the tri-state mode is initiated. As a result, the transmitted signal106is prevented from flowing to the transmitting antenna204and/or from leaking to the receiving antenna206when the sensing assembly102receives the echoes108. A front end receiver218(“RF Front-End,” “Receiver,” and/or “RX”) of the front end200is communicatively coupled with the receiving antenna206. The front end receiver218receives an echo signal224representative of the echoes108(or data representative of the echoes108) from the receiving antenna206. The echo signal224may be an analog signal in one embodiment. The receiving antenna206may generate the echo signal224based on the received echoes108. In the illustrated embodiment, an amplifier238may be disposed between the receive antenna206and the front end receiver218. The front end receiver218can include an amplifier220and mixers222A,222B. Alternatively, one or more of the amplifiers220,238may not be provided. The amplifiers220,238can increase the strength (e.g., gain) of the echo signal224. The mixers222A,222B may include or represent one or more mixing devices that receive different components or channels of the echo signal224to mix with the oscillating signal216(or a copy of the oscillating signal216) from the oscillating device214.
For example, the mixer222A can combine the analog echo signal224and the I component of the oscillating signal216to extract the I component of the echo signal224into a first baseband echo signal226A that is communicated to the back end202of the sensing apparatus102. The first baseband echo signal226A may include the I component or channel of the baseband echo signal. The mixer222B can combine the analog echo signal224and the Q component of the oscillating signal216to extract the Q component of the analog echo signal224into a second baseband echo signal226B that is communicated to the back end202of the sensing apparatus102. The second baseband echo signal226B can include the Q component or channel of the baseband echo signal. In one embodiment, the echo signals226A,226B can be collectively referred to as a baseband echo signal226. In one embodiment, the mixers222A,222B can multiply the echo signal224by the I and Q components of the oscillating signal216to form the baseband echo signals226A,226B. The back end202of the sensing apparatus102includes a transmit (TX) pattern code generator228that generates the pattern signal230for inclusion in the transmitted signal106. The transmit pattern code generator228includes the transmit code generators228A,228B. In the illustrated embodiment, the transmit code generator228A generates the I component or channel pattern signal230A (“I TX Pattern” inFIG.12) while the transmit code generator228B generates the Q component or channel pattern signal230B (“Q TX Pattern” inFIG.12). The transmit patterns generated by the transmit pattern code generator228can include a digital pulse sequence having a known or designated sequence of binary digits, or bits. A bit includes a unit of information that may have one of two values, such as a value of one or zero, high or low, ON or OFF, +1 or −1, and the like. Alternatively, a bit may be replaced by a digit, a unit of information that may have one of three or more values, and the like. The pulse sequence may be selected by an operator of the system100shown inFIG.11(such as by using the input device114shown inFIG.11), may be hard-wired or programmed into the logic of the pattern code generator228, or may otherwise be established. The transmit pattern code generator228creates the pattern of bits and communicates the pattern in the pattern signals230A,230B to the front end transmitter208. The pattern signals230A,230B may be individually or collectively referred to as a pattern signal230. In one embodiment, the pattern signal230may be communicated to the front end transmitter208at a frequency that is no greater than 3 GHz. Alternatively, the pattern signal230may be communicated to the front end transmitter208at a greater frequency. The transmit pattern code generator228also communicates the pattern signal230to a correlator device232(“Correlator” inFIG.12). For example, the pattern code generator228may generate a copy of the pattern signal that is sent to the correlator device232. The backend section202includes or represents hardware (e.g., one or more processors, controllers, and the like) and/or logic of the hardware (e.g., one or more sets of instructions for directing operations of the hardware that is stored on a tangible and non-transitory computer readable storage medium, such as computer software stored on a computer memory). The RX backend section202B receives the pattern signal230from the pattern code generator228and the baseband echo signal226(e.g., one or more of the signals226A,226B) from the front end receiver218.
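The quadrature up-conversion performed by the mixers210A,210B and the complementary down-conversion performed by the mixers222A,222B can be illustrated with a minimal numerical sketch. The following Python model is not taken from the embodiments above; the sample rate, chip rate, attenuation factor, helper names, and the use of NumPy are illustrative assumptions, and an ideal moving-average low-pass filter stands in for the analog filtering.

```python
# Simplified numerical model of quadrature up-conversion (compare mixers 210A/210B)
# and down-conversion (compare mixers 222A/222B). All names and parameter values
# here are illustrative assumptions, not values taken from the embodiments.
import numpy as np

fs = 200e9          # simulation sample rate (Hz), assumed
fc = 24e9           # carrier frequency (Hz), e.g., 24 GHz as mentioned above
bit_rate = 2e9      # chip/bit rate of the digital pattern (Hz), assumed
samples_per_bit = int(fs / bit_rate)

# I and Q transmit patterns (compare pattern signals 230A, 230B), as +/-1 chips
i_pattern = np.array([1, 0, 1, 0, 1, 1]) * 2 - 1
q_pattern = np.array([1, 1, 0, 1, 0, 0]) * 2 - 1
i_bb = np.repeat(i_pattern, samples_per_bit).astype(float)
q_bb = np.repeat(q_pattern, samples_per_bit).astype(float)

t = np.arange(i_bb.size) / fs
carrier_i = np.cos(2 * np.pi * fc * t)      # oscillating signal 216, I phase
carrier_q = np.sin(2 * np.pi * fc * t)      # oscillating signal 216, Q phase

# Up-conversion: each baseband channel multiplies its carrier phase, and the two
# products are summed to form the transmitted RF signal (compare signal 106).
tx_rf = i_bb * carrier_i + q_bb * carrier_q

# Pretend the echo is simply an attenuated copy of the transmitted signal.
echo_rf = 0.3 * tx_rf

# Down-conversion: multiply the echo by the same two carrier phases and low-pass
# filter (a moving average here) to recover baseband I/Q (compare 226A/226B).
def lowpass(x, n=samples_per_bit):
    return np.convolve(x, np.ones(n) / n, mode="same")

i_rx = lowpass(2 * echo_rf * carrier_i)
q_rx = lowpass(2 * echo_rf * carrier_q)

# Recover the chip signs from the mid-bit samples of each recovered channel.
mid = np.arange(i_pattern.size) * samples_per_bit + samples_per_bit // 2
print("recovered I chips:", np.sign(i_rx[mid]).astype(int))
print("recovered Q chips:", np.sign(q_rx[mid]).astype(int))
```

Running the sketch recovers the ±1 chips of both channels, which mirrors how the baseband echo signals226A,226B carry the I and Q components of the transmitted pattern.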
The RX backend section202B may perform one or more stages of analysis of the baseband echo signal226in order to determine the separation distance110and/or to track and/or detect movement of the target object104. The stages of analysis can include a coarse stage, a fine stage, and/or an ultrafine stage, as described above. In the coarse stage, the baseband processor232compares the pattern signal230with the baseband echo signal226to determine a coarse or estimated time of flight of the transmitted signals106and the echoes108. For example, the baseband processor232can measure a time delay of interest between the time when a transmitted signal106is transmitted and a subsequent time when the pattern in the pattern signal230(or a portion thereof) and the baseband echo signal226match or substantially match each other, as described below. The time delay of interest may be used as an estimate of the time of flight of the transmitted signal106and corresponding echo108. In the fine stage, the sensing assembly102can compare a replicated copy of the pattern signal230with the baseband echo signal226. The replicated copy of the pattern signal230may be a signal that includes the pattern signal230delayed by the time delay of interest measured during the coarse stage. The sensing assembly102compares the replicated copy of the pattern signal230with the baseband echo signal226to determine a temporal amount or degree of overlap or mismatch between the replicated pattern signal and the baseband echo signal226. This temporal overlap or mismatch can represent an additional portion of the time of flight that can be added to the time of flight calculated from the coarse stage. In one embodiment, the fine stage examines I and/or Q components of the baseband echo signal226and the replicated pattern signal. In the ultrafine stage, the sensing assembly102also can examine the I and/or Q component of the baseband echo signal226and the replicated pattern signal to determine a temporal overlap or mismatch between the I and/or Q components of the baseband echo signal226and the replicated pattern signal. The temporal overlap or mismatch of the Q components of the baseband echo signal226and the replicated pattern signal may represent an additional time delay that can be added to the time of flight calculated from the coarse stage and the fine stage (e.g., by examining the I and/or Q components) to determine a relatively accurate estimation of the time of flight. Alternatively or additionally, the ultrafine stage may be used to precisely track and/or detect movement of the target object104within the bit of interest. The terms “fine” and “ultrafine” are used to mean that the fine stage may provide a more accurate and/or precise (e.g., greater resolution) calculation of the time of flight (tF) and/or the separation distance110relative to the coarse stage and that the ultrafine stage may provide a more accurate and/or precise (e.g., greater resolution) calculation of the time of flight (tF) and/or the separation distance110relative to the fine stage and the coarse stage. Alternatively or additionally, the time lag of the waveforms in the I channel and Q channel can be examined to resolve phases of the echoes in order to calculate separation distance or motion of the target. As described above, the ultrafine stage determination may involve a similar process as the coarse stage determination. 
For example, the coarse stage determination may examine the I channel of the receive pattern and the data stream to determine correlation values of different subsets of the data stream and, from those correlation values, determine a subset of interest and a corresponding time-of-flight, as described herein. The ultrafine stage determination can use the I and/or Q channel of the receive pattern and the data stream to determine correlation values of different subsets of the data stream and, from those correlation values, determine a subset of interest and a time-of-flight. The times-of-flight from the I channel and Q channel can be combined (e.g., averaged) to calculate a time of flight and/or separation distance to the target. The correlation values calculated by the ultrafine stage determination can be used to calculate an additional time delay that can be added to the time delays from the coarse stage and/or the fine stage to determine a time of flight and/or separation distance to the target. Alternatively or additionally, the correlation values of the waveforms in the I channel and Q channel can be examined to resolve phases of the echoes in order to calculate separation distance or motion of the target. The backend202can include a first baseband processor232A (“I Baseband Processor” inFIG.12) and a second baseband processor232B (“Q Baseband Processor” inFIG.12). The first baseband processor232A may examine the I component or channel of the echo signal226A and the second baseband processor232B may examine the Q component or channel of the echo signal226B. The backend202can provide a measurement signal234as an output from the analysis of the baseband echo signal226. In one embodiment, the measurement signal234includes an I component or channel measurement signal234A from the first baseband processor232A and a Q component or channel measurement signal234B from the second baseband processor232B. The measurement signal234may include the separation distance110and/or the time of flight. The total position estimate260can be communicated to the control unit112(shown inFIG.11) so that the control unit112can use data or information representative of the separation distance110and/or the time of flight for one or more other uses, calculations, and the like, and/or for presentation to an operator on the output device116(shown inFIG.11). As described below, a correlation window that also includes the pattern (e.g., the pulse sequence of bits) or a portion thereof that was transmitted in the transmitted signal106may be compared to the baseband echo signal226. The correlation window may be progressively shifted or delayed from a location in the baseband echo signal226representative of a start of the echo signal226(e.g., a time that corresponds to the time at which the transmitted signal106is transmitted, but which may or may not be the exact beginning of the baseband echo signal) and successively, or in any other order, compared to different subsets or portions of the baseband echo signal226. Correlation values representative of degrees of match between the pulse sequence in the correlation window and the subsets or portions of the baseband echo signal226can be calculated and a time delay of interest (e.g., approximately the time of flight) can be determined based on the time difference between the start of the baseband echo signal226and one or more maximum or relatively large correlation values. 
The maximum or relatively large correlation value may represent at least partial reflection of the transmitted signals106off the target object104, and may be referred to as a correlation value of interest. As used herein, the terms “maximum,” “minimum,” and forms thereof, are not limited to absolute largest and smallest values, respectively. For example, while a “maximum” correlation value can include the largest possible correlation value, the “maximum” correlation value also can include a correlation value that is larger than one or more other correlation values, but is not necessarily the largest possible correlation value that can be obtained. Similarly, while a “minimum” correlation value can include the smallest possible correlation value, the “minimum” correlation value also can include a correlation value that is smaller than one or more other correlation values, but is not necessarily the smallest possible correlation value that can be obtained. The time delay of interest can then be used to calculate the separation distance110from the coarse stage. For example, in one embodiment, the separation distance110may be estimated or calculated as:

d = (tF × c) / 2 (Equation #1)

where d represents the separation distance110, tF represents the time delay of interest (calculated from the start of the baseband echo signal226to the identification of the correlation value of interest), and c represents the speed of light. Alternatively, c may represent the speed at which the transmitted signals106and/or echoes108move through the medium or media between the sensing apparatus102and the target object104. In another embodiment, the value of tF and/or c may be modified by a calibration factor or other factor in order to account for portions of the delay between transmission of the transmitted signals106and receipt of the echoes108that are not due to the time of flight of the transmitted signals106and/or echoes108. With continued reference to the sensing assembly102shown inFIG.12,FIGS.13A and13Bare schematic diagrams of a coarse stage determination of a time of flight for a transmitted signal106and corresponding echo108in accordance with one embodiment. By “coarse,” it is meant that one or more additional measurements or analyses of the same or different echo signal224(shown inFIG.12) that is generated from the reflected echoes108may be performed to provide a more accurate and/or precise measurement of the time of flight (tF) and/or separation distance110. The use of the term “coarse” is not intended to mean that the measurement technique described above is inaccurate or imprecise. As described above, the pattern generated by the pattern code generator228and the baseband echo signal226are received by the RX backend202B. The baseband echo signal226can be formed by mixing (e.g., multiplying) the echo signal224by the oscillating signal216in order to translate the echo signal224into a baseband signal. FIG.13Aillustrates a square waveform transmitted signal322representative of the transmitted signal106(shown inFIG.11) and the digitized echo signal226. The echo signal226shown inFIG.13Amay represent the I component or channel of the echo signal226(e.g., the signal226A). The signals322,226are shown alongside horizontal axes304representative of time. The transmitted signal322includes pattern waveform segments326that represent the pattern that is included in the transmitted signal106.
In the illustrated embodiment, the pattern waveform segments326correspond to a bit pattern of 101011, where 0 represents a low value328of the transmitted signal322and 1 represents a high value330of the transmitted signal322. Each of the low or high values328,330occurs over a bit time332. In the illustrated embodiment, each pattern waveform segment326includes six bits (e.g., six 0s and 1s), such that each pattern waveform segment326extends over six bit times332. Alternatively, one or more of the pattern waveform segments326may include a different sequence of low or high values328,330and/or occur over a different number of bit times332. The baseband echo signal226includes in one embodiment a sequence of square waves (e.g., low and high values328,330), but the waves may have other shapes. The echo signal226may be represented as a digital echo signal740(shown and described below in connection withFIG.13B). As described below, different portions or subsets of the digital echo signal740can be compared to the pattern sequence of the transmitted signal106(e.g., the pattern waveform segments326) to determine a time delay of interest, or estimated time of flight. As shown inFIG.13A, the square waves (e.g., low and high values328,330) of the baseband echo signal226may not exactly line up with the bit times332of the transmitted signal322. FIG.13Billustrates the digitized echo signal740ofFIG.13Aalong the axis304that is representative of time. As shown inFIG.13B, the digitized echo signal740may be schematically shown as a sequence of bits300,302. Each bit300,302in the digitized echo signal740can represent a different low or high value328,330(shown inFIG.13A) of the digitized echo signal740. For example, the bit300(e.g., “0”) can represent low values328of the digitized echo signal740and the bit302(e.g., “1”) can represent high values330of the digitized echo signal740. The baseband echo signal226begins at a transmission time (tO) of the axis304. The transmission time (tO) may correspond to the time at which the transmitted signal106is transmitted by the sensing assembly102. Alternatively, the transmission time (tO) may be another time that occurs prior to or after the time at which the transmitted signal106is transmitted. The baseband processor232obtains a receive pattern signal240from the pattern generator228. Similar to the transmit pattern (e.g., in the signal230) that is included in the transmitted signal106, the receive pattern signal240may include a waveform signal representing a sequence of bits, such as a digital pulse sequence receive pattern306shown inFIG.13. The baseband processor232compares the receive pattern306to the echo signal226. In one embodiment, the receive pattern306is a copy of the transmit pattern of bits that is included in the transmitted signal106from the pattern code generator228, as described above. Alternatively, the receive pattern306may be different from the transmit pattern that is included in the transmitted signal106. For example, the receive pattern306may have a different sequence of bits (e.g., have one or more different waveforms that represent a different sequence of bits) and/or have a longer or shorter sequence of bits than the transmit pattern. The receive pattern306may be represented by one or more of the pattern waveform segments326, or a portion thereof, shown inFIG.13A.
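The relationship between a bit pattern such as 101011, the square waveform that spans the bit times332, and a digitized echo signal such as740can be summarized with a small sketch. The oversampling factor, the noise level, the 2.5-bit delay, and the helper names below are illustrative assumptions rather than values from the embodiments.

```python
# Illustrative sketch: expand a bit pattern into a square waveform over bit
# times (compare segments 326) and slice a delayed, noisy received waveform
# back into bits (compare digitized echo signal 740). Names and rates assumed.
import numpy as np

SAMPLES_PER_BIT = 8                      # assumed oversampling per bit time
rng = np.random.default_rng(0)           # fixed seed so the sketch is repeatable

def bits_to_waveform(bits):
    """Expand bits (0/1) into a square waveform, one level per bit time."""
    return np.repeat(np.asarray(bits, dtype=float), SAMPLES_PER_BIT)

def waveform_to_bits(waveform, threshold=0.5):
    """Slice a waveform back into one bit per bit time using its mid-bit sample."""
    n_bits = waveform.size // SAMPLES_PER_BIT
    mid = np.arange(n_bits) * SAMPLES_PER_BIT + SAMPLES_PER_BIT // 2
    return (waveform[mid] > threshold).astype(int).tolist()

pattern = [1, 0, 1, 0, 1, 1]             # the example pattern discussed above
tx_waveform = bits_to_waveform(pattern)

# Emulate an echo: delay by 2.5 bit times and add a little noise, so the
# received square waves do not line up exactly with the bit boundaries.
delay = int(2.5 * SAMPLES_PER_BIT)
rx_waveform = np.concatenate([np.zeros(delay), tx_waveform])
rx_waveform += rng.normal(0.0, 0.05, rx_waveform.size)

# Prints a delayed copy of the pattern, e.g. [0, 0, 1, 0, 1, 0, 1, 1]
print("digitized echo bits:", waveform_to_bits(rx_waveform))
```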
The baseband processor232uses all or a portion of the receive pattern306as a correlation window320that is compared to different portions of the digitized echo signal740in order to calculate correlation values (“CV”) at the different positions. The correlation values represent different degrees of match between the receive pattern306and the digitized echo signal740across different subsets of the bits in the digitized echo signal740. In the example illustrated inFIG.13, the correlation window320includes six bits300,302. Alternatively, the correlation window320may include a different number of bits300,302. The correlator device731can temporally shift the correlation window320along the echo signal740in order to identify which portion (e.g., which subset of the echo signal226) matches the pattern in the correlation window320more closely than one or more (or all) of the other portions of the echo signal740. In one embodiment, when operating in the coarse stage determination, the first baseband processor232A compares the correlation window320to the I component or channel of the echo signal226. For example, the correlator device731may compare the bits in the correlation window320to a first subset308of the bits300,302in the digitized echo signal740. For example, the correlator device731can compare the receive pattern306with the first six bits300,302of the digitized echo signal740. Alternatively, the correlator device731can begin by comparing the receive pattern306with a different subset of the digitized echo signal740. The correlator device731calculates a first correlation value for the first subset308of bits in the digitized echo signal740by determining how closely the sequence of bits300,302in the first subset308match the sequence of bits300,302in the receive pattern306. In one embodiment, the correlator device731assigns a first value (e.g., +1) to those bits300,302in the subset of the digitized echo signal740being compared to the correlation window320that match the sequence of bits300,302in the correlation window320and a different, second value (e.g., −1) to those bits300,302in the subset of the digitized echo signal740being examined that do not match the sequence of bits300,302in the correlation window320. Alternatively, other values may be used. The correlator device731may then sum these assigned values for the subset of the digitized echo signal740to derive a correlation value for the subset. With respect to the first subset308of bits in the digitized echo signal740, only the fourth bit (e.g., zero) and the fifth bit (e.g., one) match the fourth bit and the fifth bit in the correlation window320. The remaining four bits in the first subset308do not match the corresponding bits in the correlation window320. As a result, if +1 is assigned to the matching bits and −1 is assigned to the mismatching bits, then the correlation value for the first subset308of the digitized echo signal740is calculated to be −2. On the other hand, if +1 is assigned to the matching bits and 0 is assigned to the mismatching bits, then the correlation value for the first subset308of the digitized echo signal740is calculated to be +2. As described above, other values may be used instead of +1 and/or −1. The correlator device731then shifts the correlation window320by comparing the sequence of bits300,302in the correlation window320to another (e.g., later or subsequent) subset of the digitized echo signal740.
In the illustrated embodiment, the correlator device731compares the correlation window320to the second through seventh bits300,302in the digitized echo signal740to calculate another correlation value. As shown inFIG.13, the subsets to which the correlation window320is compared may at least partially overlap with each other. For example, each of the subsets to which the correlation window320is compared may overlap with each other by all but one of the bits in each subset. In another example, each of the subsets may overlap with each other by a fewer number of the bits in each subset, or even not at all. The correlator device731may continue to compare the correlation window320to different subsets of the digitized echo signal740to calculate correlation values for the subsets. In continuing with the above example, the correlator device731calculates the correlation values shown inFIG.13for the different subsets of the digitized echo signal740. InFIG.13, the correlation window320is shown shifted below the subset to which the correlation window320is compared, with the correlation value of the subset to which the correlation window320is compared shown to the right of the correlation window320(using values of +1 for matches and −1 for mismatches). As shown in the illustrated example, the subset comprising the fifth through tenth bits300,302in the digitized echo signal226has a correlation value (e.g., +6) that is larger than one or more other correlation values of the other subsets, or that is the largest of the correlation values. In another embodiment, the receive pattern306that is included in the correlation window320and that is compared to the subsets of the digitized echo signal740may include a portion, and less than the entirety, of the transmit pattern that is included in the transmitted signal106(shown inFIG.11). For example, if the transmit pattern in the transmitted signal106includes a waveform representative of a digital pulse sequence of thirteen (or a different number) of bits300,302, the correlator device731may use a receive pattern306that includes less than thirteen (or a different number) of the bits300,302included in the transmit pattern. In one embodiment, the correlator device731can compare less than the entire receive pattern306to the subsets by applying a mask to the receive pattern306to form the correlation window320(also referred to as a masked receive pattern). With respect to the receive pattern306shown inFIG.13, the correlator device731may apply a mask comprising the sequence “000111” (or another mask) to the receive pattern306to eliminate the first three bits300,302from the receive pattern306such that only the last three bits300,302are compared to the various subsets of the digitized echo signal740. The mask may be applied by multiplying each bit in the mask by the corresponding bit in the receive pattern306. In one embodiment, the same mask also is applied to each of the subsets in the digitized echo signal740when the correlation window320is compared to the subsets. The correlator731may identify a correlation value that is largest, that is larger than one or more correlation values, and/or that is larger than a designated threshold as a correlation value of interest312. In the illustrated example, the fifth correlation value (e.g., +6) may be the correlation value of interest312. The subset or subsets of bits in the digitized echo signal740that correspond to the correlation value of interest312may be identified as the subset or subsets of interest314.
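The sliding-window correlation described above can be condensed into a short sketch. The first ten echo bits below are chosen to be consistent with the example correlation values discussed above (−2 for the first subset and +6 for the subset beginning at the fifth bit); the trailing bits and the helper names are illustrative assumptions.

```python
# Sketch of the coarse-stage sliding correlation described above. The first ten
# bits of the example echo are consistent with the -2 and +6 values discussed in
# the text; the remaining bits and function names are illustrative assumptions.

def correlate(window, subset, match=+1, mismatch=-1):
    """Score one subset of the digitized echo against the correlation window."""
    return sum(match if w == s else mismatch for w, s in zip(window, subset))

def coarse_scan(window, echo_bits, mask=None):
    """Slide the window across the echo bits and return (offset, score) pairs."""
    if mask is not None:                       # optional masked receive pattern
        window = [w * m for w, m in zip(window, mask)]
    scores = []
    for offset in range(len(echo_bits) - len(window) + 1):
        subset = echo_bits[offset:offset + len(window)]
        if mask is not None:                   # the same mask is applied to each subset
            subset = [s * m for s, m in zip(subset, mask)]
        scores.append((offset, correlate(window, subset)))
    return scores

receive_pattern = [1, 0, 1, 0, 1, 1]           # correlation window 320
echo_bits = [0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0]

scores = coarse_scan(receive_pattern, echo_bits)
print(scores[0])                               # (0, -2): the first subset
best_offset, best_score = max(scores, key=lambda p: p[1])
print(best_offset, best_score)                 # 4 6: subset starting at the fifth bit

# Masked comparison using the "000111" mask discussed above.
print(coarse_scan(receive_pattern, echo_bits, mask=[0, 0, 0, 1, 1, 1])[0])   # (0, 4)
```

Because the masked positions are multiplied to zero in both the window and each subset, they always count as matches, so only the unmasked bits affect the ranking of the subsets.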
In the illustrated example, the subset of interest314includes the fifth through tenth bits300,302in the digitized echo signal740. In this example, if the start of the subset of interest is used to identify the subset of interest then the delay of interest would be five. Multiple subsets of interest may be identified where the transmitted signals106(shown inFIG.11) are reflected off of multiple target objects104(shown inFIG.11), such as different target objects104located different separation distances110from the sensing assembly102. Each of the subsets of the digitized echo signal740may be associated with a time delay (td) between the start of the digitized echo signal740(e.g., tO) and the beginning of the first bit in each subset of the digitized echo signal740. Alternatively, the beginning of the time delay (td) for the subset can be measured from another starting time (e.g., a time before or after the start of the digitized echo signal740(tO) and/or the end of the time delay (td) may be at another location in the subset, such as the middle or at another bit of the subset. The time delay (td) associated with the subset of interest may represent the time of flight (tF) of the transmitted signal106that is reflected off a target object104. Using Equation #1 above, the time of flight can be used to calculate the separation distance110between the sensing assembly102and the target object104. In one embodiment, the time of flight (tF) may be based on a modified time delay (td), such as a time delay that is modified by a calibration factor to obtain the time of flight (tF). As one example, the time of flight (tF) can be corrected to account for propagation of signals and/or other processing or analysis. Propagation of the echo signal224, formation of the baseband echo signal226, propagation of the baseband echo signal226, and the like, through the components of the sensing assembly102can impact the calculation of the time of flight (tF). The time delay associated with a subset of interest in the baseband echo signal226may include the time of flight of the transmitted signals106and echoes108, and also may include the time of propagation of various signals in the analog and digital blocks (e.g., the correlator device731and/or the pattern code generator228and/or the mixers210and/or the amplifier238) of the system100. In order to determine the propagation time of data and signals through these components, a calibration routine can be employed. A measurement can be made to a target of known distance. For example, one or more transmitted signals106can be sent to the target object104that is at a known separation distance110from the transmit and/or receiving antennas204,206. The calculation of the time of flight for the transmitted signals106can be made as described above, and the time of flight can be used to determine a calculated separation distance110. Based on the difference between the actual, known separation distance110and the calculated separation distance110, a measurement error that is based on the propagation time through the components of the sensing assembly102may be calculated. This propagation time may then be used to correct (e.g., shorten) further times of flight that are calculated using the sensing assembly102. In one embodiment, the sensing assembly102may transmit several bursts of the transmitted signal106and the correlator device731may calculate several correlation values for the digitized echo signals740that are based on the reflected echoes108of the transmitted signals106. 
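Equation #1 and the calibration routine described above reduce to a few lines of arithmetic. In the sketch below, the propagation speed, the example delays, the 1 m calibration distance, and the function names are assumptions used only to illustrate the calculation.

```python
# Sketch of Equation #1 (d = tF * c / 2) and of correcting a measured time delay
# with a calibration offset derived from a target at a known distance. The
# numeric values and names here are illustrative assumptions.

C = 299_792_458.0                      # propagation speed c (m/s), free space assumed

def separation_distance(t_flight_s):
    """Equation #1: one-way distance from a round-trip time of flight."""
    return t_flight_s * C / 2.0

def calibration_offset(measured_delay_s, known_distance_m):
    """Propagation/processing delay implied by a target at a known distance."""
    true_time_of_flight = 2.0 * known_distance_m / C
    return measured_delay_s - true_time_of_flight

# Calibration: a target placed 1.000 m away yields a measured delay of 8.0 ns.
offset = calibration_offset(8.0e-9, 1.000)

# Later measurement: a raw delay of 14.5 ns, corrected by the calibration offset.
corrected_tof = 14.5e-9 - offset
print(f"calibration offset: {offset * 1e9:.3f} ns")
print(f"estimated separation distance: {separation_distance(corrected_tof):.3f} m")
```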
The correlation values for the several transmitted signals106may be grouped by common time delays (td), such as by calculating the average, median, or other statistical measure of the correlation values calculated for the same or approximately the same time delays (td). The grouped correlation values that are larger than other correlation values or that are the largest may be used to more accurately calculate the time of flight (tF) and separation distance110relative to using only a single correlation value and/or burst. FIG.14illustrates one example of correlation values that are calculated and averaged over several transmitted signals106shown inFIG.11. The correlation values400are shown alongside a horizontal axis402representative of time (e.g., time delays or times of flight) and a vertical axis404representative of the magnitude of the correlation values400. As shown inFIG.14, several peaks406,408may be identified based on the multiple correlation values400that are grouped over several transmitted signals106. The peaks406,408may be associated with one or more target objects104(shown inFIG.11) off which the transmitted signals106reflected. The time delays associated with one or more of the peaks406,408(e.g., the time along the horizontal axis402) can be used to calculate the separation distance(s)110of one or more of the target objects104associated with the peaks406,408, as described above. FIG.15is another schematic diagram of the sensing assembly102shown inFIG.12. The sensing assembly102is illustrated inFIG.15as including a radio front end500and a processing back end502. The radio front end500may include at least some of the components included in the front end200(shown inFIG.12) of the sensing assembly102and the processing back end502may include at least some of the components of the back end202(shown inFIG.12) of the sensing assembly102, and/or one or more components (e.g., the front end transmitter208and/or receiver218shown inFIG.12) of the front end200. As described above, in one embodiment the received echo signal224may be conditioned by circuits506(e.g., by the front end receiver218shown inFIG.12) of a type used for high-speed optical communications systems. This conditioning may include only amplification and/or quantization. The signal224may then pass to a digitizer730that creates a digital signal based on the signal224, which is then passed to the correlator731(described below) for comparison to the original transmit sequence to extract time-of-flight information. The correlator device731and the conditioning circuits may be collectively referred to as the baseband processing section of the sensing apparatus102. Also as described above, the pattern code generator228generates the pattern (e.g., a digital pulse sequence) that is communicated in the pattern signal230. The digital pulse sequence may be relatively high speed in order to make the pulses shorter and increase accuracy and/or precision of the system100(shown inFIG.11) and/or to spread the transmitted radio energy over a very wide band. If the pulses are sufficiently short, the bandwidth may be wide enough to be classified as Ultra-wideband (UWB). As a result, the system100can be operated in the 22-27 GHz UWB band and/or the 3-10 GHz UWB band that are available worldwide (with regional variations) for unlicensed operation. In one embodiment, the digital pulse sequence is generated by one or more digital circuits, such as a relatively low-power Field-Programmable Gate Array (FPGA)504.
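The grouping of correlation values from several bursts by common time delay (compareFIG.14) can also be sketched briefly. The burst data, the threshold, and the helper names below are illustrative assumptions; the two delays that remain consistently high play the role of the peaks406,408.

```python
# Sketch of grouping correlation values from several bursts by common time delay
# (compare FIG.14) and picking peaks. Burst data and names are assumptions.
from collections import defaultdict

def average_by_delay(bursts):
    """bursts: list of {time_delay_bits: correlation_value} dicts, one per burst."""
    sums, counts = defaultdict(float), defaultdict(int)
    for burst in bursts:
        for delay, value in burst.items():
            sums[delay] += value
            counts[delay] += 1
    return {delay: sums[delay] / counts[delay] for delay in sums}

def peaks(averaged, threshold):
    """Delays whose averaged correlation value exceeds a designated threshold."""
    return sorted(d for d, v in averaged.items() if v >= threshold)

# Three bursts; noise moves individual values around, but delays 4 and 9
# (two hypothetical targets) stay consistently high.
bursts = [
    {2: -1, 4: 6, 6: 0, 9: 4, 11: -2},
    {2:  1, 4: 5, 6: -1, 9: 5, 11: 0},
    {2: -2, 4: 6, 6: 1, 9: 3, 11: -1},
]
averaged = average_by_delay(bursts)
print(averaged)
print("peak delays:", peaks(averaged, threshold=3.0))   # -> [4, 9]
```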
The FPGA504may be an integrated circuit designed to be configured by the customer or designer after manufacturing to implement a digital or logical system. As shown inFIG.15, the FPGA504can be configured to perform the functions of the pulse code generator228and the correlator device731. The pulse sequence can be buffered and/or conditioned by one or more circuits508and then passed directly to the transmit radio of the front end500(e.g., the front end transmitter208). FIG.16is a schematic diagram of one embodiment of the front end200of the sensing assembly102shown inFIG.12. The front end200of the sensing assembly102may alternatively be referred to as the radio front end500(shown inFIG.15) or the “radio” of the sensing assembly102. In one embodiment, the front end200includes a direct-conversion transmitter600(“TX Chip” inFIG.16) and receiver602(“RX Chip” inFIG.16), with a common frequency reference generator604(“VCO Chip” inFIG.16). The transmitter600may include or represent the front end transmitter208(shown inFIG.12) and the receiver602may include or represent the front end receiver218(shown inFIG.12). The common frequency reference generator604may be or include the oscillator device214shown inFIG.12. The common frequency reference generator604may be a voltage controlled oscillator (VCO) that produces a frequency reference signal as the oscillating signal216. In one embodiment, the frequency of the reference signal216is one half of a designated or desired carrier frequency of the transmitted signal106(shown inFIG.11). Alternatively, the reference signal216may be another frequency, such as the same frequency as the carrier frequency, an integer multiple or divisor of the carrier frequency, and the like. In one embodiment, the reference generator604emits a frequency reference signal216that is a sinusoidal wave at one half of the carrier frequency. The reference signal is split equally and delivered to the transmitter600and the receiver602. Although the reference generator604may be able to vary the frequency of the reference signal216according to an input control voltage, the reference generator604can be operated at a fixed control voltage in order to cause the reference generator604to output a fixed frequency reference signal216. This is acceptable since frequency coherence between the transmitter600and the receiver602may be automatically maintained. Furthermore, this arrangement can allow for coherence between the transmitter600and the receiver602without the need for a phase locked loop (PLL) or other control structure that may limit the accuracy and/or speed at which the sensing assembly102operates. In another embodiment, a PLL may be added for other purposes, such as stabilizing the carrier frequency or otherwise controlling the carrier frequency. The reference signal216can be split and sent to the transmitter600and receiver602. The reference signal216drives the transmitter600and receiver602, as described above. The transmitter600may drive (e.g., activate to transmit the transmitted signal106shown inFIG.11) the transmitting antenna204(shown inFIG.12). The receiver602may receive the return echo signal through the receiving antenna206(shown inFIG.12) that is separate from the transmitting antenna204. This can reduce the need for a T/R (transmit/receive) switch disposed between the transmitter600and the receiver602.
The transmitter600can up-convert the timing reference signal216and transmit an RF transmit signal606through the transmitting antenna204in order to drive the transmitting antenna204to transmit the transmitted signal106(shown inFIG.11). In one embodiment, the output of the transmitter600can be at a maximum frequency or a frequency that is greater than one or more other frequencies in the sensing assembly102(shown inFIG.11). For example, the transmit signal606from the transmitter600can be at the carrier frequency. This transmit signal606can be fed directly to the transmitting antenna204to minimize or reduce the losses incurred by the transmit signal606. In one embodiment, the transmitter600can take separate in-phase (I) and quadrature (Q) digital patterns or signals from the pattern generator604and/or the pattern code generator228(shown inFIG.12). This can allow for increased flexibility in the transmit signal606and/or can allow for the transmit signal606to be changed “on the fly,” or during transmission of the transmitted signals106. As described above, the receiver602may also receive a copy of the frequency reference signal216from the reference generator604. The returning echoes108(shown inFIG.11) are received by the receiving antenna206(shown inFIG.12) and may be fed directly to the receiver602as the echo signal224. This arrangement can give the system maximum or increased possible input signal-to-noise ratio (SNR), since the echo signal224propagates a minimal or relatively small distance before the echo signal224enters the receiver602. For example, the echo signal224may not propagate or otherwise go through a switch, such as a transmit/receive (TX/RX) switch. The receiver602can down-convert a relatively wide block of frequency spectrum centered on the carrier frequency to produce the baseband signal (e.g., the baseband echo signal226shown inFIG.12). The baseband signal may then be processed by a baseband analog section of the sensing assembly102(shown inFIG.11), such as the correlator device731(shown inFIG.12) and/or one or more other components, to extract the time of flight (tF). As described above, this received echo signal224includes a delayed copy of the TX pattern signal. The delay may be representative of and/or is a measurement of the round-trip, time-of-flight of the transmitted signal106and the corresponding echo108. The frequency reference signal216may contain or comprise two or more individual signals such as the I and Q components that are phase shifted relative to each other. The phase shifted signals can also be generated internally by the transmitter600and the receiver602. For example, the signal216may be generated to include two or more phase shifted components (e.g., I and Q components or channels), or may be generated and later modified to include the two or more phase shifted components. In one embodiment, the front end200provides relatively high isolation between the transmit signal606and the echo signal224. This isolation can be achieved in one or more ways. First, the transmit and receive components (e.g., the transmitter600and receiver602) can be disposed in physically separate chips, circuitry, or other hardware. Second, the reference generator604can operate at one half the carrier frequency so that feed-through can be reduced. Third, the transmitter600and the receiver602can have dedicated (e.g., separate) antennas204,206that are also physically isolated from each other. 
This isolation can allow for the elimination of a TX/RX switch that may otherwise be included in the system100. Avoiding the use of the TX/RX switch also can remove the switch-over time between the transmitting of the transmitted signals106and the receipt of the echoes108shown inFIG.11. Reducing the switch-over time can enable the system100to more accurately and/or precisely measure distances to relatively close target objects104. For example, reducing this switch-over time can reduce the threshold distance that may be needed between the sensing assembly102and the target object104in order for the sensing assembly102to measure the separation distance110shown inFIG.11before transmitted signals106are received as echoes108. FIG.17is a circuit diagram of one embodiment of a baseband processing system232of the system100shown inFIG.11. In one embodiment, the baseband processing system232is included in the sensing assembly102(shown inFIG.11) or is separate from the system100but operatively coupled with the system100to communicate one or more signals between the systems100,232. For example, the baseband processing system232can be coupled with the front end receiver218(shown inFIG.12) to receive the echo signal226(e.g., the echo signal226A and/or226B). For example, at least part of the system232may be disposed between the front end receiver218and the Control and Processing Unit (CPU)270shown inFIG.17. The baseband processing system232may provide for the coarse and/or fine and/or ultrafine stage determinations described above. In one embodiment, the system100(shown inFIG.11) includes a fine transmit pattern (e.g., a transmit pattern for fine stage determination) in the transmitted signal106following the coarse stage determination. For example, after transmitting a first transmit pattern in a first transmitted signal106(or one or more bursts of several transmitted signals106) to use the coarse stage and calculate a time delay in the echo signal226(and/or the time of flight), a second transmit pattern can be included in a subsequent, second transmitted signal106for the fine stage determination of the time of flight (or a portion thereof). The transmit pattern in the coarse stage may be the same as the transmit pattern in the fine stage. Alternatively, the transmit pattern of the fine stage may differ from the transmit pattern of the coarse stage, such as by including one or more different waveforms or bits in a pulse sequence pattern of the transmitted signal106. The baseband processing system232receives the echo signal226(e.g., the I component or channel of the echo signal226A and/or the Q component or channel of the echo signal226B from the front end receiver218(shown inFIG.11). The echo signal226that is received from the front end receiver218is referred to as “I or Q Baseband signal” inFIG.17. As described below, the system232also may receive a receive pattern signal728(“I or Q fine alignment pattern” inFIG.17) from the pattern code generator228(shown inFIG.12). Although not shown inFIG.12or17, the pattern code generator228and the system232may be coupled by one or more conductive pathways (e.g., busses, wires, cables, and the like) to communicate with each other. The system232can provide output signals702A,702B (collectively or individually referred to as an output signal702and shown as “Digital energy estimates for I or Q channel” inFIG.17). In one embodiment, the baseband processing system232is an analog processing system. 
In another embodiment, the baseband processing system232is a hybrid analog and digital system comprised of components and signals that are analog and/or digital in nature. The digitized echo signal226that is received by the system232may be conditioned by signal conditioning components of the baseband processing system232, such as by modifying the signals using a conversion amplifier704(e.g., an amplifier that converts the baseband echo signal226, such as by converting current into a voltage signal). In one embodiment, the conversion amplifier704includes or represents a trans-impedance amplifier, or “TIA” inFIG.17). The signal conditioning components can include a second amplifier706(e.g., a limiting amplifier or “Lim. Amp” inFIG.17). The conversion amplifier704can operate on a relatively small input signal that may be a single-ended (e.g., non-differential) signal to produce a differential signal708(that also may be amplified and/or buffered by the conversion amplifier704and/or one or more other components). This differential signal708may still be relatively small in amplitude. In one embodiment, the differential signal708is then passed to the second amplifier706that increases the gain of the differential signal708. Alternatively, the second amplifier706may not be included in the system232if the conversion amplifier704produces a sufficiently large (e.g., in terms of amplitude and/or energy) output differential signal710. The second amplifier706can provide relatively large gain and can tolerate saturated outputs710. There may be internal positive feedback in the second amplifier706so that even relatively small input differences in the differential signal708can produce a larger output signal710. In one embodiment, the second amplifier706quantizes the amplitude of the received differential signal708to produce an output signal710. The second amplifier706may be used to determine the sign of the input differential signal708and the times at which the sign changes from one value to another. For example, the second amplifier706may act as an analog-to-digital converter with only one bit precision in one embodiment. Alternatively, the second amplifier706may be a high-speed analog-to-digital converter that periodically samples the differential signal708at a relatively fast rate. Alternatively, the second amplifier may act as an amplitude quantizer while preserving timing information of the baseband signal226. The use of a limiting amplifier as the second amplifier706can provide relatively high gain and relatively large input dynamic range. As a result, relatively small differential signals708that are supplied to the limiting amplifier can result in a healthy (e.g., relatively high amplitude and/or signal-to-noise ratio) output signal710. Additionally, larger differential signals708(e.g., having relatively high amplitudes and/or energies) that may otherwise result in another amplifier being overdriven instead result in a controlled output condition (e.g., the limiting operation of the limiting amplifier). The second amplifier706may have a relatively fast or no recovery time, such that the second amplifier706may not go into an error or saturated state and may continue to respond to the differential signals708that are input into the second amplifier706. 
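The one-bit amplitude quantization attributed to the second amplifier706above can be modeled with a short sketch: only the sign of the differential signal, and the times at which that sign changes, survive. The sample rate, signal amplitudes, and names below are illustrative assumptions.

```python
# Sketch of one-bit amplitude quantization, as described for the second
# amplifier 706: only the sign of the differential signal and the times at
# which the sign changes are kept. Sample values and names are assumptions.
import numpy as np

fs = 10e9                                   # assumed sample rate (Hz)
rng = np.random.default_rng(0)
t = np.arange(0, 200) / fs

# A small, noisy differential baseband signal with a sign change near 10 ns.
diff_signal = 0.02 * np.sign(np.sin(2 * np.pi * 50e6 * t)) + 0.005 * rng.standard_normal(t.size)

quantized = np.where(diff_signal >= 0.0, 1, -1)     # limiting/quantizing behavior

# The information that survives is the sign-change (zero-crossing) timing.
crossings = np.nonzero(np.diff(quantized) != 0)[0]
print("sign-change sample indices:", crossings[:5])
print("sign-change times (ns):", np.round(t[crossings[:5]] * 1e9, 2))
```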
When the input differential signal708returns to an acceptable level (e.g., lower amplitude and/or energy), the second amplifier706may avoid the time required by other amplifiers for recovery from an overdrive state (that is caused by the input differential signal708). The second amplifier706may avoid losing incoming input signals during such a recovery time. A switch device712(“Switch” inFIG.17) that receives the output differential signal710(e.g., from the second amplifier706) can control where the output differential signal710is sent. For example, the switch device712may alternate between states where, in one state (e.g., a coarse acquisition or determination state), the switch device712directs the output differential signal710along a first path716to the digitizer730and then to the correlator device731. The digitizer730includes one or more analog or digital components, such as a processor, controller, buffers, digital gates, delay lines, samplers and the like, that digitize received signals into a digital signal, such as the digital echo signal740described above in connection withFIG.13B. The first path716is used to provide for the coarse stage determination of the time of flight, as described above. In one embodiment, the signals710may pass through another amplifier714and/or one or more other components before reaching the correlator device731for the coarse stage determination. In another state, the switch device712directs the output differential signal710along a different, second path718to one or more other components (described below). The second path718is used for the fine stage determination of the time of flight in the illustrated embodiment. The switch device712may alternate the direction of flow of the signals (e.g., the output differential signal710) from the first path716to the second path718. Control of the switch device712may be provided by the control unit112(shown inFIG.11). For example, the control unit112may communicate control signals to the switch device712to control where the signals flow after passing through the switch device712. The output differential signals710received by the switch device712may be communicated to a comparison device720in the second path718. Alternatively, the switch device712(or another component) may convert the differential signals710into a single-ended signal that is input into the comparison device720. The comparison device720also receives the receive pattern signal728from the pattern generator228(shown inFIG.12). The receive pattern signal728is referred to as “I or Q fine alignment pattern” inFIG.17). The receive pattern signal728may include a copy of the same transmit pattern that is transmitted in the transmitted signal106used to generate the echo signal226being analyzed by the system232. Alternatively, the receive pattern signal728may differ from the transmit signal that is transmitted in the transmitted signal106used to generate the echo signal226being analyzed by the system232. The comparison device720compares the signals received from the switch device712with the receive pattern signal728to identify differences between the echo signal226and the receive pattern signal728. In one embodiment, the receive pattern signal728includes a pattern that is delayed by the time delay (e.g., the time of flight) identified by the coarse stage determination. 
The comparison device720may then compare this time-delayed pattern in the pattern signal728to the echo signal226(e.g., as modified by the amplifiers704,706) to identify overlaps or mismatches between the time-delayed pattern signal728and the echo signal226. In one embodiment, the comparison device720may include or represent a limiting amplifier that acts as a relatively high-speed XOR gate. An “XOR gate” includes a device that receives two signals and produces a first output signal (e.g., a “high” signal) when the two signals are different and a second output signal (e.g., a “low” signal) or no signal when the two signals are not different. In another embodiment, the system may only include the coarse baseband processing circuits716or the fine baseband processing circuits718. In this case, the switch712may also be eliminated. For example, this may be done to reduce the cost or complexity of the overall system. As another example, the system may not need the fine accuracy, and only the rapid response of the coarse section716may be desired. The coarse, fine and ultrafine stages may be used in any combination at different times in order to balance various performance metrics. Intelligent control can be manually provided by an operator or automatically generated by a processor or controller (such as the control unit112) autonomously controlling the assembly102based on one or more sets of instructions (such as software modules or programs) stored on a tangible computer readable storage medium (such as a computer memory). The intelligent control can manually or automatically switch which stages are used, and when, based on feedback from one or more other stages. For example, based on the determination from the coarse stage (e.g., an estimated time of flight or separation distance), the sensing assembly102may manually or automatically switch to the fine and/or ultrafine stage to further refine the time of flight or separation distance and/or to monitor movement of the target object104. With continued reference toFIG.17,FIG.18is a schematic diagram of one example of how the comparison device720compares a portion800of the baseband echo signal226with a portion802of the time-delayed pattern signal728in one embodiment. Although only portions800,802of the pattern signal728and the echo signal226are shown, the comparison device720may compare more, or all, of the echo signal226with the pattern signal728. The portion800of the echo signal226and the portion802of the pattern signal728are shown disposed above each other and above a horizontal axis804that is representative of time. An output signal806represents the signal that is output from the comparison device720. The output signal806represents differences (e.g., a time lag, amount of overlap, or other measure) between the portion800of the echo signal226and the portion802of the pattern signal728. The comparison device720may output a single ended output signal806or a differential signal as the output signal806(having components806A and806B, as shown inFIG.18). In one embodiment, the comparison device720generates the output signal806based on differences between the portion800of the echo signal226and the portion802of the time-delayed pattern signal728. For example, when a magnitude or amplitude of both portions800,802is “high” (e.g., has a positive value) or when the magnitude or amplitude of both portions800,802is “low” (e.g., has a zero or negative value), the comparison device720may generate the output signal806to have a first value.
In the illustrated example, this first value is zero. When a magnitude or amplitude of both portions800,802differ (e.g., one has a high value and the other has a zero or low value), the comparison device720may generate the output signal806with a second value, such as a high value. In the example ofFIG.18, the portion800of the echo signal226and the portion802of the pattern signal728have the same or similar value except for time periods808,810. During these time periods808,810, the comparison device720generates the output signal806to have a “high” value. Each of these time periods808,810can represent the time lag, or delay, between the portions800,802. During other time periods, the comparison device720generates the output signal806to have a different value, such as a “low” or zero value, as shown inFIG.18. Similar output signals806may be generated for other portions of the echo signal226and pattern signal728. FIG.19illustrates another example of how the comparison device720compares a portion900of the baseband echo signal226with a portion902of the pattern signal728. The portions900,902have the same or similar values except for time periods904,906. During these time periods904,906, the comparison device720generates the output signal806to have a “high” value. During other time periods, the comparison device720generates the output signal806to have a different value, such as a “low” or zero value. As described above, the comparison device720may compare additional portions of the baseband signal226with the pattern signal728to generate additional portions or waveforms in the output signal806. FIG.20illustrates another example of how the comparison device720compares a portion1000of the baseband echo signal226with a portion1002of the pattern signal728. The portions1000,1002have the same or similar values over the time shown inFIG.20. As a result, the output signal806that is generated by the comparison device720does not include any “high” values that represent differences in the portions1000,1002. As described above, the comparison device720may compare additional portions of the baseband signal226with the pattern signal728to generate additional portions or waveforms in the output signal806. The output signals806shown inFIGS.18,19, and20are provided merely as examples and are not intended to be limitations on all embodiments disclosed herein. The output signals806generated by the comparison device720represent temporal misalignment between the baseband echo signal226and the pattern signal728that is delayed by the time of flight or time delay measured by the coarse stage determination. The temporal misalignment may be an additional portion of (e.g., to be added to) the time of flight of the transmitted signals106(shown inFIG.11) and the echoes108(shown inFIG.11) to determine the separation distance110(shown inFIG.11). The temporal misalignment between the baseband signal226and the pattern signal728may be referred to as a time lag. The time lag can be represented by the time periods808,810,904,906. For example, the time lag of the data stream226inFIG.18may be the time encompassed by the time period808or810, or the time by which the portion800of the baseband signal226follows behind (e.g., lags) the portion802of the pattern signal728. Similarly, the time lag of the portion902of the baseband signal226may be the time period904or906. With respect to the example shown inFIG.20, the portion1000of the baseband signal does not lag behind the portion1002of the pattern signal728.
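The XOR-style comparison ofFIGS.18through20can be reduced to a small sketch in which the output is high only where the baseband echo and the time-delayed pattern differ, and the width of each high region is the residual time lag. The oversampling factor, the assumed 50-picosecond sample period, the three-sample lag, and the helper names are illustrative assumptions.

```python
# Sketch of the XOR-style comparison performed by the comparison device 720:
# the output is "high" only where the baseband echo and the time-delayed pattern
# differ, and the width of those high regions is the residual time lag.
# Waveforms, rates, and names are illustrative assumptions.
import numpy as np

SAMPLES_PER_BIT = 20                       # assumed oversampling per bit
SAMPLE_PERIOD_PS = 50                      # assumed sample period in picoseconds

pattern_bits = [1, 0, 1, 0, 1, 1]
pattern = np.repeat(pattern_bits, SAMPLES_PER_BIT)

residual_lag_samples = 3                   # echo lags the delayed pattern slightly
echo = np.roll(pattern, residual_lag_samples)

xor_out = np.bitwise_xor(pattern, echo)    # 1 where the two waveforms differ

# Each mismatch region is residual_lag_samples wide; its duration is the lag.
high_samples = int(xor_out.sum())
n_regions = int(np.count_nonzero(np.diff(np.concatenate(([0], xor_out))) == 1))
lag_ps = (high_samples / max(n_regions, 1)) * SAMPLE_PERIOD_PS
print(f"mismatch regions: {n_regions}, estimated residual lag: {lag_ps:.0f} ps")
```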
As described above, several time lags may be measured by comparing more of the baseband signal226with the time-delayed pattern signal728. In order to measure the temporal misalignment between the baseband signal226and the time-delayed pattern signal728, the output signals806may be communicated from the comparison device720to one or more filters722. In one embodiment, the filters722are low-pass filters. The filters722generate energy signals724that are proportional to the energy of the output signals806. The energy of the output signals806is represented by the size (e.g., width) of waveforms812,910in the output signals806. As the temporal misalignment between the baseband signal226and the pattern signal728increases, the size (and energy) of the waveforms812,910increases. As a result, the amplitude and/or energy conveyed or communicated by the energy signals724increases. Conversely, as the temporal misalignment between the baseband signal226and the time-delayed pattern signal728decreases, the size and/or amplitude and/or energy of the waveforms812,910also decreases. As a result, the energy conveyed or communicated by the energy signals724decreases. As another example, the above system could be implemented using the opposite polarity, such as with an XNOR comparison device that produces “high” signals when the baseband signal226and the time-delayed pattern signal728are the same and “low” signals when they are different. In this example, as the temporal misalignment between the baseband signal226and the pattern signal728increases, the size (and energy) of the waveforms812,910decreases. As a result, the amplitude and/or energy conveyed or communicated by the energy signals724decreases. Conversely, as the temporal misalignment between the baseband signal226and the time-delayed pattern signal728decreases, the size, amplitude, and/or energy of the waveforms812,910increases. As a result, the energy conveyed or communicated by the energy signals724increases. The energy signals724may be communicated to measurement devices726(“ADC” inFIG.17). The measurement devices726can measure the energies of the energy signals724. The measured energies can then be used to determine the additional portion of the time of flight that is represented by the temporal misalignment between the baseband signal226and the time-delayed pattern signal728. In one embodiment, the measurement device726periodically samples the energy and/or amplitude of the energy signals724in order to measure the energies of the energy signals724. For example, the measurement devices726may include or represent analog-to-digital converters (ADC) that sample the amplitude and/or energy of the energy signals724in order to measure or estimate the alignment (or misalignment) between the echo signal226and the pattern signal728. The sampled energies can be communicated by the measurement devices726as the output signal702to the control unit112or other output device or component (shown as “Digital energy estimates for I or Q channel” inFIG.17). The control unit112(or other component that receives the output signal702) may examine the measured energy of the energy signals724and calculate the additional portion of the time of flight represented by the temporal misalignment between the baseband signal226and the time-delayed pattern signal728. The control unit112also may calculate the additional portion of the separation distance110that is associated with the temporal misalignment.
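The relationship between lag and energy described above can be imitated digitally. The following sketch stands in for the filters722and measurement devices726by averaging the XOR output over an observation window; the pattern bits and the oversampling factor are hypothetical.

```python
# Crude digital stand-in for the filters 722 and measurement devices 726:
# the "energy" of the XOR output is approximated by the fraction of samples
# that are high over the observation window. A larger lag between the echo
# and the delayed pattern produces wider mismatch pulses and thus more energy.

def make_pattern(bits, samples_per_bit=8):
    """Oversample a bit sequence into one-bit-per-sample chips (hypothetical)."""
    return [b for b in bits for _ in range(samples_per_bit)]

def mismatch_energy(pattern, lag_samples):
    echo = [0] * lag_samples + pattern[:len(pattern) - lag_samples]  # lagged copy
    xor_out = [e ^ p for e, p in zip(echo, pattern)]
    return sum(xor_out) / len(xor_out)  # low-pass-filtered (averaged) energy

pattern = make_pattern([1, 0, 1, 1, 0, 0, 1, 0])
for lag in range(0, 5):
    print(lag, round(mismatch_energy(pattern, lag), 3))
# The printed energy grows as the lag (temporal misalignment) grows, which is
# the relationship the fine stage exploits.
```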
In one embodiment, the control unit112compares the measured energy to one or more energy thresholds. The different energy thresholds may be associated with different amounts of temporal misalignment. Based on the comparison, a temporal misalignment can be identified and added to the time of flight calculated using the coarse stage determination described above. The separation distance110may then be calculated based on the combination of the coarse stage determination of the time of flight and the additional portion of the time of flight from the fine stage determination. FIG.21illustrates examples of output signals724provided to the measurement devices726and energy thresholds used by the control unit112or other component or device (shown inFIG.12) in accordance with one example. The output signals702are shown alongside a horizontal axis1102representative of time and a vertical axis1104representative of energy. Several energy thresholds1106are shown above the horizontal axis1102. Although eight output signals724A-H and eight energy thresholds1106A-H are shown, alternatively, a different number of output signals724and/or energy thresholds1106may be used. The measurement devices726may digitize the energy signals724to produce the energy data output signals702. When the output signals702are received from the measurement devices726(shown inFIG.17) by the CPU270, the output signals702can be compared to the energy thresholds1106to determine which, if any, of the energy thresholds1106are exceeded by the output signals702. For example, the output signals702having less energy (e.g., a lower magnitude) than the energies associated with the output signal702A may not exceed any of the thresholds1106, while the output signal702A approaches or reaches the threshold1106A. The output signal702B is determined to exceed the threshold1106A, but not exceed the threshold1106B. As shown inFIG.21, other output signals702may exceed some thresholds1106while not exceeding other thresholds1106. The different energy thresholds1106are associated with different temporal misalignments between the echo signal226and the time-delayed pattern signal728in one embodiment. For example, the energy threshold1106A may represent a temporal misalignment of 100 picoseconds, the energy threshold1106B may represent a temporal misalignment of 150 picoseconds, the energy threshold1106C may represent a temporal misalignment of 200 picoseconds, the energy threshold1106D may represent a temporal misalignment of 250 picoseconds, and so on. For example, the signal724B may be the result of the situation shown inFIG.18and the signal724Emay be the result of the situation shown inFIG.19. The measured energy of the output signal702can be compared to the thresholds1106to determine if the measured energy exceeds one or more of the thresholds1106. The temporal misalignment associated with the largest threshold1106that is approached or reached or represented by the energy of the output signal702may be identified as the temporal misalignment between the echo signal226and the time-delayed pattern signal728. In one embodiment, no temporal misalignment may be identified for output signals702having or representing energies that are less than the threshold1106A. The energy thresholds1106may be established by positioning target objects104(shown inFIG.11) a known separation distance110(shown inFIG.11) from the sensing assembly102(shown inFIG.11) and observing the levels of energy that are represented or reached or approached by the output signals702.
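A compact sketch of the threshold comparison just described, combined with the way the resulting misalignment is added to the coarse-stage time of flight, is shown below. The picosecond values mirror the example thresholds above; the energy values, the coarse time of flight, and the round-trip factor of two are assumptions for illustration.

```python
# Map a measured (digitized) energy to a temporal misalignment by finding the
# largest threshold it reaches (mirroring thresholds 1106A-D), then add that
# misalignment to the coarse-stage time of flight to refine the separation
# distance. Energy values and the coarse time of flight are placeholders; the
# division by two assumes the measured time of flight covers the round trip.

C = 299_792_458.0  # speed of light, m/s

THRESHOLDS = [          # (minimum energy, misalignment in seconds)
    (0.10, 100e-12),    # threshold 1106A -> 100 ps
    (0.20, 150e-12),    # threshold 1106B -> 150 ps
    (0.30, 200e-12),    # threshold 1106C -> 200 ps
    (0.40, 250e-12),    # threshold 1106D -> 250 ps
]

def misalignment_from_energy(energy):
    """Return the misalignment for the largest threshold reached, else 0."""
    result = 0.0
    for level, misalignment in THRESHOLDS:
        if energy >= level:
            result = misalignment
    return result

coarse_tof = 20e-9                         # from the coarse-stage subset of interest
fine_lag = misalignment_from_energy(0.27)  # 150 ps: exceeds 1106B but not 1106C
total_tof = coarse_tof + fine_lag
print(C * total_tof / 2.0)                 # refined separation distance in metres
```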
In addition or as an alternate to performing the fine stage determination of the time of flight, the ultrafine stage may be used to refine (e.g., increase the resolution of) the time of flight measurement, track movement, and/or detect movement of the target object104(shown inFIG.11). In one embodiment, the ultrafine stage includes comparing different components or channels of the same or different echo signals226as the fine stage determination. For example, in one embodiment, the coarse stage determination may measure a time of flight from echo signals226that are based on echoes108received from transmission of a first set or burst of one or more transmitted signals106, as described above. The fine stage determination may measure an amount of temporal misalignment or overlap between echo signals226that are based on echoes108received from transmission of a subsequent, second set or burst of one or more transmitted signals106(that may use the same or different transmit pattern as the first set or burst of transmitted signals106). The fine stage determination may measure the temporal misalignment between the echo signals226from the second set or burst of transmitted signals106and a receive pattern signal (which may be the same or different receive pattern as used by the coarse stage determination) as that is time delayed by the time of flight measured by the coarse stage, as described above. In one embodiment, the fine stage determination examines the I and/or Q component or channel of the echo signals226. The ultrafine stage determination may measure the temporal misalignment of the echo signals226from the same second set or burst of transmitted signals106as the fine stage determination, or from a subsequent third set or burst of transmitted signals106. The ultrafine stage determination may measure the temporal misalignment between the echo signals226and a receive pattern signal (that is the same or different as the receive pattern signal used by the fine stage determination) that is time-delayed by the time of flight measured by the coarse stage. In one embodiment, the ultrafine stage measures the temporal misalignment of the I and/or Q component or channel of the echo signals226while the fine stage measures the temporal misalignment of the Q and/or I component or channel of the same or different echo signals226. The temporal misalignment of the I component may be communicated to the control unit112(or other component or device) as the output signals702(as described above) while the temporal misalignment of the Q component may be communicated to the control unit112(or other component or device) as output signals1228. Alternatively or additionally, the time lag of the waveforms in the I channel and Q channel can be examined to resolve phases of the echoes in order to calculate separation distance or motion of the target. As described above, the ultrafine stage determination may alternatively or additionally involve a similar process as the coarse stage determination. For example, the coarse stage determination may examine the I channel of the receive pattern and the data stream to determine correlation values of different subsets of the data stream and, from those correlation values, determine a subset of interest and a corresponding time-of-flight, as described herein. 
The ultrafine stage determination can use the Q channel of the receive pattern and the data stream to determine correlation values of different subsets of the data stream and, from those correlation values, determine a subset of interest and a time-of-flight. The times-of-flight from the I channel and Q channel can be combined (e.g., averaged) to calculate a time of flight and/or separation distance to the target. The correlation values calculated by the ultrafine stage determination can be used to calculate an additional time delay that can be added to the time delays from the coarse stage and/or the fine stage to determine a time of flight and/or separation distance to the target. Alternatively or additionally, the correlation values of the waveforms in the I channel and Q channel can be examined to resolve phases of the echoes in order to calculate separation distance or motion of the target. FIG.22is a circuit diagram of another embodiment of a baseband processing system1200of the system100shown inFIG.11. In one embodiment, the baseband processing system1200is similar to the baseband processing system232(shown inFIG.17). For example, the baseband processing system1200may be included in the sensing assembly102(shown inFIG.11) by being coupled with the front end receiver218, the pattern code generator228, and/or the baseband processor232of the sensing assembly102. The baseband processing system1200includes two or more parallel paths1202,1204that the I and Q components of the baseband echo signal226and the pattern signal can flow through for processing and analysis. For example, a first path1202can process and analyze the I components of the echo signal224and baseband echo signal226and the second path1204can process and analyze the Q components of the echo signal224and the baseband echo signal226. In the illustrated embodiment, each of the paths1202,1204includes the baseband processing system232described above. Alternatively, one or more of the paths1202,1204may include one or more other components for processing and/or analyzing the signals. In another embodiment, only a single path1202or1204may process and/or analyze multiple, different components of the baseband echo signal224and/or baseband echo signal226. For example, the path1202may examine the I component of the signal224and/or226during a first time period and then examine the Q component of the signal224and/or226during a different (e.g., subsequent or preceding) second time period. In operation, the echo signal224is received by the front end receiver218and is separated into separate I and Q signals1206,1208(also referred to herein as I and Q channels). Each separate I and Q signal1206,1208includes the corresponding I or Q component of the echo signal224and can be processed and analyzed similar to the signals described above in connection with the baseband processing system232shown inFIG.17. For example, each of the I signal1206and the Q signal1208can be received and/or amplified by a conversion amplifier1210(that is similar to the conversion amplifier704) in each path1202,1204to output a differential signal (e.g., similar to the signal708shown inFIG.17) to another amplifier1212(e.g., similar to the amplifier706shown inFIG.17). The amplifiers1212can produce signals having increased gain (e.g., similar to the signals710shown inFIG.17) that are provided to switch devices1214. 
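Before continuing with the circuit of FIG.22, the I/Q combination described above, in which the receive pattern is correlated against the I and Q data streams and the two resulting times of flight are combined (e.g., averaged), can be sketched as follows. The bit values, the one-bit duration, and the ±1 scoring convention are assumptions for illustration, not values taken from the figures.

```python
# Sketch of correlating a receive pattern against the I and Q data streams,
# picking the highest-correlation delay (the subset of interest) in each
# channel, and averaging the two delays. Bits are scored as +/-1.

def correlate(window, stream):
    """Correlation value of `window` against each subset of `stream`."""
    vals = []
    for start in range(len(stream) - len(window) + 1):
        subset = stream[start:start + len(window)]
        vals.append(sum((2 * w - 1) * (2 * s - 1) for w, s in zip(window, subset)))
    return vals

def best_delay(window, stream):
    vals = correlate(window, stream)
    return max(range(len(vals)), key=lambda i: vals[i])  # subset of interest

pattern  = [1, 0, 1, 1, 0, 0, 1]
i_stream = [0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0]   # pattern delayed by 3 bits
q_stream = [0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1]   # same delay, trailing bit differs

bit_period = 1e-9  # assumed duration of one bit, in seconds
tof_i = best_delay(pattern, i_stream) * bit_period
tof_q = best_delay(pattern, q_stream) * bit_period
print((tof_i + tof_q) / 2)  # combined (averaged) time-of-flight estimate
```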
The switch devices1214can be similar to the switch device712(shown inFIG.17) and can communicate the signals from the amplifiers1212to amplifiers1216(which may be similar to the amplifier714shown inFIG.17) and/or the correlator device232for the coarse stage identification of a time of flight, as described above. Similar to as described above in connection with the switch device712(shown inFIG.17), the switch devices1214can direct the signals from the amplifiers1212to comparison devices1218(that may be similar to the comparison device720shown inFIG.17), filters1220(that may be similar to the filters722shown inFIG.17), and measurement devices1222(that may be similar to the measurement devices726shown inFIG.17). The comparison devices1218may each receive different components of a receive pattern signal from the pattern code generator228. For example, the comparison device1218in the first path1202may receive an I component1224of a receive pattern signal for the fine stage and the comparison device1218in the second path1202may receive the Q component1226of the receive pattern signal for the ultrafine stage. The comparison devices1218generate output signals that represent temporal misalignments between the I or Q components1224,1226of the receive pattern signal and the I or Q components of the echo signal226, similar to as described above. For example, the comparison device1218in the first path1202may output a signal having an energy that represents (e.g., is proportional to) the temporal misalignment between the I component of the baseband echo signal226and the I component of the time-delayed receive pattern signal728. The comparison device1218in the second path1204may output another signal having an energy that represents the temporal misalignment between the Q component of the baseband echo signal226and the Q component of the time-delayed pattern signal728. Alternatively, there may be a single path700, as shown inFIG.17, that may be shared between I and Q operation. This could be accomplished by alternately providing or switching between the I and Q components of the baseband echo signal226A and226B. As described above, the energies of the signals output from the comparison devices1218can pass through the filters1220and be measured by the measurement devices1222to determine each of the temporal misalignments associated with the I and Q components of the echo signal226and the receive pattern signal. These temporal misalignments can be added together and added to the time of flight determined by the coarse stage determination. The sum of the temporal misalignments and the time of flight from the coarse stage determination can be used by the baseband processor232to calculate the separation distance110(shown inFIG.11), as described above. Because the I and Q components of the echo signal and the time-delayed receive pattern signal are phase shifted by approximately 90 degrees from each other, separately examining the I and Q components allows calculation of the carrier phase of the returning signal108according to Equation 2 below and can provide resolution on the order of one eighth or better (smaller) of the wavelength of the carrier signal of the transmitted signals106and echoes108. Alternatively, there may be 3 or more components separated by an amount other than 90 degrees. In one embodiment, the ultrafine stage determination described above can be used to determine relatively small movements that change the separation distance110(shown inFIG.11). 
For example, the ultrafine stage may be used to identify relatively small movements within a portion of the separation distance110that is associated with the subset of interest in the baseband echo signal226. FIG.23illustrates projections of I and Q components of the baseband echo signal226in accordance with one embodiment. The ultrafine stage determination can include the baseband processor232(shown inFIG.12) projecting a characteristic of the I and Q components of the baseband echo signal226onto a vector. As shown inFIG.23, a vector1300is shown alongside a horizontal axis1302and a vertical axis1304. The backend202, the control unit112, or another processing or computation device may, by examination of the data signals234,702,1228,260, or others, or a combination of some or all of these signals, determine the vector1300as a projection of the characteristic (e.g., amplitude) of the I component1320of the echo signal along the horizontal axis1302and a projection of the characteristic (e.g., amplitude) of the Q component1321of the echo signal along the vertical axis1304. For example, the vector1300may extend to a location along the horizontal axis1302by an amount that is representative of an amplitude of the I component of the echo signal and to a location along the vertical axis1304by an amount that is representative of an amplitude of the Q component of the echo signal. The phase of the carrier can then be calculated as φ = arctan(I/Q) (Equation 2), where φ denotes the phase, I is the I projection1320, and Q is the Q projection1321. The carrier phase or the change in carrier phase can be used to calculate the distance or change in distance through the equation distance = (φ × λ)/360 (Equation 3), where λ is the wavelength of the carrier frequency and φ is the phase expressed in degrees as calculated from Equation 2 above. The baseband processor232(shown inFIG.12) may then determine additional vectors1306,1308based on the echoes108(shown inFIG.11) received from additional transmitted signals106(shown inFIG.11). Based on changes in the vector1300to the vector1306or the vector1308, the baseband processor232may identify movement of the target object104(shown inFIG.11) within the portion of the separation distance110(shown inFIG.11) that is associated with the subset of interest. For example, rotation of the vector1300in a counterclockwise direction1310toward the location of the vector1306may represent movement of the target object104toward the sensing assembly102shown inFIG.11(or movement of the sensing assembly102toward the target object104). Rotation of the vector1300in a clockwise direction1312toward the location of the vector1308may represent movement of the target object104away from the sensing assembly102(or movement of the sensing assembly102away from the target object104). Alternatively, movement of the vector1300in the counter-clockwise direction1310may represent movement of the target object104away from the sensing assembly102(or movement of the sensing assembly102away from the target object104) while movement of the vector1300in the clockwise direction1312may represent movement of the target object104toward the sensing assembly102shown inFIG.11(or movement of the sensing assembly102toward the target object104). The correlator device232may be calibrated by moving the target object104toward and away from the sensing assembly102to determine which direction of movement results in rotation of the vector1300in the clockwise direction1312or counter-clockwise direction1310.
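Equations 2 and 3 can be exercised directly. In the sketch below the projection values and the carrier frequency are assumed example numbers (the text does not give a carrier frequency), and math.atan2 is used so the recovered phase keeps its quadrant.

```python
# Worked sketch of Equations 2 and 3: the carrier phase recovered from the I
# and Q projections, and the corresponding change in distance. The projection
# values and the carrier frequency are hypothetical.

import math

C = 299_792_458.0          # speed of light, m/s
CARRIER_HZ = 24e9          # assumed carrier frequency (not taken from the text)
WAVELENGTH = C / CARRIER_HZ

def carrier_phase_deg(i_proj, q_proj):
    return math.degrees(math.atan2(i_proj, q_proj))  # Equation 2, in degrees

def distance_from_phase(phase_deg):
    return phase_deg * WAVELENGTH / 360.0             # Equation 3

i1, q1 = 0.7, 0.7          # vector 1300 (hypothetical projections)
i2, q2 = 0.9, 0.4          # vector 1306 after the target moves slightly

delta_phase = carrier_phase_deg(i2, q2) - carrier_phase_deg(i1, q1)
print(distance_from_phase(delta_phase))  # small change in separation distance, m
```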
The coarse, fine, and/or ultrafine stage determinations described above may be used in a variety of combinations. For example, the coarse stage determination may be used to calculate the separation distance110(shown inFIG.11), even if the approximate distance from the sensing device102(shown inFIG.11) to the target object104(shown inFIG.11) is not known. Alternatively, the coarse stage may be used with the fine and/or ultrafine stage determinations to obtain a more precise calculation of the separation distance110. The coarse, fine and ultrafine stages may be used in any combination at different times in order to balance various performance metrics. As another example, if the separation distance110(shown inFIG.11) is known, the fine or ultrafine stage determinations can be activated without the need for first identifying the bit of interest using the coarse stage determination. For example, the system100(shown inFIG.11) may be in a “tracking” mode where updates from the initial known separation distance110are identified and/or recorded using the fine and/or ultrafine stage determinations. Returning to the discussion of the system100shown inFIG.11, in another embodiment, the system100can discern between echoes108that are reflected off of different target objects104. For example, in some uses of the system100, the transmitted signals106may reflect off of multiple target objects104. If the target objects104are located at different separation distances110from the sensing assembly102, a single baseband echo signal226(shown inFIG.12) may represent several sequences of bits that represent echoes off the different target objects104. As described below, a mask may be applied to the baseband echo signal226and the pattern in the correlation window that is compared to the baseband echo signal226in order to distinguish between the different target objects104. FIG.24illustrates a technique for distinguishing between echoes108(shown inFIG.11) that are reflected off different target objects104(shown inFIG.11) in accordance with one embodiment. When a first transmitted signal106shown inFIG.11(or a series of first transmitted signals106) reflects off of multiple target objects104, the digital pulse sequence (e.g., the pattern of bits) in the pattern signal230(shown inFIG.12) may be modified relative to the digital pulse sequence in the first transmitted signal106for transmission of a second transmitted signal106(or series of second transmitted signals106). The echoes108and corresponding baseband echo signal226(shown inFIG.12) of the second transmitted signal106may be compared to the modified digital pulse sequence to distinguish between the multiple target objects104(e.g., to calculate different times of flight and/or separation distances110associated with the different target objects104). A first digitized echo signal1400inFIG.24represents the sequence of bits that may be generated when a transmitted signal106(shown inFIG.11) reflects off a first target object104at a first separation distance110(shown inFIG.11) from the sensing assembly102(shown inFIG.11). A second digitized echo signal1402represents the sequence of bits that may be generated when the transmitted signal106reflects off a different, second target object104that is at a different, second separation distance110from the sensing assembly102.
Instead of separately generating the digitized echo signals1400,1402, the sensing assembly102may generate a combined digitized echo signal1404that represents the combination of echoes108off the different target objects104. The combined digitized echo signal1404may represent a combination of the digitized echo signals1400,1402. A correlation window1406includes a sequence1414of bits that can be compared to either digitized echo signal1400,1402to determine a subset of interest, such as the subsets of interest1408,1410, in order to determine times of flight to the respective target objects104(shown inFIG.11), as described above. However, when the echoes108(shown inFIG.11) off the target objects104are combined and the combined digitized echo signal1404is generated, the correlation window1406may be less accurate or unable to determine the time of flight to one or more of the several target objects104. For example, while separate comparison of the correlation window1406to each of the digitized echo signals1400,1402may result in correlation values of +6 being calculated for the subsets of interest1408,1410, comparison of the correlation window1406to the combined digitized echo signal1404may result in correlation values of +5, +4, and +4 for the subsets that include the first through sixth bits, the third through eighth bits, and the seventh through twelfth bits in the combined digitized echo signal1404. As a result, the baseband processor232(shown inFIG.12) may be unable to distinguish between the different target objects104(shown inFIG.11). In one embodiment, a mask1412can be applied to the sequence1414of bits in the correlation window1406to modify the sequence1414of bits in the correlation window1406. The mask1412can eliminate or otherwise change the value of one or more of the bits in the correlation window1406. The mask1412can include a sequence1416of bits that are applied to the correlation window1406(e.g., by multiplying the values of the bits) to create a modified correlation window1418having a sequence1420of bits that differs from the sequence1414of bits in the correlation window1406. In the illustrated example, the mask1412includes a first portion of the first three bits (“101”) and a second portion of the last three bits (“000”). Alternatively, another mask1412may be used that has a different sequence of bits and/or a different length of the sequence of bits. Applying the mask1412to the correlation window1406eliminates the last three bits (“011”) in the sequence1414of bits in the correlation window1406. As a result, the sequence1420of bits in the modified correlation window1418includes only the first three bits (“101”) of the correlation window1406. In another embodiment, the mask1412adds additional bits to the correlation window1406and/or changes values of the bits in the correlation window1406. The sequence1420of bits in the modified correlation window1418can be used to change the sequence of bits in the pattern signal230(shown inFIG.12) that is communicated to the transmitter for inclusion in the transmitted signals106(shown inFIG.11). For example, after receiving the combined digitized echo signal1404and being unable to discern between the different target objects104(shown inFIG.11), the sequence of bits in the pattern that is transmitted toward the target objects104can be changed to include the sequence1420of bits in the modified correlation window1418or some other sequence of bits to aid in the discernment of the different target objects104.
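The masking step just described can be sketched as follows. The window bits follow the “101”/“011” example above and the mask is the “101”-then-“000” sequence; the combined echo bits and the ±1 scoring convention are assumptions for illustration.

```python
# Sketch of the masking idea: the mask is multiplied bit-by-bit into the
# correlation window, the trailing bits zeroed-out by the mask are dropped,
# and the shortened window is slid across a digitized echo signal. Matching
# bits score +1 and mismatching bits -1 (an assumed scoring convention).

def apply_mask(window, mask):
    masked = [w * m for w, m in zip(window, mask)]
    keep = max(i for i, m in enumerate(mask) if m == 1) + 1  # drop masked-off tail
    return masked[:keep]                                     # modified window

def sliding_correlation(window, stream):
    scores = []
    for start in range(len(stream) - len(window) + 1):
        subset = stream[start:start + len(window)]
        scores.append(sum(1 if w == s else -1 for w, s in zip(window, subset)))
    return scores

window = [1, 0, 1, 0, 1, 1]                    # sequence 1414: "101" then "011"
mask   = [1, 0, 1, 0, 0, 0]                    # mask 1412: "101" then "000"
echo   = [0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0]  # hypothetical combined echo bits

modified = apply_mask(window, mask)            # -> [1, 0, 1]
print(sliding_correlation(modified, echo))     # peaks mark candidate targets
```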
An additional combined digitized echo signal1422may be received based on the echoes108of the transmitted signals106that include the sequence1420of bits. The modified correlation window1418can then be compared with the additional digitized echo signal1422to identify subsets of interest associated with the different target objects104(shown inFIG.11). In the illustrated embodiment, the modified correlation window1418can be compared to different subsets of the digitized echo signal1422to identify first and second subsets of interest1424,1426, as described above. For example, the first and second subsets of interest1424,1426may be identified as having higher or the highest correlation values relative to other subsets of the digitized echo signal1422. In operation, when transmitted signals106reflect off multiple target objects104, the pattern transmitted in the signals106can be modified relatively quickly between successive bursts of the transmitted signals106when one or more of the target objects104cannot be identified from examination of the digitized echo signal226. The modified pattern can then be used to distinguish between the target objects104in the digitized echo signal740using the correlation window that includes the modified pattern. In another embodiment, the digital pulse sequence of bits included in a transmitted signal106(shown inFIG.11) may be different from the digital pulse sequence of bits included in the correlation window and compared to the baseband echo signal226(shown inFIG.12). For example, the pattern code generator228(shown inFIG.12) may create heterogeneous patterns and communicate the heterogeneous patterns in the pattern signals230(shown inFIG.12) to the transmitter208and the baseband processor232. The transmitter208can mix a first pattern of bits in the transmitted signal106and the baseband processor232can compare a different, second pattern of bits to the baseband echo signal226that is generated based on echoes108(shown inFIG.11) of the transmitted signals106. With respect to the example described above in connection withFIG.24, the sequence1414of bits in the correlation window1406can be included in the transmitted signals106while the sequence1416of bits in the mask1412or the sequence1420of bits in the modified correlation window1418can be compared to the digitized echo signal1422. Using different patterns in this manner can allow for the sensing assembly102(shown inFIG.11) to distinguish between multiple target objects104, as described above. Using different patterns in this manner can additionally allow for the sensing assembly102(shown inFIG.11) to perform other functions including, but not limited to clutter mitigation, signal-to-noise improvement, anti-jamming, anti-spoofing, anti-eavesdropping, and others. FIG.25is a schematic view of an antenna1500in accordance with one embodiment. The antenna1500may be used as the transmitting antenna204and/or the receiving antenna206, both of which are shown inFIG.12. Alternatively, another antenna may be used for the transmitting antenna204and/or the receiving antenna206. The antenna1500includes a multidimensional (e.g., two dimensional) array1502of antenna unit cells1504. The unit cells1504may represent or include microstrip patch antennas. Alternatively, the unit cells1504may represent another type of antenna. Several unit cells1504can be conductively coupled in series with each other to form a series-fed array1506. In the illustrated embodiment, the unit cells1504are connected in a linear series. 
Alternatively, the unit cells1504can be connected in another shape. Several series-fed arrays1506are conductively coupled in parallel to form the array1502in the illustrated embodiment. The numbers of unit cells1504and series-fed arrays1506shown inFIG.25are provided as examples. A different number of unit cells1504and/or arrays1506may be included in the antenna1500. The antenna1500may use the several unit cells1504to focus the energy of the transmitted signals106(shown inFIG.11) through constructive and/or destructive interference. FIG.26is a schematic diagram of one embodiment of the front end200of the sensing assembly102(shown inFIG.11). The antennas1500may be used as the transmitting antenna204and the receiving antenna206, as shown inFIG.26. Each antenna1500may be directly connected to the receiver602or transmitter600(e.g., with no other components disposed between the antenna1500and the receiver602or transmitter600) by a relatively short length of transmission line1600. The front end200of the sensing assembly102may be housed in an enclosure1602, such as a metal or otherwise conductive housing, with radio transmissive windows1604over the antennas1500. Alternatively, the front end200may be housed in a non-metallic (e.g., dielectric) enclosure. The windows over the antennas1500may not be cut out of the enclosure1602, but may instead represent portions of the enclosure1602that allow the transmitted signals106and echoes108to pass through the windows1604from or to the antennas1500. The enclosure1602may wrap around the antennas1500so that the antennas are effectively recessed into the conducting body of the enclosure1602, which can further improve isolation between the antennas1500. Alternatively, in the case of a non-conducting enclosure1602, the antennas1500may be completely enclosed by the enclosure1602and extra metal foil, and/or absorptive materials, or other measures may be added to improve isolation between the antennas1500. In one embodiment, if the isolation is sufficiently high, the transmitting and receiving antennas1500can be operated at the same time if the returning echoes108are sufficiently strong. This may be the case when the target is at very close range, and can allow for the sensing assembly102to operate without a transmit/receive switch. FIG.27is a cross-sectional view of one embodiment of the antenna1500along line17-17inFIG.26. The antenna1500(“Planar Antenna” inFIG.27) includes a cover layer1700(“Superstrate” inFIG.27) of an electrically insulating material (such as a dielectric or other nonconducting material). Examples of such materials for the cover layer1700include, but are not limited to, quartz, sapphire, various polymers, and the like. The antenna1500may be positioned on a surface of a substrate1706that supports the antenna1500. A conductive ground plane1708may be disposed on an opposite surface of the substrate1706, or in another location. The cover layer1700may be separated from the antenna1500by an air gap1704(“Air” inFIG.27). Alternatively, the gap between the cover layer1700and the antenna1500may be at least partially filled by another material or fluid other than air. As another alternative, the air gap may be eliminated, and the cover layer1700may rest directly on the antenna1500. The cover layer1700can protect the antenna1500from the environment and/or mechanical damage caused by external objects.
In one embodiment, the cover layer1700provides a lensing effect to focus the energy of the transmitted signals106emitted by the antenna1500into a beam or to focus the energy of the reflected echoes108toward the antenna1500. This lensing effect can permit transmitted signals106and/or echoes108to pass through additional layers1702of materials (e.g., insulators such as Teflon, polycarbonate, or other polymers) that are positioned between the antenna1500and the target object104(shown inFIG.11). For example, the sensing assembly102can be mounted to an object being monitored (e.g., the top of a tank of fluid being measured by the sensing assembly102), while the lensing effect can permit the sensing assembly102to transmit the signals106and receive the echoes108through the top of the tank without cutting windows or openings through the top of the tank. In one embodiment, the substrate1706may have a thickness dimension between the opposite surfaces that is thinner than a wavelength of the carrier signal of the transmitted signals106and/or echoes108. For example, the thickness of the substrate1706may be on the order of 1/20th of a wavelength. The thicknesses of the air gap1704and/or superstrate1700may be larger, such as ⅓ of the wavelength. Either one or both of the air gap1704and the superstrate1700may also be removed altogether. One or more embodiments of the system100and/or sensing assembly102described herein may be used for a variety of applications that use the separation distance110and/or time of flight that is measured by the sensing assembly102. Several specific examples of applications of the system100and/or sensing assembly102are described herein, but not all applications or uses of the system100or sensing assembly102are limited to those set forth herein. For example, many applications that use the detection of the separation distance110(e.g., as a depth measurement) can use or incorporate the system100and/or sensing assembly102. FIG.28illustrates one embodiment of a containment system1800. The system1800includes a containment apparatus1802, such as a fluid tank, that holds or stores one or more fluids1806. The sensing assembly102may be positioned on or at a top1804of the containment apparatus1802and direct transmitted signals106toward the fluid1806. Reflected echoes108from the fluid1806are received by the sensing assembly102to measure the separation distance110between the sensing assembly102and an upper surface of the fluid1806. The location of the sensing assembly102may be known and calibrated to the bottom of the containment apparatus1802so that the separation distance110to the fluid1806may be used to determine how much fluid1806is in the containment apparatus1802. The sensing assembly102may be able to accurately measure the separation distance110using one or more of the coarse, fine, and/or ultrafine stage determination techniques described herein. Alternatively or additionally, the sensing apparatus102may direct transmitted signals106toward a port (e.g., a filling port through which fluid1806is loaded into the containment apparatus1802) and monitor movement of the fluid1806at or near the port. For example, if the separation distance110from the sensing assembly102to the port is known such that the bit of interest of the echoes108is known, the ultrafine stage determination described above may be used to determine if the fluid1806at or near the port is moving (e.g., turbulent). This movement may indicate that fluid1806is flowing into or out of the containment apparatus1802.
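A small sketch of the fluid-level idea above: with the assembly's mounting height calibrated to the tank bottom, the measured separation distance gives the fluid depth, and a known tank cross-section gives the stored volume. The numbers, and the assumption of a uniform cross-section, are illustrative only.

```python
# Fluid level from the measured separation distance: the calibrated distance
# from the sensing assembly to the tank bottom minus the distance to the fluid
# surface gives the depth of fluid. A uniform tank cross-section is assumed
# here purely to turn that depth into a volume; all values are placeholders.

def fluid_depth(calibrated_to_bottom_m, measured_to_surface_m):
    return calibrated_to_bottom_m - measured_to_surface_m

def fluid_volume(depth_m, cross_section_m2):
    return depth_m * cross_section_m2

calibrated = 3.0     # assembly-to-tank-bottom distance, from calibration
measured = 1.8       # separation distance 110 to the fluid surface

depth = fluid_depth(calibrated, measured)
print(depth)                          # 1.2 m of fluid
print(fluid_volume(depth, 2.5))       # 3.0 cubic metres for a 2.5 m^2 tank
```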
The sensing assembly102can use this determination as an alarm or other indicator of when fluid1806is flowing into or out of the containment apparatus1802. Alternatively, the sensing assembly102could be positioned or aimed at other strategically important locations where the presence or absence of turbulence and/or the intensity (e.g., degree or amount of movement) could indicate various operating conditions and parameters (e.g., amounts of fluid, movement of fluid, and the like). The sensing assembly102could periodically switch between these measurement modes (e.g., measuring the separation distance110being one mode and monitoring for movement being another mode), and then report the data and measurements to the control unit112(shown inFIG.11). Alternatively, the control unit112could direct the sensing assembly102to make the various types of measurements (e.g., measuring the separation distance110or monitoring for movement) at different times. FIG.29illustrates one embodiment of a zone restriction system1900. The system1900may include a sensing assembly102directing transmitted signals106(shown inFIG.11) toward a first zone1902(e.g., area on a floor, volume in space, and the like). A human operator1906may be located in a different, second zone1904to perform various duties. The first zone1902may represent a restricted area or volume where the operator1906is to remain out of when one or more machines (e.g., automated robots or other components) operate for the safety of the operator1906. The sensing assembly102can direct the transmitted signals106toward the first zone1902and monitor the received echoes108to determine if the operator1906enters into the first zone1902. For example, intrusion of the operator1906into the first zone1902may be detected by identification of movement using the one or more of the coarse, fine, and/or ultrafine stage determination techniques described herein. If the sensing assembly102knows the distance to the first zone1902(e.g., the separation distance110to the floor in the first zone1902), then the sensing assembly102can monitor for movement within the subset of interest in the echo signal that is generated based on the echoes, as described above. When the sensing assembly102detects entry of the operator1906into the first zone1902, the sensing assembly102can notify the control unit112(shown inFIG.11), which can deactivate machinery operating in the vicinity of the first zone1902to avoid injuring the operator1906. FIG.30illustrates another embodiment of a volume restriction system2000. The system2000may include a sensing assembly102directing transmitted signals106(shown inFIG.11) toward a safety volume2002(“Safety zone” inFIG.30). Machinery2004, such as an automated or manually control robotic device, may be located and configured to move within the safety volume2002. The volume through which the transmitted signals106are communicated may be referred to as a protected volume2006. The protected zone2006may represent a restricted area or volume where humans or other objects are to remain out of when the machinery2004operates. The sensing assembly102can direct the transmitted signals106through the protected volume2006and monitor the received echoes108to determine if there is any motion identified outside of the safety zone2002but within the protected zone2006. For example, intrusion of a human into the protected volume2006may be detected by identification of movement using the ultrafine stage determination described above. 
When the sensing assembly102detects entry into the protected volume2006, the sensing assembly102can notify the control unit112(shown inFIG.11), which can deactivate the machinery2004to avoid injuring any person or thing that has entered into the protected volume2006. FIG.31is a schematic diagram of one embodiment of a mobile system2100that includes the sensing assembly102. The system2100includes a mobile apparatus2102with the sensing assembly102coupled thereto. In the illustrated embodiment, the mobile apparatus2102is a mobilized robotic system. Alternatively, the mobile apparatus2102may represent another type of mobile device, such as an automobile, an underground drilling vessel, or another type of vehicle. The system2100uses measurements made by the sensing assembly102to navigate around or through objects. The system2100may be useful for automated navigation based on detection of motion and/or measurements of separation distances110between the sensing assembly102and other objects, and/or for navigation that is assisted with such measurements and detections. For example, the sensing assembly102can measure separation distances110between the sensing assembly102and multiple objects2104A-D in the vicinity of the mobile apparatus2102. The mobile apparatus2102can use these separation distances110to determine how far the mobile apparatus2102can travel before needing to turn or change direction to avoid contact with the objects2104A-D. In one embodiment, the mobile apparatus2102can use multiple sensing assemblies102to determine a layout or map of an enclosed vicinity2106around the mobile apparatus2102. The vicinity2106may be bounded by the walls of a room, building, tunnel, and the like. A first sensing assembly102on the mobile apparatus2102may be oriented to measure separation distances110to one or more boundaries (e.g., walls or surfaces) of the vicinity2106along a first direction, a second sensing assembly102may be oriented to measure separation distances110to one or more other boundaries of the vicinity2106along a different (e.g., orthogonal) direction, and the like. The separation distances110to the boundaries of the vicinity2106can provide the mobile apparatus2102with information on the size of the vicinity2106and a current location of the mobile apparatus2102. The mobile apparatus2102may then move in the vicinity2106while one or more of the sensing assemblies102acquire updated separation distances110to one or more of the boundaries of the vicinity2106. Based on changes in the separation distances110, the mobile apparatus2102may determine where the mobile apparatus2102is located in the vicinity2106. For example, if an initial separation distance110to a first wall of a room is measured as ten feet (three meters) and an initial separation distance110to a second wall of the room is measured as five feet (1.5 meters), the mobile apparatus2102may initially locate itself within the room. If a later separation distance110to the first wall is four feet (1.2 meters) and a later separation distance110to the second wall is seven feet (2.1 meters), then the mobile apparatus2102may determine that it has moved six feet (1.8 meters) toward the first wall and two feet (0.6 meters) toward the second wall. In one embodiment, the mobile apparatus2102can use information generated by the sensing assembly102to distinguish between immobile and mobile objects2104in the vicinity2106. Some of the objects2104A,2104B, and2104D may be stationary objects, such as walls, furniture, and the like. 
Other objects210C may be mobile objects, such as humans walking through the vicinity2106, other mobile apparatuses, and the like. The mobile apparatus2102can track changes in separation distances110between the mobile apparatus2102and the objects2104A,2104B,2104C,2104D as the mobile apparatus2102moves. Because the separation distances110between the mobile apparatus2102and the objects2104may change as the mobile apparatus2102moves, both the stationary objects2104A,2104B,2104D and the mobile objects2104C may appear to move to the mobile apparatus2102. This perceived motion of the stationary objects2104A,2104B,2104D that is observed by the sensing assembly102and the mobile apparatus2102is due to the motion of the sensing assembly102and the mobile apparatus2102. To compute the motion (e.g., speed) of the mobile apparatus2102, the mobile apparatus210can track changes in separation distances110to the objects2104and generate object motion vectors associated with the objects2104based on the changes in the separation distances110. FIG.32is a schematic diagram of several object motion vectors generated based on changes in the separation distances110between the mobile apparatus2102and the objects (e.g., the objects2104ofFIG.31) in accordance with one example. The object motion vectors2200A-F can be generated by tracking changes in the separation distances110over time. In order to estimate motion characteristics (e.g., speed and/or heading) of the mobile apparatus2102, these object motion vectors2200can be combined, such as by summing and/or averaging the object motion vectors2200. For example, a motion vector2202of the mobile apparatus2102may be estimated by determining a vector that is an average of the object motion vectors2200and then determining an opposite vector as the motion vector2202. The combining of several object motion vectors2200can tend to correct spurious object motion vectors that are due to other mobile objects in the environment, such as the object motion vectors2200C,2200F that are based on movement of other mobile objects in the vicinity. The mobile apparatus2102can learn (e.g., store) which objects are part of the environment and that can be used for tracking movement of the mobile apparatus2102and may be referred to as persistent objects. Other objects that are observed that do not agree with the known persistent objects are called transient objects. Object motion vectors of the transient objects will have varying trajectories and may not agree well with each other or the persistent objects. The transient objects can be identified by their trajectories as well as their radial distance from the mobile apparatus2102, e.g. the walls of the tunnel will remain at their distance, whereas transient objects will pass closer to the mobile apparatus2102. In another embodiment, multiple mobile apparatuses2102may include the sensing system100and/or sensing assemblies102to communicate information between each other. For example, the mobile apparatuses2102may each use the sensing assemblies102to detect when the mobile apparatuses2102are within a threshold distance from each other. The mobile apparatuses2102may then switch from transmitting the transmitted signals106in order to measure separation distances110and/or detect motion to transmitting the transmitted signals106to communicate other information. 
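The combination of object motion vectors described above in connection with FIG.32 can be sketched as a simple average-and-negate step. Outlier rejection of transient objects is omitted for brevity, and the vectors below are hypothetical two-dimensional values.

```python
# Sketch of combining object motion vectors to estimate the motion of the
# mobile apparatus: the per-object vectors are averaged and the result is
# negated, since the apparent motion of stationary objects is opposite to the
# apparatus's own motion.

def estimate_apparatus_motion(object_vectors):
    n = len(object_vectors)
    avg_x = sum(v[0] for v in object_vectors) / n
    avg_y = sum(v[1] for v in object_vectors) / n
    return (-avg_x, -avg_y)   # opposite of the averaged apparent motion

# Apparent per-object motion vectors (metres per update), e.g. 2200A-F.
vectors = [(-0.5, 0.0), (-0.48, 0.02), (-0.52, -0.01),   # stationary objects
           (0.3, 0.4),                                   # a transient (mobile) object
           (-0.49, 0.01), (-0.51, 0.0)]

print(estimate_apparatus_motion(vectors))   # roughly (0.37, -0.07): apparatus motion
```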
For example, instead of generating the digital pulse sequence to measure separation distances110, at least one of the mobile apparatuses2102may use the binary code sequence (e.g., of ones and zeros) in a pattern signal that is transmitted toward another mobile apparatus2102to communicate information. The other mobile apparatus2102may receive the transmitted signal106in order to identify the transmitted pattern signal and interpret the information that is encoded in the pattern signal. FIG.33is a schematic diagram of one example of using the sensing assembly102in a medical application. The sensing assembly102may use one or more of the stages described above (e.g., coarse stage, fine stage, and ultrafine stage) to monitor changes in position of a patient2300and/or relatively small movements of the patient. For example, the ultrafine stage determination of movement described above may be used for breath rate detection, heart rate detection, monitoring gross motor or muscle movement, and the like. Breath rate, heart rate and activity can be useful for diagnosing sleep disorders, and since the sensing is non-contact and can be more comfortable for the patient being observed. As one example, the separation distance110to the abdomen and/or chest of the patient2300can be determined to within one bit of the digital pulse sequence (e.g., the bit of interest), as described above. The sensing assembly102can then track relatively small motions of the chest and/or abdomen within the subset of interest to track a breathing rate and/or heart rate. Additionally or alternatively, the sensing assembly102can track the motions of the chest and/or abdomen and combine the motions with a known, measured, observed, or designated size of the abdomen to estimate the tidal volume of breaths of the patient2300. Additionally or alternatively, the sensing assembly102can track the motions of the chest and abdomen together to detect paradoxical breathing of the patient2300. As another example, the sensing assembly102may communicate transmitted signals106that penetrate into the body of the patient2300and sense the motion or absolute position of various internal structures, such as the heart. Many of these positions or motions can be relatively small and subtle, and the sensing assembly102can use the ultrafine stage determination of motion or the separation distance110to sense the motion or absolute position of the internal structures. Using the non-contact sensing assembly102also may be useful for situations where it is impossible or inconvenient to use wired sensors on the patient2300(e.g., sensors mounted directly to the test subject, connected by wires back to a medical monitor). For example, in high-activity situations where conventional wired sensors may get in the way, the sensing assembly102may monitor the separation distance110and/or motion of the patient2300from afar. In another example, the sensing assembly102can be used for posture recognition and overall motion or activity sensing. This can be used for long-term observation of the patient2300for the diagnosis of chronic conditions, such as depression, fatigue, and overall health of at-risk individuals such as the elderly, among others. In the case of diseases with relatively slow onset, such as depression, the long term observation by the sensing assembly102may be used for early detection of the diseases. 
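The text does not prescribe how a breath rate is extracted from the tracked chest motion; one illustrative approach is to take the dominant frequency of the displacement (or phase) time series, as sketched below with a synthetic waveform and an assumed sampling rate.

```python
# Illustrative way to turn a chest-displacement time series (e.g., the phase /
# distance values produced by the ultrafine stage) into a breathing rate by
# finding its dominant frequency. The sampling rate and synthetic waveform are
# hypothetical.

import numpy as np

fs = 10.0                                   # assumed samples per second
t = np.arange(0, 60, 1 / fs)                # one minute of observations
displacement = 0.004 * np.sin(2 * np.pi * 0.25 * t)   # ~15 breaths per minute
displacement += 0.0002 * np.random.randn(t.size)      # measurement noise

spectrum = np.abs(np.fft.rfft(displacement - displacement.mean()))
freqs = np.fft.rfftfreq(displacement.size, d=1 / fs)
breath_hz = freqs[np.argmax(spectrum[1:]) + 1]         # skip the DC bin

print(round(breath_hz * 60, 1), "breaths per minute")  # approximately 15
```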
Also, since the unit can detect the medical parameters or quantities without anything being mounted on the patient2300, the sensing assembly102may be used to make measurements of the patient2300without the knowledge or cooperation of the patient2300. This could be useful in many situations, such as when dealing with children who would be made upset if sensors are attached to them. It may also give an indication of the mental state of a patient2300, such as their breath becoming rapid and shallow when they become nervous. This would give rise to a remote lie-detector functionality. In another embodiment, data generated by the sensing assembly102may be combined with data generated or obtained by one or more other sensors. For example, calculation of the separation distance110by the sensing assembly102may be used as a depth measurement that is combined with other sensor data. Such combination of data from different sensors is referred to herein as sensor fusion, and includes the fusing of two or more separate streams of sensor data to form a more complete picture of the phenomena or object or environment that is being sensed. As one example, separation distances110calculated using the sensing assembly102may be combined with two-dimensional image data acquired by a camera. For example, without the separation distances110, a computer or other machine may not be able to determine the actual physical size of the objects in a two-dimensional image. FIG.34is a two-dimensional image2404of human subjects2400,2402in accordance with one example of an application of the system100shown inFIG.11. The image2404may be acquired by a two-dimensional image forming apparatus, such as a camera. The image forming apparatus may acquire the image for use by another system, such as a security system, an automatically controlled (e.g., moveable) robotic system, and the like. The human subjects2400,2402may be approximately the same size (e.g., height). In reality, the human subject2400is farther from the image forming apparatus that acquired the image2404than the human subject2402. However, due to the inability of the image forming apparatus to determine the relative separation distances between the image forming apparatus and each of the subjects2400,2402, the system that relies on the image forming apparatus to recognize the subjects2400,2402may be unable to determine if the subject2400is located farther away (e.g., is at the location of2400A) or is a much smaller human than the subject2402(e.g., is the size represented by2400B). The sensing assembly102(shown inFIG.11) can determine separation distances110(shown inFIG.11) between the image forming apparatus (e.g., with the sensing assembly102disposed at or near the image forming apparatus) and each of the subjects2400,2402to provide a depth context to the image2404. For example, the image forming apparatus or the system that uses the image2404for one or more operations may use the separation distance110to each of the subjects2400,2402to determine that the subjects2400,2402are approximately the same size, with the subject2400located farther away than the subject2402. With this separation distance110(shown inFIG.11) information and information about the optics that were used to capture the two dimensional image2400, it may be possible to assign actual physical sizes to the subjects2400,2402. 
For example, knowing the physical size that is encompassed by different portions (e.g., pixels or groups of pixels) of the image2400and knowing the separation distance110to each subject2400,2402, the image forming apparatus and/or the system using the image2404for one or more operations can calculate sizes (e.g., heights and/or widths) of the subjects2400,2402. FIG.35is a schematic diagram of a sensing system2500that may include the sensing assembly102(shown inFIG.11) in accordance with one embodiment. Many types of sensors such as light level sensors, radiation sensors, moisture content sensors, and the like, obtain measurements of target objects104that may change as the separation distance110between the sensors and the target objects104varies. The sensing systems2500shown inFIG.35may include or represent one or more sensors that acquire information that changes as the separation distance110changes and may include or represent the sensing assembly102. Distance information (e.g., separation distances110) from the sensing systems2500and the target objects104can provide for calibration or correction of other sensor information that is dependent on the distance between the sensor and the targets being read or monitored by the sensor. For example, the sensing systems2500can acquire or measure information (e.g., light levels, radiation, moisture, heat, and the like) from the target objects104A,104B and the separation distances110A,110B to the target objects104A,104B. The separation distances110A,110B can be used to correct or calibrate the measured information. For example, if the target objects104A,104B both provide the same light level, radiation, moisture, heat, and the like, the different separation distances110A,110B may result in the sensing systems2500A,2500B measuring different light levels, radiation, moisture, heat, and the like. With the sensing assembly102(shown inFIG.1) measuring the separation distances110A,110B, the measured information for the target object104A and/or104B can be corrected (e.g., increased based on the size of the separation distance110A for the target object104A and/or decreased based on the size of the separation distance110B for the target object104B) so that the measured information is more accurate relative to not correcting the measured information for the different separation distances110. As another example, the sensing system2500may include a reflective pulse oximetry sensor and the sensing assembly102. Two or more different wavelengths of light are directed at the surface of the target object104by the system2500and a photo detector of the system2500examines the scattered light. The ratio of the reflected power can be used to determine the oxygenation level of the blood in the target object104. Instead of being directly mounted (e.g., engaged to) the body of the patient that is the target object104, the sensing system2500may be spaced apart from the body of the patient. The surface of the patient body can be illuminated with light sources and the sensing assembly102(shown inFIG.11) can measure the separation distance110to the target object104(e.g., to the surface of the skin). The oxygenation level of the blood in the patient can then be calibrated or corrected for the decrease in the reflected power of the light that is caused by the sensing system2500being separated from the patient. 
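The distance-based correction described above depends on the physics of the quantity being sensed. The sketch below assumes an inverse-square falloff purely for illustration and scales a raw reading back to a reference distance; a real calibration would use the law appropriate to the sensor.

```python
# Sketch of correcting a distance-dependent sensor reading using the separation
# distance measured by the sensing assembly. An inverse-square falloff is an
# assumption made here for illustration only; the actual correction depends on
# the quantity being sensed (light, radiation, moisture, etc.).

def corrected_reading(raw_reading, separation_m, reference_m=1.0):
    """Scale a raw reading to what it would be at the reference distance."""
    return raw_reading * (separation_m / reference_m) ** 2

# Two targets emitting the same level, measured at different distances.
print(corrected_reading(0.25, separation_m=2.0))   # 1.0 after correction
print(corrected_reading(1.0, separation_m=1.0))    # 1.0, already at reference
```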
In another embodiment, the sensing assembly102and/or system100shown inFIG.11can be provided as a stand-alone unit that can communicate with other sensors, controllers, computers, and the like, to add the above-described functionality to a variety of sensor systems. A software-implemented system can collect and aggregate the information streams from the sensors and deliver the sensed information to the controlling system, where the separation distance110measured by the assembly102and/or system100is used in conjunction with the sensed information. Alternatively or additionally, the separation distances110measured by the assembly102can be collected along with a time stamp or other marker such as geographic location without communicating directly with the other sensors, controller, computer, and the like. The software-implemented system can then reconcile the separation distance110and other sensor data to align the measurements with each other. The examples of sensor fusion described herein are not limited to just the combination of the sensing assembly102and one other sensor. Additional sensors may be used to aggregate the separation distances110and/or motion detected by the sensing assembly102with the data streams acquired by two or more additional sensors. For example, audio data (from a microphone), video data (from a camera), and the separation distances110and/or motion from the sensing assembly102can be aggregated to give a more complete understanding of a physical environment. FIG.38is a schematic diagram of a sensing system2800that may include the sensing assembly102in accordance with one embodiment. The sensing system2800includes a sensor2802that obtains lateral size data of a target object2804. For example, the sensor2802may be a camera that obtains a two dimensional image of a box or package. FIG.39is a schematic diagram representative of the lateral size data of the target object2804that is obtained by the sensor2802. The sensor2802(or a control unit communicatively coupled with the sensor2802) may measure two dimensional sizes of the target object2804, such as a length dimension2806and a width dimension2808. For example, a two-dimensional surface area2900of the target object2804may be calculated from the image acquired by the sensor2802. In one embodiment, the number of pixels or other units of the image formed by the sensor2802can be counted or measured to determine the surface area2900of the target object2804. FIG.40is another view of the sensing assembly102and the target object2804shown inFIGS.38and39. In order to calculate the volume or three dimensional outer surface area of the target object2804, the sensing assembly102may be used to measure a depth dimension2810of the target object2804. For example, the sensing assembly102may measure the separation distance110between the sensing assembly102and a surface3000(e.g., an upper surface) of the target object2804that is imaged by the sensor2802. If a separation distance3002between the sensing assembly102and a supporting surface3004on which the target object2804is positioned is known or previously measured, then the separation distance110may be used to calculate the depth dimension2810of the target object2804. For example, the measured separation distance110may be subtracted from the known or previously measured separation distance3002to calculate the depth dimension2810.
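As a non-limiting illustration of the depth measurement just described (and of the volume calculation discussed below), the following sketch subtracts the measured separation distance110from a previously measured separation distance3002to the supporting surface3004and then combines the resulting depth with lateral dimensions. All of the distances and dimensions are assumed values used only for the example.

KNOWN_DISTANCE_TO_SUPPORT_M = 1.20  # separation distance3002, measured beforehand
measured_distance_to_top_m = 0.90   # separation distance110 to the upper surface3000

depth_m = KNOWN_DISTANCE_TO_SUPPORT_M - measured_distance_to_top_m  # depth dimension2810

length_m, width_m = 0.50, 0.40      # lateral size data from the two-dimensional image
volume_m3 = length_m * width_m * depth_m
print(depth_m, volume_m3)           # ~0.30 m deep, ~0.06 cubic meters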
The depth dimension2810may be combined (e.g., by multiplying) with the lateral size data (e.g., the width dimension2808and the length dimension2806) of the target object2804to calculate a volume of the target object2804. In another example, the depth dimension2810can be combined with the lateral size data to calculate surface areas of each of one or more surfaces of the target object2804, which may then be combined to calculate an outer surface area of the target object2804. Combining the depth data obtained from the sensing assembly102with the two dimensional, or lateral, data obtained by the sensor2802may be useful in applications where the size, volume, or surface area of the target object2804is to be measured, such as in package shipping, identification of or distinguishing between different sized target objects, and the like. FIG.36is a schematic diagram of another embodiment of a sensing system2600. The sensing system2600may be similar to the system100shown inFIG.11. For example, the system2600may include a sensing assembly2602(“Radar Unit”) that is similar to the sensing assembly102(shown inFIG.11). Although the sensing assembly2602is labeled “Radar Unit” inFIG.36, alternatively, the sensing assembly2602may use another technique or medium for determining separation distances110and/or detecting motion of a target object104(e.g., light), as described above in connection with the system100. The assembly2602includes a transmitting antenna2604that may be similar to the transmitting antenna204(shown inFIG.12) and a receiving antenna2606that may be similar to the receiving antenna206(shown inFIG.12). In the illustrated embodiment, the antennas2604,2606are connected to the assembly2602using cables2608. The cables2608may be flexible to allow the antennas2604,2606to be re-positioned relative to the target object104on-the-fly. For example, the antennas2604,2606may be moved relative to the target object104and/or each other as the transmitted signals106are transmitted toward the target object104and/or the echoes108are received off the target object104, or between the transmission of the transmitted signals106and the receipt of the echoes108. The antennas2604,2606may be moved to provide for pseudo-bistatic operation of the system2600. For example, the antennas2604,2606can be moved around to various or arbitrary locations to capture echoes108that may otherwise be lost if the antennas2604,2606were fixed in position. In one embodiment, the antennas2604,2606could be positioned on opposite sides of the target object104in order to test for the transmission of the transmitted signals106through the target object104. Changes in the transmission of the transmitted signals106through the target object104can indicate physical changes in the target object104being sensed. This scheme can be used with greater numbers of antennas2604and/or2606. For example, multiple receiving antennas2606can be used to detect target objects104that may otherwise be difficult to detect. Multiple transmitting antennas2604may be used to illuminate target objects104with transmitted signals106that may otherwise not be detected. Multiple transmitting antennas2604and multiple receiving antennas2606can be used at the same time.
The transmitting antennas2604and/or receiving antennas2606can be used at the same time, transmitting copies of the transmitted signal106or receiving multiple echoes108, or the sensing assembly2602can be switched among the transmitting antennas2604and/or among the receiving antennas2606, with the observations (e.g., separation distances110and/or detected motion) built up over time. FIGS.27A-Billustrate one embodiment of a method2700for sensing separation distances from a target object and/or motion of the target object. The method2700may be used in conjunction with one or more of the systems or sensing assemblies described herein. At2702, a determination is made as to whether to use the coarse stage determination of the time of flight and/or separation distance. For example, an operator of the system100(shown inFIG.11) may manually provide input to the system100and/or the system100may automatically determine whether to use the coarse stage determination described above. If the coarse stage determination is to be used, flow of the method2700proceeds to2704. Alternatively, flow of the method2700may proceed to2718. In one embodiment, the coarse stage uses a single channel (e.g., either the I channel or the Q channel) of the transmitted signal and received echo signal to determine the time of flight and/or separation distance, also as described above. At2704, an oscillating signal is mixed with a coarse transmit pattern to create a transmitted signal. For example, the oscillating signal216(shown inFIG.12) is mixed with a digital pulse sequence of the transmit pattern signal230(shown inFIG.12) to form the transmitted signal106(shown inFIG.11), as described above. At2706, the transmitted signal is transmitted toward a target object. For example, the transmitting antenna204(shown inFIG.12) may transmit the transmitted signal106(shown inFIG.11) toward the target object104(shown inFIG.11), as described above. At2708, echoes of the transmitted signal that are reflected off the target object are received. For example, the echoes108(shown inFIG.11) that are reflected off the target object104(shown inFIG.11) are received by the receiving antenna206(shown inFIG.12), as described above. At2710, the received echoes are down converted to obtain a baseband signal. For example, the echoes108(shown inFIG.11) are converted into the baseband echo signal226(shown inFIG.12). For example, the received echo signal224may be mixed with the same oscillating signal216(shown inFIG.12) that was mixed with the coarse transmit pattern signal230(shown inFIG.12) to generate the transmitted signal106(shown inFIG.11). The echo signal224can be mixed with the oscillating signal216to generate the baseband echo signal226(shown inFIG.12) as the coarse receive data stream, as described above. At2712, the baseband signal is digitized to obtain the coarse receive data stream. For example, the baseband echo signal226may pass through the baseband processor232, including the digitizer730, to produce the digitized echo signal740. At2714, a correlation window (e.g., a coarse correlation window) and a coarse mask are compared to the data stream to identify a subset of interest. Alternatively, the mask (e.g., a mask to eliminate or change one or more portions of the data stream) may not be used.
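Before the correlation at2714is described in more detail, the mixing operations at2704and2710can be illustrated with the following non-limiting sketch of a single (non-I/Q) channel. The carrier frequency, bit rate, samples-per-bit, and the transmit pattern itself are assumed values chosen only for the example; a low-pass filter that would normally follow the down-conversion is approximated by a per-bit average.

import numpy as np

CARRIER_HZ = 2.0e8        # assumed oscillating signal frequency
BIT_RATE_HZ = 1.0e8       # assumed transmit pattern bit rate
SAMPLES_PER_BIT = 20      # sample rate = BIT_RATE_HZ * SAMPLES_PER_BIT

pattern_bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
# Map bits to +/-1 and hold each bit for SAMPLES_PER_BIT samples (the digital pulse sequence).
baseband = np.repeat(2 * pattern_bits - 1, SAMPLES_PER_BIT).astype(float)

t = np.arange(baseband.size) / (BIT_RATE_HZ * SAMPLES_PER_BIT)
oscillator = np.cos(2 * np.pi * CARRIER_HZ * t)

transmitted = baseband * oscillator   # up-conversion: mix the pattern with the oscillating signal
recovered = transmitted * oscillator  # down-conversion of an ideal, undelayed echo
recovered_bits = recovered.reshape(-1, SAMPLES_PER_BIT).mean(axis=1) > 0
print(recovered_bits.astype(int))     # matches pattern_bits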
In one embodiment, the coarse correlation window320(shown inFIG.13) that includes all or a portion of the coarse transmit pattern included in the transmitted signal106(shown inFIG.11) is compared to various subsets or portions of the digitized echo signal740(shown inFIG.12), as described above. Correlation values can be calculated for the various subsets of the data stream226, and the subset of interest may be identified by comparing the correlation values, such as by identifying the subset having a correlation value that is the greatest or is greater than one or more other subsets of interest. At2716, a time of flight of the transmitted signal and echo is calculated based on a time delay of the subset of interest. This time of flight can be referred to as a coarse time of flight. As described above, the subset of interest can be associated with a time lag (td) between transmission of the transmitted signal106(shown inFIG.11) and the first bit of the subset of interest (or another bit in the subset of interest). The time of flight can be equal to the time lag, or the time of flight can be based on the time lag, with a correction or correlation factor (e.g., for the propagation of signals) being used to modify the time lag to the time of flight, as described above. At2718, a determination is made as to whether the fine stage determination of the separation distance is to be used. For example, a determination may be made automatically or manually to use the fine stage determination to further refine the measurement of the separation distance110(shown inFIG.11) and/or to monitor or track motion of the target object104(shown inFIG.11), as described above. If the fine stage is to be used, then flow of the method2700may proceed to2720. On the other hand, if the fine stage is not to be used, then flow of the method2700may return to2702. At2720, an oscillating signal is mixed with a digital pulse sequence to create a transmitted signal. As described above, the transmit pattern that is used in the fine stage may be different from the transmit pattern used in the coarse stage. Alternatively, the transmit pattern may be the same for the coarse stage and the fine stage. At2722, the transmitted signal is communicated toward the target object, similar to as described above in connection with2706. At2724, echoes of the transmitted signal that are reflected off the target object are received, similar to as described above in connection with2708. At2726, the received echoes are down converted to obtain a baseband signal. For example, the echoes108(shown inFIG.11) are converted into the baseband echo signal226(shown inFIG.12). At2728, the baseband signal226is compared to a fine receive pattern. The fine receive pattern may be delayed by the coarse time of flight, as described above. For example, instead of comparing the baseband signal with the receive pattern with both the baseband signal and the receive pattern having the same starting or initial time reference, the receive pattern may be delayed by the same time as the time delay measured by the coarse stage determination. This delayed receive pattern also may be referred to as a “coarse delayed fine extraction pattern”728. At2730, a time lag between the fine data stream and the time delayed receive pattern is calculated. This time lag may represent the temporal overlap or mismatch between the waveforms in the fine data stream and the time delayed receive pattern, as described above in connection withFIGS.8through11. 
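Before the fine-stage time lag is examined further, the coarse correlation described above can be illustrated with the following non-limiting sketch, in which a short receive pattern is slid across a toy digitized data stream, a correlation value is computed for each subset, and the best-matching subset gives the coarse time lag. The bit values and the bit period used to convert the lag into a time of flight are assumptions for the example only.

def coarse_time_lag(receive_pattern, data_stream):
    # Slide the pattern across the data stream and score each subset:
    # +1 for each matching bit, -1 for each mismatched bit.
    n = len(receive_pattern)
    best_offset, best_score = 0, float("-inf")
    for offset in range(len(data_stream) - n + 1):
        subset = data_stream[offset:offset + n]
        score = sum(1 if a == b else -1 for a, b in zip(receive_pattern, subset))
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset

pattern = [1, 0, 1, 1, 0, 0, 1, 0]
stream = [0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
lag_bits = coarse_time_lag(pattern, stream)   # subset of interest starts 3 bits into the stream
BIT_PERIOD_S = 1.0e-9                         # assumed duration of one bit
coarse_time_of_flight_s = lag_bits * BIT_PERIOD_S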
The time lag may be measured as the energies of the waveforms that represent the overlap between the fine data stream and the time delayed receive pattern. As described above, time periods808,810,904,906(shown inFIGS.8and9) representative of the time lag may be calculated. At2732, the time of flight measured by the coarse stage (e.g., the “time of flight estimate”) is refined by the time lag. For example, the time lag calculated at2730can be added to the time of flight calculated at2716. Alternatively, the time lag may be added to a designated time of flight, such as a time of flight associated with or calculated from a designated or known separation distance110(shown inFIG.11). At2734, the time of flight (that includes the time lag calculated at2732) is used to calculate the separation distance from the target object, as described above. Flow of the method2700may then return to2702in a loop-wise manner. The above methods can be repeated for the I and Q channels separately or in parallel using parallel paths as inFIG.22or a switch or multiplexed path as described above to extract differences in the I and Q channels. These differences can be examined to resolve the phase of the echoes. In one embodiment, performance of the fine stage determination (e.g., as described in connection with2720through2732) is performed on one of the I or Q components of channels of the transmit signal and the echo signal, as described above. For example, the I channel of the echo signal226(shown inFIG.12) may be examined in order to measure the amount of temporal overlap between the time-delayed receive pattern and the echo signal226, as described above. In order to perform the ultrafine stage determination, a similar examination may be performed on another component or channel of the echo signal, such as the Q channel. For example, the I channel analysis of the echo signal226(e.g., the fine stage) may be performed concurrently or simultaneously with the Q channel analysis of the same echo signal226(e.g., the ultrafine stage). Alternatively, the fine stage and ultrafine stage may be performed sequentially, with one of the I or Q channels being examined to determine a temporal overlap of the echo signal and the time-delayed receive pattern before the other of the Q or I channels being examined to determine a temporal overlap. The temporal overlaps of the I and Q channels are used to calculate time lags (e.g., I and Q channel time lags), which can be added to the coarse stage determination or estimate of the time of flight. This time of flight can be used to determine the separation distance110(shown inFIG.11), as described above. Alternatively or additionally, the time lags of the waveforms in the I channel and Q channel can be examined to resolve phases of the echoes in order to calculate separation distance or motion of the target. As described above, the ultrafine stage determination may alternatively or additionally involve a similar process as the coarse stage determination. For example, the coarse stage determination may examine the I channel of the receive pattern and the data stream to determine correlation values of different subsets of the data stream and, from those correlation values, determine a subset of interest and a corresponding time-of-flight, as described herein. 
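As a non-limiting illustration of the calculation at2732and2734, the following sketch adds a fine-stage time lag to a coarse time of flight estimate and converts the result into a separation distance110; the two time values are assumptions for the example.

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def separation_distance_m(round_trip_time_of_flight_s):
    # Half of the round-trip time multiplied by the propagation speed.
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_of_flight_s / 2.0

coarse_tof_s = 3.0e-9     # coarse stage estimate (e.g., from the bit-level correlation)
fine_time_lag_s = 0.4e-9  # refinement from the fine stage overlap measurement
print(separation_distance_m(coarse_tof_s + fine_time_lag_s))  # approximately 0.51 m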
The ultrafine stage determination can use the Q channel of the receive pattern and the data stream to determine correlation values of different subsets of the data stream and, from those correlation values, determine a subset of interest and a time-of-flight, as described above. The times-of-flight from the I channel and Q channel can be combined (e.g., averaged) to calculate a time of flight and/or separation distance to the target. The correlation values calculated by the ultrafine stage determination can be used to calculate an additional time delay that can be added to the time delays from the coarse stage and/or the fine stage to determine a time of flight and/or separation distance to the target. Alternatively or additionally, the correlation values of the waveforms in the I channel and Q channel can be examined to resolve phases of the echoes in order to calculate separation distance or motion of the target. In another embodiment, another method (e.g., a method for measuring a separation distance to a target object) is provided. The method includes transmitting an electromagnetic first transmitted signal from a transmitting antenna toward a target object that is separated from the transmitting antenna by a separation distance. The first transmitted signal includes a first transmit pattern representative of a first sequence of digital bits. The method also includes receiving a first echo of the first transmitted signal that is reflected off the target object, converting the first echo into a first digitized echo signal, and comparing a first receive pattern representative of a second sequence of digital bits to the first digitized echo signal to determine a time of flight of the first transmitted signal and the echo. In another aspect, the method also includes calculating the separation distance to the target object based on the time of flight. In another aspect, the method also includes generating an oscillating signal and mixing at least a first portion of the oscillating signal with the first transmit pattern to form the first transmitted signal. In another aspect, converting the first echo into the first digitized echo signal includes mixing at least a second portion of the oscillating signal with an echo signal that is based on the first echo received off the target object. In another aspect, comparing the first receive pattern includes matching the sequence of digital bits of the first receive pattern to subsets of the first digitized echo signal to calculate correlation values for the subsets. The correlation values are representative of degrees of match between the sequence of digital bits in the first receive pattern and the subsets of the first digitized echo signal. In another aspect, at least one of the subsets of the digitized echo signal is identified as a subset of interest based on the correlation values. The time of flight can be determined based on a time delay between transmission of the transmitted signals and occurrence of the subset of interest. In another aspect, the method also includes transmitting an electromagnetic second transmitted signal toward the target object. The second transmitted signal includes a second transmit pattern representative of a second sequence of digital bits. 
The method also includes receiving a second echo of the second transmitted signal that is reflected off the target object, converting the second echo into a second baseband echo signal, and comparing a second receive pattern representative of a third sequence of digital bits to the second baseband echo signal to determine temporal misalignment between one or more waveforms of the second baseband echo signal and one or more waveforms of the second receive pattern. The temporal misalignment, which is representative of a time lag between the second receive pattern and the second baseband echo signal, is extracted, and the time lag is then calculated. In another aspect, the method also includes adding the time lag to the time of flight. In another aspect, converting the second echo into the second digitized echo signal includes forming an in-phase (I) channel of the second baseband echo signal and a quadrature (Q) channel of the second baseband echo signal. Comparing the second receive pattern includes comparing an I channel of the second receive pattern to the I channel of the second digitized echo signal to determine an I component of the temporal misalignment and comparing a Q channel of the second receive pattern to the Q channel of the second digitized echo signal to determine a Q component of the temporal misalignment. In another aspect, the time lag that is added to the time of flight includes the I component of the temporal misalignment and the Q component of the temporal misalignment. In another aspect, the method also includes resolving phases of the first echo and the second echo by examining the I component of the temporal misalignment and the Q component of the temporal misalignment, where the time of flight is calculated based on the phases that are resolved. In another aspect, at least two of the first transmit pattern, the first receive pattern, the second transmit pattern, or the second receive pattern differ from each other. In another aspect, at least two of the first transmit pattern, the first receive pattern, the second transmit pattern, or the second receive pattern include a common sequence of digital bits. In another embodiment, a system (e.g., a sensing system) is provided that includes a transmitter, a receiver, and a baseband processor. The transmitter is configured to generate an electromagnetic first transmitted signal that is communicated from a transmitting antenna toward a target object that is separated from the transmitting antenna by a separation distance. The first transmitted signal includes a first transmit pattern representative of a sequence of digital bits. The receiver is configured to generate a first digitized echo signal that is based on an echo of the first transmitted signal that is reflected off the target object. A correlator device is configured to compare a first receive pattern representative of a second sequence of digital bits to the first digitized echo signal to determine a time of flight of the first transmitted signal and the echo. In another aspect, the baseband processor is configured to calculate the separation distance to the target object based on the time of flight. In another aspect, the system also includes an oscillating device configured to generate an oscillating signal. The transmitter is configured to mix at least a first portion of the oscillating signal with the first transmit pattern to form the first transmitted signal.
In another aspect, the receiver is configured to receive at least a second portion of the oscillating signal and to mix the at least the second portion of the oscillating signal with an echo signal that is representative of the echo to create the first baseband echo signal. In another aspect, the baseband echo signal may be digitized into a first digitized echo signal and the correlator device is configured to compare the sequence of digital bits of the first receive pattern to subsets of the first digitized echo signal to calculate correlation values for the subsets. The correlation values are representative of degrees of match between the first receive pattern and the digital bits of the digitized echo signal. In another aspect, at least one of the subsets of the digitized echo signal is identified by the correlator device as a subset of interest based on the correlation values. The time of flight is determined based on a time delay between transmission of the first transmitted signal and occurrence of the subset of interest in the first digitized echo signal. In another aspect, the transmitter is configured to transmit an electromagnetic second transmitted signal toward the target object. The second transmitted signal includes a second transmit pattern representative of a second sequence of digital bits. The receiver is configured to create a second digitized echo signal based on a second echo of the second transmitted signal that is reflected off the target object. The baseband processor is configured to compare a second receive pattern representative of a third sequence of digital bits to the second digitized echo signal to determine temporal misalignment between one or more waveforms of the second digitized echo signal and one or more waveforms of the second receive pattern. The temporal misalignment is representative of a time lag between the second receive pattern and the second baseband echo signal that is added to the time of flight. In another aspect, the receiver is configured to form an in-phase (I) channel of the second digitized echo signal and a quadrature (Q) channel of the second digitized echo signal. The system can also include a baseband processing system configured to compare an I channel of the second receive pattern to the I channel of the second digitized echo signal to determine an I component of the temporal misalignment. The baseband processing system also is configured to compare a Q channel of the second receive pattern to the Q channel of the second digitized echo signal to determine a Q component of the temporal misalignment. In another aspect, the time lag that is added to the time of flight includes the I component of the temporal misalignment and the Q component of the temporal misalignment. In another aspect, the baseband processing system is configured to resolve phases of the first echo and the second echo based on the I component of the temporal misalignment and the Q component of the temporal misalignment. The time of flight is calculated based on the phases that are resolved. For example, the time of flight may be increased or decreased by a predetermined or designated amount based on an identified or measured difference in the phases that are resolved. In another embodiment, another method (e.g., for measuring a separation distance to a target object) is provided. 
The method includes transmitting a first transmitted signal having waveforms representative of a first transmit pattern of digital bits and generating a first digitized echo signal based on a first received echo of the first transmitted signal. The first digitized echo signal includes waveforms representative of a data stream of digital bits. The method also includes comparing a first receive pattern of digital bits to plural different subsets of the data stream of digital bits in the first digitized echo signal to identify a subset of interest that more closely indicates the presence and/or temporal location of the first receive pattern than one or more other subsets. The method further includes identifying a time of flight of the first transmitted signal and the first received echo based on a time delay between a start of the data stream in the first digitized echo signal and the subset of interest. In another aspect, the method also includes transmitting a second transmitted signal having waveforms representative of a second transmit pattern of digital bits and generating an in-phase (I) component of a second baseband echo signal and a quadrature (Q) component of the second baseband echo signal that is based on a second received echo of the second transmitted signal. The second baseband echo signal includes waveforms representative of a data stream of digital bits. The method also includes comparing a time-delayed second receive pattern of waveforms that are representative of a sequence of digital bits to the second baseband echo signal. The second receive pattern is delayed from a time of transmission of the second transmitted signal by the time delay of the subset of interest. An in-phase (I) component of the second receive pattern is compared to an I component of the second baseband echo signal to identify a first temporal misalignment between the second receive pattern and the second baseband echo signal. A quadrature (Q) component of the second receive pattern is compared to a Q component of the second baseband echo signal to identify a second temporal misalignment between the second receive pattern and the second baseband echo signal. The method also includes increasing the time of flight by the first and second temporal misalignments. In another aspect, the method also includes identifying motion of the target object based on changes in one or more of the first or second temporal misalignments. In another aspect, the first transmit pattern differs from the first receive pattern. It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, the above described embodiments (and/or aspects thereof) may be used in combination with each other. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the subject matter without departing from its scope. While the dimensions and types of materials described herein are intended to define the parameters of the subject matter, they are by no means limiting and are exemplary embodiments. Many other embodiments will be apparent to one of ordinary skill in the art upon reviewing the above description. The scope of the subject matter described herein should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements on their objects. Further, the limitations of the following claims are not written in means-plus-function format and are not intended to be interpreted based on 35 U.S.C. § 112, sixth paragraph, unless and until such claim limitations expressly use the phrase “means for” followed by a statement of function void of further structure. This written description uses examples to disclose several embodiments of the subject matter, including the best mode, and also to enable any person of ordinary skill in the art to practice the embodiments disclosed herein, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the subject matter is defined by the claims, and may include other examples that occur to one of ordinary skill in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal languages of the claims. The foregoing description of certain embodiments of the disclosed subject matter will be better understood when read in conjunction with the appended drawings. To the extent that the figures illustrate diagrams of the functional blocks of various embodiments, the functional blocks are not necessarily indicative of the division between hardware circuitry. Thus, for example, one or more of the functional blocks (for example, processors or memories) may be implemented in a single piece of hardware (for example, a general purpose signal processor, microcontroller, random access memory, hard disk, and the like). Similarly, the programs may be stand-alone programs, may be incorporated as subroutines in an operating system, may be functions in an installed software package, and the like. The various embodiments are not limited to the arrangements and instrumentality shown in the drawings. As used herein, an element or step recited in the singular and proceeded with the word “a” or “an” should be understood as not excluding plural of said elements or steps, unless such exclusion is explicitly stated. Furthermore, references to “one embodiment” of the present subject matter are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. Moreover, unless explicitly stated to the contrary, embodiments “comprising,” “including,” or “having” an element or a plurality of elements having a particular property may include additional such elements not having that property. Since certain changes may be made in the above-described systems and methods, without departing from the spirit and scope of the subject matter herein involved, it is intended that all of the subject matter of the above description or shown in the accompanying drawings shall be interpreted merely as examples illustrating the concepts herein and shall not be construed as limiting the disclosed subject matter. 
Various embodiments of the present disclosure may be implemented in a data processing system suitable for storing and/or executing program code that includes at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements include, for instance, local memory employed during actual execution of the program code, bulk storage, and cache memory which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution. I/O devices (including, but not limited to, keyboards, displays, pointing devices, DASD, tape, CDs, DVDs, thumb drives and other memory media, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the available types of network adapters. The present disclosure may be embodied in a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device. 
Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. A code segment or machine-executable instructions may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, among others. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure. Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions. The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure. 
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions. Words such as “then,” “next,” etc. are not intended to limit the order of the steps; these words are simply used to guide the reader through the description of the methods. Although process flow diagrams may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, its termination may correspond to a return of the function to the calling function or the main function. Features or functionality described with respect to certain example embodiments may be combined and sub-combined in and/or with various other example embodiments. Also, different aspects and/or elements of example embodiments, as disclosed herein, may be combined and sub-combined in a similar manner as well. Further, some example embodiments, whether individually and/or collectively, may be components of a larger system, wherein other procedures may take precedence over and/or otherwise modify their application. Additionally, a number of steps may be required before, after, and/or concurrently with example embodiments, as disclosed herein. Note that any and/or all methods and/or processes, at least as disclosed herein, can be at least partially performed via at least one entity or actor in any manner. Although preferred embodiments have been depicted and described in detail herein, skilled artisans know that various modifications, additions, substitutions and the like can be made without departing from the spirit of this disclosure. As such, these are considered to be within the scope of the disclosure, as defined in the following claims.
DETAILED DESCRIPTION Virtual reality (VR) systems allow a user to interact with an environment that is generally unavailable to the user in the physical world. One goal of VR systems is to provide the user with an experience that is a close analog to the user's experience and interaction with the physical world. To achieve a convincing experience, the VR system generally visually isolates the user from the physical environment by blocking the user's vision with respect to the physical environment. Some systems may further enhance the experience of the virtual world by isolating the user's hearing to the virtual world. Thus, the user's experience in VR may be limited to only the video and audio signals generated by the VR system. While such visual and auditory isolation may enhance the user's experience in the virtual world, the user remains subject to forces present in the physical world. Because the user of a VR system is sensorially isolated from the physical environment, the user may be vulnerable, or feel vulnerable, to conditions present in the physical environment. For example, objects that move in the physical environment proximate the VR system user will generally be invisible to the user and a collision between the object and the user may occur. Such objects may include other users of the VR system, human beings or animals generally, and other objects subject to change of proximity to the user caused by movement of the user or movement of the object. The VR system disclosed herein monitors the physical environment in which a VR system user operates to identify objects with which the user may collide. The VR system includes a plurality of monitoring stations that capture video images of the physical environment. The monitoring stations may be disposed at the periphery of the VR system operational environment to capture images of the environment from different angles. The images captured by the monitoring stations may be provided to an image processing device, such as a computer, that identifies objects in the captured images that present potential collision hazards for the VR system user. On identification of an object that presents a potential collision hazard, the image processing device may transmit information to the VR headset of the user that informs the user of the presence of the identified object. For example, on receipt of the information from the image processing system, the VR headset may display a representation of the object at a location relative to the user that approximates the location of the object relative to the user in the physical environment. Accordingly, the VR system disclosed herein may apprise the user of the VR system of the presence of an object in the physical environment and allow the user to avoid a collision with the object. FIG.1shows a VR system100with object detection in accordance with various examples. The VR system100includes monitoring stations102-1and102-2(collectively referred to as monitoring stations102), a VR headset108, and an image processing device106. While two monitoring stations102are shown in the VR system100, some examples of the VR system100may include more or fewer than two monitoring stations102. Similarly, althoughFIG.1depicts the image processing device106as separate from the VR headset108and the monitoring stations102, in some implementations of the VR system100the image processing device106may be a component of or integrated with the VR headset108or one of the monitoring stations102.
Each of the monitoring stations102includes a distinct image sensor. For example, monitoring station102-1may include an imaging sensor104-1. Monitoring station102-2may include an imaging sensor104-2. Imaging sensors104-1and104-2may be collectively referred to as imaging sensor104. The image sensor104may be an infra-red (IR) depth sensing camera, a camera that captures luma and/or chroma information, or other image sensor suitable for capturing images of objects in a VR operating environment. InFIG.1, monitoring stations102are disposed to capture images of the VR operating environment114. The VR operating environment114is the physical area in which a user of the VR system100, i.e., a user of the VR headset108, interacts with the virtual world presented to the user via the VR headset108. For example, the VR headset108includes video display technology that displays images of the virtual world to the user of the VR headset108. The monitoring stations102may be disposed at different locations in the VR operating environment114to capture images of the VR operating environment114from different angles. The image processing device106is communicatively coupled to the monitoring stations102. For example, the image processing device106may be communicatively coupled to the monitoring stations102via a wired or wireless communication link, such as an IEEE 802.11 wireless network, an IEEE 802.3 wired network, or any other wired or wireless communication technology that allows the images of the VR operating environment114captured by the monitoring stations102to be transferred from the monitoring stations102to the image processing device106. The spatial relationship of the monitoring stations102may be known to the image processing device106. For example, the relative angle(s) of the optical center lines of the image sensors104of the monitoring stations102may be known to the image processing device106. That is, the monitoring stations102may be disposed to view the VR operating environment114at a predetermined relative angle, or the angle at which the monitoring stations102are disposed relative to one another may be communicated to the image processing device106. The image processing device106processes the images captured by the monitoring stations102to identify objects in the VR operating environment114. The image processing device106may apply motion detection to detect objects in the VR operating environment114that may collide with the user of the VR headset108. For example, the image processing device106may compare the time sequential images (e.g., video frames) of the VR operating environment114received from each monitoring station102to identify image to image changes that indicate movement of an object. InFIG.1, a person110is moving in the VR operating environment114. The person110is representative of any object that may be moving in the VR operating environment114. The monitoring stations102capture images of the VR operating environment114, and transmit the images to the image processing device106. For each monitoring station102, the image processing device106compares the images received from the monitoring station102and identifies differences in the images as motion of an object. For example, movement of the person110will cause differences in the images captured by the monitoring stations102, and the image processing device106will identify the image to image difference as movement of the person110.
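As a non-limiting illustration of the image-to-image comparison described above, the following sketch flags motion when a sufficient number of pixels change between two frames. The pixel threshold, minimum changed-pixel count, and frame contents are assumed values used only for the example; they are not part of the disclosed figures.

import numpy as np

def detect_motion(prev_frame, curr_frame, pixel_threshold=25, min_changed_pixels=500):
    # Count the pixels whose grayscale value changed by more than the threshold.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.count_nonzero(diff > pixel_threshold)
    return changed >= min_changed_pixels

# Toy frames: a bright 40x40 patch appears between frames, representing a moving object.
prev_frame = np.zeros((480, 640), dtype=np.uint8)
curr_frame = np.zeros((480, 640), dtype=np.uint8)
curr_frame[100:140, 200:240] = 200

print(detect_motion(prev_frame, curr_frame))  # True: the frame-to-frame difference indicates motion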
When the image processing device106detects an object in the VR operating environment, the image processing device106communicates to the VR headset108information regarding the detected object. The image processing device106is communicatively connected to the VR headset108. For example, the image processing device106may be communicatively coupled to the VR headset108via a wired or wireless communication link, such as an IEEE 802.11 wireless network, an IEEE 802.3 wired network, or any other wired or wireless communication technology that allows information to be transferred from the image processing device106to the VR headset108. The image processing device106may determine the location (in three-dimensional space) of an object detected as moving (and pixels representing points of the object) in the images captured by each of the monitoring stations102at a given time. Using the relative locations of the monitoring stations102known by the image processing device106, the image processing device106can determine the location of the detected object (e.g., determine the location of a point of the object identified in images captured by different monitoring stations102) by applying triangulation. That is, with the VR headset108having a first location, monitoring station102-1having a second location and first angle, and monitoring station102-2having a third location and second angle, the image processing device106may determine a fourth location to be the location of a detected object in three-dimensional space based on the first location, the second location, the third location, the first angle, and the second angle by triangulating a common point found in multiple two-dimensional images captured by different monitoring stations102(e.g., monitoring stations102-1and102-2). The image processing device106may transmit location information for a detected object to the VR headset108. The image processing device106may also determine the identity of an object detected in the VR operating environment114. For example, on detection of an object in the VR operating environment114, the image processing device106may apply a Viola-Jones object detection framework to identify the object based on the features of the object captured in an image. The image processing device106may transmit identity information (e.g., person, animal, etc.) for a detected object to the VR headset108. The VR headset108displays the information provided by the image processing device106for communication to a user. If the image processing device106renders the video displayed by the VR headset108, then the image processing device106may include the information identifying a detected object in the video frames transmitted to the VR headset108. If the VR headset108itself produces the video data displayed by the VR headset108, then the VR headset108may receive the information about the detected object from the image processing device106and integrate the information in the video generated by the VR headset108. Having been made aware of the presence of a detected object, and optionally the object's location and/or identity, by the information presented via the VR headset108, the user will not be surprised by the presence of the detected object in the VR operating environment and can avoid collision with the detected object. FIG.2shows additional detail of the VR system100in accordance with various examples. The VR system100may also include one or more controllers112.
The controller112is an input device that the user of the VR headset108manipulates to interact with the virtual world. The controller112may be, for example, a handheld device that the user operates to digitally interact with virtual objects in the virtual world. To identify objects in the VR operating environment114that may interfere with the user of the VR headset108, the image processing device106identifies an area about the VR headset108and the controller112as corresponding to the user. For example, the image processing device106may define an area116extending from above the VR headset108to the floor and to a maximum or predetermined extension of the controller112about a central axis corresponding to the user as representing the area occupied by the user of the VR system. The image processing device106may disregard motion detected within the area116when detecting objects in the VR operating environment114. Thus, the image processing device106may detect motion outside the area116as corresponding to an object that may be of interest to the user of the VR system100while ignoring motion inside the area116with respect to object detection. Some implementations of the image processing device106may dynamically map the body and limbs of a user of the VR headset108to define an area within which motion is disregarded. For example, the image processing device106may determine the location of the controller(s)112knowing that the distal end of the user's arm is attached to a controller112. The VR headset108identifies the location of the user's body, and the user's legs are known to be connected to the body. Using this information, the image processing device106may determine a dynamic amorphous boundary that defines an area within which motion is disregarded. FIG.3shows a block diagram of a monitoring station102for object detection in a VR system in accordance with various examples. The monitoring station102includes an image sensor304and a transmitter302. The image sensor304may be a red-green-blue light sensor of any resolution suitable for detection of movement in the VR operating environment114. In some implementations of the monitoring station102, the image sensor304may be an IR depth sensor that includes an IR projector and an IR camera. In the IR depth sensor, the IR projector may project a pattern of IR points and the IR camera may capture images of the points reflected from objects in the VR operating environment that can be processed to determine distance. The image sensor304is communicatively coupled to the transmitter302. Images captured by the image sensor304are transferred to the transmitter302, which transmits the images to the image processing device106. The transmitter302may include circuitry to transmit the images via wired or wireless media. The monitoring station102may include additional components that have been omitted fromFIG.3in the interest of clarity. For example, the monitoring station102may include a processor coupled to the image sensor304and the transmitter302, where the processor provides control and image transfer functionality in the monitoring station102. FIG.4shows a block diagram of a monitoring station102for object detection in a VR system in accordance with various examples. The monitoring station102ofFIG.4is similar to that shown inFIG.3and further includes headset and controller transducers402.
The headset and controller transducers402may generate optical timing signals that allow the location and orientation of the VR headset108and each VR controller112to be determined with sufficient specificity to facilitate accurate determination of the position of the VR headset108and the VR controller112in the VR operating environment114. The headset and controller transducers402may include IR emitters that generate timed signals (e.g., a reference pulse and swept plane) for reception by sensors on the VR headset108and the VR controller112. The location and orientation of the VR headset108and controller112can be determined based on which sensors of the VR headset108and controller112detect signals generated by the transducers402and relative timing of pulse and sweep detection. FIG.5shows a block diagram of an image processing device106for object detection in a VR system in accordance with various examples. The image processing device106includes a transceiver502, a processor504, and storage506. The image processing device106may also include various components and systems that have been omitted fromFIG.5in the interest of clarity. For example, the image processing device106may include display systems, user interfaces, etc. The image processing device106may be implemented in a computer as known in the art. For example, the image processing device106may be implemented using a desktop computer, a notebook computer, rack-mounted computer, a tablet computer or any other computing device suitable for performing the image processing operations described herein for detection of objects in the VR operating environment114. The transceiver502communicatively couples the image processing device106to the monitoring stations102and the VR headset108. For example, the transceiver502may include a network adapter that connects the image processing device106to a wireless or wired network that provides communication between the monitoring stations102and the image processing device106. Such a network may be based on any of a variety of networking standards (e.g., IEEE 802) or be proprietary to communication between devices of the VR system100. The processor504is coupled to the transceiver502. The processor504may include a general-purpose microprocessor, a digital signal processor, a microcontroller, a graphics processor, or other device capable of executing instructions retrieved from a computer-readable storage medium. Processor architectures generally include execution units (e.g., fixed point, floating point, integer, etc.), storage (e.g., registers, memory, etc.), instruction decoding, instruction and data fetching logic, peripherals (e.g., interrupt controllers, timers, direct memory access controllers, etc.), input/output systems (e.g., serial ports, parallel ports, etc.) and various other components and sub-systems. The storage506is a computer-readable medium that stores instructions and data for access and use by the processor504. The storage506may include any of volatile storage such as random access memory, non-volatile storage (e.g., a hard drive, an optical storage device (e.g., CD or DVD), FLASH storage, read-only-memory), or combinations thereof. The storage506includes object detection508, object identification512, object location514, object prioritization516, and images518. The images518include images of the VR operating environment114captured by the monitoring stations102and transferred to the image processing device106via the transceiver502. 
The object detection508, object identification512, object location514, and object prioritization516include instructions that are executed by the processor504to process the images518. The object detection508includes instructions that are executable by the processor504to detect objects in the VR operating environment114. The object detection508may include motion detection510. The motion detection510includes instructions that are executable by the processor504to detect motion in the VR operating environment114. The motion detection510may detect motion by identifying image-to-image changes in the images518, by identifying a difference between a reference image and a given one of the images518, or by other motion detection processing. In some implementations, the object detection508may ignore motion detected within the area116corresponding to a user of the VR headset108. The object detection508may identify points associated with a detected object using edge detection. The presence of a detected object may be communicated to the VR headset108by the image processing device106. The object location514includes instructions that are executable by the processor504to determine the three-dimensional location of an object detected by the object detection508. For example, if the object detection508identifies an object in images518captured by different monitoring stations102, then the object location514can determine the location of the object in the three-dimensional VR operating environment114by triangulation based on the known locations of the monitoring stations102and the known angles between the optical centerlines of the image sensors304of the different monitoring stations102. The location of the object may be communicated to the VR headset108by the image processing device106. The object identification512includes instructions that are executable by the processor504to determine the identity of an object detected by the object detection508. For example, the object identification512may include a Viola-Jones Framework trained to identify a variety of objects that may be present in the VR operating environment114, such as people and pets. The pixels corresponding to the detected object may be processed by the Viola-Jones Framework to identify the object. The identity of the object may be communicated to the VR headset108by the image processing device106. The object prioritization516includes instructions that are executable by the processor504to prioritize the object-related information provided to the VR headset108by the image processing device106. Prioritization may be based on object size, distance between the object and the VR headset108, object elevation, and/or other factors indicative of collision between the detected object and a user of the VR headset108. In some implementations, the object prioritization516may prioritize a plurality of detected objects according to a risk of collision between the object and the user of the VR headset108and communicate information concerning the objects to the VR headset108in order of highest determined collision risk. The storage506may include additional logic that has been omitted fromFIG.5in the interest of clarity. For example, the storage506may include VR headset communication that includes instructions that are executable by the processor504to transfer information concerning a detected object (e.g., object presence, location, and/or identity) to the VR headset108.
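The triangulation performed by the object location514can be illustrated with a short sketch. The following Python example is a simplified, two-dimensional illustration only, not the claimed implementation: it assumes each monitoring station reports a known position and a bearing angle toward the detected object, and it intersects the two bearing rays; the function name and tolerance are hypothetical.

```python
import numpy as np

def triangulate_2d(p1, angle1, p2, angle2):
    """Intersect two bearing rays to estimate an object's position.

    p1, p2   -- (x, y) positions of two monitoring stations
    angle1/2 -- bearing angles (radians) from each station to the object
    Returns the (x, y) intersection point, or None if the rays are parallel.
    """
    p1 = np.asarray(p1, dtype=float)
    p2 = np.asarray(p2, dtype=float)
    d1 = np.array([np.cos(angle1), np.sin(angle1)])  # unit direction from station 1
    d2 = np.array([np.cos(angle2), np.sin(angle2)])  # unit direction from station 2

    # Solve p1 + t1*d1 == p2 + t2*d2 for t1 and t2.
    A = np.column_stack((d1, -d2))
    if abs(np.linalg.det(A)) < 1e-9:   # rays are (nearly) parallel
        return None
    t1, _ = np.linalg.solve(A, p2 - p1)
    return p1 + t1 * d1

# Example: two stations at known positions, both sighting the same object.
print(triangulate_2d((0.0, 0.0), np.deg2rad(45), (4.0, 0.0), np.deg2rad(135)))
# -> [2. 2.]
```

A full implementation along the lines described above would extend this intersection to three dimensions and apply it to a common point identified in time-coincident images from different monitoring stations102.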
In some implementations, the image processing device106may produce the video frames displayed by the VR headset108. In such implementations, the image processing device106may include video rendering instructions that are executable by the processor504to generate the video frames displayed by the VR headset108, and may include in the video frames information concerning a detected object. FIGS.6-8show display of object detection information in the VR headset108in accordance with various examples. The displays ofFIGS.6-8may be stereoscopic in practice, but are shown as monoscopic inFIGS.6-8to promote clarity. InFIG.6, the VR headset108displays an object detection warning as a text field602. The text field602may specify the location and identity of the object in addition to the presence of the object. InFIG.7, the VR headset108displays an object detection warning as an outline702or silhouette of a detected object. The outline702of the object may correspond to the identity of the object determined by the object identification512, and the location of the object in the display of the VR headset108may correspond to the location of the object determined by the object location514. InFIG.8, the VR headset108displays a warning symbol802indicating that an object has been detected and is to the left of the viewing field of the VR headset108. Some implementations may combine the object information displays ofFIGS.6-8. For example, an object outside of the viewing area of the VR headset108may be indicated as perFIG.8, and as the VR headset108is turned towards the object an outline702or silhouette of the object may be displayed as perFIG.7. Similarly, text display602may be combined with either the warning symbol802or the object outline702to provide additional information to the user of the VR headset108. FIG.9shows a flow diagram for a method900for object detection in a VR system in accordance with various examples. Though depicted sequentially as a matter of convenience, at least some of the actions shown can be performed in a different order and/or performed in parallel. Additionally, some implementations may perform only some of the actions shown. In some implementations, at least some of the operations of the method900can be implemented as instructions stored in a storage device and executed by one or more processors. In block902, the monitoring stations102capture images of the VR operating environment114. The monitoring stations102are disposed at different locations along the perimeter of the VR operating environment114. The locations of the monitoring stations102and the angles of intersection of the optical centerlines of the image sensors104of the monitoring stations102may be known to the image processing device106. The images captured by the monitoring stations102may include color information (chroma) and brightness information (luma), or may include only luma. The monitoring stations may capture images as a video stream. For example, the monitoring station102may capture images at a rate of 10, 20, 30, etc. images per second. The monitoring stations102transfer the captured images to the image processing device106for use in object detection. In block904, the image processing device106processes the images captured by the monitoring stations102to detect objects in the VR operating environment114. The processing may include application of motion detection to identify an object in the VR operating environment114. 
For example, the image processing device106may detect motion by comparing two images and identifying differences between the two images. The two images may be images captured by the same monitoring station102(e.g., monitoring station102-1) at different times. One of the images may be a reference image. Further, as explained with regard toFIG.2, the image processing device106may identify the VR headset108and/or a VR controller112in the images and ignore differences in the images (i.e., ignore motion) in a predetermined area116about the VR headset108and/or a VR controller112as attributable to movement of a user of the VR headset108. In block906, the image processing device106has detected an object in the VR operating environment114and transmits information concerning the detected object to the VR headset108. The information may include a notification of the object's presence in the VR operating environment114. The information may be embedded in a video stream provided by the image processing device106for display by the VR headset108, or provided separately from any video information transferred to the VR headset108. The information may be transferred via a wired or wireless communication channel provided between the image processing device106and the VR headset108. The communication channel may include a wired or wireless network in accordance with various networking standards. In block908, the VR headset108displays the information received from the image processing device106concerning the detected object. The information may be displayed in conjunction with a VR scene displayed by the VR headset108. FIG.10shows a flow diagram for a method1000for object detection in a VR system in accordance with various examples. Though depicted sequentially as a matter of convenience, at least some of the actions shown can be performed in a different order and/or performed in parallel. Additionally, some implementations may perform only some of the actions shown. In some implementations, at least some of the operations of the method1000can be implemented as instructions stored in a storage device and executed by one or more processors. In block1002, the monitoring stations102capture images of the VR operating environment114. The monitoring stations102are disposed at different locations at the perimeter of the VR operating environment114. The locations of the monitoring stations102and the angles of intersection of the optical centerlines of the image sensors104of the monitoring stations102may be known to the image processing device106. The images captured by the monitoring stations102may include color information (chroma) and brightness information (luma), or may include only luma. The monitoring stations may capture images as a video stream. For example, the monitoring station102may capture images at a rate of 10, 20, 30, etc. images per second. The monitoring stations102transfer the captured images to the image processing device106for use in object detection. In block1004, the image processing device106processes the images captured by the monitoring stations102to detect objects in the VR operating environment114. The processing may include application of motion detection to identify an object in the VR operating environment114. For example, the image processing device106may detect motion by comparing two images and identifying differences between the two images. The two images may be images captured by the same monitoring station102at different times. One of the images may be a reference image.
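As a rough illustration of the frame-differencing motion detection just described, and of disregarding motion inside a user area such as the area116, the following Python sketch is offered under stated assumptions (grayscale frames available as NumPy arrays; the thresholds and helper names are hypothetical, not values taken from this description).

```python
import numpy as np

def detect_motion(frame, reference, pixel_thresh=25, min_changed=500):
    """Flag motion by differencing a frame against a reference image.

    frame, reference -- grayscale images as uint8 numpy arrays of equal shape
    pixel_thresh     -- per-pixel intensity change treated as "different"
    min_changed      -- number of changed pixels required to report motion
    Returns (motion_detected, changed_mask).
    """
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    changed = diff > pixel_thresh          # per-pixel change mask
    return changed.sum() >= min_changed, changed

def mask_user_area(changed_mask, area_bbox):
    """Zero out changes inside the user's own area (e.g., an area like area116)."""
    x0, y0, x1, y1 = area_bbox             # hypothetical rectangular approximation
    changed_mask = changed_mask.copy()
    changed_mask[y0:y1, x0:x1] = False
    return changed_mask
```

In practice the user area need not be rectangular; the rectangle here merely stands in for whatever boundary the image processing device derives around the headset and controller.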
The image processing device106may identify the VR headset108and/or a VR controller112in the images and ignore differences in the images (i.e., ignore motion) in a predetermined area116about the VR headset108and/or a VR controller112as attributable to movement of a user of the VR headset108. In block1006, the image processing device106processes the images captured by the monitoring stations102to determine the location of any detected objects in the VR operating environment114. The location determination processing may include identifying the detected object in time-coincident images captured by different monitoring stations102, and applying triangulation to points of the detected object to locate the object in three dimensions. In block1008, the image processing device106processes the images captured by the monitoring stations102to determine the identity of any objects detected in the VR operating environment114. The identity determination processing may include extracting and/or isolating an image of a detected object from an image captured by a monitoring station102, and providing the image of the detected object to a Viola-Jones Framework. The Viola-Jones Framework may extract features from the image and apply one or more classifiers to identify the object. Various other object identification algorithms may be used in some implementations of the identity determination processing. In block1010, the image processing device106prioritizes detected objects for presentation to the VR headset108. Prioritization may be based on object size, distance between the object and the VR headset108, object elevation, rate of movement towards the user, and/or other factors indicative of collision between the detected object and a user of the VR headset108. Some implementations of the object prioritization processing may prioritize a plurality of detected objects according to a risk of collision between the object and the user of the VR headset108and communicate information concerning the objects to the VR headset108in order of highest determined collision risk. In block1012, the image processing device106has detected an object in the VR operating environment114and transmits information concerning the detected object to the VR headset108. The information may include one or more notifications of the objects' presence in the VR operating environment114, the location of the detected objects in the VR operating environment114, and the identity of the detected objects. The information may be embedded in a video stream provided by the image processing device106for display by the VR headset108, or provided separately from any video information transferred to the VR headset108. The information may be transferred via a wired or wireless communication channel provided between the image processing device106and the VR headset108. The communication channel may include a wired or wireless network in accordance with various networking standards. In block1014, the VR headset108displays the information received from the image processing device106concerning the detected object to allow a user of the VR headset108to become aware of the detected object, and potentially avoid a collision with the detected object. The information may be displayed in conjunction with a VR scene displayed by the VR headset108. The above discussion is meant to be illustrative of the principles and various examples of the present invention.
Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications. | 31,446 |
11861903 | DETAILED DESCRIPTION The terms “brand exposure” and “exposures to brand identifiers” as used herein refer to the presentation of one or more brand identifiers in media content delivered by a media content stream, thereby providing an opportunity for an observer of the media content to become exposed to the brand identifier(s) (e.g., logo(s)). As used herein, a brand exposure does not require that the observer actually observe the brand identifier in the media content, but instead indicates that the observer had an opportunity to observe the brand identifier, regardless of whether the observer actually did so. Brand exposures may be tabulated and/or recorded to determine the effectiveness of intentional or unintentional product placement, display placement or advertising placement. In the description that follows, a broadcast of a baseball game is used as an example of a media stream that may be processed according to the methods and/or apparatus described herein to determine brand exposure. It will be appreciated that the example baseball game broadcast is merely illustrative and the methods and apparatus disclosed herein are readily applicable to processing media streams to determine brand exposure associated with any type of media content. For example, the media content may correspond to any type of sporting event, including a baseball game, as well as any television program, movie, streaming video content, video game presentation, etc. FIG.1is a schematic illustration of an example system to measure brand exposures in media streams. The example system ofFIG.1utilizes one or more media measurement techniques, such as, for example, audio codes, audio signatures, video codes, video signatures, image codes, image signatures, etc., to identify brand exposures in presented media content (e.g., such as content currently being broadcast or previously recorded content) provided by one or more media streams. In an example implementation, image signatures corresponding to one or more portions of a media stream are compared with a database of reference image signatures that represent corresponding portions of reference media content to facilitate identification of one or more scenes broadcast in the media stream and/or one or more brand identifiers included in the broadcast scene(s). To process (e.g., receive, play, view, record, decode, etc.) and present any number and/or type(s) of content, the example system ofFIG.1includes any number and/or type(s) of media device(s)105. The media device(s)105may be implemented by, for example, a set top box (STB), a digital video recorder (DVR), a video cassette recorder (VCR), a personal computer (PC), a game console, a television, a media player, etc., or any combination thereof. Example media content includes, but is not limited to, television (TV) programs, movies, videos, websites, commercials/advertisements, audio, games, etc. In the example system ofFIG.1, the example media device105receives content via any number and/or type(s) of sources such as, for example: a satellite receiver and/or antenna110, a radio frequency (RF) input signal115corresponding to any number and/or type(s) of cable TV signal(s) and/or terrestrial broadcast(s), any number and/or type(s) of data communication networks such as the Internet120, any number and/or type(s) of data and/or media store(s)125such as, for example, a hard disk drive (HDD), a VCR cassette, a digital versatile disc (DVD), a compact disc (CD), a flash memory device, etc. 
In the example system ofFIG.1, the media content (regardless of its source) may include for example, video data, audio data, image data, website data, etc. To generate the content for processing and presentation by the example media device(s)105, the example system ofFIG.1includes any number and/or type(s) of content provider(s)130such as, for example, television stations, satellite broadcasters, movie studios, website providers, etc. In the illustrated example ofFIG.1, the content provider(s)130deliver and/or otherwise provide the content to the example media device105via any or all of a satellite broadcast using a satellite transmitter135and a satellite and/or satellite relay140, a terrestrial broadcast received via the RF input signal115, a cable TV broadcast received via the RF input signal115, the Internet120, and/or the media store(s)125. To measure brand exposure (i.e., exposures to brand identifiers) in media stream(s) processed and presented by the example media device(s)105, the example system ofFIG.1includes at least one brand exposure monitor, one of which is illustrated at reference number150ofFIG.1. The example brand exposure monitor150ofFIG.1processes a media stream160output by the example media device105to identify at least one brand identifier being presented via the media device105. In general, the example brand exposure monitor150operates to identify brand identifiers and report brand exposure(s) automatically using known or previously learned information when possible, and then defaults to requesting manual user input when such automatic identification is not possible. At a high-level, the example brand exposure monitor150achieves this combination of automatic and manual brand exposure processing by first dividing the media stream160into a group of successive detected scenes, each including a corresponding group of successive image frames. The example brand exposure monitor150then excludes any scenes known not to include any brand identifier information. Next, the example brand exposure monitor150compares each non-excluded detected scene to a library of reference scenes to determine whether brand exposure monitoring may be performed automatically. For example, automatic brand exposure monitoring is possible if the detected scene matches information stored in the reference library which corresponds to a repeated scene of interest or a known scene of no interest. However, if the detected scene does not match (or fully match) information in the reference library, automatic brand exposure monitoring is not possible and the example brand exposure monitor150resorts to manual user intervention to identify some or all of the brand identifier(s) included in the detected scene for brand exposure reporting. Examining the operation of the example brand exposure monitor150ofFIG.1in greater detail, the example brand exposure monitor150determines (e.g., collects, computes, extracts, detects, recognizes, etc.) content identification information (e.g., such as at least one audio code, audio signature, video code, video signature, image code, image signature, etc.) to divide the media stream160into a group of successive scenes. For example, the brand exposure monitor150may detect a scene of the media stream160as corresponding to a sequence of adjacent video frames (i.e., image frames) having substantially similar characteristics such as, for example, a sequence of frames corresponding to substantially the same camera parameters (e.g., angle, height, aperture, focus length, etc.) 
and having a background that is statistically stationary (e.g., the background may have individual components that move, but the overall background on average appears relatively stationary). The brand exposure monitor150of the illustrated example utilizes scene change detection to mark the beginning image frame and the ending image frame corresponding to a scene. In an example implementation, the brand exposure monitor150performs scene change detection by creating an image signature for each frame of the media stream160(possibly after subsampling) and then comparing the image signatures of a sequence of frames to determine when a scene change occurs. For example, the brand exposure monitor150may compare the image signature corresponding to the starting image of a scene to the image signatures for one or more successive image frames following the starting frame. If the image signature for the starting frame does not differ significantly from a successive frame's image signature, the successive frame is determined to be part of the same scene as the starting frame. However, if the image signatures are found to differ significantly, the successive frame that differs is determined to be the start of a new scene and becomes the first frame for that new scene. Using the example of a media stream160providing a broadcast of a baseball game, a scene change occurs when, for example, the video switches from a picture of a batter to a picture of the outfield after the batter successfully hits the ball. Next, after the scene is detected, the brand exposure monitor150of the illustrated example determines at least one key frame and key image signature representative of the scene. For example, the key frame(s) and key image signature(s) for the scene may be chosen to be the frame and signature corresponding to the first frame in the scene, the last frame in the scene, the midpoint frame in the scene, etc. In another example, the key frame(s) and key image signature(s) may be determined to be an average and/or some other statistical combination of the frames and/or signatures corresponding to the detected scene. To reduce processing requirements, the brand exposure monitor150may exclude a detected scene under circumstances where it is likely the scene will not contain any brand identifiers (e.g., logos). In an example implementation, the brand exposure monitor150is configured to use domain knowledge corresponding to the particular type of media content being processed to determine when a scene exhibits characteristics indicating that the scene will not contain any brand identifiers. For example, in the context of media content corresponding to the broadcast of a baseball game, a scene including a background depicting only the turf of the baseball field may be known not to contain any brand identifiers. In such a case, the brand exposure monitor150may exclude a scene from brand exposure monitoring if the scene exhibits characteristics of a scene depicting the turf of the baseball field (e.g., such as a scene having a majority of pixels that are predominantly greenish in color and distributed such that, for example, the top and bottom areas of the scene include regions of greenish pixels grouped together). If a detected scene is excluded, the brand exposure monitor150of the illustrated example reports the excluded scene and then continues processing to detect the next scene of the media stream160.
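The turf-exclusion heuristic described above might be sketched as follows. This Python example is illustrative only: it assumes the scene's key frame is available as an RGB array, and the green margin and majority fraction are hypothetical tuning values rather than values taken from this description.

```python
import numpy as np

def is_excluded_turf_scene(key_frame, green_margin=20, majority=0.6):
    """Heuristically exclude a scene whose key frame is mostly turf.

    key_frame    -- H x W x 3 RGB image (uint8 numpy array)
    green_margin -- how much the green channel must exceed red and blue
    majority     -- fraction of greenish pixels needed to exclude the scene
    Returns True if the scene should be skipped for brand monitoring.
    """
    rgb = key_frame.astype(np.int16)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    greenish = (g > r + green_margin) & (g > b + green_margin)
    return greenish.mean() >= majority
```

A fuller version could additionally test how the greenish pixels are distributed (e.g., grouped in the top and bottom areas of the frame), as noted above.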
Assuming that a detected scene is not excluded, the example brand exposure monitor150then compares the image signature for the detected scene with one or more databases (not shown) of reference signatures representative of previously learned and/or known scenes to determine whether the current detected scene is a known scene or a new scene. If the current scene matches a previously learned and/or known scene stored in the database(s), the brand exposure monitor150obtains status information for the scene from the database(s). If the status information indicates that the scene had been previously marked as a scene of no interest in the database(s) (e.g., such as a scene known not to include any brand identifiers (e.g., logos)), the scene is reported as a scene of no interest and may be included in the database(s) as learned information to be used to identify future scenes of no interest. For example, and as discussed in greater detail below, a scene may be marked as a scene of no interest if it is determined that no brand identifiers (e.g., logos) are visible in the scene. The brand exposure monitor150of the illustrated example then continues processing to detect the next scene of the media stream160. If, however, the current scene is indicated to be a scene of interest, the brand exposure monitor150then determines one or more expected regions of interest residing within the current scene that may contain a brand identifier (e.g., logo), as discussed in greater detail below. The brand exposure monitor150then verifies the expected region(s) of interest with one or more databases (not shown) storing information representative of reference (e.g., previously learned and/or known) brand identifiers (e.g., logos). If all of the expected region(s) of interest are verified to include corresponding expected brand identifier(s), the example brand exposure monitor150reports exposure to matching brand identifiers. However, if the current scene does not match any reference (e.g., previously learned and/or known) scene, and/or at least one region of interest does not match one or more reference (e.g., previously learned and/or known) brand identifiers, the brand exposure monitor150initiates a graphical user interface (GUI) session at the GUI152. The GUI152is configured to display the current scene and prompt the user170to provide an identification of the scene and/or the brand identifiers included in the region(s) of interest. For each brand identifier recognized automatically or via information input by the user170via the GUI152, corresponding data and/or reports are stored in an example brand exposure database155for subsequent processing. After the scene and/or brand identifier(s) have been identified by the user170via the GUI152, the current scene and/or brand identifier(s) are stored in their respective database(s). In this way, the current scene and/or brand identifier(s), along with any corresponding descriptive information, are learned by the brand exposure monitor150and can be used to detect future instances of the scene and/or brand identifier(s) in the media stream160without further utilizing the output device and/or GUI152. An example manner of implementing the example brand exposure monitor150ofFIG.1is described below in connection withFIG.2. During scene and/or brand identifier recognition, the brand exposure monitor150may also present any corresponding audio content to the user170to further enable identification of any brand audio mention(s). 
Upon detection of an audio mention of a brand, the user170may so indicate the audio mention to the brand exposure monitor150by, for example, clicking on an icon on the GUI152, inputting descriptive information for the brand identifier (e.g., logo), etc. Furthermore, key words from closed captioning, screen overlays, etc., may be captured and associated with detected audio mentions of the brand. Additionally or alternatively, audio, image and/or video codes inserted by content providers130to identify content may be used to identify brand identifiers. For example, an audio code for a segment of audio of the media stream160may be extracted and cross-referenced to a database of reference audio codes. Audio exposure of a detected brand identifier may also be stored in the example brand exposure database155. The audio mentions stored in the example brand exposure database155may also contain data that links the audio mention(s) to scene(s) being broadcast. Additionally, the identified audio mentions may be added to reports and/or data regarding brand exposure generated from the example brand exposure database155. To record information (e.g., such as ratings information) regarding audience consumption of the media content provided by the media stream160, the example system ofFIG.1includes any number and/or type(s) of audience measurements systems, one of which is designated at reference numeral180inFIG.1. The example audience measurement system180ofFIG.1records and/or stores in an example audience database185information representative of persons, respondents, households, etc., consuming and/or exposed to the content provided and/or delivered by the content providers130. The audience information and/or data stored in the example audience database185may be further combined with the brand exposure information/data recorded in the example brand exposure database155by the brand exposure monitor150. In the illustrated example, the combined audience/brand exposure information/data is stored in an example audience and brand exposure database195. The combination of audience information and/or data and brand based exposure measurement information195may be used, for example, to determine and/or estimate one or more statistical values representative of the number of persons and/or households exposed to one or more brand identifiers. FIG.2illustrates an example manner of implementing the example brand exposure monitor150ofFIG.1. To process the media stream160ofFIG.1, the brand exposure monitor150ofFIG.2includes a scene recognizer252. The example scene recognizer252ofFIG.2operates to detect scenes and create one or more image signatures for each identified scene included in the media stream160. In an example implementation, the media stream160includes a video stream comprising a sequence of image frames having a certain frame rate (e.g., such as 30 frames per second). A scene corresponds to a sequence of adjacent image frames having substantially similar characteristics. For example, a scene corresponds to a sequence of images captured with similar camera parameters (e.g., angle, height, aperture, focus length) and having a background that is statistically stationary (e.g., the background may have individual components that move, but the overall background on average appears relatively stationary). To perform scene detection, the example scene recognizer252creates an image signature for each image frame (possibly after sub-sampling at a lower frame rate). 
The example scene recognizer252then compares the image signatures within a sequence of frames to a current scene's key signature(s) to determine whether the image signatures are substantially similar or different. As discussed above, a current scene may be represented by one or more key frames (e.g., such as the first frame, etc.) with a corresponding one or more key signatures. If the image signatures for the sequence of frames are substantially similar to the key signature, the image frames are considered as corresponding to the current scene and at least one of the frames in the sequence (e.g., such as the starting frame, the midpoint frame, the most recent frame, etc.) is used as a key frame to represent the scene. The image signature corresponding to the key frame is then used as the image signature for the scene itself. If, however, a current image signature corresponding to a current image frame differs sufficiently from the key frame's signature(s), the current image frame corresponding to the current image signature is determined to mark the start of a new scene of the media stream160. Additionally, the most recent previous frame is determined to mark the end of the previous scene. How image signatures are compared to determine the start and end frames of scenes of the media stream160depends on the characteristics of the particular image signature technique implemented by the scene recognizer252. In an example implementation, the scene recognizer252creates a histogram of the luminance (e.g., Y) and chrominance (e.g., U & V) components of each image frame or one or more specified portions of each image. This image histogram becomes the image signature for the image frame. To compare the image signatures of two frames, the example scene recognizer252performs a bin-wise comparison of the image histograms for the two frames. The scene recognizer252then totals the differences for each histogram bin and compares the computed difference to one or more thresholds. The thresholds may be preset and/or programmable, and may be tailored to balance a trade-off between scene granularity and processing load requirements. The scene recognizer252of the illustrated example may also implement scene exclusion to further reduce processing requirements. As discussed above, the example scene recognizer252may exclude a scene based on, for example, previously obtained domain knowledge concerning the media content carried by the example media stream160. The domain knowledge, which may or may not be unique to the particular type of media content being processed, may be used to create a library of exclusion characteristics indicative of a scene that will not include any brand identifiers (e.g., logos). If scene exclusion is implemented, the example scene recognizer252may mark a detected scene for exclusion if it possesses some or all of the exclusion characteristics. For example, and as discussed above, in the context of media content corresponding to the broadcast of a baseball game, a scene characterized by a predominantly greenish background may be marked for exclusion because the scene corresponds to a camera shot depicting the turf of the baseball field. This is because, based on domain knowledge concerning broadcast baseball games, it is known that camera shots of the baseball field's turf rarely, if ever, include any brand identifiers to be reported.
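A minimal sketch of the histogram-based signature and bin-wise comparison described above is shown below. It is illustrative rather than normative: it assumes frames are already available as YUV arrays, and the bin count and difference threshold stand in for the preset and/or programmable thresholds discussed above.

```python
import numpy as np

def image_signature(yuv_frame, bins=32):
    """Concatenated Y/U/V histograms used as a per-frame image signature."""
    hists = [np.histogram(yuv_frame[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    sig = np.concatenate(hists).astype(float)
    return sig / sig.sum()          # normalize so frame size does not matter

def signatures_differ(sig_a, sig_b, threshold=0.25):
    """Bin-wise comparison: total the per-bin differences and apply a threshold."""
    return np.abs(sig_a - sig_b).sum() > threshold

# Sketch of the scene-change loop (key_sig is the current scene's key signature):
# if signatures_differ(image_signature(frame), key_sig):
#     start_new_scene(frame)   # hypothetical handler for a new scene boundary
```

Tightening the threshold yields finer scene granularity at the cost of more scenes to process, which is the trade-off noted above.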
As discussed above, the example scene recognizer252of the illustrated example reports any excluded scene and then continues processing to detect the next scene of the media stream160. Alternatively, the example scene recognizer252could simply discard the excluded scene and continue processing to detect the next scene of the media stream160. An example manner of implementing the example scene recognizer252ofFIG.2is discussed below in connection withFIG.3. Assuming that the detected scene currently being processed (referred to as the "current scene") is not excluded, the example scene recognizer252begins classifying the scene into one of the following four categories: a repeated scene of interest, a repeated scene of changed interest, a new scene, or a scene of no interest. For example, a scene of no interest is a scene known or previously identified as including no visible brand identifiers (e.g., logos). A repeated scene of interest is a scene of interest known to include visible brand identifiers (e.g., logos) and in which all visible brand identifiers are already known and can be identified. A repeated scene of changed interest is a scene of interest known to include visible brand identifiers (e.g., logos) and in which some visible brand identifiers (e.g., logos) are already known and can be identified, but other visible brand identifiers are unknown and/or cannot be identified automatically. A new scene corresponds to an unknown scene and, therefore, it is unknown whether the scene includes visible brand identifiers (e.g., logos). To determine whether the current scene is a scene of no interest or whether the scene is one of the other scenes of interest that may contain visible brand identifiers, the example scene recognizer252compares the image signature for the scene with one or more reference signatures. The reference signatures may correspond to previously known scene information stored in a scene database262and/or previously learned scene information stored in a learned knowledge database264. If the current scene's image signature does not match any of the available reference signatures, the example scene recognizer252classifies the scene as a new scene. If the scene's image signature does match one or more of the available reference signatures, but information associated with the matched reference signature(s) and stored in the scene database262and/or learned knowledge database264indicates that the scene includes no visible brand identifiers, the example scene recognizer252classifies the scene as a scene of no interest. Otherwise, the scene will be classified as either a repeated scene of interest or a repeated scene of changed interest by the example scene recognizer252as discussed below. In an example implementation using the image histograms described above to represent image signatures, a first threshold (or first thresholds) could be used for scene detection, and a second threshold (or second thresholds) could be used for scene classification based on comparison with reference scenes. In such an implementation, the first threshold(s) would define a higher degree of similarity than the second threshold(s). In particular, while the first threshold(s) would define a degree of similarity in which there was little or no change (at least statistically) between image frames, the second threshold(s) would define a degree of similarity in which, for example, some portions of the compared frames could be relatively similar, whereas other portions could be different.
For example, in the context of the broadcast baseball game example, a sequence of frames showing a first batter standing at home plate may meet the first threshold(s) such that all the frames in the sequence are determined to belong to the same scene. When a second batter is shown standing at home plate, a comparison of the first frame showing the second batter with the frame(s) showing the first batter may not meet the first threshold(s), thereby identifying the start of a new scene containing the second batter. However, because the background behind home plate will be largely unchanging, a comparison of the first scene containing the first batter with the second scene containing the second batter may meet the second threshold(s), indicating that the two scenes should be classified as similar scenes. In this particular example, the scene containing the second batter would be considered a repeated scene relative to the scene containing the first batter. The scene database262may be implemented using any data structure(s) and may be stored in any number and/or type(s) of memories and/or memory devices260. The learned knowledge database264may be implemented using any data structure(s) and may be stored in any number and/or type(s) of memories and/or memory devices260. For example, the scene database262and/or the learned knowledge database264may be implemented using bitmap files, a JPEG file repository, etc. To determine whether the scene having a signature matching one or more reference signatures is a repeated scene of interest or a repeated scene of changed interest, the example scene recognizer252ofFIG.2identifies one or more expected regions of interest included in the scene at issue based on stored information associated with reference scene(s) corresponding to the matched reference signature(s). An example brand recognizer254(also known as a logo detector254) included in the example brand exposure monitor150then performs brand identifier recognition (also known as "logo detection") by comparing and verifying the expected region(s) of interest with information corresponding to one or more corresponding expected reference brand identifiers stored in the learned knowledge database264and/or a brand library266. For example, the brand recognizer254may verify that the expected reference brand identifier(s) is/are indeed included in the expected region(s) of interest by comparing each expected region of interest in the scene's key frame with known brand identifier templates and/or templates stored in the example learned knowledge database264and/or a brand library266. The brand library266may be implemented using any data structure(s) and may be stored in any number and/or type(s) of memories and/or memory devices260. For example, the brand library266may store the information in a relational database, a list of signatures, a bitmap file, etc. Example techniques for brand identifier recognition (or logo detection) are discussed in greater detail below. Next, for each verified region of interest, the example brand recognizer254initiates a tracker function to track the contents of the verified region of interest across all the actual image frames included in the current scene. For example, the tracker function may compare a particular region of interest in the current scene's key frame with corresponding region(s) of interest in each of the other frames in the current scene.
If the tracker function verifies that the corresponding expected region(s) of interest match in all of the current scene's image frames, the example scene recognizer252classifies the scene as a repeated scene of interest. If, however, at least one region of interest in at least one of the current scene's image frames could not be verified with a corresponding expected reference brand identifier, the scene recognizer252classifies the scene as a repeated scene of changed interest. The processing of repeated scenes of changed interest is discussed in greater detail below. After classification of the scene, the scene recognizer252continues to detect and/or classify the next scene in the media stream160. To provide identification of unknown and/or unidentified brand identifiers included in new scenes and repeated scenes of changed interest, the example brand exposure monitor150ofFIG.1includes the GUI152. The example GUI152, also illustrated inFIG.2, displays information pertaining to the scene and prompts the user170to identify and/or confirm the identity of the scene and/or one or more potential brand identifiers included in one or more regions of interest. The example GUI152may be displayed via any type of output device270, such as a television (TV), a computer screen, a monitor, etc., when a new scene or a repeated scene of changed interest is identified by the example scene recognizer252. In an example implementation, when a scene is classified as a new scene or a repeated scene of changed interest, the example scene recognizer252stops (e.g., pauses) the media stream160ofFIG.1and then the GUI152prompts the user170for identification of the scene and/or identification of one or more regions of interest and any brand identifier(s) included in the identified region(s) of interest. For example, the GUI152may display a blank field to accept a scene name and/or information regarding a brand identifier provided by the user170, provide a pull-down menu of potential scene names and/or brand identifiers, suggest a scene name and/or a brand identifier which may be accepted and/or overwritten by the user170, etc. To create a pull-down menu and/or an initial value to be considered by the user170to identify the scene and/or any brand identifiers included in any respective region(s) of interest of the scene, the GUI152may obtain data stored in the scene database262, the learned knowledge database264and/or the brand library266. To detect the size and shape of one or more regions of interest included in a scene, the example GUI152ofFIG.2receives manual input to facilitate generation and/or estimation of the location, boundaries and/or size of each region of interest. For example, the example GUI152could be implemented to allow the user170to mark a given region of interest by, for example, (a) clicking on one corner of the region of interest and dragging the cursor to the furthest corner, (b) placing the cursor on each corner of the region of interest and clicking while the cursor is at each of the corners, (c) clicking anywhere in the region of interest with the GUI152estimating the size and/or shape of the region of interest, etc. Existing techniques for specifying and/or identifying regions of interest in video frames typically rely on manually marked regions of interest specified by a user.
Many manual marking techniques require a user to carefully mark all vertices of a polygon bounding a desired region of interest, or otherwise carefully draw the edges of some other closed graphical shape bounding the region of interest. Such manual marking techniques can require fine motor control and hand-eye coordination, which can result in fatigue if the number of regions of interest to be specified is significant. Additionally, different users are likely to mark regions of interest differently using existing manual marking techniques, which can result in irreproducible, imprecise and/or inconsistent monitoring performance due to variability in the specification of regions of interest across the video frames associated with the broadcast content undergoing monitoring. In a first example region of interest marking technique that may be implemented by the example GUI152, the GUI152relies on manual marking of the perimeter of a region of interest. In this first example marking technique, the user170uses a mouse (or any other appropriate input device) to move a displayed cursor to each point marking the boundary of the desired region of interest. The user170marks each boundary point by clicking a mouse button. After all boundary points are marked, the example GUI152connects the marked points in the order in which they were marked, thereby forming a polygon (e.g., such as a rectangle) defining the region of interest. Any area outside the polygon is regarded as being outside the region of interest. As mentioned above, one potential drawback of this first example region of interest marking technique is that manually drawn polygons can be imprecise and inconsistent. This potential lack of consistency can be especially problematic when region(s) of interest for a first set of scenes are marked by one user170, and region(s) of interest for some second set of scenes are marked by another user170. For example, inconsistencies in the marking of regions of interest may adversely affect the accuracy or reliability of any matching algorithms/techniques relying on the marked regions of interest. In a second example region of interest marking technique that may be implemented by the example GUI152, the GUI152implements a more automatic and consistent approach to marking a desired region of interest in any type of graphical presentation. For example, this second example region of interest marking technique may be used to mark a desired region of interest in an image, such as corresponding to a video frame or still image. Additionally or alternatively, the second example region of interest marking technique may be used to mark a desired region of interest in a drawing, diagram, slide, poster, table, document, etc., created using, for example, any type of computer-aided drawing and/or drafting application, word processing application, presentation creation application, etc. The foregoing examples of graphical presentations are merely illustrative and are not meant to be limiting with respect to the type of graphical presentations for which the second example region of interest marking technique may be used to mark a desired region of interest. In this automated example region of interest marking technique, the user170can create a desired region of interest from scratch or based on a stored and/or previously created region of interest acting as a template.
An example sequence of operations to create a region of interest from scratch using this example automated region of interest marking technique is illustrated inFIG.10. Referring toFIG.10, to create a region of interest in an example scene1000from scratch, the user170uses a mouse (or any other appropriate input device) to click anywhere inside the desired region of interest to create a reference point1005. Once the reference point1005is marked, the example GUI152determines and displays an initial region1010around the reference point1005to serve as a template for region of interest creation. In an example implementation, the automated region of interest marking technique illustrated inFIG.10compares adjacent pixels in a recursive manner to automatically generate the initial template region1010. For example, starting with the initially selected reference point1005, adjacent pixels in the four directions of up, down, left and right are compared to determine if they are similar (e.g., in luminance and chrominance) to the reference point1005. If any of these four adjacent pixels are similar, each of those similar adjacent pixels then forms the starting point for another comparison in the four directions of up, down, left and right. This procedure continues recursively until no similar adjacent pixels are found. When no similar adjacent pixels are found, the initial template region1010is determined to be a polygon (e.g., specified by vertices, such as a rectangle specified by four vertices) or an ellipse (e.g., specified by major and minor axes) bounding all of the pixels recursively found to be similar to the initial reference point1005. As an illustrative example, inFIG.10the reference point1005corresponds to a position on the letter “X” (labeled with reference numeral1015) as shown. Through recursive pixel comparison, all of the dark pixels comprising the letter “X” (reference numeral1015) will be found to be similar to the reference point1005. The initial template region1010is then determined to be a rectangular region bounding all of the pixels recursively found to be similar in the letter “X” (reference numeral1015). The automated region of interest marking technique illustrated inFIG.10can also automatically combine two or more initial template regions to create a single region of interest. As an illustrative example, inFIG.10, as discussed above, selecting the reference point1005causes the initial template region1010to be determined as bounding all of the pixels recursively found to be similar in the letter “X” (reference numeral1015). Next, if the reference point1020was selected, a second initial template region1025would be determined as bounding all of the pixels recursively found to be similar in the depicted letter “Y” (labeled with reference numeral1030). After determining the first and second initial template regions1010and1025based on the respective first and second reference points1005and1020, a combined region of interest1035could be determined. For example, the combined region of interest1035could be determined as a polygon (e.g., such as a rectangle) or an ellipse bounding all of the pixels in the first and second initial template regions1010and1025. More generally, the union of some or all initial template regions created from associated selected reference points may be used to construct a bounding shape, such as a polygon, an ellipse, etc. 
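The recursive adjacent-pixel comparison described forFIG.10can be sketched as follows. This Python example is an illustration only, not the claimed implementation: it operates on a grayscale array, replaces recursion with an explicit stack to avoid deep call chains, and uses a hypothetical intensity tolerance as the similarity test.

```python
import numpy as np

def grow_template_region(image, seed, tol=20):
    """Grow an initial template region from a selected reference point.

    image -- H x W grayscale numpy array
    seed  -- (row, col) of the user-selected reference point
    tol   -- maximum intensity difference treated as "similar"
    Returns the bounding rectangle (top, left, bottom, right) of all pixels
    reachable from the seed through similar up/down/left/right neighbours.
    """
    h, w = image.shape
    ref = int(image[seed])
    visited = np.zeros((h, w), dtype=bool)
    stack = [seed]
    top, left, bottom, right = seed[0], seed[1], seed[0], seed[1]
    while stack:
        r, c = stack.pop()
        if not (0 <= r < h and 0 <= c < w) or visited[r, c]:
            continue
        if abs(int(image[r, c]) - ref) > tol:
            continue
        visited[r, c] = True
        top, bottom = min(top, r), max(bottom, r)
        left, right = min(left, c), max(right, c)
        stack.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return top, left, bottom, right

def combine_regions(*boxes):
    """Bound two or more initial template regions with a single rectangle."""
    tops, lefts, bottoms, rights = zip(*boxes)
    return min(tops), min(lefts), max(bottoms), max(rights)
```

Here, calling grow_template_region once per selected reference point and then combine_regions over the results corresponds to building and combining the initial template regions described above.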
Any point inside such a bounding shape is then considered to be part of the created region of interest and, for example, may serve as a brand identifier template. Additionally or alternatively, a set of helper tools may be used to modify, for example, the template region1010in a regular and precise manner through subsequent input commands provided by the user170. For example, instead of combining the initial template region1010with the second template region1025as described above, the user170can click on a second point1050outside the shaded template region1010to cause the template region1010to grow to the selected second point1050. The result is a new template region1055. Similarly, the user170can click on a third point (not shown) inside the shaded template region1010to cause the template region1010to shrink to the selected third point. Furthermore, the user170can access an additional set of helper tools to modify the current template region (e.g., such as the template region1010) in more ways than only a straightforward shrinking or expanding of the template region to a selected point. In the illustrated example, the helper tool used to modify the template region1010to become the template region1055was a GROW_TO_POINT helper tool. Other example helper tools include a GROW_ONE_STEP helper tool, a GROW_ONE_DIRECTIONAL_STEP helper tool, a GROW_TO_POINT_DIRECTIONAL helper tool, an UNDO helper tool, etc. In the illustrated example, clicking on the selected point1050with the GROW_ONE_STEP helper tool activated would cause the template region1010to grow by only one step of resolution to become the new template region1060. However, if the GROW_ONE_DIRECTIONAL_STEP helper tool were activated, the template region1010would grow by one step of resolution only in the direction of the selected point1015to become the new template region1065(which corresponds to the entire darker shaded region depicted inFIG.10). If a GROW_TO_POINT_DIRECTIONAL helper tool were activated (example not shown), the template region1010would grow to the selected point, but only in the direction of the selected point. In the case of the DIRECTIONAL helper tools, the helper tool determines the side, edge, etc., of the starting template region nearest the selected point to determine in which direction the template region should grow. Additionally, other helper tools may be used to select the type, size, color, etc., of the shape/polygon (e.g., such as a rectangle) used to create the initial template region, to specify the resolution step size, etc. Also, although the example helper tools are labeled using the term "GROW" and the illustrated examples depict these helper tools as expanding the template region1010, these tools also can cause the template region1010to shrink in a corresponding manner by selecting a point inside, instead of outside, the example template region1010. As such, the example helper tools described herein can cause a starting template region to grow in either an expanding or contracting manner depending upon whether a point is selected outside or inside the template region, respectively. As mentioned above, the user170can also use the example automated region of interest creation technique to create a desired region of interest based on a stored and/or previously created region of interest (e.g., a reference region of interest) acting as a template.
To create a region of interest using a stored and/or previously created region of interest, the user170uses a mouse (or any other appropriate input device) to select a reference point approximately in the center of the desired region of interest. Alternatively, the user170could mark multiple reference points to define a boundary around the desired region of interest. To indicate that the example GUI152should create the region of interest from a stored and/or previously created region of interest rather than from scratch, the user170may use a different mouse button (or input selector on the input device) and/or press a predetermined key while selecting the reference point(s), press a search button on the graphical display before selecting the reference point(s), etc. After the reference point(s) are selected, the example GUI152uses any appropriate template matching procedure (e.g., such as the normalized cross correlation template matching technique described below) to match a region associated with the selected reference point(s) to one or more stored and/or previously created regions of interest. The GUI152then displays the stored and/or previously created region of interest that best matches the region associated with the selected reference point(s). The user170may then accept the returned region or modify the region using the helper tools as described above in the context of creating a region of interest from scratch. In some cases, a user170may wish to exclude an occluded portion of a desired region of interest because, for example, some object is positioned such that it partially obstructs the brand identifier(s) (e.g., logo(s)) included in the region of interest. For example, in the context of a media content presentation of a baseball game, a brand identifier in a region of interest may be a sign or other advertisement positioned behind home plate which is partially obstructed by the batter. In situations such as these, it may be more convenient to initially specify a larger region of interest (e.g., the region corresponding to the entire sign or other advertisement) and then exclude the occluded portion of the larger region (e.g., the portion corresponding to the batter) to create the final, desired region of interest. To perform such region exclusion, the user170may use an EXCLUSION MARK-UP helper tool to create a new region that is overlaid (e.g., using a different color, shading, etc.) on a region of interest initially created from scratch or from a stored and/or previously created region of interest. Additionally, the helper tools already described above (e.g., such as the GROW_TO_POINT, GROW_ONE_STEP, GROW_ONE_DIRECTIONAL_STEP, etc. helper tools) may be used to modify the size and/or shape of the overlaid region. When the user170is satisfied with the overlaid region, the GUI152excludes the overlaid region (e.g., corresponding to the occlusion) from the initially created region of interest to form the final, desired region of interest. Returning toFIG.2, once the information for the current scene, region(s) of interest, and/or brand identifiers included therein has been provided via the GUI152for a new scene or a repeated scene of changed interest, the example GUI152updates the example learned knowledge database264with information concerning, for example, the brand identifier(s) (e.g., logo(s)), identity(ies), location(s), size(s), orientation(s), etc.
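The effect of the EXCLUSION MARK-UP step can be expressed compactly with boolean pixel masks, as in the sketch below; representing regions as masks is an assumption made for illustration and is not necessarily how the example GUI152stores regions of interest.

```python
import numpy as np


def rect_mask(shape, top, left, bottom, right):
    """Boolean pixel mask for a rectangular region (inclusive bounds)."""
    mask = np.zeros(shape, dtype=bool)
    mask[top:bottom + 1, left:right + 1] = True
    return mask


def apply_exclusion(roi_mask, exclusion_mask):
    """Remove an occluded portion (the overlaid exclusion region) from an
    initially created region of interest: the final region keeps only the
    pixels that are inside the initial region and outside the overlay."""
    return roi_mask & ~exclusion_mask
```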
The resulting updated information may subsequently be used for comparison with another identified scene detected in the media stream160ofFIG.1and/or a scene included in any other media stream(s). Additionally, and as discussed above, a tracker function is then initiated for each newly marked region of interest. The tracker function uses the marked region of interest as a template to track the corresponding region of interest in the adjacent image frames comprising the current detected scene. In particular, an example tracker function determines how a region of interest marked in a key frame of a scene may change (e.g., in location, size, orientation, etc.) over the adjacent image frames comprising the scene. Parameters describing the region of interest and how it changes (if at all) over the scene are used to derive an exposure measurement for brand identifier(s) included in the region of interest, as well as to update the example learned knowledge database264with information concerning how the brand identifier(s) included in the region of interest may appear in subsequent scenes. Furthermore, if the marked region of interest contains an excluded region representing an occluded portion of a larger region of interest (e.g., such as an excluded region marked using the EXCLUSION MARK-UP helper tool described above), the example tracker function can be configured to track the excluded region separately to determine whether the occlusion represented by the excluded region changes and/or lessens (e.g., becomes partially or fully removed) in the adjacent frames. For example, the tracker function can use any appropriate image comparison technique to determine that at least portions of the marked region of interest and at least portions of the excluded region of interest have become similar to determine that the occlusion has changed and/or lessened. If the occlusion changes and/or lessens in the adjacent frames, the example tracker function can combine the marked region of interest with the non-occluded portion(s) of the excluded region to obtain a new composite region of interest and/or composite brand identifier template (discussed below) for use in brand identifier recognition. Next, to continue analysis of brand identifier exposure in the media stream160ofFIG.1, the example scene recognizer252restarts the media stream160ofFIG.1. As discussed above, to recognize a brand identifier (e.g., logo) appearing in a scene, and to gather information regarding the brand identifier, the example brand exposure monitor150ofFIG.2includes the brand recognizer254(also known as the logo detector254). The example brand recognizer254ofFIG.2determines all brand identifiers appearing in the scene. For example the brand recognizer254may recognize brand identifiers in a current scene of interest by comparing the region(s) of interest with one or more reference brand identifiers (e.g., one or more reference logos) stored in the learned knowledge database264and/or known brand identifiers stored in the brand library266. The reference brand identifier information may be stored using any data structure(s) in the brand library266and/or the learned knowledge database264. For example, the brand library266and/or the learned knowledge database264may store the reference brand identifier information using bitmap files, a repository of JPEG files, etc. 
To reduce processing requirements and improve recognition efficiency such that, for example, brand identifiers may be recognized in real-time, the example brand recognizer254uses known and/or learned information to analyze the current scene for only those reference brand identifiers expected to appear in the scene. For example, if the current scene is a repeated scene of interest matching a reference (e.g., previously learned or known) scene, the example brand recognizer254may use stored information regarding the matched reference scene to determine the region(s) of interest and associated brand identifier(s) expected to appear in the current scene. Furthermore, the example brand recognizer254may track the recognized brand identifiers appearing in a scene across the individual image frames comprising the scene to determine additional brand identifier parameters and/or to determine composite brand identifier templates (as discussed above) to aid in future recognition of brand identifiers, to provide more accurate brand exposure reporting, etc. In an example implementation, the brand recognizer254performs template matching to compare a region of interest in the current scene to one or more reference brand identifiers (e.g., one or more reference logos) associated with the matching reference scene. For example, when a user initially marks a brand identifier (e.g., logo) in a detected scene (e.g., such as a new scene), the marked region represents a region of interest. From this marked region of interest, templates of different sizes, perspectives, etc. are created to be reference brand identifiers for the resulting reference scene. Additionally, composite reference brand identifier templates may be formed by the example tracker function discussed above from adjacent frames containing an excluded region of interest representing an occlusion that changes and/or lessens. Then, for a new detected scene, template matching is performed against these various expected reference brand identifier(s) associated with the matching reference scene to account for possible (and expected) perspective differences (e.g., differences in camera angle, zooming, etc.) between a reference brand identifier and its actual appearance in the current detected scene. For example, a particular reference brand identifier may be scaled from one-half to twice its size, in predetermined increments, prior to template matching with the region of interest in the current detected scene. Additionally or alternatively, the orientation of the particular reference brand identifier may be varied over, for example, −30 degrees to +30 degrees, in predetermined increments, prior to template matching with the region of interest in the current detected scene. Furthermore, template matching as implemented by the example brand recognizer254may be based on comparing the luminance values, chrominance values, or any combination thereof, for the region(s) of interest and the reference brand identifiers. An example template matching technique that may be implemented by the example brand recognizer254for comparing a region of interest to the scaled versions and/or different orientations of the reference brand identifiers is described in the paper “Fast Normalized Cross Correlation” by J. P. Lewis, available at http://www.idiom.com/˜zilla/Work/nvisionInterface/nip.pdf (accessed Oct. 24, 2007), which is submitted herewith and incorporated by reference in its entirety. 
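One way to pre-compute the scaled and rotated reference brand identifier templates described above is sketched below using OpenCV; the specific scale and angle increments are illustrative assumptions.

```python
import cv2
import numpy as np


def generate_template_variants(reference_logo,
                               scales=np.linspace(0.5, 2.0, 7),
                               angles=np.arange(-30, 31, 10)):
    """Build scaled (one-half to twice the original size) and rotated
    (-30 to +30 degrees) versions of a reference brand identifier for
    subsequent template matching against a region of interest."""
    variants = []
    for scale in scales:
        scaled = cv2.resize(reference_logo, None, fx=float(scale), fy=float(scale),
                            interpolation=cv2.INTER_LINEAR)
        h, w = scaled.shape[:2]
        center = (w / 2.0, h / 2.0)
        for angle in angles:
            rotation = cv2.getRotationMatrix2D(center, float(angle), 1.0)
            rotated = cv2.warpAffine(scaled, rotation, (w, h))
            variants.append((float(scale), float(angle), rotated))
    return variants
```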
In an example implementation based on the template matching technique described by Lewis, the example brand recognizer254computes the normalized cross correlation (e.g., based on luminance and/or chrominance values) of the region of interest with each template representative of a particular reference brand identifier having a particular scaling and orientation. The largest normalized cross correlation across all templates representative of all the different scalings and orientations of all the different reference brand identifiers of interest is then associated with a match, provided the correlation exceeds a threshold. As discussed in Lewis, the benefits of a normalized cross correlation implementation include robustness to variations in amplitude of the region of interest, robustness to noise, etc. Furthermore, such an example implementation of the example brand recognizer254can be implemented using Fourier transforms and running sums as described in Lewis to reduce processing requirements over a brute-force spatial domain implementation of the normalized cross correlation. To report measurements and other information about brand identifiers (e.g., logos) recognized and/or detected in the example media stream160, the example brand exposure monitor150ofFIG.2includes a report generator256. The example report generator256ofFIG.2collects the brand identifiers, along with any associated appearance parameters, etc., determined by the brand recognizer254, organizes the information, and produces a report. The report may be output using any technique(s) such as, for example, printing to a paper source, creating and/or updating a computer file, updating a database, generating a display, sending an email, etc. While an example manner of implementing the example brand exposure monitor150ofFIG.1has been illustrated inFIG.2, some or all of the elements, processes and/or devices illustrated inFIG.2may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any way. Further, the example scene recognizer252, the example brand recognizer254, the example GUI152, the example mass memory260, the example scene database262, the example learned knowledge database264, the example brand library266, the report generator256, and/or more generally, the example brand exposure monitor150may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware. Thus, for example, any of the example scene recognizer252, the example brand recognizer254, the example GUI152, the example mass memory260, the example scene database262, the example learned knowledge database264, the example brand library266, the report generator256, and/or more generally, the example brand exposure monitor150could be implemented by one or more circuit(s), programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)), etc. When any of the appended claims are read to cover a purely software implementation, at least one of the example brand exposure monitor150, the example scene recognizer252, the example brand recognizer254, the example GUI152, the example mass memory260, the example scene database262, the example learned knowledge database264, the example brand library266and/or the report generator256are hereby expressly defined to include a tangible medium such as a memory, digital versatile disk (DVD), compact disk (CD) etc. 
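A corresponding sketch of the matching step is shown below; it relies on OpenCV's normalized cross correlation rather than the Fourier-transform and running-sum optimizations described by Lewis, and the 0.7 acceptance threshold is an assumed value.

```python
import cv2


def best_brand_match(region_of_interest, template_variants, threshold=0.7):
    """Correlate a region of interest against every scaled/rotated reference
    template (as produced by generate_template_variants above) and return the
    best match if its peak normalized cross correlation exceeds the threshold."""
    best = None
    roi_h, roi_w = region_of_interest.shape[:2]
    for scale, angle, template in template_variants:
        t_h, t_w = template.shape[:2]
        if t_h > roi_h or t_w > roi_w:
            continue  # template is larger than the region of interest at this scale
        result = cv2.matchTemplate(region_of_interest, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if best is None or max_val > best[0]:
            best = (max_val, scale, angle, max_loc)
    if best is not None and best[0] >= threshold:
        return best   # (correlation, scale, angle, location within the region)
    return None       # no sufficiently strong match was found
```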
Moreover, the example brand exposure monitor150may include one or more elements, processes, and/or devices instead of, or in addition to, those illustrated inFIG.2, and/or may include more than one of any or all of the illustrated elements, processes and/or devices. An example manner of implementing the example scene recognizer252ofFIG.2is shown inFIG.3. The example scene recognizer252ofFIG.3includes a signature generator352to create one or more image signatures for each frame (possibly after sub-sampling) included in, for example, the media stream160. The one or more image signatures are then used for scene identification and/or scene change detection. In the illustrated example, the signature generator352generates the image signature for an image frame included in the media stream160by creating an image histogram of the luminance and/or chrominance values included in the image frame. To implement scene identification and scene change detection as discussed above, the example scene recognizer252ofFIG.3includes a scene detector354. The example scene detector354of the illustrated example detects scenes in the media stream160by comparing successive image frames to a starting frame representative of a current scene being detected. As discussed above, successive image frames that have similar image signatures are grouped together to form a scene. One or more images and their associated signatures are then used to form the key frame(s) and associated key signature(s) for the detected scene. In the illustrated example, to detect a scene by determining whether a scene change has occurred, the example scene detector354compares the generated image signature for a current image frame with the image signature for the starting frame (or the appropriate key frame) of the scene currently being detected. If the generated image signature for the current image frame is sufficiently similar to the starting frame's (or key frame's) image signature (e.g., when negligible motion has occurred between successive frames in the media stream160, when the camera parameters are substantially the same and the backgrounds are statistically stationary, etc.), the example scene detector354includes the current image frame in the current detected scene, and the next frame is then analyzed by comparing it to the starting frame (or key frame) of the scene. However, if the scene detector354detects a significant change between the image signatures (e.g., in the example of presentation of a baseball game, when a batter in the preceding frame is replaced by an outfielder in the current image frame of the media stream160), the example scene detector354identifies the current image frame as starting a new scene, stores the current image frame as the starting frame (and/or key frame) for that new scene, and stores the image signature for the current frame for use as the starting image signature (and/or key signature) for that new scene. The example scene detector354then marks the immediately previous frame as the ending frame for the current scene and determines one or more key frames and associated key image signature(s) to represent the current scene. As discussed above, the key frame and key image signature for the current scene may be chosen to be, for example, the frame and signature corresponding to the first frame in the scene, the last frame in the scene, the midpoint frame in the scene, etc.
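A minimal sketch of histogram-based image signatures and the associated scene-change test is shown below; the number of bins, the L1 distance measure and the similarity cut-off are illustrative assumptions.

```python
import numpy as np


def image_signature(luminance_frame, bins=32):
    """Create an image signature for a frame as a normalized histogram of its
    luminance values (0-255)."""
    hist, _ = np.histogram(luminance_frame, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)


def is_same_scene(frame_signature, key_signature, max_distance=0.2):
    """Treat a frame as part of the current scene when its signature is
    sufficiently similar to the scene's starting (or key) signature."""
    return np.abs(frame_signature - key_signature).sum() <= max_distance
```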
In another example, the key frame and key image signature may be determined to be an average and/or some other statistical combination of the frames and/or signatures corresponding to the detected scene. The current scene is then ready for scene classification. The example scene recognizer252ofFIG.3also includes a scene excluder355to exclude certain detected scenes from brand exposure processing. As discussed above, a detected scene may be excluded under circumstances where it is likely the scene will not contain any brand identifiers (e.g., logos). In an example implementation, the scene excluder355is configured to use domain knowledge corresponding to the particular type of media content being processed to determine when a detected scene exhibits characteristics indicating that the scene will not contain any brand identifiers. If a detected scene is excluded, the scene excluder355of the illustrated example invokes the report generator256ofFIG.2to report the excluded scene. To categorize a non-excluded, detected scene, the example scene recognizer252ofFIG.3includes a scene classifier356. The example scene classifier356compares the current detected scene (referred to as the current scene) to one or more reference scenes (e.g., previously learned and/or known scenes) stored in the scene database262and/or the learned knowledge database264. For example, the scene classifier356may compare an image signature representative of the current scene to one or more reference image signatures representative of one or more respective reference scenes. Based on the results of the comparison, the example scene classifier356classifies the current scene into one of the following four categories: a repeated scene of interest, a repeated scene of changed interest, a new scene, or a scene of no interest. For example, if the current scene's image signature does not match any reference scene's image signature, the example scene classifier356classifies the scene as a new scene and displays the current scene via the output device270ofFIG.2. For example, a prompt is shown via the GUI152to alert the user170of the need to identify the scene. However, if the current scene's image signature matches a reference scene's image signature and the corresponding reference scene has already been marked as a scene of no interest, information describing the current scene (e.g., such as its key frame, key signature, etc.) for use in detecting subsequent scenes of no interest is stored in the example learned knowledge database264and the next scene in the media stream160is analyzed. If, however, a match is found and the corresponding reference scene has not been marked as a scene of no interest, the example scene classifier356then determines one or more expected regions of interest in the detected scene based on region of interest information corresponding to the matched reference scene. The example scene classifier356then invokes the example brand recognizer254to perform brand recognition by comparing the expected region(s) of interest included in the current scene with one or more reference brand identifiers (e.g., previously learned and/or known brand identifiers) stored in the learned knowledge database264and/or the brand library266.
Then, if one or more regions of interest included in the identified scene do not match any of the corresponding expected reference brand identifiers stored in the learned knowledge database264and/or the brand database266, the current scene is classified as a repeated scene of changed interest and displayed at the output device270. For example, a prompt may be shown via the GUI152to alert the user170of the need to detect and/or identify one or more brand identifiers included in the non-matching region(s) of interest. The brand identifier (e.g., logo) marking/identification provided by the user170is then used to update the learned knowledge database264. Additionally, if the current scene was a new scene, the learned knowledge database264may be updated to use the current detected scene as a reference for detecting future repeated scenes of interest. However, if all of the region(s) of interest included in the current scene match the corresponding expected reference brand identifier(s) stored in the learned knowledge database264and/or the brand database266, the current scene is classified as a repeated scene of interest and the expected region(s) of interest are automatically analyzed by the brand recognizer254to provide brand exposure reporting with no additional involvement needed by the user170. Furthermore, and as discussed above, a tracker function is then initiated for each expected region of interest. The tracker function uses the expected region of interest as a template to track the corresponding region of interest in the adjacent image frames comprising the current detected scene. Parameters describing the region of interest and how it changes over the scene are used to derive an exposure measurement for brand identifier(s) included in the region of interest, as well as to update the example learned knowledge database264with information concerning how the brand identifier(s) included in the region of interest may appear in subsequent scenes. While an example manner of implementing the example scene recognizer252ofFIG.2has been illustrated inFIG.3, some or all of the elements, processes and/or devices illustrated inFIG.3may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any way. Further, the example signature generator352, the example scene detector354, the example scene classifier356, and/or more generally, the example scene recognizer252ofFIG.3may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware. Moreover, the example scene recognizer252may include data structures, elements, processes, and/or devices instead of, or in addition to, those illustrated inFIG.3and/or may include more than one of any or all of the illustrated data structures, elements, processes and/or devices. An example manner of implementing the example brand recognizer254ofFIG.2is shown inFIG.4. To detect one or more brand identifiers (e.g., one or more logos) in a region of interest of the identified scene, the example brand recognizer254ofFIG.4includes a brand identifier detector452. The example brand identifier detector452ofFIG.4compares the content of each region of interest specified by, for example, the scene recognizer252ofFIG.2, with one or more reference (e.g., previously learned and/or known) brand identifiers corresponding to a matched reference scene and stored in the example learned knowledge database264and/or the brand library266.
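The four-way classification described above can be summarized as in the following sketch; the record fields and helper callables are hypothetical placeholders rather than interfaces of the example scene classifier356.

```python
def classify_scene(current_signature, reference_scenes,
                   signatures_match, rois_all_match):
    """Classify a detected scene as a new scene, a scene of no interest, a
    repeated scene of interest, or a repeated scene of changed interest.

    reference_scenes: mapping of scene id -> record with 'signature',
    'no_interest' and 'rois' fields; signatures_match compares two image
    signatures; rois_all_match reports whether every expected region of
    interest matches a reference brand identifier."""
    for scene_id, reference in reference_scenes.items():
        if not signatures_match(current_signature, reference["signature"]):
            continue
        if reference["no_interest"]:
            return "scene_of_no_interest", scene_id
        if rois_all_match(reference["rois"]):
            return "repeated_scene_of_interest", scene_id
        return "repeated_scene_of_changed_interest", scene_id
    return "new_scene", None
```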
For example, and as discussed above, the brand identifier detector452may perform template matching to compare the region of interest to one or more scaled versions and/or one or more different orientations of the reference brand identifiers (e.g., reference logos) stored in the learned knowledge database264and/or the brand library266to determine brand identifiers (e.g., logos) included in the region of interest. For example, the brand identifier detector452may include a region of interest (ROI) detector462and an ROI tracker464. The example ROI detector462locates an ROI in a key frame representing the current scene by searching each known or previously learned (e.g., observed) ROI associated with the current scene's matching reference scene. Additionally, the example ROI detector462may search all known or previously learned locations, perspectives (e.g., size, angle, etc.), etc., for each expected ROI associated with the matching reference scene. Upon finding an ROI in the current scene that matches an expected ROI in the reference scene, the observed ROI, its locations, its perspectives, and its association with the current scene are stored in the learned knowledge database264. The learned knowledge database264, therefore, is updated with any new learned information each time an ROI is detected in a scene. The example ROI tracker464then tracks the ROI detected by the ROI detector462in the key frame of the current scene. For example, the ROI tracker464may search for the detected ROI in image frames adjacent to the key frame of the current scene, and in a neighborhood of the known location and perspective of the detected ROI in the key frame of the current scene. During the tracking process, appearance parameters, such as, for example, location, size, matching quality, visual quality, etc. are recorded for assisting the detection of ROI(s) in future repeated image frames and for deriving exposure measurements. (These parameters may be used as search keys and/or templates in subsequent matching efforts.) The example ROI tracker464stops processing the current scene when all frames in the scene are processed and/or when the ROI cannot be located in a certain specified number of consecutive image frames. To identify the actual brands associated with one or more brand identifiers, the example brand recognizer254ofFIG.4includes a brand identifier matcher454. The example brand identifier matcher454processes the brand identifier(s) detected by the brand identifier detector452to obtain brand identity information stored in the brand database266. For example, the brand identity information stored in the brand database266may include, but is not limited to, internal identifiers, names of entities (e.g., corporations, individuals, etc.) owning the brands associated with the brand identifiers, product names, service names, etc. To measure the exposure of the brand identifiers (e.g., logos) detected in, for example, the scenes detected in the media stream160, the example brand recognizer254ofFIG.4includes a measure and tracking module456. The example measure and tracking module456ofFIG.4collects appearance data corresponding to the detected/recognized brand identifier(s) (e.g., logos) included in the image frames of each detected scene, as well as how the detected/recognized brand identifier(s) (e.g., logos) may vary across the image frames comprising the detected scene.
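ROI tracking of the kind described above might be sketched as follows, searching a neighborhood of the last known location in each adjacent frame and stopping after a run of consecutive misses; the search margin, score threshold and miss limit are assumed values.

```python
import cv2


def track_roi(frames, key_index, roi_template, start_location,
              search_margin=40, min_score=0.6, max_misses=5):
    """Track a region of interest detected in the key frame through the
    following frames of the scene, returning per-frame appearance parameters."""
    t_h, t_w = roi_template.shape[:2]
    x, y = start_location           # top-left corner of the ROI in the key frame
    observations = []
    misses = 0
    for frame in frames[key_index + 1:]:
        # Restrict the search to a window around the last known location.
        x0, y0 = max(0, x - search_margin), max(0, y - search_margin)
        x1 = min(frame.shape[1], x + t_w + search_margin)
        y1 = min(frame.shape[0], y + t_h + search_margin)
        window = frame[y0:y1, x0:x1]
        if window.shape[0] < t_h or window.shape[1] < t_w:
            break
        result = cv2.matchTemplate(window, roi_template, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(result)
        if score >= min_score:
            x, y = x0 + loc[0], y0 + loc[1]
            observations.append({"location": (x, y), "size": (t_w, t_h),
                                 "match_quality": float(score)})
            misses = 0
        else:
            misses += 1
            if misses >= max_misses:
                break               # ROI lost for too many consecutive frames
    return observations
```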
For example, such reported data may include information regarding location, size, orientation, match quality, visual quality, etc., for each frame in the detected scene. (This data enables a new ad payment/selling model wherein advertisers pay per frame and/or time of exposure of embedded brand identifiers.) In an example implementation, the measure and tracking module456determines a weighted location and size for each detected/recognized brand identifier. For example, the measure and tracking module456may weight the location and/or size of a brand identifier by the duration of exposure at that particular location and/or size to determine the weighted location and/or size information. A report of brand exposure may be generated from the aforementioned information by a report generator256. While an example manner of implementing the example brand recognizer254ofFIG.2has been illustrated inFIG.4, some or all of the elements, processes and/or devices illustrated inFIG.4may be combined, divided, re-arranged, omitted, eliminated and/or implemented in any way. Further, the example brand identifier detector452, the example brand identifier matcher454, the example measure and tracking module456, and/or more generally, the example brand recognizer254ofFIG.4may be implemented by hardware, software, firmware and/or any combination of hardware, software and/or firmware. Thus, for example, any of the example brand identifier detector452, the example brand identifier matcher454, the example measure and tracking module456, and/or more generally, the example brand recognizer254could be implemented by one or more circuit(s), programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)), etc. When any of the appended claims are read to cover a purely software implementation, at least one of the example brand recognizer254, the example brand identifier detector452, the example brand identifier matcher454and/or the example measure and tracking module456are hereby expressly defined to include a tangible medium such as a memory, digital versatile disk (DVD), compact disk (CD) etc. Moreover, the example brand recognizer254ofFIG.4may include one or more elements, processes, and/or devices instead of, or in addition to, those illustrated inFIG.4, and/or may include more than one of any or all of the illustrated elements, processes and/or devices. To better illustrate the operation of the example signature generator352, the example scene detector354, the example scene excluder355, the example scene classifier356, the example ROI detector462and the example ROI tracker464, example scenes that could be processed to measure brand exposure are shown inFIGS.5A-5D. The example scenes shown inFIGS.5A-5Dare derived from a media stream160providing example broadcasts of sporting events. The example sporting event broadcasts are merely illustrative and the methods and apparatus disclosed herein are readily applicable to processing media streams to determine brand exposure associated with any type of media content. Turning to the figures,FIG.5Aillustrates four example key frames505,510,515and520associated with four respective example scenes which could qualify for scene exclusion based on known or learned domain knowledge. In particular, the example key frame505depicts a scene having a background (e.g., a golf course) that includes a predominance of uniformly grouped greenish pixels.
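A simple duration-weighted summary of the tracked appearance parameters might look like the following; the per-frame record layout and the assumption of uniform frame durations are illustrative.

```python
def weighted_exposure(observations, frame_duration):
    """Summarize exposure for one tracked brand identifier: total exposure
    time plus location and size weighted by the time spent at each value
    (with uniform frame durations this reduces to a per-frame average)."""
    if not observations:
        return None
    n = len(observations)
    mean_x = sum(o["location"][0] for o in observations) / n
    mean_y = sum(o["location"][1] for o in observations) / n
    mean_w = sum(o["size"][0] for o in observations) / n
    mean_h = sum(o["size"][1] for o in observations) / n
    return {"exposure_seconds": n * frame_duration,
            "weighted_location": (mean_x, mean_y),
            "weighted_size": (mean_w, mean_h)}
```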
If the domain knowledge corresponding to the type of media content that generated the example key frame505(e.g., such as an expected broadcast of a golfing event) indicates that a scene including a predominance of uniformly grouped greenish pixels should be excluded because it corresponds to a camera shot of the playing field (e.g., golf course), the example scene excluder355could be configured with such knowledge and exclude the scene corresponding to the example key frame505. The example key frame510corresponds to a scene of short duration because the scene includes a predominance of components in rapid motion. In an example implementation, the scene excluder355could be configured to exclude such a scene because it is unlikely a brand identifier would remain in the scene for sufficient temporal duration to be observed meaningfully by a person consuming the media content. The example key frame515corresponds to a scene of a crowd of spectators at the broadcast sporting event. Such a scene could also be excluded by the example scene excluder355if, for example, the domain knowledge available to the scene excluder355indicated that a substantially uniform, mottled scene corresponds to an audience shot and, therefore, is not expected to include any brand identifier(s). The example key frame520corresponds to a scene from a commercial being broadcast during the example broadcasted sporting event. In an example implementation, the scene excluder355could be configured to exclude scenes corresponding to a broadcast commercial (e.g., based on a detected audio code in the example media stream160, based on a detected transition (e.g., blank screen) in the example media stream160, etc.) if, for example, brand exposure reporting is to be limited to embedded advertisements. FIG.5Billustrates two example key frames525and530associated with two example scenes which could be classified as scenes of no interest by the example scene classifier356. In the illustrated example, the key frame525corresponds to a new detected scene that may be marked by the user170as a scene of no interest because the example key frame525does not include any brand identifiers (e.g., logos). The scene corresponding to the key frame525then becomes a learned reference scene of no interest. Next, the example key frame530corresponds to a subsequent scene detected by the example scene detector354. By comparing the similar image signatures (e.g., image histograms) for the key frame525to the key frame530of the subsequent detected scene, the example scene classifier356may determine that the image frame530corresponds to a repeat of the reference scene corresponding to the example key frame525. In such a case, the example scene classifier356would then determine that the key frame530corresponds to a repeated scene of no interest because the matching reference scene corresponding to the example key frame525had been marked as a scene of no interest. FIG.5Cillustrates two example key frames535and540associated with two example scenes which could be classified as scenes of interest by the example scene classifier356. In the illustrated example, the key frame535corresponds to a new detected scene that may be marked by the user170as a scene of interest because the example key frame535has a region of interest545including a brand identifier (e.g., the sign advertising “Banner One”). The scene corresponding to the key frame535would then become a learned reference scene of interest.
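As an illustration of such a domain-knowledge exclusion rule, the following sketch flags a key frame dominated by greenish pixels (the golf-course example above); the channel ratios and the 0.6 fraction threshold are assumptions rather than values used by the example scene excluder355.

```python
import numpy as np


def is_excludable_field_shot(frame_rgb, green_fraction_threshold=0.6):
    """Return True when the key frame contains a predominance of greenish
    pixels, suggesting a playing-field shot that is unlikely to contain any
    brand identifiers."""
    r = frame_rgb[:, :, 0].astype(float)
    g = frame_rgb[:, :, 1].astype(float)
    b = frame_rgb[:, :, 2].astype(float)
    greenish = (g > 1.2 * r) & (g > 1.2 * b)
    return greenish.mean() >= green_fraction_threshold
```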
Furthermore, the user170may mark the region of interest545, which would then be used by the example ROI tracker464to create one or more reference brand identifier templates for detecting subsequent repeated scenes of interest corresponding to this reference scene and region of interest. Next, the example key frame540corresponds to a subsequent scene detected by the example scene detector354. By comparing the similar image signatures (e.g., image histograms) for the key frame535and the key frame540of the subsequent detected scene, the example scene classifier356may determine that the image frame540corresponds to a repeat of the reference scene corresponding to the example key frame535. Because the reference scene is a scene of interest, the example scene classifier356would then invoke the example ROI detector462to find the appropriate expected region of interest in the example key frame540based on the reference template(s) corresponding to the reference region of interest545. In the illustrated example, the ROI detector462finds the region of interest550corresponding to the reference region of interest545because the two regions of interest are substantially similar except for expected changes in orientation, size, location, etc. Because the example ROI detector462found and verified the expected region of interest550in the illustrated example, the example scene classifier356would classify the scene corresponding to the example key frame540as a repeated scene of interest relative to the reference scene corresponding to the example key frame535. FIG.5Dillustrates two example key frames555and560associated with two example scenes which could be classified as scenes of interest by the example scene classifier356. In the illustrated example, the key frame555corresponds to a new detected scene that may be marked by the user170as a scene of interest because the example key frame555has a region of interest565including a brand identifier (e.g., the sign advertising “Banner One”). The scene corresponding to the key frame555would then become a learned reference scene of interest. Furthermore, the user170may mark the region of interest565, which would then be used by the example ROI tracker464to create one or more reference brand identifier templates for detecting subsequent repeated scenes of interest corresponding to this reference scene and region of interest. Next, the example key frame560corresponds to a subsequent scene detected by the example scene detector354. By comparing the similar image signatures (e.g., image histograms) for the key frame555and the key frame560of the subsequent detected scene, the example scene classifier356may determine that the image frame560corresponds to a repeat of the reference scene corresponding to the example key frame555. Because the reference scene is a scene of interest, the example scene classifier356would then invoke the example ROI detector462to find the appropriate expected region of interest in the example key frame560based on the reference template(s) corresponding to the reference region of interest565. In the illustrated example, the ROI detector462does not find any region of interest corresponding to the reference region of interest565because there is no brand identifier corresponding to the advertisement “Banner One” in the example key frame560.
Because the example ROI detector was unable to verify the expected region of interest in the illustrated example, the example scene classifier356would classify the scene corresponding to the example key frame560as a repeated scene of changed interest relative to the reference scene corresponding to the example key frame555. Next, because the scene corresponding to the example key frame560is classified as a repeated scene of changed interest, the user170would be requested to mark any brand identifier(s) included in the scene. In the illustrated example, the user170may mark the region of interest570because it includes a brand identifier corresponding to a sign advertising “Logo Two.” The example ROI tracker464would then be invoked to create one or more reference brand identifier templates based on the marked region of interest570for detecting subsequent repeated scenes of interest including this new reference region of interest. FIGS.6A-6Bcollectively form a flowchart representative of example machine accessible instructions600that may be executed to implement the example scene recognizer252ofFIGS.2and/or3, and/or at least a portion of the example brand exposure monitor150ofFIGS.1and/or2.FIGS.7A-7Ccollectively form a flowchart representative of example machine accessible instructions700that may be executed to implement the example GUI152ofFIGS.1and/or2, and/or at least a portion of the example brand exposure monitor150ofFIGS.1and/or2.FIGS.8A-8Bare flowcharts representative of example machine accessible instructions800and850that may be executed to implement the example brand recognizer254ofFIGS.2and/or3, and/or at least a portion of the example brand exposure monitor150ofFIGS.1and/or2. The example machine accessible instructions ofFIGS.6A-6B,7A-7C and/or8A-8Bmay be carried out by a processor, a controller and/or any other suitable processing device. For example, the example machine accessible instructions ofFIGS.6A-6B,7A-7C and/or8A-8Bmay be embodied in coded instructions stored on a tangible medium such as a flash memory, a read-only memory (ROM) and/or random-access memory (RAM) associated with a processor (e.g., the example processor905discussed below in connection withFIG.9). Alternatively, some or all of the example brand exposure monitor150, the example GUI152, the example scene recognizer252, and/or the example brand recognizer254may be implemented using any combination(s) of application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)), field programmable logic device(s) (FPLD(s)), discrete logic, hardware, firmware, etc. Also, some or all of the example machine accessible instructions ofFIGS.6A-B,7A-C and/or8A-8B may be implemented manually or as any combination of any of the foregoing techniques, for example, any combination of firmware, software, discrete logic and/or hardware. Further, although the example machine accessible instructions are described with reference to the example flowcharts ofFIGS.6A-6B,7A-7C and8A-8B, many other methods of implementing the machine accessible instructions ofFIGS.6A-6B,7A-7C, and/or8A-8B may be employed. For example, the order of execution of the blocks may be changed, and/or one or more of the blocks described may be changed, eliminated, sub-divided, or combined. 
Additionally, some or all of the example machine accessible instructions ofFIGS.6A-6B,7A-7C and/or8A-8B may be carried out sequentially and/or carried out in parallel by, for example, separate processing threads, processors, devices, discrete logic, circuits, etc. Turning toFIGS.6A-6B, execution of the example machine executable instructions600begins with the example scene recognizer252included in the example brand exposure monitor150receiving a media stream, such as the example media stream160ofFIG.1(block602ofFIG.6A). The example scene recognizer252then detects a scene included in the received media stream160by comparing image signatures created for image frames of the media stream160(block604). As discussed above, successive image frames that have substantially similar image signatures (e.g., such as substantially similar image histograms) are identified to be part of the same scene. In an example implementation, one of the substantially similar image frames will be stored as a key frame representative of the scene, and the image signature created for the key frame will serve as the detected scene's image signature. An example technique for generating the image signature at block604which uses image histograms is discussed above in connection withFIG.2. Next, the example scene recognizer252performs scene exclusion by examining the key frame of the current scene detected at block604for characteristics indicating that the scene does not include any brand identifiers (e.g., logos) (block606). For example, and as discussed above, domain knowledge specific to the type of media content expected to be processed may be used to configure the example scene recognizer252to recognize scene characteristics indicative of a scene lacking any brand identifiers that could provide brand exposure. In the context of the baseball game example, a scene characterized as primarily including a view of a blue sky (e.g., when following a pop-up fly ball), a view of the ground (e.g., when following a ground ball), or having a quickly changing field of view (e.g., such as when a camera pans to follow a base runner), etc., may be excluded at block606. If the current detected scene is excluded (block608), the example scene recognizer252invokes the report generator256to report the exclusion of the current detected scene (block610). Additionally or alternatively, the example scene recognizer252may store information describing the excluded scene as learned knowledge to be used to exclude future detected scenes and/or classify future scenes as scenes of no interest. The example scene recognizer252then examines the media stream160to determine whether the media stream160has ended (block630). If the end of the media stream160has been reached, execution of the example machine accessible instructions600ends. If the media stream160has not completed (block630), control returns to block604to allow the example scene recognizer252to detect a next scene in the media stream160. Returning to block608, if the current detected scene (also referred to as the “current scene”) is not excluded, the example scene recognizer252compares the current scene with one or more reference (e.g., previously learned and/or known) scenes stored in one or more databases (e.g., the scene database262and/or the learned knowledge database264ofFIG.2) (block612). An example technique for performing the comparison at block612is discussed above in connection withFIG.2.
For example, at block612the example scene recognizer252may compare the image signature (e.g., image histogram) for the current scene with the image signatures (e.g., image histograms) for the reference scenes. A signature match may be declared if the current scene's signature has a certain degree of similarity with a reference scene's signature as specified by one or more thresholds. Control then proceeds to block614ofFIG.6B. If the example scene recognizer252determines that the image signature of the current scene does not match any reference (e.g., previously learned and/or known) scene's signature (block614), the scene is classified as a new scene (block626). The example scene recognizer252then stops (e.g., pauses) the media stream160and passes the scene along with the scene classification information to the example GUI152to enable identification of the scene and any brand identifier(s) (e.g., logos) included in the scene (block627). Example machine readable instructions700that may be executed to perform the identification procedure at block627are illustrated inFIGS.7A-7Cand discussed in greater detail below. After any identification via the GUI152is performed at block627, the example scene recognizer252restarts the media stream160and control proceeds to block610ofFIG.6Aat which the example scene recognizer252invokes the report generator256to report brand exposure based on the identification of the current scene and/or brand identifier(s) included therein obtained at block627. Control then returns to block630to determine whether there are more scenes remaining in the media stream160. Returning to block614ofFIG.6B, if the image signature of the current scene matches an image signature corresponding to a reference (e.g., previously learned and/or known) scene, a record of stored information associated with the matched reference scene is retrieved (block616). If the matched reference scene was marked and/or was otherwise determined to be a scene of no interest (block618) (e.g., a scene known to not include brand identifiers), the current scene is classified as a scene of no interest (block619). Control proceeds to block610ofFIG.6Aat which the example scene recognizer252invokes the report generator256to report that the current scene has been classified as a scene of no interest. Control then returns to block630to determine whether there are more scenes remaining in the media stream160. However, if the reference scene was not marked or otherwise determined to be a scene of no interest (block618), one or more regions of interest are then determined for the current scene (block620). The region(s) of interest are determined based on stored region of interest information obtained at block616for the matched reference scene. The determined region(s) of interest in the scene is(are) then provided to the example brand recognizer254to enable comparison with one or more reference (e.g., previously learned and/or known) brand identifiers (block621). Example machine readable instructions800that may be executed to perform the comparison procedure at block621are illustrated inFIG.8Aand discussed in greater detail below. Based on the processing at block621performed by, for example, the example brand recognizer254, if the example scene recognizer252determines that at least one region of interest does not match any reference (previously learned and/or known) brand identifiers (block622), the scene is classified as a repeated scene of changed interest (block628). 
A region of interest in a current scene may not match any reference brand identifier(s) associated with the matched reference scene if, for example, the region of interest includes brand identifier(s) (e.g., logos) that are animated, virtual and/or changing over time, etc. The example scene recognizer252then stops (e.g., pauses) the media stream160and provides the scene, the scene classification, and the region(s) of interest information to the example GUI152to enable identification of the scene and any brand identifier(s) included in the scene (block629). Example machine readable instructions700that may be executed to perform the identification procedure at block629are illustrated inFIGS.7A-7Cand discussed in greater detail below. After any identification via the GUI152is performed at block629, the example scene recognizer252restarts the media stream160and control proceeds to block610ofFIG.6Aat which the example scene recognizer252invokes the report generator256to report brand exposure based on the identification of the current scene and/or brand identifier(s) included therein obtained at block629. Control then returns to block630to determine whether there are more scenes remaining in the media stream160. Returning to block622, if all regions of interest in the scene match reference (e.g., previously learned and/or known) brand identifiers, the example scene recognizer252classifies the scene as a repeated scene of interest (block624). The example scene recognizer252then provides the scene, the determined region(s) of interest and the detected/recognized brand identifier(s) to, for example, the example brand recognizer254to enable updating of brand identifier characteristics, and/or collection and/or calculation of brand exposure information related to the detected/recognized brand identifier(s) (block625). Example machine readable instructions850that may be executed to perform the processing at block625are illustrated inFIG.8Band discussed in greater detail below. Next, control proceeds to block610ofFIG.6Aat which the example scene recognizer252invokes the report generator256to report brand exposure based on the brand identifier(s) recognized/detected at block625. Control then returns to block630to determine whether there are more scenes remaining in the media stream160. Turning toFIGS.7A-7C, execution of the machine executable instructions700begins with the GUI152receiving a detected scene and a classification for the scene from, for example, the example scene recognizer252or via processing performed at block627and/or block629ofFIG.6B(block701). The example GUI152then displays the scene via, for example, the output device270(block702). The example GUI152then evaluates the scene classification received at block701(block704). If the scene is classified as a new scene (block706), the example GUI152then prompts the user170to indicate whether the current scene is a scene of interest (or, in other words, is not a scene of no interest) (block708). In the illustrated example, the current scene will default to be a scene of no interest unless the user indicates otherwise. For example, at block708the GUI152may prompt the user170to enter identifying information, a command, click a button, etc., to indicate whether the scene is of interest or of no interest.
Additionally or alternatively, the GUI152may automatically determine that the scene is of no interest if the user170does not begin to mark one or more regions of interest in the current scene within a predetermined interval of time after the scene is displayed. If the user170indicates that the scene is of no interest (e.g., by affirmative indication or by failing to enter any indication regarding the current scene) (block710), the detected scene is reported to be a scene of no interest (block712) and execution of the example machine accessible instructions700then ends. However, if the user170indicates that the scene is of interest (block710), the user170may input a scene title for the current scene (block714). The example GUI152then stores the scene title (along with the image signature) for the current scene in a database, (e.g., such as the learned knowledge database264) (block716). After processing at block716completes, or if the scene was not categorized as a new scene (block706), control proceeds to block718ofFIG.7B. Next, the example GUI152prompts the user170to click on a region of interest in the displayed scene (block718). Once the user170has clicked on the region of interest, the example GUI152determines at which point the user170clicked and determines a small region around the point clicked (block720). The example GUI152then calculates the region of interest and highlights the region of interest in the current scene being displayed via the output270(block722). If the user170then clicks an area inside or outside of the highlighted displayed region of interest to resize and/or reshape the region of interest (block724), the example GUI152re-calculates and displays the updated region of interest. Control returns to block724to allow the user170to continue re-sizing or re-shaping the highlighted, displayed region of interest. In another implementation, the region of interest creation technique of blocks718-726can be adapted to implement the example automated region of interest creation technique described above in connection withFIG.10. If the GUI152detects that the user170has not clicked an area inside or outside the highlighted region within a specified period of time (block724), the example GUI152then compares the region of interest created by the user170with one or more reference (e.g., previously learned and/or known) brand identifiers (block728). For example, at block728the example GUI152may provide the created region of interest and current scene's classification of, for example, a new scene or a repeated scene of changed interest to the example brand recognizer254to enable comparison with one or more reference (e.g., previously learned and/or known) brand identifiers. Additionally, if the scene is classified as a new scene, as opposed to a repeated scene of changed interest, the example brand recognizer254may relax the comparison parameters to return brand identifiers that are similar to, but that do not necessarily match, the created region of interest. Example machine readable instructions800that may be executed to perform the comparison procedure at block728are illustrated inFIG.8Aand discussed in greater detail below. Next, after the brand identifier(s) is(are) compared at block728, the example GUI152displays the closest matching reference (e.g., previously learned and/or known) brand identifier to the region of interest (block730). 
The example GUI152then prompts the user to accept the displayed brand identifier or to input a new brand identifier for the created region of interest (block732). Once the user has accepted the brand identifier displayed by the example GUI152and/or has input a new brand identifier, the example GUI152stores the description of the region of interest and the brand identifier in a database (e.g., such as the learned knowledge database264) (block734). For example, the description of the region of interest and/or brand identifier(s) contained therein may include, but is not limited to, information related to the size, shape, color, location, texture, duration of exposure, etc. Additionally or alternatively, the example GUI152may provide the information regarding the created region(s) of interest and the identified brand identifier(s) to, for example, the example brand recognizer254to enable reporting of the brand identifier(s). Example machine readable instructions850that may be executed to perform the processing at block734are illustrated inFIG.8Band discussed in greater detail below. Next, if the user170indicates that there are more regions of interest to be identified in the current scene (e.g., in response to a prompt) (block736), control returns to block718at which the GUI152prompts the user to click on a new region of interest in the scene to begin identifying any brand identifier(s) included therein. However, if the user indicates that all regions of interest have been identified, control proceeds to block737ofFIG.7Cat which a tracker function is initiated for each newly marked region of interest. As discussed above, a tracker function uses the marked region(s) of interest as a template(s) to track the corresponding region(s) of interest in the adjacent image frames comprising the current detected scene. After the processing at block737completes, the media stream160is restarted after having been stopped (e.g., paused) (block738). The example GUI152then provides the scene and region(s) of interest to, for example, the example brand recognizer254to enable updating of brand identifier characteristics, and/or collection and/or calculation of brand exposure information related to the identified brand identifier(s) (block740). Execution of the example machine accessible instructions700then ends. Turning toFIG.8A, execution of the example machine executable instructions800begins with a brand recognizer, such as the example brand recognizer254, receiving a scene, the scene's classification and one or more regions of interest from, for example, the example scene recognizer252, the example GUI152, the processing at block621ofFIG.6B, and/or the processing at block728ofFIG.7C(block801). The example brand recognizer254then obtains the next region of interest to be analyzed in the current scene from the information received at block801(block802). The example brand recognizer254then compares the region of interest to one or more expected reference brand identifier templates (e.g., corresponding to a reference scene matching the current scene) having, for example, one or more expected locations, sizes, orientations, etc., to determine which reference brand identifier matches the region of interest (block804). An example brand identifier matching technique based on template matching that may be used to implement the processing at block804is discussed above in connection withFIG.4. 
Additionally, if the scene classification received at block801indicates that the scene is a new scene, the comparison parameters of the brand identifier matching technique employed at block804may be relaxed to return brand identifiers that are similar to, but that do not necessarily match, the compared region of interest. Next, the example brand recognizer254returns the reference brand identifier matching (or which closely matches) the region of interest being examined (block806). Then, if any region of interest has not been analyzed for brand exposure reporting (block808), control returns to block802to process the next region of interest. If, however, all regions of interest have been analyzed (block808), execution of the example machine accessible instructions800then ends. Turning toFIG.8B, execution of the example machine executable instructions850begins with a brand recognizer, such as the example brand recognizer254, receiving information regarding one or more regions of interest and one or more respective brand identifiers detected therein from, for example, the example scene recognizer252, the example GUI152, the processing at block625ofFIG.6B, and/or the processing at block734ofFIG.7C(block852). The example brand recognizer254then obtains the next detected brand identifier to be processed from the one or more detected brand identifiers received at block852(block854). Next, one or more databases (e.g., the learned knowledge database264FIG.2, the brand library266ofFIG.2, etc.) are queried for information regarding the detected brand identifier (block856). The brand identifier data may include, but is not limited to, internal identifiers, names of entities (e.g., corporations, individuals, etc.) owning the brands associated with the brand identifiers, brand names, product names, service names, etc. Next, characteristics of a brand identifier detected in the region of interest in the scene are obtained from the information received at block852(block858). Next, the example brand recognizer254obtains the characteristics of the reference brand identifier corresponding to the detected brand identifier and compares the detected brand identifier's characteristics with the reference brand identifier's characteristics (block860). The characteristics of the brand identifier may include, but are not limited to, location, size, texture, color, quality, duration of exposure, etc. The comparison at block860allows the example brand recognizer254to detect and/or report changes in the characteristics of brand identifiers over time. After the processing at block860completes, the identification information retrieved at block856for the detected brand identifier, the detected brand identifier's characteristics determined at block858and/or the changes in the brand identifier detected at block860are stored in one or more databases (e.g., such as the brand exposure database155ofFIG.1) for reporting and/or further analysis (block812). Then, if any region of interest has not yet been analyzed for brand exposure reporting (block814), control returns to block854to process the next region of interest. If all regions of interest have been analyzed, execution of the example machine accessible instructions850then ends. FIG.9is a schematic diagram of an example processor platform900capable of implementing the apparatus and methods disclosed herein. 
The example processor platform900can be, for example, a server, a personal computer, a personal digital assistant (PDA), an Internet appliance, a DVD player, a CD player, a digital video recorder, a personal video recorder, a set top box, or any other type of computing device. The processor platform900of the example ofFIG.9includes at least one general purpose programmable processor905. The processor905executes coded instructions910and/or912present in main memory of the processor905(e.g., within a RAM915and/or a ROM920). The processor905may be any type of processing unit, such as a processor core, a processor and/or a microcontroller. The processor905may execute, among other things, the example machine accessible instructions ofFIGS.6A-6B,7A-7C, and/or8A-8B to implement any, all or at least portions of the example brand exposure monitor150, the example GUI152, the example scene recognizer252, the example brand recognizer254, etc. The processor905is in communication with the main memory (including a ROM920and/or the RAM915) via a bus925. The RAM915may be implemented by DRAM, SDRAM, and/or any other type of RAM device, and the ROM920may be implemented by flash memory and/or any other desired type of memory device. Access to the memory915and920may be controlled by a memory controller (not shown). The RAM915and/or any other storage device(s) included in the example processor platform900may be used to store and/or implement, for example, the example brand exposure database155, the example scene database262, the example learned knowledge database264and/or the example brand library266. The processor platform900also includes an interface circuit930. The interface circuit930may be implemented by any type of interface standard, such as a USB interface, a Bluetooth interface, an external memory interface, serial port, general purpose input/output, etc. One or more input devices935and one or more output devices940are connected to the interface circuit930. For example, the interface circuit930may be coupled to an appropriate input device935to receive the example media stream160. Additionally or alternatively, the interface circuit930may be coupled to an appropriate output device940to implement the output device270and/or the GUI152. The processor platform900also includes one or more mass storage devices945for storing software and data. Examples of such mass storage devices945include floppy disk drives, hard disk drives, compact disk drives and digital versatile disk (DVD) drives. The mass storage device945may implement, for example, the example brand exposure database155, the example scene database262, the example learned knowledge database264and/or the example brand library266. As an alternative to implementing the methods and/or apparatus described herein in a system such as the device ofFIG.9, the methods and/or apparatus described herein may be embedded in a structure such as a processor and/or an ASIC (application specific integrated circuit). Finally, although certain example methods, apparatus and articles of manufacture have been described herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the appended claims either literally or under the doctrine of equivalents.
11861904 | DETAILED DESCRIPTION A system is presented below that records the activity of an individual, such as a job candidate, within a booth in order to create a customized presentation version of that recording and present that version to a user computer system. A hiring manager, a potential future co-worker, and a potential future supervisor of the job candidate are examples of users who might view this customized presentation during the hiring process. The customized presentation is designed to efficiently showcase the relevant skills and qualities of the individual, showing the individual in the best possible light to the user. While the example context of employment evaluation has been mentioned so far, the system described herein is valuable in many other contexts. During the recorded video session, the individual responds to prompts, such as interview questions. The content of the recording is then segmented and analyzed according to these prompts. The prompts and the individual's performance in the segments of the recording determine which segments illustrate specific skills, and which segments show positive qualities of the individual, such as empathy, engagement, or technical competence. The user's own preferences can be analyzed and used to design the customized presentation. In addition, the preferences of other users, such as other users in the same role as the user, can be used to modify the customized presentation. As a result, the user is able to see the information most relevant to the user's objective, such as a job listing, early in the customized presentation. Information that is not relevant to the user is eliminated automatically from the customized presentation. The individual is shown to the user in the best possible light, with the skills important to that user or type of user emphasized in the customized presentation. System10 FIG.1shows a system10that records the activity of an individual20within a booth100in order to create a customized presentation version30of that recording and present that version30to a user computer system160. Data recorded from the booth100is managed in part by a system server170and stored in data store180. In the preferred embodiment, the system server170allows multiple user computer systems160to access data180from multiple booths100, although the following discussion will primarily focus on the interaction between a single booth100and a single user computer system160. The booth or kiosk100is preferably an enclosed room that records high-quality audio and visual data of the individual20. The kiosk100houses multiple visual cameras110, including a first camera112, a second camera114, and a third camera116. Each of the cameras110is capable of recording visual data of the individual20from different angles. For example, the first camera112might record the individual20from the left side, while the second camera114records the individual20from the center and the third camera116records the individual20from the right side. The cameras110can be digital visual cameras that record data in the visible spectrum using, for example, a CCD or CMOS image sensor. The kiosk100also houses at least one microphone120for recording audio. InFIG.1, two microphones122,124are shown in the kiosk100. In some embodiments, the microphones120are embedded into and form part of the same physical component as a camera110. In other embodiments, one or more of the microphones120are separate components that can be mounted apart from the cameras110within the kiosk100.
The sound data from the microphones120can be combined with the visual data from the cameras110into video (audio plus visual) material for later review of a user's recording session within the booth. The sound recorded by the microphones120can also be used for behavioral analysis of the individual20. Speech recorded by the microphones120can be analyzed to extract behavioral data, such as vocal pitch and vocal tone, speech cadence, word patterns, word frequencies, total time spent speaking, and other information conveyed in the speaker's voice and speech. Additionally, the audio can be analyzed using speech to text technology, and the words chosen by the individual20while speaking can be analyzed for word choice, word frequency, etc. The kiosk100also incorporates one or more depth sensors130that can detect changes in the position of the individual20. Only one depth sensor130is shown inFIG.1, but the preferred embodiment will utilize multiple depth sensors130aimed at different portions of the individual20. The depth sensors130utilize one or more known technologies that are able to record the position and movement of the individual20, such as stereoscopic optical depth sensor technology, infrared sensors, laser sensors, or even LIDAR sensors. These sensors130generate information about the facial expression, body movement, body posture, and hand gestures of individual20. As explained in the incorporated patent application Ser. No. 16/366,703, depth sensors130can also be referred to as behavioral sensors, and data from these sensors130can be combined with information obtained from visual cameras110and microphones120to provide detailed behavioral data concerning the individual20. This information can then be used to extrapolate information about the individual's emotional state during their recorded session in the booth100, such as whether the individual20was calm or nervous, or whether the individual20was speaking passionately about a particular subject. The kiosk100also includes one or more user interfaces140. User interface140is shown inFIG.1as a display screen that can display content and images to the individual20. Alternatively, the user interface140can take the form of a touchscreen that both displays content and accepts inputs from the user20. In still further embodiments, the user interface140can incorporate multiple devices that can provide data to, or accept input from, the individual20, including a screen, a touchscreen, an audio speaker, a keyboard, a mouse, a trackpad, or a smart whiteboard. In the embodiments described herein and in the incorporated documents, the user interface140is capable of providing prompts to individual20, such as to answer a question. The interface140can also show a recorded or live video to the individual20, or it can prompt the individual20to demonstrate a skill or talent. A computer102at the booth is able to capture visual data of the individual20from the cameras110, capture audio data of the individual20from the microphones120, and capture behavioral data input from the depth sensors130. This data is all synchronized or aligned. This means, for example, that audio information recorded by all of the microphones120will be synchronized with the visual information recorded by all of the cameras110and the behavioral data taken from the sensors130, so that all the data taken at the same time can be identified and compared for the same time segment. 
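A minimal sketch of what this time alignment could look like is given below. It assumes each recording device produces samples at a fixed, known rate and that a shared session clock starts at zero; the actual booth100 hardware may instead timestamp every sample directly, so the stream names, rates, and indexing scheme here are illustrative assumptions only.

```python
# Minimal sketch of aligning booth recordings by time so that audio, visual,
# and depth-sensor samples taken at the same instant can be compared.
# Fixed per-stream sample rates and frame indexing are assumptions.
def frame_index(t_seconds: float, rate_hz: float) -> int:
    """Index of the sample captured at (or just before) time t for a stream."""
    return int(t_seconds * rate_hz)

def samples_at(t_seconds: float, streams: dict) -> dict:
    """Return the sample from every recorded stream at session time t.

    `streams` maps a device name to (rate_hz, list_of_samples)."""
    out = {}
    for name, (rate_hz, samples) in streams.items():
        idx = min(frame_index(t_seconds, rate_hz), len(samples) - 1)
        out[name] = samples[idx]
    return out

# Example: 30 fps camera, 100 Hz depth sensor, 20 ms audio chunks (50 Hz).
streams = {
    "camera-1": (30.0, ["frame%d" % i for i in range(3000)]),
    "depth-1":  (100.0, [{"posture": "upright"}] * 10000),
    "mic-1":    (50.0, ["audio_chunk%d" % i for i in range(5000)]),
}
print(samples_at(12.5, streams)["camera-1"])   # frame375 -> the frame at 12.5 s
```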
As explained in the incorporated provisional application, the computer102can actually be implemented using a plurality of separate, individual computers located at the booth100. In the context of the present application, the computer102will nonetheless be referred to as a single device for ease in explaining the present invention. The computer102is a computing device that includes a processor for processing computer programming instructions. In most cases, the processor is a CPU, such as the CPU devices created by Intel Corporation (Santa Clara, CA), Advanced Micro Devices, Inc. (Santa Clara, CA), or a RISC processor produced according to the designs of Arm Holdings PLC (Cambridge, England). Furthermore, the computer102has memory, which generally takes the form of both temporary, random access memory (RAM) and more permanent storage such as magnetic disk storage, FLASH memory, or another non-transitory (also referred to as permanent) storage medium. The memory and storage (referred to collectively as “memory”) contain both programming instructions and data. In practice, both programming and data will be stored permanently on non-transitory storage devices and transferred into RAM when needed for processing or analysis. In some embodiments, the computer102may include a graphics processing unit (or GPU) for enhanced processing of visual inputs and outputs, or an audio processing board, a single chip audio processor, or a digital signal processor (or DSP) that accelerates the processing of audio inputs and outputs. The computer102is tasked with receiving the raw visual data from the cameras110, the raw audio data from the microphones120, and the raw sensor data from the behavioral depth sensors130. The computer102is also tasked with making sure that this data is safely stored. The data can be stored locally, or it can be stored remotely. InFIG.1, the data is stored in data store (also referred to as data or database)180. This database180includes defined database entities that may constitute database tables in a relational database. In other embodiments, these entities constitute database objects or any other type of database entity usable with a computerized database. In the present embodiment, the phrase database entity refers to data records in a database whether comprising a row in a database table, an instantiation of a database object, or any other populated database entity. Data within this database180can be “associated” with other data. This association can be implemented using a variety of techniques depending on the technology used to store and manage the database, such as through formal relationships in a relational database or through established relationships between objects in an object-oriented database. In some of the figures, relationships between database entities are shown using crow's foot lines to indicate the types of relationships between these entities. Although this database180is shown as being connected to the booth100over network150, this data180can be stored locally to the booth100and computer102. To save storage space, audio and video compression formats can be utilized when storing data180. These can include, but are not limited to, H.264, AVC, MPEG-4 Video, MP3, AAC, ALAC, and Windows Media Audio. Note that many of the video formats encode both visual and audio data. To the extent the microphones120are integrated into the cameras110, the received audio and video data from a single integrated device can be stored as a single file.
However, in some embodiments, audio data is stored separately from the video data. Nonetheless,FIG.1shows audio, visual, and sensor data being combined as a single recorded data element182in data store180. The computer102is generally responsible for coordinating the various elements of the booth100. For instance, the computer102is able to provide visual instructions or prompts to an individual20through one or more displays140that are visible to the individual20when using the booth100. Furthermore, audio instructions can be provided to the individual20either through speakers (not shown) integrated into the booth100or through earpieces or headphones (also not shown) worn by the individual20. In addition, the computer102is responsible for receiving input data from the user, such as through a touchpad integrated into interface140. The system10shown inFIG.1also includes a user computer system160and a system server170. These elements160,170are also computer systems, so they may take the same form as computer102described above. More particularly, these computing systems160,170will each include a processor162,172, memory and/or storage164,174, and a network interface168,178to allow communications over network150. The memory164,174is shown inFIG.1as containing computer programming166,176that controls the processor162,172. InFIG.1, the system server170is represented as a single computing device. Nonetheless, it is expected that some embodiments will implement the system server170over multiple computing devices all operating together through a common program as a single system server170. In some embodiments, the user computer system160takes the form of a mobile device such as a smartphone or tablet computer. If the user computer160is a standard computer system, it will operate custom application software or browser software166that allows it to communicate over the network150as part of the system10. In particular, the programming166will at least allow communication with the system server170over the network150. In other embodiments, the system10will be designed to allow direct communication between the user computer system160and the booth's computer102, or even between the user computer system160and data180. If the user computer160is a mobile device, it will operate either a custom app or a browser app166that achieves the same communication over network150. This application or app166allows a user using the user computer system160to view the audio and visual data182recorded by the booth100of the individual20. This app166receives this recorded data182in the form of a presentation30that is presented on a user interface of the user computer system160, such as a screen or monitor and speakers. In the preferred embodiment, the particular presentation30of the recorded data182that is presented to the user computer system160is customized to make the presentation30as interesting and as relevant as possible for that particular user. That customization can be performed either by the system server170or the booth computer102, and is performed in the manner described below. In the preferred embodiment, the user computer system160requests a presentation30relating to an individual20through the system server170. The user computer system160must identify an individual20for whom the booth100has recorded data182. This can be accomplished by directly identifying the individual20, or through a search or other data analysis techniques. Information about the individual20is stored in the data store180as individual data184.
As shown inFIG.2, this individual data184can contain personally identifying information200, such as the name of and the contact information for the individual20, as well as background data210. Note that even thoughFIG.1shows that both audio, visual and sensor data182and individual data184are found in the same data store180, there is no need for all the data180to be physically or logically stored in a single structure.FIG.1merely schematically groups data180into a single element for ease in understanding the invention. Nonetheless, relationships between certain types of data180will drive the systems and methods described below, so it will be necessary to analyze and compare much of the data180shown with other data180. System10can be used in a variety of contexts to create and present custom presentations30. In one context, the individual20can take the form of a candidate for an employment position. The user of the user computer system160can be a representative of a potential employer that is looking to fill a job opening. In this context, the background information210for the individual20may include resume type information, such as prior work experience, type of employment being sought, education background, professional training, awards and certifications, etc. Thus, the user computer system160may request that the system server170identify data182and create presentations for individuals20that are interested in employment opportunities at the user's workplace. In a second context, the individual might be a musician looking for performance opportunities, and the individual data184might include the instruments the musician plays, the type of music performed, available booking dates, and expected compensation. In one embodiment, system server170collects data182from booth100and stores it in a central database (or other type of data store)180. The system server170is preferably in communication with a plurality of other kiosks (not shown) and may aggregate all their data in a single database180. In other embodiments, data from separate booths remains separated into separate data stores180that are located at each booth100but remain accessible to the system server170. Database180also contains criteria data186. Criteria data186constitutes information that is of interest to the user of system160and is relevant to the data182acquired by the booth100. In the context of an employment search, the criteria data186may contain data describing a job opening and the skills and experience requirements for the job. This is described in more detail below. Database180also includes information or data188about the user of user computer system160(and other user computer systems160that connect to system10but are not shown inFIG.1). This information188can be used to help customize the presentation30that is presented to user computer system160. Finally, the database180maintains historical information190about previous criteria data186(such as data about previous job openings) and previous actions by users of user computer systems160. It is important that the system10secure the confidentiality of, and restrict access to, the data in its database180. To accomplish this, no user computer160may access any of the data180unless the user computer160is fully authenticated and authorized. In one embodiment, user authentication and authorization is performed by the system server170. In addition, data180is also secured against physical tampering or access.
Encryption can also be used on data180when at rest, meaning that even if physical access is obtained to the data, all data relating to any specific individual20or user remains secure. It is also important that every individual20who records a session at the booth100be fully aware of how their recorded data182will be used, stored, processed, and shared. This is typically accomplished through informational and contractual content that is provided to the individual20. In this way, the system10will handle the data of the individual20only in a manner consistent with that agreed to by the individual20. Customized Versions of Presentation30 The user of user computer system160is interested in viewing presentations30containing the audio and visual material recorded by individuals20in booth100. This system10is useful in a variety of contexts in which users wish to view automatically customized presentations of the recordings made by one or more individuals20. In the context of individuals20interviewing for potential jobs, for example, the user may be an employer looking to find job candidates. In another employment context, an employer may want to record and automatically edit a video of an employee of the month. Different viewers (users) within the employer organization will be interested in different aspects of the outstanding employee's (individual's) background and accomplishments. In the context of musicians20looking for work, the user may work at a venue looking to book an act, or be an agent looking for new clients, or a band manager looking for new band members. In the context of students applying for scholarships, the users may be different members of a scholarship committee that are each interested in different aspects of the student's history, personality, or skills. In the context of a non-profit requesting a grant, the users may be directors of a community foundation. And finally, in the context of a politician looking to win an election, the users could be voters, each of whom are interested in different aspects of politician's personalities and positions. If we were to consider an employer looking to find job candidates, it is likely that multiple users at a single employer would be asked to review a plurality of job candidates. Each of these users is interested in seeing the audio and visual data of the candidates20recorded at booth100. However, these users may not be interested in the same aspects of the recorded sessions. This is especially true when a large number of job candidates20must be reviewed, and users do not have the time to review the entire recording session of all potential candidates20. In one example, an employment manager user may be interested in general questions about how a candidate deals with adversity, handles problems in the workplace, and responds to questions about relocation. The candidate's future boss, however, may be most interested in whether the candidates have the technical ability to handle the requirements of the job. In contrast, future colleagues and co-workers may have a lesser interest in the individual's technical background but would like to see portions of the recorded sessions that are most likely to reveal the personality of the candidate. System10is able to meet these differing needs of the users by generating different “stories” or “versions”30of the recorded sessions appropriate for each user. 
As explained in detail below, each version or story30is created for a user based on the user's role in the process, and perhaps upon their past use of the system10. Although preference parameters can be selected by the user to force a particular configuration of the version30to be presented to that user, the embodiments presented below are designed to automatically generate a customized version30of the recorded session without manual selection of detailed preference parameters. The system10and methods described below can be applied to the recorded data182of a variety of individuals20. In the context of a job interview, a user of user computer system160would desire to see recorded sessions for multiple individuals20. The method for creating custom presentations30for that user can be applied to the recorded sessions of each individual20. Using this system10, the user will automatically only see a portion of each candidate's session, and the portion chosen will be consistent across individuals20according to methods described below. An employment manager, for instance, would be shown each individual's answers to prompts on how the individual20deals with adversity, handles problems in the workplace, and responds to questions about relocation. A different user, such as a co-worker, would see answers to prompts that are written to elicit information that reveals the personality of the candidate. Audio, Visual, and Sensor Data182 FIG.3shows details about the recorded data182that is maintained about a recorded session of the individual20at the booth100. Part of this information is the content304and timing306of prompts300, which are the instructions that were provided to the individual20during the recording. These instructions can take a variety of forms depending on the type of session that is being recorded for the individual20. In one embodiment, the booth100is being used to conduct an automated employment interview with the individual20. To begin the interview, the individual20sits on a seat in front of the cameras110, microphones120, and depth sensors130. The height and position of each of the cameras110and the depth sensors130may be adjusted to optimally capture the recorded data182. Prompts (usually instructions or questions) are provided to the individual20, either audibly or through the visual interface screen140(or both). For example, in the context of an automated interview for a medical job, the prompts might include:
1) Why did you choose to work in your healthcare role?
2) What are three words that others would use to describe your work and why?
3) How do you try to create a great experience for your patients?
4) How do you handle stressful work situations?
5) What would it take for you to move somewhere else for a job (salary, location, organization, etc.)?
6) What is your dream job in healthcare?
7) Tell us about a time you used a specific clinical skill in an urgent situation (describe the situation and skill).
8) What area of medicine do you consider yourself a specialist in? Why?
The prompt data300contains information about the content304of each prompt given during the recorded session of the user20. In addition, the prompt data300contains information306about the timing of these prompts. The timing of these prompts306can be tracked in a variety of ways, such as in the form of minutes and seconds from the beginning of a recorded session. For instance, a first prompt may have been given to the individual20at a time of 1 minute, 15 seconds (1:15) from the beginning of the recorded session.
Note that this time may represent the time at which the first prompt was initiated (when the screen showing the prompt was first shown to the user20or when the audio containing the prompt began). Alternatively, this time may represent the time at which the prompt was finished (when the audio finished) or when the user first began to respond to the prompt. A second prompt may have been given to the individual20at a time of 4 minutes, 2 seconds (4:02), the third prompt at 6 minutes, 48 seconds (6:48), etc. The time between prompts can be considered the prompt segment306. The prompt segment306may constitute the time from the beginning of one prompt to the beginning of the next prompt, or from a time that a first prompt was finished to a time just before the beginning of a second prompt. This allows some embodiments to define the prompt segments306to include the time during which the prompt was given to the individual20, while other embodiments define the prompt segments306to include only the time during which the individual responded to the prompt. Regardless of these details, the prompt data300contains the timing information necessary to define prompt segments306for each prompt300. Prompt data300in data store180includes the text or audio of the instructions provided to the individual (or an identifier that uniquely identifies that content) and the timing information needed to define the prompt segments306. Note that, even in situations where multiple individuals20receive the exact same prompts during their recorded sessions in the booth100, the timing information for the prompt segments306will differ from individual to individual based upon the time taken to respond to each prompt. This means that the first prompt segment (the prompt segment for the first prompt) for a first individual20may run from location 1:15 (1 minute, 15 seconds from the beginning of the recorded session) to 3:45, while the first prompt segment for a second individual20may run from 1:53 to 7:12. Thus, the prompt data300is specific to each individual20, and will therefore be associated with individual data184. Further note that in the remainder of this disclosure, figure number300will be used to refer to both the overall prompt data300as well as the textual or audio prompts300given to the user and stored in the prompt data300. In some contexts, prompts300can be broad, general questions or instructions that are always provided to all individuals20(or at least to all individuals20that are using the booth100for the same purpose, such as applying for a particular job or type of job). In other contexts, the computer102can analyze the individual's response to the prompts300to determine whether additional or follow up prompts300should be given to the individual20on the same topic. For instance, an individual looking for employment may indicate in response to a prompt that they have experience with data storage architecture. This particular technology may be of interest to current users that are looking to hire employees in this area. However, one or more of these potential employers want to hire employees only with expertise in a particular sub-specialty relating to data storage architecture. The computer102may analyze the response of the individual20in real time using speech-to-text technology and then determine that additional prompts300on this same topic are required in order to learn more about the technical expertise of the individual20. These related prompts300can be considered “sub-prompts” of the original inquiry. 
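The following sketch illustrates how prompt segments306 might be derived from the recorded prompt timings, using the example timings mentioned above. It assumes, as one of the options described above, that a segment runs from the moment a prompt is presented until the next prompt begins; the function name, data layout, and the assumed ten-minute session length are illustrative assumptions rather than the patented implementation.

```python
# Sketch of deriving prompt segments (306) from recorded prompt timings.
# A segment is assumed here to run from the moment one prompt is presented
# until the next prompt begins (the disclosure notes other boundaries are
# possible, e.g. starting only after the prompt audio finishes).
def prompt_segments(prompt_times: list, session_end: float) -> list:
    """prompt_times: (prompt_id, start_seconds) pairs in presentation order."""
    segments = []
    for i, (prompt_id, start) in enumerate(prompt_times):
        end = prompt_times[i + 1][1] if i + 1 < len(prompt_times) else session_end
        segments.append({"prompt": prompt_id, "start": start, "end": end})
    return segments

# Example using the timings given above: prompts at 1:15, 4:02, and 6:48
# of an assumed 10-minute session.
times = [("prompt-1", 75.0), ("prompt-2", 242.0), ("prompt-3", 408.0)]
for seg in prompt_segments(times, session_end=600.0):
    print(seg)
# {'prompt': 'prompt-1', 'start': 75.0, 'end': 242.0} ... and so on
```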
InFIG.3, it is seen that prompts300can be related to other prompts300by being connected through sub-prompt links302. These sub-prompts can be treated like any other prompt300by system10, but their status as sub-prompts can help with the analysis of the resulting recordings. Although sub-prompts are like all other prompts300, this disclosure will sometimes refer to sub-prompts using figure number302. The individual20typically provides oral answers or responses to the prompts300, but in some circumstances the individual20will be prompted to do a physical activity, or to perform a skill. In any event, the individual's response to the prompt300will be recorded by the booth using cameras110, microphones120, and depth data sensors130. The booth computer102is responsible for providing prompts300and therefore can easily ascertain the timing at which each prompt300is presented. This timing information can be used in order to divide the entire session recorded by the cameras110, microphones120, and sensors130into separate segments. The temporal definition of each segment is stored in time segment data310. In other words, the time during which the individual20responds to each question becomes the various time segments310, with each prompt300serving as a dividing point between time segments310. Put another way, each time segment310can correspond to a single prompt segment. If twelve prompts300are presented to the individual20, a separate time segment310will be defined for each response resulting in twelve separate segments310being identified, one for each of the twelve prompt segments. Time segments310do not need to correspond exactly to prompt segments306. For example, an additional time segment310can be associated with the time before the first question and after the last question is fully answered, which would create fourteen time segments310for a session that contains only twelve separate prompt segments306. Although determining when certain instructions300are provided to the individual20is one of the best ways to divide up the time segment data310, it is not the only way. The incorporated Ser. No. 16/366,746 patent application (the “'746 application”), for example, describes other techniques for defining time segments310. That application describes these techniques as searching for “switch-initiating events” that can be detected in the content of data recorded at the booth100by the microphones120, cameras110, and sensors130. While the Ser. No. 16/366,746 patent application primarily defines switch-initiating events in the context of switching cameras, these events are equally useful for dividing the time segment data310into different time segments310. As explained in the '746 application, behavioral data analysis can be performed on this data to identify appropriate places to define a new time segment310. For example, facial recognition data, gesture recognition data, posture recognition data, and speech-to-text can be monitored to look for switch-initiating events. If the candidate turns away from one of the cameras110to face a different camera110, the system10can detect that motion and note it as a switch-initiating event which will be used to create a new time segment310. Hand gestures or changes in posture can also be used to trigger the system10to cut from one camera angle to a different camera angle, which also constitutes a dividing event between time segments310. Another type of switch-initiating event can be the passage of a particular length of time.
A timer can be set for a number of seconds that is the maximum desirable amount of time for a single segment, such as 90 seconds. Conversely, a minimum time period (such as 5 seconds) may also be established to prevent each segment310from being too short. In one embodiment, the prompt segments306defined according to when prompts300are provided to the individual20are used to create a first set of time segments310. Switching events detected within each prompt segment can then be used to split one time segment310into multiple time segments310. For example, the '746 application explains that the identification of a low-noise event can be considered a switch-initiating event. If an average decibel level over a particular range of time (such as 4 seconds) is below a threshold level (such as 30 decibels), this will be considered a low noise audio segment that can be used to subdivide time segments310. In the context of an interview, one time segment310can originally be defined to cover an entire prompt segment (such as the entire response the individual20provided to a first prompt300). If a low-noise event is identified within that response, that time segment310is split into two different time segments: one before the low-noise event and one after the low-noise event. Furthermore, the '746 application describes the option to remove extended low volume segments or pauses from an audiovisual presentation altogether. If a time segment310were divided into two using this technique, the first of these new time segments310would be the time before the beginning of the low noise event, and the second time segment would be the time after the low-volume segment or pause is completed, thereby removing the low volume segment from any of the defined time segments310. This would, of course, lead to two (or more) time segments310being associated with the prompt segment for a single prompt300. This can be seen inFIG.3by the use of crow's foot notation showing links between separate data elements. This same notation shows that, at least for the embodiment shown inFIG.3, each time segment corresponds to a single audio segment for each microphone120and a single visual segment for each camera110. As shown inFIG.3, each time segment310is also associated with preference or metadata320, audio data330,332, visual data340,342,344, and sensor data350. The two audio segments330,332represent the audio data recorded by each of the two microphones122,124, respectively, during that time segment310. Similarly, three visual segments340,342,344comprise the visual data recorded by each of the three cameras112,114,116, respectively, during that segment310. The sensor segment350constitutes the behavioral and position data recorded by the sensor130. Obviously, the number of separate audio segments330,332, visual segments340,342,344, and sensor segments350depends upon the actual numbers of microphones120, cameras110, and sensors130that are being used to record the individual20in the booth, as there is a one-to-one relationship between the separate data elements (330-350) for a time segment310and the recording devices110,120,130recording the individual20. The preference data320for a time segment310can contain different types of data for that segment310. One benefit of having multiple cameras110and multiple microphones120in booth100is that it is possible to utilize visual material from different cameras110and audio material from different microphones120and arrange them automatically into a single audiovisual presentation.
The resulting presentation cuts between different camera angles to create a visually interesting presentation30. In addition, the presentation30can use the recording from whichever microphone best captured the audio for a particular time segment310. Because all the data from the recording devices110,120,130are time synchronized, the creation of time segments310will automatically lead to the parallel segmentation of the audio data into segments330,332and the visual data into segments340,342,344using the same timing information. While each time segment310is associated with segments320-350from all of the recording devices110,120,130, the system10can select the preferred audio and visual data source for that segment310and store that preference in preference data320. Thus, the preference data320for a first segment310might indicate the best audio is the audio-2segment332from microphone-2124and that the best visual information is visual-3segment344from camera-3116. In the very next time segment310, the preference data320may indicate the preferred audio is the audio-1segment330from microphone-1122and the best visual information is visual-1segment340from camera-1112. The preferred audio and visual data for the segment can together be considered the video (audio data plus visual data) or preferred video for that segment. In one embodiment, the video for a time segment310is defined merely by the preference data320. Even after the selection of the preferred audio and visual data, all audio segments330,332and all visual segments340,342,344are stored and retained in recorded data182. One benefit of this approach is that alternative techniques can be provided after the fact to alter the selection of the preferred audio or visual segment. In another embodiment, once the preferred audio and visual data are selected, only that data is stored permanently in recorded data182as the video data for that time segment310. One benefit of this approach is that it significantly reduces the amount of data stored in recorded data182. Selecting either microphone-1122or microphone-2124to be the preferred audio source for a particular time segment310likely requires an analysis of the sound quality recorded in each segment330,332. In some instances, the highest quality audio segment330,332might be the one with the highest volume, or least amount of noise (the best signal-to-noise ratio as determined through estimation algorithms). In instances where microphones120are embedded into cameras110, or where each microphone120is located physically close to a single camera110, the preferred audio source can be the microphone120associated with the camera110that took the preferred visual data. Selecting the best visual presentation for a time segment310can be more difficult but can still be automatically determined. For example, the data captured from the multiple cameras110can be analyzed to determine whether a particular event of interest takes place. The system10may, for instance, use facial recognition to determine which camera110the individual20is facing at a particular time and then select the visual segment340,342,344for that camera. In another example, the system10may use gesture recognition to determine that the individual20is using their hands when talking. In this circumstance, the system10might then select the visual segment340,342,344that best captures the hand gestures.
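As one hedged illustration of the microphone selection just described, the sketch below estimates a signal-to-noise ratio for each microphone's recording of a time segment and picks the best one. The disclosure only says the choice may favor the loudest recording or the best estimated signal-to-noise ratio; the specific estimator here (comparing speech power against a presumed-quiet lead-in) and the synthetic example data are assumptions.

```python
# Hedged sketch of picking the preferred microphone for a time segment.
# The SNR estimator below, which assumes a short quiet lead-in at the start
# of the segment, is an illustrative assumption, not the patented method.
import numpy as np

def estimate_snr_db(samples: np.ndarray, silence_len: int = 1600) -> float:
    """Crude SNR estimate, assuming the first `silence_len` samples are
    room noise recorded before the individual starts speaking."""
    noise_power = np.mean(samples[:silence_len].astype(float) ** 2)
    speech_power = np.mean(samples[silence_len:].astype(float) ** 2)
    return 10.0 * np.log10(speech_power / noise_power) if noise_power > 0 else float("inf")

def preferred_microphone(segment_audio: dict) -> str:
    """segment_audio maps a microphone id to its samples for one time segment."""
    return max(segment_audio, key=lambda mic: estimate_snr_db(segment_audio[mic]))

# Example: both microphones hear the same speech, but microphone-2 picks up
# less room noise, so it is selected as the preferred source for this segment.
rng = np.random.default_rng(0)
speech = np.concatenate([np.zeros(1600), np.sin(np.linspace(0, 200 * np.pi, 14400))])
segment = {
    "microphone-1": speech + rng.normal(0, 0.5, 16000),
    "microphone-2": speech + rng.normal(0, 0.05, 16000),
}
print(preferred_microphone(segment))   # expected: microphone-2
```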
If the individual20consistently pivots to the left while gesturing, a profile shot from the right camera might be subjectively better than a left-camera feed that minimizes the candidate's energy. In other situations, a single prompt segment has been divided into separate time segments310, and the selection of the visual segment340,342,344is based simply on ensuring a camera change when compared to the selected source camera110for the previous time segment310. Either the computer(s)102at the booth100or the system server170can be responsible for analyzing the data and selecting these preferences320on behalf of the system10. By linking each time segment310in data182with the prompts300, the relationship between prompts300and time segments310can be maintained even if a time segment310originally associated with an entire prompt300has been split into multiple segments310. Thus, if a presentation is to show all of an individual's answer to a particular prompt300(such as question number seven in an interview), all time segments310for that prompt segment can be immediately identified and presented. During that presentation, the microphone120that recorded the audio and the camera110recording the visual data can vary and switch from one time segment310to another310within the answer based on the source choices stored in the preference data320. In incorporated patent application Ser. No. 16/366,703 (the “'703 application”), a system and method for rating interview videos is presented. For example, this '703 application teaches the creation of an empathy score by examining visual, audio, and depth sensor data of an interview candidate during a recording session at a booth. This incorporated application also describes the creation of a combined score that incorporates the empathy score with an evaluative score based on considering an applicant's attention to detail and career engagement. Regardless of the technique used, it is possible to create an overall evaluation score for various individuals that have recorded sessions at booth100. The same techniques described in the '703 application can be utilized to create an empathy score for a time segment310of video. When this is performed, the resulting score360is stored within or in connection with the preference data320. InFIG.3, three different scores360, namely value 1 score362, value 2 score364, and value 3 score366, are shown as being recorded for each segment310. Example scores360may relate to various attributes that can be analyzed and rated using the data collected by the booth100. For example, the scores362,364,366may be for empathy, engagement, or technical competence. Other possible scores include confidence, sincerity, assertiveness, comfort, or any other type of personality score that could be identified using known techniques based on an analysis of visual data, audio data, and depth sensor/movement data. In the embodiment shown inFIG.3, a separate score360is determined for each time segment310. In other embodiments, all time segments310for a single prompt300can be analyzed and scored together as a single unit. In the '703 application, the sensor data for a segment is integral for determining the empathy score. The depth sensor, such as sensor130, records the body movements, posture, hand movements, leg movements, and sometimes even facial features and reactions of an individual20during a recording session.
Data350from this depth sensor130reveals a great deal of information about the behavior of the individual20, and therefore is frequently used for creating scores360for a time segment310. Thus, dotted line374is included inFIG.3to show that the sensor segment data350is one of the criteria used to create the scores360. This data350can also be combined with other data, such as visual segment data340,342,344or audio segment data330,332, in order to develop a deeper understanding of the individual's behavior. For example, depth sensor data350can be combined with facial recognition and analysis data372that is generated by analyzing the various visual data segments340,342,344. This allows for a better understanding of the facial reactions and expressions of the individual20. Thus, the facial recognition and analysis data372is also shown as contributing to the creation of scores360. In other embodiments, audio segment data330,332is analyzed, as pitch and tone can indicate the stress and comfort level of the individual20during the recording session. In still further embodiments, the audio segment data330,332is converted to textual data370(using speech-to-text technology), and the textual data becomes part of the behavior analysis. This type of scoring is further described in the incorporated patent applications identified above. In some instances, the timing or content of a particular prompt300asked of the individual20impacts the resulting scores360created for a time segment310. For example, when the individual20is responding to the first question or instruction300, the system10can use data from that time segment310as a baseline to compare the answers from the beginning of the session to the answers later in the session. As another example, a particular instruction300can be designed to stimulate a type of emotional response from the individual20, or it can be designed to prompt an individual20to talk about their technical expertise in a particular field. Data acquired while the individual20is responding to that instruction300can be given more weight in certain types of scoring360. Thus, for instance, answers to questions concerning technical expertise may automatically be given a higher “technical” score360. In this way, it can be crucial to analyze the audio data422, the visual data432, and the sensor data434in the context of prompts300when creating scores360, which is why dotted line376is also shown inFIG.3. FIG.4shows another example of how the audio-visual data182ofFIG.3can be visualized. In this case, the data400shows the recorded responses of an individual20to two prompts300, namely prompt-1410and prompt-2430. Prompt-1410has been divided into two time segments, segment1a412and segment1b422. This division may occur as described above, such as by identifying a low noise portion of the recording in the answer to prompt-1410. The preference data for time segment1a412indicates that audio segment2.1a414is the preferred audio for this time segment412. The ‘2’ before the decimal point on element414indicates that this audio data is from the second microphone, or microphone-2124. The ‘1’ after the decimal point indicates that this is associated with prompt-1. The following ‘a’ indicates that this is the first time segment412from prompt-1410. The ‘1a’ can also be interpreted to reveal that the segment414relates to time segment1a412, which is the first time segment412for prompt-1410. Similarly, visual segment2.1a416is the preferred visual data. 
Visual segment2.1ais from camera-2114(the ‘2’ before the decimal point) and is for time segment1a412. The combination of the preferred audio segment2.1a414and the preferred visual segment2.1a416constitutes the video data417for time segment1a412, with the term video referring to the combination of audio and visual data. Preference data (at element418inFIG.4) also indicates that this time segment412is associated with an empathy score of 8, an engagement score of 4, and a technical score of 4. The preference data320for time segment1b422indicates a preference for audio data424from microphone-1122and visual data426from camera-1112. This combination of audio and visual data424,426creates video segment427for time segment1b. Time segment1b422is also associated with scores of 7, 4, and 4 for empathy, engagement, and technical competence, respectively. The video segments417and427for time segment1a412and time segment1b422, respectively, together comprise the video segments411for prompt-1410. In other words, the video for the prompt segment of prompt-1410can be presented by first presenting the video segment417for time segment1a412and then presenting the video segment427for time segment1b422. The data for prompt-2430is associated with only a single time segment432. For this time segment432, audio data434from microphone-2124and visual data436from camera-3116are preferred, and together form the video segments431for prompt-2430. It is also true that audio2.2434and visual3.2436together form the video segment437for time segment2432, as the video437for the time segment432is the same as the video431for the entire prompt-2430. The scores438for time segment432are 3, 9, and 8, respectively. Method for Establishing Recorded Data FIG.5shows a method500for establishing the recorded data182in data store180. The method500begins at step505, in which a session is recorded at booth100of an individual20responding to a plurality of prompts300. The individual20is recorded with one or more cameras110, one or more microphones120, and one or more behavioral sensors (depth sensing devices)130. Prompts300are provided to the individual20, and the timings and content of the prompts300are recorded along with the data from the cameras110, microphones120, and the behavioral sensors130at step505. At step510, the timings of the prompts300are used to divide the individually recorded data from the cameras110, microphones120, and the behavioral sensors130into separate segments for each prompt segment306. As explained above, some of these prompt segments306are subdivided into multiple time segments310, such as by identifying switching events. This occurs at step515. In some circumstances, prompt segments306will consist of only a single time segment310. At step520, the preferred source is selected between the cameras110and between the microphones120for each time segment310. This allows for “switching” between cameras110during a presentation30to make the presentation30more interesting, and also allows for the best microphone120to be separately selected for each time segment310. Next, the method500needs to calculate scores360for each of the time segments310. To accomplish this, step525analyzes data from the behavioral (or body position) sensors130. Similarly, step530analyzes visual data from the cameras110. Step535analyzes audio data from the microphones120, which includes both pure audio data and text derived from the recorded audio data. Step540analyzes information about the relevant prompt300.
In some embodiments, two or more of these analyses525-540are combined in order to create one or more score values360for the time segments310at step545. In other embodiments, each of these analyses525-540creates separate score values360. The method then ends at step550. It is not necessary to perform all of these steps to properly perform method500. For example, some embodiments will not perform all four types of analyses525-540to generate the score values360. In fact, steps525-540can be skipped altogether. In still further embodiments, step515is skipped, and the segmentation of recordings by prompts300as performed by step510is all that is necessary to create the time segments310. The ability to skip some steps is also found in the other methods described in this disclosure. The true scope of any method protected by this disclosure is determined by any claims that issue herefrom. Criteria Data186and User Data188 FIG.6provides a schematic illustration of the criteria data186maintained by system10. Criteria data186establishes the criteria by which individuals20are analyzed to determine suitability for some opportunity made available by the user of user computer system160. As explained above, this user may be looking to find a performer for a performance at a venue that they operate. Alternatively, the user may be looking to hire an employee for a job that is open in their enterprise. To help the system10identify individuals20that may be of interest to the user, the user will establish the criteria data186. In addition to identifying individuals20, this same criteria data is utilized to help customize, order, and edit the presentations30that are presented to the user. This is explained in further detail below. The criteria data186defines one or more objectives610. In the context of a job search, the objective may be a job610. The objective data610may identify the employer or user offering this job, the price range, the location, etc. The criteria data186will associate each objective610with one or more skills or criteria620that the individual20must have in order to qualify for the objective610. In the preferred embodiment, multiple skills620are associated with each objective610, which is again shown by the crow's foot notation inFIG.6. For example, one objective may take the form of a job opening for an emergency room nurse. The description about the job, its location, the estimated compensation, and other data would be stored in the objective database entity610. This objective or job610will have certain requirements that candidates must meet before they are considered qualified. These requirements constitute the objective's criteria or skills and will be stored in the criteria database entities620. These criteria620might include being a registered nurse, having experience in a hospital emergency room, and having experience with certain types of medical technology. Each of these criteria620might be also associated with a minimum value, such as five years of experience as a registered nurse, one year of experience in an emergency room setting, and a particular certification level on the medical technology. In some cases, criteria620can be considered sub-skills or specialty skills of other criteria620, thus they are shown inFIG.6as being linked to other criteria620through sub-skill link622. Although sub-skills are like all other skills620, this disclosure will sometimes refer to sub-skills using figure number622. 
In a first embodiment, the entity that creates the objective or job610is responsible for picking and linking the related skills620in data186. In a preferred embodiment, however, various objectives610are predefined in the criteria data. For example, since many hospitals may use the system10to help find candidates for an emergency room nurse job, predefined data610and skills620can be set up in the system for that job and its typical skill requirements. To identify the appropriate skills620for this type of a job, it is possible to consult government databases612that define numerous jobs and known criteria/pre-requisites for those jobs. Information from that database612will be used to link the objective data610with the prerequisite skills620. In addition, it is possible to use historical data190maintained by the system10to predefine certain objectives610and their related criteria620. Previous employers may have used the system10to establish objectives610including job openings for emergency room nurses. In addition, experts can predefine objectives610and their related criteria620in the system10, which can form another contribution to the historical data190of the system10. This historical data190will store this information, as well as the criteria620that was established by those previous employers for these types of jobs610. When a user decides to create a new objective for an emergency room nurse position, the predefined data from the government data612and the historical data190are ready. The system10is able to use this data612,190to automatically suggest skills620and minimum values required for the new objective610. In particular, this suggestion of criteria620for a new objective610can be based on predefined models or “frames” that link types of jobs610with typical sets of skills620and minimum values for the jobs. These types of frames are described in further detail below. As shown inFIG.6, the criteria data186also links individual skills620(and sub-skills622) to prompts300(and sub-prompts302) in a many-to-many relationship. This link indicates that a particular prompt300(an instruction, question, or request of individual20) is designed to elicit information or data from the individual concerning that skill620. For example, if the skill620associated with an objective610were defined as five years of experience as a registered nurse, the prompt300could be a simple question “Are you a registered nurse? If so, talk about why you decided to become a registered nurse and let us know the number of years of experience you have as a registered nurse.” This prompt300can be refined over time to elicit both the factual data associated with the skill620(are they a registered nurse and the number of years of experience they have) and a more open ended request to reveal more about the individual (“talk about why you decided to become a registered nurse”). As explained above, sub-prompts302can be used to request more details or further explanations about a particular topic. Sub-prompts302can also be associated with sub-skills622, so if the system10knows that a particular objective610is available that requires both a skill620and a sub-skill622, the system10will ask both the prompt300associated with the skill620and the sub-prompt302that is associated with the sub-skill622. It is to be expected that one prompt300might be associated with more than one skill620, such as where a prompt300is designed to elicit information about a variety of skills620. 
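The skill-to-prompt linkage described above can be sketched briefly. In this illustrative Python fragment the mapping contents and helper name are assumptions; it simply shows that when an objective requires a skill and one of its sub-skills, the prompt tied to the skill and the sub-prompt tied to the sub-skill are both asked.

# Assumed, illustrative mapping from skills/sub-skills to the prompts that elicit them.
SKILL_PROMPT_TEXT = {
    "registered nurse": ["prompt: Are you a registered nurse? ..."],
    "emergency room experience": ["sub-prompt: Tell us more about your ER experience."],
}

def prompts_to_ask(required_skills, skill_prompt_text=SKILL_PROMPT_TEXT):
    # Ask every prompt (or sub-prompt) linked to each required skill or sub-skill.
    asked = []
    for skill in required_skills:
        asked.extend(skill_prompt_text.get(skill, []))
    return asked

print(prompts_to_ask(["registered nurse", "emergency room experience"]))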
Similarly, one skill620might be associated with multiple prompts300, where multiple prompts300or sub-prompts302ask about different aspects of a single skill. This is shown inFIG.6through the use of the crow's foot notation connecting skills620with prompts300. This connection between these two data elements300,620is created by the system10through a careful selection and wording624of the prompts300. In other words, if the system10knows that skill620is important to an objective610, and no prompts300are designed to elicit information about that skill620, a new prompt300will be created for that purpose (or an existing prompt300will be edited). Thus, it is the explicit process624of defining the prompts300that ensures that the connection between the prompts300and the skills620are created. In most cases, the generation of the prompts300is created through a manual process by human experts. Although the prompts300are created using human intelligence, the entire system10and the methods described herein operate based on the established relationships in the data180and the novel computerized processes described herein. FIG.7shows the schematic relationships between data within the user data188. This figure also shows the relationship between that data188and the criteria data186. The user data188maintains data700about the individual users of the user computer system160. This user data700may include personally identifiable information (such as their name and contact information) and security information (such as a password) for that user. Each user700in data188is associated with a role710. The role710is associated with the user's role inside an entity720. The entity data720defines a corporation, company, non-profit entity, or other type of entity that may be responsible for defining the objectives610in the system10. The role data710links the user700with the entity720, effectively defining the role of the user700in that entity720. For example, the user data700might contain information about Mary Jones. Ms. Jones is a hiring manager at Mercy Hospital. Thus, Ms. Jones's user database entry700will be linked to a role database entry710that defines a hiring manager, which is linked to an entity database entry720for Mercy Hospital. InFIG.7, it is the entity database entry720that is directly linked to the objective or job610, which indicates that the objective610is being provided by that entity.FIG.7continues to show that the objective610is linked to a plurality of skills620, and each skill is associated with one or more prompts300.FIG.7also shows that the role710is also associated with a plurality of skills620. This link between roles710and skills620provides one of the more powerful techniques that system10has of customizing presentations, as is explained below. Linking of Data As explained above, historical data190can store predefined frames that link an objective or job610with a plurality of skills620. One such frame is frame800shown inFIG.8. Frame800represents one type of job610that is specific for a type of entity720. In particular, this frame800represents an emergency room nurse objective820for a hospital type entity810. When a new user uses the system10, the user can associate their company or entity with a hospital type entity, thus immediately making available frames (such as frame800) that are pre-established in the historical data190. 
This frame800is based on an analysis of government data612and/or an internal analysis of historical data190where other hospitals810have specified the skills620necessary for this type of job820. As shown inFIG.8, frame800creates an association between the emergency room nursing job820and a plurality of specific skills830-880that an applicant would need to possess to be a good candidate for this type of job. To simplify this discussion, the skills are identified by letter. As shown in the figure, the job of an emergency room nurse requires skill-A830, skill-B840, skill-E850, skill-F860, skill-H870, and skill-I880. Skills with letters not listed (such as skill-C and skill-D) are pre-defined in the criteria data186, but they are not skills620typically required for this job820. Skill-A830may be an important skill620for this job820, and (as shown inFIG.8) also requires a particular specialty or sub-skill622, namely Sub-skill-A.1832. In addition, skill-E850might require two sub-skills622, namely Sub-skill-E.1852and Sub-skill-E.2854. Because this information can be pre-defined in frame800, if a user from Mercy Hospital elects to create a new objective610for an emergency room nurse, the pre-associated skills830-880will be automatically presented to the user so that they will not have to manually define the relationship between their objective610and the pre-defined skills620in the system10. It is important to remember that frames such as frame800simplify the process of establishing the necessary skills620for a newly created objective610, but they do not limit the definition of that objective610. A user creating the new objective610can identify an appropriate frame, such as frame800, but is then free to edit and modify the identified skills620in order to make the database definition of their objective610meet their real-world needs. FIG.9shows another frame900that links a particular role710with a subset of the skills620that are associated with an objective610. This frame900is important because the system10uses frames like this in order to automatically create the versions30shown inFIG.1. The importance of this frame900becomes clear upon reflection on the fact that most recording sessions of individuals20contain a great deal of audio and visual material. If a hiring manager at a hospital was looking to hire a candidate for a particular objective610, that user may need to review the audio and visual data created during a great number of booth sessions by different individuals20. While this system10is designed to guide the individual20using the booth100with only relevant prompts300, some of the information acquired about that individual20will not be of particular interest to a hiring manager. However, some of the material that is of little interest to the hiring manager would be of significantly more interest to a co-worker who was participating in the hiring decision. The co-worker may be very interested in the friendliness of the individual and their ability to share responsibilities with others, while these particular traits or skills620would be of less interest to a hiring manager that must narrow down a pool of applicants from dozens or hundreds to a much smaller number of final candidates. In other words, while certain skills620may be relevant to all individuals20applying for an objective610, not every skill620will be of interest to every user that reviews the recorded sessions of those individuals20. 
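A pre-defined frame such as frame800can be sketched as a simple lookup from an (entity type, objective type) pair to the typically required skills and sub-skills. The dictionary layout and helper name below are assumptions made for illustration.

FRAME_800 = {
    ("hospital", "emergency room nurse"): {
        "skill-A": ["sub-skill-A.1"],
        "skill-B": [],
        "skill-E": ["sub-skill-E.1", "sub-skill-E.2"],
        "skill-F": [],
        "skill-H": [],
        "skill-I": [],
    },
}

def suggest_skills(entity_type, objective_type, frames=FRAME_800):
    # Return the pre-associated skills for a newly created objective, if a frame exists;
    # the user remains free to edit or extend the suggestion.
    return dict(frames.get((entity_type, objective_type), {}))

print(suggest_skills("hospital", "emergency room nurse"))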
Frame900again shows the skills830-880that have been identified as relevant in the context of an emergency room nurse objective820for a hospital type entity810. This context is explicitly shown inFIG.9by the listing of this entity810and objective820in the upper left of the frame900.FIG.9also includes a list920of all skills620established in the system10, including non-relevant skills620that are not associated with objective820in frame800. The purpose of this list920is primarily to help in the description of the present invention, and those non-relevant skills are presented merely to provide context to the more relevant skills830-880that were identified in frame800.FIG.9surrounds these skills830-880that are relevant in list920with a heavy, bolded border. Thus, skills in list920with a thin border were not considered relevant to objective820by frame800. This frame900is established for one particular role710, in this case the hiring manager role910. The frame900directly links this role910to a subset930of the skills830-880associated with the objective820. According to this frame900, users of the hiring manager role910are mostly interested in skill-A830, skill-A.1832, skill-B840, skill-E850, skill-E.1852, skill-E.2854, and skill-H870. These skills830,832,840,850,852,854, and870are also identified inFIG.9as subset930. This means that hiring managers would generally prefer to learn more about these skills in subset930when reviewing recorded sessions for individuals20interested in an emergency room nurse objective820. This subset930does not include all of the skills620defined in the system (set920) nor does it include all of the skills830-880defined in frame800as being relevant to the specific objective820(it does not include skill-F860or skill-I880). Rather, it includes only those skills930from the latter set830-880that are of particular interest to that role910. The right-most column in the frame900identifies the prompts300that are associated with the skills620in list920. As explained above in connection withFIGS.6and7, each skill620is associated with one or more prompts300that help elicit information from an individual20concerning that skill620. As can be seen from frame900, each skill620in list920is associated with at least one prompt300. For instance, skill-B840is associated in a one-to-one manner with prompt-1940. This means that prompt-1940, when provided to individual20, should create audio and visual data relevant to determining the level of skill-B840for that individual20. Prompt-2942is associated with both skill-A830and sub-skill-A.1832, meaning that this prompt-2942should gather information relevant to both of these skills830,832. Information about skill-A830is also gathered by another prompt300, namely prompt-4944. Method for Linking Criteria Data and User Data FIG.10contains a flowchart describing a process1000for linking criteria data186together and with user data188. This method1000is designed to create frames such as frame800and frame900, and then use those frames800,900when a new user wishes to create a new objective610(a new emergency room nurse job opening). At step1010, the system10identifies which entity types720and objectives610are appropriate candidates for frames like frame800. If the system10has many hospital customers, then identifying objectives relevant for those types of entities would be important. If the system10is used by other employer types, identifying objectives for those types would be necessary in step1010. 
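The role-specific selection performed by a frame such as frame900can be sketched as two lookups: reduce the objective's skills to the subset of interest to the role, then gather the prompts linked to that subset. The skill-to-prompt entries for prompt-1, prompt-2, and prompt-4 follow the description of FIG.9 above; the remaining entries, and all names, are assumptions made for illustration.

ROLE_SKILLS = {
    "hiring manager": {"skill-A", "sub-skill-A.1", "skill-B", "skill-E",
                       "sub-skill-E.1", "sub-skill-E.2", "skill-H"},
}
SKILL_TO_PROMPTS = {
    "skill-B": {"prompt-1"},               # one-to-one, per FIG. 9
    "skill-A": {"prompt-2", "prompt-4"},   # skill-A is covered by two prompts
    "sub-skill-A.1": {"prompt-2"},
    "skill-E": {"prompt-3"},               # assumed for illustration
    "sub-skill-E.1": {"prompt-7"},         # assumed for illustration
    "sub-skill-E.2": {"prompt-9"},         # assumed for illustration
    "skill-H": {"prompt-9"},               # assumed for illustration
}

def prompts_for_role(role, objective_skills):
    # Intersect the role's skills of interest with the objective's skills, then
    # collect every prompt linked to a skill in that subset.
    subset = ROLE_SKILLS.get(role, set()) & set(objective_skills)
    prompts = set()
    for skill in subset:
        prompts |= SKILL_TO_PROMPTS.get(skill, set())
    return prompts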
Furthermore, if the system10were used for purposes other than employment and recruiting, such as for musician auditions, the entity types and objectives identified in step1010would be related to those types of customers. This step1010can be performed manually by researching entity types720and objectives610by using available resources, such as government data612. Alternatively, the system10can increase its knowledge of entity types and frame objectives by analyzing its own historical data190based on entries previously input by prior users or by experts. By researching potential entity customers of the system10, their objectives (jobs), and their user roles, experts may develop additions to the historical data190designed to facilitate the easy use of the system by these types of entities. At step1020, the system10must establish skills620that are relevant to the entity types and frame objectives created in step1010. Once again, the information needed to establish these records can be found in government data612or historical information190already held by the system10. At step1030, information known about frame objectives and skills620is used to create frames such as frame800. These frames link an objective such as emergency room nurse820for a particular type of entity (hospital entity810) with a set of defined skills620(such as skills830-880). This is described in more detail above in connection withFIG.8. At step1040, prompts300are created with the specific intent of eliciting information from individuals20concerning the skills620established in step1030. In most cases, these prompts300will be carefully crafted and then specifically linked to one or more skills620in step1050. This is seen inFIG.9. At step1060, the system10will associate a role710with a subset of the skills620linked to each objective in the frames defined in step1030.FIG.9shows that a particular role, such as the hiring manager role910, is associated with only a subset930of skills that are associated with the frame objective820. The selection of this subset930can occur in a variety of ways. One technique is to simply ask a plurality of users having that role what is of interest to them. Users can also be asked to prioritize the skills associated with the objective820, and higher-priority skills can be selected as the subset930. Alternatively, experts can select this subset930based on their knowledge of the industry. A further technique is described below in connection withFIGS.11and12, in which users having particular roles710are monitored in their use of the system10. At step1070, a user computer system160requests that the system server170establish an objective or job record610within the criteria data186. Before this is possible, the user must be logged into the system server170, such as by providing a user identifier and password. This associates the user with a user record700in the user data188, which in turn is associated with a role710and an entity720. Because of this, the system server170will have knowledge about the user's entity720and will be able to suggest pre-defined objectives or jobs for that entity720during the process of creating the new objective record610. Even if the pre-defined objective is not a perfect fit for the current objective610, the user can be encouraged to identify the closest fit. In the next step1080, the system server170utilizes its best available frame, such as frame800, to suggest criteria or skills620for the newly entered objective record610. 
For example, if the user defined a new emergency room nurse job opening in a hospital setting, frame800would be utilized to suggest that skills830-880be assigned to this new objective610. In the following step1090, the system server170will present the suggested skills620to the user for editing and modification. In many circumstances, no changes will be necessary. However, each objective610may be unique, and thus the user creating the objective may refine the skills620and the level (such as experience level) associated with the skills620as necessary. This method would then end at step1095. The above-described method1000assumes the automatic or expert creation of frame800and frame900. In other embodiments, entities (customers) can custom-define the frames800,900for themselves, so that each user700associated with that entity720of a particular role710will use the frame800,900explicitly defined by that customer. Different customers can establish frames800,900and even different roles710. In the creation of these frames800,900, the customer would of course be able to start with the automatically or expertly created frame800,900of the type described above, and then modify them as desired. Furthermore, it is to be expected that separate frames800,900will be created for separate industries, such as frames specific to medical recruitment. In this way, entities could have a turn-key experience based on recruiting and hiring best practices in their industry without requiring extensive customization and set-up. Roles710within the industry could be preconfigured to use a specific set of frames800,900while still being customizable. Monitoring Use by Role InFIG.11, a role monitoring system1100is presented in which a user system1110accesses a system server1130over a network1120. This role monitoring system1100is similar to the system10shown inFIG.1, and in fact system10can be used in place of system1100for the purpose of role monitoring. However, not all of the elements of system10are required in system1100. The user system1110is operated by a user having a known role710at an entity720. In this system1100, the user is allowed to select the audio and visual segments (or video segments) recorded of multiple individuals20in the booth100. As explained above, the individuals20respond to prompts300, and the audio and visual recordings are divided into video segments according to the prompt segments306. In this context of system1100, it is not necessary to further divide these audio and visual segments into further time segments310that are of a shorter duration than the prompt segments306, but that is possible. In order to evaluate the multiple individuals20that have recorded sessions in the booth100, the user will need to compare the individual20against the skills620that are relevant to an open objective610at the user's entity. In this case, rather than the system1100creating a customized version30of the recorded session just for that user, the user of system1110is given freedom to select a particular prompt300and then watch the video segment(s) for an individual20for the prompt segment associated with the selected prompt300. In a first embodiment, the system server1130will allow selection of only those particular prompt segments306that may be of interest to this user, such as the set of prompt segments1140shown inFIG.11. 
These prompt segments1140(for prompt-1, prompt-2, prompt-3, prompt-4, prompt-7, and prompt-9) may be those segments for prompts300that the system1110preselects as relevant to the user of user system1110based on the role710assigned to that user. Note that these are the very prompts segments1140that frame900would identify as eliciting information relevant to the subset of skills930of interest to the hiring manager role910for the job of an emergency room nurse820. In a second embodiment, the system server1130allows the user system1110to select video segments from any prompt300provided to the users20. The system1100is designed to monitor the interaction between the user system1110and the system server1130. In particular, the users are monitored to determine which of the prompt segments1140are selected by the user for review, and in what order.FIG.12shows a table1200that presents this data. Six separate users1210are shown in this table1200, with each user1210in his or her own row. Each column1220indicates which prompt segment1140was selected in which chronological order. For instance, the first row represents the selection of user1. User1first selected the video segment(s) related to prompt-3, then prompt-2, then prompt-4, then prompt-9, then prompt-1, and then prompt-7. This user1has likely reviewed the video material for numerous individuals20interested in this objective610. Thus, the user knows which prompts300are most likely to elicit information that is relevant to that user's analysis of this individual20. Rather than going chronologically through the video segments, the user wants to see the individual's response to prompt3first, then2,4,9,1, and7. The second user also preferred to view prompts3and2first, but the second user reversed the order of these prompts, selecting first prompt2, then3,1,4,7, and9. This table1200contains data from analyzing six different users1210. Each user row1210might represent a single use of system1100by a user, meaning that a user that uses the system1100to review multiple individuals will create multiple rows1210in table1200. In other embodiments, each user is associated with only a single row1210, with the data in a row1210representing the user's typical or average ordering of prompts. The bottom row1230in table1200represents aggregate ranking for the ordering. In this case, the aggregate ranking is the selection of the mode for each ordering, although each prompt can appear only one time in the mode row1230. For the first position, the second prompt was selected by three of the six users1210, which resulted in the mode row1230identifying the second prompt in the 1st column1220. Four users selected the third prompt in the second column1220, so the mode row1230identifies the third prompt in the second column1220. The mode row is designed to select the best overall ordering of prompts1140based on watching the interaction of multiple users using system1100. The actual mathematics to generate the mode row1230are relatively unimportant, as long as the table1200generates an ordering of the prompts based on the “herd mentality” of the observed users. For example, each prompt300could simply be assigned an average rank based upon the table1200, with the ordering of prompts300being based on the ordering of their average rank. In the preferred embodiment, a different table1200is created for each role710. Thus, a separate ranking is created, for instance, for hiring managers when compared to co-workers when searching for an emergency room nurse at a hospital. 
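The aggregate ordering described above can be sketched with the average-rank variant mentioned in the text: each prompt is scored by its mean position across the observed viewing orders, and prompts are then sorted by that mean. The orderings for user1 and user2 below are taken from the description; the rest of table1200 is not reproduced here, and the helper name is an assumption.

from collections import defaultdict

observed_orders = [
    ["prompt-3", "prompt-2", "prompt-4", "prompt-9", "prompt-1", "prompt-7"],  # user 1
    ["prompt-2", "prompt-3", "prompt-1", "prompt-4", "prompt-7", "prompt-9"],  # user 2
]

def aggregate_ranking(orders):
    positions = defaultdict(list)
    for order in orders:
        for position, prompt in enumerate(order, start=1):
            positions[prompt].append(position)
    # Lower average position means the prompt was typically viewed earlier.
    return sorted(positions, key=lambda p: sum(positions[p]) / len(positions[p]))

print(aggregate_ranking(observed_orders))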
In some embodiments, the list of prompt segments1140provided by system server1130is not restricted based upon the role710of the user700. Rather, the user is allowed to select video material from any prompt300. InFIG.12, table1200includes entries for prompt12and prompt16, which were not among the prompts300preselected for the hiring manager role910in frame900. By monitoring users with the ability to select any prompt300, the system1100can actually verify the selection of prompts300by a frame900. If prompt12(associated with skill-F860in frame900) is routinely selected by hiring managers early in their review of an individual, the frame900can be altered to associate skill-F860as a skill of interest to hiring managers910. Alternatively, system1100can be used to help create frame900, with none of the skills830-880assigned to the emergency room nurse objective820being associated to the hiring manager role910until after the generation of table1200. This table1200would therefore show that users of the role of hiring managers are most interested in the prompts1,2,3,4,7, and9, or more specifically the skills620associated with these prompts300, which are skill-A830, skill-A.1832, skill-B840, skill-E850, skill-E.1852, skill-E.2854, and skill-H870. Generation of a Custom Version of a Recorded Session for a User FIG.13shows a particular example of actual criteria data186and user data188. In this example, a user record1300has been established for Mary Jones. This user record1300is associated with the hiring manager role record1310for the Mercy Hospital entity1320. An objective or job1330record has been created for Mercy Hospital for an emergency room nurse. Using the frame800created as described above, a plurality of skill records1340,1342have been associated with this emergency room nurse job record1330. InFIG.13, the skill records are divided into two groups. A first group1340is directly associated with the hiring manager role1310at Mercy Hospital1320. These records are the selected skills930that frame900associates with this role, namely skill-A830, skill-A.1832, skill-B840, skill-E850, skill-E.1852, skill-E.2854, and skill-H870. The other group of skills1342are those skills that are relevant to the emergency room nurse objective record1330but are not associated with the hiring manager role1310. These are skill-F860and skill-J 880. InFIG.13, the prompts1350related to these skills1340are identified and sorted. In one embodiment, these prompts1350are sorted according to the herd mentality scoring of table1200, created by monitoring other users of the same role interacting with audio visual recordings associated with these prompts1350. As shown inFIG.13, these prompts are sorted as follows:2,3,1,4,7,9. This actual user data188and criteria data186is then used to create a custom version30of a recorded session of individual20for Mary Jones. Based on this data186,188, prompts2,3,1,4,7, and9have been identified as prompts of interest for Ms. Jones, and they have been ordered in a manner that should present the most relevant and useful prompt response first. These prompts300are then used to identify relevant time segments310and the associated audio segments330,332and visual segments340,342,344for each time segment. These sorted and identified video segments can then be used to create a customized version30of the recorded data, as shown inFIG.14. In particular, customized version1400contains video segments (audio and visual data) for each of the identified prompt segments1140. 
The first prompt segment1410of the customized version1400contains audio2.2434and visual3.2436. Audio2.2434is the audio segment for the second prompt (the 2 after the decimal) taken from microphone-2124(the 2 before the decimal). Visual3.2436is the visual segment for the second prompt taken from camera-3116. The second prompt segment1420relates to audio and visual segments for prompt3. As can be seen inFIG.14, this segment1420includes two time segments310. Thus, in the middle of presenting the response of individual20to prompt3, the camera will switch from camera-2114to camera-3116, while microphone-2124will continue to provide the audio for the entire portion1420. The third prompt segment1430also contains two time segments310, meaning that the camera will switch from camera-2114to camera-1112in the middle of the response to this prompt300, while audio switches from microphone-2124to microphone-1122. The fourth prompt segment1440contains three time segments310, with the camera switching from camera-2114to camera-3116to camera-1112with the microphone input switching only from microphone-1122to microphone-2124. The fifth prompt segment1450contains only a single time segment, while the sixth prompt segment1460once again contains three time segments. This customized version1400is created for the recorded session of one individual20at the booth100. Even though the individual20may have responded to many more prompts300(such as twelve), and even though each camera110and each microphone120separately recorded visual and audio data, respectively, the system10uses only a small portion of this information to create version1400for this user. To do this, the system10selects only a subset of the prompts300answered by the individual20based on the user's role710, orders the prompt segments306for these prompts300based on the herd analysis of table1200, identifies multiple time segments310for a plurality of the prompt segments306, identifies preferred audio and visual sources (aka, the video data) for each time segment310, and then combines this selected and ordered subset of information into the customized version1400of the recorded session. FIG.15sets forth a method1500using herd preferences to order the resulting prompts for a customized version30for a user. The method1500starts at step1510, in which system1100is created to allow users to have direct access to prompt responses of individuals20. In this way, as explained above, the users can select which prompt segments306they want to observe, and in which order they want to observe them. At step1520, the system1100observes the user, and records that user's use of the system1100, their selection and ordering of prompt segments306, and the user's role710. Based on this observation, it is possible to determine a selection and ranking of prompt segments306for a particular role710(and for this particular type of objective or job610for this entity type720). This occurs at step1530. Finally, at step1540, a new user to the system10is associated with a role710, and the stored selection and ranking of prompt segments306from step1530is used to order the time segments310for this new user (and for a new recorded session for a new individual20). The method then ends at step1550. Monitoring User Behavior FIG.16discloses a user monitoring system1600that is very similar to the role monitoring system1100ofFIG.11. In the user monitoring system1600, a user computer system1610of one particular user is monitored as opposed to multiple users of a similar role710. 
In this case, the user system1610of Mary Jones is being monitored. This system1610accesses a system server1630over a network1620. Like the role monitoring system1100, the user monitoring system1600is similar to the system10shown inFIG.1, and in fact system10can be used in place of system1600for the purpose of user monitoring. Once again, like the role monitoring system1100, system1600allows the user computer system1610to watch prompt segments1640associated with a plurality of prompts300. Unlike the user system1110in the role monitoring system1100and the user system160in system10, the user monitoring system1610has the ability to physically observe the actual user while they review the segments1640. The user monitoring system1610preferably has a visual camera1612monitoring the user and may further comprise a microphone1614to record audio of the user and a depth sensor1616capable of monitoring the physical body movements of the user. In this way, the user system1610monitors the user's facial and body reactions when the user is viewing the recorded prompt segment responses. In some embodiments, the user is given access to the booth100to observe the audio and visual segments1640. In this way, booth100operates as user system1610, allowing the cameras110, microphones120, and sensors130of the booth100to monitor the user. In other embodiments, smaller, less intrusive monitoring equipment1612,1614,1616is provided to users for use at their workstations. In still further embodiments, users give permission for cameras1612already present at their workstation (such as video conferencing cameras) to monitor their reactions while viewing the video segments1640. In a manner similar to the way that the system10develops scores360for individuals20using the booth100, the user monitoring system1600develops a score for the user viewing the prompt segments1640. In this case, the score is designed to measure the user's engagement with the segments1640being presented. After watching the user view numerous segments1640associated with prompts300, the system1600will be able to discern differences in the user's level of engagement with responses to different prompts300. InFIG.17, a chart or graph1700is shown in which the user's average engagement score1720when watching the prompt segments1640is compared against the prompts1710that prompted each segment1640. The resulting data1700can be used to determine whether or not the user is generally more or less engaged with the responses to particular prompts1710. As seen in chart1700, Mary Jones is consistently more engaged with prompt segments1640that respond to prompt-3. The second highest level of engagement is with prompt-4, with the third highest level being with prompt-2. Mary has consistently low levels of engagement for segments1640associated with prompt-1, prompt-7, and prompt-9. The information in chart1700can be used to re-sort and even edit the prompt segments1640included in the custom version30presented to Mary Jones in the future. As shown inFIG.18, Mary Jones will in the future receive a custom presentation1800that reorders the prompt segments1640according to her known engagement with similar material in the past. First to be presented to Mary is prompt1420for prompt-3, which was the second segment presented in version1400shown inFIG.14. This segment1420is presented first to Ms. Jones because she was found to have a higher engagement score with responses to prompt-3in chart1700than any other prompt300. 
Consequently, the custom version1800presents this material first. The second segment to be presented belongs to the fourth segment1440of the version1400relating to prompt-4, while the third segment of version1800is the first segment1410relating to prompt-2. This reordering is again a direct result of the monitoring of Mary Jones's interaction with responses to these prompts300. The last three segments in version1800are for prompt-1, prompt-7, and prompt-9. Mary Jones did not have a high engagement score1720for these prompts300, so they are presented last in custom version1800. Furthermore, because of this low engagement score1720, the audio and visual segments associated with prompt-1and prompt-9have been edited. Segment1810is associated with prompt-1, but it does not contain any material from time segment1b422. In other words, audio1.1bsegment424and visual1.1bsegment426are not included in segment1810. The system10decided that the low engagement score1720by Mary Jones for prompt-1indicated that not all of the time segments310for this prompt300should be included in the custom version1800. The system10had to select which time segment310to remove, namely time segment1a412or time segment1b422. A variety of rules can be utilized to make this selection. In one embodiment, the first time segment310for a prompt300is always included, but remaining time segments310are excluded. In other embodiments, the scores360associated with each time segment310are compared, with only the highest scoring time segment310being included. As shown inFIG.4, the overall scores418for time segment1a412are slightly higher than the scores428for time segment1b, so time segment1bis removed. Similarly, the last segment1820related to prompt-9is also edited. In version1400, all three time segments310for prompt-9were presented. In version1800, only video for the first time segment (9a) is included, with time segments9band9cbeing removed from the presentation1800. FIG.19shows actual criteria data186and user data188similar to that presented inFIG.13. This data is still concerned with the emergency room nurse objective1330of Mercy Hospital1320. In this case, however, a different user1900having a different role1910is desiring to view recorded session data from booth100. The record for Abdul James1900indicates that Mr. James will be a co-worker (role1910) to any emergency room nurse hired to fill objective1330. As explained above in connection withFIGS.9and10, a subset930of criteria620associated with the objective610is associated with each role710. Thus, the subset of associated criteria1940for the co-worker role1910is different than the subset of associated criteria1340shown inFIG.13for the hiring manager role1310. The subset of associated criteria1940for co-workers1910includes skill-A830, skill-A.1832, skill-B840, and skill-J. The other skills1942are still associated with the job of emergency room nurse1330, but those skills1942are considered to be of lesser interest to co-worker role1910than the identified skills1940. Using the herd analysis ofFIGS.11and12for co-worker role1910, an ordering of the skills620and related prompts300can be determined for use in creating a customized version30for co-workers. The prompts1950for skills1940are selected by a frame such as frame900(but for co-workers), and are sorted inFIG.19by the herd analysis into the order of prompt-1, prompt-2, prompt-19, and prompt-4. This ordering of prompts1950leads to the creation of customized version2000shown inFIG.20. 
In this version2000, prompt segment1430for prompt-1is presented first, followed by segment1410for prompt-2. Prompt-19results in segment2010, followed by segment1440. Note that even though segment1440was presented last in this version2000, all time segments310associated with prompt-4were included. In other embodiments, the last prompt segment or segments could be automatically edited down to exclude lesser scored time segments310associated with a prompt300, especially when a large number of time segments310have been identified for a single prompt segment. FIG.21presents a process2100for monitoring a user of a user computer system1610in order to alter future custom versions30for that user. The user is monitored at step2110using system1600, with an engagement score being created for each review of one or more video segments associated with a prompt300. At step2120, an aggregate engagement score1720is determined for the user for each applicable prompt300. The applicable prompts300may be those prompts chosen for this user based on the methods described above. Alternatively, the user would be able to review responses to all prompts using system1600. Although the aggregate score can be calculated in a variety of manners, a simple average for each prompt should be sufficient. At step2130, prompts300are identified that generated higher than average engagement scores1720and lower than average engagement scores1720. In one embodiment, engagement scores1720can be divided into three tiers: much higher than typical, typical, and much lower than typical. The typical engagement score might be the average score1720for the user across all prompts. The engagement scores1720are then used at step2140to order prompt segments306for a customized presentation30in the future. Higher than typical scores1720can be used to move segments associated with those prompts to the beginning of the customized presentation30. Lower than typical scores1720can move associated segments to the end of the presentation30. Scores in the range of typical can remain in the normal sorted order, such as the order created by the herd analysis ofFIGS.11and12. In some embodiments, step2150will reduce the amount of content associated with lower than typical engagement scores1720. As explained above, this can involve removing time segments310when multiple time segments310are associated with a single prompt segment. The selection of time segments310to remove can be based on their chronological order, or the scores360generated for each segment. The method2100can be continuous, meaning that step2110will follow step2150, allowing the system1600to continue its monitoring of the user so as to continuously improve the customized version30created. Alternatively, the method2100can end at step2160at the conclusion of step2150. The above method2100was described in connection with analyzing an engagement score1720of the user. While the incorporated applications describe different methods for calculating different scores of a monitored individual, it is not necessary that any of these exact scoring techniques be used in this context. Other scoring techniques related to visual or body monitoring can be used. In addition, while some techniques may refer to monitoring “engagement,” other monitoring scores can also be applicable. For instance, rather than monitoring the engagement of a user, one might monitor the “interest” of a user, or the “emotional connection” of the user, or the “attentiveness” of the user, or even the “concentration” of the user. 
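Steps2130through2150can be sketched as follows. The three-tier classification, the width of the "typical" band, and the data layout are assumptions made for illustration; the description above leaves the exact thresholds and score aggregation open.

def reorder_by_engagement(base_order, engagement, band=0.15):
    # Classify each prompt's aggregate engagement score against the user's typical
    # (here, average) score, then move higher-than-typical prompts to the front and
    # lower-than-typical prompts to the end, preserving the base order within a tier.
    typical = sum(engagement.values()) / len(engagement)
    def tier(prompt):
        score = engagement.get(prompt, typical)
        if score > typical * (1 + band):
            return 0
        if score < typical * (1 - band):
            return 2
        return 1
    return sorted(base_order, key=lambda p: (tier(p), base_order.index(p)))

def trim_low_engagement(time_segments):
    # For a low-engagement prompt, keep only the time segment whose combined score
    # values are highest (one of the selection rules described above).
    return [max(time_segments, key=lambda seg: sum(seg["scores"].values()))]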
Any scoring technique that generally predicts that a user finds responses to particular prompts300more useful can be utilized. Use of Behavioral Data FIG.22shows another example of criteria data186and user data188being used to create a custom presentation30. This data186,188is very similar to the data shown inFIG.13, in that it relates to the hiring manager role1310at Mercy Hospital1320, in the context of an emergency room nurse objective1330. In this case, the user700is Mark Rodriguez2200. Mr. Rodriguez is also a hiring manager1310, so frame900will automatically associate him at box2240with the subset of skills930associated with the hiring manager role, as shown inFIG.9. The other skills needed1342for the emergency room nursing job1330remain the same, namely skill-F860and skill-I880. The skills of interest to the hiring manager2240shown inFIG.22contain the same skills of interest to the hiring manager1310shown inFIG.13, with the addition of the “highest empathy score segment.” This addition can be the result of a customization made by Mr. Rodriguez to the automatically created version30provided by the system10. With this alteration, the system10will analyze the video segments of an individual to look for the segment that scored the highest on the empathy score. That segment will then be included in the created version30for Mr. Rodriguez. The user may have made this customization because he is particularly concerned about the empathy of the individuals20applying for this job1330. The prompts2250that are used to generate the custom version30of the presentation are shown to include the Hi-Emp prompt. In addition to this Hi-Emp prompt, the prompts2250include prompt-2, prompt-3, prompt-1, prompt-4, prompt-7, and prompt-9. These prompts are sorted according to the herd analysis ofFIGS.11and12, with the Hi-Emp video segment inserted into the third position. As explained above, the response to a single prompt300may be subdivided into separate time segments310by finding switching events inside the time segment310for the prompt300. In these contexts, each of these subdivided time segments310might be separately scored and identified, with the Hi-Emp portion of the created presentation30being limited to a single, high scoring time segment310as opposed to an entire response to a prompt300. Alternatively, all time segments310that relate to a single prompt segment are grouped together and given a single score for this analysis, and then the highest ranking response to a prompt300is presented in its entirety. The empathy score360is just one example of a score that can be used to alter the customized version30. In some contexts, a user700may be interested in seeing the individual at their most comfortable. Data from the behavioral sensors130can be analyzed over the course of a recording session to determine which time segments310represent the individual at their most confident and relaxed. Other users700may be most interested in the time segments310that have the highest technical score360, or the highest engagement score360. Still other embodiments utilize the highest score test to select prompt segments for a version30as part of the framework for a role. For instance, framework900could be altered for all hiring managers to automatically include the segment with the highest empathy score. By including this in the framework900, hiring manager Mary Jones1300(and all other hiring managers1310) will have the highest empathy scoring segment in the version1400created for her. 
In some instances, the highest empathy scoring segment will already be included in the version30created using the data ofFIG.22. In that case, either the next-highest scoring segment is added instead, or the customized version30doesn't include any “Hi-Emp” segment since the highest scoring segment is already included. Some embodiments will not include a highest scoring segment in the prompt list2250unless the score360of that highest scoring segment is significantly higher than any of the already included segments. For example, assume that the recorded segments have the empathy scores shown in the partial list2300ofFIG.23. Prompt-11410has an empathy score of 7.5 (the average of time segment1a412and1b422). Prompt-21430has an empathy score of 3, while prompt-32310has a score of 4, prompt-42320has a score of 6.25, and prompt-6has a score of 9.4. The relevant frame (such as frame900) or customization may require that the highest empathy segment must have a score of at least 20% greater than any other displayed segment before it is added to the prompts2250used to create the customized version30. In this case, the highest scoring segment that will be included without this analysis is the prompt segment for prompt-11410, which has an empathy score of 7.5. If the 20% standard is used, the highest empathy scoring segment will be included only if the empathy score for that segment is at least 9.0 (20% more than the 7.5 score of prompt-11410). In this case, prompt-62330meets this standard, so the video segments associated with prompt-62330will be added to the prompts2250. Of course, this analysis is always performed on the recorded data for one particular individual20. While prompt-62330is included for a first individual20, the second individual20is unlikely to receive the same scores as shown in list2300, so the analysis must be re-performed for the second individual20. FIG.24shows a process2400for including high-scoring prompt segments306in a custom version30. The method2400starts at step2410by associating the roles with a desire to see a high-scoring prompt segment. This can be accomplished by changing the frame900, or by allowing individual users to specify such a desire in their own preference data. In the context ofFIG.22, Mark Rodriguez2200altered his preferences so as to include the highest empathy scoring segment. If multiple score values360are being assigned by the system10, this step2410will include specifying the particular score or scores to which this desire applies. At step2420, all time segments310are assigned relevant score values360. Alternatively, entire prompt segments306can be assigned to a score value, either independently or by averaging the score values360of their constituent time segments310. At step2430, time or prompt segments that have a statistically higher score360are identified. As explained above, the statistically higher score360can be determined by a minimum percentage that the segment must score higher than the highest segment already being included. In other embodiments, the scores360can be compared across all candidates, requiring a score that is in the top 5% of all candidates for a particular prompt300. Those skilled in the art can easily identify other statistical or numerical techniques for identifying worthy prompt segments306for inclusion in step2430. At step2440, the identified prompt segment (if any) is inserted into the ordered prompts300, such as prompts2250. At step2450, the version30is created and submitted to the user computer system160. 
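The 20% rule just described can be verified with a short sketch using the empathy scores of partial list2300. The helper name and input layout are assumptions; the threshold arithmetic matches the example above (1.2 x 7.5 = 9.0, so the 9.4 segment qualifies).

def should_add_high_scoring_segment(candidate_score, included_scores, margin=0.20):
    # Include the extra segment only if it beats every already-included segment
    # by at least the required margin.
    return candidate_score >= max(included_scores) * (1 + margin)

included = {"prompt-1": 7.5, "prompt-2": 3, "prompt-3": 4, "prompt-4": 6.25}
print(should_add_high_scoring_segment(9.4, included.values()))  # True: 9.4 >= 9.0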
If a prompt segment was selected at step2430, the version30will include that high-scoring prompt segment. Note that this determination is made by analyzing the actual scores generated for the recorded session for that individual20. Thus, the created version30for different individuals20will likely vary. The method ends at step2450. Overall Method FIG.25shows an overall method2500for creating custom versions or stories30for a recorded session of an individual20. The first step2505involves recording the session of the individual20in the booth100. Prompts300are provided to the individual20, which are used to divide the recorded data into time segments310. Additional time segments310can be identified within each prompt segment, such as by identifying switching events. Preferred sources from the plurality of cameras110and the plurality of microphones120are established for each time segment310, establishing the video for that time segment. Scores360are also assigned to each time segment. This step2505is further described above in connection with process500andFIGS.1-5. At step2510, a user of user computer system160is identified, such as by requiring a login to the system10. Once a user record700is identified for the user, a role710and entity720can also be identified. At this point, the user will also identify an objective or job610for which they want to review individuals20. At step2515, a frame such as frame900(or other process) is used to identify a set of prompt segments306that are relevant to the identified role710of the user700. This is accomplished by identifying the skills620relevant to a particular objective610, and then identifying a subset of those skills620that are of interest to a particular role710. At step2520, the herd analysis described in connection withFIGS.11and12are used to sort the prompt segments306for that role710. In addition to this sorting, it is also possible at step2525to monitor the user to determine which prompts300have been most impactful in the user's use of system10, as was explained in connection withFIGS.16,17, and18. In addition to using steps2520and2525to sort the prompt segments306, it is also possible to use these steps to subtract or redact portions of prompt segments306that are predicted to be of less interest to the user. The separate scores360assigned to time segments310can be used to identify relatively poor segments that can be removed from the resulting presentation. At step2530, some embodiments will also examine the separate scores360to identify additional prompt segments306to add into a presentation30. As explained in connection withFIGS.22-24, a prompt segment that obtains a high score or scores360may be of particular interest to certain roles710and users700, and therefore should be included even if not identified in any frame900for that role710. At step2535, the ordered set of prompt segment are then converted into an ordered set of time segments310. Some prompt segments306are associated with only a single time segment310, while other prompt segments306will be divided into multiple time segments310. Preference data associated with each time segment310(in step2505) are then used to identify the preferred source of recorded data (the video data) for each time segment in step2540. At this point, the preferred sources of data for the time segments310are utilized to create a version30of the recorded session of that individual20, and this version30is then shared with the user computer system160at step2545. The method ends at step2550. 
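Method2500can be summarized in a compact sketch: restrict to the prompts relevant to the user's role, order them by the stored (herd) ranking, optionally apply the user's own engagement-based reordering, expand each prompt into its time segments, and emit the preferred camera and microphone for each. All function and field names here are assumptions made for illustration.

def build_custom_version(role, role_prompts, herd_order, segments_by_prompt,
                         engagement=None, reorder=None):
    # Steps 2515-2520: keep only prompts of interest to this role, in herd order.
    prompts = [p for p in herd_order if p in role_prompts.get(role, set())]
    # Step 2525 (optional): reorder using the user's own engagement history.
    if reorder and engagement:
        prompts = reorder(prompts, engagement)
    playlist = []
    # Steps 2535-2540: expand prompts into time segments and pick preferred sources.
    for prompt in prompts:
        for seg in segments_by_prompt.get(prompt, []):
            playlist.append((prompt, seg["label"],
                             seg["preferred_camera"], seg["preferred_mic"]))
    return playlist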
Customization steps2520,2525,2530,2535and2540could be performed in a different order than presented inFIG.25. For example, step2525, in which feedback is used to order and edit prompt responses, could be performed before step2520, where herd preferences are used to modify an order of a subset of prompt responses. It is also possible to perform some but not all of the customization steps2520,2525,2530,2535and2540. A large advantage of using this method2500to create customized versions30of recorded sessions is that the same method2500can be applied to the recorded session of a variety of individuals20and a variety of different users. In the context of a job interview, a hiring manager1310such as Mark Rodriguez2200would desire to see recorded sessions for multiple individuals20. Since the method2500is applied to each individual20, great efficiencies are gained. Using system10, Mr. Rodriguez will automatically only see a portion of each individual's session, and the portion chosen (the identified prompt segments306) will be consistent across individuals20according to Mr. Rodriguez's role710as a hiring manager1310. He might see, for example, each individual's answers to questions on how the individual20deals with adversity, handles problems in the workplace, and responds to questions about relocation. In addition, he will see that portion of each individual's session that is thought to show the individual20at their most empathetic. An example of how prompt segments306and time segments310can vary across multiple individuals is seen inFIG.26. The recorded session2600for a first individual in this example is divided into prompt segments306according to the timing of the six prompts presented to this individual. Since, inFIG.26, the vertical dimension represents time, it can be seen that the first individual spent a great deal of time answering prompt-31420, but much less time answering prompt-21410. A different recording session2620is shown for a second individual. The overall time of this recording session (the vertical length) is shorter for this second individual. Nonetheless, this second individual also responded to the same six prompts, and in fact spent much more time responding to prompt-21410than the first individual. Each of the prompt segments306shown inFIG.26is divided into time segments310using the techniques described above, with some prompt segments306(such as prompt segment52630and prompt segment62640) containing only a single time segment310. If Mr. Rodriguez is to view the video segments for prompt-2, prompt-3, prompt-1, and prompt-4, in that order, it is clear fromFIG.26which time segments310would be presented to him for each individual. This is true even though the individual timing information and time segment data for each prompt segment306varies significantly. Thus, the selection of video segments would not be complicated by the fact that the second individual spent significantly more time on prompt-21410than the first individual, or by the fact that the second individual started responding to prompt-21410earlier in the recording session, or by the fact that prompt segment21410for the second individual contains three time segments310, while the same prompt segment1410for the first individual contains only one time segment310. In each case, the identified prompt segments306would be associated with time segments, and the resulting video segments would be presented in the appropriate order. 
In this manner, the identification of prompt segments306for the recorded sessions2600,2620allows for a consistent topical presentation30of less than the whole recorded sessions2600,2620for Mr. Rodriguez regardless of the individual differences between recorded sessions2600,2620. Another user, such as Abdul James1900, might be associated with a co-worker role1910for the same job1330and the same set of individual candidates20. Mr. James's version30of the recorded sessions2600,2620for each individual20, however, will be very different than that shown to Mr. Rodriguez. For instance, Mr. James might view prompt11430, prompt-21410, and then prompt-4,1440, in that order. The timing information for these prompt segments306will vary between the first individual and the second individual, but the actual prompt segments306selected will remain consistent in the customized presentations30for these two individual candidates20. FIG.27shows the recorded session2700of the same second individual fromFIG.26. WhileFIG.26shows a first recorded session for this individual20in which the individual responded to prompt-11430through prompt-62640in order,FIG.27shows a second recorded session for the same individual20where only two prompts were responded to. An individual20might re-record their responses to certain prompts if they are unsatisfied with their prior responses. In this case, the second individual returned to booth100and limited their second session to record a new answer to the fourth and second prompts, in that order (there is no need that the prompts300be provided to the individual20in a particular order). The newly recorded video can therefore be divided into prompt segment4.12710and prompt segment2.12720. When creating a customized presentation version30of this second individual20, the system10can now pull prompt segments from either recorded session2620or recorded session2700. As explained above, Mr. Rodriguez may receive a presentation30that includes video segments for prompt-2, prompt-3, prompt-1, and prompt-4. The video segments for prompt-3and prompt-1must come from recorded session2620as this is the only session that contains prompt segments306for those prompts. In contrast, the system10may choose between prompt segment21410from session2620and prompt segment2.12720from recorded session2700, and may also choose between prompt segment41440from session2620and prompt segment4.12710from session2700. In one embodiment, the system10merely selects the most recently recorded prompt segment306when multiple options are available. In other embodiments, however, the system10will use its automated scores360to determine a best overall prompt segment306, namely the one having the highest scoring time segments310. As used in this specification and the appended claims, the singular forms include the plural unless the context clearly dictates otherwise. The term “or” is generally employed in the sense of “and/or” unless the content clearly dictates otherwise. The phrase “configured” describes a system, apparatus, or other structure that is constructed or configured to perform a particular task or adopt a particular configuration. The term “configured” can be used interchangeably with other similar terms such as arranged, constructed, manufactured, and the like. All publications and patent applications referenced in this specification are herein incorporated by reference for all purposes. 
While examples of the technology described herein are susceptible to various modifications and alternative forms, specifics thereof have been shown by way of example and drawings. It should be understood, however, that the scope herein is not limited to the particular examples described. On the contrary, the intention is to cover modifications, equivalents, and alternatives falling within the spirit and scope herein. The many features and advantages of the invention are apparent from the above description. Numerous modifications and variations will readily occur to those skilled in the art. Since such modifications are possible, the invention is not to be limited to the exact construction and operation illustrated and described. Rather, the present invention should be limited only by the following claims. | 112,659 |
11861905 | DETAILED DESCRIPTION FIG.1illustrates a technology stack100indicative of technology layers configured to execute a set of capabilities, in accordance with an embodiment of the present invention. The technology stack100may include a customization layer102, an interaction layer104, a visualizations layer108, an analytics layer110, a patterns layer112, an events layer114, and a data layer118, without limitations. The different technology layers or the technology stack100may be referred to as an “Eagle” Stack100, which should be understood to encompass the various layers that allow precise monitoring, analytics, and understanding of spatiotemporal data associated with an event, such as a sports event and the like. For example, the technology stack may provide an analytic platform that may take spatiotemporal data (e.g., 3D motion capture “XYZ” data) from National Basketball Association (NBA) arenas or other sports arenas and, after cleansing, may perform spatiotemporal pattern recognition to extract certain “events”. The extracted events may be, for example (among many other possibilities), events that correspond to particular understandings of events within the overall sporting event, such as “pick and roll” or “blitz.” Such events may correspond to real events in a game, and may, in turn, be subject to various metrics, analytic tools, and visualizations around the events. Event recognition may be based on pattern recognition by machine learning, such as spatiotemporal pattern recognition, and in some cases, may be augmented, confirmed, or aided by human feedback. The customization layer102may allow performing custom analytics and interpretation using analytics, visualization, and other tools, as well as optional crowd-sourced feedback for developing team-specific analytics, models, exports, and related insights. For example, among many other possibilities, the customization layer102may facilitate generating visualizations for different spatiotemporal movements of a football player or group of players, and counter movements associated with other players or groups of players during a football event. The interaction layer104may facilitate generating real-time interactive tasks, visual representations, interfaces, video clips, images, screens, and other such vehicles for allowing viewing of an event with enhanced features or allowing interaction of a user with a virtual event derived from an actual real-time event. For example, the interaction layer104may allow a user to access features or metrics such as a shot matrix, a screens breakdown, possession detection, and many others using real-time interactive tools that may slice, dice, and analyze data obtained from the real-time event such as a sports event. The visualizations layer108may allow dynamic visualizations of patterns and analytics developed from the data obtained from the real-time event. The visualizations may be presented in the form of a scatter rank, shot comparisons, a clip view, and many others. The visualizations layer108may use various types of visualizations and graphical tools for creating visual depictions. The visuals may include various types of interactive charts, graphs, diagrams, comparative analytical graphs, and the like. The visualizations layer108may be linked with the interaction layer so that the visual depictions may be presented in an interactive fashion for user interaction with real-time events produced on a virtual platform such as the analytic platform of the present invention.
The analytics layer110may involve various analytics and Artificial Intelligence (AI) tools to perform analysis and interpretation of data retrieved from the real-time event such as a sports event, so that the analyzed data results in insights that make sense of the big data pulled from the real-time event. The analytics and AI tools may comprise tools such as search and optimization tools, inference rules engines, algorithms, learning algorithms, logic modules, probabilistic tools and methods, decision analytics tools, machine learning algorithms, semantic tools, expert systems, and the like, without limitations. Output from the analytics layer110and patterns layer112is exportable by the user as a database that enables the customer to configure their own machines to read and access the events and metrics stored in the system. In accordance with various exemplary and non-limiting embodiments, patterns and metrics are structured and stored in an intuitive way. In general, the database utilized for storing the events and metric data is designed to facilitate easy export and to enable integration with a team's internal workflow. In one embodiment, there is a unique file corresponding to each individual game. Within each file, individual data structures may be configured in accordance with included structure definitions for each data type indicative of a type of event for which data may be identified and stored. For example, types of events that may be recorded for a basketball game include, but are not limited to, isos, handoffs, posts, screens, transitions, shots, closeouts, and chances. With reference to, for example, the data type “screens”, Table 1 is an exemplary listing of the data structure for storing information related to each occurrence of a screen. As illustrated, each data type is comprised of a plurality of component variable definitions, each comprised of a data type and a description of the variable.

TABLE 1: screens

id (INT): Internal ID of this screen.
possession_id (STRING): Internal ID of the possession in which this event took place.
frame (INT): Frame ID, denoting frame number from the start of the current period. Currently, this marks the frame at which the screener and ballhandler are closest.
frame_time (INT): Time stamp provided in SportVU data for a frame, measured in milliseconds in the current epoch (i.e., from 00:00:00 UTC on 1 Jan. 1970).
game_code (INT): Game code provided in SportVU data.
period (INT): Regulation periods 1-4, overtime periods 5 and up.
game_clock (NUMBER): Number of seconds remaining in period, from 720.00 to 0.00.
location_x (NUMBER): Location along length of court, from 0 to 94.
location_y (NUMBER): Location along baseline of court, from 0 to 50.
screener (INT): ID of screener, matches SportVU ID.
ballhandler (INT): ID of the ballhandler, matches SportVU ID.
screener_defender (INT): ID of the screener's defender, matches SportVU ID.
ballhandler_defender (INT): ID of the ballhandler's defender, matches SportVU ID.
oteam (INT): ID of team on offense, matches IDs in SportVU data.
dteam (INT): ID of team on defense, matches IDs in SportVU data.
rdef (STRING): String representing the observed actions of the ballhandler's defender.
sdef (STRING): String representing the observed actions of the screener's defender.
scr_type (STRING): Classification of the screen into take, reject, or slip.
outcomes_bhr (ARRAY): Actions by the ballhandler, taken from the outcomes described at the end of the document, such as FGX or FGM.
outcomes_scr (ARRAY): Actions by the screener, taken from the outcomes described at the end of the document, such as FGX or FGM.
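By way of a non-limiting illustration, one possible in-memory representation of a single record of the “screens” data type, mirroring Table 1, is sketched below in Python. The class itself, its use of TypedDict, and the comment text are illustrative assumptions rather than a required implementation; the field names and types follow the table.

    from typing import List, TypedDict

    class Screen(TypedDict):
        """One occurrence of a screen, mirroring the Table 1 definition."""
        id: int                     # internal ID of this screen
        possession_id: str          # internal ID of the enclosing possession
        frame: int                  # frame number from the start of the period
        frame_time: int             # SportVU timestamp, ms since the Unix epoch
        game_code: int              # game code provided in SportVU data
        period: int                 # 1-4 regulation, 5 and up overtime
        game_clock: float           # seconds remaining in the period, 720.00 to 0.00
        location_x: float           # along the length of the court, 0 to 94
        location_y: float           # along the baseline, 0 to 50
        screener: int               # SportVU player IDs for the actors involved
        ballhandler: int
        screener_defender: int
        ballhandler_defender: int
        oteam: int                  # offensive team ID
        dteam: int                  # defensive team ID
        rdef: str                   # observed actions of the ballhandler's defender
        sdef: str                   # observed actions of the screener's defender
        scr_type: str               # "take", "reject", or "slip"
        outcomes_bhr: List[str]     # ballhandler outcomes, e.g. "FGM", "FGX"
        outcomes_scr: List[str]     # screener outcomes

    # A per-game export file could then simply be a JSON document holding a list
    # of such records under a "screens" key, one file per game.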
These exported files, one for each game, enable other machines to read the stored understanding of the game and build further upon that knowledge. In accordance with various embodiments, the data extraction and/or export is optionally accomplished via a JSON schema. The patterns layer112may provide a technology infrastructure for rapid discovery of new patterns arising out of the retrieved data from the real-time event such as a sports event. The patterns may comprise many different patterns corresponding to an understanding of the event, such as a defensive pattern (e.g., blitz, switch, over, under, up to touch, contain-trap, zone, man-to-man, or face-up pattern), various offensive patterns (e.g., pick-and-roll, pick-and-pop, horns, dribble-drive, off-ball screens, cuts, post-up, and the like), patterns reflecting plays (scoring plays, three-point plays, “red zone” plays, pass plays, running plays, fast break plays, etc.) and various other patterns associated with a player in the game or sports, in each case corresponding to distinct spatiotemporal events. The events layer114may allow creating new events or editing or correcting current events. For example, the events layer may allow analyzing the accuracy of markings or other game definitions and may comment on whether they meet standards and sports guidelines. For example, specific boundary markings in an actual real-time event may not be compliant with the guidelines and there may exist some errors, which may be identified by the events layer through analysis and virtual interactions possible with the platform of the present invention. Events may correspond to various understandings of a game, including offensive and defensive plays, matchups among players or groups of players, scoring events, penalty or foul events, and many others. The data layer118facilitates management of the big data retrieved from the real-time event such as a sports event. The data layer118may allow creating libraries that may store raw data, catalogs, corrected data, analyzed data, insights, and the like. The data layer118may manage online warehousing in a cloud storage setup or in any other manner in various embodiments. FIG.2illustrates a process200as shown in the flow diagram, in accordance with an embodiment of the present invention. The process200may include retrieving spatiotemporal data associated with a sport or game and storing it in a data library at step202. The spatiotemporal data may relate to a video feed that was captured by a 3D camera, such as one positioned in a sports arena or other venue, or it may come from another source. The process200may further include cleaning of the rough spatiotemporal data at data cleaning step204through analytical and machine learning tools, utilizing various technology layers as discussed in conjunction withFIG.1so as to generate meaningful insights from the cleansed data. The process200may further include recognizing spatiotemporal patterns through analysis of the cleansed data at step208. Spatiotemporal patterns may comprise a wide range of patterns that are associated with types of events. For example, a particular pattern in space, such as the ball bouncing off the rim, then falling below it, may contribute toward recognizing a “rebound” event in basketball. Patterns in space and time may lead to recognition of single events or multiple events that comprise a defined sequence of recognized events (such as in types of plays that have multiple steps).
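A minimal sketch of how an aggregated, multi-step event might be assembled from already-recognized primitive events follows. It assumes, purely for illustration, that primitive “pick” and “roll” events with game clock times and player identifiers are already present; the field names and the two-second pairing window are not parameters disclosed by the system.

    # Sketch: derive "pick and roll" composite events from primitive events.

    def find_pick_and_rolls(events, max_gap=2.0):
        """Pair each pick with a subsequent roll by the same screener that
        occurs within max_gap seconds of game clock."""
        picks = [e for e in events if e["type"] == "pick"]
        rolls = [e for e in events if e["type"] == "roll"]
        composites = []
        for pick in picks:
            for roll in rolls:
                same_screener = roll["player"] == pick["screener"]
                # The game clock counts down, so a later roll has a smaller value.
                gap = pick["game_clock"] - roll["game_clock"]
                if same_screener and 0.0 <= gap <= max_gap:
                    composites.append({
                        "type": "pick_and_roll",
                        "pick": pick,
                        "roll": roll,
                        "game_clock": pick["game_clock"],
                    })
                    break
        return composites

Composite events produced this way can be stored alongside the primitives in the canonical event datastore, so that downstream queries treat single-step and multi-step patterns uniformly.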
The recognized patterns may define a series of events associated with the sport that may be stored in a canonical event datastore210. These events may be organized according to the recognized spatiotemporal patterns; for example, a series of events may have been recognized as “pick,” “rebound,” “shot,” or like events in basketball, and they may be stored as such in the canonical event datastore210. The canonical event datastore210may store a wide range of such events, including individual patterns recognized by spatiotemporal pattern recognition and aggregated patterns, such as when one pattern follows another in an extended, multi-step event (such as in plays where one event occurs and then another occurs, such as “pick and roll” or “pick and pop” events in basketball, football events that involve setting an initial block, then springing out for a pass, and many others). The process200may further include querying or aggregation or pattern detection at step212. The querying of data or aggregation may be performed with the use of search tools that may be operably and communicatively connected with the data library or the events datastore for analyzing, searching, or aggregating the rough, cleansed, or analyzed data, or the events data or event patterns. At metrics and actionable intelligence214, insights may be developed from the searched or aggregated data through artificial intelligence and machine learning tools. At interactive visualization218, for example, the metrics and actionable intelligence may be converted into interactive visualization portals or interfaces for use by a user in an interactive manner. In embodiments, an interactive visualization portal or interface may produce a 3D reconstruction of an event, such as a game. In embodiments, a 3D reconstruction of a game may be produced using a process that presents the reconstruction from a point of view, such as a first person point of view of a participant in an event, such as a player in a game. Raw input XYZ data obtained from various data sources is frequently noisy, missing, or wrong. XYZ data is sometimes delivered with attached basic events already identified in it, such as possession, pass, dribble, and shot events; however, these associations are frequently incorrect. This is important because event identification further down the process (in Spatiotemporal Pattern Recognition) sometimes depends on the correctness of these basic events. For example, if two players' XY positions are switched, then “over” vs “under” defense would be incorrectly characterized, since the players' relative positioning is used as a critical feature for the classification. Even player-by-player data sources are occasionally incorrect, such as associating identified events with the wrong player. First, validation algorithms are used to detect all events, including the basic events such as possession, pass, dribble, shot, and rebound that are provided with the XYZ data. Possession/Non-possession models may use a Hidden Markov Model to best fit the data to these states. Shots and rebounds may use the possession model outputs, combined with 1) the projected destination of the ball, and 2) player by player (PBP) information. Dribbles may be identified using a trained ML algorithm and also using the output of the possession model. These algorithms may decrease the basic event labeling error rate by approximately 50% or more.
Second, the system has a library of anomaly detection algorithms to identify potential problems in the data including, but not limited to, temporal discontinuities (intervals of missing data are flagged), spatial discontinuities (objects traveling in a non-smooth motion, “jumping”) and interpolation detection (data that is too smooth, indicating that post-processing was done by the data supplier to interpolate between known data points in order to fill in missing data). This problem data is flagged for human review so that events detected during these periods are subject to further scrutiny. Spatiotemporal Pattern Recognition Spatiotemporal pattern recognition (step208) is used to automatically identify relationships between physical and temporal patterns and various types of events. In the example of basketball, one challenge is how to turn x, y, z positions of ten players and one ball at twenty-five frames per second into usable input for machine learning and pattern recognition algorithms. For patterns one is trying to detect (e.g., pick & rolls), the raw inputs may not suffice. The instances within each pattern category can look very different from each other. One, therefore, may benefit from a layer of abstraction and generality. Features that relate multiple actors in time are key components of the input. Examples include, but are not limited to, the motion of player one (P1) towards player two (P2) for at least T seconds, a rate of motion of at least V m/s for at least T seconds, and, at the projected point of intersection of paths A and B, a separation distance less than D. In embodiments, an algorithm for spatiotemporal pattern recognition can use relative motion of visible features within a feed, duration of relative motion of such features, rate of motion of such features with respect to each other, rate of acceleration of such features with respect to each other, a projected point of intersection of such features, the separation distance of such features, and the like to identify or recognize a pattern with respect to visible features in a feed, which in turn can be used for various other purposes disclosed herein, such as recognition of a semantically relevant event or feature that relates to the pattern. In embodiments, these factors may be based on a pre-existing model or understanding of the relevance of such features, such as where values or thresholds may be applied within the pattern recognition algorithm to aid pattern recognition. Thus, thresholds or values may be applied to rates of motion, durations of motion, and the like to assist in pattern recognition. However, in other cases, pattern recognition may occur by adjusting weights or values of various input features within a machine learning system, without a pre-existing model or understanding of the significance of particular values and without applying thresholds or the like. Thus, the spatiotemporal pattern recognition algorithm may be based on at least one pattern recognized by adjusting at least one of an input type and a weight within a machine learning system. This recognition may occur independently of any a priori model or understanding of the significance of particular input types, features, or characteristics.
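A minimal sketch of computing a few such relational features from raw XY tracking data is given below. The frame rate, thresholds, and array layout are assumptions for illustration; they are not the parameters used by the disclosed system.

    import numpy as np

    FPS = 25.0  # assumed tracking frame rate (frames per second)

    def relational_features(p1, p2):
        """Per-frame relational features for two tracked actors.
        p1, p2: arrays of shape (n_frames, 2) holding XY positions."""
        sep = np.linalg.norm(p1 - p2, axis=1)              # separation distance
        v1 = np.diff(p1, axis=0) * FPS                     # velocity of actor 1
        toward = p2[:-1] - p1[:-1]                         # direction from 1 to 2
        toward /= np.linalg.norm(toward, axis=1, keepdims=True) + 1e-9
        # Speed of actor 1 resolved along the direction of actor 2:
        closing_speed = np.einsum("ij,ij->i", v1, toward)
        return sep, closing_speed

    def moving_toward_for(p1, p2, min_speed=1.0, min_seconds=1.0):
        """True if actor 1 moves toward actor 2 at >= min_speed m/s for at
        least min_seconds consecutively (one candidate input feature)."""
        _, closing = relational_features(p1, p2)
        needed = int(min_seconds * FPS)
        run = 0
        for c in closing:
            run = run + 1 if c >= min_speed else 0
            if run >= needed:
                return True
        return False

Features of this kind can be supplied either with explicit thresholds, as above, or as raw continuous inputs whose weights are learned by the machine learning system.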
In embodiments, an input type may be selected from the group consisting of relative direction of motion of at least two visible features, duration of relative motion of visible features with respect to each other, rate of motion of at least two visible features with respect to each other, acceleration of motion of at least two visible feature with respect to each other, projected point of intersection of at least two visible features with respect to each other and separation distance between at least two visible features with respect to each other, and the like. In embodiments of the present disclosure, there is provided a library of such features involving multiple actors over space and time. In the past machine learning (ML) literature, there has been relatively little need for such a library of spatiotemporal features, because there were few datasets with these characteristics on which learning could have been considered as an option. The library may include relationships between actors (e.g., players one through ten in basketball), relationships between the actors and other objects such as the ball, and relationships to other markers, such as designated points and lines on the court or field, and to projected locations based on predicted motion. Another key challenge is there has not been a labeled dataset for training the ML algorithms. Such a labeled dataset may be used in connection with various embodiments disclosed herein. For example, there has previously been no XYZ player-tracking dataset that already has higher level events, such as pick and roll (P&R) events labeled at each time frame they occur. Labeling such events, for many different types of events and sub-types, is a laborious process. Also, the number of training examples required to adequately train the classifier may be unknown. One may use a variation of active learning to solve this challenge. Instead of using a set of labeled data as training input for a classifier trying to distinguish A and B, the machine finds an unlabeled example that is closest to the boundary between As and Bs in the feature space. The machine then queries a human operator/labeler for the label for this example. It uses this labeled example to refine its classifier and then repeats. In one exemplary embodiment of active learning, the system also incorporates human input in the form of new features. These features are either completely devised by the human operator (and inputted as code snippets in the active learning framework), or they are suggested in template form by the framework. The templates use the spatiotemporal pattern library to suggest types of features that may be fruitful to test. The operator can choose a pattern, and test a particular instantiation of it, or request that the machine test a range of instantiations of that pattern. Multi-Loop Iterative Process Some features are based on outputs of the machine learning process itself. Thus, multiple iterations of training are used to capture this feedback and allow the process to converge. For example, a first iteration of the ML process may suggest that the Bulls tend to ice the P&R. This fact is then fed into the next iteration of ML training as a feature, which biases the algorithm to label Bulls' P&R defense as ices. The process converges after multiple iterations. In practice, two iterations have typically been sufficient to yield good results. 
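A minimal sketch of the uncertainty-sampling loop described above follows. It uses scikit-learn's logistic regression purely as a stand-in classifier; the ask_human callback, the pool structure, and the number of rounds are assumptions for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def active_learning_loop(X_labeled, y_labeled, X_pool, ask_human, rounds=50):
        """Train, query the unlabeled example closest to the decision boundary,
        ask a human operator for its label, and repeat."""
        X_labeled = np.asarray(X_labeled, dtype=float)
        y_labeled = np.asarray(y_labeled)
        X_pool = [np.asarray(x, dtype=float) for x in X_pool]
        clf = LogisticRegression(max_iter=1000)
        for _ in range(rounds):
            if not X_pool:
                break
            clf.fit(X_labeled, y_labeled)
            # Distance from the separating hyperplane; smallest means most uncertain.
            margins = np.abs(clf.decision_function(np.vstack(X_pool)))
            i = int(np.argmin(margins))
            x = X_pool.pop(i)
            label = ask_human(x)                 # human operator supplies the label
            X_labeled = np.vstack([X_labeled, x])
            y_labeled = np.append(y_labeled, label)
        clf.fit(X_labeled, y_labeled)            # final refit on everything labeled
        return clf

In this variation, the human-suggested features described above would simply be appended as additional columns of X before the next round of training, and output of one trained iteration (e.g., a team tendency) can likewise be fed back in as a feature for the next iteration.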
In accordance with exemplary embodiments, a canonical event datastore210may contain a definitive list of events that the system knows occurred during a game. This includes events extracted from the XYZ data, as well as those specified by third-party sources, such as PBP data from various vendors. The events in the canonical event datastore210may have game clock times specified for each event. The canonical event datastore210may be fairly large. To maintain efficient processing, it is shared and stored in-memory across many machines in the cloud. This is similar in principle to other methods such as Hadoop™; however, it is much more efficient, because in embodiments involving events, such as sporting events, where there is some predetermined structure that is likely to be present (e.g., the 24-second shot clock, or quarters or halves in a basketball game), it makes key structural assumptions about the data. Because the data is from sports games, for example, in embodiments one may enforce that no queries will run across multiple quarters/periods. Aggregation steps can occur across quarters/periods, but query results will not. This is one instantiation of this assumption. Any other domain in which locality of data can be enforced will also fall into this category. Such a design allows rapid and complex querying across all of the data, allowing arbitrary filters, rather than relying on either 1) long-running processes, or 2) summary data, or 3) pre-computed results on pre-determined filters. In accordance with exemplary and non-limiting embodiments, data is divided into small enough shards that each worker shard has a low latency response time. Each distributed machine may have multiple workers corresponding to the number of processes the machine can support concurrently. Query results do not rely on more than one shard, since we enforce that events not cross quarter/period boundaries. Aggregation functions all run incrementally rather than in batch process so that as workers return results, these are incorporated into the final answer immediately. To handle results such as rankings pages, where many rows may be returned, the aggregator uses hashes to keep track of the separate rows and incrementally updates them. Referring toFIG.3, an exploration loop may be enabled by the methods and systems disclosed herein, where questioning and exploration can occur, such as using visualizations (e.g., data effects, referred to as DataFX in this disclosure), processing can occur, such as to identify new events and metrics, and understanding emerges, leading to additional questions, processing and understanding. Referring toFIG.4, the present disclosure provides an instant player rankings feature as depicted in the illustrated user interface. A user can select among various types of available rankings402, as indicated in the drop down list410, such as rankings relating to shooting, rebounding, rebound ratings, isolations (Isos), picks, postups, handoffs, lineups, matchups, possessions (including metrics and actions), transitions, plays and chances. Rankings can be selected in a menu element404for players, teams, or other entities. Rankings can be selected for different types of play in the menu element408, such as for offense, defense, transition, special situations, and the like. The ranking interface allows a user to quickly query the system to answer a particular question instead of thumbing through pages of reports. 
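A much-simplified sketch of the shard-and-aggregate design described above is given below. The shard key, the per-row metric fields, and the predicate are illustrative assumptions; in a deployed system each shard would be held in memory by a separate worker process.

    from collections import defaultdict

    def shard_key(event):
        # Events never cross period boundaries, so (game, period) is a safe shard key.
        return (event["game_code"], event["period"])

    def build_shards(events):
        shards = defaultdict(list)
        for e in events:
            shards[shard_key(e)].append(e)
        return shards

    def query(shards, predicate):
        """Filter each shard independently and fold partial results into
        per-player rows as soon as each shard returns."""
        rows = defaultdict(lambda: {"count": 0, "made": 0})
        for shard in shards.values():                # each shard could be a worker
            for e in shard:
                if predicate(e):
                    row = rows[e["player"]]          # hash keyed by row identity
                    row["count"] += 1
                    row["made"] += e.get("made", 0)
        return rows

    # Example: all shots in the 4th period, aggregated per player.
    # result = query(build_shards(events), lambda e: e["type"] == "shot" and e["period"] == 4)

Because aggregation is incremental, rankings-style results with many rows can be updated as each shard's partial answer arrives rather than waiting for a batch job to finish.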
The user interface lets a user locate essential factors and evaluate talent of a player to make more informed decisions. FIGS.5A and5Bshow certain basic, yet quite in-depth, pages in the systems described herein, referred to in some cases as the “Eagle system.” This user interface may allow the user to rank players and teams by a wide variety of metrics. This may include identified actions, metrics derived from these actions, and other continuous metrics. Metrics may relate to different kinds of events, different entities (players and teams), different situations (offense and defense) and any other patterns identified in the spatiotemporal pattern recognition system. Examples of items on which various entities can be ranked in the case of basketball include chances, charges, closeouts, drives, frequencies, handoffs, isolations, lineups, matches, picks, plays, possessions, postups, primary defenders, rebounding (main and raw), off ball screens, shooting, speed/load and transitions. The Rankings UI makes it easy for a user to understand relative quality of one row item versus other row items, along any metric. Each metric may be displayed in a column, and that row's ranking within the distribution of values for that metrics may be displayed for the user. Color coding makes it easy for the user to understand relative goodness. FIGS.6A and6Bshow a set of filters in the UI, which can be used to filter particular items to obtain greater levels of detail or selected sets of results. Filters may exist for seasons, games, home teams, away teams, earliest and latest date, postseason/regular season, wins/losses, offense home/away, offensive team, defensive team, layers on the court for offense/defense, players off court for offense/defense, locations, offensive or defensive statistics, score differential, periods, time remaining, after timeout play start, transition/no transition, and various other features. The filters602for offense may include selections for the ballhandler, the ballhandler position, the screener, the screener position, the ballhandler outcome, the screener outcome, the direction, the type of pick, the type of pop/roll, the direction of the pop/roll, and presence of the play (e.g., on the wing or in the middle). Many other examples of filters are possible, as a filter can exist for any type of parameter that is tracked with respect to an event that is extracted by the system or that is in the spatiotemporal data set used to extract events. The present disclosure also allows situational comparisons. The user interface allows a user to search for a specific player that may fit into the offense. The highly accurate dataset and easy to use interface allow the user to compare similar players in similar situations. The user interface may allow the user to explore player tendencies. The user interface may allow locating shot locations and also may provide advanced search capabilities. Filters enable users to subset the data in a large number of ways and immediately receive metrics calculated on the subset. Using multiple loops for convergence in machine learning enables the system to return the newly filtered data and metrics in real-time, whereas existing methods would require minutes to re-compute the metrics given the filters, leading to inefficient exploration loops (FIG.3). Given that the data exploration and investigation process often require many loops, these inefficiencies can otherwise add up quickly. 
As illustrated with reference toFIGS.6A and6B, there are many filters that may enable a user to select specific situations of interest to analyze. These filters may be categorized into logical groups, including, but not limited to, Game, Team, Location, Offense, Defense, and Other. The possible filters may automatically change depending on the type of event being analyzed, for example, Shooting, Rebounding, Picks, Handoffs, Isolations, Postups, Transitions, Closeouts, Charges, Drives, Lineups, Matchups, Play Types, Possessions. For all event types, under the Game category, filters may include Season, specific Games, Earliest Date, Latest Date, Home Team, Away Team, where the game is being played Home/Away, whether the outcome was Wins/Losses, whether the game was a Playoff game, and recency of the game. For all event types, under the Team category, filters may include Offensive Team, Defensive Team, Offensive Players on Court, Defenders Players on Court, Offensive Players Off Court, Defenders Off Court. For all event types, under the Location category, the user may be given a clickable court map that is segmented into logical partitions of the court. The user may then select any number of these partitions in order to filter only events that occurred in those partitions. For all event types, under the Other category, the filters may include Score Differential, Play Start Type (Multi-Select: Field Goal ORB, Field Goal DRB, Free Throw ORB, Free Throw DRB, Jump Ball, Live Ball Turnover, Defensive Out of Bounds, Sideline Out of Bounds), Periods, Seconds Remaining, Chance After Timeout (T/F/ALL), Transition (T/F/ALL). For Shooting, under the Offense category, the filters may include Shooter, Position, Outcome (Made/Missed/All), Shot Value, Catch and Shoot (T/F/ALL), Shot Distance, Simple Shot Type (Multi-Select: Heave, Angle Layup, Driving Layup, Jumper, Post), Complex Shot Type (Multi-Select: Heave, Lob, Tip, Standstill Layup, Cut Layup, Driving Layup, Floater, Catch and Shoot), Assisted (T/F/ALL), Pass From (Player), Blocked (T/F/ALL), Dunk (T/F/ALL), Bank (T/F/ALL), Goaltending (T/F/ALL), Shot Attempt Type (Multi-select: FGA No Foul, FGM Foul, FGX Foul), Shot SEFG (Value Range), Shot Clock (Range), Previous Event (Multi-Select: Transition, Pick, Isolation, Handoff, Post, None). For Shooting, under the Defense category, the filters may include Defender Position (Multi-Select: PG, SG, SF, PF, CTR), Closest Defender, Closest Defender Distance, Blocked By, Shooter Height Advantage. For Picks, under the Offense category, the filters may include Ballhandler, Ballhandler Position, Screener, Screener Position, Ballhandler Outcome (Pass, Shot, Foul, Turnover), Screener Outcome (Pass, Shot, Foul, Turnover), Direct or Indirect Outcome, Pick Type (Reject, Slip, Pick), Pop/Roll, Direction, Wing/Middle, Middle/Wing/Step-Up. For Picks, under the Defense category, the filters may include Ballhandler Defender, Ballhandler Defender Position, Screener Defender, Screener Defender Position, Ballhandler Defense Type (Over, Under, Blitz, Switch, Ice), Screener Defense Type (Soft, Show, Ice, Blitz, Switch), Ballhandler Defense (Complex) (Over, Under, Blitz, Switch, Ice, Contain Trap, Weak), Screener Defense (Complex) (Over, Under, Blitz, Switch, Ice, Contain Trap, Weak, Up to Touch). 
For Drives, under the Offense category, the filters may include Ballhandler, Ballhandler Position, Ballhandler Outcome, Direct or Indirect, Drive Category (Handoff, Iso, Pick, Closeout, Misc.), Drive End (Shot Near Basket, Pullup, Interior Pass, Kickout, Pullout, Turnover, Stoppage, Other), Direction, Blowby (T/F). For Drives, under the Defense category, the filters may include Ballhandler Defender, Ballhandler Defender Position, Help Defender Present (T/F), Help Defenders. For most other events, under the Offense category, the filters may include Ballhandler, Ballhandler Position, Ballhandler Outcome, Direct or Indirect. For most other events, under the Defense category, the filters may include Ballhandler Defender, Ballhandler Defender Position. For Postups, under the Offense category, the filters may additionally include Area (Left, Right, Middle). For Postups, under the Defense category, the filters may additionally include Double Team (T/F). The present disclosure provides detailed analysis capabilities, such as through the depicted user interface embodiment ofFIG.7. In an example depicted inFIG.7, the user interface may be used to know if a player should try and ice the pick and roll or not between two players. Filters can go from all picks, to picks involving a selected player as ballhandler, to picks involving that ballhandler with a certain screener, to the type of defense played by that screener. By filtering down to particular matchups (by player combinations and actions taken), the system allows rapid exploration of the different options for coaches and players, and selection of preferred actions that had the best outcomes in the past. Among other things, the system may give a detailed breakdown of a player's opponent and a better idea of what to expect during a game. The user interface may be used to know and highlight opponent capabilities. A breakdowns UI may make it easy for a user to drill down to a specific situation, all while gaining insight regarding frequency and efficacy of relevant slices through the data. The events captured by the present system may be capable of being manipulated using the UI.FIG.8shows a visualization, where a drop-down feature802allows a user to select various parameters related to the ballhandler, such as to break down to particular types of situations involving that ballhandler. These types of “breakdowns” facilitate improved interactivity with video data, including enhanced video data created with the methods and systems disclosed herein. Most standard visualizations are static images. For large and complex datasets, especially in cases where the questions to be answered are unknown beforehand, interactivity enables the user to explore the data, ask new questions, get new answers. Visualizations may be color coded good (e.g., orange) to bad (e.g., blue) based on outcomes in particular situations for easy understanding without reading the detailed numbers. Elements like the sizes of partitions can be used, such as to denote frequency. Again, a user can comprehend significance from a glance. In embodiments, each column represents a variable for partitioning the dataset. It is easy for a user to add, remove, and re-arrange columns by clicking and dragging. This makes it easy to experiment with different visualizations. 
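For illustration, the column-by-column partitioning behind such a breakdown view can be sketched as shown below. The field names, the outcome metric, and the flat grouping are assumptions; the actual system computes frequencies and color-coded values for every level of the drill-down.

    from collections import defaultdict

    def breakdown(events, columns, metric):
        """Group events by successive column variables and report, for every
        partition, its frequency and the average of a chosen outcome metric."""
        groups = defaultdict(list)
        for e in events:
            key = tuple(e[c] for c in columns)        # one key per drill-down path
            groups[key].append(e)
        table = {}
        for key, members in groups.items():
            values = [metric(e) for e in members]
            table[key] = {
                "frequency": len(members),                   # shown as partition size
                "value": sum(values) / len(values),          # color-coded good/bad in the UI
            }
        return table

    # Example: picks broken down first by defensive coverage, then by direction.
    # breakdown(picks, ["rdef", "direction"], metric=lambda e: e["points"])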
Furthermore, the user can drill into a particular scenario by clicking on the partition of interest, which zooms into that partition, and redraws the partitions in the columns to the right so that they are re-scaled appropriately. This enables the user to view the relative sample sizes of the partitions in columns to the right, even when they are small relative to all possible scenarios represented in columns further to the left. In embodiments, a video icon takes a user to video clips of the set of plays that correspond to a given partition. Watching the video gives the user ideas for other variables to use for partitioning. Various interactive visualizations may be created to allow users to better understand insights that arise from the classification and filtering of events, such as ones that emphasize color coding for easy visual inspection and detection of anomalies (e.g., a generally good player with lots of orange but is bad/blue in one specific dimension). Conventionally, most standard visualizations are static images. However, for large and complex datasets, especially in cases where the questions to be answered are unknown beforehand, interactivity enables the user to explore the data, ask new questions, get new answers. For example, a breakdown view may be color coded good (orange) to bad (blue) for easy understanding without reading the numbers. Sizes of partitions may denote the frequency of events. Again, one can comprehend from a glance at the events that occur most frequently. Each column of a visualization may represent a variable for partitioning the dataset. It may be easy to add, remove, and re-arrange columns by clicking and dragging. This makes it easy to experiment with possible visualizations. In embodiments, a video icon may take a user to video clips, such as of the set of plays that correspond to that partition. Watching the video gives the user ideas for other variables to use for partitioning. In embodiments, a ranking view is provided. Upon mousing over each row of a ranking view, histograms above each column may give the user a clear contextual understanding that row's performance for each column variable. The shape of a distribution is often informative. Color-coded bars within each cell may also provide a view of each cell's performance that is available, without mousing over. Alternatively, the cells themselves may be color-coded. Referring toFIGS.9and10, a system may provide a personalized video in embodiments of the methods and systems described herein. For example, with little time to scout the opposition, the system can provide a user with relevant information to quickly prepare the team. The team may rapidly retrieve the most meaningful plays, cut, and compiled to specific needs of players. The system may provide immediate video cut-ups. In embodiments, the present disclosure provides a video that is synchronized with identified actions. For example, if spatiotemporal machine learning identifies a segment of a video as showing a pick and roll involving two players, then that video segment may be tagged, so that when that event is found (either by browsing or by filtering to that situation), the video can be displayed. Because the machine understands the precise moment that an event occurs in the video, a user-customizable segment of video can be created. For example, the user can retrieve video corresponding to x seconds before, and y seconds after, each event occurrence. Thus, the video may be tagged and associated with events. 
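A minimal sketch of pulling a user-customizable clip window around such a tagged event is given below. The frame-rate constant and the assumption that an alignment index has already mapped the event to a video frame are illustrative, not requirements of the system.

    VIDEO_FPS = 30.0  # assumed video frame rate

    def clip_for_event(event_frame, seconds_before, seconds_after, total_frames):
        """Return the (start_frame, end_frame) window for a tagged event,
        e.g. x seconds before and y seconds after the moment it occurs."""
        start = max(0, int(event_frame - seconds_before * VIDEO_FPS))
        end = min(total_frames - 1, int(event_frame + seconds_after * VIDEO_FPS))
        return start, end

    # Example: a pick tagged at video frame 54_000, shown with 4 s of lead-in
    # and 6 s of follow-through.
    print(clip_for_event(54_000, 4, 6, total_frames=200_000))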
The present disclosure may provide a video that may allow customization by numerous filters of the type disclosed above, relating to finding a video that satisfies various parameters, that displays various events, or combinations thereof. For example, in embodiments, an interactive interface provided by the present disclosure allows watching videos clips for specific game situations or actions. Reports may provide a user with easy access to printable pages summarizing pre-game information about an opponent, scouting report for a particular player, or a post-game summary. For example, the reports may collect actionable useful information in one to two easy-to-digest pages. These pages may be automatically scheduled to be sent to other staff members, e.g., post-game reports sent to coaches after each game. Referring toFIG.11, a report may include statistics for a given player, as well as visual representations, such as of locations1102where shots were taken, including shots of a particular type (such as catch and shoot shots). The UI as illustrated inFIG.12provides a court comparison view1202among several parts of a sports court (and can be provided among different courts as well). For example, filters1204may be used to select the type of statistic to show for a court. The statistics can be filtered to show results filtered by left side1208or right side1214. Where the statistics indicate an advantage, the advantages can be shown, such as the advantages of left centerFIG.1210and advantages of right centerFIG.1212. In sports, the field of play is an important domain constant or elements. Many aspects of the game are best represented for comparison on a field of play. In embodiments, a four court comparison view1202is a novel way to compare two players, two teams, or other entities, to gain an overview view of each player/team (Leftmost and RightmostFIGS.1208,1214) and understand each one's strengths/weaknesses (Left and Right CenterFIGS.1210,1212). The court view UI1302as illustrated inFIG.13provides a court view1304of a sport arena, in accordance with an embodiment of the present disclosure. Statistics for very specific court locations can be presented on a portion1308of the court view. The UI may provide a view of custom markings, in accordance with an embodiment of the present invention. Referring toFIG.14, filters may enable users to subset the data in a large number of ways, and immediately receive metrics calculated on the subset. Descriptions of particular events may be captured and made available to users. Various events may be labeled in a game, as reflected inFIG.15, which provides a detailed view of a timeline1502of a game, broken down by possession1504, by chances1508, and by specific events1510that occurred along the timeline1502, such as determined by spatiotemporal pattern recognition, by human analysis, or by a combination of the two. Filter categories available by a user interface of the present disclosure may include ones based on seasons, games, home teams, away teams, earliest date, latest date, postseason/regular season, wins/losses, offense home/away, offensive team, defensive team, players on the court for offense/defense, players off court for offense/defense, location, score differential, periods, time remaining, play type (e.g., after timeout play) and transition/no transition. 
Events may include ones based on primitive markings, such as shots, shots with a corrected shot clock, rebounds, passes, possessions, dribbles, and steals, and various novel event types, such as SEFG (shot quality), EFG+, player adjusted SEFG, and various rebounding metrics, such as positioning, opportunity percentage, attack, conversion percentage, rebounding above position (RAP), attack+, conversion+ and RAP+. Offensive markings may include simple shot types (e.g., angled layup, driving layup, heave, post shot, jumper), complex shot types (e.g., post shot, heave, cut layup, standstill layup, lob, tip, floater, driving layup, catch and shoot stationary, catch and shoot on the move, shake & raise, over screen, pullup and stepback), and other information relating to shots (e.g., catch and shoot, shot clock, 2/3S, assisted shots, shooting foul/not shooting foul, made/missed, blocked/not blocked, shooter/defender, position/defender position, defender distance and shot distance). Other events that may be recognized, such as through the spatiotemporal learning system, may include ones related to picks (ballhandler/screener, ballhandler/screener defender, pop/roll, wing/middle, step-up screens, reject/slip/take, direction (right/left/none), double screen types (e.g., double, horns, L, and handoffs into pick), and defense types (ice, blitz, switch, show, soft, over, under, weak, contain trap, and up to touch), ones related to handoffs (e.g., receive/setter, receiver/setter defender, handoff defense (ice, blitz, switch, show, soft, over, or under), handback/dribble handoff, and wing/step-up/middle), ones related to isolations (e.g., ballhandler/defender and double team), and ones related to post-ups (e.g., ballhandler/defender, right/middle/left and double teams). Defensive markings are also available, such as ones relating to closeouts (e.g., ballhandler/defender), rebounds (e.g., players going for rebounds (defense/offense)), pick/handoff defense, post double teams, drive blow-bys and help defender on drives), ones relating to off ball screens (e.g., screener/cutter and screener/cutter defender), ones relating to transitions (e.g., when transitions/fast breaks occur, players involved on offense and defense, and putback/no putback), ones relating to how plays start (e.g., after timeout/not after timeout, sideline out of bounds, baseline out of bounds, field goal offensive rebound/defensive rebound, free throw offensive rebound/defensive rebound and live ball turnovers), and ones relating to drives, such as ballhandler/defender, right/left, blowby/no blowby, help defender presence, identity of help defender, drive starts (e.g., handoff, pick, isolation or closeout) and drive ends (e.g., shot near basket, interior pass, kickout, pullup, pullout, stoppage, and turnover). These examples and many others from basketball and other sports may be defined, based on any understanding of what constitutes a type of event during a game. Markings may relate to off ball screens (screener/cutter), screener/cutter defender, screen types (down, pro cut, UCLA, wedge, wide pin, back, flex, clip, zipper, flare, cross, and pin in). FIG.16shows a system1602for querying and aggregation. In embodiments, data is divided into small enough shards that each worker has low latency response time. Each distributed machine may have multiple workers corresponding to the number of processes the machine can support concurrently. 
Query results do not rely on more than one shard, since we enforce that events not cross quarter/period boundaries. Aggregation functions all run incrementally rather than in batch process, so that as workers return results, these are incorporated into the final answer immediately. To handle results such as rankings pages, where many rows may be returned, the aggregator uses hashes to keep track of the separate rows and incrementally updates them. FIG.17shows a process flow for a hybrid classification process that uses human labelers together with machine learning algorithms to achieve high accuracy. This is similar to the flow described above in connection withFIG.2, except with the explicit inclusion of the human-machine validation process. By taking advantage of aligned video as described herein, one may provide an optimized process for human validation of machine labeled data. Most of the components are similar to those described in connection withFIG.2and in connection with the description of aligned video, such as the XYZ data source1702, cleaning process1704, spatiotemporal pattern recognition module1712, event processing system1714, video source1708, alignment facility1710and video snippets facility1718. Additional components include a validation and quality assurance process1720and an event-labeling component1722. Machine learning algorithms are designed to output a measure of confidence. For the most part, this corresponds to the distance from a separating hyperplane in the feature space. In embodiments, one may define a threshold for confidence. If an example is labeled by the machine and has confidence above the threshold, the event goes into the canonical event datastore210and nothing further is done. If an example has a confidence score below the threshold, then the system may retrieve the video corresponding to this candidate event, and ask a human operator to provide a judgment. The system asks two separate human operators for labels. If the given labels agree, the event goes into the canonical event datastore210. If they do not, a third person, known as the supervisor, is contacted for a final opinion. The supervisor's decision may be final. The canonical event datastore210may contain both human marked and completely automated markings. The system may use both types of marking to further train the pattern recognition algorithms. Event labeling is similar to the canonical event datastore210, except that sometimes one may either 1) develop the initial gold standard set entirely by hand, potentially with outside experts, or 2) limit the gold standard to events in the canonical event datastore210that were labeled by hand, since biases may exist in the machine labeled data. FIG.18shows test video input for use in the methods and systems disclosed herein, including views of a basketball court from simulated cameras, both simulated broadcast camera views1802, as well as purpose-mounted camera views1804. FIG.19shows additional test video input for use in the methods and systems disclosed herein, including input from broadcast video1902and from purpose-mounted cameras1904in a venue. Referring toFIG.20, probability maps2004may be computed based on likelihood there is a person standing at each x, y location. FIG.21shows a process flow of an embodiment of the methods and systems described herein. Initially, in an OCR process2118, machine vision techniques are used to automatically locate the “score bug” and determine the location of the game clock, score, and quarter information. 
This information is read and recognized by OCR algorithms. Post-processing algorithms using various filtering techniques are used to resolve issues in the OCR. Kalman filtering/HMMs used to detect errors and correct them. Probabilistic outputs (which measure the degree of confidence) assist in this error detection/correction. Next, in a refinement process2120, sometimes, a score bug is nonexistent or cannot be detected automatically (e.g., sometimes during PIP or split screens). In these cases, remaining inconsistencies or missing data is resolved with the assistance of human input. Human input is designed to be sparse so that labelers do not have to provide input at every frame. Interpolation and other heuristics are used to fill in the gaps. Consistency checking is done to verify the game clock. Next, in an alignment process,2112the Canonical Datastore2110(referred to elsewhere in this disclosure alternatively as the event datastore) contains a definitive list of events that the system knows occurred during a game. This includes events extracted from the XYZ data2102, such as after cleansing2104and spatiotemporal pattern recognition2108, as well as those specified by third-party sources such as player-by-player data sets2106, such as available from various vendors. Differences among the data sources can be resolved, such as by a resolver process. The events in the canonical datastore2110may have game clock times specified for each event. Depending on the type of event, the system knows that the user will be most likely to be interested in a certain interval of game play tape before and after that game clock. The system can thus retrieve the appropriate interval of video for the user to watch. One challenge pertains to the handling of dead ball situations and other game clock stoppages. The methods and systems disclosed herein include numerous novel heuristics to enable computation of the correct video frame that shows the desired event, which has a specified game clock, and which could be before or after the dead ball since those frames have the same game clock. The game clock is typically specified only at the one-second level of granularity, except in the final minute of each quarter. Another advance is to use machine vision techniques to verify some of the events. For example, video of a made shot will typically show the score being increased, or will show a ball going through a hoop. Either kind of automatic observation serves to help the alignment process result in the correct video frames being shown to the end user. Next, in a query UI component2130, the UI enables a user to quickly and intuitively request all video clips associated with a set of characteristics: player, team, play type, ballhandler, ballhandler velocity, time remaining, quarter, defender, etc. In addition, when a user is watching a video clip, the user can request all events that are similar to whatever just occurred in the video. The system uses a series of cartoon-like illustration to depict possible patterns that represent “all events that are similar.” This enables the user to choose the intended pattern, and quickly search for other results that match that pattern. Next, the methods and systems may enable delivery of enhanced video, or video snips2124, which may include rapid transmission of clips from stored data in the cloud. The system may store video as chunks (e.g., one-minute chunks), such as in AWS S3, with each subsequent file overlapping with a previous file, such as by 30 seconds. 
Thus, each video frame may be stored twice. Other instantiations of the system may store the video as different sized segments, with different amounts of overlap, depending on the domain of use. In embodiments, each video file is thus kept at a small size. The 30-second duration of overlap may be important because most basketball possessions (or chances in our terminology) do not last more than 24 seconds. Thus, each chance can be found fully contained in one video file, and in order to deliver that chance, the system does not need to merge content from multiple video files. Rather, the system simply finds the appropriate file that contains the entire chance (which in turn contains the event that is in the query result), and returns that entire file, which is small. With the previously computed alignment index, the system is also able to inform the UI to skip ahead to the appropriate frame of the video file in order to show the user the query result as it occurs in that video file. This delivery may occur using AWS S3 as the file system, the Internet as transport, and a browser-based interface as the UI. It may find other instantiations with other storage, transport, and UI components. FIG.22shows certain metrics that can be extracted using the methods and systems described herein, relating to rebounding in basketball. These metrics include positioning metrics, attack metrics, and conversion metrics. For positioning, the methods and systems described herein first address how to value the initial position of the players when the shot is taken. This is a difficult metric to establish. The methods and systems disclosed herein may give a value to the real estate that each player owns at the time of the shot. This breaks down into two questions: (1) what is the real estate for each player? (2) what is it worth? To address the first question, one may apply the technique of using Voronoi (or Dirichlet) tessellations. Voronoi tessellations are often applied to problems involving spatial allocation. These tessellations partition a space into Voronoi cells given a number of points in that space. For any point, it is the intersection of the self-containing half-spaces defined by hyper-planes equidistant from that point to all other points. That is, a player's cell is all the points on the court that are closer to the player than any other player. If all players were equally capable they should be able to control any rebound that occurred in this cell. One understands that players are not equally capable however this establishment of real estate is to set a baseline for performance. Over performance or under performance of this baseline will be indicative of their ability. To address the second question, one may condition based on where the shot was taken and calculate a spatial probability distribution of where all rebounds for similar shots were obtained. For each shot attempt, one may choose a collection of shots closest to the shot location that provides enough samples to construct a distribution. This distribution captures the value of the real estate across the court for a given shot. To assign each player a value for initial positioning, i.e., the value of the real estate at the time of the shot, one may integrate the spatial distribution over the Voronoi cell for that player. This yields the likelihood of that player getting the rebound if no one moved when the shot was taken and they controlled their cell. 
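A compact sketch of this initial-positioning computation follows. It assumes the shot-conditioned rebound distribution is available on a discrete grid of court locations (as with the binning discussed later in this section), which turns the Voronoi integration into a nearest-player sum; the array shapes are assumptions for illustration.

    import numpy as np

    def initial_positioning(players_xy, rebound_prob, grid_xy):
        """players_xy: (n_players, 2) locations at the time of the shot.
        rebound_prob: (n_cells,) probability of the rebound landing in each
        grid cell, conditioned on the shot location (sums to 1).
        grid_xy: (n_cells, 2) centers of the grid cells.
        Returns each player's initial positioning value: the probability mass
        inside their Voronoi cell, i.e. the cells they are closest to."""
        # Distance from every grid cell to every player.
        d = np.linalg.norm(grid_xy[:, None, :] - players_xy[None, :, :], axis=2)
        owner = d.argmin(axis=1)                   # nearest player per cell
        values = np.zeros(len(players_xy))
        np.add.at(values, owner, rebound_prob)     # integrate over each cell set
        return values

    # Example: ten players and a 13 x 12 court grid (156 cells).
    # values = initial_positioning(players_xy, rebound_prob, grid_xy)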
We note that because we use the distribution of locations of the rebound conditioned on the shot, it is not a matter of controlling more area or even necessarily area close to the basket, but the most valuable area for that shot. While the most valuable areas are typically close to the basket, there are some directional effects. For an attack or hustle metric, one may look at phases following a shot, such as an initial crash phase. To analyze this, one may look at the trajectory of the ball and calculate the time that it gets closest to the center of the rim. At this point, one may reapply the Voronoi-based analysis and calculate the rebound percentages of each player, i.e., the value of the real estate that each player has at the time the ball hits the rim. The change in this percentage from the time the shot is taken to the time it hits the rim is the value or likelihood the player has added during the phase. Players can add value by crashing the boards, i.e., moving closer to the basket towards places where the rebound is likely to go, or by blocking out, i.e., preventing other players from taking valuable real estate that is already established. A useful, novel metric for the crash phase is generated by subtracting the rebound probability at the shot from the rebound probability at the rim. The issue is that the ability to add probability is not independent of the probability at the shot. Consider a case of a defensive player who plays close to the basket. The player is occupying high-value real estate, and once the shot is taken, other players are going to start coming into this real estate. It is difficult for players with high initial positioning value to have positive crash deltas. Now consider a player out by the three-point line. Their initial value is very low and moving any significant distance toward the rim will give them a positive crash delta. Thus, it is not fair to compare these players on the same scale. To address this, one may look at the relationship between the raw crash deltas (the difference between the probability at the rim and the probability at the shot) and the probability at the shot. In order to normalize for this effect, one may subtract the value of the regression at the player's initial positioning value from the raw crash delta to form the player's Crash value. Intuitively, the value indicates how much more probability is added by this player beyond what a player with similar initial positioning would add. One may apply this normalization methodology to all the metrics, since the initial positioning affects the other dimensions and it can be beneficial to control for it. A player has an opportunity to rebound the ball if they are the closest player to the ball once the ball gets below ten feet (or if they possess the ball while it is above ten feet). The player with the first opportunity may not get the rebound, so multiple opportunities could be created after a single field goal miss. One may tally the number of field goal misses for which a player generated an opportunity for themselves and divide by the number of field goal misses to create an opportunity percentage metric. This indicates the percentage of field goal misses for which that player ended up being closest to the ball at some point. The ability for a player to generate opportunities beyond his initial position is the second dimension of rebounding: Hustle. Again, one may then apply the same normalization process as described earlier for Crash.
The reason that there are often multiple opportunities for rebounds for every missed shot is that being closest to the ball does not mean that a player will convert it into a rebound. This gives rise to the third dimension of rebounding: conversion. The raw conversion metric for players is calculated simply by dividing the number of rebounds obtained by the number of opportunities generated. Formally, given a shot described by its 2D coordinates on the court, s_x and s_y, which is followed by a rebound r, also described by its coordinates on the court, r_x and r_y, one may estimate P(r_x, r_y|s_x, s_y), the probability density of the rebound occurring at each position on the court given the shot location. This may be accomplished by first discretizing the court into, for example, 156 bins, created by separating the court into 13 equally spaced columns and 12 equally spaced rows. Then, given some set S of shots from a particular bin, the rebounds from S will be distributed in the bins of the court according to a multinomial distribution. One may then apply maximum likelihood estimation to determine the probability of a rebound in each of the bins of the court, given the training set S. This process may be performed for each bin that shots may fall in, giving 156 distributions for the court. Using these distributions, one may determine P(r_x, r_y|s_x, s_y). First, the shot is mapped to an appropriate bin. The probability distribution determined in the previous step is then utilized to determine the probability of the shot being rebounded in every bin of the court. One assumes that within a particular bin, the rebound is uniformly likely to occur at any coordinate. Thus, a uniform probability density, derived from the probability of the rebound falling in the bin, is assigned to all points in the bin. Using the probability density P(r_x, r_y|s_x, s_y), one may determine the probability that each particular player grabs the rebound given their location and the positions of the other players on the court. To accomplish this, one may first create a Voronoi diagram of the court, where the set of points is the location (p_x, p_y) for each player on the court. In such a diagram, each player is given a set of points that they control. Formally, one may characterize the set of points that player P_k controls in the following manner, where X is all points on the court, and d denotes the Cartesian distance between two points: R_k = {x ∈ X | d(x, P_k) ≤ d(x, P_j) for all j ≠ k}. Now there exist the two components for determining the probability that each player gets the rebound given their location, the shot's location, and the locations of all the other players on the court. One may determine this value by assuming that if a ball is rebounded, it will be rebounded by the closest available player. Therefore, by integrating the probability of a rebound over each location in the player's Voronoi cell, we determine their rebound probability: ∫_{R_k} P(r_x, r_y|s_x, s_y) dx dy. The preceding section describes a method for determining the player's rebounding probability, assuming that the players are stationary. However, players often move in order to get into better positions for the rebound, especially when they begin in poor positions. One may account for these phenomena. Let the player's raw rebound probability be denoted by r_p and let d be an indicator variable denoting whether the player is on defense.
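The maximum likelihood estimate for the multinomial described above is simply the empirical frequency of rebounds in each bin, so the 156 per-bin distributions can be built by counting. The sketch below assumes a 50-by-94-foot court and one observed rebound per shot; both are illustrative assumptions rather than values taken from the disclosure.

```python
import numpy as np

N_COLS, N_ROWS = 13, 12            # 156 bins, as described above
COURT_W, COURT_H = 50.0, 94.0      # assumed court dimensions in feet

def bin_index(x, y):
    """Map a court coordinate to one of the 156 bins."""
    col = min(int(x / COURT_W * N_COLS), N_COLS - 1)
    row = min(int(y / COURT_H * N_ROWS), N_ROWS - 1)
    return row * N_COLS + col

def rebound_distributions(shots, rebounds):
    """shots, rebounds: sequences of (x, y) pairs, one rebound per shot.
    Returns a (156, 156) array whose row i is the maximum likelihood
    multinomial estimate of where shots from bin i are rebounded."""
    counts = np.zeros((N_COLS * N_ROWS, N_COLS * N_ROWS))
    for (sx, sy), (rx, ry) in zip(shots, rebounds):
        counts[bin_index(sx, sy), bin_index(rx, ry)] += 1
    totals = counts.sum(axis=1, keepdims=True)
    # Empirical frequencies are the MLE; rows with no observed shots stay all-zero.
    return np.divide(counts, totals, out=np.zeros_like(counts), where=totals > 0)
```

The per-player probability then follows by integrating the selected row over each player's Voronoi cell, as in the grid-based sketch given earlier.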
One may then attempt to estimate the player's probability of getting a rebound, which we express in the following manner: P(r | r_p, d). One does this by performing two linear regressions, one for the offensive side of the ball and one for the defensive side. One may attempt to estimate P(r | r_p, d) in the following manner: P(r | r_p, d=0) = A_o * r_p + B_o and P(r | r_p, d=1) = A_d * r_p + B_d. This results in four quantities to estimate. One may do this by performing an ordinary least squares regression for offensive and defensive players' overall rebounds in the test set. One may use 1 as the target variable when the player rebounds the ball, and 0 when he does not. This regression is performed for offense to determine A_o and B_o and for defense to determine A_d and B_d. One can then use these values to determine the final probability of each player getting the rebound given the shot's location and the other players on the court. Novel shooting metrics can also be created using this system. One is able to determine the probability of a shot being made given various features of the shot s, denoted as F. Formally, each shot can be characterized by a feature vector of the following form: [dist(hoop, shooter), dist(shooter, defender_0), |angle(hoop, shooter, defender_0)|, |angle(shooter, hoop, hoop_other)|, I(shot=catchAndShoot), dist(shooter, defender_1)]. Here, the hoop represents the basket the shooter is shooting at, defender_0 refers to the closest defender to the shooter, defender_1 refers to the second closest defender, and hoop_other refers to the hoop on the other end of the court. The angle function refers to the angle between three points, with the middle point serving as the vertex. I(shot=catchAndShoot) is an indicator variable, set to 1 if the shooter took no dribbles in the individual possession before shooting the shot, and otherwise set to 0. Given these features, one seeks to estimate P(s=make). To do this, one may first split the shots into two categories, one where dist(hoop, shooter) is less than 10, and the other for the remaining shots. Within each category, one may find coefficients β_0, β_1, . . . , β_5 for the following equation: 1/(1+e^(−t)), where t = F_0*β_0 + F_1*β_1 + . . . + F_5*β_5. Here, F_0 through F_5 denote the feature values for the particular shot. One may find the coefficient values β_0, β_1, . . . , β_5 using logistic regression on the training set of shots S. The target for the regression is 0 when the shot is missed and 1 when the shot is made. By performing two regressions, one is able to find appropriate values for the coefficients, both for shots within 10 feet and for longer shots outside 10 feet. As depicted inFIG.23, three or four dimensions can be dynamically displayed on a 2-D graph scatter rank view2302, including the x position, the y position, the size of the icon, and changes over time. Each dimension may be selected by the user to represent a variable of the user's choice. Also, on mouse-over, related icons may highlight, e.g., mousing over one player may highlight all players on the same team. As depicted inFIGS.24A and24B, reports2402can be customized by the user so that a team can create a report that is specifically tailored to that team's process and workflow. Another feature is that the report may visually display not only the advantages and disadvantages for each category shown but also the size of that advantage or disadvantage, along with the value and rank of each side being compared. This visual language enables a user to quickly scan the report and understand the most important points.
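The two linear fits and the two logistic fits described above can be expressed compactly with a standard library; scikit-learn is used here purely as one possible choice, not as a tool named in the disclosure, and the function signatures below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def fit_conversion_models(raw_prob, rebounded, on_defense):
    """Fits P(r | r_p, d) = A * r_p + B separately for offense (d=0) and defense (d=1).
    raw_prob: (N,) raw rebound probabilities; rebounded: (N,) 0/1 outcomes;
    on_defense: (N,) 0/1 indicator. Returns {d: (A, B)}."""
    models = {}
    for d in (0, 1):
        mask = on_defense == d
        reg = LinearRegression().fit(raw_prob[mask].reshape(-1, 1), rebounded[mask])
        models[d] = (float(reg.coef_[0]), float(reg.intercept_))
    return models

def fit_shot_models(features, made, dist_to_hoop):
    """Fits one logistic model for shots inside 10 feet and one for longer shots.
    features: (N, 6) array in the feature order given above; made: (N,) 0/1 targets."""
    models = {}
    for name, mask in (("inside_10ft", dist_to_hoop < 10),
                       ("outside_10ft", dist_to_hoop >= 10)):
        models[name] = LogisticRegression().fit(features[mask], made[mask])
    return models
```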
Referring toFIG.25, an embodiment of a quality assurance UI2502is provided. The QA UI2502presents the human operator with both an animated 2D overhead view2510of the play and a video clip2508of the play. A key feature is that only the few seconds relevant to that play are shown to the operator, instead of an entire possession, which might be over 20 seconds long, or even worse, requiring the human operator to fast forward in the game tape to find the event herself. Keyboard shortcuts are used for all operations, to maximize efficiency. Referring toFIG.26, the operator's task is simplified to its core, so that we lighten the cognitive load as much as possible: if the operator is verifying a category of plays X, the operator simply has to choose, in an interface element2604of the embodiment of the QA UI2602, whether the play shown in the view2608is valid (Yes or No), or (Maybe). She can also deem the play to be a (Duplicate), a (Compound) play, meaning that it is just one type-X action in a consecutive sequence of type-X actions, or choose to (Flag) the play for supervisor review for any reason. Features of the UI2602include the ability to fast forward, rewind, submit, and the like, as reflected in the menu element2612. A table2610can allow a user to indicate the validity of plays occurring at designated times. FIG.27shows a method of camera pose detection, also known as "court solving."FIG.27also shows the result of automatic detection of the "paint," and use of the boundary lines to solve for the camera pose. The court lines and hoop location, given the solved camera pose, are then shown projected back onto the original image. This projection is from the first iteration of the solving process, and one can see that the projected court and the actual court do not yet align perfectly. One may use machine vision techniques to find the hoop and to find the court lines (e.g., paint boundaries), then use the found lines to solve for the camera pose. Multiple techniques may be used to determine court lines, including detecting the paint area. Paint area detection can be done automatically. One method involves automatically removing the non-paint area of the court by automatically executing a series of "flood fill" type actions across the image, selecting for court-colored pixels. This leaves the paint area in the image, and it is then straightforward to find the lines/points. One may also detect all lines on the court that are visible, e.g., background or 3-point arc. In either case, intersections provide points for camera solving. A human interface2702may be provided for providing points or lines to assist the algorithms, to fine-tune the automatic solver. Once all inputs are provided, the camera pose solver is essentially a randomized hill climber that uses the mathematical models as a guide (since it may be under-constrained). It may use multiple random initializations. It may advance a solution if it is one of the best in that round. When an iteration is done, it may repeat until the error is small.FIG.46shows the result of automatic detection of the "paint," and use of the boundary lines to solve for the camera pose. The court lines and hoop location, given the solved camera pose, are then shown projected back onto the original image. This projection is from the first iteration of the solving process, and one can see that the projected court and the actual court do not yet align perfectly. FIG.28relates to camera pose detection.
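The randomized hill climbing described above for court solving can be sketched as follows; the seven-parameter pose vector and the project callback are assumptions made only for illustration, since the disclosure does not fix a particular camera parameterization.

```python
import numpy as np

def reprojection_error(pose, court_pts_3d, image_pts_2d, project):
    """Mean pixel distance between observed court points and their projections.
    project(pose, pts3d) -> (N, 2) predicted pixel coordinates (assumed given)."""
    predicted = project(pose, court_pts_3d)
    return float(np.linalg.norm(predicted - image_pts_2d, axis=1).mean())

def solve_camera_pose(court_pts_3d, image_pts_2d, project,
                      n_restarts=20, n_iters=2000, step=0.05, tol=1.0):
    """Randomized hill climber with multiple random initializations: perturb the
    current pose, keep the candidate if it reduces reprojection error, and stop
    early once the error is small."""
    best_pose, best_err = None, np.inf
    for _ in range(n_restarts):
        pose = np.random.randn(7)   # e.g., rotation (3), translation (3), focal length (1)
        err = reprojection_error(pose, court_pts_3d, image_pts_2d, project)
        for _ in range(n_iters):
            candidate = pose + step * np.random.randn(pose.size)
            cand_err = reprojection_error(candidate, court_pts_3d, image_pts_2d, project)
            if cand_err < err:
                pose, err = candidate, cand_err
        if err < best_err:
            best_pose, best_err = pose, err
        if best_err < tol:
            break
    return best_pose, best_err
```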
The second step2802shown in the Figure shows how the human can use this GUI to manually refine camera solutions that remain slightly off. FIG.29relates to auto-rotoscoping. Rotoscoping2902is required in order to paint graphics around players without overlapping the players' bodies. Rotoscoping is partially automated by selecting out the parts of the image with a similar color as the court. Masses of color left in the image can be detected to be human silhouettes. The patch of color can be "vectorized" by finding a small number of vectors that surround the patch, but without capturing too many pixels that might not represent a player's body. FIGS.30A,30B, and30Crelate to scripted storytelling with an asset library3002. To produce the graphics-augmented clips, a company may either lean heavily on a team of artists, or it may determine how best to handle scripting based on a library of assets. For example, instead of manually tracing a player's trajectory and increasing the shot probability in each frame as the player gets closer to the ball, a scripting language allows the methods and systems described herein to specify this augmentation in a few lines of code. In another example, for rebound clips, the Voronoi partition and the associated rebound positioning percentages can be difficult to compute for every frame. A library of story element effects may list each of these current and future effects. Certain combinations of scripted story element effects may be best suited for certain types of clips. For example, a rebound and put-back will likely make use of the original shot probability, the rebound probabilities including Voronoi partitioning, and then go back to the shot probability of the player going for the rebound. This entire script can be learned as being well-associated with the event type in the video. Over time, the system can automatically infer the best, or at least retrieve an appropriate, storyline to match up with a selected video clip containing certain events. This enables augmented video clips, referred to herein as DataFX clips, to be auto-generated and delivered throughout a game. FIGS.31-38show examples of DataFX visualizations. The visualization ofFIG.31requires the court position to be solved in order to lay down the grid and player "puddles". The shot arc also requires the backboard/hoop solution. InFIG.32, the Voronoi tessellation, heat map, and shot and rebound arcs all require the camera pose solution. The highlight of the player uses rotoscoping. InFIG.33, in addition to the above, players are rotoscoped for highlighting.FIGS.34-38show additional visualizations that are based on the use of the methods and systems disclosed herein. In embodiments, DataFX (video augmented with data-driven special effects) may be provided for pre-, during, or post-game viewing, for analytic and entertainment purposes. DataFX may combine advanced data with Hollywood-style special effects. Pure numbers can be boring, while pure special effects can be silly, but combining the two can produce very powerful results.
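Returning to the auto-rotoscoping step ofFIG.29, a minimal sketch of the court-color segmentation and vectorization is given below using OpenCV; the HSV color bounds and the minimum blob area are assumed tuning parameters, not values from the disclosure.

```python
import cv2
import numpy as np

def player_silhouette_polygons(frame_bgr, court_lo_hsv, court_hi_hsv, min_area=500.0):
    """Remove court-colored pixels, treat the remaining large blobs as candidate
    player silhouettes, and vectorize each blob as a simplified polygon."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    court_mask = cv2.inRange(hsv, court_lo_hsv, court_hi_hsv)   # court-colored pixels
    player_mask = cv2.bitwise_not(court_mask)                   # everything else
    # OpenCV 4 return signature: (contours, hierarchy)
    contours, _ = cv2.findContours(player_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    polygons = []
    for contour in contours:
        if cv2.contourArea(contour) < min_area:                 # discard small noise blobs
            continue
        polygons.append(cv2.approxPolyDP(contour, 2.0, True))   # few vectors around the patch
    return polygons
```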
Example features used alone or in combination in DataFX can include use of a Voronoi overlay on the court, a grid overlay on the court, a heatmap overlay on the court, a waterfall effect showing likely trajectories of the ball after a missed field goal attempt, a spray effect on a shot showing likely trajectories of the shot to the hoop, circles and glows around highlighted players, statistics and visual cues over or around players, arrows and other markings denoting play actions, calculation overlays on the court, and effects showing each variable taken into account. FIGS.39A through41Bshow a product referred to as "Clippertron." Provided is a method and system whereby fans can use their distributed mobile devices to control individually and/or collectively what is shown on the Jumbotron or video board(s). An embodiment enables the fan to go through mobile application dialogs in order to choose the player (FIG.39A), shot type (FIG.39B), and shot location (FIG.39D) to be shown on the video board (FIG.39C). The fan can also enter his or her own name so that it is displayed alongside the highlight clip. Clips are shown on the Video Board in real time or queued up for display. Variations include getting information about the fan's seat number (FIG.40). This could be used to show a live video feed of the fan while their selected highlight is being shown on the video board. Referred to as "FanMix" is a web-based mobile application that enables in-stadium fans to control the Jumbotron and choose highlight clips to push to the Jumbotron. An embodiment of FanMix enables fans to choose their favorite player, shot type, and shot location using a mobile device web interface. Upon pressing the submit button, a highlight showing this particular shot is sent to the Jumbotron and displayed according to placement order in a queue. Enabling this capability is the fact that video is lined up to each shot within a fraction of a second. This allows many clips to be shown in quick succession, each showing video from the moment of release to the ball going through the hoop. In some cases, the video may start from the beginning of a play, instead of at the moment of release. The methods and systems disclosed herein may include methods and systems for allowing a user or group of users to control presentation of a large scale display in an event venue, where the options for control are based on a context of the content as determined by machine extraction of semantically relevant events from the content. The methods and systems disclosed herein may include methods and systems for enabling interaction with a large scale display system and may include taking a video feed; using machine learning to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; producing a video content data structure that includes the associated context information; and providing an application by which at least one user can interact with the video content data structure, wherein the options for user interaction are based on the context information, wherein the interaction with the video content data structure controls the presentation of the content on a large scale display.
In embodiments, one or more users may interact with menus on an application, such as a smart phone application, in an arena or other location that has a large-scale display. The users may express preferences, such as by voting, for what content should be displayed, including selecting preferred types of events and/or contexts (which may be organized as noted above based on semantically relevant filters), selecting what metrics should be displayed (options for which may be offered based on context information for particular extracted video events), and the like. In embodiments, a large scale display in a venue where a live event is taking place may offer games, quizzes, or the like, where users may respond by text, SMS, or the like. The content of such games or quizzes may be constructed at least in part based on a machine semantic understanding of the live event, such as asking users which player has the most rebounds in the first quarter, or the like. The methods and systems disclosed herein may include methods and systems for a user to control Jumbotron clips based on contextualized content filters. The methods and systems disclosed herein may include methods and systems for a Jumbotron fan quiz based on machine semantic understanding of a live game. The methods and systems disclosed herein may include methods and systems wherein the application comprises a quiz for a user, wherein the quiz is constructed based at least in part on a machine semantic understanding of a live game that is taking place in a venue where the large-scale display is located. In embodiments, a fan quiz may ask questions based on proprietary machine learned metrics such as "which player took the hardest shots in this quarter." The methods and systems disclosed herein may include methods and systems for embedding a machine extracted video cut in an application, where the selection of the embedded cut for the application is based on the context of the video cut. First Person Point of View (POV) In embodiments, interactive visualization218, as illustrated inFIG.2, may include producing a reconstruction of an event, such as a game, such as a 3D reconstruction or rendering. In embodiments, a 3D reconstruction or rendering of an event may be produced using a process that presents the event from a defined point of view, such as the first person point of view of a participant in the event, such as a player.FIG.39Fillustrates an embodiment of such a process, referred to herein in some cases as a first person POV process, or simply a first person process. A first person process may allow the user to select a player's view to follow. A first person process may automatically pin a user's view to the head of the selected player. The end result of a first person process may be dynamically rendered from the view of the selected player as a play occurs. A first person process may be an automated first person process. An automated first person process may produce a 3D reconstruction or rendering of a game and render each frame from the view of a player selected by a user. A first person process may be a virtual reality-based first person process. A virtual reality-based first person process may produce a 3D reconstruction or rendering of a game that allows a user to control the orientation of a view from the head movements of a user. In embodiments, the point of view may be controlled by, for example, player head tracking. In embodiments, users may choose a player whose point of view will be presented.
Location of a view may be controlled automatically via head tracking data. View orientation may be controlled by the head movements of a user. In embodiments, the head movements of a user may be recorded by virtual reality (VR) technology. VR technology may be Oculus Rift™ technology and the like. Point Cloud Construction As illustrated inFIG.39F, a first person process may include constructing a point cloud that provides a 3D model of a real world scene. Point cloud construction may begin by producing binary, background-subtracted images for each time-synchronized frame on each camera. Using these binary images and the calibrations of each camera, a 3D convex hull may be produced by discretizing the scene into voxels and filling each voxel if the voxel is contained within the ray projected from the camera through the image visual hull. The image visual hull may be the silhouette of the scene, for example. The silhouette of the scene may be used for a shape-from-silhouette reconstruction. The resulting convex hull may contain voxels that may not actually be present in the world, due to reconstructing only the visual hull. In order to achieve a more precise point cloud, the 3D convex hull may be carved using photo consistency methods. Photo consistency methods may back-project the surface of a 3D reconstructed visual hull onto each visible camera. Photo consistency methods may also check to ensure the color of the pixels is consistent with the same pixel from another camera, or with nearby pixels, such as to avoid unrealistic discontinuities. If the colors from each visible camera do not agree, the voxel may be carved. This process may be repeated for the entire convex hull, producing the final carved point cloud. Point cloud construction may estimate the skeletal pose of all participants in a real world scene. Point cloud construction may fit a hand-made participant model to the estimated pose of each participant in a real world scene. In an example, the real world scene could be a sports court and the participants could be all the players on the sports court. In this example, point cloud construction could fit a hand-made player model to the estimated pose of each player on the sports court. Point cloud construction may include meshing techniques, which may be used to improve the quality of a final visualization for a user. Meshing techniques may be used to mesh multiple point clouds. Meshing techniques may be used to provide a view that may be very close to a point cloud, for example. Player Identification A first person process may use player identification to enable the user to select from which player's view to render the 3D reconstruction. Player identification may involve multiple steps in order to produce reliable results. Player identification may start by performing jersey number detection, as illustrated inFIG.39F. Jersey numbers may be mapped to player names. Jersey numbers may then be mapped to player names using official rosters and the like. Jersey number detection may be performed frame-by-frame. Frame-by-frame jersey number detection may be performed by scanning and classifying each window as a number or as nothing, such as using a support vector machine (SVM), a supervised machine learning model used for classification. The SVM may be trained, such as using training sets of manually marked jersey numbers from the game video, for example. Results from individual frame-by-frame detection may be stitched together to form temporal tracks.
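A simplified sketch of the voxel-filling step of the point cloud construction described above is given below; the project callback, which maps 3D voxel centers to pixel coordinates for a calibrated camera, is an assumption, and photo-consistency carving would be applied afterward.

```python
import numpy as np

def carve_visual_hull(voxel_centers, cameras, silhouettes, project):
    """voxel_centers: (V, 3) candidate voxel centers from discretizing the scene.
    cameras:     per-camera calibration objects (assumed).
    silhouettes: per-camera binary (H, W) background-subtracted masks.
    project(cam, pts3d) -> (V, 2) pixel coordinates (assumed given).
    Keeps a voxel only if it projects inside the silhouette in every camera
    whose image it falls within, a basic shape-from-silhouette step."""
    keep = np.ones(len(voxel_centers), dtype=bool)
    for cam, mask in zip(cameras, silhouettes):
        px = np.round(project(cam, voxel_centers)).astype(int)
        h, w = mask.shape
        in_frame = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
        on_silhouette = np.zeros(len(voxel_centers), dtype=bool)
        on_silhouette[in_frame] = mask[px[in_frame, 1], px[in_frame, 0]] > 0
        keep &= ~in_frame | on_silhouette   # a visible voxel must lie on the silhouette
    return voxel_centers[keep]
```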
Individual frame-by-frame detection may be stitched together to form temporal tracks using a k-shortest paths algorithm. Jersey number tracks may be associated with existing, more continuous player tracking data. Associating jersey number tracks with existing, more continuous player tracking data may produce robust tracks of identifiable players. Head Tracking A first person process may use head tracking in order to control the location of the view within a 3D reconstruction, as illustrated inFIG.39F. Head tracking may involve multiple steps in order to produce reliable results. The first step of head tracking may be the same as for player identification. The first step of head tracking may include head detection. Head detection may use a model trained on heads instead of on jersey numbers. Head detection may be performed frame by frame. Head detection may include frame-by-frame detection. Frame-by-frame head detection may be performed by scanning each image. Frame-by-frame head detection may be performed by scanning each image and classifying each window as a head or not. Classifying each window as a head or not may be performed using an SVM. An SVM may be trained. An SVM may be trained using manually marked head samples from previously recorded games. An SVM may be a team-dk-SVM. The results of the detection may then be used in 2D tracking to produce temporal 2D tracklets of each head within a camera's frame. 2D tracklets may then be triangulated using the results of all cameras to produce a 3D estimation of the location of all heads on the court. A 3D estimation of the location of all heads on the court may be 3D tracklets. 3D tracklets may then be stitched together. 3D tracklets may then be stitched together using an algorithm. An algorithm may be a k-shortest paths (KSP) algorithm. 3D tracklets may be stitched together to produce potential final head tracking results. Linear programming may be used to choose optimal head paths. Gaze Estimation As illustrated inFIG.39F, a first person process may use gaze estimation. Gaze estimation may be used to control the orientation of a view mounted on the player's head within the 3D reconstruction. Gaze estimation may be computed by assuming a player is looking in the direction opposite the numbers on the back of the player. Jersey number detection may be performed frame by frame. Frame-by-frame jersey number detection may be performed by scanning and classifying each window as a number or nothing using an SVM. The SVM may be trained using manually marked jersey numbers from an existing game video. An assumption may be made to determine the angle of a jersey number located on the back or front of a player's jersey. An assumption may be that a jersey number is only visible when the jersey number is perfectly aligned with a camera that made the detection. Cameras may have a known location in space. Because the cameras have a known location in space, the vector between the jersey and the camera may be computed using the known location of the camera in space. Frame-by-frame estimation may be performed after a vector is calculated. The results of the frame-by-frame estimation may be filtered to provide a smoothed experience for a first person process. FIGS.41A-41Brelate to an offering referred to as "inSight." This offering allows pushing of relevant stats to fans' mobile devices4104.
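The gaze assumption described above, that a detected back jersey number faces directly away from the detecting camera, reduces to a single vector computation; the smoothing step shown below is an assumed simple moving average, since the disclosure does not specify a particular filter.

```python
import numpy as np

def gaze_from_jersey(jersey_xyz, camera_xyz):
    """Unit vector from the detecting camera to the jersey: under the assumption
    above, this is the direction the player is facing when a back number is seen."""
    v = np.asarray(jersey_xyz, dtype=float) - np.asarray(camera_xyz, dtype=float)
    return v / np.linalg.norm(v)

def smooth_gaze(gaze_per_frame, window=5):
    """Moving-average filter over per-frame gaze vectors, re-normalized to unit length."""
    g = np.asarray(gaze_per_frame, dtype=float)
    kernel = np.ones(window) / window
    smoothed = np.column_stack([np.convolve(g[:, i], kernel, mode="same") for i in range(3)])
    norms = np.linalg.norm(smoothed, axis=1, keepdims=True)
    return smoothed / np.clip(norms, 1e-9, None)
```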
For example, if player X just made a three-point shot from the wing, this would show statistics about how often he made those types of shots4108, versus other types of shots, and what types of play actions he typically made these shots off of. inSight does for hardcore fans what Eagle (the system described above) does for team analysts and coaches. Information, insights, and intelligence may be delivered to fans' mobile devices while they are seated in the arena. This data is not only beautiful and entertaining but is also tuned into the action on the court. For example, after a seemingly improbable corner three by a power forward, the fan is immediately pushed information that shows the shot's frequency, difficulty, and the likelihood of being made. In embodiments, the platform features described above as "Eagle," or a subset thereof, may be provided, such as in a mobile phone form factor for the fan. An embodiment may include a storyboard stripped down, such as from a format for an 82″ touch screen to a small 4″ screen. Content that corresponds to the real time events happening in the game may be pushed to a device. Fans may be provided access to various effects (e.g., DataFX features described herein) and to the other features of the methods and systems disclosed herein. FIGS.42A-42CandFIG.43show touchscreen product interface elements4202,4204,4208,4302and4304. These are essentially many different skins and designs on the same basic functionality described throughout this disclosure. Advanced stats are shown in an intuitive large-format touch screen interface. A touchscreen may act as a storyboard for showing various visualizations, metrics, and effects that conform to an understanding of a game or element thereof. Embodiments include a large format touch screen for commentators to use during a broadcast. While inSight serves up content to a fan, the Storyboard enables commentators on TV to access content in a way that helps them tell the most compelling story to audiences. Features include providing a court view, a hexagonal Frequency+Efficiency View, a "City/Matrix" View with grids of events, a Face/Histogram View, animated intro sequences that communicate to a viewer that each head's position indicates that player's relative ranking, an animated face shuttle that shows re-ranking when a metric is switched, a Scatter Rank View, a ranking using two variables (one on each axis), a Trends View, integration of metrics with on-demand video, and the ability to re-skin or simplify for varying levels of commentator ability. In embodiments, new metrics can be used for other activities, such as driving new types of fantasy games, e.g., point scoring in fantasy leagues could be based on new metrics. In embodiments, DataFX can show the player how his points were scored, e.g., an overlay that runs a counter over an RB's head showing yards rushed while the video shows the RB going down the field. In embodiments, one can deliver, for example, video clips (possibly enhanced by DataFX effects) corresponding to plays that scored points for a fantasy user's team for that night or week. Using an inSight-like mobile interface, a social game can be made so that much of the game play occurs in real time while the fan is watching the game. Using inSight-like mobile device features, a social game can be managed so that game play occurs in real time while a fan is watching the game, experiencing various DataFX effects and seeing fantasy scoring-relevant metrics on screen during the game.
In embodiments, the methods and systems may include a fantasy advice or drafting tool for fans, presenting rankings and other metrics that aid in player selection. Just as Eagle enables teams to get more wins by devising better tactics and strategy, we could provide an Eagle-like service for fantasy players that gives the players a winning edge. The service/tool would enable fans to research all the possible players, and help them execute a better draft or select a better lineup for an upcoming week/game. DataFX can also be used for instant replays, with DataFX optimized so that it can produce "instant replays" with DataFX overlays. This relies on a completely automated solution for court detection, camera pose solving, player tracking, and player rotoscoping. Interactive DataFX may also be adapted for display on a second screen, such as a tablet, while a user watches a main screen. Real time or instant replay viewing and interaction may be used to enable such effects. In a second screen-type viewing experience, the fan could interactively toggle on and off various elements of DataFX. This enables the fan to customize the experience and to explore many different metrics. Rather than only DataFX-enabled replays, the system could be further optimized so that DataFX is overlaid in true real time, enabling the user to toggle between a live video feed and a live video feed that is overlaid with DataFX. The user would then also be able to choose the type of DataFX to overlay, or which player(s) to overlay it on. A touch screen UI may be established for interaction with DataFX. Many of the above embodiments may be used for basketball, as well as for other sports and for other items that are captured in video, such as TV shows, movies, or live video (e.g., news feeds). For sports, a player tracking data layer may be employed to enable the computer to "understand" every second of every game. This enables the computer to deliver content that is extracted from portions of the game and to augment that content with relevant story-telling elements. The computer thus delivers personalized interactive augmented experiences to the end user. For non-sports domains, such as TV shows or movies, there is no player tracking data layer that assists the computer in understanding the event. Rather, in this case, the computer derives, in some other way, an understanding of each scene in a TV show or movie. For example, the computer might use speech recognition to extract the dialogue throughout a show. In further examples, the computer might use computer vision to recognize objects in each scene, such as robots in the Transformer movie. In further examples, the computer might use combinations of these inputs and others to recognize things like explosions. In further examples, the sound track could also provide clues. The resulting system would use this understanding to deliver the same kind of personalized interactive augmented experience as we have described for the sports domain. For example, a user could request to see the Transformer movie series, but only a compilation of the scenes where there are robots fighting and no human dialogue. This enables "short form binge watching," where users can watch content created by chopping up and recombining bits of content from original video. The original video could be sporting events, other events, TV shows, movies, and other sources. Users can thus gorge on video compilations that target their individual preferences.
This also enables a summary form of watching, suitable for catching up with current events or currently trending video, without having to watch entire episodes or movies. FIG.44provides a flow under which the platform may ingest and align the content of one or more broadcast video feeds and one or more tracking camera video feeds. At a step4412, a broadcast video feed may be ingested, which may consist of an un-calibrated and un-synchronized video feed. The ingested broadcast video feed may be processed by performing optical character recognition at a step4414, such as to extract information from the broadcast video feed that may assist with aligning events within the feed with events identified in other sources of video for the same event. This may include recognizing text and numerical elements in the broadcast video feed, such as game scores, the game clock, player numbers, player names, text feeds displayed on the video, and the like. For example, the time on the game clock, or the score of a game, may assist with time-alignment of a broadcast feed with another video feed. At a step4404objects may be detected within the broadcast video feed, such as using machine-based object-recognition technologies. Objects may include players (including based on recognizing player numbers), body parts of players (e.g., heads of players, torsos of players, etc.) equipment (such as the ball in a basketball game), and many others. Once detected at the step4404, objects may be tracked over time in a step4418, such as in progressive frames of the broadcast video feed. Tracked objects may be used to assist in calibrating the broadcast video intrinsic and extrinsic camera parameters by associating the tracked objects with the same objects as identified in another source, such as a tracking camera video feed. At a step4402, in parallel with the steps involved in ingesting and processing a broadcast video feed, video feeds from tracking cameras, such as tracking cameras for capturing 3D motion in a venue (like a sports arena), may be ingested. The tracking camera video feeds may be calibrated and synchronized to a frame of reference, such as one defined by the locations of a set of cameras that are disposed at known locations within the venue where the tracking camera system is positioned. At a step4406, one or more objects may be detected within the tracking camera video feed, including various objects of the types noted above, such as players, numbers, items of equipment, and the like. In embodiments, spatiotemporal coordinates of the objects may be determined by processing the information from the tracking camera video feed, the coordinates being determined for the recognized objects based on the frame of reference defined by the camera positions of the tracking system. In embodiments, the coordinates being determined for the recognized objects can be based on the court or the field on which the game is played. In embodiments, the coordinates being determined for the recognized objects are based on the boundaries, lines, markers, indications, and the like associated with the court or the field on which the game is played. The video feed from the tracking camera system and the information about spatiotemporal object positions may be used to generate a point cloud at a step4416, within which voxel locations of the objects detected at the step4406may be identified at a step4418. 
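As a non-authoritative illustration of the optical character recognition step described above, the game clock region of a broadcast frame can be cropped and read with an off-the-shelf OCR engine; pytesseract is used here only as an example tool, and the clock's pixel coordinates are assumed to be known for the particular broadcast layout.

```python
import re
import cv2
import pytesseract

def read_game_clock(frame_bgr, clock_box):
    """frame_bgr: a broadcast video frame. clock_box: (x, y, w, h) pixel region of
    the on-screen game clock (assumed known for the broadcast layout).
    Returns the remaining time in seconds, or None if the clock cannot be read."""
    x, y, w, h = clock_box
    crop = frame_bgr[y:y + h, x:x + w]
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(binary, config="--psm 7")  # single text line
    match = re.search(r"(\d{1,2}):(\d{2})", text)
    if not match:
        return None
    minutes, seconds = int(match.group(1)), int(match.group(2))
    return 60 * minutes + seconds
```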
The tracking camera video feed that was processed to detect and track objects may be further processed at a step4410by using spatiotemporal pattern recognition (such as machine-based spatiotemporal pattern recognition as described throughout this disclosure) to identify one or more events, which may be a wide range of events as described throughout this disclosure, such as events that correspond to patterns in a game or sport. In embodiments, other feeds may be available that may contain additional information about events that are contained in the tracking camera video feed. For example, a data feed, such as a play-by-play feed, for a game may be ingested at a step4422. At a step4420, the information from multiple sources may be aligned, such as aligning the play-by-play data feed from the step4422with events recognized at the step4410. Similarly, at a step4424the recognized event data in the tracking camera video feed at the step4410may be aligned with events recognized in the broadcast video feed at the step4414, resulting in time-aligned broadcast video, tracking camera, and other (e.g., play-by-play) feeds. Once the tracking camera video feed and the broadcast video feed are time-aligned for an event, objects detected at the step4404in the broadcast video feed and tracked at the step4418(e.g., players' heads) may be used at a step4428to calibrate the broadcast video camera position, such as by identifying the broadcast video camera position within the frame of reference of the tracking camera system used to capture the tracking camera video feed. This may include comparing sizes and orientations of the same object as it was detected at the step4404in the broadcast video feed and at the step4406in the tracking camera system video feed. In embodiments, calibration parameters of the broadcast camera can be determined by, among other things, comparing positions of detected objects in the video with detected three-dimensional positions of the corresponding objects that can be obtained using the calibrated tracking system. In embodiments, heads of the players in the game can be suitable objects because the heads of the players can be precisely located relative to other portions of the bodies of the players. Once calibrated, the broadcast video camera information can be processed as another source just like any of the tracking cameras. This may include re-calibrating the broadcast video camera position for each of a series of subsequent events, as the broadcast video camera may move or change zoom between events. Once the broadcast video camera position is calibrated to the frame of reference of the tracking camera system, at a step4430pixel locations in the broadcast video feed may be identified, corresponding to objects in the broadcast video feed, which may include using information about voxel locations of objects in the point cloud generated from the motion tracking camera feed at the step4418and/or using image segmentation techniques on the broadcast video feed. 
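Once the game clock (or another shared timing signal, such as recognized events) is available from both sources, the time alignment described above can be sketched as a nearest-neighbor match between the two feeds; this is a coarse, assumed first pass, and finer alignment could use the tracked object positions discussed above.

```python
def align_frames_by_clock(broadcast_clock, tracking_clock):
    """broadcast_clock: {broadcast_frame_index: game_clock_seconds} read via OCR.
    tracking_clock:  {tracking_frame_index: game_clock_seconds} from the tracking system.
    Returns {broadcast_frame_index: tracking_frame_index} by matching each broadcast
    frame to the tracking frame with the closest game-clock value."""
    tracking_items = sorted(tracking_clock.items(), key=lambda kv: kv[1])
    alignment = {}
    for b_frame, b_time in broadcast_clock.items():
        t_frame = min(tracking_items, key=lambda kv: abs(kv[1] - b_time))[0]
        alignment[b_frame] = t_frame
    return alignment
```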
The process ofFIG.44thus provides time-aligned broadcast video feeds, tracking camera event feeds, and play-by-play feeds, where within each feed pixel locations or voxel locations of objects and backgrounds are known, so that various activities can be undertaken to process the feeds, such as for augmenting the feeds, performing pattern recognition on objects and events within them (such as to find plays following particular patterns), automatically clipping or cutting them to produce content (such as capturing a reaction in broadcast video to an event displayed in or detected by the tracking camera feeds based on a time sequence of time-aligned events), and many others as described throughout this disclosure. In some embodiments, the platform may use stationary features on a playing surface (e.g., a basketball court) to calibrate the broadcast video camera parameters and to time align two or more video feeds. For example, the platform may utilize stationary lines (e.g., yard lines, top of the three point line, a half court line, a center field line, side lines, intersections between half court or field lines and side lines, logos, goal posts, and the like) to calibrate the broadcast video camera parameters. In these embodiments, the stationary features may be detected in the broadcast video feed and in the tracking video feed. In embodiments, the platform may determine the x, y, and z locations of the stationary features in the tracking video feed, and may calibrate the broadcast video camera parameters based on the x, y, z coordinates of the stationary features or voxel coordinates. For example, in embodiments, the platform may cross-reference the pixel locations of a stationary feature in the broadcast video feed with the x, y, z coordinates of the stationary feature in the tracking camera feeds. Once the broadcast video feed is calibrated with respect to one or more tracking camera feeds, moving objects tracked in the broadcast video can be cross-referenced against the locations of the respective moving objects from the tracking camera video feeds. In some of these embodiments, the platform may track moving objects in the broadcast video feed and the tracking camera feed(s) with respect to the locations of the stationary features in the respective broadcast video feed and tracking camera feeds to time align the broadcast video feed and tracking camera feeds. For example, the platform may time align one or more broadcast video feeds and one or more tracking camera feeds at respective time slices where a player crosses a logo or other stationary features on the playing surface in each of the respective feeds (broadcast video and tracking camera feeds). Referring toFIG.45, embodiments of the methods and systems disclosed herein may involve handling multiple video input feeds4502, information from one or more tracking systems4512(such as player tracking systems that may provide time-stamped location data and other information, such as physiological monitoring information, activity type information, etc.), and one or more other input sources4510(such as sources of audio information, play-by-play information, statistical information, event information, etc.). In embodiments, live video input feeds4502are encoded by one or more encoding systems4504to produce a series of video segment files4508, each consisting of a video chunk, optionally of short duration, e.g., four seconds. 
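The calibration step described above, cross-referencing pixel locations of stationary features or tracked objects in the broadcast frame against their three-dimensional coordinates from the tracking system, can be sketched with a standard perspective-n-point solver. OpenCV's solvePnP is used here only as one possible tool, and the guessed intrinsic matrix is an assumption that a production system would refine.

```python
import cv2
import numpy as np

def calibrate_broadcast_camera(world_pts, pixel_pts, image_size, focal_guess=1500.0):
    """world_pts: (N, 3) 3D coordinates of stationary features or tracked objects
    (e.g., line intersections, player heads) in the tracking system's frame of reference.
    pixel_pts: (N, 2) corresponding pixel locations in the broadcast frame.
    Returns the rotation and translation of the broadcast camera plus the assumed intrinsics."""
    width, height = image_size
    K = np.array([[focal_guess, 0.0, width / 2.0],
                  [0.0, focal_guess, height / 2.0],
                  [0.0, 0.0, 1.0]])
    ok, rvec, tvec = cv2.solvePnP(np.asarray(world_pts, dtype=np.float64),
                                  np.asarray(pixel_pts, dtype=np.float64),
                                  K, None)               # no lens distortion assumed
    return ok, rvec, tvec, K
```

Because the broadcast camera may pan or zoom between events, this solve can be repeated per event, as noted above.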
Video segment files4514from different input feeds corresponding to the same time interval are considered as part of a temporal group4522associated with that time interval. The temporal group4522may also include information and other content from tracking systems4512and other input sources4510. In embodiments, each video segment file4508may independently and in parallel undergo various processing operations4518in one or more processing systems, such as transcoding to various file formats, streaming protocols, and the like. The derived video files4520output from the processing operations4518may be associated with the same temporal group4522. Temporal grouping4522enables time synchronization among the original and derived files without having to further maintain or track timing or synchronization information. Such processing operations4518may include, without limitation, standard video on demand (VOD) transcoding, such as into lower bit rate video files. Processing operations4518may also include augmentation, such as with graphics, audio overlays, or data, producing augmented derived video files4520. Other data derived from the video streams or obtained from other input sources4510(e.g., coordinate positions of players and objects obtained via optical or chip tracking systems4512), which may typically become available with a small time delay relative to the live video input streams4502, may also be synchronized to the video segment files4508in a temporal group4522, such as by adding them as metadata files to the corresponding temporal group or by binding them to the video segment files4514. In embodiments, a manifest file4524based on these temporal groups4522may be created to enable streaming of the original video input feed4502, the video segment files4514and/or derived video files4520as a live, delayed or on-demand stream. Synchronization among the output streams may enable combining and/or switching4528seamlessly among alternative video feeds (e.g., different angles, encoding, augmentations or the like) and data feeds of a live streamed event. Among other benefits, synchronization across original video input feeds4502, video segment files4508, derived video files4520with encoded, augmented or otherwise processed content, and backup video feeds, described by a manifest file4524, may allow client-side failover from one stream to another without time discontinuity in the viewing of the event. For instance, if an augmented video stream resulting from processing operations4518is temporarily unavailable within the time offset at which the live stream is being viewed or falls below a specified buffering amount, a client application4530consuming the video feed may temporarily fail over to an un-augmented video input feed4502or encoded video segment file4508. In embodiments, the granularity with which the client application4530switches back to the augmented stream4528when available may depend on semantically defined boundaries in the video feed, which in embodiments may be based on a semantic understanding of events within the video feed, such as achieved by the various methods and systems described in connection with the technology stack100and the processes described throughout this disclosure. 
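A minimal sketch of the temporal grouping just described is given below; the field names and the four-second default are illustrative assumptions, chosen to mirror the example in the text.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TemporalGroup:
    """All original segments, derived (transcoded or augmented) files, and metadata
    covering the same interval are bound together, so consumers can switch among
    them without separate synchronization bookkeeping."""
    start_time: float                                          # interval start, in seconds
    duration: float = 4.0                                      # e.g., four-second chunks
    segments: Dict[str, str] = field(default_factory=dict)     # feed_id -> segment URI
    derived: Dict[str, str] = field(default_factory=dict)      # variant_id -> derived file URI
    metadata: List[str] = field(default_factory=list)          # e.g., tracking-data files

def group_for(groups: Dict[int, TemporalGroup], t: float, duration: float = 4.0) -> TemporalGroup:
    """Look up (or create) the temporal group covering time t."""
    key = int(t // duration)
    return groups.setdefault(key, TemporalGroup(start_time=key * duration, duration=duration))
```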
For example, a switch back to derived video file4520with various augmentations added in processing operations4518may be timed to occur after a change of possession, a timeout, a change in camera angle, a change in point-of-view, or other appropriate points in the action, so that the switching occurs while minimizing disruption of the viewing experience. Switching may also be controlled by semantic understanding4532of the content of different video input feeds4502at each time instant; for example, if a camera is not pointing at the current action on the court, an alternative video input feed4502, video segment file4514or derived video file4520may be selected. In embodiments, a “smart pipe” may be provided consisting of multiple aligned content channels (e.g., audio, video, or data channels) that are indexed both temporally and spatially. Spatial indexing and alignment4534may include indexing of pixels in 2D streams, voxels in 3D streams, and other objects, such as polygonal meshes used for animation, 3D representation, or the like. In embodiments, a wide variety of elements may be indexed, such as, without limitation, events, and locations of objects (including players, game objects, and objects in the environment, such as a court or arena) involved in those events. In embodiments, a further variety of elements may be indexed including information and statistics related to events and locations. In embodiments, a further variety of elements may be indexed including locations of areas corresponding to floor areas, background areas, signage areas, or the like where information, augmentations, graphics, animations, advertising, or the like may be displayed over a content frame. In embodiments, a further variety of elements may be indexed including indices or indicators of what information, augmentation elements or the like that are available to augment a video feed in a content channel such as ones that may be selected individually or in combination. In embodiments, a further variety of elements may be indexed including predefined combinations of content (e.g., particular combinations of audio, video, information, augmentation elements, replays, or other content elements), such as constituting channels or variations from which end-users may choose ones that they prefer. Thus, a system for spatial indexing and alignment4534may provide spatial indexing and alignment information to the processing operations4518(or may be included therein), such that the derived video files4520(and optionally various objects therein) that are indexed both temporally and spatially. In such a case, the “smart pipe” for synchronized, switchable and combinable content streams4528may contain sufficient indexed and aligned content to allow the creation of derived content, the creation of interactive applications, and the like, each optionally tied to live and recorded events (such as sporting events). In embodiments, the tracking systems4512, the spatial indexing and alignment4534and the semantic understanding4532may be part of the larger alignment, tracking, and semantic system included in the systems and methods disclosure herein that may take various inputs including original video feeds and play-by-play feeds, and may produce X, Y, Z tracking data and semantic labels. The X, Y, Z tracking data and semantic labels may be stored as separate metadata files in the temporal group4522or used to produce derived video files4520in the temporal group4522. 
In embodiments, any combination of inputs such as from a tracking camera system, a 3D camera array, broadcast video, a smartphone video, lidar, and the like may be used to automatically obtain a 3D understanding of a game. The automatically obtained 3D understanding of the game may be used to index voxels of 3D representations (e.g., AR/VR video) or pixels of any 2D video footage (e.g., from tracking cameras, broadcast, smartphones, reconstructed video from any point of view such as first person point of view of players in the game) or alternatively to voxels/pixels, other graphics representations such as polygonal meshes. In embodiments, a “smart pipe” may consist of multiple aligned content channels (e.g., audio, video, or data channels) that are indexed both temporally and spatially (e.g., indexing of pixels/voxels/polygonal meshes) with events and locations of players/objects involved in those events. By way of this example, the indexing both temporally and spatially with events and locations of players/objects involved in those events may also include information and statistics related to events and locations. The indexing both temporally and spatially with events and locations of players/objects involved in those events may also include locations of areas corresponding to floor or background areas where information, augmentations (e.g., filters that manipulate the look of the ball/players) or advertising may be displayed over each video frame. In embodiments, available pieces of information and augmentation elements may be selected individually or in combination. In embodiments, combinations of audio, video, information, augmentation, replays, and the like may constitute channels for end-users to choose from. The smart pipe may contain sufficient indexed and aligned content to create derived content and interactive apps tied to live and recorded games. In embodiments, the composition of video via frames, layers and/or tracks may be generated interactively by distributed sources, e.g., base video of the sporting event, augmentation/information layers/frames from different providers, audio tracks from alternative providers, advertising layers/frames from other providers, leveraging indexing and synchronization concepts, and the like. By way of this example, the base layers and/or tracks may be streamed to the various providers as well as to the clients. In embodiments, additional layers and/or tracks may be streamed directly from the providers to the clients and combined at the client. In embodiments, the composition of video via frames, layers and/or tracks and combinations thereof may be generated interactively by distributed sources and may be based on user personalizations. In embodiments, the systems and methods described herein may include a software development kit (SDK)4804that enables content being played at a client media player4808to dynamically incorporate data or content from at least one separate content feed4802. In these embodiments, the SDK4804may use timecodes or other timing information in the video to align the client's current video playout time with data or content from the at least one separate content feed4802, in order to supply the video player with relevant synchronized media content4810. In operation, as shown inFIG.48, a system4800(e.g., the system described herein) may output one or more content feeds4802-1,4802-2. . .4802-N. The content feeds may include video, audio, text, and/or data (e.g., statistics of a game, player names). 
In some embodiments, the system4800may output a first content feed4802-1that includes a video and/or audio that is to be output (e.g., displayed) by a client media player4808. The client media player4808may be executed by a user device (e.g., a mobile device, a personal computing device, a tablet computing device, and the like). The client media player4808is configured to receive the first content feed4802and to output the content feed4802via a user interface (e.g., display device and/or speakers) of the user device. Additionally or alternatively, the client media player4808may receive a third-party content feed4812from a third-party data source (not shown). For example, the client media player4808may receive a live-game video stream from the operator of an arena. Regardless of the source, a content feed4802-2or4812may include timestamps or other suitable temporal indicia to identify different positions (e.g., frames or chunks) in the content feed. The client media player4808may incorporate the SDK4804. The SDK4804may be configured to receive additional content feeds4802-2. . .4802-N to supplement the outputted media content. For example, a content feed4802-2may include additional video (e.g., a highlight or alternative camera angle). In another example, a content feed4802-2may include data (e.g., statistics or commentary relating to particular game events). Each additional content feed4802-2. . .4802-N may include timestamps or other suitable temporal indicia as well. The SDK4804may receive the additional content feed(s)4802-2. . .4802-N and may augment the content feed being output by the media player with the one or more additional content feeds4802-2. . .4802-N based on the timestamps of the respective content feeds4802-1,4802-2, . . .4802-N to obtain dynamic synchronized media content4810. For example, while playing a live feed (with a slight lag) or a video-on-demand (VOD) feed of a basketball game, the SDK4804may receive a first additional content feed4802containing a graphical augmentation of a dunk in the game and a second additional content feed4802indicating the statistics of the player who performed the dunk. The SDK4804may incorporate the additional content feeds into the synchronized media content4810by augmenting the dunk in the live or VOD feed with the graphical augmentation and the statistics. In some embodiments, a client app using the SDK may allow client-side selection or modification of which subset of the available additional content feeds to incorporate. In some implementations, the SDK4804may include one or more templates that define a manner by which the different content feeds4802may be laid out. Furthermore, the SDK4804may include instructions that define a manner by which the additional content feeds4802are to be synchronized with the original content feed. In embodiments, the systems and methods disclosed herein may include joint compression of channel streams, such as successive refinement source coding, to reduce streaming bandwidth and/or reduce channel switching time, and the like. In embodiments, the systems and methods disclosed herein may include event analytics and/or location-based games including meta-games, quizzes, fantasy league and sport, betting, and other gaming options that may be interactive with many of the users at and connected to the event, such as via identity-based user input, e.g., touching or clicking a player predicted to score next.
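A simplified sketch of the timestamp-based augmentation performed by the SDK described above follows; the item structure and the tolerance value are assumptions made only for illustration.

```python
from typing import Dict, List

def merge_feeds_by_timestamp(primary: List[Dict], additional: List[List[Dict]],
                             tolerance: float = 0.5) -> List[Dict]:
    """primary: list of items from the main content feed, each with a 'timestamp'
    key in seconds. additional: one list per supplemental feed, items also carrying
    'timestamp'. Returns the primary items with an 'augmentations' list attached,
    containing every supplemental item whose timestamp falls within the tolerance."""
    merged = []
    for item in primary:
        t = item["timestamp"]
        attached = [extra for feed in additional for extra in feed
                    if abs(extra["timestamp"] - t) <= tolerance]
        merged.append({**item, "augmentations": attached})
    return merged
```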
In embodiments, the event analytics and/or location-based games may include location-based user input such as touching or clicking a location where a rebound or other play or activity is expected to be caught, to be executed, and the like. In embodiments, the event analytics and/or location-based games may include timing-based user input such as clicking or pressing a key to indicate when a user thinks a shot should be taken, a defensive play should be initiated, a time-out should be requested, and the like. In embodiments, the event analytics and/or location-based games may include prediction-based scoring including generating or contributing to a user score based on the accuracy of an outcome prediction associated with the user. By way of this example, the outcome prediction may be associated with outcomes of individual offensive and defensive plays in the games and/or may be associated with scoring and/or individual player statistics at predetermined time intervals (e.g., quarters, halves, whole games, portions of seasons, and the like). In embodiments, the event analytics and/or location-based games may include game state-based scoring including generating or contributing to a user score based on the expected value of a user decision, calculated using analysis of the instantaneous game state and/or comparison with the evolution of the game state, such as the maximum value or realized value of the game state in a given chance or possession. In embodiments, the systems and methods disclosed herein may include interactive and immersive reality games based on actual game replays. By way of this example, the interactive and immersive reality games may include the use of one or more simulations to diverge from actual game events (partially or in their entirety) based on user input or a collection of user input. In embodiments, the interactive and immersive reality games may include an action-time resolution engine that may be configured to determine a plausible sequence of events to rejoin the actual game timeline relative to, in some examples, the one or more simulations to diverge from actual game events (partially or in their entirety) based on user input or a collection of user input. In embodiments, the interactive and immersive reality games may include augmented reality simulations that may integrate game event sequences, using cameras located on one or more backboards and/or at locations adjacent to the playing court. In embodiments, the systems and methods disclosed herein may include simulated sports games that may be based on detailed player behavior models. By way of this example, the detailed player behavior models may include tendencies to take different actions and associated probabilities of success of different actions under different scenarios including teammate/opponent identities, locations, score differential, period number, game clock, shot clock, and the like. In embodiments, the systems and methods disclosed herein may include social chat functions and social comment functions that may be inserted into a three-dimensional scene of a live event. By way of this example, the social chat and comment functions that may be inserted into the three-dimensional scene of the live event may include avatars inserted into the crowd that may display comments within speech bubbles above the avatars.
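Returning to the prediction-based scoring described above, the following is a minimal sketch of crediting a user score based on the accuracy of outcome predictions; the function name, the dictionary layout, and the points-per-correct-prediction scheme are illustrative assumptions rather than the disclosed scoring method.

```python
def prediction_score(predictions, outcomes, points_per_hit=10):
    """Credit a user for each prediction that matches the realized outcome.

    predictions/outcomes are dicts keyed by event id; the scoring scheme is illustrative.
    """
    score = 0
    for event_id, predicted in predictions.items():
        if outcomes.get(event_id) == predicted:
            score += points_per_hit
    return score

# Example: the user predicted the scorer of the next basket for two possessions
print(prediction_score({"poss_17": "player_23", "poss_18": "player_30"},
                       {"poss_17": "player_23", "poss_18": "player_11"}))  # -> 10
```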
In other examples, the social chat and comment functions described above may be inserted into a three-dimensional scene of the live event as a running commentary adjacent to other graphics or legends associated with the event. In embodiments, the systems and methods disclosed herein may include the automating of elements of broadcast production such as automatic control of camera pan, tilt, and zoom. By way of this example, the automating of elements of broadcast production may also include automatic switching between camera views. In embodiments, the automating of elements of broadcast production may include automatic live and color commentary generation and automatic placement and content from synthetic commentators in the form of audio or in the form of one or more audio and video avatars with audio content that may be mixed with semantically and contextually based reactions from the live event and/or from other users. By way of this example, the automated elements of broadcast production may include automated generation of commentary in audio only or audio and video form including AR augmentation and associated content by, for example, combining semantic machine understanding of events in the game and semantic machine understanding of camera views, camera cuts, and camera close-ups in broadcast or another video. In embodiments, the automated generation of commentary may also be based on semantic machine understanding of broadcaster/game audio, statistics from semantic machine understanding of past games, information/statistics from other sources, and combinations thereof. In embodiments, a ranking of potential content items may be based on at least one of the rarity of events, comparison against the rest of the league, diversity with respect to previously shown content, personalization based on channel characteristics, explicit user preferences, inferred user preferences, the like, or combinations thereof. In embodiments, the automated generation of commentary may include the automatic selection of top-ranked content items or a short list of top-ranked content items shown to a human operator for selection. In embodiments, and as shown inFIG.49, the systems and methods disclosed herein may include machine-automated or machine-assisted generation of aggregated clips4902. Examples of aggregated clips4902include highlights and/or condensed games. The aggregated clip may be composed of one or more selected media segments (e.g., video and/or audio segments). In the example ofFIG.49, a multimedia system4900may include an event datastore4910, an interest determination module4920, and a clip generation module4930. The event datastore4910may store event records4912. Each event record4912may correspond to a respective event (e.g., an offensive possession, a shot, a dunk, a defensive play, a blitz, a touchdown pass). An event record4912may include an event ID4914that uniquely identifies the event. An event record4912may also include event data4916that corresponds to the event. For example, event data4916may include a media segment (e.g., video and/or audio) that captures the event or a memory address that points to the media segment that captures the event. The event record4912may further include event metadata4918. Event metadata4918may include any data that is pertinent to the event.
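A minimal sketch of the event record layout just described is shown below, assuming hypothetical Python field names; the metadata fields enumerated in the following paragraph are represented here as a free-form dictionary.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class EventRecord:
    event_id: str                       # uniquely identifies the event
    event_data: Dict[str, Any]          # e.g., the media segment or a pointer to it
    event_metadata: Dict[str, Any] = field(default_factory=dict)

event_store: List[EventRecord] = [
    EventRecord(
        event_id="evt-0042",
        event_data={"media_uri": "segments/evt-0042.mp4"},
        event_metadata={"event_type": "dunk", "players": ["player_23"],
                        "game_clock": "Q3 04:12", "duration_s": 9.0,
                        "win_prob_delta": 0.04},
    ),
]
```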
Examples of event metadata4918may include, but are not limited to, an event type (e.g., a basketball shot, a dunk, a football blitz, a touchdown, a soccer goal), a list of relevant players (e.g., the shooter and defender, the quarterback, the goal scorer), a time corresponding to the event (e.g., when during the game the event occurred), a length of the event (e.g., the length in seconds of the media segment that captures the event), a semantic understanding of the event, the potential impact of the event on win probability (e.g., a delta of win probability from before and after the event), references (e.g., event IDs) to other events that are pertinent to the event (e.g., other events during a run made by a team), and/or any other suitable types of metadata. In some embodiments, the event metadata4918may further include an interest score of the event, where the interest score of an event may be a numerical value indicating a degree of likelihood that a user would find the event interesting (e.g., worthy of watching). In embodiments, an interest determination module4920determines an interest level of an event or group of related events. In some of these embodiments, the interest determination module4920determines an interest score of an event or group of related events. The interest score may be relative to other events in a particular game or relative to events spanning multiple games and/or sports. In some embodiments, the interest determination module4920may determine the interest score of a particular event or group of events based on the event metadata4918of the respective event(s). In some embodiments, the interest determination module4920may incorporate one or more machine-learned models that receive event metadata4918of an event or group of related events and output a score based on the event metadata4918. A machine-learned model may, for example, receive an event type and other relevant features (e.g., time, impact on win probability, relevant player) and may determine the score based thereon. The machine-learned models may be trained in a supervised, semi-supervised, or unsupervised manner. The interest determination module4920may determine the interest score of an event or group of related events in other manners as well. For example, the interest determination module4920may utilize rules-based scoring techniques to score an event or group of related events. In some embodiments, the interest determination module4920is configured to determine an interest score for a particular user. In these embodiments, the interest scores may be used to generate personalized aggregated clips4902for a user. In these embodiments, the interest determination module4920may receive user-specific data that may be indicative of a user's personal biases. For example, the interest determination module4920may receive user-specific data that may include, but is not limited to, a user's favorite sport, the user's favorite team, the user's list of favorite players, a list of events recently watched by the user, a list of events recently skipped by the user, and the like. In some of these embodiments, the interest determination module4920may feed the user-specific data into machine-learned models along with event metadata4918of an event to determine an interest score that is specific to a particular user. In these embodiments, the interest determination module4920may output the user-specific interest score to the clip generation module4930.
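As one hedged illustration of how the interest determination module4920might combine event metadata4918and user-specific data, the following rules-based sketch derives a score from an event-type weight, the win-probability swing, and a personalization bonus; the weights and feature names are assumptions, and a machine-learned model could take the place of the hand-written rules.

```python
def interest_score(metadata, user_profile=None):
    """Toy rules-based interest score; a machine-learned model could replace this."""
    type_weights = {"dunk": 0.8, "three_pointer": 0.6, "blocked_shot": 0.5}
    score = type_weights.get(metadata.get("event_type"), 0.3)
    score += 5.0 * abs(metadata.get("win_prob_delta", 0.0))   # large win-probability swings are interesting
    if user_profile and set(metadata.get("players", [])) & set(user_profile.get("favorite_players", [])):
        score += 0.3                                          # personalization bonus
    return min(score, 1.0)

print(interest_score({"event_type": "dunk", "win_prob_delta": 0.04, "players": ["player_23"]},
                     {"favorite_players": ["player_23"]}))    # -> 1.0
```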
In some embodiments, one or more humans may assign interest levels to various events. In these embodiments, the human-assigned interest levels may be used to determine which events to include in an aggregated clip4902. Furthermore, the human-assigned interest levels may be used to train a model used to determine interest scores of respective events. The clip generation module4930generates aggregated clips4902based on one or more identified events. The clip generation module4930may determine one or more events to include in an aggregated clip based on the interest level of the events relating to a game or collection of games. In some embodiments, the clip generation module4930determines the events to include in an aggregated clip4902based on the interest level of the respective events. The clip generation module4930may implement optimization or reinforcement learning to determine which events (depicted in media segments) to include in an aggregated clip4902. For instance, the clip generation module4930may include media segments depicting events having the highest relative interest scores and media segments of additional events that may be relevant to the high-scoring events. In embodiments, the clip generation module4930may determine how many events to include in the aggregated clip4902depending on the intended purpose of the aggregated clip4902. For example, a highlight may be shorter in duration than a condensed game. In embodiments, the length of an aggregated clip4902may be a predetermined parameter (e.g., three minutes). In these embodiments, the clip generation module4930may select a sufficient number of events to span the predetermined duration. For example, the clip generation module4930may identify a set of media segments of events having requisite interest scores, where the aggregated duration of the set of media segments is approximately equal to the predetermined duration. In embodiments, the clip generation module4930may be configured to generate personalized aggregated clips. In these embodiments, the clip generation module4930may receive user-specific interest scores corresponding to events of a particular game or time period (e.g., “today's personalized highlights”). The clip generation module4930may utilize the user-specific interest scores of the events, a user's history (e.g., videos watched or skipped), and/or user profile data (e.g., location, favorite teams, favorite sports, favorite players) to determine which events to include in a personalized aggregated clip4902. In embodiments, the clip generation module4930may determine how many events to include in the personalized aggregated clip4902depending on the intended purpose of the aggregated clip4902and/or the preferences of the user. For example, if a user prefers to have longer condensed games (i.e., more events in the aggregated clip), the clip generation module4930may include more media segments in the aggregated clip. In some embodiments, the length of an aggregated clip4902may be a predetermined parameter (e.g., three minutes) that may be explicitly set by the user. In these embodiments, the clip generation module4930may select a sufficient number of events to span the predetermined duration set by the user. For example, the clip generation module4930may identify a set of media segments of events having requisite interest scores, where the aggregated duration of the set of media segments is approximately equal to the predetermined duration.
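One simple way to realize the duration-constrained selection described above is a greedy pass over events ordered by interest score, as in the following sketch; the tuple layout and the greedy strategy are illustrative assumptions, and an optimization or reinforcement learning approach could replace them.

```python
def select_events_for_clip(scored_events, target_duration_s):
    """Greedily select the highest-scoring events whose total length fits the target duration.

    scored_events: list of (event_id, interest_score, duration_s) tuples.
    """
    selected, total = [], 0.0
    for event_id, score, duration in sorted(scored_events, key=lambda e: e[1], reverse=True):
        if total + duration <= target_duration_s:
            selected.append(event_id)
            total += duration
    return selected, total

events = [("evt-1", 0.9, 12.0), ("evt-2", 0.7, 45.0), ("evt-3", 0.85, 20.0), ("evt-4", 0.4, 30.0)]
print(select_events_for_clip(events, target_duration_s=60.0))  # -> (['evt-1', 'evt-3'], 32.0)
```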
In embodiments, the clip generation module4930requests the scores of one or more events from the interest determination module4920when the clip generation module4930is tasked with generating aggregated clips4902. Alternatively, the interest determination module4920may score each event defined in the event datastore4910. Upon determining which events to include in an aggregated clip4902, the clip generation module4930may retrieve the media segments corresponding to the identified events. For example, the clip generation module4930may retrieve the event records4912of the identified events using the event IDs4914of the identified events. The clip generation module4930may then generate the aggregated clip based on the event data4916contained in the retrieved event records4912. The sequence of events depicted in the aggregated clip4902may be generated in any suitable manner. For example, the events may be depicted sequentially as they occurred or in order of ascending or descending interest score. The clip generation module4930may transmit the aggregated clip4902to a user device and/or store the aggregated clip4902in memory. In embodiments, and in the example ofFIG.50, the systems and methods disclosed herein may be configured to provide “dynamic videos”5002. A dynamic video5002may refer to the concatenated display of media segments (e.g., video and/or audio) that can be dynamically selected with short time granularity (e.g., frame-level or chunk-level granularity). A dynamic video5002may be comprised of one or more constituent media segments of dynamically determined length, content, and sequencing. The dynamic video5002may include constituent media segments that are stitched together in a single file or a collection of separate files that may each contain a respective constituent media segment. The constituent media segments of a dynamic video5002may be related based on one or more suitable relationships. For example, the constituent media segments may be of a same event taken from different camera angles, of different events of a same game, of different events from different games but of the same sport and on the same day, of different events relating to the same player or team, and/or of different events but the same subject, topic, or sentiment. Additionally, in some embodiments, the constituent media segments may be supplemented or augmented with graphical and/or text overlays. The graphical and/or text overlays may be confined to a single media segment or may span across multiple constituent media segments. In the illustrated example, a multimedia system5000provides the dynamic videos5002to a user device5080. The user device5080may be a mobile device (e.g., smartphone), a personal digital assistant, a laptop computing device, a personal computer, a tablet computing device, a gaming device, a smart television, and/or any other suitable electronic device with the capability to present the dynamic videos. The user device5080may include a media player5082that outputs the dynamic video5002via a user interface5084. The media player5082may also receive user commands via the user interface5084. The user interface5084may include a display device (e.g., an LED screen or a touchscreen), a physical keyboard (e.g., a qwerty keyboard), an input device (e.g., a mouse), an audio device (e.g., speakers), and the like. The user device5080may further include a communication unit5088that effectuates communication with external devices directly and/or via a network. 
For example, the communication unit5088may include one or more wireless and/or wired transceivers that communicate using any suitable communication protocol. The multimedia system5000may include a media datastore5010, a communication unit5030, and a dynamic video module5020. The media datastore5010may store media records5012. A media record5012may correspond to a media segment that captures one or more events. A media record may include a media ID5014that uniquely identifies the media record5012. A media record5012may include media data5016. The media data5016may include the media segment itself or a memory address of the media segment. The media record5012may further include media metadata5018. The media metadata5018may include any data that is pertinent to the media segment. Examples of media metadata5018may include, but are not limited to, one or more event identifiers that identify one or more events depicted in the media segment, one or more event types that describe the one or more events depicted in the media segment, a list of relevant players depicted in the media segment, a time corresponding to the media segment (e.g., a starting time of the media segment with respect to a game), a time length of the media segment, a semantic understanding of the media segment, the potential impact of the events depicted in the media segment on win probability (e.g., a delta of win probability from before and after the event), references (e.g., media IDs) to other media segments that are pertinent to the media segment (e.g., other angles of the same events depicted in the media segment), and/or any other suitable types of metadata. In embodiments, the media records5012may further reference entire content feeds (e.g., an entire game or a livestream of a game). In these embodiments, the media metadata5018of a media record may include any suitable information relating to the content feed. For example, the media metadata5018may include an identifier of the game to which the content feed corresponds, an indicator of whether the content feed is live or recorded, identifiers of the teams playing in the game, identifiers of players playing in the game, and the like. The dynamic video module5020is configured to generate dynamic videos and to deliver dynamic videos to a user device5080. The dynamic video module5020may select the media segments to include in the dynamic video5002in any suitable manner. In some embodiments, the dynamic video module5020may implement optimization and/or reinforcement learning-based approaches to determine the selection, length, and/or sequence of the constituent media segments. In these embodiments, the dynamic video module5020may utilize the media metadata5018of the media records5012stored in the media datastore5010to determine the selection, length, and/or sequence of the constituent media segments. The dynamic video module5020may additionally or alternatively implement a rules-based approach to determine which media segments to include in the dynamic video. For example, the dynamic video module5020may be configured to include alternative camera angles of an event if multiple media segments depicting the same event exist. In this example, the dynamic video module5020may be further configured to designate media clips taken from alternative camera angles as supplementary media segments (i.e., media segments that can be switched to at the user device) rather than sequential media segments.
In embodiments, the dynamic video module5020may be configured to generate dynamic video clips from any suitable sources, including content feeds. In these embodiments, the dynamic video module5020may generate dynamic videos5002having any variety of constituent media segments by cutting media segments from one or more content feeds and/or previously cut media segments. Furthermore, the dynamic video module5020may add any combination of augmentations, graphics, audio, statistics, text, and the like to the dynamic video. In some embodiments, the dynamic video module5020is configured to provide personalized dynamic videos5002. The dynamic video module5020may utilize user preferences (either predicted, indicated, or inferred) to customize the dynamic video. The dynamic video module5020may utilize a user's profile, location, and/or history to determine the user preferences. For example, a user profile may indicate a user's favorite teams, players, sports, and the like. In another example, the dynamic video module5020may be able to predict a user's favorite teams and players based on the location of the user. In yet another example, the dynamic video module5020may be configured to infer user viewing preferences based on the viewing history of the user (e.g., telemetry data reported by the media player of the user). For example, if the user history indicates that the user routinely skips over media segments that are longer than 30 seconds, the dynamic video module5020may infer that the user prefers media segments that are less than 30 seconds long. In another example, the dynamic video module5020may determine that the user typically “shares” media segments that include reactions of players or spectators to a notable play. In this example, the dynamic video module5020may infer that the user prefers videos that include reactions of players or spectators, and therefore, media segments that tend to be longer in duration. In another example, the user history may indicate that the user watches media segments of a particular type of event (e.g., dunks), but skips over other types of events (e.g., blocked shots). In this example, the dynamic video module5020may infer that the user prefers to consume media segments of dunks over media segments of blocked shots. In operation, the dynamic video module5020can utilize the indicated, predicted, and/or inferred user preferences to determine which media segments to include in the dynamic video and/or the duration of the media segments (e.g., whether the media segment should be shorter or longer). The dynamic video module5020may utilize an optimization and/or reinforcement learning-based approach to determine which media segments to include in the dynamic video5002, the duration of the dynamic video5002, and the sequence of the media segments in the dynamic video5002. The multimedia system5000may transmit a dynamic video5002to a user device5080. The media player5082receives the dynamic video5002via the communication unit5088and outputs one or more of the media segments contained in the dynamic video5002via the user interface5084. The media player5082may be configured to record user telemetry data (e.g., which media segments the user consumes, which media segments the user skips, and/or terms that the user searches for) and to report the telemetry data to the multimedia system5000. The media player5082may be configured to receive commands from a user via the user interface5084.
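As an illustration of inferring viewing preferences from such telemetry, the following sketch estimates per-event-type watch rates from a list of watched/skipped records; the telemetry field names are assumptions made for the example.

```python
from collections import defaultdict

def infer_type_preferences(telemetry):
    """Estimate per-event-type watch rates from viewing telemetry.

    telemetry: list of dicts like {"event_type": "dunk", "action": "watched" | "skipped"}.
    Returns event types ordered from most to least preferred.
    """
    watched, shown = defaultdict(int), defaultdict(int)
    for record in telemetry:
        shown[record["event_type"]] += 1
        if record["action"] == "watched":
            watched[record["event_type"]] += 1
    rates = {t: watched[t] / shown[t] for t in shown}
    return sorted(rates, key=rates.get, reverse=True)

history = [{"event_type": "dunk", "action": "watched"},
           {"event_type": "blocked_shot", "action": "skipped"},
           {"event_type": "dunk", "action": "watched"}]
print(infer_type_preferences(history))  # -> ['dunk', 'blocked_shot']
```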
These user commands may be executed locally by the media player5082and/or may be communicated to the multimedia system5000. In some embodiments, the media player5082may be configured to allow selection of the media segments that are displayed based on user input and/or AI controls. In the former scenario, the media player5082may be configured to receive user commands via the user interface5084. For example, the media player5082may allow a user to enter search terms or to choose from a displayed set of suggestions. In response to the search terms or the user selections, the media player5082may initialize (e.g., request and begin outputting) a dynamic video5002, in which the media player5082displays a machine-controlled sequence of media segments related to the search terms/user selection. A user may issue additional commands via the user interface5084(e.g., via the keyboard or by touching or directional swiping on a touchscreen) to request media segments related in different ways to the current media segment, to indicate when to move on to the next media segment, and/or to interactively pull up statistics and other information. For example, swiping upwards may indicate that the user wishes to see a different camera angle of the same event, swiping downwards may indicate that the user wishes to see an augmented replay of the same event, and swiping right may indicate that the user wishes to move on to the next clip. A set of keyword tags corresponding to each clip may be shown to facilitate the user adding one or more of the displayed tags to the set of search terms that determines potentially relevant media segments to display. The media player5082may report the user's inputs or interactions with the media player5082, if any, to the multimedia system5000. In response to such commands, the multimedia system5000may use such data to adapt subsequent machine-controlled choices of media segment duration, content type, and/or sequencing in the dynamic video. For example, the user's inputs or interactions may be used to adjust the parameters and/or reinforcement signals of an optimization or reinforcement learning-based approach for making machine-controlled choices in the dynamic video5002. In embodiments, the dynamic video module5020may be configured to generate the dynamic video in real time. In these embodiments, the dynamic video module5020may begin generating and transmitting the dynamic video5002. During display of the dynamic video5002by the media player5082, the dynamic video module5020may determine how to sequence/curate the dynamic video. For instance, the dynamic video module5020may determine (either based on a machine-learning-based decision or from explicit instruction from the user) that the angle of a live feed should be switched to a different angle. In this situation, the dynamic video module5020may update the dynamic video5002with a different video feed that is taken from an alternative angle. In another example, a user may indicate (either explicitly or implicitly) that she is uninterested in a type of video being shown (e.g., baseball highlights). In response to the determination that the user is uninterested, the dynamic video module5020may retrieve media segments relating to another topic (e.g., basketball) and may begin stitching those media segments into the dynamic video5002. In this example, the dynamic video module5020may be configured to cut out any media segments that are no longer relevant (e.g., additional baseball highlights).
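The gesture-driven interactions described above can be illustrated with a small dispatch table that maps touch gestures to dynamic-video requests; the gesture names and command strings are hypothetical and stand in for whatever protocol the media player5082and the dynamic video module5020actually use.

```python
# Hypothetical mapping from touch gestures to dynamic-video requests, mirroring the
# example interactions above; the command names are illustrative only.
GESTURE_COMMANDS = {
    "swipe_up": "alternate_camera_angle",   # same event, different camera
    "swipe_down": "augmented_replay",       # same event, augmented replay
    "swipe_right": "next_segment",          # move on to the next clip
}

def handle_gesture(gesture, current_segment_id):
    command = GESTURE_COMMANDS.get(gesture)
    if command is None:
        return None
    # In a real player this request would be sent to the dynamic video module,
    # and the interaction reported to the multimedia system as telemetry.
    return {"command": command, "segment_id": current_segment_id}

print(handle_gesture("swipe_up", "evt-0042"))  # -> {'command': 'alternate_camera_angle', 'segment_id': 'evt-0042'}
```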
It is noted that in some embodiments, the dynamic video module5020may transmit alternative content feeds and/or media segments in the dynamic video5002. In these embodiments, the media player5082may be configured to switch between feeds and/or media segments. In embodiments, the automating of elements of broadcast production may include automatic live commentary generation that may be used to assist referees for in situ evaluation or post-mortem evaluation. The automatic live commentary generation that may be used to assist referees may also be used to train referees in unusual situations that may be seen infrequently in actual games but may be reproduced or formed from AR content based on or purposefully deviated from live game events. By way of the above examples, the referee assistance, evaluation, training, and the like associated with the improvement of referee decisions may be based on semantic machine understanding of game events. In embodiments, the systems and methods disclosed herein may include the use of player-specific information in three-dimensional position identification and reconstruction to improve trade-offs among camera requirements. Toward that end, fewer or lower resolution cameras may be used, computational complexity/delay may be reduced and output quality/accuracy may be increased when compared to typical methods. With reference toFIG.46, the player-specific information in three-dimensional position identification and reconstruction4600may be shown to improve the balance in trade-offs of camera requirements including improved localization of keypoints4602such as a head, joints, and the like, by using player models4604of specific players in conjunction with player identification4608such as identifying a jersey number or automatically recognizing a face and remote sensing technology to capture the players such as one or more video cameras, lidar, ultrasound, Wi-Fi visualization, and the like. By way of this example, the improved localization of keypoints may include optimizing over constraints on distances between keypoints from player models combined with triangulation measurements from multiple cameras. In embodiments, the improved localization of keypoints may also include using the player models4604to enable 3D localization with a single camera. In embodiments, the system and methods disclosed herein may also include the use of the player models4604fitted to detected keypoints to create 3D reconstructions4620or to improve 3D reconstructions in combination with point cloud techniques. Point cloud techniques may include a hybrid system including the player models4604that may be used to replace areas where the point cloud reconstruction does not conform adequately to the model. In further examples, the point cloud techniques may include supplementing the point cloud in scenarios where the point cloud may have a low density of points. In embodiments, the improved localization of keypoints may include the use of player height information combined with face detection, gaze detection, posture detection, or the like to locate the point of view of players. In embodiments, the improved localization of keypoints may also include the use of camera calibration4630receiving one or more video feeds4632, the 3D reconstruction4610and projection onto video in order to improve player segmentation for broadcast video4640. In embodiments, the systems and methods disclosed herein may include using a state-based machine learning model with hierarchical states. 
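Before turning to the state-based model introduced above, the keypoint localization described earlier can be illustrated with a brief sketch: a standard linear (DLT) triangulation of a keypoint seen by two calibrated cameras, followed by a simple correction that snaps a limb to the length given by a player model. The projection-matrix inputs and the single-constraint correction are simplifying assumptions; a full system would jointly optimize over all keypoints and model constraints.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one keypoint observed in two calibrated cameras.

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel observations.
    """
    A = np.stack([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def enforce_limb_length(parent_xyz, child_xyz, model_length):
    """Snap a child keypoint so its distance to the parent matches the player model."""
    v = child_xyz - parent_xyz
    return parent_xyz + v * (model_length / np.linalg.norm(v))

# Example: correct a noisy elbow estimate using a known upper-arm length from the player model
shoulder = np.array([0.0, 0.0, 1.5])
elbow_raw = np.array([0.05, 0.0, 1.20])
print(enforce_limb_length(shoulder, elbow_raw, model_length=0.30))
```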
By way of this example, the state-based machine learning model with hierarchical states may include input training state labels at the finest granularity. In embodiments, the machine learning model may be trained at the finest level of granularity as well as at intermediate levels of aggregated states. In embodiments, the output and cost function optimization may be at the highest level of state aggregation. In embodiments, the machine learning model may be trained using an ensemble of active learning methods for multiclass classification including weighting of methods based on a confusion matrix and a cost function that may be used to optimize the distribution of qualitatively varied instances for active learning. FIG.51illustrates an example of a client device5100configured to display augmented content to a user according to some embodiments of the present disclosure. In the illustrated example, the client device5100may include a processing device5102, a storage device5104, a communication unit5106that effectuates communication between the client device and other devices via one or more communication networks (e.g., the Internet and/or a cellular network), and a user interface5108(e.g., a touchscreen, a monitor, a mouse, a keyboard, and the like). The processing device5102may include one or more processors and memory that stores computer-executable instructions that are executed by the one or more processors. The processing device5102may execute a video player application5200. In embodiments, the video player application5200is configured to allow a user to consume video and related content from different content channels (e.g., audio, video, and/or data channels). In some of the embodiments, the video and related content may be delivered in time-aligned content channels (e.g., a “smart pipe”), where the content may be indexed temporally and/or spatially. In embodiments, the spatial indexing may include indexing the pixels or groups of pixels of multiple streams, 3D pixels (e.g., voxels) or groups of 3D pixels, and/or objects (e.g., polygonal meshes used for animation, overlay graphics, and the like). In these embodiments, a wide variety of elements may be indexed temporally (e.g., in relation to individual video frames) and/or spatially (e.g., in relation to pixels, groups of pixels, or “real world” locations depicted in the video frames). Examples of elements that may be indexed include events (match/game identifier), objects (players, game objects, objects in the environment such as court or playing field) involved in an event, information and statistics relating to the event and locations, locations of areas corresponding to the environment (e.g., floor areas, background areas, signage areas) where information, augmentations, graphics, animations, and advertising can be displayed in a frame, indicia of what information, augmentation elements, and the like that are available to augment a video feed in a content channel, combinations of content (e.g., particular combinations of audio, video, information, augmentation elements, replays, or other suitable elements), and/or references to other content channels corresponding to the event (such that end-users can select between streams). 
In this way, the video player may allow a user to interact with the video, such that the user can request the video player to display information relating to a time and/or location in the video feed, display relevant information relating to the event, switch between video feeds of the event, view advertisements, and the like. In these embodiments, the smart pipe may allow the video player application5200to create dynamic content at the client device5100. FIG.52illustrates an example implementation of the video player application5200according to some embodiments of the present disclosure. The video player application5200may include a GUI module5202, an integration module5204, an access management module5206, a video transformation module5208, a time transformation module5210, and a data management module5212. The video player application5200may include additional or alternative modules not discussed herein without departing from the scope of the disclosure. In embodiments, the GUI module5202receives commands from a user and displays video content, including augmented video content, to the user via the user interface5108. In embodiments, the GUI module5202displays a menu/selection screen (e.g., drop-down menus, selection elements, and/or search bars) and receives commands from a user corresponding to the available menus/selection items via the user interface5108. For example, the GUI module5202may receive an event selection via a drop-down menu and/or a search bar/results page. In embodiments, an event selection may be indicative of a particular sport and/or a particular match. In response to an event selection, the GUI module5202may provide the event selection to the integration module5204. In response, the GUI module5202may receive a video stream (of one or more video streams capturing the selected event) from the video transformation module5208and may output a video corresponding to the video feed via the user interface5108. The GUI module5202may allow a user to provide commands with respect to the video content, including commands such as pause, fast forward, and rewind. The GUI module5202may receive additional or alternative commands, such as “make a clip,” drill down commands (e.g., provide stats with respect to a player, display players on the playing surface, show statistics corresponding to a particular location, and the like), switch feed commands (e.g., switch to a different viewing angle), zoom in/zoom out commands, select link commands (e.g., selection of an advertisement), and the like. The integration module5204receives an initial user command to view a particular sport or game and instantiates an instance of a video player (also referred to as a “video player instance”). In embodiments, the integration module5204receives a source event identifier (ID), an access token, and/or a domain ID. The source event ID may indicate a particular game (e.g., MLB: Detroit Tigers v. Houston Astros). The access token may indicate a particular level of access that a user has with respect to a game or league (e.g., the user may access advanced content or MLB games may include a multi-view feed). The domain ID may indicate a league or type of event (e.g., NBA, NFL, FIFA). In embodiments, the integration module5204may instantiate a video player instance in response to the source event ID, the domain ID, and the access token. The integration module5204may output the video player instance to the access management module5206.
In some embodiments, the integration module5204may further output a time indicator to the access management module5206. A time indicator may be indicative of a time corresponding to a particular frame or group of frames within the video content. In some of these embodiments, the time indicator may be a wall time. Other time indicators, such as a relative stream (e.g., 10 seconds from t=0), may be used. The access management module5206receives the video player instance and manages security and/or access to video content and/or data by the video player from a multimedia system. In embodiments, the access management module5206may expose a top layer API to facilitate the ease of access to data by the video player instance. The access management module5206may determine the level of access to provide the video player instance based on the access token. In embodiments, the access management module5206implements a single exported SDK that allows a data source (e.g., multimedia servers) to manage access to data. In other embodiments, the access management module5206implements one or more customized exported SDKs that each contain respective modules for interacting with a respective data source. The access management module5206may be a pass through layer, whereby the video player instance is passed to the video transformation module5208. The video transformation module5208receives the video player instance and obtains video feeds and/or additional content provided by a multimedia server (or analogous device) that may be displayed with the video encoded in the video feeds. In embodiments, the video transformation module5208receives the video content and/or additional content from the data management module5212. In some of these embodiments, the video transformation module5208may receive a smart pipe that contains one or more video feeds, audio feeds, data feeds, and/or an index. In embodiments, the video feeds may be time-aligned video feeds, such that the video feeds offer different viewing angles or perspectives of the event to be displayed. In embodiments, the index may be a spatio-temporal index. In these embodiments, the spatio-temporal index identifies information associated with particular video frames of a video and/or particular locations depicted in the video frames. In some of these embodiments, the locations may be locations in relation to a playing surface (e.g., at the fifty yard line or at the free throw line) or defined in relation to individual pixels or groups of pixels. It is noted that the pixels may be two-dimensional pixels or three-dimensional pixels (e.g., voxels). The spatio-temporal index may index participants on a playing surface (e.g., players on a basketball court), statistics relating to the participants (e.g., Player A has scored 32 points), statistics relating to a location on the playing surface (e.g., Team A has made 30% of three-pointers from a particular area on a basketball court), advertisements, score bugs, graphics, and the like. In some embodiments, the spatio-temporal index may index wall times corresponding to various frames. For example, the spatio-temporal index may indicate a respective wall time for each video frame in a video feed (e.g., a real time at which the frame was captured/initially streamed). The video transformation module5208receives the video feeds and the index and may output a video to the GUI module5202. 
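A minimal sketch of such a spatio-temporal index is shown below, assuming hypothetical field names in which each frame identifier maps to a wall time and to pixel-space bounding boxes of indexed objects; a production index could equally store voxels, polygonal meshes, signage regions, or statistics keyed to locations.

```python
# Minimal sketch of a spatio-temporal index keyed by frame identifier.
SPATIO_TEMPORAL_INDEX = {
    "feed1/block12/frame3": {
        "wall_time": 1699999999.20,
        "objects": [
            {"id": "player_23", "bbox": (410, 220, 470, 380)},   # (x1, y1, x2, y2) in pixels
            {"id": "signage_east", "bbox": (0, 0, 1920, 90)},     # area where graphics/ads may be placed
        ],
    },
}

def object_at(frame_id, x, y, index=SPATIO_TEMPORAL_INDEX):
    """Return the indexed object covering pixel (x, y) in the given frame, if any."""
    for obj in index.get(frame_id, {}).get("objects", []):
        x1, y1, x2, y2 = obj["bbox"]
        if x1 <= x <= x2 and y1 <= y <= y2:
            return obj["id"]
    return None

print(object_at("feed1/block12/frame3", 430, 300))  # -> 'player_23'
```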
In embodiments, the video transformation module5208is configured to generate augmented video content and/or switch between different video feeds of the same event (e.g., different camera angles of the event). In embodiments, the video transformation module5208may overlay one or more GUI elements that receive user selections into the video being output. For example, the video transformation module5208may overlay one or more visual selection elements over the video feed currently being output by the GUI module5202. The visual selection elements may allow a user to view information relating to the event depicted in the video feed, to switch views, or to view a recent highlight. In response to the user providing a command via the user interface of the client device5100, the video transformation module5208may augment the currently displayed video feed with augmentation content, switch the video feed to another video feed, or perform other video transformation related operations. The video transformation module5208may receive a command to display augmentation content. For example, the video transformation module5208may receive a command to display information corresponding to a particular location (e.g., a pixel or group of pixels) and a particular frame. In response to the command, the video transformation module5208may reference the spatio-temporal index to determine an object (e.g., a player) that is located at the particular location in the particular frame. The video transformation module5208may retrieve information relating to the object. For example, the video transformation module5208may retrieve a name of a player or statistics relating to a player or a location on the playing surface. The video transformation module5208may augment the current video feed with the retrieved content. In embodiments, the video transformation module5208may request the content (e.g., information) from the multimedia server via the data management module5212. In other embodiments, the content may be transmitted in a data feed with the video feeds and the spatio-temporal index. In response to receiving the requested content (which may be textual or graphical), the video transformation module5208may overlay the requested content on the output video. The video transformation module5208may determine a location in each frame at which to display the requested data. In embodiments, the video transformation module5208may utilize the index to determine a location at which the requested content may be displayed, whereby the index may define locations in each frame where specific types of content may be displayed. In response to determining the location at which the requested content may be displayed, the video transformation module5208may overlay the content onto the video at the determined location. In another example, the video transformation module5208may receive a command to display an advertisement corresponding to a particular frame and location. In response to the command, the video transformation module5208determines the advertisement to display from the spatio-temporal index based on the particular frame and location. In embodiments, the video transformation module5208may retrieve the advertisement from the multimedia server (or another device). In other embodiments, the advertisement may be transmitted with the video feeds and the spatio-temporal index. 
In response to obtaining the advertisement, the video transformation module5208may determine a location at which the advertisement is to be displayed (e.g., in the manner discussed above), and may overlay the advertisement onto the video at the determined location. In embodiments, the video transformation module5208may switch between video feeds in response to a user command to switch feeds. In response to such a command, the video transformation module5208switches the video feed from the current video feed to a requested video feed, while maintaining time-alignment between the video feeds (i.e., the video continues at the same point in time but from a different feed). For example, in streaming a particular basketball game and receiving a request to change views, the video transformation module5208may switch from a sideline view to an under-the-basket view without interrupting the action of the game. The video transformation module5208may time-align the video feeds (i.e., the current video feed and the video feed being switched to) in any suitable manner. In some embodiments, the video transformation module5208obtains a wall time from the time transformation module5210corresponding to a current frame or upcoming frame. The video transformation module5208may provide a frame identifier of the current frame or the upcoming frame to the time transformation module5210. In embodiments, the frame identifier may be represented in block plus offset form (e.g., a block identifier and a number of frames within the block). In response to the frame identifier, the time transformation module5210may return a wall time corresponding to the frame identifier. The video transformation module5208may switch to the requested video feed, whereby the video transformation module5208begins playback at a frame corresponding to the received wall time. In these embodiments, the video transformation module5208may obtain the wall time corresponding to the current or upcoming frame from the time transformation module5210, and may obtain a frame identifier of a corresponding frame in the video feed being switched to based on the received wall time. In some embodiments, the video transformation module5208may obtain a “block plus offset” of a frame in the video feed being switched to based on the wall time. The block plus offset may identify a particular frame within a video stream as a block identifier of a particular video frame and an offset indicating a number of frames into the block where the particular video frame is sequenced. In some of these embodiments, the video transformation module5208may provide the time transformation module5210with the wall time and an identifier of the video feed being switched to, and may receive a frame identifier in block plus offset format from the time transformation module5210. In some embodiments, the video transformation module5208may reference the index using a frame identifier of a current or upcoming frame in the current video feed to determine a time-aligned video frame in the requested video feed. It is noted that while the “block plus offset” format is described, other formats of frame identifiers may be used without departing from the scope of the disclosure. In response to obtaining a frame identifier, the video transformation module5208may switch to the requested video feed at the determined time-aligned video frame. For example, the video transformation module5208may queue up the requested video feed at the determined frame identifier.
The video transformation module5208may then begin outputting video corresponding to the requested video feed at the determined frame identifier. In embodiments, the time transformation module5210receives an input time value in a first format and returns an output time value in a second format. For example, the time transformation module5210may receive a frame identifier in a particular format (e.g., “block plus offset”) that indicates a particular frame of a particular video feed (e.g., the currently displayed video feed of an event) and may return a wall time corresponding to the frame identifier (e.g., the time at which the particular frame was captured or was initially broadcast). In another example, the time transformation module5210receives a wall time indicating a particular time in a broadcast and a request for a frame identifier of a particular video feed. In response to the wall time and the frame identifier request, the time transformation module5210determines a frame identifier of a particular video frame within a particular video feed and may output the frame identifier in response to the request. The time transformation module5210may determine the output time in response to the input time in any suitable manner. In embodiments, the time transformation module5210may utilize an index corresponding to an event (e.g., the spatio-temporal index corresponding to an event) to determine a wall time in response to a frame identifier and/or a frame identifier in response to a wall time. In these embodiments, the spatio-temporal index may be keyed by frame identifiers and/or wall times, whereby the spatio-temporal index returns a wall time in response to a frame identifier and/or a frame identifier in response to a wall time and a video feed identifier. In other embodiments, the time transformation module5210calculates a wall time in response to a frame identifier and/or a frame identifier in response to a wall time. In some of these embodiments, each video feed may include metadata that includes a starting wall time that indicates a wall time at which the respective video feed began being captured/broadcast, a number of frames per block, and a frame rate of the encoding. In these embodiments, the time transformation module5210may calculate a wall time in response to a frame identifier based on the starting time of the video feed indicated by the frame identifier, the number of frames per block, the frame rate of the encoding, and the frame indicated by the frame identifier (e.g., the block identifier and the offset value). Similarly, the time transformation module5210may calculate a frame identifier of a requested video feed in response to a wall time based on the starting time of the requested video feed, the received wall time, the number of frames per block, and the encoding rate. In some embodiments, the time transformation module5210may be configured to transform a time with respect to a first video feed to a time with respect to a second video feed. For example, the time transformation module5210may receive a first frame identifier corresponding to a first video feed and may output a second frame identifier corresponding to a second video feed, where the first frame identifier and the second frame identifier respectively indicate time-aligned video frames. In some of these embodiments, the time transformation module5210may utilize an index corresponding to an event (e.g., the spatio-temporal index corresponding to an event) to determine the second frame identifier in response to the first frame identifier.
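The block-plus-offset arithmetic described above can be sketched directly from the feed metadata (starting wall time, frames per block, and frame rate); the function names and the example numbers below are illustrative assumptions.

```python
def frame_to_wall_time(block, offset, start_wall_time, frames_per_block, fps):
    """Wall time of the frame identified by (block, offset) in a given feed."""
    frame_index = block * frames_per_block + offset
    return start_wall_time + frame_index / fps

def wall_time_to_frame(wall_time, start_wall_time, frames_per_block, fps):
    """(block, offset) of the frame in a feed that corresponds to a wall time."""
    frame_index = int(round((wall_time - start_wall_time) * fps))
    return divmod(frame_index, frames_per_block)

# Switching feeds: convert the current frame to wall time, then to the other feed's frame.
t = frame_to_wall_time(block=12, offset=3, start_wall_time=1000.0, frames_per_block=60, fps=30.0)
print(wall_time_to_frame(t, start_wall_time=998.5, frames_per_block=60, fps=30.0))  # -> (12, 48)
```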
In embodiments that utilize such an index, the spatio-temporal index may be keyed by frame identifiers and may index frame identifiers of video frames that are time-aligned with the video frame referenced by each respective frame identifier. In other embodiments, the time transformation module5210calculates the second frame identifier in response to the first frame identifier. In some of these embodiments, the time transformation module5210may convert the first frame identifier to a wall time, as discussed above, and then may calculate the second frame identifier based on the wall time, as described above. In embodiments, the data management module5212requests and/or receives data from external resources and provides the data to a requesting module. For example, the data management module5212may receive the one or more video feeds from a multimedia server. The data management module5212may further receive an index (e.g., spatio-temporal index) corresponding to an event being streamed. For example, in some embodiments, the data management module5212may receive a smart pipe corresponding to an event. The data management module5212may provide the one or more video feeds and the index to the video transformation module5208. In embodiments, the data management module5212may expose one or more APIs of the video player application to external resources, such as multimedia servers and/or related data servers (e.g., a server that provides game information such as player names, statistics, and the like). In some embodiments, the external resources may push data to the data management module5212. Additionally or alternatively, the data management module5212may be configured to pull the data from the external resources. In embodiments, the data management module5212may receive requests for data from the video transformation module5208. For example, the data management module5212may receive a request for information relating to a particular frame identifier, a location within the frame indicated by a frame identifier, and/or an object depicted in the frame indicated by a frame identifier. In these embodiments, the data management module5212may obtain the requested information and may return the requested information to the video transformation module5208. In some embodiments, the external resource may push any information that is relevant to an event to the data management module5212. In these embodiments, the data management module5212may obtain the requested data from the pushed data. In other embodiments, the data management module5212may be configured to pull any requested data from the external resource. In these embodiments, the data management module5212may transmit a request to the external resource, whereby the request indicates the information sought. For example, the request may indicate a particular frame identifier, a location within the frame indicated by a frame identifier, or an object (e.g., a player) depicted in the frame indicated by the frame identifier. In response to the request, the data management module5212may receive the requested information, which is passed to the video transformation module5208. In embodiments, the data management module5212may be configured to obtain individual video feeds corresponding to an event. In some of these embodiments, the data management module5212may receive a request from the video transformation module5208for a particular video feed corresponding to an event.
In response to the request, the data management module5212may return the requested video feed to the video transformation module5208. The video feed may have been pushed to the video application by an external resource (e.g., multimedia platform), or may be requested (pulled) from the external resource in response to the request. With reference toFIG.47, the machine learning model may include active learning and active quality assurance on a live spatiotemporal machine learning workflow4700in accordance with the various embodiments. The machine learning workflow4700includes a machine learning (ML) algorithm4702that may produce live and automatic machine learning (ML) classification output4704(with minimum delay) as well as selected events for human quality assurance (QA)4708based on live spatiotemporal data4710. In embodiments, the live spatiotemporal machine learning workflow4700includes the data from the human QA sessions that may then be fed back into a machine learning (ML) algorithm4720(which may be the same as the ML algorithm4702), which may be rerun on the corresponding segments of data, to produce a time-delayed classification output4724with improved classification accuracy of neighboring events, where the time delay corresponds to the QA process. In embodiments, the machine learning workflow4700includes data from the QA process4708being fed into ML training data4722to improve the ML algorithm models for subsequent segments such as improving on the ML algorithm4702and/or the ML algorithm4720. Live spatiotemporal data4730may be aligned with other imperfect sources of data related to a sequence of spatial-temporal events. In embodiments, the alignment across imperfect sources of data related to a sequence of spatial-temporal events may include alignment using novel generalized distance metrics for spatiotemporal sequences combining event durations, ordering of events, additions/deletions of events, a spatial distance of events, and the like. In embodiments, the systems and methods disclosed herein may include modeling and dynamically interacting with an n-dimensional point-cloud. By way of this example, each point may be represented as an n-sphere whose radius may be determined by letting each n-sphere grow until it comes into contact with a neighboring n-sphere from a specified subset of the given point-cloud. This method may be similar to a Voronoi diagram in that it may allocate a single n-dimensional cell for every point in the given cloud, with two distinct advantages. The first advantage is that the generative kernel of each cell may also be its centroid. The second advantage is that the resulting model shifts continuously when points are relocated in a continuous fashion (e.g., as a function of time in an animation, or the like). In embodiments, ten basketball players may be represented as ten nodes that are divided into two subsets of five teammates. At any given moment, each player's cell may be a circle extending in radius until it becomes mutually tangent with an opponent's cell. By way of this example, players on the same team will have cells that overlap. In embodiments, the systems and methods disclosed herein may include a method for modeling locale as a function of time, some other specified or predetermined variable, or the like. In embodiments, coordinates of a given point or plurality of points are repeatedly sampled over a given window of time.
By way of this example, the sampled coordinates may then be used to generate a convex hull, and this procedure may be repeated as desired and may yield a plurality of hulls that may be stacked for a discretized view of spatial variability over time. In embodiments, a single soccer player might have their location on a pitch sampled every second over the course of two minutes leading to a point cloud of location data and an associated convex hull. By way of this example, the process may begin anew with each two-minute window and the full assemblage of generated hulls may be, for example, rendered in a translucent fashion and may be layered so as to yield a map of the given player's region of activity. In embodiments, the systems and methods disclosed herein may include a method for sampling and modeling data by applying the recursive logic of a quadtree to a topologically deformed input or output space. In embodiments, the location of shots in a basketball game may be sampled in arc-shaped bins, which may be partitioned by angle-of-incidence to the basket and the natural logarithm of distance from the basket, and, in turn, yielding bins which may be subdivided and visualized according to the same rules governing a rectilinear quadtree. In embodiments, the systems and methods disclosed herein may include a method for modeling multivariate point-cloud data such that location coordinates map to the location, while velocity (or some other relevant vector) may be represented as a contour map of potential displacements at various time intervals. In embodiments, a soccer player running down a pitch may be represented by a node surrounded by nested ellipses each indicating a horizon of displacement for a given window of time. In embodiments, the systems and methods disclosed herein may include a method for modeling and dynamically interacting with a directed acyclic graph such that every node may be rendered along a single line, while the edges connecting nodes may be rendered as curves deviating from this line in accordance with a specified variable. In embodiments, these edges may be visualized as parabolic curves wherein the height of each may correspond to the flow, duration, latency, or the like of the process represented by the given edge. The methods and systems disclosed herein may include methods and systems for enabling a user to express preferences relating to display of video content and may include using machine learning to develop an understanding of at least one event, one metric related to the event, or relationships between events, metrics, venue, or the like within at least one video feed to determine at least one type for the event; automatically, under computer control, extracting the video content displaying the event and associating the machine learning understanding of the type for the event with the video content in a video content data structure; providing a user interface by which a user can indicate a preference for at least one type of content; and upon receiving an indication of the preference by the user, retrieving at least one video content data structure that was determined by the machine learning to have content of the type preferred by the user and providing the user with a video feed containing the content of the preferred type. In embodiments, the user interface is of at least one of a mobile application, a browser, a desktop application, a remote control device, a tablet, a touch screen device, a virtual reality or augmented reality headset, and a smart phone. 
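Returning briefly to the spatial modeling techniques described above, the following is a minimal sketch, in Python, of the windowed convex-hull view of a player's region of activity; the one-second sampling interval and two-minute window follow the example above, while the function names and the randomly generated sample locations are illustrative assumptions rather than part of the disclosure.

```python
# Minimal sketch: windowed convex hulls of a player's sampled locations.
# Assumes location samples arrive once per second; names and data are illustrative.
import random

def convex_hull(points):
    """Andrew's monotone-chain convex hull; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def windowed_hulls(samples, window_seconds=120):
    """One hull per two-minute window of (x, y) samples."""
    return [convex_hull(samples[i:i + window_seconds])
            for i in range(0, len(samples), window_seconds)]

# Illustrative data: one (x, y) sample per second over six minutes of play.
samples = [(random.uniform(0, 105), random.uniform(0, 68)) for _ in range(360)]
hulls = windowed_hulls(samples)
print(f"{len(hulls)} hulls, first hull has {len(hulls[0])} vertices")
```

Layering the returned hulls in a translucent fashion, as described above, yields the map of the player's region of activity.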
In embodiments, the user interface further comprises an element for allowing a user to indicate a preference as to how content will be presented to the user. In embodiments, the machine learning further comprises determining an understanding of a context for the event and the context is stored with the video content data structure. In embodiments, the user interface further comprises an element for allowing a user to indicate a preference for at least one context. In embodiments, upon receiving an indication of a preference for a context, video content corresponding to the context preference is retrieved and displayed to the user. In embodiments, the context comprises at least one of the presence of a preferred player in the video feed, a preferred matchup of players in the video feed, a preferred team in the video feed, and a preferred matchup of teams in the video feed. In embodiments, the user interface allows a user to select at least one of a metric and a graphic element to be displayed on the video feed, wherein at least one of the metric and the graphic is based at least in part on the machine understanding. The methods and systems disclosed herein may include methods and systems for enabling a mobile application that allows a user to interact with video content and may include taking a video feed; using machine learning to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; producing a video content data structure that includes the associated context information; and using the context information for a plurality of such video content data structures to generate, automatically under computer control, a story or video clip that includes the video content data structure, wherein the content of the story is based on a user preference. In embodiments, the user preference for a type of content is based on at least one of a user expressed preference and a preference that is inferred based on user interaction with an item of content. The methods and systems disclosed herein may include methods and systems for enabling a user to express preferences relating to display of video content and may include a machine learning facility for developing an understanding of at least one event within at least one video feed to determine at least one type for the event; a video production facility for automatically, under computer control, extracting the video content displaying the event and associating the machine learning understanding of the type for the event with the video content in a video content data structure; a server for serving data to a user interface by which a user can indicate a preference for at least one type of content; and upon receiving at the server an indication of the preference by the user, retrieving at least one video content data structure that was determined by the machine learning to have content of the type preferred by the user and providing the user with a video feed containing the content of the preferred type. In embodiments, the user interface is of at least one of a mobile application, a browser, a desktop application, a remote control device, a tablet, and a smart phone.
In embodiments, the user interface further comprises an element for allowing a user to indicate a preference as to how content will be presented to the user. In embodiments, the machine learning further comprises determining an understanding of a context for the event and the context is stored with the video content data structure. In embodiments, the user interface further comprises an element for allowing a user to indicate a preference for at least one context. In embodiments, upon receiving an indication of a preference for a context, video content corresponding to the context preference is retrieved and displayed to the user. In embodiments, the context comprises at least one of the presence of a preferred player in the video feed, a preferred matchup of players in the video feed, a preferred team in the video feed, and a preferred matchup of teams in the video feed. In embodiments, the user interface allows a user to select at least one of a metric and a graphic element to be displayed on the video feed, wherein the metric is based at least in part on the machine understanding. The methods and systems disclosed herein may include methods and systems delivering personalized video content and may include using machine learning to develop an understanding of at least one event within at least one video feed to determine at least one type for the event; automatically, under computer control, extracting the video content displaying the event and associating the machine learning understanding of the type for the event with the video content in a video content data structure; developing a personal profile for a user based on at least one of expressed preferences of the user, information about the user, and information collected about actions taken by the user with respect to at least one type of video content; and upon receiving an indication of the user profile, retrieving at least one video content data structure that was determined by the machine learning to have content of the type likely to be preferred by the user based on the user profile. The methods and systems disclosed herein may include methods and systems for delivering personalized video content and may include using machine learning to develop an understanding of at least one event within at least one video feed to determine at least one type for the event, wherein the video feed is a video feed for a professional game; using machine learning to develop an understanding of at least one event within a data feed relating to the motion of a non-professional player; based on the machine learning understanding of the video feed for the professional game and the data feed of the motion of the non-professional player, automatically, under computer control, providing an enhanced video feed that represents the non-professional player playing within the context of the professional game. In embodiments, the methods and systems may further include providing a facility having cameras for capturing 3D motion data and capturing video of a non-professional player to provide the data feed for the non-professional player. In embodiments, the non-professional player is represented by mixing video of the non-professional player with video of the professional game. In embodiments, the non-professional player is represented as an animation having attributes based on the data feed about the non-professional player. 
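As one illustration of the profile-based retrieval described above, the following minimal sketch filters stored video content data structures against a user profile; the classes, field names, and example data are hypothetical and are not drawn from the disclosure.

```python
# Minimal sketch (hypothetical names): selecting stored video content data
# structures whose machine-learned event type matches a user's profile.
from dataclasses import dataclass, field

@dataclass
class VideoContentItem:
    clip_uri: str
    event_type: str          # machine-learned type, e.g., "dunk"
    context: dict = field(default_factory=dict)  # e.g., {"player": "Player A"}

@dataclass
class UserProfile:
    preferred_event_types: set
    preferred_players: set

def select_for_user(items, profile):
    """Return items whose type or context matches the user's stated or inferred preferences."""
    return [item for item in items
            if item.event_type in profile.preferred_event_types
            or item.context.get("player") in profile.preferred_players]

library = [
    VideoContentItem("clip_001.mp4", "dunk", {"player": "Player A"}),
    VideoContentItem("clip_002.mp4", "free_throw", {"player": "Player B"}),
]
profile = UserProfile({"dunk"}, {"Player C"})
print([item.clip_uri for item in select_for_user(library, profile)])  # ['clip_001.mp4']
```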
The methods and systems disclosed herein may also include one or more of the following features and capabilities: spatiotemporal pattern recognition (including active learning of complex patterns and learning of actions such as P&R, postups, play calls); hybrid methods for producing high quality labels, combining automated candidate generation from XYZ data, and manual refinement; indexing of video by automated recognition of game clock; presentation of aligned optical and video; new markings using combined display, both manual and automated (via pose detection etc.); metrics: shot quality, rebounding, defense and the like; visualizations such as Voronoi, heatmap distribution, etc.; embodiment on various devices; video enhancement with metrics & visualizations; interactive display using both animations and video; gesture and touch interactions for sports coaching and commentator displays; and cleaning of XYZ data using, for example, HMM, PBP, video, hybrid validation. Further details as to data cleaning step204are provided herein. Raw input XYZ is frequently noisy, missing, or wrong. XYZ data is also delivered with attached basic events such as possession, pass, dribble, shot. These are frequently incorrect. This is important because event identification further down the process (Spatiotemporal Pattern Recognition) sometimes depends on the correctness of these basic events. As noted above, for example, if two players' XY positions are switched, then "over" vs. "under" defense would be incorrectly switched, since the players' relative positioning is used as a critical feature for the classification. Also, PBP data sources are occasionally incorrect. First, one may use validation algorithms to detect all events, including the basic events such as possession, pass, dribble, shot, and rebound that are provided with the XYZ data. Possession/Non-possession may use a Hidden Markov Model to best fit the data to these states. Shots and rebounds may use the possession model outputs, combined with 1) projected destination of the ball, and 2) PBP information. Dribbles may be identified using a trained ML algorithm and also using the output of the possession model. Specifically, once possessions are determined, dribbles may be identified with a hidden Markov model. The hidden Markov model consists of three states:
1. Holding the ball while the player is still able to dribble.
2. Dribbling the ball.
3. Holding the ball after the player has already dribbled.
A player starts in State 1 when he gains possession of the ball. At all times players are allowed to transition to either their current state, or the state with a number one higher than their current state, if such a state exists. The players' likelihood of staying in their current state or transitioning to another state may be determined by the transition probabilities of the model as well as the observations. The transition probabilities may be learned empirically from the training data. The observations of the model consist of the player's speed, which is placed into two categories, one for fast movement, and one for slow movement, as well as the ball's height, which is placed into categories for low and high height. The cross product of these two observations represents the observation space for the model. Similar to the transition probabilities, the observation probabilities, given a particular state, may be learned empirically from the training data.
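By way of illustration only, the following is a minimal sketch of the three-state possession/dribble hidden Markov model just described, decoded with the Viterbi algorithm; the transition and emission probabilities shown are placeholders, whereas in practice they would be learned empirically from training data as described above.

```python
# Minimal sketch of the three-state possession/dribble model described above.
# The transition and emission probabilities are placeholders for values that
# would be learned empirically from labeled training data.

# Players may only remain in their current state or advance to the next one.
TRANS = {
    "holding_pre_dribble": {"holding_pre_dribble": 0.7, "dribbling": 0.3},
    "dribbling": {"dribbling": 0.8, "holding_post_dribble": 0.2},
    "holding_post_dribble": {"holding_post_dribble": 1.0},
}

# Observations are the cross product of speed (slow/fast) and ball height (low/high).
EMIT = {
    "holding_pre_dribble": {("slow", "high"): 0.6, ("slow", "low"): 0.2,
                            ("fast", "high"): 0.15, ("fast", "low"): 0.05},
    "dribbling": {("fast", "low"): 0.5, ("slow", "low"): 0.3,
                  ("fast", "high"): 0.15, ("slow", "high"): 0.05},
    "holding_post_dribble": {("slow", "high"): 0.55, ("slow", "low"): 0.25,
                             ("fast", "high"): 0.15, ("fast", "low"): 0.05},
}

def viterbi(obs_seq):
    """Most likely state sequence for one possession; the player starts in State 1."""
    trellis = [{"holding_pre_dribble": (EMIT["holding_pre_dribble"][obs_seq[0]], None)}]
    for obs in obs_seq[1:]:
        layer = {}
        for prev, (prev_p, _) in trellis[-1].items():
            for nxt, t_p in TRANS[prev].items():
                p = prev_p * t_p * EMIT[nxt][obs]
                if p > layer.get(nxt, (0.0, None))[0]:
                    layer[nxt] = (p, prev)
        trellis.append(layer)
    # Backtrack through the stored back-pointers from the best final state.
    state = max(trellis[-1], key=lambda s: trellis[-1][s][0])
    path = [state]
    for t in range(len(trellis) - 1, 0, -1):
        state = trellis[t][state][1]
        path.append(state)
    return list(reversed(path))

# Each observation is a (speed category, ball-height category) pair per time step.
obs = [("slow", "high"), ("fast", "low"), ("fast", "low"), ("slow", "high")]
print(viterbi(obs))
# ['holding_pre_dribble', 'dribbling', 'dribbling', 'holding_post_dribble']
```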
Once these probabilities are known, the model is fully characterized and may be used to classify when the player is dribbling on unknown data. Once it is known that the player is dribbling, it remains to be determined when the actual dribbles occur. This may be done with a Support Vector Machine that uses domain specific information about the ball and player, such as the height of the ball as a feature to determine whether at that instant the player is dribbling. A filtering pass may also be applied to the resulting dribbles to ensure that they are sensibly separated, so that, for instance, two dribbles do not occur within 0.04 seconds of each other. Returning to the discussion of the algorithms, these algorithms decrease the basic event labeling error rate by a significant factor, such as about 50%. Second, the system has a library of anomaly detection algorithms to identify potential problems in the data. These include temporal discontinuities (intervals of missing data are flagged); spatial discontinuities (objects traveling in a non-smooth motion, "jumping"); interpolation detection (data that is too smooth, indicating that post-processing was done by the data supplier to interpolate between known data points in order to fill in missing data). This problem data is flagged for human review so that events detected during these periods are subject to further scrutiny. Spatio-player tracking may be undertaken in at least two types, as well as in a hybrid combined type. For tracking with broadcast video, the broadcast video is obtained from multiple broadcast video feeds. Typically, this will include a standard "from the stands view" from the center stands midway-up, a backboard view, a stands view from a lower angle from each corner, and potentially other views. Optionally, PTZ (pan tilt zoom) sensor information from each camera is also returned. An alternative is a Special Camera Setup method. Instead of broadcast feeds, this uses feeds from cameras that are mounted specifically for the purposes of player tracking. The cameras are typically fixed in terms of their location, pan, tilt, zoom. These cameras are typically mounted at high overhead angles; in the current instantiation, typically along the overhead catwalks above the court. A Hybrid/Combined System may be used. This system would use both broadcast feeds and feeds from the purpose-mounted cameras. By combining both input systems, accuracy is improved. Also, the outputs are ready to be passed on to the DataFX pipeline for immediate processing, since the DataFX will be painting graphics on top of the already-processed broadcast feeds. Where broadcast video is used, the camera pose is solved in each frame, since the PTZ may change from frame to frame. Optionally, cameras that have PTZ sensors may return this info to the system, and the PTZ inputs are used as initial solutions for the camera pose solver. If this initialization is deemed correct by the algorithm, it will be used as the final result; otherwise, refinement will occur until the system receives a usable solution. As described above, players may be identified by patches of color on the court. The corresponding positions are known since the camera pose is known, and we can perform the proper projections between 3D space and pixel space. Where purpose mounted cameras are used, multiple levels of resolution may be involved.
Certain areas of the court or field require more sensitivity, e.g., on some courts, the color of the “paint” area makes it difficult to track players when they are in the paint. Extra cameras with higher dynamic range and higher zoom are focused on these areas. The extra sensitivity enables the computer vision techniques to train separate algorithms for different portions of the court, tuning each algorithm to its type of inputs and the difficulty of that task. In a combination system, by combining the fixed and broadcast video feeds, the outputs of a player tracking system can feed directly into the DataFX production, enabling near-real-time DataFX. Broadcast video may also produce high-definition samples that can be used to increase accuracy. The methods and systems disclosed herein may include methods and systems for enabling interaction with a broadcast video content stream and may include a machine learning facility for developing an understanding of at least one event within a video feed for a video broadcast, the understanding including identifying context information relating to the event; and a touch screen user interface by which a broadcaster can interact with the video feed, wherein the options for broadcaster interaction are based on the context information, wherein the interaction with the touch screen controls the content of the broadcast video event. In embodiments, the touch screen interface is a large screen adapted to be seen by viewers of the video broadcast as the broadcaster uses the touch screen. In embodiments, a smaller touch screen is used by a commentator on air to control the information content being displayed, and the images/video on the touch screen is simultaneously displayed on a larger screen that is filmed and broadcast or is simultaneously displayed directly in the broadcast feed. In embodiments, the broadcaster can select from a plurality of context-relevant metrics, graphics, or combinations thereof to be displayed on the screen. In embodiments, the broadcaster can display a plurality of video feeds that have similar contexts as determined by the machine learning facility. In embodiments, the similarity of contexts is determined by comparing events within the video feeds. In embodiments, the broadcaster can display a superimposed view of at least two video feeds to facilitate a comparison of events from a plurality of video feeds. In embodiments, the comparison is of similar players from different, similar, or identical time periods. In embodiments, a similarity of players is determined by machine understanding of the characteristics of the players from the different time periods. In embodiments, the broadcaster can display a plurality of highlights that are automatically determined by a machine understanding of a live sports event that is the subject of the video feed. In embodiments, the highlights are determined based on similarity to highlights that have been identified for other events. 
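As a minimal sketch of how the similarity of contexts might be scored by comparing events within the video feeds, the following compares the machine-identified event types of two feeds using a Jaccard-style overlap; the event labels and the choice of this particular measure are illustrative assumptions.

```python
# Minimal sketch (assumed event representation): scoring the similarity of two
# video feeds by comparing the machine-identified events they contain.
from collections import Counter

def context_similarity(events_a, events_b):
    """Jaccard-style overlap of event-type counts between two feeds (0.0 to 1.0)."""
    a, b = Counter(events_a), Counter(events_b)
    intersection = sum((a & b).values())
    union = sum((a | b).values())
    return intersection / union if union else 0.0

feed_1 = ["pick_and_roll", "three_point_attempt", "rebound", "pick_and_roll"]
feed_2 = ["pick_and_roll", "rebound", "dunk", "pick_and_roll"]
print(f"similarity: {context_similarity(feed_1, feed_2):.2f}")  # 0.60
```

Feeds scoring above a chosen threshold could then be offered to the broadcaster as candidates for a superimposed comparison of the kind described above.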
The methods and systems disclosed herein may include methods and systems for enabling interaction with a broadcast video content stream and may include developing a machine learning understanding of at least one event within a video feed for a video broadcast, the understanding including identifying context information relating to the event; and providing a touch screen user interface by which a broadcaster can interact with the video feed, wherein the options for broadcaster interaction are based on the context information, wherein the interaction with the touch screen controls the content of the broadcast video event. In embodiments, the touch screen interface is a large screen adapted to be seen by viewers of the video broadcast as the broadcaster uses the touch screen. In embodiments, the broadcaster can select from a plurality of context-relevant metrics to be displayed on the screen. In embodiments, the broadcaster can display a plurality of video feeds that have similar contexts as determined by the machine learning facility. In embodiments, the similarity of contexts is determined by comparing events within the video feeds. In embodiments, the broadcaster can display a superimposed view of at least two video feeds to facilitate a comparison of events from a plurality of video feeds. In embodiments, the comparison is of similar players from different time periods. In embodiments, a similarity of players is determined by the machine understanding of the characteristics of the players from the different time periods. In embodiments, the broadcaster can display a plurality of highlights that are automatically determined by a machine understanding of a live sports event that is the subject of the video feed. In embodiments, the highlights are determined based on similarity to highlights that have been identified for other events. The methods and systems disclosed herein may include methods and systems for enabling interaction with a broadcast video content stream and may include taking a video feed; using machine learning to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; producing a video content data structure that includes the associated context information; and providing an application by which a user can interact with the video content data structure, wherein the options for user interaction are based on the context information, wherein the interaction with the video content data structure controls the presentation of a broadcast video event on a display screen. Methods and systems disclosed herein may include tracklet stitching. Optical player tracking results in short to medium length tracklets, which typically end when the system loses track of a player or the player collides (or passes close to) with another player. Using team identification and other attributes, algorithms can stitch these tracklets together. Where a human being is in the loop, systems may be designed for rapid interaction and for disambiguation and error handling. Such a system is designed to optimize human interaction with the system. Novel interfaces may be provided to specify the motion of multiple moving actors simultaneously, without having to match up movements frame by frame. In embodiments, custom clipping is used for content creation, such as involving OCR. 
Machine vision techniques may be used to automatically locate the “score bug” and determine the location of the game clock, score, and quarter information. This information is read and recognized by OCR algorithms. Post-processing algorithms using various filtering techniques are used to resolve issues in the OCR. Kalman filtering/HMMs may be used to detect errors and correct them. Probabilistic outputs (which measure the degree of confidence) assist in this error detection/correction. Sometimes, a score is nonexistent or cannot be detected automatically (e.g., sometimes during PIP or split screens). In these cases, remaining inconsistencies or missing data is resolved with the assistance of human input. Human input is designed to be sparse so that labelers do not have to provide input at every frame. Interpolation and other heuristics are used to fill in the gaps. Consistency checking is done to verify game clock. For alignment2112, as discussed in connection withFIG.21, another advance is to use machine vision techniques to verify some of the events. For example, video of a made shot will typically show the score being increased or will show a ball going through a hoop. Either kind of automatic observation serves to help the alignment process result in the correct video frames being shown to the end user. In accordance with an exemplary and non-limiting embodiment, augmented or enhanced video with extracted semantics-based experience is provided based, at least in part, on 3D position/motion data. In accordance with other exemplary embodiments, there is provided embeddable app content for augmented video with an extracted semantics-based experience. In yet another exemplary embodiment, there is provided the ability to automatically detect the court/field, and relative positioning of the camera, in (near) real time using computer vision techniques. This may be combined with automatic rotoscoping of the players in order to produce dynamic augmented video content. The methods and systems disclosed herein may include methods and systems for embedding video content in an application and may include taking a video feed; using machine learning to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; producing a video content data structure that includes the associated context information; taking an application that displays video content; and embedding the video content data structure in the application. In embodiments, the user interface of the application offers the user the option to control the presentation of the video content from the video content data structure in the application. In embodiments, the control of the presentation is based on at least one of a user preference and a user profile. In embodiments, the application is a mobile application that provides a story about an event and wherein the video content data structure comprises at least one of a content card and a digital still image. 
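The following is a minimal sketch, with a hypothetical schema, of a video content data structure that carries its machine-learned context alongside the clip so that an application can embed it as a content card or digital still; the field names and serialized form are assumptions for illustration only.

```python
# Minimal sketch (hypothetical schema): a video content data structure that
# carries its machine-learned context so an application can embed it, e.g. as
# a content card or digital still within a story.
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class VideoContentDataStructure:
    clip_uri: str
    event_type: str                  # machine-learned event type, e.g. "made_three"
    game_clock: str                  # clock reading recovered for the event
    context: dict                    # players, teams, score state, and the like
    still_uri: Optional[str] = None  # optional digital still / content card image

card = VideoContentDataStructure(
    clip_uri="clips/q4_made_three.mp4",
    event_type="made_three",
    game_clock="02:14",
    context={"player": "Player A", "team": "Home", "score_margin": -1},
    still_uri="stills/q4_made_three.jpg",
)
# A serialized form an application could consume when embedding the content.
print(json.dumps(asdict(card), indent=2))
```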
The methods and systems disclosed herein may include methods and systems for enabling a mobile application that allows user interaction with video content and may include a video ingestion facility for taking a video feed; a machine learning facility for developing an understanding of an event within the video feed, the understanding including identifying context information relating to the event; and a video production facility for automatically, under computer control, extracting the content displaying the event, associating the extracted content with the context information and producing a video content data structure that includes the associated context information; and using the context information for a plurality of such video content data structures to generate, automatically under computer control, a story that includes a sequence of the video content data structures. In embodiments, the content of the story is based on a user profile that is based on at least one of an expressed user preference, information about a user interaction with video content, and demographic information about the user. In embodiments, the methods and systems may further include determining a pattern relating to a plurality of events in the video feed and associating the determined pattern with the video content data structure as additional context information. In embodiments, the pattern relates to a highlight event within the video feed. In embodiments, the highlight event is associated with at least one of a player and a team. In embodiments, the embedded application allows a user to indicate at least one of a player and a team for which the user wishes to obtain video feeds containing the highlight events. In embodiments, the pattern relates to a comparison of events occurring at least one of within the video feed or within a plurality of video feeds. In embodiments, the comparison is between events occurring over time. In embodiments, the embedded application allows a user to select at least one player to obtain a video providing a comparison between the player and at least one of a past representation of the same player and a representation of another player. In embodiments, the pattern is a cause-and-effect pattern related to the occurrence of a following type of event after the occurrence of a pre-cursor type of event. In embodiments, the embedded application allows the user to review video cuts in a sequence that demonstrate the cause-and-effect pattern. In embodiments, the application provides a user interface for allowing a user to enter at least one of text and audio input to provide a narrative for a sequence of events within the video feed. In embodiments, the user may select a sequence of video events from within the feed for display in the application. In embodiments, upon accepting the user narrative, the system automatically generates an electronic story containing the events from the video feed and the narrative. 
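A minimal sketch of this kind of story assembly, using assumed structures and field names, is shown below; ordering clips by broadcast time and pairing one narrative passage per clip are illustrative choices rather than requirements of the disclosure.

```python
# Minimal sketch (assumed structures): assembling a story from a sequence of
# context-tagged video clips, optionally paired with a user-supplied narrative.
from typing import List, Optional

def build_story(clips: List[dict], narrative: Optional[List[str]] = None) -> dict:
    """Order clips by broadcast time and attach one narrative passage per clip if given."""
    ordered = sorted(clips, key=lambda c: c["wall_time"])
    segments = []
    for i, clip in enumerate(ordered):
        text = narrative[i] if narrative and i < len(narrative) else ""
        segments.append({"clip_uri": clip["clip_uri"], "caption": text,
                         "context": clip.get("context", {})})
    return {"title": "Auto-generated story", "segments": segments}

clips = [
    {"clip_uri": "clips/steal.mp4", "wall_time": 101.2, "context": {"player": "Player A"}},
    {"clip_uri": "clips/dunk.mp4", "wall_time": 104.8, "context": {"player": "Player A"}},
]
story = build_story(clips, narrative=["A steal at midcourt...", "...finished with a dunk."])
print(len(story["segments"]), "segments")
```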
The methods and systems disclosed herein may include methods and systems for enabling a mobile application that allows user interaction with video content and may include taking a video feed; using a machine learning facility to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; and automatically, under computer control, extracting the content displaying the event, associating the extracted content with the context information and producing a video content data structure that includes the associated context information. In embodiments, the methods and systems may further include using the context information for a plurality of such video content data structures to generate, automatically under computer control, a story that includes a sequence of the video content data structures. In embodiments, the user may interact with an application, such as on a phone, laptop, or desktop, or with a remote control, to control the display of broadcast video. As noted above in connection with interaction with a mobile application, options for user interaction may be customized based on the context of an event, such as by offering options to display context-relevant metrics for the event. These selections may be used to control the display of broadcast video by the user, such as by selecting preferred, context-relevant metrics that appear as overlays, sidebars, scrolling information, or the like on the video display as various types of events take place in the video stream. For example, a user may select settings for a context like a three point shot attempt, so that when the video displays three point shot attempts, particular metrics (e.g., the average success percentage of the shooter) are shown as overlays above the head of the shooter in the video. The methods and systems disclosed herein may include methods and systems for personalizing content for each type of user based on determining the context of the content through machine analysis of the content and based on an indication by the user of a preference for a type of presentation of the content. The methods and systems disclosed herein may include methods and systems for enabling a user to express preferences relating to display of video content and may include: taking a video feed; using machine learning to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; producing a video content data structure that includes the associated context information; and providing a user interface by which a user can indicate a preference for how content that is associated with a particular type of context will be presented to the user. In embodiments, a user may be presented with an interface element for a mobile application, browser, desktop application, remote control, tablet, smart phone, or the like, for indicating a preference as to how content will be presented to the user. In embodiments, the preference may be indicated for a particular context, such a context determined by a machine understanding of an event. 
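A minimal sketch of how such context-keyed presentation preferences might be stored and applied is shown below; the context labels and overlay names are hypothetical.

```python
# Minimal sketch (hypothetical names): presentation preferences keyed by the
# machine-identified context of an event, applied as the event is displayed.
PREFERENCES = {
    # context/event type -> overlays the user wants shown for that context
    "three_point_attempt": ["shooter_season_3pt_pct", "shot_distance"],
    "pick_and_roll": ["screen_defense_type", "expected_points"],
}

def overlays_for_event(event_type, preferences=PREFERENCES):
    """Return the overlay metrics the user has chosen for this event context."""
    return preferences.get(event_type, [])

print(overlays_for_event("three_point_attempt"))
# ['shooter_season_3pt_pct', 'shot_distance']
```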
In embodiments, a user may select to see certain metrics, graphics or additional information overlaid on top of the existing broadcast for certain types of semantic events such as a player's expected field goal percentage when they possess the ball or the type and effectiveness of defense being played on a pick and roll. The methods and systems disclosed herein may include methods and systems for automatically generating stories/content based on the personal profile of a viewer and their preferences or selections of contextualized content. The methods and systems disclosed herein may include methods and systems for enabling a mobile application that allows a user to interact with video content and may include taking a video feed; using machine learning to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; producing a video content data structure that includes the associated context information; and using the context information for a plurality of such video content data structures to generate, automatically under computer control, a story that includes the video content data structures, wherein the content of the story is based on a user preference. In embodiments, the user preference for a type of content is based on at least one of a user expressed preference and a preference that is inferred based on user interaction with an item of content. In embodiments, items of content that are associated, based on machine understanding, with particular events in particular contexts can be linked together, or linked with other content, to produce modified content such as stories. For example, a game summary, such as extracted from an online report about an event, may be augmented with machine-extracted highlight cuts that correspond to elements featured in the game summary, such as highlights of important plays, images of particular players, and the like. These stories can be customized for a user, such as linking a story about a game played by the user's favorite team with video cuts of the user's favorite player that were taken during the game. The methods and systems disclosed herein may include methods and systems for using machine learning to extract context information and semantically relevant events and situations from a video content stream, such that the events and situations may be presented according to the context of the content. The methods and systems disclosed herein may include methods and systems for embedding video content in an application and may include taking a video feed; using machine learning to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; producing a video content data structure that includes the associated context information; taking an application that displays video content; and embedding the video content data structure in the application, wherein the location of the embedded video content in the application is based on the context information.
In embodiments, context-identified video cuts can be used to enrich or enhance applications, such as by embedding the cuts in relevant locations in the applications. For example, a mobile application displaying entertainment content may be automatically populated with video cuts of events that are machine-extracted and determined to be of the appropriate type (based on context), for the application. A video game application can be enhanced, such as by including real video cuts of plays that fit a particular context (e.g., showing a pick-and-roll play where players A and B are matched up against players C and D in a real game, and the same matchup is determined to occur in the video game). To facilitate embedding the application, a set of protocols, such as APIs, may be defined, by which available categories (such as semantic categories, types of contexts, types of events, and the like) are specified, such that an application may call for particular types of events, which can, in turn, be embedded in the application. Similarly, an application may be constructed with appropriate pointers, calls, objects, or the like, that allow a designer to specify, and call for, particular types of events, which may be automatically extracted from a library of machine-extracted, context-identified events and then embedded where appropriate into the application code. In embodiments, an application may provide stories about events, such as sporting events, and the machine-extracted content may include content cards or digital stills that are tagged by context so that they can be placed in appropriate locations in a story. The application can provide automatically generated content and stories, enhanced by content from a live game. In embodiments, an application may recommend video clips based on the use of keywords that match machine learned semantics that enable users to post or share video clips automatically tailored to text that they are writing. For example, clips may be recommended that include the presence of a particular player, that include a particular type of play (e.g., “dunks”) and/or that are from a particular time period (e.g., “last night,” etc.). In accordance with an exemplary and non-limiting embodiment, there is described a method for the extraction of events and situations corresponding to semantically relevant concepts. In yet other embodiments, semantic events may be translated and cataloged into data and patterns. The methods and systems disclosed herein may include methods and systems for embedding content cards or digital stills with contextualized content stories/visualizations into a mobile application. They may include automatically generated content, such as stories, extracted from a live game delivered to users via an application, such as a mobile application, an augmented reality glasses application, a virtual reality glasses application, or the like. In embodiments, the application is a mobile application that provides a story about an event and wherein the video content data structure comprises at least one of a content card and a digital still image. The methods and systems disclosed herein may include methods and systems for applying contextualized content from actual sporting events to video games to improve the reality of the game play. 
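As an illustration of the category-based protocol described above, the following minimal sketch exposes a library of machine-extracted, context-tagged events that an application can query by semantic category; the class and method names are assumptions and do not correspond to any actual product API.

```python
# Minimal sketch (illustrative only, not an actual product API): a library of
# machine-extracted, context-tagged events exposed to applications by category.
class EventLibrary:
    def __init__(self):
        self._events = []

    def publish(self, category: str, clip_uri: str, context: dict) -> None:
        """Called by the extraction pipeline as events are identified."""
        self._events.append({"category": category, "clip_uri": clip_uri,
                             "context": context})

    def list_categories(self):
        """Categories an embedding application may call for."""
        return sorted({e["category"] for e in self._events})

    def query(self, category: str, **context_filters):
        """Return events of one category whose context matches the given filters."""
        return [e for e in self._events
                if e["category"] == category
                and all(e["context"].get(k) == v for k, v in context_filters.items())]

library = EventLibrary()
library.publish("pick_and_roll", "clips/pnr_17.mp4",
                {"ball_handler": "Player A", "screener": "Player B"})
print(library.list_categories())
print(library.query("pick_and_roll", ball_handler="Player A"))
```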
The methods and systems disclosed herein may include methods and systems for improving a video game and may include taking a video feed; using machine learning to develop an understanding of at least one first real event within the video feed, the understanding including identifying context information relating to the first real event; taking a game event coded for display within a video game; matching the context information for the real event with the context of the game event in the video game; comparing the display of the game event to the video for the real event; and modifying the coding of the game event based on the comparison. In embodiments, context information can be used to identify video cuts that can be used to improve video games, such as by matching the context of a real event with a similar context in a coded video game event, comparing the video for the real event with the video game display of a similar event, and modifying the video event to provide a more faithful simulation of the real event. The methods and systems disclosed herein may include methods and systems for taking the characteristics of a user either from a video capture of their recreational play or through user generated features and importing the user's avatar into a video game. The methods and systems disclosed herein may include methods and systems for interactive contextualized content that can be filtered and adjusted via a touch screen interface. In embodiments, the user interface is a touch screen interface. The methods and systems disclosed herein may include methods and systems for real time display of relevant fantasy and betting metrics overlaid on a live game feed. The methods and systems disclosed herein may include methods and systems for real time adjustment of betting lines and/or additional betting option creation based on in-game contextual content. The methods and systems disclosed herein may include methods and systems for taking a video feed and using machine learning to develop an understanding of at least one first event within the video feed. The understanding includes identifying context information relating to the first event. The methods and systems also include determining a metric based on the machine understanding. The metric is relevant to at least one of a wager and a fantasy sports outcome. The methods and systems include presenting the metric as an overlay for an enhanced video feed. In embodiments, the metrics described throughout this disclosure may be placed as overlays on video feeds. For example, metrics calculated based on machine-extracted events that are relevant to betting lines, fantasy sports outcomes, or the like, can be presented as overlays, scrolling elements, or the like on a video feed. The metrics to be presented can be selected based on context information, such as showing fantasy metrics for players who are on screen at the time or showing the betting line where a scoring play impacts the outcome of a bet. As noted above, the displays may be customized and personalized for a user, such as based on that user's fantasy team for a given week or that user's wagers for the week. 
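A minimal sketch of such context-driven selection of fantasy-relevant overlays is shown below, assuming the machine understanding reports which players are on screen; the player names, roster, and statistics are placeholders.

```python
# Minimal sketch (assumed data): choosing fantasy-relevant overlay metrics
# based on which players the machine understanding reports on screen.
def fantasy_overlays(on_screen_players, user_roster, live_stats):
    """Overlay live fantasy stats only for the user's players currently on screen."""
    return {player: live_stats.get(player, {})
            for player in on_screen_players if player in user_roster}

on_screen = ["Player A", "Player C"]
roster = {"Player A", "Player D"}
stats = {"Player A": {"fantasy_points": 31.5}, "Player C": {"fantasy_points": 12.0}}
print(fantasy_overlays(on_screen, roster, stats))  # {'Player A': {'fantasy_points': 31.5}}
```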
The methods and systems disclosed herein may include methods and systems for taking a video feed of a recreational event; using machine learning to develop an understanding of at least one event within the video feed, the understanding including identifying context information relating to the event; and based on the machine understanding, providing content including information about a player in the recreational event based on the machine understanding and the context. The methods and systems may further include providing a comparison of the player to at least one professional player according to at least one metric that is based on the machine understanding. In embodiments, machine understanding can be applied to recreational venues, such as for capturing video feeds of recreational games, practices, and the like. Based on machine understanding, highlight clips, metrics, and the like, as disclosed throughout this disclosure, may be extracted by processing the video feeds, including machine understanding of the context of various events within the video. In embodiments, metrics, video, and the like can be used to provide players with personalized content, such as a highlight reel of good plays, or a comparison to one or more professional players (in video cuts, or with semantically relevant metrics). Context information can allow identification of similar contexts between recreational and professional events, so that a player can see how a professional acted in a context that is similar to one faced by the recreational player. The methods and systems may enable the ability to use metrics and events recorded from a video stream to enable the creation of a recreational fantasy sports game with which users can interact. The methods and systems may enable the ability to recognize specific events or metrics from a recreational game and compare them to similar or parallel events from a professional game to help coach a recreational player or team or for the creation of a highlight reel that features both recreational and professional video cuts. The methods and systems disclosed herein may include methods and systems for providing enhanced video content and may include using machine learning to develop an understanding of a plurality of events within at least one video feed to determine at least one type for each of the plurality of events; extracting a plurality of video cuts from the video feed and indexing the plurality of video cuts based on at least one type of event determined by the understanding developed by machine learning; and making the indexed and extracted video cuts available to a user. In embodiments, the user is enabled to at least one of edit, cut, and mix the video cuts to provide an enhanced video containing at least one of the video cuts. In embodiments, the user is enabled to share the enhanced video. In embodiments, the methods and systems may further include indexing at least one shared, enhanced video with the semantic understanding of the type of events in it that was determined by machine learning. In embodiments, the methods and systems may further include using the index information for the shared, enhanced video to determine a similarity between the shared, enhanced video and at least one other video content item. In embodiments, the similarity is used to identify additional extracted, indexed video cuts that may be of interest to the user. In embodiments, the similarity is used to identify other users who have shared similarly enhanced video.
In embodiments, the similarity is used to identify other users who are likely to have an interest in the shared, enhanced video. In embodiments, the methods and systems may further include recommending at least one of the shared, enhanced video and one of the video cuts based on an understanding of the preferences of the other users. In embodiments, the similarity is based at least in part on user profile information for users who have indicated an interest in the video cut and the other video content item. The methods and systems disclosed herein may include methods and systems for providing enhanced video content and may include using machine learning to develop an understanding of a plurality of events within at least one video feed to determine at least one type for each of the plurality of events; extracting a plurality of video cuts from the video feed and indexing the plurality of video cuts to form an indexed set of extracted video cuts, wherein the indexing is based on at least one type of event determined by the understanding developed by machine learning; determining at least one pattern relating to a plurality of events in the video feed; adding the determined pattern information to the index for the indexed set of video cuts; and making the indexed and extracted video cuts available to a user. In embodiments, the user is enabled to at least one of edit, cut, and mix the video cuts to provide an enhanced video containing at least one of the video cuts. In embodiments, the user is enabled to share the enhanced video. In embodiments, the video cuts are clustered based on the patterns that exist within the video cuts. In embodiments, the pattern is determined automatically using machine learning and based on the machine understanding of the events in the video feed. In embodiments, the pattern is a highlight event within the video feed. In embodiments, the highlight event is presented to the user when the indexed and extracted video cut is made available to the user. In embodiments, the user is prompted to watch a longer video feed upon viewing the indexed and extracted video cut. In accordance with an exemplary and non-limiting embodiment, there is provided a touch screen or other gesture-based interface experience based, at least in part, on extracted semantic events. The methods and systems disclosed herein may include methods and systems for machine extracting semantically relevant events from 3D motion/position data captured at a venue, calculating a plurality of metrics relating to the events, and presenting the metrics in a video stream based on the context of the video stream. The methods and systems disclosed herein may include methods and systems for producing machine-enhanced video streams and may include taking a video feed from 3D motion and position data from a venue; using machine learning to develop an understanding of at least one first event within the video feed, the understanding including identifying context information relating to the first event; calculating a plurality of metrics relating to the events; and producing an enhanced video stream that presents the metrics in the video stream, wherein the presentation of at least one metric is based on the context information for the event with which the metric is associated in the video stream. 
In embodiments, semantically relevant events determined by machine understanding of 3D motion/position data for an event from a venue can be used to calculate various metrics, which may be displayed in the video stream of the event. Context information, which may be determined based on the types and sequences of events, can be used to determine what metrics should be displayed at a given position within the video stream. These metrics may also be used to create new options for users to place wagers on or be integrated into a fantasy sports environment. The methods and systems disclosed herein may include methods and systems enabling a user to cut or edit video based on machine learned context and share the video clips. These may further include allowing a user to interact with the video data structure to produce an edited video data stream that includes the video data structure. In embodiments, the interaction includes at least one of editing, cutting, and sharing a video clip that includes the video data structure. The methods and systems may enable the ability for users to interact with video cuts through an interface to enhance the content with graphics or metrics based on a pre-set set of options, and then share a custom cut and enhanced clip. The methods and systems may include the ability to automatically find similarity in different video clips based on semantic context contained in the clips, and then cluster clips together or to recommend additional clips for viewing. The methods and systems may include the ability to extract contextualized content from a feed of a recreational event to immediately deliver content to players, including comparing a recreational player to a professional player based on machine learned understanding of player types. In accordance with an exemplary and non-limiting embodiment, there is described a second screen interface unique to extracted semantic events and user selected augmentations. In yet other embodiments, the second screen may display real-time, or near real time, contextualized content. In accordance with further exemplary and non-limiting embodiments, the methods and systems disclosed herein may include methods and systems for taking a video feed; using machine learning to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; and producing a video content data structure that includes the associated context information. In embodiments, the methods and systems may further include determining a plurality of semantic categories for the context information and filtering a plurality of such video content data structures based on the semantic categories. In embodiments, the methods and systems may further include matching the events that occur in one video feed to those that occur in a separate video feed such that the semantic understanding captured in the first video feed can be used to at least one of filter and cut a separate second video feed based on the same events. In embodiments, the methods and systems may further include determining a pattern relating to a plurality of the events and providing a content data structure based on the pattern. 
In embodiments, the pattern comprises a plurality of important plays in a sports event that are identified based on comparison to similar plays from previous sports events. In embodiments, the pattern comprises a plurality of plays in a sports event that is determined to be unusual based on comparison to video feeds from other sports events. In embodiments, the methods and systems may further include extracting semantic events over time to draw a comparison of at least one of a player and a team over time. In embodiments, the methods and systems may further include superimposing video of events extracted from video feeds from at least two different time periods to illustrate the comparison. In embodiments, the methods and systems may further include allowing a user to interact with the video data structure to produce an edited video data stream that includes the video data structure. In embodiments, the interaction includes at least one of editing, mixing, cutting, and sharing a video clip that includes the video data structure. In embodiments, the methods and systems may further include enabling users to interact with the video cuts through a user interface to enhance the video content with at least one graphic element selected from a menu of options. In embodiments, the methods and systems may further include enabling a user to share the enhanced video content. In embodiments, the methods and systems may further include enabling a user to find similar video clips based on the semantic context identified in the clips. In embodiments, the methods and systems may further include using the video data structure and the context information to construct modified video content for a second screen that includes the video data structure. In embodiments, the content for the second screen correlates to the timing of an event displayed on a first screen. In embodiments, the content for the second screen includes a metric determined based on the machine understanding, wherein the metric is selected based on the context information. The methods and systems disclosed herein may include methods and systems for displaying contextualized content of a live event on a second screen that correlates to the timing of the live event on the first screen. These may include using the video data structure and the context information to construct modified video content for a second screen that includes the video data structure. In embodiments, the content for the second screen correlates to the timing of an event displayed on a first screen. In embodiments, the content for the second screen includes a metric determined based on the machine understanding, wherein the metric is selected based on the context information. In embodiments, machine extracted metrics and video cuts can be displayed on a second screen, such as a tablet, smart phone, or smart remote control screen, such as showing metrics that are relevant to what is happening, in context, on a main screen. 
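A minimal sketch of pairing second-screen content with the timing of events shown on the first screen is shown below; the broadcast timestamps, event labels, and metrics are illustrative assumptions.

```python
# Minimal sketch (hypothetical names): pairing second-screen content with the
# timing of events shown on the first screen.
import bisect

# Events the machine understanding has indexed by broadcast time (seconds).
EVENT_TIMELINE = [
    (612.0, {"event": "pick_and_roll", "metric": "screen efficiency: 1.12 pts/play"}),
    (655.5, {"event": "three_point_attempt", "metric": "shooter 3PT%: 41.3"}),
]

def second_screen_payload(broadcast_time):
    """Return the most recent event's context-relevant metric for the second screen."""
    times = [t for t, _ in EVENT_TIMELINE]
    i = bisect.bisect_right(times, broadcast_time) - 1
    return EVENT_TIMELINE[i][1] if i >= 0 else None

print(second_screen_payload(660.0))  # metric tied to the three-point attempt
```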
The methods and systems disclosed herein may include methods and systems for an ingestion facility adapted or configured to ingest a plurality of video feeds; a machine learning system adapted or configured to apply machine learning on a series of events in a plurality of video feeds in order to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; an extraction facility adapted or configured to automatically, under computer control, extract the content displaying the event and associate the extracted content with the context information; and a video publishing facility for producing a video content data structure that includes the associated context information. In embodiments, the methods and systems may further include an analytic facility adapted or configured to determine a plurality of semantic categories for the context information and filter a plurality of such video content data structures based on the semantic categories. In embodiments, the methods and systems may further include a matching engine adapted or configured to match the events that occur in one video feed to those that occur in a separate video feed such that the semantic understanding captured in the first video feed can be used to at least one of filter and cut a separate second video feed based on the same events. In embodiments, the methods and systems may further include a pattern recognition facility adapted or configured to determine a pattern relating to a plurality of the events and providing a content data structure based on the pattern. The methods and systems disclosed herein may include methods and systems for displaying machine extracted, real time, contextualized content based on machine identification of a type of event occurring in a live video stream. The methods and systems disclosed herein may include methods and systems for taking a video feed; using machine learning to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; and producing a video content data structure that includes the associated context information. The methods and systems disclosed herein may include methods and systems for providing context information in video cuts that are generated based on machine extracted cuts that are filtered by semantic categories. The methods and systems disclosed herein may include methods and systems for determining a plurality of semantic categories for the context information and filtering a plurality of the video content data structures based on the semantic categories. The methods and systems disclosed herein may include methods and systems for matching the events that occur in one video feed to those that occur in a separate video feed such that the semantic understanding captured in the first video feed can be used to filter and cut a separate second video feed based on these same events. The methods and systems disclosed herein may include methods and systems for enabling user interaction with a mobile application that displays extracted content, where the user interaction is modified based on the context of the content (e.g., the menu is determined by context). 
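By way of illustration only, the following minimal sketch composes stand-ins for the ingestion, machine learning, extraction, and publishing facilities described above into a single pipeline; every function here is a placeholder intended only to show how the facilities might be chained, not an actual implementation.

```python
# Minimal sketch (illustrative composition): chaining the ingestion, machine
# learning, extraction, and publishing facilities described above.
def ingest(feed_uri):
    """Stand-in for the ingestion facility: yields segments of a feed."""
    return [{"segment": i, "feed": feed_uri} for i in range(3)]

def understand(segment):
    """Stand-in for the machine learning facility: returns event type + context."""
    return {"event_type": "pick_and_roll", "context": {"segment": segment["segment"]}}

def extract(segment, understanding):
    """Stand-in for the extraction facility: pairs the cut with its context."""
    return {"clip": f"{segment['feed']}#seg{segment['segment']}", **understanding}

def publish(content_items):
    """Stand-in for the publishing facility: emits video content data structures."""
    return list(content_items)

published = publish(extract(s, understand(s)) for s in ingest("rtmp://example/feed1"))
print(len(published), "video content data structures produced")
```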
The methods and systems disclosed herein may include methods and systems for enabling an application allowing user interaction with video content and may include an ingestion facility adapted or configured to access at least one video feed, wherein the ingestion facility may be executing on at least one processor; a machine learning facility operating on the at least one video feed to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; an extraction facility adapted or configured to automatically, under computer control, extract the content displaying the event and associate the extracted content with the context information; a video production facility adapted or configured to produce a video content data structure that includes the associated context information; and an application having a user interface by which a user can interact with the video content data structure, wherein the options for user interaction are based on the context information. In embodiments, the application is a mobile application. In embodiments, the application is at least one of a smart television application, a virtual reality headset application and an augmented reality application. In embodiments, the user interface is a touch screen interface. In embodiments, the user interface allows a user to enhance the video feed by selecting a content element to be added to the video feed. In embodiments, the content element is at least one of a metric and a graphic element that is based on the machine understanding. In embodiments, the user interface allows the user to select content for a particular player of a sports event. In embodiments, the user interface allows the user to select content relating to a context involving the matchup of two particular players in a sports event. In embodiments, the system takes at least two video feeds from different time periods, the machine learning facility determines a context that includes a similarity between at least one of a plurality of players and a plurality of plays in the two feeds, and the user interface allows the user to select at least one of the players and the plays to obtain a video feed that illustrates a comparison. In embodiments, the user interface includes options for at least one of editing, cutting, and sharing a video clip that includes the video data structure. In embodiments, the video feed comprises 3D motion camera data captured from a live sports venue. In embodiments, the ability of the machine learning facility to develop the understanding is developed by feeding the machine learning facility a plurality of events for which context has already been identified.
The methods and systems disclosed herein may include methods and systems for enabling a mobile application allowing user interaction with video content and may include taking at least one video feed; applying machine learning on the at least one video feed to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; producing a video content data structure that includes the associated context information; and providing a mobile application having a user interface by which a user can interact with the video content data structure, wherein the options for user interaction are based on the context information. In embodiments, the user interface is a touch screen interface. In embodiments, the user interface allows a user to enhance the video feed by selecting a content element to be added to the video feed. In embodiments, the content element is at least one of a metric and a graphic element that is based on the machine understanding. In embodiments, the user interface allows the user to select content for a particular player of a sports event. In embodiments, the user interface allows the user to select content relating to a context involving the matchup of two particular players in a sports event. In embodiments, the system takes at least two video feeds from different time periods, the machine learning facility determines a context that includes a similarity between at least one of a plurality of players and a plurality of plays in the two feeds, and the user interface allows the user to select at least one of the players and the plays to obtain a video feed that illustrates a comparison. In embodiments, the user interface includes options for at least one of editing, cutting, and sharing a video clip that includes the video data structure. In embodiments, the video feed comprises 3D motion camera data captured from a live sports venue. In embodiments, the ability of the machine learning facility to develop the understanding is developed by feeding the machine learning facility a plurality of events for which context has already been identified. The methods and systems disclosed herein may include methods and systems for enabling a mobile application allowing user interaction with video content and may include taking a video feed; using machine learning to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; producing a video content data structure that includes the associated context information; and providing a mobile application by which a user can interact with the video content data structure, wherein the options for user interaction are based on the context information.
In embodiments, machine extracted content, with associated context information, may be provided to users via a mobile application, through which the users may display and interact with the content, such as by selecting particular types of content based on a desired semantic category (such as by selecting the category in list, menu, or the like), playing content (including pausing, rewinding, fast forwarding, and the like), and manipulating content (such as positioning content within a display window, zooming, panning, and the like). In embodiments, the nature of the permitted interaction may be governed by the context information associated with the content, where the context information is based on a machine understanding of the content and its associated context. For example, where the content is related to a particular type of play within a context of an event like a game, such as rebounding opportunities in basketball, the user may be permitted to select from a set of metrics that are relevant to rebounding, so that the selected metrics from a context-relevant set are displayed on the screen with the content. If the context is different, such as if the content relates to a series of pick-and-roll plays by a particular player, different metrics may be made available for selection by the user, such as statistics for that player, or metrics appropriate for pick-and-rolls. Thus, the machine-extracted understanding of an event, including context information, can be used to customize the content displayed to the user, including to allow the user to select context-relevant information for display. The methods and systems disclosed herein may include methods and systems for allowing a user to control a presentation of a broadcast video event, where the options for control are based on a context of the content as determined by machine extraction of semantically relevant events from the content. In accordance with an exemplary and non-limiting embodiment, there is described a method for “painting” translated semantic data onto an interface. In accordance with an exemplary and non-limiting embodiment, there is described spatiotemporal pattern recognition based, at least in part, on optical XYZ alignment for semantic events. In yet other embodiments, there is described the verification and refinement of spatiotemporal semantic pattern recognition based, at least in part, on hybrid validation from multiple sources. In accordance with an exemplary and non-limiting embodiment, there is described human identified video alignment labels and markings for semantic events. In yet other embodiments, there is described machine learning algorithms for spatiotemporal pattern recognition based, at least in part, on human identified video alignment labels for semantic events. In accordance with an exemplary and non-limiting embodiment, there is described automatic game clock indexing of video from sporting events using machine vision techniques, and cross-referencing this index with a semantic layer that indexes game events. The product is the ability to query for highly detailed events and return the corresponding video in near real-time. In accordance with an exemplary and non-limiting embodiment, there is described unique metrics based, at least in part, on spatiotemporal patterns including, for example, shot quality, rebound ratings (positioning, attack, conversion) and the like. In accordance with an exemplary and non-limiting embodiment, there is described player tracking using broadcast video feeds. 
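As a non-limiting sketch of the automatic game clock indexing and semantic cross-referencing described above, the following Python fragment maps machine-read game-clock values to video timestamps and answers a semantic query with video time ranges. The index layout, event record fields, and padding value are illustrative assumptions rather than a prescribed schema.

```python
# Sketch of cross-referencing a machine-vision game-clock index with a semantic
# event index so that a query returns video time ranges in near real time.
from bisect import bisect_left
from typing import List, Tuple, Dict

# (game_clock_seconds_remaining, video_timestamp_seconds), sorted by descending clock,
# as produced, for example, by OCR of the on-screen clock in the broadcast feed.
ClockIndex = List[Tuple[float, float]]

def video_time_for_clock(index: ClockIndex, clock: float) -> float:
    """Return an approximate video timestamp at which the game clock read `clock`."""
    clocks = [-c for c, _ in index]                 # negate so the list is ascending
    i = min(bisect_left(clocks, -clock), len(index) - 1)
    return index[i][1]

def clips_for_query(events: List[Dict], index: ClockIndex,
                    event_type: str, pad: float = 2.0) -> List[Tuple[float, float]]:
    """Return (start, end) video times for every semantic event of the given type."""
    out = []
    for e in events:
        if e["type"] == event_type:
            start = video_time_for_clock(index, e["clock_start"]) - pad
            end = video_time_for_clock(index, e["clock_end"]) + pad
            out.append((max(start, 0.0), end))
    return out
```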
In accordance with an exemplary and non-limiting embodiment, there is described player tracking using a multi-camera system. In accordance with an exemplary and non-limiting embodiment, there is described video cut-up based on extracted semantics. A video cut-up is a remix made up of small clips of video that are related to each other in some meaningful way. The semantic layer enables real-time discovery and delivery of custom cut-ups. The semantic layer may be produced in one of two ways: (1) Video combined with data produces a semantic layer, or (2) video directly to a semantic layer. Extraction may be through ML or human tagging. In some exemplary embodiments, video cut-up may be based, at least in part, on extracted semantics, controlled by users in a stadium and displayed on a Jumbotron. In other embodiments, video cut-up may be based, at least in part, on extracted semantics, controlled by users at home and displayed on broadcast TV. In yet other embodiments, video cut-up may be based, at least in part, on extracted semantics, controlled by individual users and displayed on the web, tablet, or mobile for that user. In yet other embodiments, video cut-up may be based, at least in part, on extracted semantics, created by an individual user, and shared with others. Sharing could be through inter-tablet/inter-device communication, or via mobile sharing sites. In accordance with further exemplary and non-limiting embodiments, the methods and systems disclosed herein may include methods and systems for enabling an application allowing user interaction with video content and may include an ingestion facility for taking at least one video feed; a machine learning facility operating on the at least one video feed to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; an extraction facility for automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; a video production facility for producing a video content data structure that includes the associated context information; and an application having a user interface by which a user can interact with the video content data structure, wherein the options for user interaction are based on the context information. In embodiments, the application is a mobile application. In embodiments, the application is at least one of a smart television application, a virtual reality headset application and an augmented reality application. In embodiments, the user interface is a touch screen interface. In embodiments, the user interface allows a user to enhance the video feed by selecting a content element to be added to the video feed. In embodiments, the content element is at least one of a metric and a graphic element that is based on the machine understanding. In embodiments, the user interface allows the user to select content for a particular player of a sports event. In embodiments, the user interface allows the user to select content relating to a context involving the matchup of two particular players in a sports event. 
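A minimal sketch of assembling such a cut-up from a semantically tagged clip list is shown below. The clip records are hypothetical, and ffmpeg (assumed to be installed on the host) is used only as one possible editing backend; any other editing or streaming mechanism could be substituted.

```python
# Sketch of assembling a video cut-up from semantically tagged clips.
import subprocess
from typing import List, Dict

def select_clips(semantic_layer: List[Dict], **filters) -> List[Dict]:
    """Keep clips whose tags match every requested filter, e.g. player='23', type='dunk'."""
    return [c for c in semantic_layer
            if all(c.get("tags", {}).get(k) == v for k, v in filters.items())]

def cut_clip(source: str, start: float, end: float, out_path: str) -> None:
    """Extract one segment with ffmpeg (stream copy, no re-encode)."""
    subprocess.run(
        ["ffmpeg", "-y", "-ss", str(start), "-i", source,
         "-t", str(end - start), "-c", "copy", out_path],
        check=True,
    )

# Hypothetical semantic layer entries produced by ML extraction or human tagging.
semantic_layer = [
    {"source": "game1.mp4", "start": 301.2, "end": 309.8,
     "tags": {"type": "dunk", "player": "23"}},
]
for i, clip in enumerate(select_clips(semantic_layer, type="dunk")):
    cut_clip(clip["source"], clip["start"], clip["end"], f"cutup_{i:03d}.mp4")
```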
In embodiments, the system takes at least two video feeds from different time periods, the machine learning facility determines a context that includes a similarity between at least one of a plurality of players and a plurality of plays in the two feeds, and the user interface allows the user to select at least one of the players and the plays to obtain a video feed that illustrates a comparison. In embodiments, the user interface includes options for at least one of editing, cutting, and sharing a video clip that includes the video data structure. In embodiments, the video feed comprises 3D motion camera data captured from a live sports venue. In embodiments, the ability of the machine learning facility to develop the understanding is developed by feeding the machine learning facility a plurality of events for which context has already been identified. The methods and systems disclosed herein may include methods and systems for enabling a mobile application allowing user interaction with video content and may include taking at least one video feed; applying machine learning on the at least one video feed to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; producing a video content data structure that includes the associated context information; and providing a mobile application having a user interface by which a user can interact with the video content data structure, wherein the options for user interaction are based on the context information. In embodiments, the user interface is a touch screen interface. In embodiments, the user interface allows a user to enhance the video feed by selecting a content element to be added to the video feed. In embodiments, the content element is at least one of a metric and a graphic element that is based on the machine understanding. In embodiments, the user interface allows the user to select content for a particular player of a sports event. In embodiments, the user interface allows the user to select content relating to a context involving the matchup of two particular players in a sports event. In embodiments, the system takes at least two video feeds from different time periods, the machine learning facility determines a context that includes a similarity between at least one of a plurality of players and a plurality of plays in the two feeds, and the user interface allows the user to select at least one of the players and the plays to obtain a video feed that illustrates a comparison. In embodiments, the user interface includes options for at least one of editing, cutting, and sharing a video clip that includes the video data structure. In embodiments, the video feed comprises 3D motion camera data captured from a live sports venue. In embodiments, the ability of the machine learning facility to develop the understanding is developed by feeding the machine learning facility a plurality of events for which context has already been identified.
The methods and systems disclosed herein may include methods and systems for an analytic system and may include a video ingestion facility for ingesting at least one video feed; a machine learning facility that develops an understanding of at least one event within the video feed, wherein the understanding identifies at least a type of the event and a time of the event in an event data structure; a computing architecture enabling a model that takes one or more event data structures as input and applies at least one calculation to transform the one or more event data structures into an output data structure; and a data transport layer of the computing architecture for populating the model with the event data structures as input to the model. In embodiments, the output data structure includes at least one prediction. In embodiments, the prediction is of an outcome of at least one of a sporting event and at least one second event occurring within a sporting event. In embodiments, the video feed is of a live sporting event, wherein the prediction is made during the live sporting event, and wherein the prediction relates to the same sporting event. In embodiments, the prediction is based on event data structures from a plurality of video feeds. In embodiments, the prediction is used for at least one of placing a wager, setting a line for a wager, interacting with a fantasy program, setting a parameter of a fantasy program, providing insight to a coach and providing information to a fan. In embodiments, the model takes inputs from a plurality of data sources in addition to the event data structures obtained from the video feed. In embodiments, the methods and systems may further include a pattern analysis facility that takes a plurality of the event data structures and enables analysis of patterns among the event data structures. In embodiments, the pattern analysis facility includes at least one tool selected from the group consisting of a pattern visualization tool, a statistical analysis tool, a machine learning tool, and a simulation tool. In embodiments, the methods and systems may further include a second machine learning facility for refining the model based on outcomes of a plurality of predictions made using the model. The methods and systems disclosed herein may include methods and systems for an analytic method and may include ingesting at least one video feed in a computing platform capable of handling video data; developing an understanding of at least one event within the video feed using machine learning, wherein the understanding identifies at least a type of the event and a time of the event in an event data structure; providing a computing architecture that enables a model that takes one or more event data structures as input and applies at least one calculation to transform the one or more event data structures into an output data structure; and populating the model with the event data structures as input to the model. In embodiments, the output data structure includes at least one prediction. In embodiments, the prediction is of an outcome of at least one of a sporting event and at least one second event occurring within a sporting event. In embodiments, the video feed is of a live sporting event, wherein the prediction is made during the live sporting event, and wherein the prediction relates to the same sporting event. In embodiments, the prediction is based on event data structures from a plurality of video feeds.
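By way of illustration of populating such a model with event data structures, the following Python sketch collapses a machine-extracted event stream into a small feature vector and fits a simple scikit-learn classifier. The feature choices, the tiny sample data, and the choice of learner are assumptions made only for this example, not a prescribed implementation.

```python
# Sketch of populating a predictive model with machine-extracted event data structures.
from dataclasses import dataclass
from typing import List
from sklearn.linear_model import LogisticRegression

@dataclass
class EventRecord:
    event_type: str       # e.g., "shot", "turnover", "pick_and_roll"
    clock: float          # seconds remaining when the event occurred
    score_margin: int     # home minus away at the time of the event

def featurize(events: List[EventRecord]) -> List[float]:
    """Collapse one game's event stream into a small fixed-length feature vector."""
    shots = sum(e.event_type == "shot" for e in events)
    turnovers = sum(e.event_type == "turnover" for e in events)
    margin = events[-1].score_margin if events else 0
    return [float(shots), float(turnovers), float(margin)]

# Tiny illustrative history: per-game event streams plus eventual outcome (1 = home win).
train_games = [
    [EventRecord("shot", 30.0, 5), EventRecord("turnover", 12.0, 5)],
    [EventRecord("shot", 40.0, -3), EventRecord("shot", 20.0, -4)],
]
train_outcomes = [1, 0]

model = LogisticRegression()
model.fit([featurize(g) for g in train_games], train_outcomes)

# During a live game, the data transport layer re-populates the model input
# as new event data structures arrive.
live_events = [EventRecord("shot", 35.0, 2), EventRecord("shot", 28.0, 4)]
win_probability = model.predict_proba([featurize(live_events)])[0][1]
print(f"estimated win probability: {win_probability:.2f}")
```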
In embodiments, the prediction is used for at least one of placing a wager, setting a line for a wager, interacting with a fantasy program, setting a parameter of a fantasy program, providing insight to a coach and providing information to a fan. In embodiments, the model takes inputs from a plurality of data sources in addition to the event data structures obtained from the video feed. In embodiments, the methods and systems may further include providing a pattern analysis facility that takes a plurality of the event data structures and enables analysis of patterns among the event data structures. In embodiments, the pattern analysis facility includes at least one tool selected from the group consisting of a pattern visualization tool, a statistical analysis tool, a machine learning tool, and a simulation tool. In embodiments, the methods and systems may further include at least one of providing and using a second machine learning facility to refine the model based on outcomes of a plurality of predictions made using the model. The methods and systems disclosed herein may include methods and systems for taking a video feed; using machine learning to develop an understanding of a semantically relevant event within the video feed; indexing video segments of the video feed with information indicating the semantically relevant events identified within the feed by the machine learning; and applying machine learning to a plurality of the semantically relevant events to determine a pattern of events. In embodiments, the pattern is within a video feed. In embodiments, the pattern is across a plurality of video feeds. In embodiments, the pattern corresponds to a narrative structure. In embodiments, the narrative structure corresponds to a recurring pattern of events. In embodiments, the narrative structure relates to a sporting event and wherein the pattern relates to at least one of a blow-out victory pattern, a comeback win pattern, a near comeback pattern, a back-and-forth game pattern, an individual achievement pattern, an injury pattern, a turning point moment pattern, a close game pattern, and a team achievement pattern. In embodiments, the indexed video segments are arranged to support the narrative structure. In embodiments, the arranged segments are provided in an interface for developing a story using the segments that follow the narrative structure and wherein a user may at least one of edit and enter additional content for the story. In embodiments, summary content for the narrative structure is automatically generated, under computer control, to provide a story that includes the video sequences. In embodiments, the methods and systems may further include delivering a plurality of the automatically generated stories from at least one of a defined time period and of a defined type, allowing a user to indicate whether they like or dislike the delivered stories, and using the indications to inform later delivery of at least one additional story. In embodiments, the pattern is relevant to a prediction. In embodiments, the prediction is related to a wager, and the pattern corresponds to similar patterns that were used to make predictions that resulted in successful wagers in other situations. The methods and systems disclosed herein may include methods and systems for machine-extracting semantically relevant events from a video content stream and determining a pattern relating to the events. The methods and systems also include providing a content stream based on the pattern. 
In embodiments, the content stream is used to provide coaching information based on the pattern. In embodiments, the content stream is used to assist the prediction of an outcome in a fantasy sports contest. In embodiments, the pattern is used to provide content for a viewer of a sporting event. The methods and systems disclosed herein may include methods and systems for machine-extracting semantically relevant events from a video content stream; determining a pattern relating to the events; storing the pattern information with the extracted events; and providing a user with the option to view and interact with the patterns, wherein at least one of the patterns and the interaction options are personalized based on a profile of the user. In embodiments, the profile is based on at least one of user indication of a preference, information about actions of the user, and demographic information about the user. In embodiments, the pattern comprises at least one of a trend and a statistic that is curated to correspond with the user profile. In embodiments, the pattern relates to a comparison of a professional athlete to another athlete. In embodiments, the other athlete is the user and the comparison is based on a playing style of the user as determined by at least one of information indicated by the user and a video feed of the user. In embodiments, the pattern relates to an occurrence of an injury. In embodiments, the pattern information is used to provide coaching to prevent an injury. In embodiments, the methods and systems may further include automatically generating, under computer control, an injury prevention regimen based on the pattern and based on information about the user. The methods and systems disclosed herein may include methods and systems for machine-extracting semantically relevant events from a video content stream, determining a pattern relating to the events, and providing a content stream based on the pattern. The methods and systems may further include determining a pattern relating to a plurality of the events and providing a content data structure based on the pattern. In embodiments, machine-extracted information about events and contexts may be used to determine one or more patterns, such as by analyzing time series, correlations, and the like in the machine-extracted events and contexts. For example, tendencies of a team to follow a certain play with a particular other play may be determined by comparing instances of the two plays over time. Embodiments may include extracting particularly interesting or potential "game changing" plays by understanding the context of an individual event and comparing it to similar events from previous games. Embodiments may include extracting situations or plays that are particularly rare or unique by understanding the context of an individual event and comparing it to similar events from previous games. Embodiments may include extracting semantic events over time to draw a comparison of a player's or team's trajectory over time and superimposing video to draw out this comparison. The methods and systems disclosed herein may include methods and systems for a model to predict the outcome of a game or events within a game based on a contextualized understanding of a live event for use in betting/fantasy, coaching, augmented fan experiences, or the like.
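As one non-limiting sketch of predicting the outcome of an individual play from machine-understood similar moments in previous games, the fragment below uses a simple nearest-neighbour lookup. The context features and sample values are invented for illustration; a production model would use far richer inputs.

```python
# Sketch of predicting a play outcome from similar machine-understood moments.
import numpy as np

# Each historical moment: (clock_remaining, score_margin, shooter_efficiency) -> made shot?
historical_contexts = np.array([
    [24.0,  2, 0.55],
    [ 8.0, -1, 0.48],
    [45.0,  5, 0.61],
    [12.0, -3, 0.42],
])
historical_outcomes = np.array([1, 0, 1, 0])

def predict_play(context: np.ndarray, k: int = 3) -> float:
    """Probability estimate = outcome rate among the k most similar past moments."""
    scale = historical_contexts.std(axis=0) + 1e-9      # crude feature scaling
    d = np.linalg.norm((historical_contexts - context) / scale, axis=1)
    nearest = np.argsort(d)[:k]
    return float(historical_outcomes[nearest].mean())

p = predict_play(np.array([10.0, -2, 0.50]))
print(f"estimated probability the play succeeds: {p:.2f}")  # could seed a line or a wager prompt
```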
The methods and systems disclosed herein may include methods and systems for an analytic system and may include taking a video feed; using machine learning to develop an understanding of at least one first event within the video feed, the understanding including identifying context information relating to the first event; taking a model used to predict the outcome of at least one of a live game and at least one second event within a live game; and populating the model with the machine understanding of the first event and the context information to produce a prediction of an outcome of at least one of the game and the second event. In embodiments, the model is used for at least one of placing a wager, setting a line for a wager, interacting with a fantasy program, setting a parameter of a fantasy program, providing insight to a coach and providing information to a fan. In embodiments, machine-extracted event and context information can be used to populate one or more predictive models, such as models used for betting, fantasy sports, coaching, and entertainment. The machine understanding, including various metrics described throughout this disclosure, can provide or augment other factors that are used to predict an outcome. For example, outcomes from particular matchups can be machine extracted and used to predict outcomes from similar matchups in the future. For example, based on the machine understood context of a moment in an individual game, and the machine understanding of similar moments from previous games, a model can be created to predict the outcome of an individual play or a series of plays on which an individual can place a bet or on which a betting line may be set. In embodiments, the methods and systems disclosed herein may include methods and systems for suggestions of bets to make based on patterns of previously successful bets. For example, a user may be prompted with an option to place a bet based on previous betting history on similar events or because a particular moment is an opportunistic time to place a bet based on the context of a game and other user generated preferences or risk tolerances. The methods and systems disclosed herein may include methods and systems for automated storytelling, such as the ability to use patterns extracted from semantic events, metrics derived from tracking data, and combinations thereof to populate interesting stories about the content. The methods and systems disclosed herein may include methods and systems for enabling automated generation of stories and may include taking a video feed; using machine learning to develop an understanding of a semantically relevant event within the video feed, the understanding including identifying context information relating to the event; providing a narrative structure for a story, wherein the narrative structure is arranged based on the presence of semantic types of events and the context of those events; and automatically, under computer control, generating a story following the narrative structure, wherein the story is populated based on a sequence of the machine-understood events and the context information. In embodiments, patterns from semantic events may be used to populate stories. Various narrative structures can be developed, corresponding to common patterns of events (e.g., stories about blow-out victories, comeback wins, back-and-forth games, games that turned on big moments, or the like). 
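The following Python sketch illustrates one way such narrative structures could be matched and populated from machine-extracted score-margin data. The thresholds, pattern names, and story templates are assumptions made only for this example.

```python
# Sketch of matching extracted events to a narrative structure and populating a story.
from typing import List

def narrative_for(margins: List[int]) -> str:
    """Pick a narrative structure from the home-team score margin over time."""
    final = margins[-1]
    worst = min(margins) if final > 0 else max(margins)
    if abs(final) >= 20:
        return "blowout"
    if final > 0 and worst <= -10:
        return "comeback_win"
    if final < 0 and worst >= 10:
        return "near_comeback"
    return "back_and_forth"

TEMPLATES = {
    "comeback_win": "{team} erased a {deficit}-point deficit, keyed by {key_play}.",
    "blowout": "{team} cruised, leading by as many as {max_lead} points.",
}

def populate(structure: str, **facts) -> str:
    # str.format ignores unused keyword arguments, so extra facts are harmless.
    return TEMPLATES.get(structure, "{team} played a close one.").format(**facts)

margins = [0, -4, -12, -8, -2, 3, 6]   # score margin sampled over game segments
story = populate(narrative_for(margins), team="Team A", deficit=12,
                 key_play="a late run of pick-and-roll scores")
print(story)
```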
Machine extraction of events and contexts can allow identification of patterns in the events and contexts that allow matching to one or more of the narrative structures, as well as population of the story with content for the events, such as video cuts or short written summaries that are determined by the machine extraction (e.g., "in the first quarter, Team A took the lead, scoring five times on the pick-and-roll."). The methods and systems disclosed herein may include methods and systems for enabling a mobile application allowing user interaction with video content and may include taking a video feed; using machine learning to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; producing a video content data structure that includes the associated context information; and providing a mobile application by which a user can interact with the video content data structure, wherein the options for user interaction are based on the context information. In embodiments, machine extracted content, with associated context information, may be provided to users via a mobile application, through which the users may display and interact with the content, such as by selecting particular types of content based on a desired semantic category (such as by selecting the category in a list, menu, or the like), playing content (including pausing, rewinding, fast forwarding, and the like), and manipulating content (such as positioning content within a display window, zooming, panning, and the like). In embodiments, the nature of the permitted interaction may be governed by the context information associated with the content, where the context information is based on a machine understanding of the content and its associated context. For example, where the content is related to a particular type of play within a context of an event like a game, such as rebounding opportunities in basketball, the user may be permitted to select from a set of metrics that are relevant to rebounding, so that the selected metrics from a context-relevant set are displayed on the screen with the content. If the context is different, such as if the content relates to a series of pick-and-roll plays by a particular player, different metrics may be made available for selection by the user, such as statistics for that player, or metrics appropriate for pick-and-rolls. Thus, the machine-extracted understanding of an event, including context information, can be used to customize the content displayed to the user, including to allow the user to select context-relevant information for display. The methods and systems disclosed herein may include methods and systems for allowing a user to control the presentation of a broadcast video event, where the options for control are based on a context of the content as determined by machine extraction of semantically relevant events from the content. In accordance with an exemplary and non-limiting embodiment, X, Y, and Z data may be collected for purposes of inferring player actions that have a vertical component.
The methods and systems disclosed herein may employ a variety of computer vision, machine learning, and/or active learning techniques and tools to extract, analyze and process data elements originating from sources, such as, but not limited to, input data sources relating to sporting events and items in them, such as players, venues, items used in sports (such as balls, pucks, and equipment), and the like. These data elements may be available as video feeds in an example, such that the video feeds may be captured by image recognition devices, video recognition devices, image and video capture devices, audio recognition devices, and the like, including by use of various devices and components such as a camera (such as a tracking camera or broadcast camera), a microphone, an image sensor, or the like. Audio feeds may be captured by microphones and similar devices, such as integrated on or with cameras or associated with independent audio capture systems. Input feeds may also include tracking data from chips or sensors (such as wearable tracking devices using accelerometers and other motion sensors), as well as data feeds about an event, such as a play-by-play data feed, a game clock data feed, and the like. In the case of input feeds, facial recognition systems may be used to capture facial images of players, such as to assist in recognition of players (such as in cases where player numbers are absent or obscured) and to capture and process expressions of players, such as emotional expressions, micro-expressions, or the like. These expressions may be associated with events, such as to assist in machine understanding (e.g., an expression may convey that the event was exciting or meaningful, that it was disappointing to one constituency, that it was not important, or the like). Machine understanding may thus be trained to recognize expressions and provide an expression-based understanding of events, such as to augment one or more data structures associated with an event for further use in the various embodiments described herein. For example, a video feed may be processed based on a machine understanding of expressions to extract cuts that made players of one team happy. As another example, a cut showing an emotional reaction (such as by a player, fan, teammate, or coach) to an event may be associated with a cut of the event itself, providing a combined cut that shows the event and the reaction it caused. The various embodiments described throughout this disclosure that involve machine understanding, extraction of cuts, creation of data structures that are used or processed for various purposes, combining cuts, augmenting data feeds, producing stories, personalizing content, and the like should all be understood to encompass, where appropriate, use of machine understanding of emotional expression within a video feed, including based on use of computer vision techniques, including facial recognition techniques and expression recognition techniques. The computer vision, machine learning and/or active learning tools and techniques (together referred to as computer-controlled intelligent systems for simplicity herein) may receive the data elements from various input feeds and devices as a set of inputs either in real-time (such as in case of a live feed or broadcast) or at a different time (such as in case of a delayed broadcast of the sporting or any other event) without limitations.
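A minimal sketch of augmenting an event data structure with a machine-read emotional reaction, and of pairing the reaction cut with the event cut, is shown below. The classify_expression call is a placeholder for any expression-recognition model and does not refer to a specific library; the field names and sample spans are assumptions for illustration.

```python
# Sketch of augmenting an event data structure with an expression-based annotation
# and emitting a combined event-plus-reaction cut.
from dataclasses import dataclass, field
from typing import List, Tuple, Dict

@dataclass
class EventCut:
    video_id: str
    span: Tuple[float, float]             # (start, end) seconds in the feed
    event_type: str
    annotations: Dict[str, str] = field(default_factory=dict)

def classify_expression(video_id: str, span: Tuple[float, float]) -> str:
    """Placeholder for an expression-recognition model applied to the span."""
    return "elation"                       # e.g., one of elation/disappointment/neutral

def augment_with_reaction(event: EventCut, reaction_span: Tuple[float, float]) -> List[EventCut]:
    """Attach the read expression to the event and emit a combined two-part cut."""
    emotion = classify_expression(event.video_id, reaction_span)
    event.annotations["reaction"] = emotion
    reaction = EventCut(event.video_id, reaction_span, f"reaction:{emotion}")
    return [event, reaction]               # played back-to-back as one combined cut

combined = augment_with_reaction(
    EventCut("game1_broadcast", (1801.0, 1809.5), "buzzer_beater"), (1809.5, 1814.0))
print([c.event_type for c in combined])
```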
The computer-controlled intelligent systems may process the set of inputs and apply machine learning and natural language processing, using artificial intelligence (AI) and natural language processing (NLP) capabilities, to produce a set of services and outputs. In an example, the set of services and outputs may signify spatial-temporal positions of the players and sports accessories/objects such as a bat, ball, football, and the like. In an example, the set of services and outputs may represent spatial-temporal alignments of the inputs such as the video feeds, etc. For example, a broadcast video feed may be aligned in time with another input feed, such as input from one or more motion tracking cameras, inputs from player tracking systems (such as wearable devices), and the like. The set of services and outputs may include machine understood contextual outputs involving machine learning or understanding that may be built using various levels of artificial intelligence, algorithmic processes, computer-controlled tasks, custom rules, and the like, such as described throughout this disclosure. The machine understanding may include various levels of semantic identification, as well as position and speed information for various items or elements, identification of basic events such as various types of shots and screens during a sporting event, identification of complex events or a sequence of events such as various types of plays, and higher level metrics and patterns such as game trajectory, style of play, strengths and weaknesses of teams and team members/players from each team, and the like. The machine learning tools and input feed alignment may allow automatic generation of content and information such as statistics, predictions, comparisons, and analysis. The machine learning tools may further allow generating outputs based on a user query input, such as to determine various predictive analytics for a particular team player in view of historical shots and screens in a particular context, determine possibilities of success and failure in particular zones and game scenarios conditioned to particular user inputs, and the like. The machine understanding tools may simulate entire aspects of real-life sporting events on a computer screen utilizing visualization and modeling examples. The services and outputs generated by the intelligent computer-controlled systems may be used in a variety of ways such as generation of a live feed or a delayed feed during a sporting event in real time or at a later broadcasting time after the sporting event. The services and outputs may allow generating various analyses of statistics, trends, and strategy before events or across multiple events. The services and outputs may facilitate an interactive user session to extract contextual details relating to instantaneous sporting sessions of the sporting events in association with user defined queries, constraints, and rules. In an example, the services and outputs generated by the computer-controlled intelligent systems may enable spatiotemporal analysis of various game attributes and elements for exploring, learning, and analyzing such sporting events, and may utilize analytics results to generate predictive models and predictive analytics for gaming strategy. These services and outputs may provide valuable insights and learnings that are otherwise not visible.
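As a non-limiting illustration of aligning a broadcast feed in time with another input feed, the following fragment estimates the time offset by cross-correlating a scalar signal observable in both feeds (for example, a tracked ball height). The signals below are synthetic stand-ins, and the single-offset model is an assumption; real feeds may also require drift correction.

```python
# Sketch of temporal alignment of two feeds by cross-correlation of a shared signal.
import numpy as np

def estimate_offset(broadcast_signal: np.ndarray, tracking_signal: np.ndarray,
                    fps: float) -> float:
    """Return the lag (seconds) to add to broadcast time to match tracking time."""
    b = broadcast_signal - broadcast_signal.mean()
    t = tracking_signal - tracking_signal.mean()
    corr = np.correlate(t, b, mode="full")
    lag_frames = int(np.argmax(corr)) - (len(b) - 1)
    return lag_frames / fps

rng = np.random.default_rng(0)
tracking = rng.normal(size=500)
broadcast = np.concatenate([np.zeros(30), tracking[:-30]])   # broadcast lags by 30 frames
print(estimate_offset(broadcast, tracking, fps=25.0))        # approximately -1.2 s
```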
The methods and systems disclosed herein may employ delay-dependent computer vision and machine learning systems (or the intelligent computer-controlled systems) for providing delay-dependent services and outputs with respect to the occurrence of a sporting event. The services and outputs as discussed herein may be employed in different applications with varying time delays relative to the actual occurrence of the sporting event. For example, the actual event may occur at a time T1 and the content feeding or broadcasting may occur at a time T2 with a time delay of T2-T1. The time delay may be small, such as a few seconds, so that the content is useful in live commentary or augmentation of a live video. In such cases, the machine learning tools may, for example, utilize real-time services and outputs and benefit from the spatiotemporal features and attributes to generate game patterns and automatic validations during the event itself, such as to highlight certain event aspects in the commentary and/or validate momentary sessions when there is confusion during the event for decision making. The time delay may be longer in certain situations, such as for replays, post-event analysis, predictive modeling, future strategies, and the like. The methods and systems disclosed herein may support the provisioning of the services and outputs at various time delays by determining processing steps and their order of execution according to delay requirements. The system may be configured to operate such that the services and outputs may be obtained at arbitrary times with an increasing accuracy or time resolution or such that the system targets specific delay requirements as specified by users or defined in accordance with intended applications. For example, if, in an application, computational resources are insufficient to process all frames originating from input devices such as cameras at maximum accuracy at the video frame rate within a desired delay, then instead of processing the input video frames in sequential order, processing may be ordered in such a way that at any time there is a uniform or approximately uniform distribution of processed frames. In some cases, processing decisions may also be influenced by other computational efficiency considerations for certain tasks that operate on video segments, such as an opportunity to reuse certain computations across successive frames in tracking algorithms. In some examples, processing techniques such as inference and interpolation over processed frames may be used to provide a tracking output whose accuracy and time resolution improves with delay as more frames are processed. If a target delay is specified, each component of the processing application (such as background subtraction, detection of various elements) may be assigned an execution time budget within which to compute its output, such that the specified delay is met by a combination of the components. In some examples, the specified time delays may also consider video qualities needed at sending destinations so as to ensure that enough computation resources are allocated for appropriate resolutions and transmission rates at the destinations during broadcasting of the content. In certain cases, a normal resolution may be sufficient while in other cases a higher resolution may be needed.
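One illustrative way to order frame processing so that, at any delay, the processed frames are approximately uniformly distributed over a segment is a midpoint-first ordering, sketched below. It is one possible scheduling choice among many and is not the only way to meet a delay target.

```python
# Sketch of a processing order whose every prefix covers the segment roughly evenly,
# so partial results improve smoothly as the allowed delay grows.
from typing import Iterator, List, Tuple

def coarse_to_fine_order(n_frames: int) -> Iterator[int]:
    """Yield frame indices so that any prefix of the ordering covers [0, n) evenly."""
    queue: List[Tuple[int, int]] = [(0, n_frames - 1)]
    while queue:
        lo, hi = queue.pop(0)
        mid = (lo + hi) // 2
        yield mid                      # process the midpoint of the current gap first
        if mid - 1 >= lo:
            queue.append((lo, mid - 1))
        if hi >= mid + 1:
            queue.append((mid + 1, hi))

order = list(coarse_to_fine_order(16))
print(order)                               # [7, 3, 11, 1, 5, 9, 13, 0, 2, 4, 6, ...]
print(sorted(order) == list(range(16)))    # every frame is eventually processed
```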
In various embodiments, the intelligent computer-controlled systems may be capable of defining appropriate resolutions, data transmission rates, and computation resource allocation in view of the delay requirements. The methods and systems disclosed herein may facilitate enabling calibration of a moving camera or any other image recognition device via tracking of moving points in a sporting event. Existing techniques for finding unknown camera calibration parameters from captured images or videos of sporting events rely on identifying a set of known locations, such as intersections of lines on the court or field. In accordance with such techniques, calibrating the moving camera as it changes its position or zooms across frames is challenging since there may be only a few such known locations in the frames. The methods and systems disclosed herein may enable finding the calibration parameters of the moving or operator-controlled camera by using positions of moving points located by an associated tracking system. In an example, these positions may represent locations and spatial coordinates of a player's or a referee's head, hands, or legs in the sporting event, which may be identified by the tracking system. The tracking system may be an optical tracking system or a chip-based tracking system, which may be configured to determine positions of location tags. In various examples, several other types of camera control, calibration, and position determining systems may be employed along with the tracking systems. For example, a fixed spotting camera may be used to capture a view and a moving camera contained within the tracking system may be used to capture the positions of the moving points in the frames. The moving camera may be configured to perform several functions such as zoom, tilt, pan, and the like. The tracking system may be configured to perform calibration and identification of the positions based on a tracking algorithm that may execute pre-defined instructions to compute relevant information necessary to drive the tracking system across the frames. The methods and systems disclosed herein may facilitate enabling pre-processing of images from calibrated cameras to improve object detection and recognition. The methods and systems disclosed herein may enable providing for accurate detection and recognition of humans, such as players or referees, and objects, such as a ball, a game clock, jersey numbers, and the like, with better performance and lower complexity. In embodiments, the tasks of object detection and recognition may be performed on the basis of knowledge of known calibration parameters of the cameras in the tracking system and known properties of the objects being detected, such as their size, orientation, or position. For example, perspectives and distortions introduced by the cameras can be undone by applying a transformation such that the objects being detected may have a consistent scale and orientation in transformed images. The transformed images may be used as inputs to detection and recognition algorithms by image processing devices so as to enable faster and more accurate object detection and recognition performance with lower complexity as compared to performing object detection and recognition directly on original images.
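The perspective-normalizing transformation described above can be sketched as follows, assuming a planar homography estimated from tracked point correspondences. The correspondence values and court dimensions are made-up numbers, and OpenCV is used only as one possible implementation of the warp.

```python
# Sketch: estimate a homography from tracked points seen in both image pixels and
# court coordinates, then warp frames to a constant-scale top-down view before detection.
import cv2
import numpy as np

# Tracked moving points (e.g., player positions) accumulated across frames by the tracker,
# expressed both in image pixels and in court coordinates (metres). Values are illustrative.
image_pts = np.array([[412, 233], [918, 241], [955, 614], [380, 602]], dtype=np.float32)
court_pts = np.array([[0, 0], [14, 0], [14, 7.5], [0, 7.5]], dtype=np.float32)

H, _ = cv2.findHomography(image_pts, court_pts)

def normalize_view(frame: np.ndarray, pixels_per_metre: float = 40.0) -> np.ndarray:
    """Warp the frame into a top-down, constant-scale view for the detector."""
    scale = np.diag([pixels_per_metre, pixels_per_metre, 1.0])
    out_size = (int(14 * pixels_per_metre), int(7.5 * pixels_per_metre))
    return cv2.warpPerspective(frame, scale @ H, out_size)

frame = cv2.imread("broadcast_frame.png")      # any frame from the calibrated camera
if frame is not None:
    top_down = normalize_view(frame)           # feed this to the detector instead
```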
In such cases, an output generated by the image processing devices may be used as an input, along with other inputs described herein, to enable or refine the various machine learning and algorithmic capabilities described throughout this disclosure. In some embodiments, machine learning capabilities may be introduced to build improved processing utilizing machine learning tools as discussed above in the document. The methods and systems described herein may be deployed in part or in whole through a machine that executes computer software, program codes, and/or instructions on a processor. The processor may be part of a server, client, network infrastructure, mobile computing platform, stationary computing platform, or other computing platforms. A processor may be any kind of computational or processing device capable of executing program instructions, codes, binary instructions, and the like. The processor may be or include a signal processor, digital processor, embedded processor, microprocessor, or any variant such as a co-processor (math co-processor, graphic co-processor, communication co-processor and the like) and the like that may directly or indirectly facilitate execution of program code or program instructions stored thereon. In addition, the processor may enable the execution of multiple programs, threads, and codes. The threads may be executed simultaneously to enhance the performance of the processor and to facilitate simultaneous operations of the application. By way of implementation, methods, program codes, program instructions and the like described herein may be implemented in one or more threads. The thread may spawn other threads that may have assigned priorities associated with them; the processor may execute these threads based on priority or any other order based on instructions provided in the program code. The processor may include memory that stores methods, codes, instructions, and programs as described herein and elsewhere. The processor may access a storage medium through an interface that may store methods, codes, and instructions as described herein and elsewhere. The storage medium associated with the processor for storing methods, programs, codes, program instructions or other types of instructions capable of being executed by the computing or processing device may include but may not be limited to one or more of a CD-ROM, DVD, memory, hard disk, flash drive, RAM, ROM, cache, and the like. A processor may include one or more cores that may enhance speed and performance of a multiprocessor. In embodiments, the processor may be a dual core processor, quad core processor, other chip-level multiprocessor, and the like that combines two or more independent cores (called a die). The methods and systems described herein may be deployed in part or in whole through a machine that executes computer software on a server, client, firewall, gateway, hub, router, or other such computer and/or networking hardware. The software program may be associated with a server that may include a file server, print server, domain server, Internet server, intranet server and other variants such as secondary server, host server, distributed server, and the like. The server may include one or more of memories, processors, computer readable media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other servers, clients, machines, and devices through a wired or a wireless medium, and the like.
The methods, programs or codes as described herein and elsewhere may be executed by the server. In addition, other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the server. The server may provide an interface to other devices including, without limitation, clients, other servers, printers, database servers, print servers, file servers, communication servers, distributed servers, and the like. Additionally, this coupling and/or connection may facilitate remote execution of programs across the network. The networking of some or all of these devices may facilitate parallel processing of a program or method at one or more locations without deviating from the scope. In addition, any of the devices attached to the server through an interface may include at least one storage medium capable of storing methods, programs, code and/or instructions. A central repository may provide program instructions to be executed on different devices. In this implementation, the remote repository may act as a storage medium for program code, instructions, and programs. The software program may be associated with a client that may include a file client, print client, domain client, Internet client, intranet client and other variants such as secondary client, host client, distributed client, and the like. The client may include one or more of memories, processors, computer readable media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other clients, servers, machines, and devices through a wired or a wireless medium, and the like. The methods, programs or codes as described herein and elsewhere may be executed by the client. In addition, other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the client. The client may provide an interface to other devices including, without limitation, servers, other clients, printers, database servers, print servers, file servers, communication servers, distributed servers, and the like. Additionally, this coupling and/or connection may facilitate remote execution of programs across the network. The networking of some or all of these devices may facilitate parallel processing of a program or method at one or more locations without deviating from the scope. In addition, any of the devices attached to the client through an interface may include at least one storage medium capable of storing methods, programs, applications, code and/or instructions. A central repository may provide program instructions to be executed on different devices. In this implementation, the remote repository may act as a storage medium for program code, instructions, and programs. The methods and systems described herein may be deployed in part or in whole through network infrastructures. The network infrastructure may include elements such as computing devices, servers, routers, hubs, firewalls, clients, personal computers, communication devices, routing devices and other active and passive devices, modules and/or components as known in the art. The computing and/or non-computing device(s) associated with the network infrastructure may include, apart from other components, a storage medium such as flash memory, buffer, stack, RAM, ROM, and the like.
The processes, methods, program codes, instructions described herein and elsewhere may be executed by one or more of the network infrastructural elements. The methods, program codes, and instructions described herein and elsewhere may be implemented on a cellular network having multiple cells. The cellular network may be either a frequency division multiple access (FDMA) network or a code division multiple access (CDMA) network. The cellular network may include mobile devices, cell sites, base stations, repeaters, antennas, towers, and the like. The cellular network may be a GSM, GPRS, 3G, EVDO, mesh, or other network type. The methods, program codes, and instructions described herein and elsewhere may be implemented on or through mobile devices. The mobile devices may include navigation devices, cell phones, mobile phones, mobile personal digital assistants, laptops, palmtops, netbooks, pagers, electronic book readers, music players and the like. These devices may include, apart from other components, a storage medium such as a flash memory, buffer, RAM, ROM and one or more computing devices. The computing devices associated with mobile devices may be enabled to execute program codes, methods, and instructions stored thereon. Alternatively, the mobile devices may be configured to execute instructions in collaboration with other devices. The mobile devices may communicate with base stations interfaced with servers and configured to execute program codes. The mobile devices may communicate on a peer-to-peer network, mesh network, or other communications networks. The program code may be stored on the storage medium associated with the server and executed by a computing device embedded within the server. The base station may include a computing device and a storage medium. The storage device may store program codes and instructions executed by the computing devices associated with the base station. The computer software, program codes, and/or instructions may be stored and/or accessed on machine readable media that may include: computer components, devices, and recording media that retain digital data used for computing for some interval of time; semiconductor storage known as random access memory (RAM); mass storage typically for more permanent storage, such as optical discs, forms of magnetic storage like hard disks, tapes, drums, cards and other types; processor registers, cache memory, volatile memory, non-volatile memory; optical storage such as CD, DVD; removable media such as flash memory (e.g., USB sticks or keys), floppy disks, magnetic tape, paper tape, punch cards, standalone RAM disks, Zip drives, removable mass storage, off-line, and the like; other computer memory such as dynamic memory, static memory, read/write storage, mutable storage, read only, random access, sequential access, location addressable, file addressable, content addressable, network attached storage, storage area network, bar codes, magnetic ink, and the like. The methods and systems described herein may transform physical and/or intangible items from one state to another. The methods and systems described herein may also transform data representing physical and/or intangible items from one state to another. The elements described and depicted herein, including in flow charts and block diagrams throughout the figures, imply logical boundaries between the elements.
However, according to software or hardware engineering practices, the depicted elements and the functions thereof may be implemented on machines through computer executable media having a processor capable of executing program instructions stored thereon as a monolithic software structure, as standalone software modules, or as modules that employ external routines, code, services, and so forth, or any combination of these, and all such implementations may be within the scope of the present disclosure. Examples of such machines may include, but may not be limited to, personal digital assistants, laptops, personal computers, mobile phones, other handheld computing devices, medical equipment, wired or wireless communication devices, transducers, chips, calculators, satellites, tablet PCs, electronic books, gadgets, electronic devices, devices having artificial intelligence, computing devices, networking equipment, servers, routers, and the like. Furthermore, the elements depicted in the flow chart and block diagrams or any other logical component may be implemented on a machine capable of executing program instructions. Thus, while the foregoing drawings and descriptions set forth functional aspects of the disclosed systems, no particular arrangement of software for implementing these functional aspects should be inferred from these descriptions unless explicitly stated or otherwise clear from the context. Similarly, it may be appreciated that the various steps identified and described above may be varied and that the order of steps may be adapted to particular applications of the techniques disclosed herein. All such variations and modifications are intended to fall within the scope of this disclosure. As such, the depiction and/or description of an order for various steps should not be understood to require a particular order of execution for those steps, unless required by a particular application, or explicitly stated or otherwise clear from the context. The methods and/or processes described above, and steps thereof, may be realized in hardware, software or any combination of hardware and software suitable for a particular application. The hardware may include a general purpose computer and/or dedicated computing device or specific computing device or particular aspect or component of a specific computing device. The processes may be realized in one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors or other programmable devices, along with internal and/or external memory. The processes may also, or instead, be embodied in an application specific integrated circuit, a programmable gate array, programmable array logic, or any other device or combination of devices that may be configured to process electronic signals. It may further be appreciated that one or more of the processes may be realized as a computer executable code capable of being executed on a machine readable medium. 
The computer executable code may be created using a structured programming language such as C, an object oriented programming language such as C++, or any other high-level or low-level programming language (including assembly languages, hardware description languages, and database programming languages and technologies) that may be stored, compiled, or interpreted to run on one of the above devices, as well as heterogeneous combinations of processors, processor architectures, or combinations of different hardware and software, or any other machine capable of executing program instructions. Thus, in one aspect, each method described above and combinations thereof may be embodied in computer executable code that, when executing on one or more computing devices, performs the steps thereof. In another aspect, the methods may be embodied in systems that perform the steps thereof, and may be distributed across devices in a number of ways, or all of the functionality may be integrated into a dedicated, standalone device or other hardware. In another aspect, the means for performing the steps associated with the processes described above may include any of the hardware and/or software described above. All such permutations and combinations are intended to fall within the scope of the present disclosure. While the methods and systems described herein have been disclosed in connection with certain preferred embodiments shown and described in detail, various modifications and improvements thereon may become readily apparent to those skilled in the art. Accordingly, the spirit and scope of the methods and systems described herein are not to be limited by the foregoing examples but are to be understood in the broadest sense allowable by law. All documents referenced herein are hereby incorporated by reference in their entirety. | 296,938 |
11861906 | DETAILED DESCRIPTION Various embodiments now will be described more fully hereinafter with reference to the accompanying drawings. It should be understood that the invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Like numbers refer to like elements throughout. FIG.1illustrates a technology stack100indicative of technology layers configured to execute a set of capabilities, in accordance with an embodiment of the present invention. The technology stack100may include a customization layer102, an interaction layer104, a visualizations layer108, an analytics layer110, a patterns layer112, an events layer114, and a data layer118, without limitations. The different technology layers or the technology stack100may be referred to as an "Eagle" Stack100, which should be understood to encompass the various layers that allow precise monitoring, analytics, and understanding of spatiotemporal data associated with an event, such as a sports event and the like. For example, the technology stack may provide an analytic platform that may take spatiotemporal data (e.g., 3D motion capture "XYZ" data) from National Basketball Association (NBA) arenas or other sports arenas and, after cleansing, may perform spatiotemporal pattern recognition to extract certain "events". The extracted events may be, for example (among many other possibilities), events that correspond to particular understandings of events within the overall sporting event, such as "pick and roll" or "blitz." Such events may correspond to real events in a game, and may, in turn, be subject to various metrics, analytic tools, and visualizations around the events. Event recognition may be based on pattern recognition by machine learning, such as spatiotemporal pattern recognition, and in some cases, may be augmented, confirmed, or aided by human feedback. The customization layer102may allow performing custom analytics and interpretation using analytics, visualization, and other tools, as well as optional crowd-sourced feedback for developing team-specific analytics, models, exports, and related insights. For example, among many other possibilities, the customization layer102may facilitate generating visualizations for different spatiotemporal movements of a football player or group of players, and counter-movements associated with other players or groups of players during a football event. The interaction layer104may facilitate generating real-time interactive tasks, visual representations, interfaces, video clips, images, screens, and other such vehicles for allowing viewing of an event with enhanced features or allowing interaction of a user with a virtual event derived from an actual real-time event. For example, the interaction layer104may allow a user to access features or metrics such as a shot matrix, a screens breakdown, possession detection, and many others using real-time interactive tools that may slice, dice, and analyze data obtained from the real-time event such as a sports event. The visualizations layer108may allow dynamic visualizations of patterns and analytics developed from the data obtained from the real-time event. The visualizations may be presented in the form of a scatter rank, shot comparisons, a clip view, and many others. 
The visualizations layer108may use various types of visualizations and graphical tools for creating visual depictions. The visuals may include various types of interactive charts, graphs, diagrams, comparative analytical graphs, and the like. The visualizations layer108may be linked with the interaction layer so that the visual depictions may be presented in an interactive fashion for user interaction with real-time events produced on a virtual platform such as the analytic platform of the present invention. The analytics layer110may involve various analytics and Artificial Intelligence (AI) tools to perform analysis and interpretation of data retrieved from the real-time event, such as a sports event, so that the analyzed data results in insights that make sense of the big data pulled from the real-time event. The analytics and AI tools may comprise tools such as search and optimization tools, inference rule engines, algorithms, learning algorithms, logic modules, probabilistic tools and methods, decision analytics tools, machine learning algorithms, semantic tools, expert systems, and the like, without limitation. Output from the analytics layer110and patterns layer112is exportable by the user as a database that enables the customer to configure their own machines to read and access the events and metrics stored in the system. In accordance with various exemplary and non-limiting embodiments, patterns and metrics are structured and stored in an intuitive way. In general, the database utilized for storing the events and metric data is designed to facilitate easy export and to enable integration with a team's internal workflow. In one embodiment, there is a unique file corresponding to each individual game. Within each file, individual data structures may be configured in accordance with included structure definitions for each data type indicative of a type of event for which data may be identified and stored. For example, types of events that may be recorded for a basketball game include, but are not limited to, isos, handoffs, posts, screens, transitions, shots, closeouts, and chances. With reference to, for example, the data type "screens", Table 1 is an exemplary listing of the data structure for storing information related to each occurrence of a screen. As illustrated, each data type is comprised of a plurality of component variable definitions, each comprised of a data type and a description of the variable.

TABLE 1 - screens
id                    INT      Internal ID of this screen.
possession_id         STRING   Internal ID of the possession in which this event took place.
frame                 INT      Frame ID, denoting frame number from the start of the current period. Currently, this marks the frame at which the screener and ballhandler are closest.
frame_time            INT      Time stamp provided in SportVU data for a frame, measured in milliseconds in the current epoch (i.e., from 00:00:00 UTC on 1 Jan. 1970).
game_code             INT      Game code provided in SportVU data.
period                INT      Regulation periods 1-4, overtime periods 5 and up.
game_clock            NUMBER   Number of seconds remaining in period, from 720.00 to 0.00.
location_x            NUMBER   Location along length of court, from 0 to 94.
location_y            NUMBER   Location along baseline of court, from 0 to 50.
screener              INT      ID of screener, matches SportVU ID.
ballhandler           INT      ID of the ballhandler, matches SportVU ID.
screener_defender     INT      ID of the screener's defender, matches SportVU ID.
ballhandler_defender  INT      ID of the ballhandler's defender, matches SportVU ID.
oteam                 INT      ID of team on offense, matches IDs in SportVU data.
dteam                 INT      ID of team on defense, matches IDs in SportVU data.
rdef                  STRING   String representing the observed actions of the ballhandler's defender.
sdef                  STRING   String representing the observed actions of the screener's defender.
scr_type              STRING   Classification of the screen into take, reject, or slip.
outcomes_bhr          ARRAY    Actions by the ballhandler, taken from the outcomes described at the end of the document, such as FGX or FGM.
outcomes_scr          ARRAY    Actions by the screener, taken from the outcomes described at the end of the document, such as FGX or FGM.

These exported files, one for each game, enable other machines to read the stored understanding of the game and build further upon that knowledge. In accordance with various embodiments, the data extraction and/or export is optionally accomplished via a JSON schema (an illustrative sketch of one such exported record is provided below). The patterns layer112may provide a technology infrastructure for rapid discovery of new patterns arising out of the retrieved data from the real-time event such as a sports event. The patterns may comprise many different patterns that correspond to an understanding of the event, such as a defensive pattern (e.g., blitz, switch, over, under, up to touch, contain-trap, zone, man-to-man, or face-up pattern), various offensive patterns (e.g., pick-and-roll, pick-and-pop, horns, dribble-drive, off-ball screens, cuts, post-up, and the like), patterns reflecting plays (scoring plays, three-point plays, "red zone" plays, pass plays, running plays, fast break plays, etc.) and various other patterns associated with a player in the game or sports, in each case corresponding to distinct spatiotemporal events. The events layer114may allow creating new events or editing or correcting current events. For example, the events layer may allow for analyzing the accuracy of markings or other game definitions and may comment on whether they meet standards and sports guidelines. For example, specific boundary markings in an actual real-time event may not be compliant with the guidelines and there may exist some errors, which may be identified by the events layer114through analysis and virtual interactions possible with the platform of the present invention. Events may correspond to various understandings of a game, including offensive and defensive plays, matchups among players or groups of players, scoring events, penalty or foul events, and many others. The data layer118facilitates management of the big data retrieved from the real-time event such as a sports event. The data layer118may allow creating libraries that may store raw data, catalogs, corrected data, analyzed data, insights, and the like. The data layer118may manage online warehousing in a cloud storage setup or in any other manner in various embodiments. FIG.2illustrates a process flow diagram200, in accordance with an embodiment of the present invention. 
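Before turning to the process flow ofFIG.2, the following is a minimal, hypothetical sketch of how a single "screens" record of the kind listed in Table 1 might be represented and serialized into a per-game JSON export file. The field names follow Table 1; the ScreenRecord class, the export_game_file helper, the file layout, and the sample values are illustrative assumptions only, not the patent's actual schema.

```python
# Hypothetical sketch of a per-game export of "screens" records (fields per Table 1).
# The class name, helper, file layout, and sample values are illustrative assumptions.
import json
from dataclasses import dataclass, asdict, field
from typing import List

@dataclass
class ScreenRecord:
    id: int                      # internal ID of this screen
    possession_id: str           # internal ID of the enclosing possession
    frame: int                   # frame number from the start of the current period
    frame_time: int              # SportVU timestamp, milliseconds since the Unix epoch
    game_code: int               # game code provided in SportVU data
    period: int                  # 1-4 regulation, 5 and up for overtime
    game_clock: float            # seconds remaining in period, 720.00 down to 0.00
    location_x: float            # 0-94, along the length of the court
    location_y: float            # 0-50, along the baseline
    screener: int                # SportVU player IDs
    ballhandler: int
    screener_defender: int
    ballhandler_defender: int
    oteam: int                   # offensive team ID
    dteam: int                   # defensive team ID
    rdef: str                    # observed actions of the ballhandler's defender, e.g. "over"
    sdef: str                    # observed actions of the screener's defender, e.g. "show"
    scr_type: str                # "take", "reject", or "slip"
    outcomes_bhr: List[str] = field(default_factory=list)  # e.g. ["FGM"]
    outcomes_scr: List[str] = field(default_factory=list)

def export_game_file(path: str, screens: List[ScreenRecord]) -> None:
    """Write one game's screen events as a JSON file (one file per game)."""
    with open(path, "w") as f:
        json.dump({"screens": [asdict(s) for s in screens]}, f, indent=2)

if __name__ == "__main__":
    example = ScreenRecord(
        id=1, possession_id="poss-0042", frame=1870, frame_time=1420843265000,
        game_code=21400123, period=2, game_clock=407.52,
        location_x=71.3, location_y=28.9,
        screener=2546, ballhandler=201939,
        screener_defender=2730, ballhandler_defender=101108,
        oteam=1610612744, dteam=1610612746,
        rdef="over", sdef="show", scr_type="take",
        outcomes_bhr=["FGM"], outcomes_scr=[],
    )
    export_game_file("game_21400123_screens.json", [example])
```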
The process200may include retrieving spatiotemporal data associated with a sports or game and storing in a data library at step202. The spatiotemporal data may relate to a video feed that was captured by a 3D camera, such as one positioned in a sports arena or other venue, or it may come from another source. The process200may further include cleaning of the rough spatiotemporal data at step204through analytical and machine learning tools and utilizing various technology layers as discussed in conjunction withFIG.1so as to generate meaningful insights from the cleansed data. The process200may further include recognizing spatiotemporal patterns through analysis of the cleansed data at step208. Spatiotemporal patterns may comprise a wide range of patterns that are associated with types of events. For example, a particular pattern in space, such as the ball bouncing off the rim, then falling below it, may contribute toward recognizing a “rebound” event in basketball. Patterns in space and time may lead to recognition of single events or multiple events that comprise a defined sequence of recognized events (such as in types of plays that have multiple steps). The recognized patterns may define a series of events associated with the sports that may be stored in an event datastore at step210. These events may be organized according to the recognized spatiotemporal patterns; for example, a series of events may have been recognized as “pick,” “rebound,” “shot,” or like events in basketball, and they may be stored as such in the event datastore210. The event datastore210may store a wide range of such events, including individual patterns recognized by spatiotemporal pattern recognition and aggregated patterns, such as when one pattern follows another in an extended, multi-step event (such as in plays where one event occurs and then another occurs, such as “pick and roll” or “pick and pop” events in basketball, football events that involve setting an initial block, then springing out for a pass, and many others). The process200may further include querying or aggregation or pattern detection at step212. The querying of data or aggregation may be performed with the use of search tools that may be operably and communicatively connected with the data library or the events datastore for analyzing, searching, aggregating the rough data, cleansed, or analyzed data, or events data or the events patterns. At step214, metrics and actionable intelligence may be used for developing insights from the searched or aggregated data through artificial intelligence and machine learning tools. At step218, for example, the metrics and actionable intelligence may convert the data into interactive visualization portals or interfaces for use by a user in an interactive manner. In embodiments, an interactive visualization portal or interface may produce a 3D reconstruction of an event, such as a game. In embodiments, a 3D reconstruction of a game may be produced using a process that presents the reconstruction from a point of view, such as a first person point of view of a participant in an event, such as a player in a game. Raw input XYZ data obtained from various data sources is frequently noisy, missing, or wrong. XYZ data is sometimes delivered with attached basic events already identified in it, such as possession, pass, dribble, and shot events; however, these associations are frequently incorrect. 
This is important because event identification further down the process (in Spatiotemporal Pattern Recognition) sometimes depends on the correctness of these basic events. For example, if two players' XY positions are switched, then "over" vs "under" defense would be incorrectly characterized, since the players' relative positioning is used as a critical feature for the classification. Even player-by-player data sources are occasionally incorrect, such as associating identified events with the wrong player. First, validation algorithms are used to detect all events, including the basic events such as possession, pass, dribble, shot, and rebound that are provided with the XYZ data. Possession/Non-possession models may use a Hidden Markov Model to best fit the data to these states. Shots and rebounds may use the possession model outputs, combined with 1) the projected destination of the ball, and 2) player-by-player (PBP) information. Dribbles may be identified using a trained ML algorithm and also using the output of the possession model. These algorithms may decrease the basic event labeling error rate by approximately 50% or more. Second, the system has a library of anomaly detection algorithms to identify potential problems in the data including, but not limited to, temporal discontinuities (intervals of missing data are flagged), spatial discontinuities (objects traveling in a non-smooth motion, "jumping"), and interpolation detection (data that is too smooth, indicating that post-processing was done by the data supplier to interpolate between known data points in order to fill in missing data). This problem data is flagged for human review so that events detected during these periods are subject to further scrutiny. Spatiotemporal Pattern Recognition Spatiotemporal pattern recognition208is used to automatically identify relationships between physical and temporal patterns and various types of events. In the example of basketball, one challenge is how to turn x, y, z positions of ten players and one ball at twenty-five frames per second into usable input for machine learning and pattern recognition algorithms. For patterns one is trying to detect (e.g., pick & rolls), the raw inputs may not suffice. The instances within each pattern category can look very different from each other. One, therefore, may benefit from a layer of abstraction and generality. Features that relate multiple actors in time are key components of the input. Examples include, but are not limited to, the motion of player one (P1) towards player two (P2) for at least T seconds, a rate of motion of at least V m/s for at least T seconds and at the projected point of intersection of paths A and B, and a separation distance less than D. In embodiments, an algorithm for spatiotemporal pattern recognition can use relative motion of visible features within a feed, duration of relative motion of such features, rate of motion of such features with respect to each other, rate of acceleration of such features with respect to each other, a projected point of intersection of such features, the separation distance of such features, and the like to identify or recognize a pattern with respect to visible features in a feed, which in turn can be used for various other purposes disclosed herein, such as recognition of a semantically relevant event or feature that relates to the pattern. 
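As an illustration of the kind of multi-actor relational features described above, the following sketch computes a few simple quantities (per-frame separation distance, approach speed of P1 toward P2, and the duration of a sustained approach) from two players' XY tracks sampled at twenty-five frames per second. The function names, units, and the 2 ft/s threshold are hypothetical choices for illustration, not values taken from the patent.

```python
# Sketch of simple spatiotemporal relational features between two tracked players.
# Assumes XY tracks sampled at 25 Hz; names and thresholds are illustrative only.
import numpy as np

FPS = 25.0  # frames per second of the tracking feed

def separation(p1: np.ndarray, p2: np.ndarray) -> np.ndarray:
    """Per-frame Euclidean distance between two (T, 2) XY tracks, in feet."""
    return np.linalg.norm(p1 - p2, axis=1)

def approach_speed(p1: np.ndarray, p2: np.ndarray) -> np.ndarray:
    """Per-frame rate (ft/s) at which P1 closes distance to P2 (positive = approaching)."""
    d = separation(p1, p2)
    return -np.diff(d) * FPS

def sustained_approach_seconds(p1: np.ndarray, p2: np.ndarray, min_speed: float = 2.0) -> float:
    """Longest run, in seconds, during which P1 moves toward P2 at >= min_speed ft/s."""
    fast = approach_speed(p1, p2) >= min_speed
    best = run = 0
    for flag in fast:
        run = run + 1 if flag else 0
        best = max(best, run)
    return best / FPS

if __name__ == "__main__":
    t = np.arange(0, 3, 1 / FPS)                              # three seconds of frames
    ballhandler = np.column_stack([20 + 4 * t, 25 + 0 * t])   # moving toward the screener
    screener = np.column_stack([35 + 0 * t, 25 + 0 * t])      # standing still
    print("min separation (ft):", separation(ballhandler, screener).min().round(2))
    print("sustained approach (s):", sustained_approach_seconds(ballhandler, screener))
```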
In embodiments, these factors may be based on a pre-existing model or understanding of the relevance of such features, such as where values or thresholds may be applied within the pattern recognition algorithm to aid pattern recognition. Thus, thresholds or values may be applied to rates of motion, durations of motion, and the like to assist in pattern recognition. However, in other cases, pattern recognition may occur by adjusting weights or values of various input features within a machine learning system, without a pre-existing model or understanding of the significance of particular values and without applying thresholds or the like. Thus, the spatiotemporal pattern recognition algorithm may be based on at least one pattern recognized by adjusting at least one of an input type and a weight within a machine learning system. This recognition may occur independently of any a priori model or understanding of the significance of particular input types, features, or characteristics. In embodiments, an input type may be selected from the group consisting of relative direction of motion of at least two visible features, duration of relative motion of visible features with respect to each other, rate of motion of at least two visible features with respect to each other, acceleration of motion of at least two visible feature with respect to each other, projected point of intersection of at least two visible features with respect to each other and separation distance between at least two visible features with respect to each other, and the like. In embodiments of the present disclosure, there is provided a library of such features involving multiple actors over space and time. In the past machine learning (ML) literature, there has been relatively little need for such a library of spatiotemporal features, because there were few datasets with these characteristics on which learning could have been considered as an option. The library may include relationships between actors (e.g., players one through ten in basketball), relationships between the actors and other objects such as the ball, and relationships to other markers, such as designated points and lines on the court or field, and to projected locations based on predicted motion. Another key challenge is there has not been a labeled dataset for training the ML algorithms. Such a labeled dataset may be used in connection with various embodiments disclosed herein. For example, there has previously been no XYZ player-tracking dataset that already has higher level events, such as pick and roll (P&R) events) labeled at each time frame they occur. Labeling such events, for many different types of events and sub-types, is a laborious process. Also, the number of training examples required to adequately train the classifier may be unknown. One may use a variation of active learning to solve this challenge. Instead of using a set of labeled data as training input for a classifier trying to distinguish A and B, the machine finds an unlabeled example that is closest to the boundary between As and Bs in the feature space. The machine then queries a human operator/labeler for the label for this example. It uses this labeled example to refine its classifier and then repeats. In one exemplary embodiment of active learning, the system also incorporates human input in the form of new features. 
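The active-learning variation just described can be sketched as a loop: train a classifier on the current labels, find the unlabeled candidate closest to the decision boundary, ask a human operator for its label, and repeat. The sketch below is a generic uncertainty-sampling loop built on an assumed scikit-learn-style classifier; the ask_human callback, the margin criterion, and the toy data stand in for the system's actual components and are not taken from the patent.

```python
# Sketch of an uncertainty-sampling active learning loop for event classification.
# The classifier choice, ask_human callback, and stopping rule are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_learning_loop(X_labeled, y_labeled, X_pool, ask_human, rounds=20):
    """Iteratively query a human for the pooled example nearest the decision boundary."""
    X_labeled = list(X_labeled)
    y_labeled = list(y_labeled)
    pool = list(range(len(X_pool)))
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        if not pool:
            break
        clf.fit(np.array(X_labeled), np.array(y_labeled))
        margins = np.abs(clf.decision_function(np.array([X_pool[i] for i in pool])))
        pick = pool[int(np.argmin(margins))]          # most uncertain candidate
        label = ask_human(X_pool[pick])               # e.g., show the aligned video clip
        X_labeled.append(X_pool[pick])
        y_labeled.append(label)
        pool.remove(pick)
    clf.fit(np.array(X_labeled), np.array(y_labeled))
    return clf

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy features: two clusters standing in for "pick and roll" vs "not a pick and roll".
    X_seed = np.vstack([rng.normal(0, 1, (5, 2)), rng.normal(3, 1, (5, 2))])
    y_seed = np.array([0] * 5 + [1] * 5)
    X_pool = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
    oracle = lambda x: int(x.sum() > 3)               # stands in for the human labeler
    model = active_learning_loop(X_seed, y_seed, list(X_pool), oracle, rounds=10)
    print("classifier refined over 10 human queries")
```

The human-supplied feature suggestions described next would extend a loop of this kind by adding new feature columns between rounds rather than only new labels.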
These features are either completely devised by the human operator (and inputted as code snippets in the active learning framework), or they are suggested in template form by the framework. The templates use the spatiotemporal pattern library to suggest types of features that may be fruitful to test. The operator can choose a pattern, and test a particular instantiation of it, or request that the machine test a range of instantiations of that pattern. Multi-Loop Iterative Process Some features are based on outputs of the machine learning process itself. Thus, multiple iterations of training are used to capture this feedback and allow the process to converge. For example, a first iteration of the ML process may suggest that the Bulls tend to ice the P&R. This fact is then fed into the next iteration of ML training as a feature, which biases the algorithm to label Bulls' P&R defense as ices. The process converges after multiple iterations. In practice, two iterations have typically been sufficient to yield good results. In accordance with exemplary embodiments, a canonical event datastore210may contain a definitive list of events that the system knows occurred during a game. This includes events extracted from the XYZ data, as well as those specified by third-party sources, such as PBP data from various vendors. The events in the canonical event datastore210may have game clock times specified for each event. The datastore210may be fairly large. To maintain efficient processing, it is shared and stored in-memory across many machines in the cloud. This is similar in principle to other methods such as Hadoop™; however, it is much more efficient, because in embodiments involving events, such as sporting events, where there is some predetermined structure that is likely to be present (e.g., the 24-second shot clock, or quarters or halves in a basketball game), it makes key structural assumptions about the data. Because the data is from sports games, for example, in embodiments one may enforce that no queries will run across multiple quarters/periods. Aggregation steps can occur across quarters/periods, but query results will not. This is one instantiation of this assumption. Any other domain in which locality of data can be enforced will also fall into this category. Such a design allows rapid and complex querying across all of the data, allowing arbitrary filters, rather than relying on either 1) long-running processes, or 2) summary data, or 3) pre-computed results on pre-determined filters. In accordance with exemplary and non-limiting embodiments, data is divided into small enough shards that each worker shard has a low latency response time. Each distributed machine may have multiple workers corresponding to the number of processes the machine can support concurrently. Query results never rely on more than one shard, since we enforce that events never cross quarter/period boundaries. Aggregation functions all run incrementally rather than in batch process so that as workers return results, these are incorporated into the final answer immediately. To handle results such as rankings pages, where many rows must be returned, the aggregator uses hashes to keep track of the separate rows and incrementally updates them. 
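A minimal sketch of the sharded, incremental aggregation idea described above follows: each shard holds events for a single quarter or period (so no query result spans a shard), workers filter their shard independently, and a hash-keyed aggregator folds in each worker's partial result as soon as it arrives. The shard layout, event fields, and the example metric are illustrative assumptions, not the system's actual storage format.

```python
# Sketch of shard-per-period querying with incremental, hash-keyed aggregation.
# Shard layout, event fields, and the metric (points per row key) are assumptions.
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor, as_completed

# Each shard holds events for one (game, period); queries never span shards.
SHARDS = [
    [{"player": "A", "period": 1, "points": 2}, {"player": "B", "period": 1, "points": 3}],
    [{"player": "A", "period": 2, "points": 3}, {"player": "B", "period": 2, "points": 0}],
]

def query_shard(shard, predicate):
    """Worker: filter one shard and return partial sums keyed by player."""
    partial = defaultdict(int)
    for event in shard:
        if predicate(event):
            partial[event["player"]] += event["points"]
    return partial

def run_query(shards, predicate):
    """Aggregator: fold partial results into row totals as workers finish."""
    totals = defaultdict(int)   # hash keyed by row (player) for incremental updates
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(query_shard, s, predicate) for s in shards]
        for fut in as_completed(futures):
            for key, value in fut.result().items():
                totals[key] += value    # incremental update, no batch re-computation
    return dict(totals)

if __name__ == "__main__":
    print(run_query(SHARDS, lambda e: e["points"] > 0))   # e.g. {'A': 5, 'B': 3}
```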
Referring toFIG.3, an exploration loop may be enabled by the methods and systems disclosed herein, where questioning and exploration can occur, such as using visualizations (e.g., data effects, referred to as DataFX in this disclosure), processing can occur, such as to identify new events and metrics, and understanding emerges, leading to additional questions, processing and understanding. Referring toFIG.4, the present disclosure provides an instant player rankings feature as depicted in the illustrated user interface. A user can select among various types of available rankings402, as indicated in the drop down list410, such as rankings relating to shooting, rebounding, rebound ratings, isolations (Isos), picks, postups, handoffs, lineups, matchups, possessions (including metrics and actions), transitions, plays and chances. Rankings can be selected in a menu element404for players, teams, or other entities. Rankings can be selected for different types of play in the menu element408, such as for offense, defense, transition, special situations, and the like. The ranking interface allows a user to quickly query the system to answer a particular question instead of thumbing through pages of reports. The user interface lets a user locate essential factors and evaluate talent of a player to make more informed decisions. FIGS.5A and5Bshow certain basic, yet quite in-depth, pages in the systems described herein, referred to in some cases as the “Eagle system.” This user interface may allow the user to rank players and teams by a wide variety of metrics. This may include identified actions, metrics derived from these actions, and other continuous metrics. Metrics may relate to different kinds of events, different entities (players and teams), different situations (offense and defense) and any other patterns identified in the spatiotemporal pattern recognition system. Examples of items on which various entities can be ranked in the case of basketball include chances, charges, closeouts, drives, frequencies, handoffs, isolations, lineups, matches, picks, plays, possessions, postups, primary defenders, rebounding (main and raw), off ball screens, shooting, speed/load and transitions. The Rankings UI makes it easy for a user to understand relative quality of one row item versus other row items, along any metric. Each metric may be displayed in a column, and that row's ranking within the distribution of values for that metrics may be displayed for the user. Color coding makes it easy for the user to understand relative goodness. FIGS.6A and6Bshow a set of filters in the UI, which can be used to filter particular items to obtain greater levels of detail or selected sets of results. Filters may exist for seasons, games, home teams, away teams, earliest and latest date, postseason/regular season, wins/losses, offense home/away, offensive team, defensive team, layers on the court for offense/defense, players off court for offense/defense, locations, offensive or defensive statistics, score differential, periods, time remaining, after timeout play start, transition/no transition, and various other features. The filters602for offense may include selections for the ballhandler, the ballhandler position, the screener, the screener position, the ballhandler outcome, the screener outcome, the direction, the type of pick, the type of pop/roll, the direction of the pop/roll, and presence of the play (e.g., on the wing or in the middle). 
Many other examples of filters are possible, as a filter can exist for any type of parameter that is tracked with respect to an event that is extracted by the system or that is in the spatiotemporal data set used to extract events. The present disclosure also allows situational comparisons. The user interface allows a user to search for a specific player that may fit into the offense. The highly accurate dataset and easy to use interface allow the user to compare similar players in similar situations. The user interface may allow the user to explore player tendencies. The user interface may allow locating shot locations and also may provide advanced search capabilities. Filters enable users to subset the data in a large number of ways and immediately receive metrics calculated on the subset. Using multiple loops for convergence in machine learning enables the system to return the newly filtered data and metrics in real-time, whereas existing methods would require minutes to re-compute the metrics given the filters, leading to inefficient exploration loops (FIG.3). Given that the data exploration and investigation process often require many loops, these inefficiencies can otherwise add up quickly. As illustrated with reference toFIGS.6A and6B, there are many filters that may enable a user to select specific situations of interest to analyze. These filters may be categorized into logical groups, including, but not limited to, Game, Team, Location, Offense, Defense, and Other. The possible filters may automatically change depending on the type of event being analyzed, for example, Shooting, Rebounding, Picks, Handoffs, Isolations, Postups, Transitions, Closeouts, Charges, Drives, Lineups, Matchups, Play Types, Possessions. For all event types, under the Game category, filters may include Season, specific Games, Earliest Date, Latest Date, Home Team, Away Team, where the game is being played Home/Away, whether the outcome was Wins/Losses, whether the game was a Playoff game, and recency of the game. For all event types, under the Team category, filters may include Offensive Team, Defensive Team, Offensive Players on Court, Defenders Players on Court, Offensive Players Off Court, Defenders Off Court. For all event types, under the Location category, the user may be given a clickable court map that is segmented into logical partitions of the court. The user may then select any number of these partitions in order to filter only events that occurred in those partitions. For all event types, under the Other category, the filters may include Score Differential, Play Start Type (Multi-Select: Field Goal ORB, Field Goal DRB, Free Throw ORB, Free Throw DRB, Jump Ball, Live Ball Turnover, Defensive Out of Bounds, Sideline Out of Bounds), Periods, Seconds Remaining, Chance After Timeout (T/F/ALL), Transition (T/F/ALL). For Shooting, under the Offense category, the filters may include Shooter, Position, Outcome (Made/Missed/All), Shot Value, Catch and Shoot (T/F/ALL), Shot Distance, Simple Shot Type (Multi-Select: Heave, Angle Layup, Driving Layup, Jumper, Post), Complex Shot Type (Multi-Select: Heave, Lob, Tip, Standstill Layup, Cut Layup, Driving Layup, Floater, Catch and Shoot), Assisted (T/F/ALL), Pass From (Player), Blocked (T/F/ALL), Dunk (T/F/ALL), Bank (T/F/ALL), Goaltending (T/F/ALL), Shot Attempt Type (Multi-select: FGA No Foul, FGM Foul, FGX Foul), Shot SEFG (Value Range), Shot Clock (Range), Previous Event (Multi-Select: Transition, Pick, Isolation, Handoff, Post, None). 
For Shooting, under the Defense category, the filters may include Defender Position (Multi-Select: PG, SG, SF, PF, CTR), Closest Defender, Closest Defender Distance, Blocked By, Shooter Height Advantage. For Picks, under the Offense category, the filters may include Ballhandler, Ballhandler Position, Screener, Screener Position, Ballhandler Outcome (Pass, Shot, Foul, Turnover), Screener Outcome (Pass, Shot, Foul, Turnover), Direct or Indirect Outcome, Pick Type (Reject, Slip, Pick), Pop/Roll, Direction, Wing/Middle, Middle/Wing/Step-Up. For Picks, under the Defense category, the filters may include Ballhandler Defender, Ballhandler Defender Position, Screener Defender, Screener Defender Position, Ballhandler Defense Type (Over, Under, Blitz, Switch, Ice), Screener Defense Type (Soft, Show, Ice, Blitz, Switch), Ballhandler Defense (Complex) (Over, Under, Blitz, Switch, Ice, Contain Trap, Weak), Screener Defense (Complex) (Over, Under, Blitz, Switch, Ice, Contain Trap, Weak, Up to Touch). For Drives, under the Offense category, the filters may include Ballhandler, Ballhandler Position, Ballhandler Outcome, Direct or Indirect, Drive Category (Handoff, Iso, Pick, Closeout, Misc.), Drive End (Shot Near Basket, Pullup, Interior Pass, Kickout, Pullout, Turnover, Stoppage, Other), Direction, Blowby (T/F). For Drives, under the Defense category, the filters may include Ballhandler Defender, Ballhandler Defender Position, Help Defender Present (T/F), Help Defenders. For most other events, under the Offense category, the filters may include Ballhandler, Ballhandler Position, Ballhandler Outcome, Direct or Indirect. For most other events, under the Defense category, the filters may include Ballhandler Defender, Ballhandler Defender Position. For Postups, under the Offense category, the filters may additionally include Area (Left, Right, Middle). For Postups, under the Defense category, the filters may additionally include Double Team (T/F). The present disclosure provides detailed analysis capabilities, such as through the depicted user interface embodiment ofFIG.7. In an example depicted inFIG.7, the user interface may be used to know if a player should try and ice the pick and roll or not between two players. Filters can go from all picks, to picks involving a selected player as ballhandler, to picks involving that ballhandler with a certain screener, to the type of defense played by that screener. By filtering down to particular matchups (by player combinations and actions taken), the system allows rapid exploration of the different options for coaches and players, and selection of preferred actions that had the best outcomes in the past. Among other things, the system may give a detailed breakdown of a player's opponent and a better idea of what to expect during a game. The user interface may be used to know and highlight opponent capabilities. A breakdowns UI may make it easy for a user to drill down to a specific situation, all while gaining insight regarding frequency and efficacy of relevant slices through the data. The events captured by the present system may be capable of being manipulated using the UI.FIG.8shows a visualization, where a drop-down feature802allows a user to select various parameters related to the ballhandler, such as to break down to particular types of situations involving that ballhandler. These types of “breakdowns” facilitate improved interactivity with video data, including enhanced video data created with the methods and systems disclosed herein. 
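The filter-and-breakdown interaction described above amounts to subsetting the event datastore by a list of attribute constraints and recomputing metrics on the subset. The sketch below illustrates that pattern on a toy list of pick events; the field names, the helper functions, and the example metric (screen frequency and scoring rate by defense type) are assumptions for illustration, not the system's actual API.

```python
# Sketch of filtering pick events and recomputing a metric on the filtered subset.
# Event fields and the example metric (frequency and scoring rate) are assumptions.
from collections import Counter

PICKS = [
    {"ballhandler": "Player1", "screener": "Player2", "defense": "ice",   "outcome": "FGM"},
    {"ballhandler": "Player1", "screener": "Player2", "defense": "over",  "outcome": "FGX"},
    {"ballhandler": "Player1", "screener": "Player3", "defense": "ice",   "outcome": "TO"},
    {"ballhandler": "Player4", "screener": "Player2", "defense": "blitz", "outcome": "FGM"},
]

def apply_filters(events, **filters):
    """Keep only events whose fields match every supplied filter value."""
    return [e for e in events if all(e.get(k) == v for k, v in filters.items())]

def breakdown(events, by):
    """Frequency and scoring rate for each value of the chosen breakdown column."""
    rows = {}
    for value, count in Counter(e[by] for e in events).items():
        made = sum(1 for e in events if e[by] == value and e["outcome"] == "FGM")
        rows[value] = {"freq": count, "score_rate": round(made / count, 2)}
    return rows

if __name__ == "__main__":
    subset = apply_filters(PICKS, ballhandler="Player1", screener="Player2")
    print(breakdown(subset, by="defense"))
    # e.g. {'ice': {'freq': 1, 'score_rate': 1.0}, 'over': {'freq': 1, 'score_rate': 0.0}}
```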
Most standard visualizations are static images. For large and complex datasets, especially in cases where the questions to be answered are unknown beforehand, interactivity enables the user to explore the data, ask new questions, get new answers. Visualizations may be color coded good (e.g., orange) to bad (e.g., blue) based on outcomes in particular situations for easy understanding without reading the detailed numbers. Elements like the sizes of partitions can be used, such as to denote frequency. Again, a user can comprehend significance from a glance. In embodiments, each column represents a variable for partitioning the dataset. It is easy for a user to add, remove, and re-arrange columns by clicking and dragging. This makes it easy to experiment with different visualizations. Furthermore, the user can drill into a particular scenario by clicking on the partition of interest, which zooms into that partition, and redraws the partitions in the columns to the right so that they are re-scaled appropriately. This enables the user to view the relative sample sizes of the partitions in columns to the right, even when they are small relative to all possible scenarios represented in columns further to the left. In embodiments, a video icon takes a user to video clips of the set of plays that correspond to a given partition. Watching the video gives the user ideas for other variables to use for partitioning. Various interactive visualizations may be created to allow users to better understand insights that arise from the classification and filtering of events, such as ones that emphasize color coding for easy visual inspection and detection of anomalies (e.g., a generally good player with lots of orange but is bad/blue in one specific dimension). Conventionally, most standard visualizations are static images. However, for large and complex datasets, especially in cases where the questions to be answered are unknown beforehand, interactivity enables the user to explore the data, ask new questions, get new answers. For example, a breakdown view may be color coded good (orange) to bad (blue) for easy understanding without reading the numbers. Sizes of partitions may denote the frequency of events. Again, one can comprehend from a glance the events that occur most frequently. Each column of a visualization may represent a variable for partitioning the dataset. It may be easy to add, remove, and re-arrange columns by clicking and dragging. This makes it easy to experiment with possible visualizations. In embodiments, a video icon may take a user to video clips, such as of the set of plays that correspond to that partition. Watching the video gives the user ideas for other variables to use for partitioning. In embodiments, a ranking view is provided. Upon moussing over each row of a ranking view, histograms above each column may give the user a clear contextual understanding that row's performance for each column variable. The shape of a distribution is often informative. Color-coded bars within each cell may also provide a view of each cell's performance that is always available, without moussing over. Alternatively, the cells themselves may be color-coded. The system may provide a personalized video in embodiments of the methods and systems described herein. For example, with little time to scout the opposition, the system can provide a user with relevant information to quickly prepare the team. The team may rapidly retrieve the most meaningful plays, cut, and compiled to specific needs of players. 
The system may provide immediate video cut-ups. In embodiments, the present disclosure provides a video that is synchronized with identified actions. For example, if spatiotemporal machine learning identifies a segment of a video as showing a pick and roll involving two players, then that video segment may be tagged, so that when that event is found (either by browsing or by filtering to that situation), the video can be displayed. Because the machine understands the precise moment that an event occurs in the video, a user-customizable segment of video can be created. For example, the user can retrieve video corresponding to x seconds before, and y seconds after, each event occurrence. Thus, the video may be tagged and associated with events. The present disclosure may provide a video that may allow customization by numerous filters of the type disclosed above, relating to finding a video that satisfies various parameters, that displays various events, or combinations thereof. For example, in embodiments, an interactive interface provided by the present disclosure allows watching videos clips for specific game situations or actions. Reports may provide a user with easy access to printable pages summarizing pre-game information about an opponent, scouting report for a particular player, or a post-game summary. For example, the reports may collect actionable useful information in one to two easy-to-digest pages. These pages may be automatically scheduled to be sent to other staff members, e.g., post-game reports sent to coaches after each game. Referring toFIG.11, a report may include statistics for a given player, as well as visual representations, such as of locations1102where shots were taken, including shots of a particular type (such as catch and shoot shots). The UI as illustrated inFIG.12provides a court comparison view1202among several parts of a sports court (and can be provided among different courts as well). For example, filters1204may be used to select the type of statistic to show for a court. The statistics can be filtered to show results filtered by left side1208or right side1214. Where the statistics indicate an advantage, the advantages can be shown, such as of left side advantages1210and right side advantages1212. In sports, the field of play is an important domain constant or elements. Many aspects of the game are best represented for comparison on a field of play. In embodiments, a four court comparison view1202is a novel way to compare two players, two teams, or other entities, to gain an overview view of each player/team (Leftmost and RightmostFIGS.1208,1214and understand each one's strengths/weaknesses (Left and Right CenterFIGS.1210,1212). The court view UI1302as illustrated inFIG.13provides a court view1304of a sport arena1304, in accordance with an embodiment of the present disclosure. Statistics for very specific court locations can be presented on a portion1308of the court view. The UI may provide a view of custom markings, in accordance with an embodiment of the present invention. Referring toFIG.14, filters may enable users to subset the data in a large number of ways, and immediately receive metrics calculated on the subset. Descriptions of particular events may be captured and made available to users. 
Various events may be labeled in a game, as reflected inFIG.15, which provides a detailed view of a timeline1502of a game, broken down by possession1504, by chances1508, and by specific events1510that occurred along the timeline1502, such as determined by spatiotemporal pattern recognition, by human analysis, or by a combination of the two. Filter categories available by a user interface of the present disclosure may include ones based on seasons, games, home teams, away teams, earliest date, latest date, postseason/regular season, wins/losses, offense home/away, offensive team, defensive team, players on the court for offense/defense, players off court for offense/defense, location, score differential, periods, time remaining, play type (e.g., after timeout play) and transition/no transition. Events may include ones based on primitive markings, such as shots, shots with a corrected shot clock, rebounds, passes, possessions, dribbles, and steals, and various novel event types, such as SEFG (shot quality), EFG+, player adjusted SEFG, and various rebounding metrics, such as positioning, opportunity percentage, attack, conversion percentage, rebounding above position (RAP), attack+, conversion+ and RAP+. Offensive markings may include simple shot types (e.g., angled layup, driving layup, heave, post shot, jumper), complex shot types (e.g., post shot, heave, cut layup, standstill layup, lob, tip, floater, driving layup, catch and shoot stationary, catch and shoot on the move, shake & raise, over screen, pullup and stepback), and other information relating to shots (e.g., catch and shoot, shot clock, 2/3S, assisted shots, shooting foul/not shooting foul, made/missed, blocked/not blocked, shooter/defender, position/defender position, defender distance and shot distance). Other events that may be recognized, such as through the spatiotemporal learning system, may include ones related to picks (ballhandler/screener, ballhandler/screener defender, pop/roll, wing/middle, step-up screens, reject/slip/take, direction (right/left/none), double screen types (e.g., double, horns, L, and handoffs into pick), and defense types (ice, blitz, switch, show, soft, over, under, weak, contain trap, and up to touch), ones related to handoffs (e.g., receive/setter, receiver/setter defender, handoff defense (ice, blitz, switch, show, soft, over, or under), handback/dribble handoff, and wing/step-up/middle), ones related to isolations (e.g., ballhandler/defender and double team), and ones related to post-ups (e.g., ballhandler/defender, right/middle/left and double teams). 
Defensive markings are also available, such as ones relating to closeouts (e.g., ballhandler/defender), rebounds (e.g., players going for rebounds (defense/offense)), pick/handoff defense, post double teams, drive blow-bys and help defender on drives), ones relating to off ball screens (e.g., screener/cutter and screener/cutter defender), ones relating to transitions (e.g., when transitions/fast breaks occur, players involved on offense and defense, and putback/no putback), ones relating to how plays start (e.g., after timeout/not after timeout, sideline out of bounds, baseline out of bounds, field goal offensive rebound/defensive rebound, free throw offensive rebound/defensive rebound and live ball turnovers), and ones relating to drives, such as ballhandler/defender, right/left, blowby/no blowby, help defender presence, identity of help defender, drive starts (e.g., handoff, pick, isolation or closeout) and drive ends (e.g., shot near basket, interior pass, kickout, pullup, pullout, stoppage, and turnover). These examples and many others from basketball and other sports may be defined, based on any understanding of what constitutes a type of event during a game. Markings may relate to off ball screens (screener/cutter), screener/cutter defender, screen types (down, pro cut, UCLA, wedge, wide pin, back, flex, clip, zipper, flare, cross, and pin in). FIG.16shows a system1602for querying and aggregation. In embodiments, data is divided into small enough shards that each worker has low latency response time. Each distributed machine may have multiple workers corresponding to the number of processes the machine can support concurrently. Query results never rely on more than one shard, since we enforce that events never cross quarter/period boundaries. Aggregation functions all run incrementally rather than in batch process, so that as workers return results, these are incorporated into the final answer immediately. To handle results such as rankings pages, where many rows must be returned, the aggregator uses hashes to keep track of the separate rows and incrementally updates them. FIG.17shows a process flow for a hybrid classification process that uses human labelers together with machine learning algorithms to achieve high accuracy. This is similar to the flow described above in connection withFIG.2, except with the explicit inclusion of the human-machine validation process. By taking advantage of aligned video as described herein, one may provide an optimized process for human validation of machine labeled data. Most of the components are similar to those described in connection withFIG.2and in connection with the description of aligned video, such as the XYZ data source1702, cleaning process1704, spatiotemporal pattern recognition module1712, event processing system1714, video source1708, alignment facility1710and video snippets facility1718. Additional components include a validation and quality assurance process1720and an event-labeling component1722. Machine learning algorithms are designed to output a measure of confidence. For the most part, this corresponds to the distance from a separating hyperplane in the feature space. In embodiments, one may define a threshold for confidence. If an example is labeled by the machine and has confidence above the threshold, the event goes into the canonical event datastore210and nothing further is done. 
If an example has a confidence score below the threshold, then the system may retrieve the video corresponding to this candidate event, and ask a human operator to provide a judgment. The system asks two separate human operators for labels. If the given labels agree, the event goes into the canonical event datastore210. If they do not, a third person, known as the supervisor, is contacted for a final opinion. The supervisor's decision may be final. The canonical event datastore210may contain both human marked and completely automated markings. The system may use both types of marking to further train the pattern recognition algorithms. Event labeling is similar to the canonical event datastore210, except that sometimes one may either 1) develop the initial gold standard set entirely by hand, potentially with outside experts, or 2) limit the gold standard to events in the canonical event datastore210that were labeled by hand, since biases may exist in the machine labeled data. FIG.18shows test video input for use in the methods and systems disclosed herein, including views of a basketball court from simulated cameras, both simulated broadcast camera views1802, as well as purpose-mounted camera views1804. FIG.19shows additional test video input for use in the methods and systems disclosed herein, including input from broadcast video1902and from purpose-mounted cameras1904in a venue. Referring toFIG.20, probability maps2004may be computed based on likelihood there is a person standing at each x, y location. FIG.21shows a process flow of an embodiment of the methods and systems described herein. Initially, in an OCR process2118, machine vision techniques are used to automatically locate the “score bug” and determine the location of the game clock, score, and quarter information. This information is read and recognized by OCR algorithms. Post-processing algorithms using various filtering techniques are used to resolve issues in the OCR. Kalman filtering/HMMs used to detect errors and correct them. Probabilistic outputs (which measure degree of confidence) assist in this error detection/correction. Next, in a refinement process2120, sometimes, a score bug is nonexistent or cannot be detected automatically (e.g., sometimes during PIP or split screens). In these cases, remaining inconsistencies or missing data is resolved with the assistance of human input. Human input is designed to be sparse so that labelers do not have to provide input at every frame. Interpolation and other heuristics are used to fill in the gaps. Consistency checking is done to verify game clock. Next, in an alignment process,2112the Canonical Datastore2110(referred to elsewhere in this disclosure alternatively as the event datastore) contains a definitive list of events that the system knows occurred during a game. This includes events extracted from the XYZ data2102, such as after cleansing2104and spatiotemporal pattern recognition2108, as well as those specified by third-party sources such as player-by-player data sets2106, such as available from various vendors. Differences among the data sources can be resolved, such as by a resolver process. The events in the canonical datastore2110may have game clock times specified for each event. Depending on the type of event, the system knows that the user will be most likely to be interested in a certain interval of game play tape before and after that game clock. The system can thus retrieve the appropriate interval of video for the user to watch. 
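The alignment step described above can be thought of as an index from (period, game clock) to video frame; once an event's game clock is known, the system can return a clip spanning a configurable window before and after that moment. The sketch below shows that lookup under simplifying assumptions: a single frame rate, a hypothetical index format and helper names, and no handling of dead-ball ambiguity, which the next paragraph discusses.

```python
# Sketch of mapping an event's (period, game clock) to a video frame window.
# The alignment index format, frame rate, and window defaults are assumptions;
# dead-ball disambiguation (same game clock before/after a stoppage) is ignored here.
from bisect import bisect_left

VIDEO_FPS = 30.0

# Hypothetical alignment index: per period, (game_clock_seconds, video_frame) samples,
# listed in the order the clock counts down during play.
ALIGNMENT_INDEX = {
    1: [(720.0, 0), (719.0, 30), (718.0, 60), (717.0, 90), (716.0, 120)],
}

def frame_for_clock(period: int, game_clock: float) -> int:
    """Return the frame of the first alignment sample at or below the requested game clock."""
    samples = ALIGNMENT_INDEX[period]
    clocks = [-c for c, _ in samples]            # negate so the list is ascending for bisect
    i = min(bisect_left(clocks, -game_clock), len(samples) - 1)
    return samples[i][1]

def clip_window(period: int, game_clock: float, before_s: float = 5.0, after_s: float = 3.0):
    """Frame range covering before_s seconds before and after_s seconds after the event."""
    center = frame_for_clock(period, game_clock)
    return max(0, int(center - before_s * VIDEO_FPS)), int(center + after_s * VIDEO_FPS)

if __name__ == "__main__":
    print(clip_window(period=1, game_clock=717.5))   # e.g. (0, 180)
```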
One challenge pertains to the handling of dead ball situations and other game clock stoppages. The methods and systems disclosed herein include numerous novel heuristics to enable computation of the correct video frame that shows the desired event, which has a specified game clock, and which could be before or after the dead ball since those frames have the same game clock. The game clock is typically specified only at the one-second level of granularity, except in the final minute of each quarter. Another advance is to use machine vision techniques to verify some of the events. For example, video of a made shot will typically show the score being increased, or will show a ball going through a hoop. Either kind of automatic observation serves to help the alignment process result in the correct video frames being shown to the end user. Next, in a query UI component2130, the UI enables a user to quickly and intuitively request all video clips associated with a set of characteristics: player, team, play type, ballhandler, ballhandler velocity, time remaining, quarter, defender, etc. In addition, when a user is watching a video clip, the user can request all events that are similar to whatever just occurred in the video. The system uses a series of cartoon-like illustration to depict possible patterns that represent “all events that are similar.” This enables the user to choose the intended pattern, and quickly search for other results that match that pattern. Next, the methods and systems may enable delivery of enhanced video, or video snips2124, which may include rapid transmission of clips from stored data in the cloud. The system may store video as chunks (e.g., one-minute chunks), such as in AWS S3, with each subsequent file overlapping with a previous file, such as by 30 seconds. Thus, each video frame may be stored twice. Other instantiations of the system may store the video as different sized segments, with different amounts of overlap, depending on the domain of use. In embodiments, each video file is thus kept at a small size. The 30-second duration of overlap may be important because most basketball possessions (or chances in our terminology) do not last more than 24 seconds. Thus, each chance can be found fully contained in one video file, and in order to deliver that chance, the system does not need to merge content from multiple video files. Rather, the system simply finds the appropriate file that contains the entire chance (which in turn contains the event that is in the query result), and returns that entire file, which is small. With the previously computed alignment index, the system is also able to inform the UI to skip ahead to the appropriate frame of the video file in order to show the user the query result as it occurs in that video file. This delivery may occur using AWS S3 as the file system, the Internet as transport, and a browser-based interface as the UI. It may find other instantiations with other storage, transport, and UI components. FIG.22shows certain metrics that can be extracted using the methods and systems described herein, relating to rebounding in basketball. These metrics include positioning metrics, attack metrics, and conversion metrics. For positioning, the methods and systems described herein first address how to value the initial position of the players when the shot is taken. This is a difficult metric to establish. The methods and systems disclosed herein may give a value to the real estate that each player owns at the time of the shot. 
This breaks down into two questions: (1) what is the real estate for each player? (2) what is it worth? To address the first question, one may apply the technique of using Voronoi (or Dirichlet) tessellations. Voronoi tessellations are often applied to problems involving spatial allocation. These tessellations partition a space into Voronoi cells given a number of points in that space. For any point, its cell is the intersection of the half-spaces containing that point that are defined by the hyper-planes equidistant between that point and each of the other points. That is, a player's cell is all the points on the court that are closer to that player than to any other player. If all players were equally capable, they should be able to control any rebound that occurred in this cell. One understands that players are not equally capable; however, this establishment of real estate sets a baseline for performance. Overperformance or underperformance of this baseline will be indicative of their ability. To address the second question, one may condition based on where the shot was taken and calculate a spatial probability distribution of where all rebounds for similar shots were obtained. For each shot attempt, one may choose a collection of shots closest to the shot location that provides enough samples to construct a distribution. This distribution captures the value of the real estate across the court for a given shot. To assign each player a value for initial positioning, i.e., the value of the real estate at the time of the shot, one may integrate the spatial distribution over the Voronoi cell for that player. This yields the likelihood of that player getting the rebound if no one moved when the shot was taken and they controlled their cell. We note that because we use the distribution of locations of the rebound conditioned on the shot, it is not a matter of controlling more area or even necessarily area close to the basket, but the most valuable area for that shot. While the most valuable areas are almost always close to the basket, there are some directional effects. For an attack or hustle metric, one may look at phases following a shot, such as an initial crash phase. To analyze this, one may look at the trajectory of the ball and calculate the time that it gets closest to the center of the rim. At this point, one may reapply the Voronoi-based analysis and calculate the rebound percentages of each player, i.e., the value of the real estate that each player has at the time the ball hits the rim. The change in this percentage from the time the shot is taken to the time it hits the rim is the value or likelihood the player has added during the phase. Players can add value by crashing the boards, i.e., moving closer to the basket towards places where the rebound is likely to go, or by blocking out, i.e., preventing other players from taking valuable real estate that is already established. A useful, novel metric for the crash phase is generated by subtracting the rebound probability at the shot from the rebound probability at the rim. The issue is that the ability to add probability is not independent of the probability at the shot. Consider a case of a defensive player who plays close to the basket. The player is occupying high-value real estate, and once the shot is taken, other players are going to start coming into this real estate. It is difficult for players with high initial positioning value to have positive crash deltas. Now consider a player out by the three-point line. 
Their initial value is very low and moving any significant distance toward the rim will give them a positive crash delta. Thus, it is not fair to compare these players on the same scale. To address this, one may look at the relationship of the raw crash deltas (the difference between the probability at the rim and the probability at the shot) to the probability at the shot, for example by fitting a regression. In order to normalize for this effect, one may subtract the value of the regression at the player's initial positioning value from the raw crash delta to form the player's Crash value. Intuitively, the value indicates how much more probability is added by this player beyond what a player with similar initial positioning would add. One may apply this normalization methodology to all the metrics, since initial positioning affects the other dimensions and it can be beneficial to control for it. A player has an opportunity to rebound the ball if they are the closest player to the ball once the ball gets below ten feet (or if they possess the ball while it is above ten feet). The player with the first opportunity may not get the rebound, so multiple opportunities could be created after a single field goal miss. One may tally the number of field goal misses for which a player generated an opportunity for themselves and divide by the number of field goal misses to create an opportunity percentage metric. This indicates the percentage of field goal misses for which that player ended up being closest to the ball at some point. The ability for a player to generate opportunities beyond his initial position is the second dimension of rebounding: Hustle. Again, one may then apply the same normalization process as described earlier for Crash. The reason that there are often multiple opportunities for rebounds for every missed shot is that being closest to the ball does not mean that a player will convert it into a rebound. Thus the third dimension of rebounding: conversion. The raw conversion metric for players is calculated simply by dividing the number of rebounds obtained by the number of opportunities generated. Formally, given a shot described by its 2D coordinates on the court, s_x and s_y, followed by a rebound r, also described by its coordinates on the court, r_x and r_y, one may estimate P(r_y, r_x|s_x, s_y), the probability density of the rebound occurring at each position on the court given the shot location. This may be accomplished by first discretizing the court into, for example, 156 bins, created by separating the court into 13 equally spaced columns and 12 equally spaced rows. Then, given some set S of shots from a particular bin, the rebounds from S will be distributed in the bins of the court according to a multinomial distribution. One may then apply maximum likelihood estimation to determine the probability of a rebound in each of the bins of the court, given the training set S. This process may be performed for every bin that shots may fall in, giving 156 distributions for the court. Using these distributions, one may determine P(r_y, r_x|s_x, s_y). First, the shot is mapped to an appropriate bin. The probability distribution determined in the previous step is then utilized to determine the probability of the shot being rebounded in every bin of the court. One assumes that within a particular bin, the rebound is uniformly likely to occur at any coordinate. Thus, a uniform probability density, derived from the probability of the rebound falling in that bin, is assigned to all points in the bin.
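By way of a non-limiting illustration, the following sketch shows one way the binned rebound distributions described above might be estimated. Because the maximum likelihood estimate of a multinomial distribution is simply the normalized bin counts, the computation reduces to counting rebounds per shot bin. The court dimensions, the bin layout (columns across the width, rows along the length), and the function names are illustrative assumptions rather than a definitive implementation.

```python
import numpy as np

COLS, ROWS = 13, 12            # 156 bins: 13 equally spaced columns and 12 equally spaced rows
COURT_W, COURT_L = 50.0, 94.0  # assumed court dimensions in feet

def bin_index(x, y):
    """Map a court coordinate (x across the width, y along the length) to a flat bin index."""
    c = min(int(x / (COURT_W / COLS)), COLS - 1)
    r = min(int(y / (COURT_L / ROWS)), ROWS - 1)
    return r * COLS + c

def rebound_distributions(shots, rebounds):
    """
    shots:    list of (s_x, s_y) shot locations
    rebounds: list of (r_x, r_y) rebound locations, aligned with shots
    Returns a (156, 156) array dist[shot_bin][rebound_bin] of maximum likelihood
    multinomial probabilities, i.e., normalized counts per shot bin.
    """
    n_bins = COLS * ROWS
    counts = np.zeros((n_bins, n_bins))
    for (sx, sy), (rx, ry) in zip(shots, rebounds):
        counts[bin_index(sx, sy), bin_index(rx, ry)] += 1.0
    totals = counts.sum(axis=1, keepdims=True)
    totals[totals == 0] = 1.0  # leave shot bins with no training data as all-zero rows
    return counts / totals
```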
Using the probability density P(r_y, r_x|s_x, s_y), one may determine the probability that each particular player grabs the rebound given their location and the positions of the other players on the court. To accomplish this, one may first create a Voronoi diagram of the court, where the set of points is the location (p_x, p_y) for each player on the court. In such a diagram, each player is given a set of points that they control. Formally, one may characterize the set of points R_k that player P_k controls in the following manner, where X is all points on the court, and d denotes the Cartesian distance between two points: R_k = {x ∈ X | d(x, P_k) ≤ d(x, P_j) for all j ≠ k}. Now there exist the two components needed for determining the probability that each player gets the rebound given their location: specifically, the shot's location and the locations of all the other players on the court. One may determine this value by assuming that if a ball is rebounded, it will always be rebounded by the closest available player. Therefore, by integrating the probability of a rebound over each location in the player's Voronoi cell, we determine their rebound probability: ∫_{R_k} P(r_y, r_x|s_x, s_y) dx dy. The preceding section describes a method for determining the player's rebounding probability, assuming that the players are stationary. However, players often move in order to get into better positions for the rebound, especially when they begin in poor positions. One may account for these phenomena. Let the player's raw rebound probability be denoted rp and let d be an indicator variable denoting whether the player is on defense. One may then attempt to estimate the player's probability of getting a rebound, which we express in the following manner: P(r|rp, d). One does this by performing two linear regressions, one for the offensive side of the ball and one for the defensive. One may attempt to estimate P(r|rp, d) in the following manner: P(r|rp, d=0) = Ao*rp + Bo and P(r|rp, d=1) = Ad*rp + Bd. This results in four quantities to estimate. One may do this by performing an ordinary least squares regression for offensive and defensive players over all rebounds in the test set. One may use 1 as a target variable when the player rebounds the ball, and 0 when he does not. This regression is performed for offense to determine Ao and Bo and for defense to determine Ad and Bd. One can then use the values to determine the final probability of each player getting the rebound given the shot's location and the other players on the court. Novel shooting metrics can also be created using this system. One is able to determine the probability of a shot being made given various features of the shot s, denoted as F. Formally, each shot can be characterized by a feature vector of the following form: [dist(hoop, shooter), dist(shooter, defender0), |angle(hoop, shooter, defender0)|, |angle(shooter, hoop, hoopother)|, I(shot=catchAndShoot), dist(shooter, defender1)]. Here, the hoop represents the basket the shooter is shooting at, defender0 refers to the closest defender to the shooter, defender1 refers to the second closest defender, and hoopother refers to the hoop on the other end of the court. The angle function refers to the angle between three points, with the middle point serving as the vertex. I(shot=catchAndShoot) is an indicator variable, set to 1 if the shooter took no dribbles in the individual possession before shooting the shot, otherwise set to 0. Given these features, one seeks to estimate P(s=make).
To do this, one may first split the shots into two categories, one for where dist(hoop, shooter) is less than 10 feet, and the other for the remaining shots. Within each category one may find coefficients β0, β1, . . . , β5 for the following equation: 1/(1+e^(−t)), where t = F0*β0 + F1*β1 + . . . + F5*β5. Here, F0 through F5 denote the feature values for the particular shot. One may find the coefficient values β0, β1, . . . , β5 using logistic regression on the training set of shots S. The target for the regression is 0 when the shot is missed and 1 when the shot is made. By performing two regressions, one is able to find appropriate values for the coefficients, for both shots within 10 feet, and longer shots outside 10 feet. As depicted inFIG.23, three or four dimensions can be dynamically displayed on a 2-D graph scatter rank view2302, including the x-axis, the y-axis, the size of the icon, and changes over time. Each dimension may be selected by the user to represent a variable of the user's choice. Also, on mouse-over, related icons may highlight, e.g., mousing over one player may highlight all players on the same team. As depicted inFIG.40, reports2402can be customized by the user so that a team can create a report that is specifically tailored to that team's process and workflow. Another feature is that the report may visually display not only the advantages and disadvantages for each category shown, but also the size of that advantage or disadvantage, along with the value and rank of each side being compared. This visual language enables a user to quickly scan the report and understand the most important points. Referring toFIG.25, an embodiment of a quality assurance UI2502is provided. The QA UI2502presents the human operator with both an animated 2D overhead view2510of the play, as well as a video clip2508of the play. A key feature is that only the few seconds relevant to that play are shown to the operator, instead of an entire possession, which might be over 20 seconds long, or even worse, requiring the human operator to fast forward in the game tape to find the event herself. Keyboard shortcuts are used for all operations, to maximize efficiency. Referring toFIG.26, the operator's task is simplified to its core, so that we lighten the cognitive load as much as possible: if the operator is verifying a category of plays X, the operator has to simply choose, in an interface element2604of the embodiment of the QA UI2602, whether the play shown in the view2608is valid (Yes or No), or (Maybe). She can also deem the play to be a (Duplicate), a (Compound) play, meaning it is just one type-X action in a consecutive sequence of type-X actions, or choose to (Flag) the play for supervisor review for any reason. Features of the UI2602include the ability to fast forward, rewind, submit, and the like, as reflected in the menu element2612. A table2610can allow a user to indicate the validity of plays occurring at designated times. FIG.27shows a method of camera pose detection, also known as “court solving.”FIG.27also shows the result of automatic detection of the “paint,” and use of the boundary lines to solve for the camera pose. The court lines and hoop location, given the solved camera pose, are then shown projected back onto the original image2702. This projection is from the first iteration of the solving process, and one can see that the projected court and the actual court do not yet align perfectly.
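By way of a non-limiting illustration, the following sketch shows the kind of projection step described above, in which 3D court landmarks are projected back onto the original image for a candidate camera pose, together with a reprojection error of the sort a pose solver (such as the randomized hill climber discussed below) might seek to minimize. A simple pinhole model via OpenCV is assumed, and the function and variable names are illustrative rather than part of the actual solver.

```python
import numpy as np
import cv2

def project_court_points(court_points_3d, camera_matrix, rvec, tvec):
    """Project 3D court landmarks (e.g., paint corners, hoop center) into the image
    for a candidate camera pose, so they can be drawn over the original frame."""
    projected, _ = cv2.projectPoints(
        np.asarray(court_points_3d, dtype=np.float64),
        rvec, tvec, camera_matrix, np.zeros(5))  # zero lens distortion assumed for the sketch
    return projected.reshape(-1, 2)

def reprojection_error(court_points_3d, detected_points_2d, camera_matrix, rvec, tvec):
    """Mean pixel distance between projected landmarks and their detected image locations;
    this is the kind of error a pose solver would drive toward zero."""
    proj = project_court_points(court_points_3d, camera_matrix, rvec, tvec)
    diffs = proj - np.asarray(detected_points_2d, dtype=np.float64)
    return float(np.mean(np.linalg.norm(diffs, axis=1)))
```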
One may use machine vision techniques to find the hoop and to find the court lines (e.g., paint boundaries), then use found lines to solve for the camera pose. Multiple techniques may be used to determine court lines, including detecting the paint area. Paint area detection can be done automatically. One method involves automatically removing the non-paint area of the court by automatically executing a series of “flood fill” type actions across the image, selecting for court-colored pixels. This leaves the paint area in the image, and it is then straightforward to find the lines/points. One may also detect all lines on the court that are visible, e.g., background or 3-point arc. In either case, intersections provide points for camera solving. A human interface2702may be provided for providing points or lines to assist algorithms, to fine-tune the automatic solver. Once all inputs are provided, the camera pose solver is essentially a randomized hill climber that uses the mathematical models as a guide (since it may be under-constrained). It may use multiple random initializations. It may advance a solution if it is one of the best in that round. When an iteration is done, it may repeat until the error is small.FIG.46shows the result of automatic detection of the “paint,” and use of the boundary lines to solve for the camera pose. The court lines and hoop location, given the solved camera pose, are then shown projected back onto the original image. This projection is from the first iteration of the solving process, and one can see that the projected court and the actual court do not yet align perfectly. FIG.28relates to camera pose detection. The second step2802shown in the Figure shows how the human can use this GUI to manually refine camera solutions that remain slightly off. FIG.29relates to auto-rotoscoping. Rotoscoping2902is required in order to paint graphics around players without overlapping the players' bodies. Rotoscoping is partially automated by selecting out the parts of the image with similar color as the court. Masses of color left in the image can be detected to be human silhouettes. The patch of color can be “vectorized” by finding a small number of vectors that surround the patch, but without capturing too many pixels that might not represent a player's body. FIGS.30A,30B, and30Crelate to scripted storytelling with an asset library3002. To produce the graphics-augmented clips, a company may either lean heavily on a team of artists, or a company may determine how best to handle scripting based on a library of assets. For example, instead of manually tracing a player's trajectory and increasing the shot probability in each frame as the player gets closer to the ball, a scripting language allows the methods and systems described herein to specify this augmentation in a few lines of code. In another example, for rebound clips, the Voronoi partition and the associated rebound positioning percentages can be difficult to compute for every frame. A library of story element effects may list each of these current and future effects. Certain combinations of scripted story element effects may be best suited for certain types of clips. For example, a rebound and put-back will likely make use of the original shot probability, the rebound probabilities including Voronoi partitioning, and then go back to the shot probability of the player going for the rebound. This entire script can be learned as being well-associated with the event type in the video. 
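By way of a non-limiting illustration, the following sketch shows one way a library of story element effects and the scripts that combine them might be represented, so that an event type such as a rebound and put-back maps to an ordered sequence of effects. The effect names, script contents, and data structures are illustrative assumptions, not the specific scripting language referenced above.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class StoryElement:
    """One reusable effect from the asset library, applied to a per-frame context."""
    name: str
    render: Callable[[dict], None]  # receives per-frame data (players, ball, probabilities)

# Hypothetical asset library of current and future story element effects.
ASSET_LIBRARY: Dict[str, StoryElement] = {
    "shot_probability": StoryElement("shot_probability", lambda ctx: None),
    "voronoi_rebound": StoryElement("voronoi_rebound", lambda ctx: None),
    "player_highlight": StoryElement("player_highlight", lambda ctx: None),
}

# A script is an ordered list of effect names keyed by event type, so a rebound and
# put-back can reuse the shot probability, then the rebound probabilities, then the
# shot probability of the player going for the rebound.
SCRIPTS: Dict[str, List[str]] = {
    "rebound_put_back": ["shot_probability", "voronoi_rebound", "shot_probability"],
    "catch_and_shoot": ["player_highlight", "shot_probability"],
}

def render_clip(event_type: str, frames: List[dict]) -> None:
    """Apply the scripted sequence of effects to each frame context of a clip."""
    script = SCRIPTS.get(event_type, [])
    for frame_ctx in frames:
        for effect_name in script:
            ASSET_LIBRARY[effect_name].render(frame_ctx)
```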
Over time, the system can automatically infer the best, or at least retrieve an appropriate, story line to match up with a selected video clip containing certain events. This enables augmented video clips, referred to herein as DataFX clips, to be auto-generated and delivered throughout a game. FIGS.31-38show examples of DataFX visualizations. The visualization ofFIG.31requires the court position to be solved in order to lay down the grid and player “puddles.” The shot arc also requires the backboard/hoop solution. InFIG.32, the Voronoi tessellation, heat map, and shot and rebound arcs all require the camera pose solution. The highlight of the player uses rotoscoping. InFIG.33, in addition to the above, players are rotoscoped for highlighting.FIGS.34-38show additional visualizations that are based on the use of the methods and systems disclosed herein. In embodiments, DataFX (video augmented with data-driven special effects) may be provided for pre-, during, or post-game viewing, for analytic and entertainment purposes. DataFX may combine advanced data with Hollywood-style special effects. Pure numbers can be boring, and pure special effects can be silly, but the combination of the two can be very powerful. Example features used alone or in combination in DataFX can include a Voronoi overlay on the court, a grid overlay on the court, a heat map overlay on the court, a waterfall effect showing likely trajectories of the ball after a missed field goal attempt, a spray effect on a shot showing likely trajectories of the shot to the hoop, circles and glows around highlighted players, statistics and visual cues over or around players, arrows and other markings denoting play actions, calculation overlays on the court, and effects showing each variable taken into account. FIGS.39A through41Bshow a product referred to as “Clippertron.” Provided is a method and system whereby fans can use their distributed mobile devices to control individually and/or collectively what is shown on the Jumbotron or video board(s). An embodiment enables the fan to go through mobile application dialogs in order to choose the player, shot type, and shot location to be shown on the video board. The fan can also enter his or her own name so that it is displayed alongside the highlight clip. Clips are shown on the video board in real time or queued up for display. Variations include getting information about the fan's seat number. This could be used to show a live video feed of the fan while their selected highlight is being shown on the video board. A web-based mobile application referred to as “FanMix” enables in-stadium fans to control the Jumbotron and choose highlight clips to push to it. An embodiment of FanMix enables fans to choose their favorite player, shot type, and shot location using a mobile device web interface. Upon pressing the submit button, a highlight showing this particular shot is sent to the Jumbotron and displayed according to placement order in a queue. This capability is enabled by the video being aligned to each shot within a fraction of a second. This allows many clips to be shown in quick succession, each showing video from the moment of release to the ball going through the hoop. In some cases, the video may start from the beginning of a play, instead of from the moment of release.
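By way of a non-limiting illustration, the following sketch shows one way fan requests might be matched against an index of aligned shot clips and queued for the video board in placement order; the field names, matching rule, and class structure are illustrative assumptions rather than the actual FanMix implementation.

```python
from collections import deque

class FanMixQueue:
    """A minimal queue of fan-selected highlights for the video board. Each request is
    matched against an index of shot events that already carries the aligned release
    and make frames for its video file."""

    def __init__(self, shot_index):
        # shot_index: list of dicts, e.g.,
        # {"player": ..., "shot_type": ..., "location": ...,
        #  "file": ..., "start_frame": ..., "end_frame": ...}
        self.shot_index = shot_index
        self.queue = deque()

    def submit(self, fan_name, player, shot_type, location):
        """Find a matching highlight and queue it with the fan's name for display."""
        for shot in self.shot_index:
            if (shot["player"], shot["shot_type"], shot["location"]) == (player, shot_type, location):
                self.queue.append({"fan": fan_name, "clip": shot})
                return True
        return False

    def next_clip(self):
        """Pop the next clip to send to the video board, in placement order."""
        return self.queue.popleft() if self.queue else None
```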
The methods and systems disclosed herein may include methods and systems for allowing a user or group of users to control presentation of a large scale display in an event venue, where the options for control are based on a context of the content as determined by machine extraction of semantically relevant events from the content. The methods and systems disclosed herein may include methods and systems for enabling interaction with a large scale display system and may include taking a video feed; using machine learning to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; producing a video content data structure that includes the associated context information; and providing an application by which at least one user can interact with the video content data structure, wherein the options for user interaction are based on the context information, wherein the interaction with the video content data structure controls the presentation of the content on a large scale display. In embodiments, one or more users may interact with menus on an application, such as a smart phone application, in an arena or other location that has a large-scale display. The users may express preferences, such as by voting, for what content should be displayed, including selecting preferred types of events and/or contexts (which may be organized as noted above based on semantically relevant filters), selecting what metrics should be displayed (options for which may be offered based on context information for particular extracted video events), and the like. In embodiments, a large scale display in a venue where a live event is taking place may offer games, quizzes, or the like, where users may respond by text, SMS, or the like. The content of such games or quizzes may be constructed at least in part based on a machine semantic understanding of the live event, such as asking users which player has the most rebounds in the first quarter, or the like. The methods and systems disclosed herein may include methods and systems for a user to control Jumbotron clips based on contextualized content filters. The methods and systems disclosed herein may include methods and systems for a Jumbotron fan quiz based on machine semantic understanding of a live game. The methods and systems disclosed herein may include methods and systems wherein the application comprises a quiz for a user, wherein the quiz is constructed based at least in part on a machine semantic understanding of a live game that is taking place in a venue where the large-scale display is located. In embodiments, a fan quiz may ask questions based on proprietary machine learned metrics such as “which player took the hardest shots in this quarter.” The methods and systems disclosed herein may include methods and systems for embedding a machine extracted video cut in an application, where the selection of the embedded cut for the application is based on the context of the video cut. First Person Point of View (POV) In embodiments, interactive visualization218, as illustrated inFIG.2, may include producing a reconstruction of an event, such as a game, such as a 3D reconstruction or rendering.
In embodiments, a 3D reconstruction or rendering of an event may be produced using a process that presents the event from a defined point of view, such as the first person point of view of a participant in the event, such as a player.FIG.39Fillustrates an embodiment of such a process, referred to herein in some cases as a first person POV process, or simply a first person process. A first person process may allow the user to select a player's view to follow. A first person process may automatically pin a user's view to the head of the selected player. The end result of a first person process may be dynamically rendered from the view of the selected player as a play occurs. A first person process may be an automated first person process. An automated first person process may produce a 3D reconstruction or rendering of a game and render each frame from the view of a player selected by a user. A first person process may be a virtual reality-based first person process. A virtual reality-based first person process may produce a 3D reconstruction or rendering of a game that allows a user to control the orientation of a view from the head movements of a user. In embodiments, the point of view may be controlled by, for example, player head tracking. In embodiments, users may choose a player whose point of view will be presented. Location of a view may be controlled automatically via head tracking data. View orientation may be controlled by the head movements of a user. In embodiments, the head movements of a user may be recorded by virtual reality (VR) technology. VR technology may be Oculus Rift™ technology and the like. Point Cloud Construction As illustrated inFIG.39F, a first person process may include constructing a point cloud that provides a 3D model of a real world scene. Point cloud construction may begin by producing binary, background-subtracted images for each time-synchronized frame on each camera. Using these binary images and the calibrations of each camera, a 3D convex hull may be produced by discretizing the scene into voxels and filling each voxel if the voxel is contained within the rays projected from the cameras through the image visual hull. The image visual hull may be the silhouette of the scene, for example. The silhouette of the scene may be used in a shape-from-silhouette reconstruction. The resulting convex hull may contain voxels that may not actually be present in the world, due to reconstructing only the visual hull. In order to achieve a more precise point cloud, the 3D convex hull may be carved using photo consistency methods. Photo consistency methods may back-project the surface of a 3D reconstructed visual hull onto each visible camera. Photo consistency methods may also check to ensure the color of the pixels is consistent with the same pixel from another camera, or with nearby pixels, such as to avoid unrealistic discontinuities. If the colors from each visible camera do not agree, the voxel may be carved. This process may be repeated for the entire convex hull, producing the final carved point cloud. Point cloud construction may estimate the skeletal pose of all participants in a real world scene. Point cloud construction may fit a hand-made participant model to the estimated pose of each participant in a real world scene. In an example, the real world scene could be a sports court and the participants could be all the players on the sports court.
In this example, point cloud construction could fit a hand-made player model to the estimated pose of each player on the sports court. Point cloud construction may include meshing techniques, which may be used to improve the quality of a final visualization for a user. Meshing techniques may be used to mesh multiple point clouds. Meshing techniques may be used to provide a view that may be very close to a point cloud, for example. Player Identification A first person process may use player identification to enable the user to select from which player's view to render the 3D reconstruction. Player identification may involve multiple steps in order to produce reliable results. Player identification may start by performing jersey number detection, as illustrated inFIG.39F. Jersey numbers may be mapped to player names. Jersey numbers may then be mapped to player names using official rosters and the like. Jersey number detection may be performed frame-by-frame. Frame-by-frame jersey number detection may be performed by scanning and classifying each window as a number or as nothing, such as using a support vector machine (SVM), a supervised machine learning model used for classification. The SVM may be trained, such as using training sets of manually marked jersey numbers from the game video, for example. Results from individual frame-by-frame detection may be stitched together to form temporal tracks. Individual frame-by-frame detections may be stitched together to form temporal tracks using a k-shortest paths algorithm. Jersey number tracks may be associated with existing, more continuous player tracking data. Associating jersey number tracks with existing, more continuous player tracking data may produce robust tracks of identifiable players. Head Tracking A first person process may use head tracking in order to control the location of the view within a 3D reconstruction, as illustrated inFIG.39F. Head tracking may involve multiple steps in order to produce reliable results. The first step of head tracking may be the same as for player identification. The first step of head tracking may include head detection. Head detection may create a model on heads instead of on jersey numbers. Head detection may be performed frame by frame. Frame-by-frame head detection may be performed by scanning each image. Frame-by-frame head detection may be performed by scanning each image and classifying each window as a head or not. Classifying each window as a head or not may be performed using an SVM. An SVM may be trained. An SVM may be trained using manually marked head samples from previously recorded games. An SVM may be a team-dk-SVM. The results of the detection may then be used in 2D tracking to produce temporal 2D tracklets of each head within a camera's frame. 2D tracklets may then be triangulated using the results of all cameras to produce a 3D estimation of the location of all heads on the court. A 3D estimation of the location of all heads on the court may be 3D tracklets. 3D tracklets may then be stitched together. 3D tracklets may then be stitched together using an algorithm. An algorithm may be a k-shortest paths (KSP) algorithm. 3D tracklets may be stitched together to produce potential final head tracking results. Linear programming may be used to choose optimal head paths. Gaze Estimation As illustrated inFIG.39F, a first person process may use gaze estimation.
Gaze estimation may be used to control the orientation of a view mounted on the player's head within the 3D reconstruction. Gaze estimation may be computed by assuming a player is always looking in the direction opposite the numbers on the back of the player. Jersey number detection may be performed frame by frame. Frame by frame jersey number detection may be performed by scanning and classifying each window as a number or nothing using an SVM. The SVM may be trained using manually marked jersey numbers from an existing game video. An assumption may be made to determine the angle of a jersey number located on the back or front of a player's jersey. An assumption may be that a jersey number is only visible when the jersey number is perfectly aligned with a camera that made the detection. Cameras may have a known location in space. Because the cameras have a known location in space, the vector between the jersey and the camera may be computed using the known location of the camera in space. Frame-by-frame estimation may be performed after a vector is calculated. The results of the frame-by-frame estimation may be filtered to provide a smoothed experience for a first person process. FIG.41relates to an offering referred to as “inSight.” This offering allows pushing of relevant stats to fans' mobile devices4104. For example, if player X just made a three-point shot from the wing, this would show statistics about how often he made those types of shots4108, versus other types of shots, and what types of play actions he typically made these shots off of. inSight does for hardcore fans what Eagle (the system described above) does for team analysts and coaches. Information, insights, and intelligence may be delivered to fans' mobile devices while they are seated in the arena. This data is not only beautiful and entertaining, but is also tuned into the action on the court. For example, after a seemingly improbable corner three by a power forward, the fan is immediately pushed information that shows the shot's frequency, difficulty, and likelihood of being made. In embodiments, the platform features described above as “Eagle,” or a subset thereof may be provided, such as in a mobile phone form factor for the fan. An embodiment may include a storyboard stripped down, such as from a format for an 82″ touch screen to a small 4″ screen. Content may be pushed to a device that corresponds to the real time events happening in the game. Fans may be provided access to various effects (e.g., DataFX features described herein) and to the other features of the methods and systems disclosed herein. FIGS.42and43show touchscreen product interface elements4202,4204,4208,4302and4304. These are essentially many different skins and designs on the same basic functionality described throughout this disclosure. Advanced stats are shown in an intuitive large-format touch screen interface. A touchscreen may act as a storyboard for showing various visualizations, metric and effects that conform to an understanding of a game or element thereof. Embodiments include a large format touch screen for commentators to use during a broadcast. While InSight serves up content to a fan, the Storyboard enables commentators on TV to access content in a way that helps them tell the most compelling story to audiences. 
Features include providing a court view, a hexagonal Frequency+Efficiency View, a “City/Matrix” View with grids of events, a Face/Histogram View, animated intro sequences that communicate to a viewer that each head's position indicates that player's relative ranking, an animated face shuttle that shows re-ranking when the metric is switched, a ScatterRank View, a ranking using two variables (one on each axis), a Trends View, integration of metrics with on-demand video, and the ability to re-skin or simplify for varying levels of commentator ability. In embodiments, new metrics can be used for other activities, such as driving new types of fantasy games, e.g., point scoring in fantasy leagues could be based on new metrics. In embodiments, DataFX can show the player how his points were scored, e.g., an overlay that runs a counter over an RB's head showing yards rushed while the video shows the RB going down the field. In embodiments, one can deliver, for example, video clips (possibly enhanced by DataFX effects) corresponding to plays that scored points for a fantasy user's team for that night or week. Using an inSight-like mobile interface, a social game can be made so that much of the game play occurs in real time while the fan is watching the game. Using inSight-like mobile device features, a social game can be managed so that game play occurs in real time while a fan is watching the game, experiencing various DataFX effects and seeing fantasy scoring-relevant metrics on screen during the game. In embodiments, the methods and systems may include a fantasy advice or drafting tool for fans, presenting rankings and other metrics that aid in player selection. Just as Eagle enables teams to get more wins by devising better tactics and strategy, we could provide an Eagle-like service for fantasy players that gives the players a winning edge. The service/tool would enable fans to research all the possible players, and help them execute a better draft or select a better lineup for an upcoming week/game. DataFX can also be optimized so that it can produce “instant replays” with DataFX overlays. This relies on a completely automated solution for court detection, camera pose solving, player tracking, and player roto-scoping. Interactive DataFX may also be adapted for display on a second screen, such as a tablet, while a user watches a main screen. Real time or instant replay viewing and interaction may be used to enable such effects. On a second screen-type viewing experience, the fan could interactively toggle on and off various elements of DataFX. This enables the fan to customize the experience and to explore many different metrics. Rather than only DataFX-enabled replays, the system could be further optimized so that DataFX is overlaid in true real time, enabling the user to toggle between a live video feed and a live video feed that is overlaid with DataFX. The user would then also be able to choose the type of DataFX to overlay, or which player(s) to overlay it on. A touch screen UI may be established for interaction with DataFX. Many of the above embodiments may be used for basketball, as well as for other sports and for other items that are captured in video, such as TV shows, movies, or live video (e.g., news feeds). For sports, a player tracking data layer may be employed to enable the computer to “understand” every second of every game.
This enables the computer to deliver content that is extracted from portions of the game and to augment that content with relevant story-telling elements. The computer thus delivers personalized interactive augmented experiences to the end user. For non-sports domains, such as TV shows or movies, there is no player tracking data layer that assists the computer in understanding the event. Rather, in this case, the computer must derive, in some other way, an understanding of each scene in a TV show or movie. For example, the computer might use speech recognition to extract the dialogue throughout a show. Or it could use computer vision to recognize objects in each scene, such as robots in the Transformer movie. Or it could use a combination of these inputs and others to recognize things like explosions. The sound track could also provide clues. The resulting system would use this understanding to deliver the same kind of personalized interactive augmented experience as we have described for the sports domain. For example, a user could request to see the Transformer movie series, but only a compilation of the scenes where there are robots fighting and no human dialogue. This enables “short form binge watching,” where users can watch content created by chopping up and recombining bits of content from original video. The original video could be sporting events, other events, TV shows, movies, and other sources. Users can thus gorge on video compilations that target their individual preferences. This also enables a summary form of watching, suitable for catching up with current events or currently trending video, without having to watch entire episodes or movies. FIG.44provides a flow under which the platform may ingest and align the content of one or more broadcast video feeds and one or more tracking camera video feeds. At a step4412, a broadcast video feed may be ingested, which may consist of an un-calibrated and un-synchronized video feed. The ingested broadcast video feed may be processed by performing optical character recognition at a step4414, such as to extract information from the broadcast video feed that may assist with aligning events within the feed with events identified in other sources of video for the same event. This may include recognizing text and numerical elements in the broadcast video feed, such as game scores, the game clock, player numbers, player names, text feeds displayed on the video, and the like. For example, the time on the game clock, or the score of a game, may assist with time-alignment of a broadcast feed with another video feed. At a step4404objects may be detected within the broadcast video feed4404, such as using machine-based object-recognition technologies. Objects may include players (including based on recognizing player numbers), body parts of players (e.g., heads of players, torsos of players, etc.), equipment (such as the ball in a basketball game), and many others. Once detected at the step4404, objects may be tracked over time in a step4418, such as in progressive frames of the broadcast video feed. Tracked objects may be used to assist in calibrating the broadcast video intrinsic and extrinsic camera parameters by associating the tracked objects with the same objects as identified in another source, such as a tracking camera video feed.
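By way of a non-limiting illustration, the following sketch shows how game clock readings recognized in broadcast frames (e.g., via the optical character recognition at the step4414) might be matched against clock values from a tracking feed to estimate a constant frame offset between the two sources. The data layout, the use of a median for robustness to OCR misreads, and the function names are illustrative assumptions rather than the platform's actual alignment procedure.

```python
from statistics import median

def estimate_frame_offset(broadcast_clock, tracking_clock):
    """
    broadcast_clock: dict mapping broadcast frame index -> game clock reading (seconds)
    tracking_clock:  dict mapping tracking frame index  -> game clock reading (seconds)
    Returns an estimated constant frame offset (tracking_frame - broadcast_frame), or None.
    """
    # Index tracking frames by clock value; keep the first frame at which each value appears.
    first_tracking_frame = {}
    for frame, clock in sorted(tracking_clock.items()):
        first_tracking_frame.setdefault(clock, frame)

    # Only use broadcast frames where the displayed clock changes, since the clock is
    # typically specified at one-second granularity.
    offsets = []
    prev_clock = None
    for frame, clock in sorted(broadcast_clock.items()):
        if clock == prev_clock:
            continue
        prev_clock = clock
        if clock in first_tracking_frame:
            offsets.append(first_tracking_frame[clock] - frame)

    # A robust central estimate guards against occasional OCR misreads.
    return median(offsets) if offsets else None
```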
At a step4402, in parallel with the steps involved in ingesting and processing a broadcast video feed, video feeds from tracking cameras, such as tracking cameras for capturing 3D motion in a venue (like a sports arena), may be ingested. The tracking camera video feeds may be calibrated and synchronized to a frame of reference, such as one defined by the locations of a set of cameras that are disposed at known locations within the venue where the tracking camera system is positioned. At a step4406, one or more objects may be detected within the tracking camera video feed, including various objects of the types noted above, such as players, numbers, items of equipment, and the like. In embodiments, spatiotemporal coordinates of the objects may be determined by processing the information from the tracking camera video feed, the coordinates being determined for the recognized objects based on the frame of reference defined by the camera positions of the tracking system. In embodiments, the coordinates being determined for the recognized objects can be based on the court or the field on which the game is played. In embodiments, the coordinates being determined for the recognized objects are based on the boundaries, lines, markers, indications, and the like associated with the court or the field on which the game is played. The video feed from the tracking camera system and the information about spatiotemporal object positions may be used to generate a point cloud at a step4416, within which voxel locations of the objects detected at the step4406may be identified at a step4418. The tracking camera video feed that was processed to detect and track objects may be further processed at a step4410by using spatiotemporal pattern recognition (such as machine-based spatiotemporal pattern recognition as described throughout this disclosure) to identify one or more events, which may be a wide range of events as described throughout this disclosure, such as events that correspond to patterns in a game or sport. In embodiments, other feeds may be available that may contain additional information about events that are contained in the tracking camera video feed. For example, a data feed, such as a play-by-play feed, for a game may be ingested at a step4422. At a step4420, the information from multiple sources may be aligned, such as aligning the play-by-play data feed from the step4422with events recognized at the step4410. Similarly, at a step4424the recognized event data in the tracking camera video feed at the step4410may be aligned with events recognized in the broadcast video feed at the step4414, resulting in time-aligned broadcast video, tracking camera, and other (e.g., play-by-play) feeds. Once the tracking camera video feed and the broadcast video feed are time-aligned for an event, objects detected at the step4404in the broadcast video feed and tracked at the step4418(e.g., players' heads) may be used at a step4428to calibrate the broadcast video camera position, such as by identifying the broadcast video camera position within the frame of reference of the tracking camera system used to capture the tracking camera video feed. This may include comparing sizes and orientations of the same object as it was detected at the step4404in the broadcast video feed and at the step4406in the tracking camera system video feed.
In embodiments, calibration parameters of the broadcast camera can be determined by, among other things, comparing positions of detected objects in the video with detected three-dimensional positions of the corresponding objects that can be obtained using the calibrated tracking system. In embodiments, heads of the players in the game can be suitable objects because the heads of the players can be precisely located relative to other portions of the bodies of the players. Once calibrated, the broadcast video camera information can be processed as another source just like any of the tracking cameras. This may include re-calibrating the broadcast video camera position for each of a series of subsequent events, as the broadcast video camera may move or change zoom between events. Once the broadcast video camera position is calibrated to the frame of reference of the tracking camera system, at a step4430pixel locations in the broadcast video feed may be identified, corresponding to objects in the broadcast video feed, which may include using information about voxel locations of objects in the point cloud generated from the motion tracking camera feed at the step4418and/or using image segmentation techniques on the broadcast video feed. The process ofFIG.44thus provides time-aligned broadcast video feeds, tracking camera event feeds, and play-by-play feeds, where within each feed pixel locations or voxel locations of objects and backgrounds are known, so that various activities can be undertaken to process the feeds, such as for augmenting the feeds, performing pattern recognition on objects and events within them (such as to find plays following particular patterns), automatically clipping or cutting them to produce content (such as capturing a reaction in broadcast video to an event displayed in or detected by the tracking camera feeds based on a time sequence of time-aligned events), and many others as described throughout this disclosure. In some embodiments, the platform may use stationary features on a playing surface (e.g., a basketball court) to calibrate the broadcast video camera parameters and to time align two or more video feeds. For example, the platform may utilize stationary lines (e.g., yard lines, top of the three point line, a half court line, a center field line, side lines, intersections between half court or field lines and side lines, logos, goal posts, and the like) to calibrate the broadcast video camera parameters. In these embodiments, the stationary features may be detected in the broadcast video feed and in the tracking video feed. In embodiments, the platform may determine the x, y, and z locations of the stationary features in the tracking video feed, and may calibrate the broadcast video camera parameters based on the x, y, z coordinates of the stationary features or voxel coordinates. For example, in embodiments, the platform may cross-reference the pixel locations of a stationary feature in the broadcast video feed with the x, y, z coordinates of the stationary feature in the tracking camera feeds. Once the broadcast video feed is calibrated with respect to one or more tracking camera feeds, moving objects tracked in the broadcast video can be cross-referenced against the locations of the respective moving objects from the tracking camera video feeds. 
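By way of a non-limiting illustration, the following sketch shows how calibration parameters of a broadcast camera might be estimated from such 2D-3D correspondences (for example, players' heads or stationary court features whose three-dimensional positions are known from the calibrated tracking system), using OpenCV's solvePnP. The initial intrinsics guess, the zero-distortion assumption, and the function names are illustrative; a practical implementation may also refine the focal length and repeat the calibration as the broadcast camera moves or zooms.

```python
import numpy as np
import cv2

def calibrate_broadcast_camera(points_3d, points_2d, image_size):
    """
    points_3d:  Nx3 array of world coordinates (e.g., head positions or court features)
                in the tracking system's frame of reference.
    points_2d:  Nx2 array of the corresponding pixel locations in the broadcast frame.
    image_size: (width, height) of the broadcast frame.
    Returns an assumed intrinsic matrix plus the estimated rotation and translation vectors.
    """
    w, h = image_size
    # Initial guess for intrinsics: principal point at the image center,
    # focal length on the order of the image width.
    camera_matrix = np.array([[w, 0.0, w / 2.0],
                              [0.0, w, h / 2.0],
                              [0.0, 0.0, 1.0]], dtype=np.float64)
    dist_coeffs = np.zeros(5)  # negligible lens distortion assumed for the sketch

    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(points_3d, dtype=np.float64),
        np.asarray(points_2d, dtype=np.float64),
        camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("broadcast camera pose could not be estimated")
    return camera_matrix, rvec, tvec
```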
In some of these embodiments, the platform may track moving objects in the broadcast video feed and the tracking camera feed(s) with respect to the locations of the stationary features in the respective broadcast video feed and tracking camera feeds to time align the broadcast video feed and tracking camera feeds. For example, the platform may time align one or more broadcast video feeds and one or more tracking camera feeds at respective time slices where a player crosses a logo or other stationary features on the playing surface in each of the respective feeds (broadcast video and tracking camera feeds). Referring toFIG.45, embodiments of the methods and systems disclosed herein may involve handling multiple video feeds4502, information from one or more tracking systems4512(such as player tracking systems that may provide time-stamped location data and other information, such as physiological monitoring information, activity type information, etc.), and one or more other input sources4510(such as sources of audio information, play-by-play information, statistical information, event information, etc.). In embodiments, live video input feeds4502are encoded by one or more encoding systems4504to produce a series of video segment files4508, each consisting of a video chunk, optionally of short duration, e.g., four seconds. Video segment files4508from different input feeds corresponding to the same time interval are considered as part of a temporal group4522associated with that time interval. The temporal group4522may also include information and other content from tracking systems4512and other input sources4510. In embodiments, each video segment file4508may independently and in parallel undergo various processing operations in one or more processing systems4518, such as transcoding to various file formats, streaming protocols, and the like. The derived output files4520of processing4518may be associated with the same temporal group4522. Temporal grouping4522enables time synchronization among the original and derived files without having to further maintain or track timing or synchronization information. Such processing operations4518may include, without limitation, standard video on demand (VOD) transcoding, such as into lower bit rate video files. Processing operations4518may also include augmentation, such as with graphics, audio overlays, or data, producing derived, augmented video files4518. Other data derived from the video streams or obtained from other sources4510(e.g., coordinate positions of players and objects obtained via optical or chip tracking systems4512), which may typically become available with a small time delay relative to the live video input streams4502, may also be synchronized to the video files4508in a temporal group4522, such as by adding them as metadata files to the corresponding temporal group or by binding them to the video files4514. In embodiments, a manifest file4524based on these temporal groups4522may be created to enable streaming of the original video feed4502, the video segment files4514and/or derived files4520as a live, delayed or on-demand stream. Synchronization among the output streams may enable combining and/or switching4528seamlessly among alternative video feeds (e.g., different angles, encoding, augmentations or the like) and data feeds of a live streamed event. 
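By way of a non-limiting illustration, the following sketch shows one way temporal groups and a manifest of the kind described above might be represented, with each group collecting the original segment files, derived outputs, and metadata for one time interval; the class and field names are illustrative assumptions rather than an actual file format.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TemporalGroup:
    """All original and derived artifacts for one time interval (e.g., one 4-second chunk)."""
    start_time: float                                        # interval start, seconds since stream epoch
    duration: float                                          # interval length in seconds
    segments: Dict[str, str] = field(default_factory=dict)   # feed id -> segment file URI
    derived: Dict[str, str] = field(default_factory=dict)    # e.g., "feed1/720p", "feed1/augmented" -> URI
    metadata: Dict[str, str] = field(default_factory=dict)   # e.g., "tracking", "play_by_play" -> URI

def build_manifest(groups: List[TemporalGroup]) -> dict:
    """A simple manifest: because files are grouped by interval, switching among feeds or
    failing over between original and derived outputs needs no extra timing bookkeeping."""
    return {
        "version": 1,
        "groups": [
            {
                "start": g.start_time,
                "duration": g.duration,
                "segments": g.segments,
                "derived": g.derived,
                "metadata": g.metadata,
            }
            for g in sorted(groups, key=lambda g: g.start_time)
        ],
    }
```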
Among other benefits, synchronization across original video feeds4502, video segment files4508, derived video feeds4520with encoded, augmented or otherwise processed content, and backup video feeds, described by a manifest file4524, may allow client-side failover from one stream to another without time discontinuity in the viewing of the event. For instance, if an augmented video stream resulting from processing4518is temporarily unavailable within the time offset at which the live stream is being viewed or falls below a specified buffering amount, a client application4530consuming the video feed may temporarily fail over to an un-augmented stream4502or encoded video segment file4508. In embodiments, the granularity with which the client application4530switches back to the augmented stream4518when available may depend on semantically defined boundaries in the video feed, which in embodiments may be based on a semantic understanding of events within the video feed, such as achieved by the various methods and systems described in connection with the technology stack100and the processes200described throughout this disclosure. For example, a switch back to derived file4520with various augmentations added in processing4518may be timed to occur after a change of possession, a timeout, a change in camera angle, a change in point-of-view, or other appropriate points in the action, so that the switching occurs while minimizing disruption of the viewing experience. Switching may also be controlled by semantic understanding4532of the content of different video feeds4502at each time instant; for example, if a camera is not pointing at the current action on the court, an alternative video feed4502, video segment file4514or derived file4520may be selected. In embodiments, a “smart pipe” may be provided consisting of multiple aligned content channels (e.g., audio, video, or data channels) that are indexed both temporally and spatially. Spatial indexing and alignment4534may include indexing of pixels in 2D streams, voxels in 3D streams, and other objects, such as polygonal meshes used for animation, 3D representation, or the like. In embodiments, a wide variety of elements may be indexed, such as, without limitation, events, and locations of objects (including players, game objects, and objects in the environment, such as a court or arena) involved in those events. In embodiments, a further variety of elements may be indexed including information and statistics related to events and locations. In embodiments, a further variety of elements may be indexed including locations of areas corresponding to floor areas, background areas, signage areas, or the like where information, augmentations, graphics, animations, advertising, or the like may be displayed over a content frame. In embodiments, a further variety of elements may be indexed including indices or indicators of what information, augmentation elements or the like that are available to augment a video feed in a content channel such as ones that may be selected individually or in combination. In embodiments, a further variety of elements may be indexed including predefined combinations of content (e.g., particular combinations of audio, video, information, augmentation elements, replays, or other content elements), such as constituting channels or variations from which end-users may choose ones that they prefer. 
Thus, a spatial indexing and alignment system4534may provide spatial indexing and alignment information to the processing system4518(or may be included therein), such that the derived files4520(and optionally various objects therein) are indexed both temporally and spatially. In such a case, the “smart pipe” for synchronized, switchable and combinable content streams4528may contain sufficient indexed and aligned content to allow the creation of derived content, the creation of interactive applications, and the like, each optionally tied to live and recorded events (such as sporting events). In embodiments, the tracking systems4512, the spatial indexing and alignment4534and the semantic understanding4532may be part of the larger alignment, tracking, and semantic system included in the systems and methods disclosed herein that may take various inputs including original video feeds and play-by-play feeds, and may produce X, Y, Z tracking data and semantic labels. The X, Y, Z tracking data and semantic labels may be stored as separate metadata files in the temporal group4522or used to produce derived video files4520in the temporal group4522. In embodiments, any combination of inputs such as from a tracking camera system, a 3D camera array, broadcast video, a smartphone video, lidar, and the like may be used to automatically obtain a 3D understanding of a game. The automatically obtained 3D understanding of the game may be used to index voxels of 3D representations (e.g., AR/VR video) or pixels of any 2D video footage (e.g., from tracking cameras, broadcast, smartphones, reconstructed video from any point of view such as first person point of view of players in the game) or alternatively to voxels/pixels, other graphics representations such as polygonal meshes. In embodiments, a “smart pipe” may consist of multiple aligned content channels (e.g., audio, video, or data channels) that are indexed both temporally and spatially (e.g., indexing of pixels/voxels/polygonal meshes) with events and locations of players/objects involved in those events. By way of this example, the indexing both temporally and spatially with events and locations of players/objects involved in those events may also include information and statistics related to events and locations. The indexing both temporally and spatially with events and locations of players/objects involved in those events may also include locations of areas corresponding to floor or background areas where information, augmentations (e.g., filters that manipulate the look of the ball/players) or advertising may be displayed over each video frame. In embodiments, available pieces of information and augmentation elements may be selected individually or in combination. In embodiments, combinations of audio, video, information, augmentation, replays, and the like may constitute channels for end-users to choose from. The smart pipe may contain sufficient indexed and aligned content to create derived content and interactive apps tied to live and recorded games. In embodiments, composition of video via frames, layers and/or tracks may be generated interactively by distributed sources, e.g., base video of the sporting event, augmentation/information layers/frames from different providers, audio tracks from alternative providers, advertising layers/frames from other providers, leveraging indexing and synchronization concepts, and the like.
By way of this example, the base layers and/or tracks may be streamed to the various providers as well as to the clients. In embodiments, additional layers and/or tracks may be streamed directly from the providers to the clients and combined at the client. In embodiments, the composition of video via frames, layers and/or tracks and combinations thereof may be generated interactively by distributed sources and may be based on user personalizations. In embodiments, the systems and methods described herein may include a software development kit (SDK)4804that enables content being played at a client4808to dynamically incorporate data or content from at least one separate content feed4802. In these embodiments, the SDK4804may use timecodes or other timing information in the video to align the client's current video playout time with data or content from the at least one separate content feed4802, in order to supply the video player with relevant synchronized media content4810. In operation, a system4800(e.g., the system described herein) may output one or more content feeds4802-1,4802-2. . .4802-N. The content feeds may include video, audio, text, and/or data (e.g., statistics of a game, player names). In some embodiments, the system4800may output a first content feed4802-1that includes a video and/or audio that is to be output (e.g., displayed) by a client media player4808. The client media player4808may be executed by a user device (e.g., a mobile device, a personal computing device, a tablet computing device, and the like). The client media player4808is configured to receive the first content feed4802and to output the content feed4802via a user interface (e.g., display device and/or speakers) of the user device. Additionally or alternatively, the client media player4808may receive a third-party content feed4812from a third-party data source (not shown). For example, the client media player4808may receive a live-game video stream from the operator of an arena. Regardless of the source, a content feed4802-2or4812may include timestamps or other suitable temporal indicia to identify different positions (e.g., frames or chunks) in the content feed. The client media player4808may incorporate the SDK4804. The SDK4804may be configured to receive additional content feeds4802-2. . .4802-N to supplement the outputted media content. For example, a content feed4802-2may include additional video (e.g., a highlight or alternative camera angle). In another example, a content feed4802-2may include data (e.g., statistics or commentary relating to particular game events). Each additional content feed4802-2. . .4802-N may include timestamps or other suitable temporal indicia as well. The SDK4804may receive the additional content feed(s)4802-2. . .4802-N and may augment the content feed being output by the media player with the one or more additional content feeds4802-2. . .4802-N based on the timestamps of the respective content feeds4802-1,4802-2, . . .4802-N to obtain dynamic synchronized media content4810. For example, while playing a live feed (with a slight lag) or a video-on-demand (VOD) feed of a basketball game, the SDK4804may receive a first additional content feed4802containing a graphical augmentation of a dunk in the game and a second additional content feed4802indicating the statistics of the player who performed the dunk. 
The SDK4804may incorporate the additional content feeds into the synchronized media content4810, by augmenting the dunk in the live or VOD feed with the graphical augmentation and the statistics. In some embodiments, a client app using the SDK may allow client-side selection or modification of which subset of the available additional content feeds to incorporate. In some implementations, the SDK4804may include one or more templates that define a manner by which the different content feeds4802may be laid out. Furthermore, the SDK4804may include instructions that define a manner by which the additional content feeds4802are to be synchronized with the original content feed. In embodiments, the systems and methods disclosed herein may include joint compression of channel streams such as successive refinement source coding to reduce streaming bandwidth and/or reduce channel switching time, and the like. In embodiments, the systems and methods disclosed herein may include event analytics and/or location-based games including meta-games, quizzes, fantasy league and sport, betting, and other gaming options that may be interactive with many of the users at and connected to the event, such as through identity-based user input, e.g., touching or clicking a player predicted to score next. In embodiments, the event analytics and/or location-based games may include location-based user input such as touching or clicking a location where a rebound or other play or activity is expected to be caught, to be executed, and the like. In embodiments, the event analytics and/or location-based games may include timing-based user input such as clicking or pressing a key to indicate when a user thinks a shot should be taken, a defensive play should be initiated, a time-out should be requested, and the like. In embodiments, the event analytics and/or location-based games may include prediction-based scoring including generating or contributing to a user score based on the accuracy of an outcome prediction associated with the user. By way of this example, the outcome prediction may be associated with outcomes of individual offensive and defensive plays in the games and/or may be associated with scoring and/or individual player statistics at predetermined time intervals (e.g., quarters, halves, whole games, portions of seasons, and the like). In embodiments, the event analytics and/or location-based games may include game state-based scoring including generating or contributing to a user score based on the expected value of a user decision calculated using analysis of instantaneous game state and/or comparison with the evolution of game state, such as the maximum value or realized value of the game state in a given chance or possession. In embodiments, the systems and methods disclosed herein may include interactive and immersive reality games based on actual game replays. By way of this example, the interactive and immersive reality games may include the use of one or more simulations to diverge from actual game events (partially or in their entirety) based on user input or a collection of user input. In embodiments, the interactive and immersive reality games may include an action-time resolution engine that may be configured to determine a plausible sequence of events for rejoining the actual game timeline after, in some examples, the one or more simulations diverge from actual game events (partially or in their entirety) based on user input or a collection of user input.
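As a non-limiting illustration of the prediction-based scoring described above, the following Python sketch awards points for a correct outcome prediction, with a bonus for an identity-based prediction (e.g., naming the player expected to score). The point values, field names, and scoring rules are illustrative assumptions rather than a required implementation.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    """A user's prediction about an upcoming play (illustrative fields)."""
    user_id: str
    predicted_outcome: str   # e.g., "made_shot", "turnover"
    predicted_player: str    # identity-based input: the player expected to score

@dataclass
class PlayResult:
    actual_outcome: str
    actual_player: str

def score_prediction(pred: Prediction, result: PlayResult,
                     outcome_points: int = 10, player_bonus: int = 5) -> int:
    """Award points for outcome accuracy, plus a bonus for naming the right player."""
    points = 0
    if pred.predicted_outcome == result.actual_outcome:
        points += outcome_points
        if pred.predicted_player == result.actual_player:
            points += player_bonus
    return points

# Example: the user predicted a made shot by player "23" and the play ended that way.
print(score_prediction(Prediction("u1", "made_shot", "23"),
                       PlayResult("made_shot", "23")))  # prints 15
```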
In embodiments, the interactive and immersive reality games may include augmented reality simulations that may integrate game event sequences, using cameras located on one or more backboards and/or at locations adjacent to the playing court. In embodiments, the systems and methods disclosed herein may include simulated sports games that may be based on detailed player behavior models. By way of this example, the detailed player behavior models may include tendencies to take different actions and associated probabilities of success of different actions under different scenarios including teammate/opponent identities, locations, score differential, period number, game clock, shot clock, and the like. In embodiments, the systems and methods disclosed herein may include social chat functions and social comment functions that may be inserted into a three-dimensional scene of a live event. By way of this example, the social chat and comment functions that may be inserted into the three-dimensional scene of the live event may include avatars inserted into the crowd that may display comments within speech bubbles above the avatars. In other examples, the social chat and comment functions may be inserted into a three-dimensional scene of the live event as a running commentary adjacent to other graphics or legends associated with the event. In embodiments, the systems and methods disclosed herein may include the automating of elements of broadcast production such as automatic control of camera pan, tilt, and zoom. By way of this example, the automating of elements of broadcast production may also include automatic switching between camera views. In embodiments, the automating of elements of broadcast production may include automatic live and color commentary generation and automatic placement of, and content from, synthetic commentators in the form of audio or in the form of one or more audio and video avatars with audio content that may be mixed with semantic and contextual based reactions from the live event and/or from other users. By way of this example, the automated elements of broadcast production may include automated generation of commentary in audio-only or audio and video form including AR augmentation and associated content by, for example, combining semantic machine understanding of events in the game and semantic machine understanding of camera views, camera cuts, and camera close-ups in broadcast or another video. In embodiments, the automated generation of commentary may also be based on semantic machine understanding of broadcaster/game audio, statistics from semantic machine understanding of past games, information/statistics from other sources, and combinations thereof. In embodiments, a ranking of potential content items may be based on at least one of the rarity of events, comparison against the rest of the league, diversity with respect to previously shown content, personalization based on channel characteristics, explicit user preferences, inferred user preferences, the like, or combinations thereof. In embodiments, the automated generation of commentary may include the automatic selection of top-ranked content items or a short list of top-ranked content items shown to a human operator for selection. In embodiments, and as shown inFIG.49, the systems and methods disclosed herein may include machine-automated or machine-assisted generation of aggregated clips4902. Examples of aggregated clips4902include highlights and/or condensed games.
The aggregated clip may be comprised of one or more selected media segments (e.g., video and/or audio segments). In the example ofFIG.49, a multimedia system4900may include an event datastore4910, an interest determination module4920, and a clip generation module4930. The event datastore4910may store event records4912. Each event record4912may correspond to a respective event (e.g., an offensive possession, a shot, a dunk, a defensive play, a blitz, a touchdown pass). An event record4912may include an event ID4914that uniquely identifies the event. An event record4912may also include event data4916that corresponds to the event. For example, event data4916may include a media segment (e.g., video and/or audio) that captures the event or a memory address that points to the media segment that captures the event. The event record4912may further include event metadata4918. Event metadata4918may include any data that is pertinent to the event. Examples of event metadata4918may include, but are not limited to, an event type (e.g., a basketball shot, a dunk, a football blitz, a touchdown, a soccer goal), a list of relevant players (e.g., the shooter and defender, the quarterback, the goal scorer), a time corresponding to the event (e.g., when during the game the event occurred), a length of the event (e.g., how many seconds is the media segment that captures the event), a semantic understanding of the event, the potential impact of the event on win probability (e.g., a delta of win probability from before and after the event), references (e.g., event IDs) to other events that are pertinent to the event (e.g., other events during a run made by a team), and/or any other suitable types of metadata. In some embodiments, the event metadata4918may further include an interest score of the event, where the interest score of an event may be a numerical value indicating a degree of likelihood that a user would find the event interesting (e.g., worthy of watching). In embodiments, an interest determination module4920determines an interest level of an event or group of related events. In some of these embodiments, the interest determination module4920determines an interest score of an event or group of related events. The interest score may be relative to other events in a particular game or relative to events spanning multiple games and/or sports. In some embodiments, the interest determination module4920may determine the interest score of a particular event or group of events based on the event metadata4918of the respective event(s). In some embodiments, the interest determination module4920may incorporate one or more machine-learned models that receive event metadata4918of an event or group of related events and output a score based on the event metadata4918. A machine-learned model may, for example, receive an event type and other relevant features (e.g., time, impact on win probability, relevant player) and may determine the score based thereon. The machine-learned models may be trained in a supervised, semi-supervised, or unsupervised manner. The interest determination module4920may determine the interest score of an event or group of related events in other manners as well. For example, the interest determination module4920may utilize rules-based scoring techniques to score an event or group of related events. In some embodiments, the interest determination module4920is configured to determine an interest score for a particular user.
In these embodiments, the interest scores may be used to generate personalized aggregated clips4902for a user. In these embodiments, the interest determination module4920may receive user-specific data that may be indicative of a user's personal biases. For example, the interest determination module4920may receive user-specific data that may include, but is not limited to, a user's favorite sport, the user's favorite team, the user's list of favorite players, a list of events recently watched by the user, a list of events recently skipped by the user, and the like. In some of these embodiments, the interest determination module4920may feed the user-specific data into machine-learned models along with event metadata4918of an event to determine an interest score that is specific to a particular user. In these embodiments, the interest determination module4920may output the user-specific interest score to the clip generation module4930. In some embodiments, one or more humans may assign interest levels to various events. In these embodiments, the human-assigned interest levels may be used to determine which events to include in an aggregated clip4902. Furthermore, the human-assigned interest levels may be used to train a model used to determine interest scores of respective events. The clip generation module4930generates aggregated clips4902based on one or more identified events. The clip generation module4930may determine one or more events to include in an aggregated clip based on the interest level of the events relating to a game or collection of games. In some embodiments, the clip generation module4930determines the events to include in an aggregated clip4902based on the interest level of the respective events. The clip generation module4930may implement optimization or reinforcement learning to determine which events (depicted in media segments) to include in an aggregated clip4902. For instance, the clip generation module4930may include media segments depicting events having the highest relative interest scores and media segments of additional events that may be relevant to the high scoring events. In embodiments, the clip generation module4930may determine how many events to include in the aggregated clip4902depending on the intended purpose of the aggregated clip4902. For example, a highlight may be shorter in duration than a condensed game. In embodiments, the length of an aggregated clip4902may be a predetermined parameter (e.g., three minutes). In these embodiments, the clip generation module4930may select a sufficient number of events to span the predetermined duration. For example, the clip generation module4930may identify a set of media segments of events having requisite interest scores, where the aggregated duration of the set of media segments is approximately equal to the predetermined duration. In embodiments, the clip generation module4930may be configured to generate personalized aggregated clips. In these embodiments, the clip generation module4930may receive user-specific interest scores corresponding to events of a particular game or time period (e.g., “today's personalized highlights”). The clip generation module4930may utilize the user-specific interest scores of the events, a user's history (e.g., videos watched or skipped), and/or user profile data (e.g., location, favorite teams, favorite sports, favorite players) to determine which events to include in a personalized aggregated clip4902.
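A minimal sketch of the interest scoring and duration-constrained selection described above follows. The scoring weights, record fields, and the greedy selection policy are illustrative assumptions standing in for the machine-learned models and optimization/reinforcement learning approaches described in the text.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EventRecord:
    event_id: str
    event_type: str          # e.g., "dunk", "blocked_shot"
    players: List[str]
    win_prob_delta: float    # change in win probability attributed to the event
    duration_s: float        # length of the media segment capturing the event

@dataclass
class UserProfile:
    favorite_players: List[str] = field(default_factory=list)
    favorite_types: List[str] = field(default_factory=list)

def interest_score(event: EventRecord, user: Optional[UserProfile] = None) -> float:
    """Toy stand-in for the learned model: base score from impact plus personal biases."""
    score = abs(event.win_prob_delta) * 100.0
    if user is not None:
        if event.event_type in user.favorite_types:
            score += 10.0
        if any(p in user.favorite_players for p in event.players):
            score += 15.0
    return score

def select_clip_events(events: List[EventRecord], user: UserProfile,
                       target_duration_s: float) -> List[EventRecord]:
    """Greedily keep the highest-scoring events until the target duration is roughly filled."""
    ranked = sorted(events, key=lambda e: interest_score(e, user), reverse=True)
    chosen, total = [], 0.0
    for event in ranked:
        if total + event.duration_s <= target_duration_s:
            chosen.append(event)
            total += event.duration_s
    return chosen  # any playback ordering (chronological, ascending/descending score) may then be applied
```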
In embodiments, the clip generation module4930may determine how many events to include in the personalized aggregated clip4902depending on the intended purpose of the aggregated clip4902and/or the preferences of the user. For example, if a user prefers to have longer condensed games (i.e., more events in the aggregated clip), the clip generation module4930may include more media segments in the aggregated clip. In some embodiments, the length of an aggregated clip4902may be a predetermined parameter (e.g., three minutes) that may be explicitly set by the user. In these embodiments, the clip generation module4930may select a sufficient number of events to span the predetermined duration set by the user. For example, the clip generation module4930may identify a set of media segments of events having requisite interest scores, where the aggregated duration of the set of media segments is approximately equal to the predetermined duration. In embodiments, the clip generation module4930requests the scores of one or more events from the interest determination module4920when the clip generation module4930is tasked with generating aggregated clips4902. Alternatively, the interest determination module4920may score each event defined in the event datastore4910. Upon determining which events to include in an aggregated clip4902, the clip generation module4930may retrieve the media segments corresponding to the identified events. For example, the clip generation module4930may retrieve the event records4912of the identified events using the event IDs4914of the identified events. The clip generation module4930may then generate the aggregated clip based on the event data4916contained in the retrieved event records4912. The sequence of events depicted in the aggregated clip4902may be generated in any suitable manner. For example, the events may be depicted sequentially as they occurred or in order of ascending or descending interest score. The clip generation module4930may transmit the aggregated clip4902to a user device and/or store the aggregated clip4902in memory. In embodiments, and in the example ofFIG.50, the systems and methods disclosed herein may be configured to provide “dynamic videos”5002. A dynamic video5002may refer to the concatenated display of media segments (e.g., video and/or audio) that can be dynamically selected with short time granularity (e.g., frame-level or chunk-level granularity). A dynamic video5002may be comprised of one or more constituent media segments of dynamically determined length, content, and sequencing. The dynamic video5002may include constituent media segments that are stitched together in a single file or a collection of separate files that may each contain a respective constituent media segment. The constituent media segments of a dynamic video5002may be related based on one or more suitable relationships. For example, the constituent media segments may be of a same event taken from different camera angles, of different events of a same game, of different events from different games but of the same sport and on the same day, of different events relating to the same player or team, and/or of different events but the same subject, topic, or sentiment. Additionally, in some embodiments, the constituent media segments may be supplemented or augmented with graphical and/or text overlays. The graphical and/or text overlays may be confined to a single media segment or may span across multiple constituent media segments.
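The dynamic video just described can be sketched as a simple container of constituent segments with dynamically determined cut points, relationships, and overlays. The field names and the convention that only "sequential" segments count toward playback length are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Overlay:
    text: str
    start_s: float   # offset within the assembled video where the overlay appears
    end_s: float

@dataclass
class MediaSegment:
    media_id: str
    source_feed: str              # e.g., "broadcast", "baseline_cam"
    start_s: float                # cut points within the source feed
    end_s: float
    relation: str = "sequential"  # or "alternate_angle", "same_player", ...

@dataclass
class DynamicVideo:
    segments: List[MediaSegment] = field(default_factory=list)
    overlays: List[Overlay] = field(default_factory=list)

    def duration_s(self) -> float:
        # Only sequential segments contribute to playback length; alternates are switch targets.
        return sum(s.end_s - s.start_s for s in self.segments if s.relation == "sequential")
```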
In the illustrated example, a multimedia system5000provides the dynamic videos5002to a user device5080. The user device5080may be a mobile device (e.g., smartphone), a personal digital assistant, a laptop computing device, a personal computer, a tablet computing device, a gaming device, a smart television, and/or any other suitable electronic device with the capability to present the dynamic videos. The user device5080may include a multimedia player5082that outputs the dynamic video5002via a user interface5084. The multimedia player5082may also receive user commands via the user interface5084. The user interface5084may include a display device (e.g., an LED screen or a touchscreen), a physical keyboard (e.g., a qwerty keyboard), an input device (e.g., a mouse), an audio device (e.g., speakers), and the like. The user device5080may further include a communication unit5088that effectuates communication with external devices directly and/or via a network. For example, the communication unit5088may include one or more wireless and/or wired transceivers that communicate using any suitable communication protocol. The multimedia system5000may include a media datastore5010, a communication unit5030, and a dynamic video module5020. The media datastore5010may store media records5012. A media record5012may correspond to a media segment that captures one or more events. A media record may include a media ID5014that uniquely identifies the media record5012. A media record5012may include media data5016. The media data5016may include the media segment itself or a memory address of the media segment. The media record5012may further include media metadata5018. The media metadata5018may include any data that is pertinent to the media segment. Examples of media metadata5018may include, but are not limited to, one or more event identifiers that identify one or more events depicted in the media segment, one or more event types that describe the one or more events depicted in the media segment, a list of relevant players depicted in the multimedia segment, a time corresponding to the media segment (e.g., a starting time of the media segment with respect to a game), a time length of the media segment, a semantic understanding of the media segment, the potential impact of the events depicted in the media segment on win probability (e.g., a delta of win probability from before and after the event), references (e.g., media IDs) to other media segments that are pertinent to the media segment (e.g., other angles of the same events depicted in the media segment), and/or any other suitable types of metadata. In embodiments, the media records5012may further reference entire content feeds (e.g., an entire game or a livestream of a game). In these embodiments, the media metadata5018of a media record may include any suitable information relating to the content feed. For example, the media metadata5018may include an identifier of the game to which the content feed corresponds, an indicator whether the content feed is live or recorded, identifiers of the teams playing in the game, identifiers of players playing in the game, and the like. The dynamic video module5020is configured to generate dynamic videos and to deliver dynamic videos to a user device5080. The dynamic video module5020may select the media segments to include in the dynamic video5002in any suitable manner.
In some embodiments, the dynamic video module5020may implement optimization and/or reinforcement learning-based approaches to determine the selection, length, and/or sequence of the constituent media segments. In these embodiments, the dynamic video module5020may utilize the media metadata5018of the media records5012stored in the media datastore5010to determine the selection, length, and/or sequence of the constituent media segments. The dynamic video module5020may additionally or alternatively implement a rules-based approach to determine which media segments to include in the dynamic video. For example, the dynamic video module5020may be configured to always include alternative camera angles of an event if multiple media segments depicting the same event exist. In this example, the dynamic video module5020may be further configured to designate media clips taken from alternative camera angles as supplementary media segments (i.e., media segments that can be switched to at the user device) rather than sequential media segments. In embodiments, the dynamic video module5020may be configured to generate dynamic video clips from any suitable sources, including content feeds. In these embodiments, the dynamic video module5020may generate dynamic videos5002having any variety of constituent media segments by cutting media segments from one or more content feeds and/or previously cut media segments. Furthermore, the dynamic video module5020may add any combination of augmentations, graphics, audio, statistics, text, and the like to the dynamic video. In some embodiments, the dynamic video module5020is configured to provide personalized dynamic videos5002. The dynamic video module5020may utilize user preferences (either predicted, indicated, or inferred) to customize the dynamic video. The dynamic video module5020may utilize a user's profile, location, and/or history to determine the user preferences. A user profile may indicate a user's favorite teams, players, sports, and the like. In another example, the dynamic video module5020may be able to predict a user's favorite teams and players based on the location of the user. In yet another example, the dynamic video module5020may be configured to infer user viewing preferences based on the viewing history of the user (e.g., telemetry data reported by the media player of the user). For example, if the user history indicates that the user routinely skips over media segments that are longer than 30 seconds, the dynamic video module5020may infer that the user prefers media segments that are less than 30 seconds long. In another example, the dynamic video module5020may determine that the user typically “shares” media segments that include reactions of players or spectators to a notable play. In this example, the dynamic video module5020may infer that the user prefers videos that include reactions of players or spectators, and therefore, media segments that tend to be longer in duration. In another example, the user history may indicate that the user watches media segments of a particular type of event (e.g., dunks), but skips over other types of events (e.g., blocked shots). In this example, the dynamic video module5020may infer that the user prefers to consume media segments of dunks over media segments of blocked shots.
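A minimal sketch of inferring viewing preferences from telemetry, along the lines described above (skips of long segments, shares of reaction clips, and watched versus skipped event types). The thresholds, field names, and specific inference rules are illustrative assumptions, not the module's actual logic.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ViewingRecord:
    segment_type: str      # e.g., "dunk", "blocked_shot"
    duration_s: float
    skipped: bool
    shared: bool

def infer_preferences(history: List[ViewingRecord],
                      skip_threshold: float = 0.5) -> Dict[str, object]:
    """Infer simple preferences: preferred segment length and event types watched vs. skipped."""
    prefs: Dict[str, object] = {}

    long_views = [r for r in history if r.duration_s > 30]
    if long_views:
        skip_rate = sum(r.skipped for r in long_views) / len(long_views)
        prefs["prefers_short_segments"] = skip_rate > skip_threshold

    watched = {r.segment_type for r in history if not r.skipped}
    skipped = {r.segment_type for r in history if r.skipped}
    prefs["preferred_types"] = sorted(watched - skipped)
    prefs["avoided_types"] = sorted(skipped - watched)
    # Sharing longer clips is treated here as a signal that reaction footage is welcome.
    prefs["likes_reactions"] = any(r.shared and r.duration_s > 30 for r in history)
    return prefs
```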
In operation, the dynamic video module5020can utilize the indicated, predicted, and/or inferred user preferences to determine which media segments to include in the dynamic video and/or the duration of the media segments (e.g., should the media segment be shorter or longer). The dynamic video module5020may utilize an optimization and/or reinforcement-based learning approach to determine which media segments to include in the dynamic video5002, the duration of the dynamic video5002, and the sequence of the media segments in the dynamic video5002. The multimedia system5000may transmit a dynamic video5002to a user device5080. The media player5082receives the dynamic video5002via the communication unit5088and outputs one or more of the media segments contained in the dynamic video5002via the user interface5084. The media player5082may be configured to record user telemetry data (e.g., which media segments the user consumes, which media segments the user skips, and/or terms that the user searches for) and to report the telemetry data to the multimedia system5000. The media player5082may be configured to receive commands from a user via the user interface5084. The commands may be executed locally by the media player5082and/or may be communicated to the multimedia system5000. In some embodiments, the media player5082may be configured to allow selection of the media segments that are displayed based on user input and/or AI-controls. In the former scenario, the media player5082may be configured to receive user commands via the user interface5084. For example, the media player5082may allow a user to enter search terms or to choose from a displayed set of suggestions. In response to the search terms or the user selections, the media player5082may initialize (e.g., request and begin outputting) a dynamic video5002, in which the media player5082displays a machine-controlled sequence of media segments related to the search terms/user selection. A user may issue additional commands via the user interface5084(e.g., via the keyboard or by touching or directional swiping on a touchscreen) to request media segments related in different ways to the current media segment, to indicate when to move on to the next media segment, and/or to interactively pull up statistics and other information. For example, swiping upwards may indicate that the user wishes to see a different camera angle of the same event, swiping downwards may indicate that the user wishes to see an augmented replay of the same event, and swiping right may indicate that the user wishes to move on to the next clip. A set of keyword tags corresponding to each clip may be shown to facilitate the user adding one or more of the displayed tags to the set of search terms that determines potentially relevant media segments to display. The media player5082may report the user's inputs or interactions with the media player5082, if any, to the multimedia system5000. In response to such commands, the multimedia system5000may use such data to adapt subsequent machine-controlled choices of media segment duration, content type, and/or sequencing in the dynamic video. For example, the user's inputs or interactions may be used to adjust the parameters and/or reinforcement signals of an optimization or reinforcement learning-based approach for making machine-controlled choices in the dynamic video5002. In embodiments, the dynamic video module5020may be configured to generate the dynamic video in real time.
In these embodiments, the dynamic video module5020may begin generating and transmitting the dynamic video5002. During display of the dynamic video5002by the media player5082, the dynamic video module5020may determine how to sequence/curate the dynamic video. For instance, the dynamic video module5020may determine (either based on a machine-learning-based decision or from explicit instruction from the user) that the angle of a live feed should be switched to a different angle. In this situation, the dynamic video module5020may update the dynamic video5002with a different video feed that is taken from an alternative angle. In another example, a user may indicate (either explicitly or implicitly) that she is uninterested in a type of video being shown (e.g., baseball highlights). In response to the determination that the user is uninterested, the dynamic video module5020may retrieve media segments relating to another topic (e.g., basketball) and may begin stitching those media segments into the dynamic video5002. In this example, the dynamic video module5020may be configured to cut out any media segments that are no longer relevant (e.g., additional baseball highlights). It is noted that in some embodiments, the dynamic video module5020may transmit alternative content feeds and/or media segments in the dynamic video5002. In these embodiments, the media player5082may be configured to switch between feeds and/or media segments. In embodiments, the automating of elements of broadcast production may include automatic live commentary generation that may be used to assist referees for in situ evaluation or post-mortem evaluation. The automatic live commentary generation that may be used to assist referees may also be used to train referees in unusual situations that may be seen infrequently in actual games but may be reproduced or formed from AR content based on, or purposefully deviated from, live game events. By way of the above examples, the referee assistance, evaluation, training, and the like associated with the improvement of referee decisions may be based on semantic machine understanding of game events. In embodiments, the systems and methods disclosed herein may include the use of player-specific information in three-dimensional position identification and reconstruction to improve trade-offs among camera requirements. Toward that end, fewer or lower-resolution cameras may be used, computational complexity/delay may be reduced, and output quality/accuracy may be increased when compared to typical methods. With reference toFIG.46, the use of player-specific information in three-dimensional position identification and reconstruction4600may improve the balance of trade-offs in camera requirements, including improved localization of keypoints4602such as the head, joints, and the like, by using player models4604of specific players in conjunction with player identification4608(such as identifying a jersey number or automatically recognizing a face) and remote sensing technology to capture the players (such as one or more video cameras, lidar, ultrasound, Wi-Fi visualization, and the like). By way of this example, the improved localization of keypoints may include optimizing over constraints on distances between keypoints from player models combined with triangulation measurements from multiple cameras. In embodiments, the improved localization of keypoints may also include using the player models4604to enable 3D localization with a single camera.
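A minimal sketch of refining triangulated keypoints against player-model limb-length constraints, in the spirit of the constrained localization described above. The iterative constraint-projection scheme shown here is one simple way to combine the two sources of information and is an assumption, not the system's actual optimizer.

```python
import numpy as np

def refine_keypoints(keypoints: np.ndarray,
                     bone_lengths: dict,
                     iterations: int = 50,
                     stiffness: float = 0.5) -> np.ndarray:
    """Nudge triangulated 3D keypoints toward known player-model limb lengths.

    keypoints:    (N, 3) array of noisy triangulated joint positions.
    bone_lengths: {(i, j): length_m} joint-pair distances from the player-specific model.
    """
    pts = keypoints.astype(float).copy()
    for _ in range(iterations):
        for (i, j), target in bone_lengths.items():
            delta = pts[j] - pts[i]
            dist = np.linalg.norm(delta)
            if dist < 1e-9:
                continue
            # Move both joints symmetrically toward the model-implied separation.
            correction = stiffness * 0.5 * (dist - target) * (delta / dist)
            pts[i] += correction
            pts[j] -= correction
    return pts

# Example: a shin triangulated 10 cm too long is pulled toward the player's known 0.45 m length.
noisy = np.array([[0.0, 0.0, 0.5], [0.0, 0.0, 1.05]])
refined = refine_keypoints(noisy, {(0, 1): 0.45})
print(np.linalg.norm(refined[1] - refined[0]))  # approximately 0.45
```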
In embodiments, the systems and methods disclosed herein may also include the use of the player models4604fitted to detected keypoints to create 3D reconstructions4620or to improve 3D reconstructions in combination with point cloud techniques. Point cloud techniques may include a hybrid system in which the player models4604may be used to replace areas where the point cloud reconstruction does not conform adequately to the model. In further examples, the point cloud techniques may include supplementing the point cloud in scenarios where the point cloud may have a low density of points. In embodiments, the improved localization of keypoints may include the use of player height information combined with face detection, gaze detection, posture detection, or the like to locate the point of view of players. In embodiments, the improved localization of keypoints may also include the use of camera calibration4630(receiving one or more video feeds4632), the 3D reconstruction4610, and projection onto video in order to improve player segmentation for broadcast video4640. In embodiments, the systems and methods disclosed herein may include using a state-based machine learning model with hierarchical states. By way of this example, the state-based machine learning model with hierarchical states may include input training state labels at the finest granularity. In embodiments, the machine learning model may be trained at the finest level of granularity as well as at intermediate levels of aggregated states. In embodiments, the output and cost function optimization may be at the highest level of state aggregation. In embodiments, the machine learning model may be trained using an ensemble of active learning methods for multiclass classification, including weighting of methods based on a confusion matrix and a cost function that may be used to optimize the distribution of qualitatively varied instances for active learning. FIG.51illustrates an example of a client device5100configured to display augmented content to a user according to some embodiments of the present disclosure. In the illustrated example, the client device5100may include a processing device5102, a storage device5104, a communication unit5106that effectuates communication between the client device and other devices via one or more communication networks (e.g., the Internet and/or a cellular network), and a user interface5108(e.g., a touchscreen, a monitor, a mouse, a keyboard, and the like). The processing device5102may include one or more processors and memory that stores computer-executable instructions that are executed by the one or more processors. The processing device5102may execute a video player application5200. In embodiments, the video player application5200is configured to allow a user to consume video and related content from different content channels (e.g., audio, video, and/or data channels). In some of the embodiments, the video and related content may be delivered in time-aligned content channels (e.g., a “smart pipe”), where the content may be indexed temporally and/or spatially. In embodiments, the spatial indexing may include indexing the pixels or groups of pixels of multiple streams, 3D pixels (e.g., voxels) or groups of 3D pixels, and/or objects (e.g., polygonal meshes used for animation, overlay graphics, and the like).
In these embodiments, a wide variety of elements may be indexed temporally (e.g., in relation to individual video frames) and/or spatially (e.g., in relation to pixels, groups of pixels, or “real world” locations depicted in the video frames). Examples of elements that may be indexed include events (match/game identifier), objects (players, game objects, objects in the environment such as the court or playing field) involved in an event, information and statistics relating to the event and locations, locations of areas corresponding to the environment (e.g., floor areas, background areas, signage areas) where information, augmentations, graphics, animations, and advertising can be displayed in a frame, indicia of what information, augmentation elements, and the like are available to augment a video feed in a content channel, combinations of content (e.g., particular combinations of audio, video, information, augmentation elements, replays, or other suitable elements), and/or references to other content channels corresponding to the event (such that end-users can select between streams). In this way, the video player may allow a user to interact with the video, such that the user can request the video player to display information relating to a time and/or location in the video feed, display relevant information relating to the event, switch between video feeds of the event, view advertisements, and the like. In these embodiments, the smart pipe may allow the video player5200to create dynamic content at the client device5100. FIG.52illustrates an example implementation of the video player application5200according to some embodiments of the present disclosure. The video player application5200may include a GUI module5202, an integration module5204, an access management module5206, a video transformation module5208, a time transformation module5210, and a data management module5212. The video player application5200may include additional or alternative modules not discussed herein without departing from the scope of the disclosure. In embodiments, the GUI module5202receives commands from a user and displays video content, including augmented video content, to the user via the user interface5108. In embodiments, the GUI module5202displays a menu/selection screen (e.g., drop down menus, selection elements, and/or search bars) and receives commands corresponding to the available menus/selection items from a user via the user interface5108. For example, the GUI module5202may receive an event selection via a drop down menu and/or a search bar/results page. In embodiments, an event selection may be indicative of a particular sport and/or a particular match. In response to an event selection, the GUI module5202may provide the event selection to the integration module5204. In response, the GUI module5202may receive a video stream (of one or more video streams capturing the selected event) from the video transformation module5208and may output a video corresponding to the video feed via the user interface5108. The GUI module5202may allow a user to provide commands with respect to the video content, including commands such as pause, fast forward, and rewind.
The GUI module5202may receive additional or alternative commands, such as “make a clip,” drill down commands (e.g., provide stats with respect to a player, display players on the playing surface, show statistics corresponding to a particular location, and the like), switch feed commands (e.g., switch to a different viewing angle), zoom in/zoom out commands, select link commands (e.g., selection of an advertisement), and the like. The integration module5204receives an initial user command to view a particular sport or game and instantiates an instance of a video player (also referred to as a “video player instance”). In embodiments, the integration module5204receives a source event identifier (ID), an access token, and/or a domain ID. The source event ID may indicate a particular game (e.g., MLB: Detroit Tigers v. Houston Astros). The access token may indicate a particular level of access that a user has with respect to a game or league (e.g., the user may access advanced content or MLB games may include multi-view feed). The domain ID may indicate a league or type of event (e.g., NBA, NFL, FIFA). In embodiments, the integration module may instantiate a video player instance in response to the source event ID, the domain ID, and the access token. The integration module5204may output the video player instance to the access management module5206. In some embodiments, the integration module5204may further output a time indicator to the access management module5206. A time indicator may be indicative of a time corresponding to a particular frame or group of frames within the video content. In some of these embodiments, the time indicator may be a wall time. Other time indicators, such as a relative stream (e.g., 10 seconds from t=0), may be used, however. The access management module5206receives the video player instance and manages security and/or access to video content and/or data by the video player from a multimedia system. In embodiments, the access management module5206may expose a top layer API to facilitate the ease of access to data by the video player instance. The access management module5206may determine the level of access to provide the video player instance based on the access token. In embodiments, the access management module5206implements a single exported SDK that allows a data source (e.g., multimedia servers) to manage access to data. In other embodiments, the access management module5206implements one or more customized exported SDKs that each contain respective modules for interacting with a respective data source. The access management module5206may be a pass through layer, whereby the video player instance is passed to the video transformation module5208. The video transformation module5208receives the video player instance and obtains video feeds and/or additional content provided by a multimedia server (or analogous device) that may be displayed with the video encoded in the video feeds. In embodiments, the video transformation module5208receives the video content and/or additional content from the data management module5212. In some of these embodiments, the video transformation module5208may receive a smart pipe that contains one or more video feeds, audio feeds, data feeds, and/or an index. In embodiments, the video feeds may be time-aligned video feeds, such that the video feeds offer different viewing angles or perspectives of the event to be displayed. In embodiments, the index may be a spatio-temporal index. 
In these embodiments, the spatio-temporal index identifies information associated with particular video frames of a video and/or particular locations depicted in the video frames. In some of these embodiments, the locations may be locations in relation to a playing surface (e.g., at the fifty yard line or at the free throw line) or defined in relation to individual pixels or groups of pixels. It is noted that the pixels may be two-dimensional pixels or three-dimensional pixels (e.g., voxels). The spatio-temporal index may index participants on a playing surface (e.g., players on a basketball court), statistics relating to the participants (e.g., Player A has scored 32 points), statistics relating to a location on the playing surface (e.g., Team A has made 30% of three-pointers from a particular area on a basketball court), advertisements, score bugs, graphics, and the like. In some embodiments, the spatio-temporal index may index wall times corresponding to various frames. For example, the spatio-temporal index may indicate a respective wall time for each video frame in a video feed (e.g., a real time at which the frame was captured/initially streamed). The video transformation module5208receives the video feeds and the index and may output a video to the GUI module5202. In embodiments, the video transformation module5208is configured to generate augmented video content and/or switch between different video feeds of the same event (e.g., different camera angles of the event). In embodiments, the video transformation module5208may overlay one or more GUI elements that receive user selections into the video being output. For example, the video transformation module5208may overlay one or more visual selection elements over the video feed currently being output by the GUI module5202. The visual selection elements may allow a user to view information relating to the event depicted in the video feed, to switch views, or to view a recent highlight. In response to the user providing a command via the user interface of the client device5100, the video transformation module5208may augment the currently displayed video feed with augmentation content, switch the video feed to another video feed, or perform other video transformation related operations. The video transformation module5208may receive a command to display augmentation content. For example, the video transformation module5208may receive a command to display information corresponding to a particular location (e.g., a pixel or group of pixels) and a particular frame. In response to the command, the video transformation module5208may reference the spatio-temporal index to determine an object (e.g., a player) that is located at the particular location in the particular frame. The video transformation module5208may retrieve information relating to the object. For example, the video transformation module5208may retrieve a name of a player or statistics relating to a player or a location on the playing surface. The video transformation module5208may augment the current video feed with the retrieved content. In embodiments, the video transformation module5208may request the content (e.g., information) from the multimedia server via the data management module5212. In other embodiments, the content may be transmitted in a data feed with the video feeds and the spatio-temporal index. 
In response to receiving the requested content (which may be textual or graphical), the video transformation module5208may overlay the requested content on the output video. The video transformation module5208may determine a location in each frame at which to display the requested data. In embodiments, the video transformation module5208may utilize the index to determine a location at which the requested content may be displayed, whereby the index may define locations in each frame where specific types of content may be displayed. In response to determining the location at which the requested content may be displayed, the video transformation module5208may overlay the content onto the video at the determined location. In another example, the video transformation module5208may receive a command to display an advertisement corresponding to a particular frame and location. In response to the command, the video transformation module5208determines the advertisement to display from the spatio-temporal index based on the particular frame and location. In embodiments, the video transformation module5208may retrieve the advertisement from the multimedia server (or another device). In other embodiments, the advertisement may be transmitted with the video feeds and the spatio-temporal index. In response to obtaining the advertisement, the video transformation module5208may determine a location at which the advertisement is to be displayed (e.g., in the manner discussed above), and may overlay the advertisement onto the video at the determined location. In embodiments, the video transformation module5208may switch between video feeds in response to a user command to switch feeds. In response to such a command, the video transformation module5208switches the video feed from the current video feed to a requested video feed, while maintaining time-alignment between the videos (i.e., the video continues at the same point in time but from a different feed). For example, in streaming a particular basketball game and receiving a request to change views, the video transformation module5208may switch from a sideline view to an under-the-basket view without interrupting the action of the game. The video transformation module5208may time-align the video feeds (i.e., the current video feed and the video feed being switched to) in any suitable manner. In some embodiments, the video transformation module5208obtains a wall time from the time transformation module5210corresponding to a current frame or upcoming frame. The video transformation module5208may provide a frame identifier of the current frame or the upcoming frame to the time transformation module5210. In embodiments, the frame identifier may be represented in block plus offset form (e.g., a block identifier and a number of frames within the block). In response to the frame identifier, the time transformation module5210may return a wall time corresponding to the frame identifier. The video transformation module5208may switch to the requested video feed, whereby the video transformation module5208begins playback at a frame corresponding to the received wall time. In these embodiments, the video transformation module5208may obtain the wall time corresponding to the current or upcoming frame from the time transformation module5210, and may obtain a frame identifier of a corresponding frame in the video feed being switched to based on the received wall time.
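A minimal sketch of the wall-time-based alignment used for feed switching, assuming the block-plus-offset frame identifiers and per-feed metadata (starting wall time, frames per block, frame rate) described in the surrounding paragraphs; names, formats, and numeric values are illustrative.

```python
from dataclasses import dataclass

@dataclass
class FeedMetadata:
    feed_id: str
    start_wall_time_s: float   # wall time at which the feed began being captured/broadcast
    frames_per_block: int
    frame_rate: float          # frames per second

def frame_to_wall_time(meta: FeedMetadata, block: int, offset: int) -> float:
    """Convert a (block, offset) frame identifier to a wall time."""
    frame_index = block * meta.frames_per_block + offset
    return meta.start_wall_time_s + frame_index / meta.frame_rate

def wall_time_to_frame(meta: FeedMetadata, wall_time_s: float) -> tuple:
    """Convert a wall time to the (block, offset) of the time-aligned frame in this feed."""
    frame_index = round((wall_time_s - meta.start_wall_time_s) * meta.frame_rate)
    return divmod(frame_index, meta.frames_per_block)

# Switching feeds: find the frame in feed B aligned with the current frame in feed A.
feed_a = FeedMetadata("sideline", start_wall_time_s=1000.0, frames_per_block=300, frame_rate=30.0)
feed_b = FeedMetadata("basket",   start_wall_time_s=1002.0, frames_per_block=300, frame_rate=30.0)
t = frame_to_wall_time(feed_a, block=12, offset=45)   # wall time of the current frame in feed A
print(wall_time_to_frame(feed_b, t))                  # aligned (block, offset) in feed B
```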
In some embodiments, the video transformation module5208may obtain a “block plus offset” of a frame in the video feed being switched to based on the wall time. The block plus offset may identify a particular frame within a video stream as a block identifier of a particular video frame and an offset indicating a number of frames into the block where the particular video frame is sequenced. In some of these embodiments, the video transformation module5208may provide the time transformation module5210with the wall time and an identifier of the video feed being switched to, and may receive a frame identifier in block plus offset format from the time transformation module5210. In some embodiments, the video transformation module5208may reference the index using a frame identifier of a current or upcoming frame in the current video feed to determine a time-aligned video frame in the requested video feed. It is noted that while the “block plus offset” format is described, other formats of frame identifiers may be used without departing from the scope of the disclosure. In response to obtaining a frame identifier, the video transformation module5208may switch to the requested video feed at the determined time-aligned video frame. For example, the video transformation module5208may queue up the requested video feed at the determined frame identifier. The video transformation module5208may then begin outputting video corresponding to the requested video feed at the determined frame identifier. In embodiments, the time transformation module5210receives an input time value in a first format and returns an output time value in a second format. For example, the time transformation module5210may receive a frame indicator in a particular format (e.g., block plus offset) that indicates a particular frame of a particular video feed (e.g., the currently displayed video feed of an event) and may return a wall time corresponding to the frame identifier (e.g., the time at which the particular frame was captured or was initially broadcast). In another example, the time transformation module5210receives a wall time indicating a particular time in a broadcast and a request for a frame identifier of a particular video feed. In response to the wall time and the frame identifier request, the time transformation module5210determines a frame identifier of a particular video frame within a particular video feed and may output the frame identifier in response to the request. The time transformation module5210may determine the output time in response to the input time in any suitable manner. In embodiments, the time transformation module5210may utilize an index corresponding to an event (e.g., the spatio-temporal index corresponding to an event) to determine a wall time in response to a frame identifier and/or a frame identifier in response to a wall time. In these embodiments, the spatio-temporal index may be keyed by frame identifiers and/or wall times, whereby the spatio-temporal index returns a wall time in response to a frame identifier and/or a frame identifier in response to a wall time and a video feed identifier. In other embodiments, the time transformation module5210calculates a wall time in response to a frame identifier and/or a frame identifier in response to a wall time.
In some of these embodiments, each video feed may include metadata that includes a starting wall time that indicates a wall time at which the respective video feed began being captured/broadcast, a number of frames per block, and a frame rate of the encoding. In these embodiments, the time transformation module5210may calculate a wall time in response to a frame identifier based on the starting time of the video feed indicated by the frame identifier, the number of frames per block, and the frame indicated by the frame identifier (e.g., the block identifier and the offset value). Similarly, the time transformation module5210may calculate a frame identifier of a requested video feed in response to a wall time based on the starting time of the requested video feed, the received wall time, the number of frames per block, and the encoding rate. In some embodiments, the time transformation module5210may be configured to transform a time with respect to a first video feed to a time with respect to a second video feed. For example, the time transformation module5210may receive a first frame indicator corresponding to a first video feed and may output a second frame indicator corresponding to a second video feed, where the first frame indicator and the second frame indicator respectively indicate time-aligned video frames. In some of these embodiments, the time transformation module5210may utilize an index corresponding to an event (e.g., the spatio-temporal index corresponding to an event) to determine the second frame identifier in response to the first frame identifier. In these embodiments, the spatio-temporal index may be keyed by frame identifiers and may index frame identifiers of video frames that are time-aligned with the video frame referenced by each respective frame identifier. In other embodiments, the time transformation module5210calculates the second frame identifier in response to the first frame identifier. In some of these embodiments, the time transformation module5210may convert the first frame identifier to a wall time, as discussed above, and then may calculate the second frame identifier based on the wall time, as described above. In embodiments, the data management module5212requests and/or receives data from external resources and provides the data to a requesting module. For example, the data management module5212may receive the one or more video feeds from a multimedia server. The data management module5212may further receive an index (e.g., spatio-temporal index) corresponding to an event being streamed. For example, in some embodiments, the data management module5212may receive a smart pipe corresponding to an event. The data management module5212may provide the one or more video feeds and the index to the video transformation module5208. In embodiments, the data management module5212may expose one or more APIs of the video player application to external resources, such as multimedia servers and/or related data servers (e.g., a server that provides game information such as player names, statistics, and the like). In some embodiments, the external resources may push data to the data management module5212. Additionally or alternatively, the data management module5212may be configured to pull the data from the external resources. In embodiments, the data management module5212may receive requests for data from the video transformation module5208.
For example, the data management module5212may receive a request for information relating to a particular frame identifier, a location within the frame indicated by a frame identifier, and/or an object depicted in the frame indicated by a frame identifier. In these embodiments, the data management module5212may obtain the requested information and may return the requested information to the video transformation module5208. In some embodiments, the external resource may push any information that is relevant to an event to the data management module5212. In these embodiments, the data management module5212may obtain the requested data from the pushed data. In other embodiments, the data management module5212may be configured to pull any requested data from the external resource. In these embodiments, the data management module5212may transmit a request to the external resource, whereby the request indicates the information sought. For example, the request may indicate a particular frame identifier, a location within the frame indicated by a frame identifier, or an object (e.g., a player) depicted in the frame indicated by the frame identifier. In response to the request, the data management module5212may receive the requested information, which is passed to the video transformation module5208. In embodiments, the data management module5212may be configured to obtain individual video feeds corresponding to an event. In some of these embodiments, the data management module5212may receive a request from the video transformation module5208for a particular video feed corresponding to an event. In response to the request, the data management module5212may return the requested video feed to the video transformation module5208. The video feed may have been pushed to the video application by an external resource (e.g., multimedia platform), or may be requested (pulled) from the external resource in response to the request. With reference toFIG.47, the machine learning model may include active learning and active quality assurance on a live spatiotemporal machine learning workflow4700in accordance with the various embodiments. The machine learning workflow4700includes a machine learning (ML) algorithm4702that may produce live and automatic machine learning (ML) classification output4704(with minimum delay) as well as selected events for human quality assurance (QA)4708based on live spatiotemporal data4710. In embodiments, the live spatiotemporal machine learning workflow4700includes the data from the human QA process that may then be fed back into a machine learning (ML) algorithm4720(which may be the same as the ML algorithm4702), which may be rerun on the corresponding segments of data, to produce a time-delayed classification output4724with improved classification accuracy of neighboring events, where the time delay corresponds to the QA process. In embodiments, the machine learning workflow4700includes data from the QA process4708being fed into ML training data4722to improve the ML algorithm models for subsequent segments, such as improving the ML algorithm4702and/or the ML algorithm4720. Live spatiotemporal data4730may be aligned with other imperfect sources of data related to a sequence of spatial-temporal events.
In embodiments, the alignment across imperfect sources of data related to a sequence of spatial-temporal events may include alignment using novel generalized distance metrics for spatiotemporal sequences combining event durations, ordering of events, additions/deletions of events, a spatial distance of events, and the like. In embodiments, the systems and methods disclosed herein may include modeling and dynamically interacting with an n-dimensional point-cloud. By way of this example, each point may be represented as an n-sphere whose radius may be determined by letting each n-sphere grow until it comes into contact with a neighboring n-sphere from a specified subset of the given point-cloud. This method may be similar to a Voronoi diagram in that it may allocate a single n-dimensional cell for every point in the given cloud, with two distinct advantages. The first advantage is that the generative kernel of each cell may also be its centroid. The second advantage is that the resulting model shifts continuously when points are relocated in a continuous fashion (e.g., as a function of time in an animation, or the like). In embodiments, ten basketball players may be represented as ten nodes that are divided into two subsets of five teammates. At any given moment, each player's cell may be included in a circle extending in radius until it comes to be mutually tangent with an opponent's cell. By way of this example, players on the same team will have cells that overlap. In embodiments, the systems and methods disclosed herein may include a method for modeling locale as a function of time, some other specified or predetermined variable, or the like. In embodiments, coordinates of a given point or plurality of points are repeatedly sampled over a given window of time. By way of this example, the sampled coordinates may then be used to generate a convex hull, and this procedure may be repeated as desired and may yield a plurality of hulls that may be stacked for a discretized view of spatial variability over time. In embodiments, a single soccer player might have their location on a pitch sampled every second over the course of two minutes leading to a point cloud of location data and an associated convex hull. By way of this example, the process may begin anew with each two-minute window and the full assemblage of generated hulls may be, for example, rendered in a translucent fashion and may be layered so as to yield a map of the given player's region of activity. In embodiments, the systems and methods disclosed herein may include a method for sampling and modeling data by applying the recursive logic of a quadtree to a topologically deformed input or output space. In embodiments, the location of shots in a basketball game may be sampled in arc-shaped bins, which may be partitioned by angle-of-incidence to the basket and the natural logarithm of distance from the basket, in turn yielding bins which may be subdivided and visualized according to the same rules governing a rectilinear quadtree. In embodiments, the systems and methods disclosed herein may include a method for modeling multivariate point-cloud data such that location coordinates map to the location, while velocity (or some other relevant vector) may be represented as a contour map of potential displacements at various time intervals. 
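One way to picture the n-sphere cell construction for the basketball example above is to give each player a circle whose radius is half the distance to the nearest opponent, so that the circles of nearest opposing players become mutually tangent while teammates' circles remain free to overlap. This is a minimal interpretation of the description, not the specification's own algorithm; the data layout (two 5x2 coordinate arrays on a 94 by 50 court) is assumed for illustration.

```python
import numpy as np

def cell_radii(team_xy: np.ndarray, opponent_xy: np.ndarray) -> np.ndarray:
    """Radius of each player's circle: grow until tangent with the nearest
    opponent's circle.  If both circles grow at the same rate they meet at
    the midpoint, so each radius is half the distance to the nearest opponent."""
    diffs = team_xy[:, None, :] - opponent_xy[None, :, :]   # 5 x 5 x 2
    dists = np.linalg.norm(diffs, axis=-1)                  # pairwise distances
    return 0.5 * dists.min(axis=1)

# Example: ten players as two subsets of five teammates on a 94 x 50 court.
rng = np.random.default_rng(0)
team_a = rng.uniform([0, 0], [94, 50], size=(5, 2))
team_b = rng.uniform([0, 0], [94, 50], size=(5, 2))
print(cell_radii(team_a, team_b))   # one radius per player on team A
print(cell_radii(team_b, team_a))   # one radius per player on team B
```

Because each radius is a continuous function of the player positions, the resulting cells shift smoothly as the players move, which is the second advantage noted above.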
In embodiments, a soccer player running down a pitch may be represented by a node surrounded by nested ellipses each indicating a horizon of displacement for a given window of time. In embodiments, the systems and methods disclosed herein may include a method for modeling and dynamically interacting with a directed acyclic graph such that every node may be rendered along a single line, while the edges connecting nodes may be rendered as curves deviating from this line in accordance with a specified variable. In embodiments, these edges may be visualized as parabolic curves wherein the height of each may correspond to the flow, duration, latency, or the like of the process represented by the given edge. The methods and systems disclosed herein may include methods and systems for enabling a user to express preferences relating to display of video content and may include using machine learning to develop an understanding of at least one event, one metric related to the event, or relationships between events, metrics, venue, or the like within at least one video feed to determine at least one type for the event; automatically, under computer control, extracting the video content displaying the event and associating the machine learning understanding of the type for the event with the video content in a video content data structure; providing a user interface by which a user can indicate a preference for at least one type of content; and upon receiving an indication of the preference by the user, retrieving at least one video content data structure that was determined by the machine learning to have content of the type preferred by the user and providing the user with a video feed containing the content of the preferred type. In embodiments, the user interface is of at least one of a mobile application, a browser, a desktop application, a remote control device, a tablet, a touch screen device, a virtual reality or augmented reality headset, and a smart phone. In embodiments, the user interface further comprises an element for allowing a user to indicate a preference as to how content will be presented to the user. In embodiments, the machine learning further comprises determining an understanding of a context for the event and the context is stored with the video content data structure. In embodiments, the user interface further comprises an element for allowing a user to indicate a preference for at least one context. In embodiments, upon receiving an indication of a preference for a context, video content corresponding to the context preference is retrieved and displayed to the user. In embodiments, the context comprises at least one of the presence of a preferred player in the video feed, a preferred matchup of players in the video feed, a preferred team in the video feed, and a preferred matchup of teams in the video feed. In embodiments, the user interface allows a user to select at least one of a metric and a graphic element to be displayed on the video feed, wherein at least one of the metric and the graphic is based at least in part on the machine understanding. 
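The arc-diagram rendering of a directed acyclic graph described above, with nodes laid out along a single line and edges drawn as parabolic curves whose heights encode a variable such as flow, duration, or latency, can be sketched as follows. This is an illustrative rendering only; the node positions, edge list, and use of matplotlib are assumptions, not part of the specification.

```python
import numpy as np
import matplotlib.pyplot as plt

# Nodes of a DAG laid out along one horizontal line; each edge carries a value
# (e.g., duration or latency) that sets the height of its parabolic arc.
node_x = {"ingest": 0, "track": 1, "classify": 2, "index": 3, "publish": 4}
edges = [("ingest", "track", 1.0), ("track", "classify", 2.5),
         ("ingest", "classify", 0.8), ("classify", "publish", 1.5),
         ("track", "index", 1.2), ("index", "publish", 0.6)]

fig, ax = plt.subplots()
ax.scatter(list(node_x.values()), [0] * len(node_x), zorder=3)
for name, x in node_x.items():
    ax.annotate(name, (x, -0.15), ha="center")

for src, dst, height in edges:
    x0, x1 = node_x[src], node_x[dst]
    xs = np.linspace(x0, x1, 50)
    # Parabola through (x0, 0) and (x1, 0) whose peak equals the edge value.
    ys = height * 4 * (xs - x0) * (x1 - xs) / (x1 - x0) ** 2
    ax.plot(xs, ys)

ax.set_ylim(-0.5, 3)
ax.axis("off")
plt.show()
```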
The methods and systems disclosed herein may include methods and systems for enabling a mobile application allowing user interaction with video content and may include taking a video feed; using machine learning to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; producing a video content data structure that includes the associated context information; and using the context information for a plurality of such video content data structures to generate, automatically under computer control, a story or video clip that includes the video content data structure, wherein the content of the story is based on a user preference. In embodiments, the user preference for a type of content is based on at least one of a user expressed preference and a preference that is inferred based on user interaction with an item of content. The methods and systems disclosed herein may include methods and systems for enabling a user to express preferences relating to display of video content and may include a machine learning facility for developing an understanding of at least one event within at least one video feed to determine at least one type for the event; a video production facility for automatically, under computer control, extracting the video content displaying the event and associating the machine learning understanding of the type for the event with the video content in a video content data structure; a server for serving data to a user interface by which a user can indicate a preference for at least one type of content; and upon receiving at the server an indication of the preference by the user, retrieving at least one video content data structure that was determined by the machine learning to have content of the type preferred by the user and providing the user with a video feed containing the content of the preferred type. In embodiments, the user interface is of at least one of a mobile application, a browser, a desktop application, a remote control device, a tablet, and a smart phone. In embodiments, the user interface further comprises an element for allowing a user to indicate a preference as to how content will be presented to the user. In embodiments, the machine learning further comprises determining an understanding of a context for the event and the context is stored with the video content data structure. In embodiments, the user interface further comprises an element for allowing a user to indicate a preference for at least one context. In embodiments, upon receiving an indication of a preference for a context, video content corresponding to the context preference is retrieved and displayed to the user. In embodiments, the context comprises at least one of the presence of a preferred player in the video feed, a preferred matchup of players in the video feed, a preferred team in the video feed, and a preferred matchup of teams in the video feed. In embodiments, the user interface allows a user to select at least one of a metric and a graphic element to be displayed on the video feed, wherein the metric is based at least in part on the machine understanding. 
The methods and systems disclosed herein may include methods and systems delivering personalized video content and may include using machine learning to develop an understanding of at least one event within at least one video feed to determine at least one type for the event; automatically, under computer control, extracting the video content displaying the event and associating the machine learning understanding of the type for the event with the video content in a video content data structure; developing a personal profile for a user based on at least one of expressed preferences of the user, information about the user, and information collected about actions taken by the user with respect to at least one type of video content; and upon receiving an indication of the user profile, retrieving at least one video content data structure that was determined by the machine learning to have content of the type likely to be preferred by the user based on the user profile. The methods and systems disclosed herein may include methods and systems for delivering personalized video content and may include using machine learning to develop an understanding of at least one event within at least one video feed to determine at least one type for the event, wherein the video feed is a video feed for a professional game; using machine learning to develop an understanding of at least one event within a data feed relating to the motion of a non-professional player; based on the machine learning understanding of the video feed for the professional game and the data feed of the motion of the non-professional player, automatically, under computer control, providing an enhanced video feed that represents the non-professional player playing within the context of the professional game. In embodiments, the methods and systems may further include providing a facility having cameras for capturing 3D motion data and capturing video of a non-professional player to provide the data feed for the non-professional player. In embodiments, the non-professional player is represented by mixing video of the non-professional player with video of the professional game. In embodiments, the non-professional player is represented as an animation having attributes based on the data feed about the non-professional player. The methods and systems disclosed herein may also include one or more of the following features and capabilities: spatiotemporal pattern recognition (including active learning of complex patterns and learning of actions such as P&R, postups, play calls); hybrid methods for producing high quality labels, combining automated candidate generation from XYZ data, and manual refinement; indexing of video by automated recognition of game clock; presentation of aligned optical and video; new markings using combined display, both manual and automated (via pose detection etc.); metrics: shot quality, rebounding, defense and the like; visualizations such as Voronoi, heatmap distribution, etc.; embodiment on various devices; video enhancement with metrics & visualizations; interactive display using both animations and video; gesture and touch interactions for sports coaching and commentator displays; and cleaning of XYZ data using, for example, HMI, PBP, video, hybrid validation. Further details as to data cleaning204are provided herein. Raw input XYZ is frequently noisy, missing, or wrong. XYZ data is also delivered with attached basic events such as possession, pass, dribble, shot. These are frequently incorrect. 
This is important because event identification further down the process (Spatiotemporal Pattern Recognition) sometimes depends on the correctness of these basic events. As noted above, for example, if two players' XY positions are switched, then "over" vs. "under" defense would be incorrectly switched, since the players' relative positioning is used as a critical feature for the classification. Also, PBP data sources are occasionally incorrect. First, one may use validation algorithms to detect all events, including the basic events such as possession, pass, dribble, shot, and rebound that are provided with the XYZ data. Possession/Non-possession may use a Hidden Markov Model to best fit the data to these states. Shots and rebounds may use the possession model outputs, combined with 1) projected destination of the ball, and 2) PBP information. Dribbles may be identified using a trained ML algorithm and also using the output of the possession model. Specifically, once possessions are determined, dribbles may be identified with a hidden Markov model. The hidden Markov model consists of three states:
1. Holding the ball while the player is still able to dribble.
2. Dribbling the ball.
3. Holding the ball after the player has already dribbled.
A player starts in State 1 when he gains possession of the ball. At all times players are allowed to transition to either their current state, or the state with a number one higher than their current state, if such a state exists. The players' likelihood of staying in their current state or transitioning to another state may be determined by the transition probabilities of the model as well as the observations. The transition probabilities may be learned empirically from the training data. The observations of the model consist of the player's speed, which is placed into two categories, one for fast movement and one for slow movement, as well as the ball's height, which is placed into categories for low and high height. The cross product of these two observations represents the observation space for the model. Similar to the transition probabilities, the observation probabilities, given a particular state, may be learned empirically from the training data. Once these probabilities are known, the model is fully characterized and may be used to classify when the player is dribbling on unknown data. Once it is known that the player is dribbling, it remains to be determined when the actual dribbles occur. This may be done with a Support Vector Machine that uses domain-specific information about the ball and player, such as the height of the ball, as a feature to determine whether at that instant the player is dribbling. A filtering pass may also be applied to the resulting dribbles to ensure that they are sensibly separated, so that, for instance, two dribbles do not occur within 0.04 seconds of each other. Returning to the discussion of the algorithms, these algorithms decrease the basic event labeling error rate by a significant factor, such as about 50%. Second, the system has a library of anomaly detection algorithms to identify potential problems in the data. These include temporal discontinuities (intervals of missing data are flagged); spatial discontinuities (objects traveling in a non-smooth motion, "jumping"); and interpolation detection (data that is too smooth, indicating that post-processing was done by the data supplier to interpolate between known data points in order to fill in missing data). 
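To make the three-state dribble model above concrete, the sketch below runs Viterbi decoding over a left-to-right HMM with the states listed above and the four discrete observations formed by the cross product of {slow, fast} player speed and {low, high} ball height. The transition and emission probabilities shown are placeholder values standing in for probabilities that would be learned empirically from training data, and the names are illustrative only.

```python
import numpy as np

STATES = ["hold_pre_dribble", "dribbling", "hold_post_dribble"]
# Observation symbols: cross product of player speed {slow, fast} and ball height {low, high}.
OBS = {("slow", "high"): 0, ("slow", "low"): 1, ("fast", "high"): 2, ("fast", "low"): 3}

# Left-to-right transitions: stay in the current state or move to the next-numbered
# state only.  Placeholder values; learned empirically in practice.
A = np.array([[0.90, 0.10, 0.00],
              [0.00, 0.95, 0.05],
              [0.00, 0.00, 1.00]])
# Emission probabilities per state over the four observation symbols (placeholders).
B = np.array([[0.50, 0.30, 0.15, 0.05],    # holding, still able to dribble
              [0.05, 0.35, 0.10, 0.50],    # dribbling: fast movement, low ball
              [0.45, 0.35, 0.15, 0.05]])   # holding after having dribbled
pi = np.array([1.0, 0.0, 0.0])             # a possession always starts in State 1

def viterbi(obs_symbols):
    """Most likely state sequence for one possession's observation sequence."""
    T, N = len(obs_symbols), len(STATES)
    with np.errstate(divide="ignore"):
        logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    logv = np.full((T, N), -np.inf)
    back = np.zeros((T, N), dtype=int)
    logv[0] = logpi + logB[:, obs_symbols[0]]
    for t in range(1, T):
        scores = logv[t - 1][:, None] + logA          # N x N candidate scores
        back[t] = scores.argmax(axis=0)
        logv[t] = scores.max(axis=0) + logB[:, obs_symbols[t]]
    path = [int(logv[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [STATES[s] for s in reversed(path)]

frames = [("slow", "high"), ("fast", "low"), ("fast", "low"), ("slow", "high")]
print(viterbi([OBS[f] for f in frames]))
```

The per-frame dribble instants and the 0.04-second separation check described above would then be applied only to the frames decoded as the dribbling state.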
This problem data is flagged for human review so that events detected during these periods are subject to further scrutiny. Spatio-player tracking may be undertaken in at least two types, as well as in a hybrid combined type. For tracking with broadcast video, the broadcast video is obtained from multiple broadcast video feeds. Typically, this will include a standard “from the stands view” from the center stands midway-up, a backboard view, a stands view from a lower angle from each corner, and potentially other views. Optionally, PTZ (pan tilt zoom) sensor information from each camera is also returned. An alternative is a Special Camera Setup method. Instead of broadcast feeds, this uses feeds from cameras that are mounted specifically for the purposes of player tracking. The cameras are typically fixed in terms of their location, pan, tilt, zoom. These cameras are typically mounted at high overhead angles; in the current instantiation, typically along the overhead catwalks above the court. A Hybrid/Combined System may be used. This system would use both broadcast feeds and feeds from the purpose-mounted cameras. By combining both input systems, accuracy is improved. Also, the outputs are ready to be passed on to the DataFX pipeline for immediate processing, since the DataFX will be painting graphics on top of the already-processed broadcast feeds. Where broadcast video is used, the camera pose must be solved in each frame, since the PTZ may change from frame to frame. Optionally, cameras that have PTZ sensors may return this info to the system, and the PTZ inputs are used as initial solutions for the camera pose solver. If this initialization is deemed correct by the algorithm, it will be used as the final result; otherwise, refinement will occur until the system receives a useable solution. As described above, players may be identified by patches of color on the court. The corresponding positions are known since the camera pose is known, and we can perform the proper projections between 3D space and pixel space. Where purpose mounted cameras are used, multiple levels of resolution may be involved. Certain areas of the court or field require more sensitivity, e.g., on some courts, the color of the “paint” area makes it difficult to track players when they are in the paint. Extra cameras with higher dynamic range and higher zoom are focused on these areas. The extra sensitivity enables the computer vision techniques to train separate algorithms for different portions of the court, tuning each algorithm to its type of inputs and the difficulty of that task. In a combination system, by combining the fixed and broadcast video feeds, the outputs of a player tracking system can feed directly into the DataFX production, enabling near-real-time DataFX. Broadcast video may also produce high-definition samples that can be used to increase accuracy. The methods and systems disclosed herein may include methods and systems for enabling interaction with a broadcast video content stream and may include a machine learning facility for developing an understanding of at least one event within a video feed for a video broadcast, the understanding including identifying context information relating to the event; and a touch screen user interface by which a broadcaster can interact with the video feed, wherein the options for broadcaster interaction are based on the context information, wherein the interaction with the touch screen controls the content of the broadcast video event. 
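The projection between 3D court coordinates and pixel coordinates mentioned above is the standard pinhole camera model: once a frame's camera pose (rotation and translation) and intrinsics are solved, court points can be mapped into the image and image detections related back to court positions. The snippet below is a generic sketch of that projection, with made-up intrinsics and pose rather than values from any actual pose solver.

```python
import numpy as np

def project_points(points_3d: np.ndarray, K: np.ndarray,
                   R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Project Nx3 world points (court coordinates, e.g., in feet) to Nx2 pixels
    using the pinhole model  x ~ K [R | t] X."""
    cam = R @ points_3d.T + t.reshape(3, 1)     # world -> camera coordinates
    pix = K @ cam                               # camera -> homogeneous pixels
    return (pix[:2] / pix[2]).T                 # perspective divide

# Made-up intrinsics for a 1920x1080 broadcast frame.
K = np.array([[800.0, 0.0, 960.0],
              [0.0, 800.0, 540.0],
              [0.0, 0.0, 1.0]])
# Made-up pose: a camera tilted down toward the court from the stands.
angle = np.deg2rad(25.0)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(angle), -np.sin(angle)],
              [0.0, np.sin(angle), np.cos(angle)]])
t = np.array([-47.0, 5.0, 60.0])

court_corners = np.array([[0.0, 0.0, 0.0], [94.0, 0.0, 0.0],
                          [94.0, 50.0, 0.0], [0.0, 50.0, 0.0]])
print(project_points(court_corners, K, R, t))   # pixel locations of the corners
```

Inverting the same relation for points known to lie on the court plane is what lets color patches detected in pixel space be placed at court positions once the pose is solved for that frame.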
In embodiments, the touch screen interface is a large screen adapted to be seen by viewers of the video broadcast as the broadcaster uses the touch screen. In embodiments, a smaller touch screen is used by a commentator on air to control the information content being displayed, and the images/video on the touch screen is simultaneously displayed on a larger screen that is filmed and broadcast or is simultaneously displayed directly in the broadcast feed. In embodiments, the broadcaster can select from a plurality of context-relevant metrics, graphics, or combinations thereof to be displayed on the screen. In embodiments, the broadcaster can display a plurality of video feeds that have similar contexts as determined by the machine learning facility. In embodiments, the similarity of contexts is determined by comparing events within the video feeds. In embodiments, the broadcaster can display a superimposed view of at least two video feeds to facilitate a comparison of events from a plurality of video feeds. In embodiments, the comparison is of similar players from different, similar, or identical time periods. In embodiments, a similarity of players is determined by machine understanding of the characteristics of the players from the different time periods. In embodiments, the broadcaster can display a plurality of highlights that are automatically determined by a machine understanding of a live sports event that is the subject of the video feed. In embodiments, the highlights are determined based on similarity to highlights that have been identified for other events. The methods and systems disclosed herein may include methods and systems for enabling interaction with a broadcast video content stream and may include developing a machine learning understanding of at least one event within a video feed for a video broadcast, the understanding including identifying context information relating to the event; and providing a touch screen user interface by which a broadcaster can interact with the video feed, wherein the options for broadcaster interaction are based on the context information, wherein the interaction with the touch screen controls the content of the broadcast video event. In embodiments, the touch screen interface is a large screen adapted to be seen by viewers of the video broadcast as the broadcaster uses the touch screen. In embodiments, the broadcaster can select from a plurality of context-relevant metrics to be displayed on the screen. In embodiments, the broadcaster can display a plurality of video feeds that have similar contexts as determined by the machine learning facility. In embodiments, the similarity of contexts is determined by comparing events within the video feeds. In embodiments, the broadcaster can display a superimposed view of at least two video feeds to facilitate a comparison of events from a plurality of video feeds. In embodiments, the comparison is of similar players from different time periods. In embodiments, a similarity of players is determined by machine understanding of the characteristics of the players from the different time periods. In embodiments, the broadcaster can display a plurality of highlights that are automatically determined by a machine understanding of a live sports event that is the subject of the video feed. In embodiments, the highlights are determined based on similarity to highlights that have been identified for other events. 
The methods and systems disclosed herein may include methods and systems for enabling interaction with a broadcast video content stream and may include taking a video feed; using machine learning to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; producing a video content data structure that includes the associated context information; and providing an application by which a user can interact with the video content data structure, wherein the options for user interaction are based on the context information, wherein the interaction with the video content data structure controls the presentation of a broadcast video event on a display screen. Methods and systems disclosed herein may include tracklet stitching. Optical player tracking results in short to medium length tracklets, which typically end when the system loses track of a player or the player collides with (or passes close to) another player. Using team identification and other attributes, algorithms can stitch these tracklets together. Where a human being is in the loop, systems may be designed for rapid interaction and for disambiguation and error handling. Such a system is designed to optimize human interaction with the system. Novel interfaces may be provided to specify the motion of multiple moving actors simultaneously, without having to match up movements frame by frame. In embodiments, custom clipping is used for content creation, such as involving OCR. Machine vision techniques may be used to automatically locate the "score bug" and determine the location of the game clock, score, and quarter information. This information is read and recognized by OCR algorithms. Post-processing algorithms using various filtering techniques are used to resolve issues in the OCR. Kalman filtering/HMMs may be used to detect errors and correct them. Probabilistic outputs (which measure the degree of confidence) assist in this error detection/correction. Sometimes, a score is nonexistent or cannot be detected automatically (e.g., sometimes during PIP or split screens). In these cases, remaining inconsistencies or missing data are resolved with the assistance of human input. Human input is designed to be sparse so that labelers do not have to provide input at every frame. Interpolation and other heuristics are used to fill in the gaps. Consistency checking is done to verify the game clock. For alignment2112, as discussed in connection withFIG.21, another advance is to use machine vision techniques to verify some of the events. For example, video of a made shot will typically show the score being increased or will show a ball going through a hoop. Either kind of automatic observation serves to help the alignment process result in the correct video frames being shown to the end user. In accordance with an exemplary and non-limiting embodiment, augmented or enhanced video with extracted semantics-based experience is provided based, at least in part, on 3D position/motion data. In accordance with other exemplary embodiments, there is provided embeddable app content for augmented video with an extracted semantics-based experience. 
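Tracklet stitching as described above can be framed as an assignment problem: the end of each broken tracklet is matched to the start of a later tracklet using a cost that combines the spatial gap, the time gap, and agreement of attributes such as team identity. The sketch below is one minimal way to set that up with a Hungarian assignment; the Tracklet structure and the cost weighting are assumptions for illustration, not the specification's algorithm.

```python
import numpy as np
from dataclasses import dataclass
from scipy.optimize import linear_sum_assignment

@dataclass
class Tracklet:
    start_frame: int
    start_xy: tuple      # first observed court position (x, y)
    end_frame: int
    end_xy: tuple        # last observed court position (x, y)
    team: str            # team identification attribute

def stitch(ended: list, started: list, max_cost: float = 30.0) -> list:
    """Pair tracklets that ended with tracklets that started later in time."""
    cost = np.full((len(ended), len(started)), np.inf)
    for i, a in enumerate(ended):
        for j, b in enumerate(started):
            if b.start_frame <= a.end_frame or a.team != b.team:
                continue                       # forward-in-time, same-team joins only
            gap_frames = b.start_frame - a.end_frame
            gap_dist = np.hypot(b.start_xy[0] - a.end_xy[0],
                                b.start_xy[1] - a.end_xy[1])
            cost[i, j] = gap_dist + 0.5 * gap_frames   # assumed weighting
    finite = np.where(np.isinf(cost), 1e9, cost)       # solver needs finite costs
    rows, cols = linear_sum_assignment(finite)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_cost]
```

Pairings whose cost exceeds the threshold are simply left unstitched, which is where the rapid human-in-the-loop disambiguation described above would take over.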
In yet another exemplary embodiment, there is provided the ability to automatically detect the court/field, and relative positioning of the camera, in (near) real time using computer vision techniques. This may be combined with automatic rotoscoping of the players in order to produce dynamic augmented video content. The methods and systems disclosed herein may include methods and systems for embedding video content in an application and may include taking a video feed; using machine learning to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; producing a video content data structure that includes the associated context information; taking an application that displays video content; and embedding the video content data structure in the application. In embodiments, the user interface of the application offers the user the option to control the presentation of the video content from the video content data structure in the application. In embodiments, the control of the presentation is based on at least one of a user preference and a user profile. In embodiments, the application is a mobile application that provides a story about an event and wherein the video content data structure comprises at least one of a content card and a digital still image. The methods and systems disclosed herein may include methods and systems for enabling a mobile application that allows user interaction with video content and may include a video ingestion facility for taking a video feed; a machine learning facility for developing an understanding of an event within the video feed, the understanding including identifying context information relating to the event; and a video production facility for automatically, under computer control, extracting the content displaying the event, associating the extracted content with the context information and producing a video content data structure that includes the associated context information; and using the context information for a plurality of such video content data structures to generate, automatically under computer control, a story that includes a sequence of the video content data structures. In embodiments, the content of the story is based on a user profile that is based on at least one of an expressed user preference, information about a user interaction with video content, and demographic information about the user. In embodiments, the methods and systems may further include determining a pattern relating to a plurality of events in the video feed and associating the determined pattern with the video content data structure as additional context information. In embodiments, the pattern relates to a highlight event within the video feed. In embodiments, the highlight event is associated with at least one of a player and a team. In embodiments, the embedded application allows a user to indicate at least one of a player and a team for which the user wishes to obtain video feeds containing the highlight events. In embodiments, the pattern relates to a comparison of events occurring at least one of within the video feed or within a plurality of video feeds. In embodiments, the comparison is between events occurring over time. 
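Throughout these embodiments, the central artifact is a video content data structure that bundles an extracted clip with its machine-derived context. A minimal sketch of what such a record could look like, and how an application might filter records against user preferences, is given below; the field names are illustrative assumptions, not a schema from the specification.

```python
from dataclasses import dataclass, field

@dataclass
class VideoContentItem:
    clip_uri: str                    # location of the extracted video cut
    event_type: str                  # machine-learned type, e.g., "pick_and_roll"
    start_wall_time: float           # wall-time bounds of the event in the feed
    end_wall_time: float
    context: dict = field(default_factory=dict)   # players, teams, matchup, game state
    metrics: dict = field(default_factory=dict)   # machine-derived metrics for overlays

def filter_by_preference(items, preferred_types, preferred_players=()):
    """Select items whose machine-learned type and context match user preferences."""
    selected = []
    for item in items:
        if item.event_type not in preferred_types:
            continue
        players = item.context.get("players", [])
        if preferred_players and not set(preferred_players) & set(players):
            continue
        selected.append(item)
    return selected
```

A story or an embedded application view is then simply an ordered selection of such items, assembled according to the user profile or the application's own context.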
In embodiments, the embedded application allows a user to select at least one player to obtain a video providing a comparison between the player and at least one of a past representation of the same player and a representation of another player. In embodiments, the pattern is a cause-and-effect pattern related to the occurrence of a following type of event after the occurrence of a pre-cursor type of event. In embodiments, the embedded application allows the user to review video cuts in a sequence that demonstrate the cause-and-effect pattern. In embodiments, the application provides a user interface for allowing a user to enter at least one of text and audio input to provide a narrative for a sequence of events within the video feed. In embodiments, the user may select a sequence of video events from within the feed for display in the application. In embodiments, upon accepting the user narrative, the system automatically generates an electronic story containing the events from the video feed and the narrative. The methods and systems disclosed herein may include methods and systems for enabling a mobile application that allows user interaction with video content and may include taking a video feed; using a machine learning facility to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; and automatically, under computer control, extracting the content displaying the event, associating the extracted content with the context information and producing a video content data structure that includes the associated context information. In embodiments, the methods and systems may further include using the context information for a plurality of such video content data structures to generate, automatically under computer control, a story that includes a sequence of the video content data structures. In embodiments, the user may interact with an application, such as on a phone, laptop, or desktop, or with a remote control, to control the display of broadcast video. As noted above in connection with interaction with a mobile application, options for user interaction may be customized based on the context of an event, such as by offering options to display context-relevant metrics for the event. These selections may be used to control the display of broadcast video by the user, such as by selecting preferred, context-relevant metrics that appear as overlays, sidebars, scrolling information, or the like on the video display as various types of events take place in the video stream. For example, a user may select settings for a context like a three point shot attempt, so that when the video displays three point shot attempts, particular metrics (e.g., the average success percentage of the shooter) are shown as overlays above the head of the shooter in the video. The methods and systems disclosed herein may include methods and systems for personalizing content for each type of user based on determining the context of the content through machine analysis of the content and based on an indication by the user of a preference for a type of presentation of the content. 
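The example above, in which a viewer chooses which metrics appear for a given context such as a three point shot attempt, amounts to a mapping from machine-identified context types to display settings. The snippet below is a trivial illustration of that idea, with hypothetical context names and metric keys.

```python
# User preferences: for each machine-identified context, which metrics to overlay
# and where to anchor them.  All names here are illustrative assumptions.
overlay_preferences = {
    "three_point_attempt": {"metrics": ["shooter_3pt_pct", "shot_quality"],
                            "anchor": "above_shooter"},
    "pick_and_roll": {"metrics": ["screen_effectiveness", "defense_type"],
                      "anchor": "sidebar"},
}

def overlays_for_event(event_type: str, event_metrics: dict) -> dict:
    """Return the overlay payload for an event, honoring the user's preferences."""
    prefs = overlay_preferences.get(event_type)
    if prefs is None:
        return {}
    values = {m: event_metrics.get(m) for m in prefs["metrics"] if m in event_metrics}
    return {"anchor": prefs["anchor"], "values": values}

print(overlays_for_event("three_point_attempt",
                         {"shooter_3pt_pct": 0.41, "shot_quality": 0.55}))
```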
The methods and systems disclosed herein may include methods and systems for enabling a user to express preferences relating to display of video content and may include: taking a video feed; using machine learning to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; producing a video content data structure that includes the associated context information; and providing a user interface by which a user can indicate a preference for how content that is associated with a particular type of context will be presented to the user. In embodiments, a user may be presented with an interface element for a mobile application, browser, desktop application, remote control, tablet, smart phone, or the like, for indicating a preference as to how content will be presented to the user. In embodiments, the preference may be indicated for a particular context, such as a context determined by a machine understanding of an event. In embodiments, a user may select to see certain metrics, graphics or additional information overlaid on top of the existing broadcast for certain types of semantic events, such as a player's expected field goal percentage when they possess the ball or the type and effectiveness of defense being played on a pick and roll. The methods and systems disclosed herein may include methods and systems for automatically generating stories/content based on the personal profile of a viewer and their preferences or selections of contextualized content. The methods and systems disclosed herein may include methods and systems for enabling a mobile application allowing user interaction with video content and may include taking a video feed; using machine learning to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; producing a video content data structure that includes the associated context information; and using the context information for a plurality of such video content data structures to generate, automatically under computer control, a story that includes the video content data structures, wherein the content of the story is based on a user preference. In embodiments, the user preference for a type of content is based on at least one of a user expressed preference and a preference that is inferred based on user interaction with an item of content. In embodiments, items of content that are associated, based on machine understanding, with particular events in particular contexts can be linked together, or linked with other content, to produce modified content such as stories. For example, a game summary, such as extracted from an online report about an event, may be augmented with machine-extracted highlight cuts that correspond to elements featured in the game summary, such as highlights of important plays, images of particular players, and the like. These stories can be customized for a user, such as linking a story about a game played by the user's favorite team with video cuts of the user's favorite player that were taken during the game. 
The methods and systems disclosed herein may include methods and systems for using machine learning to extract context information and semantically relevant events and situations from a video content stream, such that the events and situations may be presented according to the context of the content. The methods and systems disclosed herein may include methods and systems for embedding video content in an application and may include taking a video feed; using machine learning to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; producing a video content data structure that includes the associated context information; taking an application that displays video content; and embedding the video content data structure in the application, wherein the location of the embedded video content in the application is based on the context information. In embodiments, context-identified video cuts can be used to enrich or enhance applications, such as by embedding the cuts in relevant locations in the applications. For example, a mobile application displaying entertainment content may be automatically populated with video cuts of events that are machine-extracted and determined to be of the appropriate type (based on context), for the application. A video game application can be enhanced, such as by including real video cuts of plays that fit a particular context (e.g., showing a pick-and-roll play where players A and B are matched up against players C and D in a real game, and the same matchup is determined to occur in the video game). To facilitate embedding in the application, a set of protocols, such as APIs, may be defined, by which available categories (such as semantic categories, types of contexts, types of events, and the like) are specified, such that an application may call for particular types of events, which can, in turn, be embedded in the application. Similarly, an application may be constructed with appropriate pointers, calls, objects, or the like, that allow a designer to specify, and call for, particular types of events, which may be automatically extracted from a library of machine-extracted, context-identified events and then embedded where appropriate into the application code. In embodiments, an application may provide stories about events, such as sporting events, and the machine-extracted content may include content cards or digital stills that are tagged by context so that they can be placed in appropriate locations in a story. The application can provide automatically generated content and stories, enhanced by content from a live game. In embodiments, an application may recommend video clips based on the use of keywords that match machine-learned semantics, enabling users to post or share video clips automatically tailored to text that they are writing. For example, clips may be recommended that include the presence of a particular player, that include a particular type of play (e.g., "dunks") and/or that are from a particular time period (e.g., "last night," etc.). In accordance with an exemplary and non-limiting embodiment, there is described a method for the extraction of events and situations corresponding to semantically relevant concepts. 
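The protocol sketched above, in which an application asks for the available semantic categories and then calls for events of a particular type to embed, might look something like the following. The registry class, method names, and category strings are hypothetical; the specification only requires that some such API exist.

```python
class EventClipRegistry:
    """Library of machine-extracted, context-identified clips, queryable by category."""

    def __init__(self):
        self._clips = []          # list of dicts: {"uri", "category", "context"}

    def register(self, uri: str, category: str, context: dict) -> None:
        self._clips.append({"uri": uri, "category": category, "context": context})

    def categories(self) -> list:
        """Available semantic categories an application may call for."""
        return sorted({c["category"] for c in self._clips})

    def clips_for(self, category: str, **context_filters) -> list:
        """Clips of a given category whose context matches the supplied filters."""
        return [c for c in self._clips
                if c["category"] == category
                and all(c["context"].get(k) == v for k, v in context_filters.items())]

# An application embedding real clips into, e.g., a video game scene:
registry = EventClipRegistry()
registry.register("clips/prl_0042.mp4", "pick_and_roll",
                  {"ball_handler": "Player A", "screener": "Player B"})
print(registry.categories())
print(registry.clips_for("pick_and_roll", ball_handler="Player A"))
```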
In yet other embodiments, semantic events may be translated and cataloged into data and patterns. The methods and systems disclosed herein may include methods and systems for embedding content cards or digital stills with contextualized content stories/visualizations into a mobile application. They may include automatically generated content, such as stories, extracted from a live game delivered to users via an application, such as a mobile application, an augmented reality glasses application, a virtual reality glasses application, or the like. In embodiments, the application is a mobile application that provides a story about an event and wherein the video content data structure comprises at least one of a content card and a digital still image. The methods and systems disclosed herein may include methods and systems for applying contextualized content from actual sporting events to video games to improve the reality of the game play. The methods and systems disclosed herein may include methods and systems for improving a video game and may include taking a video feed; using machine learning to develop an understanding of at least one first real event within the video feed, the understanding including identifying context information relating to the first real event; taking a game event coded for display within a video game; matching the context information for the real event with the context of the game event in the video game; comparing the display of the game event to the video for the real event; and modifying the coding of the game event based on the comparison. In embodiments, context information can be used to identify video cuts that can be used to improve video games, such as by matching the context of a real event with a similar context in a coded video game event, comparing the video for the real event with the video game display of a similar event, and modifying the video event to provide a more faithful simulation of the real event. The methods and systems disclosed herein may include methods and systems for taking the characteristics of a user either from a video capture of their recreational play or through user generated features and importing the user's avatar into a video game. The methods and systems disclosed herein may include methods and systems for interactive contextualized content that can be filtered and adjusted via a touch screen interface. In embodiments, the user interface is a touch screen interface. The methods and systems disclosed herein may include methods and systems for real time display of relevant fantasy and betting metrics overlaid on a live game feed. The methods and systems disclosed herein may include methods and systems for real time adjustment of betting lines and/or additional betting option creation based on in-game contextual content. The methods and systems disclosed herein may include methods and systems for taking a video feed and using machine learning to develop an understanding of at least one first event within the video feed. The understanding includes identifying context information relating to the first event. The methods and systems also include determining a metric based on the machine understanding. The metric is relevant to at least one of a wager and a fantasy sports outcome. The methods and systems include presenting the metric as an overlay for an enhanced video feed. In embodiments, the metrics described throughout this disclosure may be placed as overlays on video feeds. 
For example, metrics calculated based on machine-extracted events that are relevant to betting lines, fantasy sports outcomes, or the like, can be presented as overlays, scrolling elements, or the like on a video feed. The metrics to be presented can be selected based on context information, such as showing fantasy metrics for players who are on screen at the time or showing the betting line where a scoring play impacts the outcome of a bet. As noted above, the displays may be customized and personalized for a user, such as based on that user's fantasy team for a given week or that user's wagers for the week. The methods and systems disclosed herein may include methods and systems for taking a video feed of a recreational event; using machine learning to develop an understanding of at least one event within the video feed, the understanding including identifying context information relating to the event; and based on the machine understanding, providing content including information about a player in the recreational event based on the machine understanding and the context. The methods and systems may further include providing a comparison of the player to at least one professional player according to at least one metric that is based on the machine understanding. In embodiments, machine understanding can be applied to recreational venues, such as for capturing video feeds of recreational games, practices, and the like. Based on machine understanding, highlight clips, metrics, and the like, as disclosed throughout this disclosure, may be extracted by processing the video feeds, including machine understanding of the context of various events within the video. In embodiments, metrics, video, and the like can be used to provide players with personalized content, such as a highlight reel of good plays, or a comparison to one or more professional players (in video cuts, or with semantically relevant metrics). Context information can allow identification of similar contexts between recreational and professional events, so that a player can see how a professional acted in a context that is similar to one faced by the recreational player. The methods and systems may enable the ability to use metrics and events recorded from a video stream to enable the creation of a recreational fantasy sports game with which users can interact. The methods and systems may enable the ability to recognize specific events or metrics from a recreational game and compare them to similar or parallel events from a professional game to help coach a recreational player or team or for the creation of a highlight reel that features both recreational and professional video cuts. The methods and systems disclosed herein may include methods and systems for providing enhanced video content and may include using machine learning to develop an understanding of a plurality of events within at least one video feed to determine at least one type for each of the plurality of events; extracting a plurality of video cuts from the video feed and indexing the plurality of video cuts based on at least one type of event determined by the understanding developed by machine learning; and making the indexed and extracted video cuts available to a user. In embodiments, the user is enabled to at least one of edit, cut, and mix the video cuts to provide an enhanced video containing at least one of the video cuts. In embodiments, the user is enabled to share the enhanced video. 
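One simple way to realize the recreational-to-professional comparison described above is to describe each player by a vector of machine-derived metrics and find the nearest professional under a distance on those vectors. The sketch below assumes made-up metric names and values purely for illustration; it stands in for whatever player-similarity notion the machine-learned understanding actually supplies.

```python
import numpy as np

METRICS = ["drives_per_36", "catch_and_shoot_pct", "avg_defender_dist_ft"]

professionals = {
    "Pro Guard A": np.array([9.5, 0.42, 4.1]),
    "Pro Wing B": np.array([4.2, 0.39, 5.6]),
    "Pro Big C": np.array([1.1, 0.30, 3.0]),
}

def most_similar_professional(rec_metrics: np.ndarray) -> str:
    """Nearest professional by Euclidean distance on standardized metrics."""
    names = list(professionals)
    mat = np.stack([professionals[n] for n in names])
    mean, std = mat.mean(axis=0), mat.std(axis=0) + 1e-9
    z_pros = (mat - mean) / std
    z_rec = (rec_metrics - mean) / std
    return names[int(np.linalg.norm(z_pros - z_rec, axis=1).argmin())]

recreational_player = np.array([8.0, 0.35, 4.5])   # derived from the recreational feed
print(most_similar_professional(recreational_player))
```

The matched professional can then supply the parallel video cuts or coaching comparisons for the recreational player's highlight reel.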
In embodiments, the methods and systems may further include indexing at least one shared, enhanced video with the semantic understanding of the type of events in it that was determined by machine learning. In embodiments, the methods and systems may further include using the index information for the shared, enhanced video to determine a similarity between the shared, enhanced video and at least one other video content item. In embodiments, the similarity is used to identify additional extracted, indexed video cuts that may be of interest to the user. In embodiments, the similarity is used to identify other users who have shared similarly enhanced video. In embodiments, the similarity is used to identify other users who are likely to have an interest in the shared, enhanced video. In embodiments, the methods and systems may further include recommending at least one of the shared, enhanced video and one of the video cuts based on an understanding of the preferences of the other users. In embodiments, the similarity is based at least in part on user profile information for users who have indicated an interest in the video cut and the other video content item. The methods and systems disclosed herein may include methods and systems for providing enhanced video content and may include using machine learning to develop an understanding of a plurality of events within at least one video feed to determine at least one type for each of the plurality of events; extracting a plurality of video cuts from the video feed and indexing the plurality of video cuts to form an indexed set of extracted video cuts, wherein the indexing is based on at least one type of event determined by the understanding developed by machine learning; determining at least one pattern relating to a plurality of events in the video feed; adding the determined pattern information to the index for the indexed set of video cuts; and making the indexed and extracted video cuts available to a user. In embodiments, the user is enabled to at least one of edit, cut, and mix the video cuts to provide an enhanced video containing at least one of the video cuts. In embodiments, the user is enabled to share the enhanced video. In embodiments, the video cuts are clustered based on the patterns that exist within the video cuts. In embodiments, the pattern is determined automatically using machine learning and based on the machine understanding of the events in the video feed. In embodiments, the pattern is a highlight event within the video feed. In embodiments, the highlight event is presented to the user when the indexed and extracted video cut is made available to the user. In embodiments, the user is prompted to watch a longer video feed upon viewing the indexed and extracted video cut. In accordance with an exemplary and non-limiting embodiment, there is provided a touch screen or other gesture-based interface experience based, at least in part, on extracted semantic events. The methods and systems disclosed herein may include methods and systems for machine extracting semantically relevant events from 3D motion/position data captured at a venue, calculating a plurality of metrics relating to the events, and presenting the metrics in a video stream based on the context of the video stream. 
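The similarity signals discussed above, between a shared, enhanced video and other content or between users, can be derived from the index itself, for example by comparing the sets of machine-identified event types attached to each item. A minimal sketch using Jaccard similarity over event-type sets follows; the threshold and data layout are assumptions, and richer indexes (counts, patterns, user profiles) would refine the score.

```python
def jaccard(a: set, b: set) -> float:
    """Overlap of two sets of machine-identified event types."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Index entries: video id -> set of event types determined by machine learning.
video_index = {
    "shared_mix_01": {"dunk", "block", "fast_break"},
    "cut_0192": {"dunk", "fast_break"},
    "cut_0310": {"three_point_attempt", "pick_and_roll"},
    "shared_mix_02": {"block", "pick_and_roll", "fast_break"},
}

def similar_videos(video_id: str, threshold: float = 0.3) -> list:
    """Other indexed videos whose event-type sets overlap the given video's."""
    target = video_index[video_id]
    scored = [(other, jaccard(target, types))
              for other, types in video_index.items() if other != video_id]
    return sorted([s for s in scored if s[1] >= threshold],
                  key=lambda s: s[1], reverse=True)

print(similar_videos("shared_mix_01"))   # candidates to recommend alongside the share
```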
The methods and systems disclosed herein may include methods and systems for producing machine-enhanced video streams and may include taking a video feed from 3D motion and position data from a venue; using machine learning to develop an understanding of at least one first event within the video feed, the understanding including identifying context information relating to the first event; calculating a plurality of metrics relating to the events; and producing an enhanced video stream that presents the metrics in the video stream, wherein the presentation of at least one metric is based on the context information for the event with which the metric is associated in the video stream. In embodiments, semantically relevant events determined by machine understanding of 3D motion/position data for an event from a venue can be used to calculate various metrics, which may be displayed in the video stream of the event. Context information, which may be determined based on the types and sequences of events, can be used to determine what metrics should be displayed at a given position within the video stream. These metrics may also be used to create new options for users to place wagers on or be integrated into a fantasy sports environment. The methods and systems disclosed herein may include methods and systems enabling a user to cut or edit video based on machine learned context and share the video clips. These may further include allowing a user to interact with the video data structure to produce an edited video data stream that includes the video data structure. In embodiments, the interaction includes at least one of editing, cutting, and sharing a video clip that includes the video data structure. The methods and systems may enable the ability for users to interact with video cuts through an interface to enhance the content with graphics or metrics based on a pre-set set of options, and then share a custom cut and enhanced clip. The methods and systems may include the ability to automatically find similarity in different video clips based on semantic context contained in the clips, and then cluster clips together or to recommend additional clips for viewing. The methods and systems may include the ability to extract contextualized content from a feed of a recreational event to immediately deliver content to players, including comparing a recreational player to a professional player based on machine learned understanding of player types. In accordance with an exemplary and non-limiting embodiment, there is described a second screen interface unique to extracted semantic events and user selected augmentations. In yet other embodiments, the second screen may display real-time, or near real time, contextualized content. In accordance with further exemplary and non-limiting embodiments, the methods and systems disclosed herein may include methods and systems for taking a video feed; using machine learning to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; and producing a video content data structure that includes the associated context information. 
In embodiments, the methods and systems may further include determining a plurality of semantic categories for the context information and filtering a plurality of such video content data structures based on the semantic categories. In embodiments, the methods and systems may further include matching the events that occur in one video feed to those that occur in a separate video feed such that the semantic understanding captured in the first video feed can be used to at least one of filter and cut a separate second video feed based on the same events. In embodiments, the methods and systems may further include determining a pattern relating to a plurality of the events and providing a content data structure based on the pattern. In embodiments, the pattern comprises a plurality of important plays in a sports event that are identified based on comparison to similar plays from previous sports events. In embodiments, the pattern comprises a plurality of plays in a sports event that is determined to be unusual based on comparison to video feeds from other sports events. In embodiments, the methods and systems may further include extracting semantic events over time to draw a comparison of at least one of a player and a team over time. In embodiments, the methods and systems may further include superimposing video of events extracted from video feeds from at least two different time periods to illustrate the comparison. In embodiments, the methods and systems may further include allowing a user to interact with the video data structure to produce an edited video data stream that includes the video data structure. In embodiments, the interaction includes at least one of editing, mixing, cutting, and sharing a video clip that includes the video data structure. In embodiments, the methods and systems may further include enabling users to interact with the video cuts through a user interface to enhance the video content with at least one graphic element selected from a menu of options. In embodiments, the methods and systems may further include enabling a user to share the enhanced video content. In embodiments, the methods and systems may further include enabling a user to find similar video clips based on the semantic context identified in the clips. In embodiments, the methods and systems may further include using the video data structure and the context information to construct modified video content for a second screen that includes the video data structure. In embodiments, the content for the second screen correlates to the timing of an event displayed on a first screen. In embodiments, the content for the second screen includes a metric determined based on the machine understanding, wherein the metric is selected based on the context information. The methods and systems disclosed herein may include methods and systems for displaying contextualized content of a live event on a second screen that correlates to the timing of the live event on the first screen. These may include using the video data structure and the context information to construct modified video content for a second screen that includes the video data structure. In embodiments, the content for the second screen correlates to the timing of an event displayed on a first screen. In embodiments, the content for the second screen includes a metric determined based on the machine understanding, wherein the metric is selected based on the context information. 
In embodiments, machine extracted metrics and video cuts can be displayed on a second screen, such as a tablet, smart phone, or smart remote control screen, such as showing metrics that are relevant to what is happening, in context, on a main screen. The methods and systems disclosed herein may include methods and systems for an ingestion facility adapted or configured to ingest a plurality of video feeds; a machine learning system adapted or configured to apply machine learning on a series of events in a plurality of video feeds in order to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; an extraction facility adapted or configured to automatically, under computer control, extract the content displaying the event and associate the extracted content with the context information; and a video publishing facility for producing a video content data structure that includes the associated context information. In embodiments, the methods and systems may further include an analytic facility adapted or configured to determine a plurality of semantic categories for the context information and filter a plurality of such video content data structures based on the semantic categories. In embodiments, the methods and systems may further include a matching engine adapted or configured to match the events that occur in one video feed to those that occur in a separate video feed such that the semantic understanding captured in the first video feed can be used to at least one of filter and cut a separate second video feed based on the same events. In embodiments, the methods and systems may further include a pattern recognition facility adapted or configured to determine a pattern relating to a plurality of the events and providing a content data structure based on the pattern. The methods and systems disclosed herein may include methods and systems for displaying machine extracted, real time, contextualized content based on machine identification of a type of event occurring in a live video stream. The methods and systems disclosed herein may include methods and systems for taking a video feed; using machine learning to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; and producing a video content data structure that includes the associated context information. The methods and systems disclosed herein may include methods and systems for providing context information in video cuts that are generated based on machine extracted cuts that are filtered by semantic categories. The methods and systems disclosed herein may include methods and systems for determining a plurality of semantic categories for the context information and filtering a plurality of the video content data structures based on the semantic categories. The methods and systems disclosed herein may include methods and systems for matching the events that occur in one video feed to those that occur in a separate video feed such that the semantic understanding captured in the first video feed can be used to filter and cut a separate second video feed based on these same events. 
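By way of a non-limiting illustration of the category filtering and cross-feed matching facilities described above, the following sketch shows one possible arrangement; the fields of the video content data structure, the use of the game clock as the matching key, and the tolerance value are assumptions made only for this example.

    # Illustrative sketch: filter machine-extracted video content data structures by
    # semantic category, and align events in a second feed by type and game clock.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class VideoContentDataStructure:
        clip_id: str
        event_type: str          # semantic category, e.g., "pick_and_roll"
        game_clock: float        # game-clock reading when the event occurred
        context: dict

    def filter_by_categories(clips: List[VideoContentDataStructure],
                             categories: set) -> List[VideoContentDataStructure]:
        return [c for c in clips if c.event_type in categories]

    def match_to_second_feed(clip: VideoContentDataStructure,
                             second_feed: List[VideoContentDataStructure],
                             tolerance: float = 2.0) -> Optional[VideoContentDataStructure]:
        """Find the same semantic event in a separately captured feed so the first
        feed's understanding can be used to filter and cut the second feed."""
        candidates = [c for c in second_feed
                      if c.event_type == clip.event_type
                      and abs(c.game_clock - clip.game_clock) <= tolerance]
        return min(candidates, key=lambda c: abs(c.game_clock - clip.game_clock), default=None)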
The methods and systems disclosed herein may include methods and systems for enabling user interaction with a mobile application that displays extracted content, where the user interaction is modified based on the context of the content (e.g., the menu is determined by context). The methods and systems disclosed herein may include methods and systems for enabling an application allowing user interaction with video content and may include an ingestion facility adapted or configured to access at least one video feed, wherein the ingestion facility may be executing on at least one processor; a machine learning facility operating on the at least one video feed to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; an extraction facility adapted or configured to automatically, under computer control, extract the content displaying the event and associate the extracted content with the context information; a video production facility adapted or configured to produce a video content data structure that includes the associated context information; and an application having a user interface by which a user can interact with the video content data structure, wherein the options for user interaction are based on the context information. In embodiments, the application is a mobile application. In embodiments, the application is at least one of a smart television application, a virtual reality headset application and an augmented reality application. In embodiments, the user interface is a touch screen interface. In embodiments, the user interface allows a user to enhance the video feed by selecting a content element to be added to the video feed. In embodiments, the content element is at least one of a metric and a graphic element that is based on the machine understanding. In embodiments, the user interface allows the user to select content for a particular player of a sports event. In embodiments, the user interface allows the user to select content relating to a context involving the matchup of two particular players in a sports event. In embodiments, the system takes at least two video feeds from different time periods, the machine learning facility determines a context that includes a similarity between at least one of a plurality of players and a plurality of plays in the two feeds, and the user interface allows the user to select at least one of the players and the plays to obtain a video feed that illustrates a comparison. In embodiments, the user interface includes options for at least one of editing, cutting, and sharing a video clip that includes the video data structure. In embodiments, the video feed comprises 3D motion camera data captured from a live sports venue. In embodiments, the ability of the machine learning facility to develop the understanding is developed by feeding the machine learning facility a plurality of events for which context has already been identified. 
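By way of a non-limiting illustration of a menu determined by context, the following sketch shows one way an application's interaction options might be derived from the context information attached to a clip; the option names and context keys are hypothetical assumptions made only for this example.

    # Illustrative sketch: derive the interaction options offered by an application
    # user interface from the context information attached to a clip (names assumed).
    def interaction_options(context: dict) -> list:
        options = ["play", "pause", "share"]
        if context.get("event_type") == "pick_and_roll":
            options += ["show_screener_stats", "show_ball_handler_stats"]
        if context.get("matchup"):                    # e.g., {"offense": "A", "defense": "B"}
            options += ["show_matchup_history"]
        if context.get("comparison_periods"):         # two feeds from different time periods
            options += ["show_side_by_side_comparison"]
        return options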
The methods and systems disclosed herein may include methods and systems for enabling a mobile application allowing user interaction with video content and may include taking at least one video feed; applying machine learning on the at least one video feed to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; producing a video content data structure that includes the associated context information; and providing a mobile application having a user interface by which a user can interact with the video content data structure, wherein the options for user interaction are based on the context information. In embodiments, the user interface is a touch screen interface. In embodiments, the user interface allows a user to enhance the video feed by selecting a content element to be added to the video feed. In embodiments, the content element is at least one of a metric and a graphic element that is based on the machine understanding. In embodiments, the user interface allows the user to select content for a particular player of a sports event. In embodiments, the user interface allows the user to select content relating to a context involving the matchup of two particular players in a sports event. In embodiments, the system takes at least two video feeds from different time periods, the machine learning facility determines a context that includes a similarity between at least one of a plurality of players and a plurality of plays in the two feeds, and the user interface allows the user to select at least one of the players and the plays to obtain a video feed that illustrates a comparison. In embodiments, the user interface includes options for at least one of editing, cutting, and sharing a video clip that includes the video data structure. In embodiments, the video feed comprises 3D motion camera data captured from a live sports venue. In embodiments, the ability of the machine learning facility to develop the understanding is developed by feeding the machine learning facility a plurality of events for which context has already been identified. The methods and systems disclosed herein may include methods and systems for enabling a mobile application allowing user interaction with video content and may include taking a video feed; using machine learning to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; producing a video content data structure that includes the associated context information; and providing a mobile application by which a user can interact with the video content data structure, wherein the options for user interaction are based on the context information. 
In embodiments, machine extracted content, with associated context information, may be provided to users via a mobile application, through which the users may display and interact with the content, such as by selecting particular types of content based on a desired semantic category (such as by selecting the category in list, menu, or the like), playing content (including pausing, rewinding, fast forwarding, and the like), and manipulating content (such as positioning content within a display window, zooming, panning, and the like). In embodiments, the nature of the permitted interaction may be governed by the context information associated with the content, where the context information is based on a machine understanding of the content and its associated context. For example, where the content is related to a particular type of play within a context of an event like a game, such as rebounding opportunities in basketball, the user may be permitted to select from a set of metrics that are relevant to rebounding, so that the selected metrics from a context-relevant set are displayed on the screen with the content. If the context is different, such as if the content relates to a series of pick-and-roll plays by a particular player, different metrics may be made available for selection by the user, such as statistics for that player, or metrics appropriate for pick-and-rolls. Thus, the machine-extracted understanding of an event, including context information, can be used to customize the content displayed to the user, including to allow the user to select context-relevant information for display. The methods and systems disclosed herein may include methods and systems for allowing a user to control a presentation of a broadcast video event, where the options for control are based on a context of the content as determined by machine extraction of semantically relevant events from the content. In accordance with an exemplary and non-limiting embodiment, there is described a method for “painting” translated semantic data onto an interface. In accordance with an exemplary and non-limiting embodiment, there is described spatiotemporal pattern recognition based, at least in part, on optical XYZ alignment for semantic events. In yet other embodiments, there is described the verification and refinement of spatiotemporal semantic pattern recognition based, at least in part, on hybrid validation from multiple sources. In accordance with an exemplary and non-limiting embodiment, there is described human identified video alignment labels and markings for semantic events. In yet other embodiments, there is described machine learning algorithms for spatiotemporal pattern recognition based, at least in part, on human identified video alignment labels for semantic events. In accordance with an exemplary and non-limiting embodiment, there is described automatic game clock indexing of video from sporting events using machine vision techniques, and cross-referencing this index with a semantic layer that indexes game events. The product is the ability to query for highly detailed events and return the corresponding video in near real-time. In accordance with an exemplary and non-limiting embodiment, there is described unique metrics based, at least in part, on spatiotemporal patterns including, for example, shot quality, rebound ratings (positioning, attack, conversion) and the like. In accordance with an exemplary and non-limiting embodiment, there is described player tracking using broadcast video feeds. 
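By way of a non-limiting illustration of player tracking from broadcast video feeds, the following sketch shows a greedy nearest-neighbour association step of the kind such a tracker might use to link per-frame detections into tracks; the pixel-distance threshold and the data layout are assumptions made only for this example.

    # Illustrative sketch: link player detections in the current broadcast frame to
    # existing tracks by nearest-neighbour matching in image coordinates.
    def associate_detections(tracks, detections, max_dist=60.0):
        """tracks: dict of track_id -> (x, y) last known position in pixels;
        detections: list of (x, y) player detections from any per-frame detector."""
        assignments, used = {}, set()
        for tid, (tx, ty) in tracks.items():
            best, best_d = None, max_dist
            for j, (dx, dy) in enumerate(detections):
                d = ((tx - dx) ** 2 + (ty - dy) ** 2) ** 0.5
                if j not in used and d < best_d:
                    best, best_d = j, d
            if best is not None:
                assignments[tid] = detections[best]
                used.add(best)
        return assignments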
In accordance with an exemplary and non-limiting embodiment, there is described player tracking using a multi-camera system. In accordance with an exemplary and non-limiting embodiment, there is described video cut-up based on extracted semantics. A video cut-up is a remix made up of small clips of video that are related to each other in some meaningful way. The semantic layer enables real-time discovery and delivery of custom cut-ups. The semantic layer may be produced in one of two ways: (1) Video combined with data produces a semantic layer, or (2) video directly to a semantic layer. Extraction may be through ML or human tagging. In some exemplary embodiments, video cut-up may be based, at least in part, on extracted semantics, controlled by users in a stadium and displayed on a Jumbotron. In other embodiments, video cut-up may be based, at least in part, on extracted semantics, controlled by users at home and displayed on broadcast TV. In yet other embodiments, video cut-up may be based, at least in part, on extracted semantics, controlled by individual users and displayed on the web, tablet, or mobile for that user. In yet other embodiments, video cut-up may be based, at least in part, on extracted semantics, created by an individual user, and shared with others. Sharing could be through inter-tablet/inter-device communication, or via mobile sharing sites. In accordance with further exemplary and non-limiting embodiments, the methods and systems disclosed herein may include methods and systems for enabling an application allowing user interaction with video content and may include an ingestion facility for taking at least one video feed; a machine learning facility operating on the at least one video feed to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; an extraction facility for automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; a video production facility for producing a video content data structure that includes the associated context information; and an application having a user interface by which a user can interact with the video content data structure, wherein the options for user interaction are based on the context information. In embodiments, the application is a mobile application. In embodiments, the application is at least one of a smart television application, a virtual reality headset application and an augmented reality application. In embodiments, the user interface is a touch screen interface. In embodiments, the user interface allows a user to enhance the video feed by selecting a content element to be added to the video feed. In embodiments, the content element is at least one of a metric and a graphic element that is based on the machine understanding. In embodiments, the user interface allows the user to select content for a particular player of a sports event. In embodiments, the user interface allows the user to select content relating to a context involving the matchup of two particular players in a sports event. 
In embodiments, the system takes at least two video feeds from different time periods, the machine learning facility determines a context that includes a similarity between at least one of a plurality of players and a plurality of plays in the two feeds, and the user interface allows the user to select at least one of the players and the plays to obtain a video feed that illustrates a comparison. In embodiments, the user interface includes options for at least one of editing, cutting, and sharing a video clip that includes the video data structure. In embodiments, the video feed comprises 3D motion camera data captured from a live sports venue. In embodiments, the ability of the machine learning facility to develop the understanding is developed by feeding the machine learning facility a plurality of events for which context has already been identified. The methods and systems disclosed herein may include methods and systems for enabling a mobile application allowing user interaction with video content and may include taking at least one video feed; applying machine learning on the at least one video feed to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; producing a video content data structure that includes the associated context information; and providing a mobile application having a user interface by which a user can interact with the video content data structure, wherein the options for user interaction are based on the context information. In embodiments, the user interface is a touch screen interface. In embodiments, the user interface allows a user to enhance the video feed by selecting a content element to be added to the video feed. In embodiments, the content element is at least one of a metric and a graphic element that is based on the machine understanding. In embodiments, the user interface allows the user to select content for a particular player of a sports event. In embodiments, the user interface allows the user to select content relating to a context involving the matchup of two particular players in a sports event. In embodiments, the system takes at least two video feeds from different time periods, the machine learning facility determines a context that includes a similarity between at least one of a plurality of players and a plurality of plays in the two feeds, and the user interface allows the user to select at least one of the players and the plays to obtain a video feed that illustrates a comparison. In embodiments, the user interface includes options for at least one of editing, cutting, and sharing a video clip that includes the video data structure. In embodiments, the video feed comprises 3D motion camera data captured from a live sports venue. In embodiments, the ability of the machine learning facility to develop the understanding is developed by feeding the machine learning facility a plurality of events for which context has already been identified. 
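By way of a non-limiting illustration of the video cut-up capability described above, the following sketch shows one way a remix of semantically related clips might be assembled from a semantic layer; the tag-based data layout is an assumption made only for this example.

    # Illustrative sketch: assemble a "cut-up" (an ordered remix of clips) from a semantic
    # layer by selecting clips that share a tag and concatenating their time ranges.
    def build_cut_up(semantic_layer, tag, max_clips=10):
        """semantic_layer: list of dicts like {"clip_id": ..., "tags": {...}, "start": s, "end": e}."""
        related = [c for c in semantic_layer if tag in c["tags"]]
        related.sort(key=lambda c: c["start"])
        return [{"clip_id": c["clip_id"], "in": c["start"], "out": c["end"]}
                for c in related[:max_clips]]

A stadium display, a broadcast system, or an individual user's web, tablet, or mobile device could then render the returned edit decision list, and sharing a custom cut-up would amount to serializing that list together with references to the clip sources.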
The methods and systems disclosed herein may include methods and systems for an analytic system and may include a video ingestion facility for ingesting at least one video feed; a machine learning facility that develops an understanding of at least one event within the video feed, wherein the understanding identifies at least a type of the event and a time of the event in an event data structure; a computing architecture enabling a model that takes one or more event data structures as input and applies at least one calculation to transform the one or more event data structures into an output data structure; and a data transport layer of the computing architecture for populating the model with the event data structures as input to the model. In embodiments, the output data structure includes at least one prediction. In embodiments, the prediction is of an outcome of at least one of a sporting event and at least one second event occurring within a sporting event. In embodiments, the video feed is of a live sporting event, wherein the prediction is made during the live sporting event, and wherein the prediction relates to the same sporting event. In embodiments, the prediction is based on event data structures from a plurality of video feeds. In embodiments, the prediction is used for at least one of placing a wager, setting a line for a wager, interacting with a fantasy program, setting a parameter of a fantasy program, providing insight to a coach and providing information to a fan. In embodiments, the model takes inputs from a plurality of data sources in addition to the event data structures obtained from the video feed. In embodiments, the methods and systems may further include a pattern analysis facility that takes a plurality of the event data structures and enables analysis of patterns among the event data structures. In embodiments, the pattern analysis facility includes at least one tool selected from the group consisting of a pattern visualization tool, a statistical analysis tool, a machine learning tool, and a simulation tool. In embodiments, the methods and systems may further include a second machine learning facility for refining the model based on outcomes of a plurality of predictions made using the model. The methods and systems disclosed herein may include methods and systems for an analytic method and may include ingesting at least one video feed in a computing platform capable of handling video data; developing an understanding of at least one event within the video feed using machine learning, wherein the understanding identifies at least a type of the event and a time of the event in an event data structure; providing a computing architecture that enables a model that takes one or more event data structures as input and applies at least one calculation to transform the one or more event data structures into an output data structure; and populating the model with the event data structures as input to the model. In embodiments, the output data structure includes at least one prediction. In embodiments, the prediction is of an outcome of at least one of a sporting event and at least one second event occurring within a sporting event. In embodiments, the video feed is of a live sporting event, wherein the prediction is made during the live sporting event, and wherein the prediction relates to the same sporting event. In embodiments, the prediction is based on event data structures from a plurality of video feeds. 
In embodiments, the prediction is used for at least one of placing a wager, setting a line for a wager, interacting with a fantasy program, setting a parameter of a fantasy program, providing insight to a coach and providing information to a fan. In embodiments, the model takes inputs from a plurality of data sources in addition to the event data structures obtained from the video feed. In embodiments, the methods and systems may further include providing a pattern analysis facility that takes a plurality of the event data structures and enables analysis of patterns among the event data structures. In embodiments, the pattern analysis facility includes at least one tool selected from the group consisting of a pattern visualization tool, a statistical analysis tool, a machine learning tool, and a simulation tool. In embodiments, the methods and systems may further include at least one of providing and using a second machine learning facility to refine the model based on outcomes of a plurality of predictions made using the model. The methods and systems disclosed herein may include methods and systems for taking a video feed; using machine learning to develop an understanding of a semantically relevant event within the video feed; indexing video segments of the video feed with information indicating the semantically relevant events identified within the feed by the machine learning; and applying machine learning to a plurality of the semantically relevant events to determine a pattern of events. In embodiments, the pattern is within a video feed. In embodiments, the pattern is across a plurality of video feeds. In embodiments, the pattern corresponds to a narrative structure. In embodiments, the narrative structure corresponds to a recurring pattern of events. In embodiments, the narrative structure relates to a sporting event and wherein the pattern relates to at least one of a blow-out victory pattern, a comeback win pattern, a near comeback pattern, a back-and-forth game pattern, an individual achievement pattern, an injury pattern, a turning point moment pattern, a close game pattern, and a team achievement pattern. In embodiments, the indexed video segments are arranged to support the narrative structure. In embodiments, the arranged segments are provided in an interface for developing a story using the segments that follow the narrative structure and wherein a user may at least one of edit and enter additional content for the story. In embodiments, summary content for the narrative structure is automatically generated, under computer control, to provide a story that includes the video sequences. In embodiments, the methods and systems may further include delivering a plurality of the automatically generated stories at least one of from a defined time period and of a defined type, allowing a user to indicate whether they like or dislike the delivered stories, and using the indications to inform later delivery of at least one additional story. In embodiments, the pattern is relevant to a prediction. In embodiments, the prediction is related to a wager, and the pattern corresponds to similar patterns that were used to make predictions that resulted in successful wagers in other situations. The methods and systems disclosed herein may include methods and systems for machine-extracting semantically relevant events from a video content stream and determining a pattern relating to the events. The methods and systems also include providing a content stream based on the pattern. 
In embodiments, the content stream is used to provide coaching information based on the pattern. In embodiments, the content stream is used to assist prediction of an outcome in a fantasy sports contest. In embodiments, the pattern is used to provide content for a viewer of a sporting event. The methods and systems disclosed herein may include methods and systems for machine-extracting semantically relevant events from a video content stream; determining a pattern relating to the events; storing the pattern information with the extracted events; and providing a user with the option to view and interact with the patterns, wherein at least one of the patterns and the interaction options are personalized based on a profile of the user. In embodiments, the profile is based on at least one of user indication of a preference, information about actions of the user, and demographic information about the user. In embodiments, the pattern comprises at least one of a trend and a statistic that is curated to correspond with the user profile. In embodiments, the pattern relates to a comparison of a professional athlete to another athlete. In embodiments, the other athlete is the user and the comparison is based on a playing style of the user as determined by at least one of information indicated by the user and a video feed of the user. In embodiments, the pattern relates to an occurrence of an injury. In embodiments, the pattern information is used to provide coaching to prevent an injury. In embodiments, the methods and systems may further include automatically generating, under computer control, an injury prevention regimen based on the pattern and based on information about the user. The methods and systems disclosed herein may include methods and systems for machine-extracting semantically relevant events from a video content stream, determining a pattern relating to the events, and providing a content stream based on the pattern. The methods and systems may further include determining a pattern relating to a plurality of the events and providing a content data structure based on the pattern. In embodiments, machine-extracted information about events and contexts may be used to determine one or more patterns, such as by analyzing time series, correlations, and the like in the machine-extracted events and contexts. For example, tendencies of a team to follow running a certain play with a particular play may be determined by comparing instances of the two plays over time. Embodiments may include extracting particularly interesting or potential “game changing” plays by understanding the context of an individual event and comparing it to similar events from previous games. Embodiments may include extracting situations or plays that are particularly rare or unique by understanding the context of an individual event and comparing it to similar events from previous games. Embodiments may include extracting semantic events over time to draw a comparison of a player's or team's trajectory over time and superimposing video to draw out this comparison. The methods and systems disclosed herein may include methods and systems for a model to predict the outcome of a game or events within a game based on a contextualized understanding of a live event for use in betting/fantasy, coaching, augmented fan experiences, or the like. 
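By way of a non-limiting illustration of such a predictive model, the following sketch shows a data transport step populating a simple model with event data structures and producing a probability as its output data structure; the feature choices and the logistic weights are arbitrary assumptions made only for this example and do not represent a definitive implementation of the disclosed systems.

    # Illustrative sketch: populate a simple in-game prediction model with machine-extracted
    # event data structures; weights below are arbitrary placeholders for illustration.
    import math
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class EventDataStructure:
        event_type: str
        time: float          # game seconds elapsed
        team: str

    def predict_home_win(events: List[EventDataStructure], score_margin: float,
                         seconds_remaining: float) -> float:
        """Return the modeled probability that the home team wins (the output data structure
        is reduced to a single probability for brevity)."""
        recent = events[-10:]
        momentum = sum(1 if e.team == "home" else -1 for e in recent)
        x = 0.08 * score_margin + 0.05 * momentum - 0.0003 * seconds_remaining
        return 1.0 / (1.0 + math.exp(-x))   # logistic transform of the weighted inputs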
The methods and systems disclosed herein may include methods and systems for an analytic system and may include taking a video feed; using machine learning to develop an understanding of at least one first event within the video feed, the understanding including identifying context information relating to the first event; taking a model used to predict the outcome of at least one of a live game and at least one second event within a live game; and populating the model with the machine understanding of the first event and the context information to produce a prediction of an outcome of at least one of the game and the second event. In embodiments, the model is used for at least one of placing a wager, setting a line for a wager, interacting with a fantasy program, setting a parameter of a fantasy program, providing insight to a coach and providing information to a fan. In embodiments, machine-extracted event and context information can be used to populate one or more predictive models, such as models used for betting, fantasy sports, coaching, and entertainment. The machine understanding, including various metrics described throughout this disclosure, can provide or augment other factors that are used to predict an outcome. For example, outcomes from particular matchups can be machine extracted and used to predict outcomes from similar matchups in the future. For example, based on the machine understood context of a moment in an individual game, and the machine understanding of similar moments from previous games, a model can be created to predict the outcome of an individual play or a series of plays on which an individual can place a bet or on which a betting line may be set. In embodiments, the methods and systems disclosed herein may include methods and systems for suggestions of bets to make based on patterns of previously successful bets. For example, a user may be prompted with an option to place a bet based on previous betting history on similar events or because a particular moment is an opportunistic time to place a bet based on the context of a game and other user generated preferences or risk tolerances. The methods and systems disclosed herein may include methods and systems for automated storytelling, such as the ability to use patterns extracted from semantic events, metrics derived from tracking data, and combinations thereof to populate interesting stories about the content. The methods and systems disclosed herein may include methods and systems for enabling automated generation of stories and may include taking a video feed; using machine learning to develop an understanding of a semantically relevant event within the video feed, the understanding including identifying context information relating to the event; providing a narrative structure for a story, wherein the narrative structure is arranged based on the presence of semantic types of events and the context of those events; and automatically, under computer control, generating a story following the narrative structure, wherein the story is populated based on a sequence of the machine-understood events and the context information. In embodiments, patterns from semantic events may be used to populate stories. Various narrative structures can be developed, corresponding to common patterns of events (e.g., stories about blow-out victories, comeback wins, back-and-forth games, games that turned on big moments, or the like). 
Machine extracting of events and contexts can allow identification of patterns in the events and contexts that allow matching to one or more of the narrative structures, as well as population of the story with content for the events, such as video cuts or short written summaries that are determined by the machine extraction (e.g., “in the first quarter, Team A took the lead, scoring five times on the pick-and-roll.”). The methods and systems disclosed herein may include methods and systems for enabling a mobile application allowing user interacting with video content and may include taking a video feed; using machine learning to develop an understanding of an event within the video feed, the understanding including identifying context information relating to the event; automatically, under computer control, extracting the content displaying the event and associating the extracted content with the context information; producing a video content data structure that includes the associated context information; and providing a mobile application by which a user can interact with the video content data structure, wherein the options for user interaction are based on the context information. In embodiments, machine extracted content, with associated context information, may be provided to users via a mobile application, through which the users may display and interact with the content, such as by selecting particular types of content based on a desired semantic category (such as by selecting the category in list, menu, or the like), playing content (including pausing, rewinding, fast forwarding, and the like), and manipulating content (such as positioning content within a display window, zooming, panning, and the like). In embodiments, the nature of the permitted interaction may be governed by the context information associated with the content, where the context information is based on a machine understanding of the content and its associated context. For example, where the content is related to a particular type of play within a context of an event like a game, such as rebounding opportunities in basketball, the user may be permitted to select from a set of metrics that are relevant to rebounding, so that the selected metrics from a context-relevant set are displayed on the screen with the content. If the context is different, such as if the content relates to a series of pick-and-roll plays by a particular player, different metrics may be made available for selection by the user, such as statistics for that player, or metrics appropriate for pick-and-rolls. Thus, the machine-extracted understanding of an event, including context information, can be used to customize the content displayed to the user, including to allow the user to select context-relevant information for display. The methods and systems disclosed herein may include methods and systems for allowing a user to control the presentation of a broadcast video event, where the options for control are based on a context of the content as determined by machine extraction of semantically relevant events from the content. In accordance with an exemplary and non-limiting embodiment, X, Y, and Z data may be collected for purposes of inferring player actions that have a vertical component. 
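By way of a non-limiting illustration of the narrative structures and automated story population described above, the following sketch matches a game's machine-extracted pattern to a narrative structure and fills a one-sentence template; the thresholds and templates are assumptions made only for this example.

    # Illustrative sketch: map a machine-extracted event pattern to a narrative structure
    # and populate a short summary sentence (thresholds and wording are placeholders).
    TEMPLATES = {
        "comeback win": "{winner} erased a {deficit}-point deficit to win by {margin}.",
        "blow-out victory": "{winner} pulled away early and won comfortably by {margin}.",
        "back-and-forth game": "{winner} prevailed by {margin} in a game with {lead_changes} lead changes.",
    }

    def narrative_summary(winner, margins, lead_changes):
        """margins: the eventual winner's score margin sampled over the game
        (negative while trailing); returns a populated summary or None."""
        deficit = -min(margins)
        final = margins[-1]
        if deficit >= 15:
            key = "comeback win"
        elif final >= 20:
            key = "blow-out victory"
        elif lead_changes >= 10:
            key = "back-and-forth game"
        else:
            return None
        return TEMPLATES[key].format(winner=winner, deficit=deficit,
                                     margin=final, lead_changes=lead_changes)

The same classification could drive selection and ordering of the indexed video segments that support the chosen narrative structure, with the generated sentence serving as summary content for the story.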
The methods and systems disclosed herein may employ a variety of computer vision, machine learning, and/or active learning techniques and tools to extract, analyze and process data elements originating from sources, such as, but not limited to, input data sources relating to sporting events and items in them, such as players, venues, items used in sports (such as balls, pucks, and equipment), and the like. These data elements may be available as video feeds in an example, such that the video feeds may be captured by image recognition devices, video recognition devices, image and video capture devices, audio recognition devices, and the like, including by use of various devices and components such as a camera (such as a tracking camera or broadcast camera), a microphone, an image sensor, or the like. Audio feeds may be captured by microphones and similar devices, such as integrated on or with cameras or associated with independent audio capture systems. Input feeds may also include tracking data from chips or sensors (such as wearable tracking devices using accelerometers and other motion sensors), as well as data feeds about an event, such as a play-by-play data feed, a game clock data feed, and the like. In the case of input feeds, facial recognition systems may be used to capture facial images of players, such as to assist in recognition of players (such as in cases where player numbers are absent or obscured) and to capture and process expressions of players, such as emotional expressions, micro-expressions, or the like. These expressions may be associated with events, such as to assist in machine understanding (e.g., an expression may convey that the event was exciting, meaningful, or the like, that it was disappointing to one constituency, that it was not important, or the like). Machine understanding may thus be trained to recognize expressions and provide an expression-based understanding of events, such as to augment one or more data structures associated with an event for further use in the various embodiments described herein. For example, a video feed may be processed based on a machine understanding of expressions to extract cuts that made players of one team happy. As another example, a cut showing an emotional reaction (such as by a player, fan, teammate, or coach) to an event may be associated with a cut of the event itself, providing a combined cut that shows the event and the reaction it caused. The various embodiments described throughout this disclosure that involve machine understanding, extraction of cuts, creation of data structures that are used or processed for various purposes, combining cuts, augmenting data feeds, producing stories, personalizing content, and the like should all be understood to encompass, where appropriate, use of machine understanding of emotional expression within a video feed, including based on use of computer vision techniques, including facial recognition techniques and expression recognition techniques. The computer vision, machine learning and/or active learning tools and techniques (together referred to as computer-controlled intelligent systems for simplicity herein) may receive the data elements from various input feeds and devices as a set of inputs either in real-time (such as in case of a live feed or broadcast) or at a different time (such as in case of a delayed broadcast of the sporting or any other event) without limitations. 
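By way of a non-limiting illustration of augmenting an event data structure with expression-based understanding, the following sketch pairs an event cut with a nearby reaction cut labeled by a hypothetical expression classifier passed in as a callable; the five-second window and the data layout are assumptions made only for this example.

    # Illustrative sketch: augment an event data structure with an emotion label from an
    # expression recognizer (any callable) and attach a reaction cut to the event cut.
    def augment_with_reaction(event, face_tracks, classify_expression):
        """event: dict with "start"/"end" times; face_tracks: list of (start, end, face_crop)
        near the event; classify_expression: callable returning labels such as "excited"."""
        reactions = [(s, e, classify_expression(crop)) for (s, e, crop) in face_tracks
                     if s >= event["end"] and s - event["end"] <= 5.0]  # shortly after the play
        if reactions:
            s, e, label = reactions[0]
            event["reaction_cut"] = {"in": s, "out": e, "expression": label}
        return event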
The computer-controlled intelligent systems may process the set of inputs and apply artificial intelligence (AI), machine learning, and natural language processing (NLP) capabilities to produce a set of services and outputs. In an example, the set of services and outputs may signify spatial-temporal positions of the players and sports accessories/objects such as a bat, ball, football, and the like. In an example, the set of services and outputs may represent spatial-temporal alignments of the inputs such as the video feeds, etc. For example, a broadcast video feed may be aligned in time with another input feed, such as input from one or more motion tracking cameras, inputs from player tracking systems (such as wearable devices), and the like. The set of services and outputs may include machine understood contextual outputs involving machine learning or understanding that may be built using various levels of artificial intelligence, algorithmic processes, computer-controlled tasks, custom rules, and the like, such as described throughout this disclosure. The machine understanding may include various levels of semantic identification, as well as position and speed information for various items or elements, identification of basic events such as various types of shots and screens during a sporting event, and identification of complex events or sequences of events such as various types of plays, along with higher level metrics and patterns involving game trajectory, style of play, strengths and weaknesses of teams and individual team members/players, and the like. The machine learning tools and input feed alignment may allow automatic generation of content and information such as statistics, predictions, comparisons, and analysis. The machine learning tools may further allow generation of outputs based on a user query input, such as to determine various predictive analytics for a particular team player in view of historical shots and screens in a particular context, to determine possibilities of success and failure in particular zones and game scenarios conditioned on particular user inputs, and the like. The machine understanding tools may simulate aspects of real-life sporting events on a computer screen utilizing visualization and modeling examples. The services and outputs generated by the intelligent computer-controlled systems may be used in a variety of ways such as generation of a live feed or a delayed feed during a sporting event in real time or at a later broadcasting time after the sporting event. The services and outputs may allow generation of various analyses of statistics, trends, and strategy before events or across multiple events. The services and outputs may facilitate an interactive user session to extract contextual details relating to instantaneous sporting sessions of the sporting events in association with user defined queries, constraints, and rules. In an example, the services and outputs generated by the computer-controlled intelligent systems may enable spatiotemporal analysis of various game attributes and elements for exploring, learning about, and analyzing such sporting events and for utilizing the analytics results to generate predictive models and predictive analytics for gaming strategy. These services and outputs may provide valuable insights and learnings that are otherwise not visible. 
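By way of a non-limiting illustration of aligning a broadcast video feed in time with another input feed, the following sketch estimates a constant offset from moments at which both feeds report the same game-clock reading; the sampling format and the use of a median difference are assumptions made only for this example.

    # Illustrative sketch: estimate the time offset between a broadcast feed and a tracking
    # feed from samples where both observe the same game-clock value, then map timestamps.
    def estimate_offset(broadcast_clock_samples, tracking_clock_samples):
        """Each argument: list of (feed_timestamp, game_clock_value). Returns the median
        difference between feed timestamps observed at equal game-clock readings."""
        tracking_by_clock = {round(gc, 1): ts for ts, gc in tracking_clock_samples}
        diffs = [ts - tracking_by_clock[round(gc, 1)]
                 for ts, gc in broadcast_clock_samples if round(gc, 1) in tracking_by_clock]
        diffs.sort()
        return diffs[len(diffs) // 2] if diffs else 0.0

    def to_tracking_time(broadcast_ts, offset):
        """Convert a broadcast timestamp into the tracking feed's time base."""
        return broadcast_ts - offset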
The methods and systems disclosed herein may employ delay-dependent computer vision and machine learning systems (or the intelligent computer-controlled systems) for providing delay-dependent services and outputs with respect to the occurrence of a sporting event. The services and outputs as discussed herein may be employed in different applications with varying time delays relative to the actual occurrence of the sporting event. For example, the actual event may occur at a time T1 and the content feeding or broadcasting may occur at a time T2 with a time delay of T2−T1. The time delay may be small, such as a few seconds, so that the content is useful in live commentary or augmentation of a live video. In such cases, the machine learning tools may, for example, utilize real-time services and outputs and benefit from the spatiotemporal features and attributes to generate game patterns and automatic validations during the event itself, such as to highlight certain event aspects in the commentary and/or to validate moments where there is confusion about a decision during the event. The time delay may be longer in certain situations, such as for replays, post-event analysis, predictive modeling, future strategies, and the like. The methods and systems disclosed herein may support the provisioning of the services and outputs at various time delays by determining processing steps and their order of execution according to delay requirements. The system may be configured to operate such that the services and outputs may be obtained at arbitrary times with an increasing accuracy or time resolution or such that the system targets specific delay requirements as specified by users or defined in accordance with intended applications. For example, if, in an application, computational resources are insufficient to process all frames originating from input devices such as cameras at maximum accuracy at the video frame rate within a desired delay, then instead of processing the input video frames in sequential order, processing may be ordered in such a way that at any time there is a uniform or approximately uniform distribution of processed frames. In some cases, processing decisions may also be influenced by other computational efficiency considerations for certain tasks that operate on video segments, such as an opportunity to reuse certain computations across successive frames in tracking algorithms. In some examples, processing techniques such as inference and interpolation over processed frames may be used to provide a tracking output whose accuracy and time resolution improves with delay as more frames are processed. If a target delay is specified, each component of the processing application (such as background subtraction, detection of various elements) may be assigned an execution time budget within which to compute its output, such that the specified delay is met by a combination of the components. In some examples, the specified time delays may also take into account the video quality needed at sending destinations so as to ensure that enough computation resources are allocated for appropriate resolutions and transmission rates at the destinations during broadcasting of the content. In certain cases, a normal resolution may be sufficient, while in other cases a higher resolution may be needed. 
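By way of a non-limiting illustration of delay-dependent processing, the following sketch orders frames so that any prefix of the order covers a segment approximately uniformly, and splits a target delay into per-component execution time budgets; the coarse-to-fine ordering and the proportional budgeting rule are assumptions made only for this example.

    # Illustrative sketch: process frames in an order whose every prefix covers the segment
    # roughly uniformly, so accuracy and time resolution improve as the delay budget allows.
    def coverage_order(num_frames):
        """Yield frame indices coarse-to-fine, halving the stride on each pass."""
        seen = set()
        stride = num_frames
        while stride >= 1:
            for i in range(0, num_frames, stride):
                if i not in seen:
                    seen.add(i)
                    yield i
            stride //= 2

    def budget_components(target_delay_s, weights):
        """Split a target delay across processing components (e.g., background subtraction,
        detection) in proportion to assumed relative costs."""
        total = sum(weights.values())
        return {name: target_delay_s * w / total for name, w in weights.items()}

For instance, list(coverage_order(8)) yields 0, 4, 2, 6, 1, 3, 5, 7, so stopping after any number of processed frames leaves them approximately evenly spread over the segment, and interpolation between them can supply a provisional tracking output.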
In various embodiments, the intelligent computer-controlled systems may be capable of defining appropriate resolutions, data transmission rates, and computation resource allocation in view of the delay requirements. The methods and systems disclosed herein may facilitate calibration of a moving camera or any other image recognition device via tracking of moving points in a sporting event. Existing techniques for finding unknown camera calibration parameters from captured images or videos of sporting events rely on identifying a set of known locations, such as intersections of lines on the court or field. In accordance with such techniques, calibrating the moving camera as it changes its position or zooms across frames is challenging since there may be only a few such known locations in the frames. The methods and systems disclosed herein may enable finding the calibration parameters of the moving or operator-controlled camera by using positions of moving points located by an associated tracking system. In an example, these positions may represent locations and spatial coordinates of a player's or a referee's head, hands, or legs in the sporting event, which may be identified by the tracking system. The tracking system may be an optical tracking system or a chip-based tracking system, which may be configured to determine positions of location tags. In various examples, several other types of camera control, calibration, and position determining systems may be employed along with the tracking systems. For example, a fixed spotting camera may be used to capture a view and a moving camera contained within the tracking system may be used to capture the positions of the moving points in the frames. The moving camera may be configured to perform several functions such as zoom, tilt, pan, and the like. The tracking system may be configured to perform calibration and identification of the positions based on a tracking algorithm that may execute pre-defined instructions to compute relevant information necessary to drive the tracking system across the frames. The methods and systems disclosed herein may facilitate pre-processing of images from calibrated cameras to improve object detection and recognition. The methods and systems disclosed herein may enable accurate detection and recognition of humans, such as players or referees, and objects, such as a ball, a game clock, jersey numbers, and the like, with better performance and lower complexity. In embodiments, the tasks of object detection and recognition may be performed on the basis of knowledge of known calibration parameters of the cameras in the tracking system and known properties of the objects being detected, such as their size, orientation, or position. For example, perspectives and distortions introduced by the cameras can be undone by applying a transformation such that the objects being detected may have a consistent scale and orientation in transformed images. The transformed images may be used as inputs to detection and recognition algorithms by image processing devices so as to enable faster and more accurate object detection and recognition performance with lower complexity as compared to performing object detection and recognition directly on original images. 
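By way of a non-limiting illustration of calibrating a moving camera from tracked moving points, the following sketch fits a planar homography from correspondences between tracked ground-plane positions and their pixel locations; that mapping (or its inverse) can then be used to bring detected objects to a consistent scale and orientation. The planar ground-plane model and the direct linear transform are assumptions made only for this example, not a definitive implementation of the disclosed calibration techniques.

    # Illustrative sketch: estimate a homography from tracked world positions (from the
    # tracking system) to their observed pixel locations in a moving camera's frame.
    import numpy as np

    def fit_homography(world_xy, pixel_xy):
        """Direct linear transform from at least 4 correspondences; each row of
        world_xy and pixel_xy is an (x, y) pair."""
        rows = []
        for (X, Y), (u, v) in zip(world_xy, pixel_xy):
            rows.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
            rows.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
        _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
        H = vt[-1].reshape(3, 3)
        return H / H[2, 2]

    def world_to_pixel(H, X, Y):
        """Project a ground-plane point through the fitted homography."""
        p = H @ np.array([X, Y, 1.0])
        return p[0] / p[2], p[1] / p[2]

Applying the inverse of the fitted homography to a frame would warp the playing surface to a canonical view, which is one way the perspective introduced by the camera could be undone before detection and recognition.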
In such cases, an output generated by the image processing devices may be used as inputs, along with other inputs described herein, to enable or refine the various machine learning and algorithmic capabilities described throughout this disclosure. In some embodiments, machine learning capabilities may be introduced to build improved processing utilizing machine learning tools as discussed above in the document. The methods and systems described herein may be deployed in part or in whole through a machine that executes computer software, program codes, and/or instructions on a processor. The processor may be part of a server, client, network infrastructure, mobile computing platform, stationary computing platform, or other computing platforms. A processor may be any kind of computational or processing device capable of executing program instructions, codes, binary instructions, and the like. The processor may be or include a signal processor, digital processor, embedded processor, microprocessor, or any variant such as a co-processor (math co-processor, graphic co-processor, communication co-processor and the like) and the like that may directly or indirectly facilitate execution of program code or program instructions stored thereon. In addition, the processor may enable execution of multiple programs, threads, and codes. The threads may be executed simultaneously to enhance the performance of the processor and to facilitate simultaneous operations of the application. By way of implementation, methods, program codes, program instructions and the like described herein may be implemented in one or more thread. The thread may spawn other threads that may have assigned priorities associated with them; the processor may execute these threads based on priority or any other order based on instructions provided in the program code. The processor may include memory that stores methods, codes, instructions, and programs as described herein and elsewhere. The processor may access a storage medium through an interface that may store methods, codes, and instructions as described herein and elsewhere. The storage medium associated with the processor for storing methods, programs, codes, program instructions or other type of instructions capable of being executed by the computing or processing device may include but may not be limited to one or more of a CD-ROM, DVD, memory, hard disk, flash drive, RAM, ROM, cache, and the like. A processor may include one or more cores that may enhance speed and performance of a multiprocessor. In embodiments, the process may be a dual core processor, quad core processors, other chip-level multiprocessor and the like that combine two or more independent cores (called a die). The methods and systems described herein may be deployed in part or in whole through a machine that executes computer software on a server, client, firewall, gateway, hub, router, or other such computer and/or networking hardware. The software program may be associated with a server that may include a file server, print server, domain server, Internet server, intranet server and other variants such as secondary server, host server, distributed server, and the like. The server may include one or more of memories, processors, computer readable media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other servers, clients, machines, and devices through a wired or a wireless medium, and the like. 
The methods, programs or codes as described herein and elsewhere may be executed by the server. In addition, other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the server. The server may provide an interface to other devices including, without limitation, clients, other servers, printers, database servers, print servers, file servers, communication servers, distributed servers, and the like. Additionally, this coupling and/or connection may facilitate remote execution of program across the network. The networking of some or all of these devices may facilitate parallel processing of a program or method at one or more location without deviating from the scope. In addition, any of the devices attached to the server through an interface may include at least one storage medium capable of storing methods, programs, code and/or instructions. A central repository may provide program instructions to be executed on different devices. In this implementation, the remote repository may act as a storage medium for program code, instructions, and programs. The software program may be associated with a client that may include a file client, print client, domain client, Internet client, intranet client and other variants such as secondary client, host client, distributed client, and the like. The client may include one or more of memories, processors, computer readable media, storage media, ports (physical and virtual), communication devices, and interfaces capable of accessing other clients, servers, machines, and devices through a wired or a wireless medium, and the like. The methods, programs or codes as described herein and elsewhere may be executed by the client. In addition, other devices required for execution of methods as described in this application may be considered as a part of the infrastructure associated with the client. The client may provide an interface to other devices including, without limitation, servers, other clients, printers, database servers, print servers, file servers, communication servers, distributed servers, and the like. Additionally, this coupling and/or connection may facilitate remote execution of program across the network. The networking of some or all of these devices may facilitate parallel processing of a program or method at one or more location without deviating from the scope. In addition, any of the devices attached to the client through an interface may include at least one storage medium capable of storing methods, programs, applications, code and/or instructions. A central repository may provide program instructions to be executed on different devices. In this implementation, the remote repository may act as a storage medium for program code, instructions, and programs. The methods and systems described herein may be deployed in part or in whole through network infrastructures. The network infrastructure may include elements such as computing devices, servers, routers, hubs, firewalls, clients, personal computers, communication devices, routing devices and other active and passive devices, modules and/or components as known in the art. The computing and/or non-computing device(s) associated with the network infrastructure may include, apart from other components, a storage medium such as flash memory, buffer, stack, RAM, ROM, and the like. 
The processes, methods, program codes, instructions described herein and elsewhere may be executed by one or more of the network infrastructural elements. The methods, program codes, and instructions described herein and elsewhere may be implemented on a cellular network having multiple cells. The cellular network may either be frequency division multiple access (FDMA) network or code division multiple access (CDMA) network. The cellular network may include mobile devices, cell sites, base stations, repeaters, antennas, towers, and the like. The cell network may be a GSM, GPRS, 3G, EVDO, mesh, or other networks types. The methods, programs codes, and instructions described herein and elsewhere may be implemented on or through mobile devices. The mobile devices may include navigation devices, cell phones, mobile phones, mobile personal digital assistants, laptops, palmtops, netbooks, pagers, electronic books readers, music players and the like. These devices may include, apart from other components, a storage medium such as a flash memory, buffer, RAM, ROM and one or more computing devices. The computing devices associated with mobile devices may be enabled to execute program codes, methods, and instructions stored thereon. Alternatively, the mobile devices may be configured to execute instructions in collaboration with other devices. The mobile devices may communicate with base stations interfaced with servers and configured to execute program codes. The mobile devices may communicate on a peer to peer network, mesh network, or other communications networks. The program code may be stored on the storage medium associated with the server and executed by a computing device embedded within the server. The base station may include a computing device and a storage medium. The storage device may store program codes and instructions executed by the computing devices associated with the base station. The computer software, program codes, and/or instructions may be stored and/or accessed on machine readable media that may include: computer components, devices, and recording media that retain digital data used for computing for some interval of time; semiconductor storage known as random access memory (RAM); mass storage typically for more permanent storage, such as optical discs, forms of magnetic storage like hard disks, tapes, drums, cards and other types; processor registers, cache memory, volatile memory, non-volatile memory; optical storage such as CD, DVD; removable media such as flash memory (e.g., USB sticks or keys), floppy disks, magnetic tape, paper tape, punch cards, standalone RAM disks, Zip drives, removable mass storage, off-line, and the like; other computer memory such as dynamic memory, static memory, read/write storage, mutable storage, read only, random access, sequential access, location addressable, file addressable, content addressable, network attached storage, storage area network, bar codes, magnetic ink, and the like. The methods and systems described herein may transform physical and/or intangible items from one state to another. The methods and systems described herein may also transform data representing physical and/or intangible items from one state to another. The elements described and depicted herein, including in flow charts and block diagrams throughout the figures, imply logical boundaries between the elements. 
However, according to software or hardware engineering practices, the depicted elements and the functions thereof may be implemented on machines through computer executable media having a processor capable of executing program instructions stored thereon as a monolithic software structure, as standalone software modules, or as modules that employ external routines, code, services, and so forth, or any combination of these, and all such implementations may be within the scope of the present disclosure. Examples of such machines may include, but may not be limited to, personal digital assistants, laptops, personal computers, mobile phones, other handheld computing devices, medical equipment, wired or wireless communication devices, transducers, chips, calculators, satellites, tablet PCs, electronic books, gadgets, electronic devices, devices having artificial intelligence, computing devices, networking equipment, servers, routers, and the like. Furthermore, the elements depicted in the flow chart and block diagrams or any other logical component may be implemented on a machine capable of executing program instructions. Thus, while the foregoing drawings and descriptions set forth functional aspects of the disclosed systems, no particular arrangement of software for implementing these functional aspects should be inferred from these descriptions unless explicitly stated or otherwise clear from the context. Similarly, it may be appreciated that the various steps identified and described above may be varied and that the order of steps may be adapted to particular applications of the techniques disclosed herein. All such variations and modifications are intended to fall within the scope of this disclosure. As such, the depiction and/or description of an order for various steps should not be understood to require a particular order of execution for those steps, unless required by a particular application, or explicitly stated or otherwise clear from the context. The methods and/or processes described above, and steps thereof, may be realized in hardware, software or any combination of hardware and software suitable for a particular application. The hardware may include a general purpose computer and/or dedicated computing device or specific computing device or particular aspect or component of a specific computing device. The processes may be realized in one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors or other programmable devices, along with internal and/or external memory. The processes may also, or instead, be embodied in an application specific integrated circuit, a programmable gate array, programmable array logic, or any other device or combination of devices that may be configured to process electronic signals. It may further be appreciated that one or more of the processes may be realized as a computer executable code capable of being executed on a machine readable medium. 
The computer executable code may be created using a structured programming language such as C, an object oriented programming language such as C++, or any other high-level or low-level programming language (including assembly languages, hardware description languages, and database programming languages and technologies) that may be stored, compiled or interpreted to run on one of the above devices, as well as heterogeneous combinations of processors, processor architectures, or combinations of different hardware and software, or any other machine capable of executing program instructions. Thus, in one aspect, each method described above and combinations thereof may be embodied in computer executable code that, when executing on one or more computing devices, performs the steps thereof. In another aspect, the methods may be embodied in systems that perform the steps thereof, and may be distributed across devices in a number of ways, or all of the functionality may be integrated into a dedicated, standalone device or other hardware. In another aspect, the means for performing the steps associated with the processes described above may include any of the hardware and/or software described above. All such permutations and combinations are intended to fall within the scope of the present disclosure. While the methods and systems described herein have been disclosed in connection with certain preferred embodiments shown and described in detail, various modifications and improvements thereon may become readily apparent to those skilled in the art. Accordingly, the spirit and scope of the methods and systems described herein are not to be limited by the foregoing examples but is to be understood in the broadest sense allowable by law. All documents referenced herein are hereby incorporated by reference in their entirety. Interactive Game System Based on Spatiotemporal Analysis of Video Content and Related Methods Overview In particular embodiments, an interactive game system5300is configured to augment (e.g., supplement) an experience of one or more viewers (e.g., users) that are viewing an event (e.g., on any suitable computing device). In particular embodiments, the interactive game system5300is configured to convert the event (e.g., sporting event) into an interactive game (e.g., video game) that the system may enable one or more users to play along with the event. In particular embodiments, the system is configured to determine a score for each of the one or more users playing the interactive game based at least in part on one or more of: (1) one or more user-provided inputs during the event; (2) one or more actions performed by one or more participants during the event; (3) one or more scoring criteria; and/or (4) any other suitable metric. In a particular embodiment, for example, the system may be configured to enable a user to select one or more particular participants in the event (e.g., one or more players in the sporting event) for at least a portion of the event (e.g., a quarter, a half, a play, a drive, a possession, the entire event, a portion of the event until the user makes an alternative selection of the one or more particular participants, etc.). The system may then be configured to determine a score for each user based on one or more actions taken by the selected one or more particular participants during the event while the user has selected those one or more particular participants. 
As a particular example, an interactive game system5300that is configured to convert a basketball game (e.g., a televised basketball game) into an interactive game may be configured to enable a user to select a particular player from the game (e.g., via a user interface provided on a mobile computing device associated with the user). The system may then be configured to determine a number of points accumulated for the selected player in the basketball game in order to determine a score for the user. For example, the system may be configured to assign points to the selected player for performing particular actions during the basketball game. The system may, for example, be configured to assign a particular number of points for any suitable action and/or activity performed by the player during the game such as: (1) scoring a two point basket; (2) scoring a three point basket; (3) completing a dribble; (4) assisting a basket; (5) drawing a foul; (6) blocking a shot; (7) stealing the ball; and/or (8) any other basketball related activity. In still other embodiments, the system may be configured to reduce a user's score in response to determining that the user's selected player has performed one or more particular negative actions. The one or more negative actions may include, for example: (1) turning the ball over; (2) stepping out of bounds; (3) missing a shot; (4) conceding a basket against another player that the selected player was defending; (5) leaving a player open to take a shot; (6) committing a foul; (7) losing a jump ball; (8) committing a technical foul; and/or (9) any other negative action which may occur during a basketball or other game. In various embodiments, the system is configured to augment a video feed of an event (e.g., sporting event) based at least in part on one or more user selections (e.g., one or more user selections of one or more particular participants in the event, one or more players in the sporting event, etc.). As discussed more fully herein, the system may be configured to augment the video feed by displaying any suitable indicia adjacent (e.g., around) a selected player in addition to an indication of a scoring event (e.g., a spatiotemporal event) associated with the selected player (e.g., +10 points for a rebound), such as may be understood fromFIG.68. As discussed herein, in various embodiments, the system may be configured to overlay (e.g., over a video feed of the event) one or more on screen indications related to a particular spatiotemporal event over the video feed in a location that at least generally corresponds to a location of a particular selected player in the video feed in conjunction with (e.g., and/or substantially immediately following) a spatiotemporal event in which the selected player is involved. The system may, for example, be configured to display (e.g., as part of a customized user interface, augmentation to a video feed, etc.) points earned by a selected player over the head of the selected player in the video feed, underneath the selected player, or otherwise adjacent the selected player (e.g., as the selected player earns points during the sporting event). In some embodiments, the system is configured to determine a location of the selected player within the video feed based at least in part on the spatiotemporal event data (e.g., any suitable spatiotemporal event data described herein). 
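By way of illustration only, the following Python sketch shows one way such per-action scoring might be expressed; the action names and point values are assumptions chosen for demonstration and are not prescribed by this disclosure.

    # Illustrative per-action scoring for a selected basketball player.
    # Action names and point values are assumptions, not values required
    # by the system described above.

    POSITIVE_ACTIONS = {
        "two_point_basket": 2,
        "three_point_basket": 3,
        "completed_dribble": 1,
        "assist": 2,
        "drawn_foul": 1,
        "blocked_shot": 2,
        "steal": 2,
    }

    NEGATIVE_ACTIONS = {
        "turnover": -2,
        "out_of_bounds": -1,
        "missed_shot": -1,
        "conceded_basket": -2,
        "left_shooter_open": -1,
        "committed_foul": -1,
        "lost_jump_ball": -1,
        "technical_foul": -3,
    }

    def score_action(action):
        """Return the point delta for a single action by the selected player."""
        if action in POSITIVE_ACTIONS:
            return POSITIVE_ACTIONS[action]
        return NEGATIVE_ACTIONS.get(action, 0)

    def user_score(actions_by_selected_player):
        """Accumulate a user's score from the actions of the selected player."""
        return sum(score_action(a) for a in actions_by_selected_player)

    print(user_score(["three_point_basket", "steal", "turnover"]))  # 3 + 2 - 2 = 3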
In particular embodiments, the system is configured to generate a set of one or more (e.g., two or more) augmented videos, for example, at the one or more third party servers5320, at the one or more spatiotemporal event analysis servers5360, the one or more interactive game servers5330, and/or any other suitable remote server and/or combination of services. The set of one or more augmented videos may, for example, include an augmented video that corresponds to each user-selectable player (e.g., and/or combination of user-selectable players). In such embodiments, the system may be configured to: (1) receive a selection of one or more particular participants in an event; (2) in response to receiving the selection, retrieve and/or identify (e.g., from one or more remote servers) an existing augmented video feed that corresponds to the selection (e.g., and was generated and/or is being generated by one or more remote servers); and (3) provide the augmented video feed to a computing device associated with the user (e.g., a client device) for display on the computing device. In various embodiments, the system may then be configured to: (1) receive a selection of one or more different participants in the event from the user; (2) retrieve, and/or identify the augmented video feed (e.g., existing augmented video feed) for the one or more different participants; and (3) provide the augmented video feed for the one or more different participants to the computing device associated with the user for display on the computing device (e.g., by switching which particular video feed is transmitted to the user's computing device). In particular embodiments, the system may be configured to generate an augmented video feed (e.g., at a suitable server or combination of servers) for any possible combination of selections by the user of the system, and transmit the augmented video feed to the user (e.g., to the user's mobile device) that corresponds to the user's actual selections. In still other embodiments, the system is configured to generate the augmented video corresponding to the selection of one or more particular participants in the event locally on a client device (e.g., one or more mobile computing devices5310) associated with the user. For example, in various embodiments, the system is configured to perform one or more client-side augmentation steps using one or more techniques described in U.S. provisional patent application No. 62/808,243, filed Feb. 20, 2019, entitled “Methods and Systems of Combining Video Content with One or More Augmentations to Produce Augmented Video,” which is hereby incorporated herein in its entirety. In various embodiments, the system is configured to augment an existing broadcast video feed of the event (e.g., sporting event), which may, for example, be provided by a suitable broadcaster (e.g., television channel, streaming service, etc.). In particular embodiments, the system may be configured to identify a particular selected player within the broadcast video feed by identifying a number associated with the selected player within the video feed (e.g., a jersey number, etc.). In particular embodiments, the system is configured to enable a user to select a different player that is participating in the event (e.g., basketball game) at any point during the event. In this way, the system may be configured to provide an interactive game in which a user may provide a selection of any particular player in the event at any time during the event. 
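As a non-limiting illustration, the following Python sketch shows one way a server might map a participant selection to a pre-generated augmented feed of the kind described above; the registry class, feed URLs, and player identifiers are hypothetical.

    # Illustrative mapping from a user's participant selection to an existing,
    # pre-generated augmented video feed. The registry, feed URLs, and player
    # identifiers are hypothetical.

    class AugmentedFeedRegistry:
        """Holds one augmented feed reference per selectable participant combination."""

        def __init__(self):
            self._feeds = {}

        def register(self, participants, feed_url):
            self._feeds[frozenset(participants)] = feed_url

        def feed_for_selection(self, participants):
            """Return the existing augmented feed that matches the user's selection."""
            return self._feeds[frozenset(participants)]

    registry = AugmentedFeedRegistry()
    registry.register({"player_23"}, "https://example.com/feeds/player_23.m3u8")
    registry.register({"player_30"}, "https://example.com/feeds/player_30.m3u8")

    # Switching the user's selection simply switches which feed is streamed.
    print(registry.feed_for_selection({"player_23"}))
    print(registry.feed_for_selection({"player_30"}))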
In such embodiments, the system may be configured to determine a user's score based on actions taken by one or more players (e.g., participants) in the event only while the user has selected those one or more players. For example, in response to a user selecting Player A at the beginning of the event, the system may be configured to accumulate points for the user based on one or more actions, events, etc. associated with Player A as long as the user still has Player A selected. In response to the user selecting Player B, the system may be configured to stop accumulating points for the user based on actions associated with Player A and begin accumulating points for the user based on one or more actions associated with Player B (e.g., substantially instantaneously). In this way, the system may be configured to provide an interactive game that engages a user to play along during the event in order to select players that, for example, are playing well during particular portions of the event. The system may be further configured to provide an interactive game that engages a user to play along during the event in order to switch to a different player when an initially selected player is, for example, performing poorly, is ejected from the game, is injured, etc. By enabling users to select players substantially on the fly (e.g., on the fly) during the course of the event, the system may be configured to provide a larger variety of user scoring during the event. For example, as may be understood in light of this disclosure and in light of the nature of sporting events, an interactive game system that merely enabled a user to select a player at the outset of the event and did not enable the user to select different players during the course of the event may result in a high number of users with substantially the same score at the end of the event (e.g., particularly due to the limited number of players available for selection in most sporting events). This may particularly occur for such interactive game systems that have a high number of users participating in an interactive game for a particular event for which there is a low number of available participants for selection (e.g., such as in a basketball game). In various embodiments, the interactive game system is further configured to generate and display (e.g., on a computing device associated with each respective user playing the interactive game) a custom graphical user interface over (e.g., in conjunction with) a video feed of the sporting event (e.g., on a mobile computing device5310). In particular embodiments, the custom graphical user interface may include, for example, one or more indications related to: (1) scoring data for the user (e.g., the user's overall, accumulated score for the event); (2) one or more actions performed by one or more participants selected by the user (e.g., in conjunction with and/or substantially immediately after the one or more selected participants perform the one or more actions in the video feed of the sporting event); (3) scoring data for one or more other users (e.g., one or more other users with whom the user is competing in the interactive game); (4) etc. 
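For illustration, the following Python sketch shows one possible way to credit a user only for events occurring while the involved player is currently selected; the tracker class, player names, and point values are assumptions, not a required implementation.

    # Illustrative tracker that credits a user only for events occurring while
    # the involved player is currently selected. Player names and point values
    # are assumptions for demonstration.

    class SelectionTracker:
        def __init__(self):
            self.selected_player = None
            self.score = 0

        def select(self, player):
            """Switch the user's selection; later events credit the new player only."""
            self.selected_player = player

        def on_event(self, player, points):
            """Credit the event only if it involves the currently selected player."""
            if player == self.selected_player:
                self.score += points

    tracker = SelectionTracker()
    tracker.select("Player A")
    tracker.on_event("Player A", 3)   # counted
    tracker.select("Player B")
    tracker.on_event("Player A", 2)   # ignored: Player A is no longer selected
    tracker.on_event("Player B", 2)   # counted
    print(tracker.score)              # 5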
Although various embodiments herein will be described with respect to one or more sporting events (e.g., a soccer game, a basketball game, a tennis match, a football game, a cricket match, a volleyball game, etc.), it should be understood that other embodiments of the system described herein may be implemented in the context of any other suitable system in which scoring data may be applied to an event such that a user score may be determined based on one or more user selections during the event. This may include, for example: (1) one or more e-sports events (e.g., one or more electronic sporting events); (2) one or more televised debates; (3) one or more table games (e.g., one or more poker tournaments); and/or (4) any other suitable event for which the system may determine scoring data and display one or more custom user interfaces in conjunction with video of the event. Particular embodiments of an interactive game system are described more fully below. Exemplary Technical Platforms As will be appreciated by one skilled in the relevant field, the embodiments described herein may be, for example, embodied as a computer system, a method (e.g., a computer-implemented method, computer-implemented data processing method, etc.), or a computer program product. Accordingly, various embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, particular embodiments may take the form of a computer program product stored on a computer-readable storage medium (e.g., a nontransitory computer-readable medium) having computer-readable instructions (e.g., software) embodied in the storage medium. Various embodiments may take the form of web-implemented computer software. Any suitable computer-readable storage medium may be utilized including, for example, hard disks, compact disks, DVDs, optical storage devices, and/or magnetic storage devices. Various embodiments are described herein with reference to block diagrams and flowchart illustrations of methods (e.g., computer-implemented methods), apparatuses (e.g., systems) and computer program products. It should be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by a computer executing computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus to create means for implementing the functions specified in the flowchart block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner such that the instructions stored in the computer-readable memory produce an article of manufacture that is configured for implementing the function specified in the flowchart block or blocks. 
The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions that execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart block or blocks. Accordingly, blocks of the block diagrams and flowchart illustrations support combinations of mechanisms for performing the specified functions, combinations of steps for performing the specified functions, and program instructions for performing the specified functions. It should also be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, can be implemented by special purpose hardware-based computer systems that perform the specified functions or steps, or combinations of special purpose hardware and other hardware executing appropriate computer instructions. Example System Architecture FIG.53is a block diagram of an interactive game system5300according to particular embodiments. In various embodiments, the interactive game system5300may be configured to: (1) enable a user to select one or more players participating in a substantially live (e.g., live) sporting or other event; (2) determine scoring data for each of the one or more selected players during the sporting or other event; (3) track the determined scoring data; (4) generate a custom (e.g., to the user) user interface that includes the scoring data; and (5) display the custom user interface over at least a portion of a display screen (e.g., on a mobile computing device5310) displaying one or more video feeds of the sporting or other event. As may be understood fromFIG.53, the interactive game system5300includes one or more computer networks5315, One or More Mobile Computing Devices5310(e.g., tablet computer, smartphone, etc.), One or More Third Party Servers5320, One or More Interactive Game Servers5330, One or More Databases5340or other data structures, one or more remote computing devices5350(e.g., a desktop computer, laptop computer, tablet computer, smartphone, etc.), and/or One or More Spatiotemporal Event Analysis Servers5360. In particular embodiments, the one or more computer networks5315facilitate communication between (e.g., and/or among) the One or More Mobile Computing Devices5310, One or More Third Party Servers5320, One or More Interactive Game Servers5330, One or More Databases5340, one or more remote computing devices, and/or One or More Spatiotemporal Event Analysis Servers5360. Although in the embodiment shown inFIG.53, the One or More Mobile Computing Devices5310, One or More Third Party Servers5320, One or More Interactive Game Servers5330, One or More Databases5340, one or more remote computing devices, and/or One or More Spatiotemporal Event Analysis Servers5360are depicted as separate servers and computing devices, it should be understood that in other embodiments, one or more of these servers and/or computing devices may comprise a single server, a plurality of servers, one or more cloud-based servers, or any other suitable configuration. The one or more computer networks5315may include any of a variety of types of wired or wireless computer networks such as the Internet, a private intranet, a public switch telephone network (PSTN), or any other type of network. 
The communication link between the One or More Mobile Computing Devices5310and the One or More Interactive Game Servers5330may be, for example, implemented via a Local Area Network (LAN) or via the Internet. In other embodiments, the One or More Databases5340may be stored either fully or partially on any suitable server or combination of servers described herein. In various other embodiments, an interactive game system5300may utilize one or more suitable cloud computing techniques in order to execute overlay software, underlying software, store and access one or more pieces of data, etc. The interactive game system5300may, for example, be configured to perform one or more processing steps on one or more remote servers (e.g., the One or More Interactive Game Servers5330and/or One or More Spatiotemporal Event Analysis Servers5360) prior to transmitting and displaying particular data on one or more interfaces on the One or More Mobile Computing Devices5310as described herein. For example, the one or more networks5315may facilitate communication between the One or More Interactive Game Servers5330and the One or More Spatiotemporal Event Analysis Servers5360in order to transmit spatiotemporal event data for a sporting or other event (e.g., during the event in substantially real time) to the One or More Interactive Game Servers5330, for example, in order to determine scoring data (e.g., at the One or More Interactive Game Servers) for a user based on the user's selections during the sporting or other event. The system may then, for example, transmit any suitable data from the One or More Interactive Game Servers5330, via the One or More Networks5315, to the One or More Mobile Computing Devices5310for display as part of a customized user interface for the user while the user is viewing the sporting or other event on the One or More Mobile Computing Devices5310. FIG.54illustrates a diagrammatic representation of a computer architecture5400that can be used within the interactive game system5300, for example, as a client computer (e.g., One or More Mobile Computing Devices5310shown inFIG.53), or as a server computer (e.g., One or More Interactive Game Servers5330, One or More Spatiotemporal Event Servers5360, etc.) shown inFIG.53. In particular embodiments, the computer5400may be suitable for use as a computer within the context of the interactive game system5300that is configured to receive input from a user, determine scoring data for the user based on one or more user-provided inputs and spatiotemporal event data associated with a particular sporting or other event, etc. In particular embodiments, the computer5400may be connected (e.g., networked) to other computers in a LAN, an intranet, an extranet, and/or the Internet. As noted above, the computer5400may operate in the capacity of a server or a client computer in a client-server network environment, or as a peer computer in a peer-to-peer (or distributed) network environment. The Computer5400may be a desktop personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any other computer capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that computer. 
Further, while only a single computer is illustrated, the term “computer” shall also be taken to include any collection of computers that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. An exemplary computer5400includes a processing device5402(e.g., one or more computer processors), a main memory5404(e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory5406(e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device5418, which communicate with each other via a bus232. The processing device5402represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device5402may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, Scalar Board, or processor implementing other instruction sets, or processors implementing a combination of instruction sets. The processing device5402may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device5402may be configured to execute processing logic5426for performing various operations and steps discussed herein. The computer5400may further include a network interface device5408. The computer5400also may include a video display unit5410(e.g., a liquid crystal display (LCD), LED display, OLED display, plasma display, a projector, a cathode ray tube (CRT), any suitable display described herein, or any other suitable display), an alphanumeric or other input device5412(e.g., a keyboard), a cursor control or other input device5414(e.g., a mouse, stylus, pen, touch-sensitive input device, etc.), and a signal generation device5416(e.g., a speaker). The data storage device5418may include a non-transitory computer-accessible storage medium5430(also known as a non-transitory computer-readable storage medium or a non-transitory computer-readable medium) on which is stored one or more sets of instructions (e.g., software5422) embodying any one or more of the methodologies or functions described herein. The software5422may also reside, completely or at least partially, within the main memory5404and/or within the processing device5402during execution thereof by the computer5400—the main memory5404and the processing device5402also constituting computer-accessible storage media. The software5422may further be transmitted or received over a network5315via a network interface device5408. While the computer-accessible storage medium5430is shown in an exemplary embodiment to be a single medium, the term “computer-accessible storage medium” should be understood to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-accessible storage medium” should also be understood to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the computer and that cause the computer to perform any one or more of the methodologies of the present invention. 
The term “computer-accessible storage medium” should accordingly be understood to include, but not be limited to, solid-state memories, optical and magnetic media, etc. Exemplary System Platform Various embodiments of an interactive game system5300may be implemented in the context of any suitable system (e.g., as a software application running on One or More Mobile Computing Devices5310, as an overlay to an underlying software application running on the One or More Mobile Computing Devices5310, as a data processing system utilizing one or more servers to perform particular processing steps, as part of the system4800described herein, or any other suitable combination thereof or discussed herein). For example, the interactive game system5300may be implemented to: (1) receive one or more user-provided selections and/or inputs via a user interface (e.g., on One or More Mobile Computing Devices5310), the one or more user-provided selections and/or inputs being associated with a particular sporting event (e.g., or other event); (2) receive spatiotemporal event data related to the particular sporting event; (3) receive one or more scoring criteria; (4) apply the one or more scoring criteria to the spatiotemporal event data; (5) generate one or more custom user interfaces based on the one or more scoring criteria, the spatiotemporal event data, and the one or more user-provided selections and/or inputs; and (6) display the one or more custom user interfaces on a computing device associated with the user (e.g., the One or More Mobile Computing Devices5310) while the user is viewing the particular sporting event on the computing device associated with the user. Various aspects of the system's functionality may be executed by certain system modules, including an Interactive Game Module5500and a Spatiotemporal Event Analysis Module5600. These modules are discussed in greater detail below. Although these modules are presented as a series of steps, it should be understood in light of this disclosure that various embodiments of the Interactive Game Module5500and Spatiotemporal Event Analysis Module5600described herein may perform the steps described below in an order other than in which they are presented. In still other embodiments, the Interactive Game Module5500and Spatiotemporal Event Analysis Module5600may omit certain steps described below. In various other embodiments Interactive Game Module5500and Spatiotemporal Event Analysis Module5600may perform steps in addition to those described (e.g., such as one or more steps described with respect to one or more other modules, etc.). Interactive Game Module In particular embodiments, when executing an Interactive Game Module5500, the interactive game system5300is configured to: (1) receive, from a user, a selection of one or more players participating in a sporting event; (2) identify events that occur during the sporting event that involve the selected one or more players; (3) determine a score for each event for each of the one or more players involved in the event (e.g., based on one or more scoring criteria); (4) aggregate scoring data for the user over the course of the sporting event; and (5) display a graphical user interface over (e.g., and/or in conjunction with) a video feed of the sporting event (e.g., on a mobile computing device5310) that includes one or more indications related to, for example: the scoring data for the user, one or more event scores, scoring data for one or more other users, etc. 
In particular embodiments, when executing the Interactive Game Module5500, the interactive game system is configured to convert sporting event video footage (e.g., substantially real-time footage) into an interactive game. In various embodiments, the system may receive the sporting event video footage, for example, from a broadcaster of the sporting event. In some embodiments, the system may, for example, be configured to enable a user to select particular players (e.g., for at least a particular portion of the sporting event) in order to accumulate points based at least in part on one or more actions undertaken by the selected players during the sporting event (e.g., while the user has selected the particular player(s)). The system may then be configured to enable the user to compete with one or more other users to achieve a higher score during the course of the sporting event. The system may, for example, be configured to enable a user to switch among selected players during the course of the event. In this way, the system may be configured to provide a more engaging way for viewers of a sporting event to engage with the event (e.g., by playing the interactive game while viewing the event). Turning toFIG.55, in particular embodiments, when executing the Interactive Game Module5500, the system begins, at Step5510, by receiving one or more user-provided inputs comprising a selection of one or more players from a plurality of available players participating in a sporting event (e.g., or other suitable event). In particular embodiments, the system is configured to receive the selection of the one or more players in response to a user selecting the one or more players via a suitable user interface (e.g., on a mobile computing device5310) associated with the user. In some embodiments, the system is configured to prompt the user to select a particular number of players competing in the sporting event. The system may, for example, be configured to prompt the user to select: (1) a single player participating in the event; (2) one player from each team competing in the event; (3) one or more players in one or more different positions for the event (e.g., one or more offensive players, one or more defensive players, one or more quarterbacks, one or more running backs, one or more guards, one or more wide receivers, one or more midfield players, one or more goalkeepers, or any other suitable number of players from any suitable team or playing any suitable position). In particular embodiments, the system may enable the user to select a particular number or type of participants in the event (e.g., sporting event) based at least in part on the type of event. As may be understood in light of this disclosure, the system may be configured to prompt a user to make a selection of one or more players based at least in part on: (1) a type of interactive game selected by the user; (2) a type of sporting event (e.g., or other event) for which the user is playing the interactive game; etc. In various embodiments, the system is configured to retrieve a listing of available players from one or more databases5340. The system may, for example, be configured to retrieve the listing of players from one or more third party servers5320, which may, for example, be configured to track: (1) substantially current lineups for the sporting event; (2) substantially current rosters for each team participating in the sporting event; (3) etc. 
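As an illustrative aid only, the following Python sketch shows one way a selection might be validated against roster and position constraints of the kind described above; the roster format, constraint parameters, and helper name are hypothetical.

    # Illustrative validation of a user's player selection against constraints
    # of the kind described above (one player per team, position quotas). The
    # roster format and parameters are hypothetical.

    def validate_selection(selection, roster, max_per_team=1, required_positions=None):
        """Return True if the selected players satisfy the selection rules."""
        required_positions = required_positions or {}
        per_team = {}
        per_position = {}
        for player_id in selection:
            player = roster[player_id]  # raises KeyError if not on a current roster
            per_team[player["team"]] = per_team.get(player["team"], 0) + 1
            per_position[player["position"]] = per_position.get(player["position"], 0) + 1
        if any(count > max_per_team for count in per_team.values()):
            return False
        return all(per_position.get(pos, 0) >= n for pos, n in required_positions.items())

    roster = {
        "p1": {"team": "home", "position": "guard"},
        "p2": {"team": "away", "position": "forward"},
    }
    print(validate_selection(["p1", "p2"], roster, required_positions={"guard": 1}))  # True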
The system may then be configured to receive information related to the plurality of available players in order to provide the user with a selection of the available players. As may be understood in light of this disclosure, although the sporting event may include any suitable sporting event (e.g., a soccer match, a basketball game, a tennis match, a football game, a cricket match, a gymnastics meet, a track meet, a golf tournament, etc.), it should be understood that other embodiments of the system described herein may be implemented in the context of any other suitable system in which scoring data may be applied to an event such that a user score may be determined based on one or more user selections during the event. This may include, for example: (1) one or more e-sports events (e.g., one or more electronic sporting events, video game events, etc.); (2) one or more televised debates; (3) one or more table games (e.g., poker); (4) one or more racing events (e.g., horse racing, stock car racing, formula one racing, etc.); and/or (5) any other suitable event for which the system may be configured to convert a substantially live competitive activity into an interactive game. Returning to Step5520, the system is configured to determine spatiotemporal event data for at least the selected one or more players during the sporting or other event. Various embodiments of a system configured to retrieve and/or analyze spatiotemporal event data for at least the selected one or more players during the sporting or other event are discussed in more detail below with respect to the Spatiotemporal Event Analysis Module5600. As may be understood in light of this disclosure, in particular embodiments, a particular sporting event (e.g., or other event) may comprise a plurality of discrete events that occur over the course of the event. In various embodiments, the system may be configured to determine and/or receive data related to each of the plurality of discrete events (e.g., spatiotemporal events) over the course of the overall sporting event. In particular embodiments, the spatiotemporal event data may include, for example, data related to one or more particular actions undertaken by one or more players in the game, such as actions occurring: (1) during each discrete event; (2) leading up to each discrete event; and/or (3) after each discrete event. In particular embodiments, a spatiotemporal event may include any event that occurs during the sporting event. 
A spatiotemporal event may include, for example: (1) a particular play during the sporting event; (2) a particular time period during the sporting event (e.g., a quarter, half, etc.); (3) a particular incident during the sporting event; (4) a particular action by one or more players during the sporting event (e.g., a pass, an attempted shot, a scored shot, an assist, a dribble, a tackle, a steal, a foul, a particular movement, a particular off-the-ball movement, a run, a throw, a pitch, an interception, a forced fumble, a hit, a catch, a score, a pitch, a strike, a thrown ball, a missed tackle, a drop, an incomplete pass, a completed pass, a sack, a fumble, a throw-in, a free-kick, a blocked shot, a defensive pressure, a defensed pass, a wicket, a home run, a single, a double, a triple, and/or any other suitable potential action which a player may make during the course of any suitable sporting event); and/or (5) any other suitable spatiotemporal event related to any activity undertaken by any participant in a sporting or other event at any time during the sporting event at any location on a field of play (e.g., or outside of the field of play) associated with the sporting event. In some embodiments, the data may include any suitable semantic event data. In still other embodiments, the spatiotemporal event data may include any other suitable data related to each discrete spatiotemporal event such as, for example: (1) a time of each spatiotemporal event (e.g., during the sporting event); (2) a location of each player participating in the sporting event during the spatiotemporal event (e.g., within and/or outside a field of play of the sporting event) and/or leading up to the spatiotemporal event; (3) a relative position of each player participating in the sporting event (e.g., a distance between at least two players during the spatiotemporal event); (4) a likelihood of the spatiotemporal event having occurred (e.g., a probability that a player will make a particular shot that makes up the spatiotemporal event); (5) a movement speed of each particular participant in the sporting event during any particular spatiotemporal event); and/or (6) any other suitable data related to the event (e.g., which may, for example, be used by the system to modify a spatiotemporal event score for any particular participant in the sporting event with respect to one or more particular spatiotemporal events). In some embodiments, the spatiotemporal event data may comprise one or more pieces of raw coordinate data associated with each particular participant in the sporting event (e.g., at any time during the sporting event). The spatiotemporal event data may define, for example, distinct location data (e.g., within the playing field of the sporting event, relative to one or more field markings on the playing field or surface, etc.) for each of the plurality of players participating in the sporting event during the course of the sporting event (e.g., instantaneous location data). In various embodiments, the system is configured to track (e.g., and/or receive) substantially instantaneous (e.g., instantaneous) location data for each participant in the sporting event during the course of the event. In any embodiment described herein, the spatiotemporal event data may include any suitable semantic event data related to any number of a plurality of discrete semantic events that occur during the sporting event. Next, at Step5530, the system is configured to retrieve one or more scoring criteria. 
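For illustration, the following Python sketch shows one possible representation of a single spatiotemporal event record reflecting the kinds of fields listed above; the field names and example values are assumptions rather than a defined schema.

    # Illustrative record for a single discrete spatiotemporal event, reflecting
    # the kinds of fields listed above. Field names and values are assumptions,
    # not a defined schema.

    from dataclasses import dataclass, field
    from typing import Dict, Optional, Tuple

    @dataclass
    class SpatiotemporalEvent:
        event_type: str                              # e.g., "made_shot", "steal", "tackle"
        game_clock: float                            # seconds elapsed in the current period
        players_involved: Tuple[str, ...]            # participant identifiers
        locations: Dict[str, Tuple[float, float]] = field(default_factory=dict)
        defender_distance: Optional[float] = None    # e.g., shooter-to-nearest-defender distance
        success_probability: Optional[float] = None  # estimated likelihood of the outcome

    event = SpatiotemporalEvent(
        event_type="made_shot",
        game_clock=472.0,
        players_involved=("player_23", "player_7"),
        locations={"player_23": (6.7, 2.1), "player_7": (7.4, 2.5)},
        defender_distance=0.8,
        success_probability=0.41,
    )
    print(event.event_type, event.success_probability)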
In various embodiments, the system may be configured to receive the one or more scoring criteria from the user that is playing the interactive game (e.g., the system may be configured to enable the user to select one or more scoring settings). In other embodiments, the system may be configured to retrieve the one or more scoring criteria based at least in part on: (1) a game-type selected by the user; (2) a sporting event type for which the interactive game is being played; and/or (3) any other suitable factor. In particular embodiments, the one or more scoring criteria may include any suitable criteria related to a scoring value for one or more particular actions undertaken by one or more players during the course of the event for which the interactive game system is providing the interactive game. For example, the one or more scoring criteria may include one or more criteria related to: (1) a point value for each of a plurality of different actions that a player (e.g., participant) may undertake during the event (e.g., which may, for example, depend on a type of the event); (2) a point value modifier based at least in part on one or more spatiotemporal attributes associated with a particular event (e.g., which will be discussed more fully below); (3) one or more criteria related to modification of the point value and/or point value modifier based at least in part on a probability and/or likelihood of each of the plurality of different actions occurring, succeeding, and/or failing; and/or (4) any other suitable criteria which may be applied to one or more activities performed by, actions performed by, inactions by, positions taken by, and/or other suitable events involving any particular participant (e.g., player) in the event. In some embodiments, the one or more scoring criteria may include one or more criteria related to a player losing points for particular negative actions (e.g., as discussed herein). In still other embodiments, the one or more scoring criteria may include one or more criteria related to positioning of a player during a particular spatiotemporal event. The system may, for example, access one or more scoring criteria relating to an increase or decrease in a player's score based on a distance of the player from a point of interest of a particular spatiotemporal event. For example, in response to determining that a defensive player is greater than a particular distance away from an offensive player while the offensive player is scoring and/or attempting to score, the system may be configured to reduce the defensive player's score (e.g., by removing a particular number of points). In another example, the system may be configured to increase a defensive player's score in response to determining that the defensive player forced an offensive player to take a low percentage shot (e.g., even if the offensive player ends up making the shot). Next, at Step5540, the system is configured to apply the one or more scoring criteria to the spatiotemporal event data to determine a spatiotemporal event score for each of a plurality of spatiotemporal events during the sporting event (e.g., or other event). As may be understood in light of this disclosure, as the interactive game system receives spatiotemporal event data regarding events (e.g., actions, activities, etc.) that occur during the course of the sporting event, the system may be configured to determine a score for any particular player involved in a particular event. 
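By way of example only, the following Python sketch shows one way scoring criteria of this kind might combine a base point value with probability- and distance-based modifiers; the base values, thresholds, and weights are illustrative assumptions.

    # Illustrative scoring criteria combining a base point value with modifiers
    # for outcome probability and defender distance. Base values, thresholds,
    # and weights are assumptions chosen for demonstration.

    def event_points(action, success_probability=None, defender_distance=None):
        """Return a modified point value for one action in one spatiotemporal event."""
        base_values = {"made_shot": 10.0, "assist": 6.0, "steal": 8.0}
        points = base_values.get(action, 0.0)
        if success_probability is not None:
            # Lower-probability outcomes earn a larger bonus when they succeed.
            points *= 1.0 + (1.0 - success_probability)
        if defender_distance is not None and defender_distance < 1.0:
            # A tightly contested action is harder, so it earns a small extra bonus.
            points *= 1.1
        return round(points, 1)

    print(event_points("made_shot", success_probability=0.35, defender_distance=0.8))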
In one example, a spatiotemporal event may include a made basket in basketball. In this example, the spatiotemporal event data may include, for example: (1) a location of each player participating in the basketball game at the time of the made basket; (2) a time of the made basket (e.g., the basket was made with 4:12 left in the third quarter); (3) information associated with a player that scored the basket (e.g., a location from which the player took the shot that resulted in the scored basket); (4) information associated with a player that assisted the basket; (5) information associated with a player that was guarding the player that made the basket (e.g., a distance between the player that made the shot and the player defending the player that made the shot); (6) relative positioning data for one or more players in the game (e.g., a distance between the player that made the basket and the player guarding the player that made the basket at the time the basket was made); (7) probability data related to the made basket (e.g., which the system may determine based on, for example, shooting statistics associated with the player that made the basket, a location from which the shot was taken, a skill level of the player defending the shot, the relative position of the shooter and the defender, etc.); and/or (8) any other suitable data related to the event (e.g., a defensive posture of the team that conceded the basket and/or goal such as whether the defensive team was playing man-to-man defense, zone defense, etc.). In this example, when applying the one or more scoring criteria to the spatiotemporal event data to determine a spatiotemporal event score, the system may be configured to calculate and/or determine a spatiotemporal event score for one or more of: (1) the player that made the basket; (2) the player that assisted the basket; (3) the player defending the player that made the basket; and/or (4) any other player that may have been involved in the event (e.g., made basket) such as, for example: (1) one or more players setting an off the ball screen; (2) one or more players that were in close enough proximity to provide defensive help but failed to do so; and/or (3) any other suitable player in the event. In various embodiments, the system is configured to determine the spatiotemporal event score (e.g., for any suitable player in the event) based on any suitable factor (e.g., piece of spatiotemporal event data) and/or scoring criteria described herein. Returning to Step5550, the system is configured to generate one or more custom user interfaces based at least in part on one or more of the one or more user provided inputs, the spatiotemporal event data, the spatiotemporal event score for each of the plurality of spatiotemporal events, and the one or more scoring criteria. In various embodiments, the interactive game system is configured to generate (e.g., on a computing device associated with each respective user playing the interactive game) a custom graphical user interface over (e.g., in conjunction with) a video feed of the sporting event (e.g., on the mobile computing device5310). In various embodiments, the custom graphical user interface may augment an existing video feed of the sporting event. 
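For illustration, the following Python sketch shows one possible way to derive a spatiotemporal event score for each role involved in a made-basket event of this kind; the role point values and record fields are assumptions chosen for demonstration.

    # Illustrative role-based scoring for a single made-basket event: the scorer,
    # the assister, and the defender each receive a spatiotemporal event score.
    # The record fields and point values are assumptions.

    def scores_for_made_basket(event):
        """Map each involved player to the points earned or lost for this event."""
        scores = {}
        scores[event["scorer"]] = 3 if event["three_pointer"] else 2
        if event.get("assister"):
            scores[event["assister"]] = 2
        defender = event.get("defender")
        if defender is not None:
            # The defender loses less when the conceded shot was a low-percentage attempt.
            scores[defender] = -1 if event.get("success_probability", 1.0) < 0.35 else -2
        return scores

    print(scores_for_made_basket({
        "scorer": "player_23",
        "assister": "player_11",
        "defender": "player_7",
        "three_pointer": True,
        "success_probability": 0.30,
    }))
    # {'player_23': 3, 'player_11': 2, 'player_7': -1}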
In particular embodiments, the custom graphical user interface may include, for example, one or more indications related to: (1) scoring data for the user (e.g., the user's overall, accumulated score for the event); (2) one or more actions performed by one or more participants selected by the user (e.g., in conjunction with and/or substantially immediately after the one or more selected participants perform the one or more actions in the video feed of the sporting event); (3) scoring data for one or more other users (e.g., one or more other users with whom the user is competing in the interactive game); (4) etc. As discussed more fully herein, in various embodiments, the system is configured to generate the one or more custom user interfaces (e.g., one or more augmented videos) remotely (e.g., on any suitable server or combination of servers discussed herein). For example, in particular embodiments, the system is configured to generate a set of one or more (e.g., two or more) augmented videos (e.g., custom user interfaces), for example, at the one or more third party servers5320, at the one or more spatiotemporal event analysis servers5360, the one or more interactive game servers5330, and/or any other suitable remote server and/or combination of services. The set of one or more augmented videos may, for example, include an augmented video that corresponds to each user-selectable player (e.g., and/or combination of user-selectable players). In such embodiments, the system may be configured to: (1) receive a selection of one or more particular participants in an event; (2) in response to receiving the selection, retrieve and/or identify (e.g., from one or more remote servers) an existing augmented video feed that corresponds to the selection; and (3) provide the augmented video feed to a computing device associated with the user for display on the computing device. In various embodiments, the system may then be configured to: (1) receive a selection of one or more different participants in the event from the user; (2) retrieve and/or identify the augmented video feed (e.g., existing augmented video feed) for the one or more different participants; and (3) provide the augmented video feed for the one or more different participants to the computing device associated with the user for display on the computing device (e.g., by switching which particular video feed is transmitted to the user's computing device). In particular embodiments, the system may be configured to generate an augmented video feed (e.g., at a suitable server or combination of servers) for any possible combination of selections by the user of the system, and transmit the augmented video feed to the user (e.g., to the user's mobile device) that corresponds to the user's actual selections. In still other embodiments, the system is configured to generate the one or more custom user interfaces (e.g., one or more augmented videos) locally on a client device. In still other embodiments, the system is configured to generate at least a portion of the one or more custom user interfaces (e.g., one or more augmented videos) on one or more remote servers, and further augment the video locally on a client device. At Step5560, the system is configured to display the one or more custom user interfaces (e.g., video augmentations) on at least a portion of a display screen of a computing device (e.g., associated with the user) as the computing device is displaying a video feed of the sporting or other event. 
In some embodiments, the system is configured to overlay the one or more custom user interfaces over at least a portion of the video feed of the sporting event. This may include, for example, overlaying one or more on screen indications related to a particular spatiotemporal event over the video feed in a location that at least generally corresponds to a location of a particular selected player in the video feed in conjunction with (e.g., and/or substantially immediately following) a spatiotemporal event in which the selected player is involved. The system may, for example, be configured to display (e.g., as part of a customized user interface) points earned by a selected player over the head of the selected player in the video feed (e.g., as the selected player earns points during the sporting event) or otherwise adjacent the selected player. In various embodiments, as may be understood in light of this disclosure, the system may take a particular processing time to perform the analysis described herein with respect to generating the one or more custom user interfaces (e.g., video augmentations), determining user and/or spatiotemporal event scores for one or more players, etc. As further discussed herein, in certain embodiments, the system may be configured to overlay one or more on screen indications related to a particular spatiotemporal event over the video feed in a location that at least generally corresponds to a location of a particular selected player in the video feed in conjunction with (e.g., and/or substantially immediately following) a spatiotemporal event in which the selected player is involved. In such embodiments, it may be desirable for the system to display these one or more on screen indications at substantially the same time as (e.g., and/or immediately following) the occurrence of the related spatiotemporal event. As such, in various embodiments, the system may be configured to display the video feed on a delay that corresponds to the particular processing time. In such embodiments, the system may, for example, be configured to: (1) receive spatiotemporal event data as the sporting event is occurring; (2) process the spatiotemporal event data as it is received; (3) display the video feed of the event on the mobile computing device (e.g., or other device) on a time delay that at least generally corresponds to a processing time for processing the spatiotemporal event data to determine scoring data for each particular spatiotemporal event that occurs during the sporting event; and (4) display the custom user interface along with the time delayed video feed such that the system displays user interface features at a time that corresponds to an occurrence of the one or more spatiotemporal events during the sporting event. In particular embodiments, the system is configured to delay the initiation of scoring points for the user based on a particular selected player (e.g., up to about thirty seconds, up to about a minute, and/or up to about any other suitable length of time). In this way, the system may be configured to account for a lag between a video feed of the sporting event and the live event (e.g., to prevent an individual that is attending the live event from 'cheating' by picking players that have already performed a high scoring activity/event but have not yet performed the activity in the interactive game due to a broadcast or video feed and/or processing delay). 
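As a non-limiting illustration, the following Python sketch shows one way overlay display might be aligned with a time-delayed video feed; the fixed delay value and scheduler class are assumptions rather than a required design.

    # Illustrative scheduler that presents overlays in step with a video feed
    # delayed by the processing time, so a scoring indication appears when the
    # event is shown rather than when it was processed. The delay value is an
    # assumed constant.

    import heapq

    PROCESSING_DELAY_S = 8.0  # assumed worst-case processing latency

    class OverlayScheduler:
        def __init__(self, delay_s=PROCESSING_DELAY_S):
            self.delay_s = delay_s
            self._queue = []

        def schedule(self, event_wall_time_s, overlay_text):
            # The feed is displayed delay_s behind real time, so the overlay is
            # due at the event's wall-clock time plus that same delay.
            heapq.heappush(self._queue, (event_wall_time_s + self.delay_s, overlay_text))

        def due_overlays(self, now_s):
            due = []
            while self._queue and self._queue[0][0] <= now_s:
                due.append(heapq.heappop(self._queue)[1])
            return due

    scheduler = OverlayScheduler()
    scheduler.schedule(event_wall_time_s=100.0, overlay_text="+10 rebound")
    print(scheduler.due_overlays(now_s=108.0))  # ['+10 rebound']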
In particular embodiments, the delay between selection of a player in the game and beginning to accumulate points for that player may be based at least in part on a delay in the video feed of the game (e.g., from the live events of the game) that is being shown as part of the interactive game. Spatiotemporal Event Analysis Module In particular embodiments, when executing a Spatiotemporal Event Analysis Module5600, the interactive game system5300is configured to: (1) determine spatiotemporal event data for each of a plurality of available players during a sporting event; (2) analyze the spatiotemporal event data to identify individual spatiotemporal events that occur during the sporting event; (3) determine which of a plurality of players participating in the sporting event are involved in each individual spatiotemporal event; and (4) determine, for each individual spatiotemporal event, spatiotemporal event score data for each player of the plurality of players involved in the spatiotemporal event. In particular embodiments, as may be understood in light of this disclosure, a particular sporting event (e.g., or other event) may comprise a plurality of discrete events that occur over the course of the event. In various embodiments, the system may be configured to assign a score to one or more particular actions undertaken by one or more players in the game, for example: (1) during each discrete event; (2) leading up to each discrete event; and/or (3) after each discrete event. In particular embodiments, a spatiotemporal event may include any event that occurs during the sporting event. A spatiotemporal event may include, for example: (1) a particular play during the sporting event; (2) a particular time period during the sporting event (e.g., a quarter, half, etc.); (3) a particular incident during the sporting event; (4) a particular action by one or more players during the sporting event (e.g., a pass, an attempted shot, a scored shot, an assist, a dribble, a tackle, a steal, a foul, a particular movement, a particular off-the-ball movement, a run, a throw, a pitch, an interception, a forced fumble, a hit, a catch, a score, a pitch, a strike, a thrown ball, a missed tackle, a drop, an incomplete pass, a completed pass, a sack, a fumble, a throw-in, a free-kick, a blocked shot, a defensive pressure, a defensed pass, and/or any other suitable potential action which a player may make during the course of any suitable sporting event); and/or (5) any other suitable spatiotemporal event related to any activity undertaken by any participant in a sporting or other event at any time during the sporting event at any location on a field of play (e.g., or outside of the field of play) associated with the sporting event. In various embodiments, the system is configured to analyze spatiotemporal event data for a particular sporting event in order to determine a number of points scored and/or lost (e.g., based on one or more scoring criteria as part of the interactive game) by one or more players selected by one or more users of an interactive game system as described herein. Turning toFIG.56, in particular embodiments, when executing the Spatiotemporal Event Analysis Module5600, the system begins, at Step5610, by determining spatiotemporal event data for each of a plurality of available players during a sporting event. In particular embodiments, the system is configured to receive the spatiotemporal event data from one or more third party servers5320. 
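For illustration only, one or more scoring criteria of the kind described above might be represented as a simple mapping from action types to point values, applied to each identified spatiotemporal event. The particular actions, point values, roles, and function names below are hypothetical and do not limit any embodiment.

# Hypothetical scoring criteria: action or role -> points awarded to an involved player.
SCORING_CRITERIA = {
    "scored_shot": 10,
    "assist": 5,
    "steal": 4,
    "blocked_shot": 4,
    "completed_pass": 2,
    "foul": -3,
}

def score_event(event):
    """Return {player_id: points} for a single spatiotemporal event.

    `event` is assumed to look like:
    {"action": "scored_shot", "players": {"player_13": "primary", "player_3": "assist"}}
    """
    scores = {}
    for player_id, role in event["players"].items():
        # A player's points may depend on the action and on that player's role in it.
        action = event["action"] if role == "primary" else role
        scores[player_id] = SCORING_CRITERIA.get(action, 0)
    return scores

# Example: a scored shot credits both involved players under these assumed criteria.
print(score_event({"action": "scored_shot",
                   "players": {"player_13": "primary", "player_3": "assist"}}))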
In particular embodiments, the spatiotemporal event data may be compiled by one or more entities which may, for example, use one or more image analysis techniques and a plurality of cameras to identify and/or determine any of the spatiotemporal event data described herein. In various embodiments, the system is configured to receive (e.g., determine) the spatiotemporal event data based on data provided by one or more third parties (e.g., from one or more third party servers5320, one or more databases5340, etc.). In other embodiments, the system is configured to determine the spatiotemporal event data as the sporting event occurs using any suitable technique. In any embodiment described herein, a spatiotemporal event may include, for example: (1) a particular play during the sporting event; (2) a particular time period during the sporting event (e.g., a quarter, half, etc.); (3) a particular incident during the sporting event; (4) a particular action by one or more players during the sporting event (e.g., a pass, an attempted shot, a scored shot, an assist, a dribble, a tackle, a steal, a foul, a particular movement, a particular off-the-ball movement, a run, a throw, a pitch, an interception, a forced fumble, a hit, a catch, a score, a strike, a thrown ball, a missed tackle, a drop, an incomplete pass, a completed pass, a sack, a fumble, a throw-in, a free-kick, a blocked shot, a catch and shoot, a deflection, an own goal, a save, a cross, an attempted pass, a defensive pressure, a defensed pass, and/or any other suitable potential action which a player may make during the course of any suitable sporting event); and/or (5) any other suitable spatiotemporal event related to any activity undertaken by any participant in a sporting or other event at any time during the sporting event at any location on a field of play (e.g., or outside of the field of play) associated with the sporting event. In still other embodiments, the spatiotemporal event data may include any other suitable data related to each discrete spatiotemporal event such as, for example: (1) a time of each spatiotemporal event (e.g., during the sporting event); (2) a location of each player participating in the sporting event during the spatiotemporal event (e.g., within and/or outside a field of play of the sporting event); (3) a relative position of each player participating in the sporting event (e.g., a distance between at least two players during the spatiotemporal event); (4) a likelihood of the spatiotemporal event having occurred (e.g., a probability that a player will make a particular shot that makes up the spatiotemporal event); (5) a movement speed of each particular participant in the sporting event during any particular spatiotemporal event; and/or (6) any other suitable data related to the event (e.g., which may, for example, be used by the system to modify a spatiotemporal event score for any particular participant in the sporting event with respect to one or more particular spatiotemporal events). Continuing to Step5620, the system is configured to analyze the spatiotemporal event data to identify one or more spatiotemporal events during the sporting event. In particular embodiments, the system is configured to identify each of the one or more spatiotemporal events that make up the sporting event based on the spatiotemporal event data. The system may be further configured to identify one or more pieces of spatiotemporal event data associated with each particular identified spatiotemporal event.
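A non-limiting way to represent the per-event data enumerated above is sketched below as a simple record type; every field name and example value is an assumption chosen for readability rather than a required schema.

from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class SpatiotemporalEvent:
    """Illustrative record for one discrete spatiotemporal event (field names assumed)."""
    action: str                                   # e.g., "completed_pass"
    game_clock: str                               # time of the event, e.g., "Q1 8:46"
    player_locations: Dict[str, Tuple[float, float]] = field(default_factory=dict)  # on-field (x, y)
    relative_distances: Dict[Tuple[str, str], float] = field(default_factory=dict)  # pairwise separation
    event_probability: float = 1.0                # likelihood the event would occur
    player_speeds: Dict[str, float] = field(default_factory=dict)                   # movement speed

# Example instance for a hypothetical pass.
evt = SpatiotemporalEvent(
    action="completed_pass",
    game_clock="Q1 8:46",
    player_locations={"QB_4": (32.0, 26.5), "WR_80": (48.0, 12.0)},
    relative_distances={("WR_80", "CB_24"): 3.2},
    event_probability=0.47,
    player_speeds={"WR_80": 8.9},
)
print(evt.action, evt.event_probability)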
Continuing to Step5630, the system is configured to determine which of the plurality of available players are involved in each of the one or more spatiotemporal events. The system may, for example, be configured to determine which of the plurality of available players are involved in each of the one or more spatiotemporal events based at least in part on the spatiotemporal event data described herein. Next, at Step5640, the system is configured to determine, for each of the one or more spatiotemporal events, spatiotemporal event score data for each player of the plurality of players involved in the spatiotemporal event. In some embodiments, as described herein, the system may be configured to apply the one or more scoring criteria to the spatiotemporal event data to determine a spatiotemporal event score for each of a plurality of spatiotemporal events during the sporting event (e.g., or other event). As may be understood in light of this disclosure, as the interactive game system receives spatiotemporal event data regarding events (e.g., actions, activities, etc.) that occur during the course of the sporting event, the system may be configured to determine a score for any particular player involved in a particular event. In various embodiments, the system is configured to determine a spatiotemporal event score for each player participating in the sporting event (e.g., for each spatiotemporal event). In one example, a spatiotemporal event may include a completed pass in a football game. In this example, the spatiotemporal event data may include, for example: (1) a location of each player participating in the football game at the beginning of the play that led to the completed pass, at the time the quarterback threw the pass, at the time the pass was caught, at a time that the play ended (e.g., due to the player that caught the ball being tackled, running out of bounds, and/or scoring a touchdown, etc.), and/or any other suitable time during the course of the play (e.g., or a continuously determined position of each player during the course of the play); (2) a time of the caught pass (e.g., the pass may have been completed with 8:46 left in the first quarter); (3) information associated with a player that threw the pass; (4) information associated with a player that caught the pass; (5) information associated with a player (e.g., one or more players) that was guarding the player that caught the pass; (6) relative positioning data for one or more players in the game (e.g., a distance between the player that caught the pass and the player guarding or defending the player that caught the pass at the time the pass was caught, leading up to the catch, etc.); (7) probability data related to the caught pass (e.g., which the system may determine based on, for example, a length of the pass, a distance the ball travelled in the air, a pass catch rate for the player that caught the pass, completion statistics associated with the player that threw the pass, etc.); and/or (8) any other suitable data related to the event.
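Continuing with the completed-pass example, the sketch below shows one purely hypothetical way the data just enumerated could feed a scoring calculation, with a bonus that grows as the catch probability shrinks and as the defender's separation at the catch shrinks; the weights, thresholds, and function name are illustrative assumptions only.

def score_completed_pass(catch_probability, defender_separation_yards,
                         base_points=2.0, difficulty_weight=3.0, tight_coverage_yards=2.0):
    """Hypothetical spatiotemporal event score for the receiver on a completed pass."""
    difficulty_bonus = difficulty_weight * (1.0 - catch_probability)  # harder catches score more
    coverage_bonus = 1.0 if defender_separation_yards <= tight_coverage_yards else 0.0
    return base_points + difficulty_bonus + coverage_bonus

# Example: a contested catch (47% completion probability, 1.5 yards of separation).
print(round(score_completed_pass(catch_probability=0.47, defender_separation_yards=1.5), 2))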
In this example, when applying the one or more scoring criteria to the spatiotemporal event data to determine a spatiotemporal event score, the system may be configured to calculate and/or determine a spatiotemporal event score for one or more of: (1) the player that caught the pass; (2) the player that threw the pass; (3) the player defending the player that caught the pass; and/or (4) any other player that may have been involved in the event (e.g., completed pass) such as, for example: (1) one or more blockers; (2) one or more pass rushers; and/or (3) any other suitable player in the event. In various embodiments, the system is configured to determine the spatiotemporal event score (e.g., for any suitable player involved in the event) based on any suitable factor (e.g., piece of spatiotemporal event data) and/or scoring criteria described herein. The system may then, at Step5650, use the spatiotemporal event score data to determine a user score (e.g., a user score for a user of the interactive game system5300described herein based on the one or more players selected by the user). The system may then be configured to use the spatiotemporal event score data and user score in the implementation of the custom user interfaces described herein. Exemplary User Experience FIGS.57-68depict exemplary screen displays and graphical user interfaces (GUIs) according to various embodiments of the system, which may display information associated with the system or enable access to, or interaction with, one or more features of the system by one or more users. FIG.57depicts an exemplary screen display5700that a user of the interactive game system5300may encounter when accessing a software application (e.g., on a mobile computing device5310) for playing an interactive game during a sporting event. As may be understood from this figure, the screen display includes a modes tab5710, which may, for example, display a plurality of user-selectable game modes5712,5714,5716,5718,5720,5722. Each of the plurality of user-selectable game modes5712,5714,5716,5718,5720,5722may, for example, include one or more different rule sets (e.g., scoring criteria) for the interactive game. In various embodiments, the system is configured to display a plurality of user-selectable game modes5712,5714,5716,5718,5720,5722that are based at least in part on a type of event (e.g., basketball game, soccer match, etc.) during which the system will provide the interactive game(s). As may be understood fromFIG.57, the screen display5700may include a video feed of a sporting event5740(e.g., a basketball game between the Houston Rockets and the LA Clippers). In particular embodiments, the system may be configured to display the video feed of the sporting event5740natively within the software application. In other embodiments, the system is configured to display the screen display5700as an overlay to the video feed of the sporting event5740, which may, for example, be provided by one or more video content providers. In particular embodiments, the screen display5700may further include a current score5730of the game (e.g., 0-0 before the game begins), a time left before the start of the game (e.g., 3:00 minutes), and any other suitable information. As may be understood fromFIG.58, in response to the user selecting a ‘play’ indicium5750in the screen display5700shown inFIG.57, the system may display the screen display5800ofFIG.58.
As shown in this figure, the system may be configured to display data (e.g., information) related to the interactive game that the user has selected and may include a user-selectable indicium to start playing5810the game. In response to the user selecting the user-selectable indicium to start playing the game5810shown inFIG.58, the system, in particular embodiments, may be configured to progress the user through a series of initial user interfaces shown inFIGS.59-62prior to the initiation of the interactive game. The system may, for example, be configured to display the user interface5900depicted inFIG.59, which may be configured to enable the user to provide a username for use during the interactive game (which the system may, for example, use to display the user's score on a leaderboard of user scores). The system may be further configured to display the display screens shown inFIGS.60-62(e.g., prior to initiating the interactive game). As may be understood fromFIG.60, the screen display6000shown inFIG.60may include any suitable textual information related to the interactive game that the user has selected to play. The textual information may include, for example: (1) information related to the event that is about to occur (e.g., the sporting event); (2) information associated with the interactive game that the user has selected to play (e.g., scoring rules, scoring criteria, prize information, information associated with one or more participants in the event from which the user may select, etc.).FIG.61depicts a display screen6100via which a user may select a particular player from one or more available players in the sporting (e.g., or other) event. As may be understood from the exemplary embodiment shown in this figure, the user may select a single particular player from a selection of five available players. In some embodiments, the five players may include five players that are currently playing in a particular sporting event (e.g., a basketball game) for a particular team (e.g., the LA Clippers). In still other embodiments, the system may be configured to display (e.g., via a suitable display screen6200such as the display screen shown inFIG.62) one or more particular actions (e.g., activities, events, etc.) that may be performed by a selected participant in order to acquire points (e.g., bonus points, additional points, etc.). In still other embodiments, as shown inFIG.63, the system may be configured to display, via a suitable interface6300, past performance data for a selected event participant (e.g., for a selected basketball player that is playing for a particular team in the basketball game for which the interactive game is running). This may, for example, include a score that the selected participant earned in one or more past events. FIGS.64-66depict exemplary user interfaces depicting a custom user interface overlaid on a video feed of a sporting event (e.g., a basketball game). As may be understood from these interfaces6400,6500,6600, the system may be configured to enable a user to select one or more different players during the sporting event as part of any suitable interactive game described herein. The interfaces may further display a user's currently selected player and score.FIG.67depicts an exemplary user interface6700that depicts a leaderboard that the system is displaying as an overlay to the underlying video feed of the sporting event.
The leaderboard may, for example, depict the user's relative score with respect to one or more other users playing the interactive game during the sporting event. In the user interface shown inFIG.68, the custom user interface6800includes an indicium adjacent (e.g., around) the selected player in addition to an indication of a scoring event (e.g., a spatiotemporal event) associated with the selected player (e.g., +10 points for a rebound). As discussed herein, in various embodiments, the system may be configured to overlay one or more on screen indications related to a particular spatiotemporal event over the video feed in a location that at least generally corresponds to a location of a particular selected player in the video feed in conjunction with (e.g., and/or substantially immediately following) a spatiotemporal event in which the selected player is involved. The system may, for example, be configured to display (e.g., as part of a customized user interface) points earned by a selected player over the head of the selected player in the video feed (e.g., as the selected player earns points during the sporting event). In some embodiments, the system is configured to determine a location of the selected player within the video feed based at least in part on the spatiotemporal event data (e.g., any suitable spatiotemporal event data described herein). Client-Side Augmentation Systems and Methods As discussed above, systems according to various embodiments may be adapted to combine video content with one or more augmentations to produce an augmented video. Also as noted above, such augmentations may include, for example in the context of a video of a professional basketball game or other sporting event, suitable text and/or graphics. The augmentations may also, or alternatively, include one or more audio overlays. In the context of text and/or graphics, the system may be adapted to display each augmentation: (1) so that it stays in a fixed position relative to a particular object in the video (e.g., so that a player indicator remains in a particular position relative to the image of a particular player, so that player statistics for a particular player in a sporting event remain positioned over the image of the particular player's head as the player moves from place to place in the video, etc.); or (2) so that it stays in a fixed position relative to the frame of the display screen that displays the video. As discussed in greater detail above, the system may accomplish this by having the spatial indexing and alignment system4534provide suitable temporal and spatial indexing and alignment information (e.g., for objects in the video, the augmentations, etc.) to the processing system4518. The processing system4518then uses this information to produce augmented video content. In various embodiments, the step of creating the augmented video content may be done by a server and then transmitted to a user's client device for playback. A potential disadvantage to such embodiments is that they may require the generation of a separate augmented video for each potential combination of individual augmentations. In various embodiments, the individual augmentations may be rendered separately by a server and then transmitted to a client along with suitable temporal and spatial indexing and alignment information. 
A suitable application on the user's client device may then use the provided temporal and spatial indexing and alignment information to position and/or display the augmentation over, or in conjunction with, the base video. In various embodiments, the base video and selected augmentations can be combined at the user device (e.g., the client) based on input from the user indicating which one or more of a plurality of augmentations the user would like to see. One advantage of various such embodiments is that, from a practical perspective, it may allow the user to select from a larger number of combinations of augmentations. These augmentations may, in various embodiments, be customized to user preferences (e.g., as indicated by a user toggling the various augmentations on or off). Systems and methods described herein may enable the interactivity and user control of independent augmented elements in video within the context of a sporting event, or other live or recorded event. For instance, in a particular embodiment, a user of a tablet computer (e.g., an iPad) watching a basketball game may separately tap a player on the tablet computer's touch screen to toggle through various statistics related to the player. For example, a user toggling in this manner could toggle between: (1) the player's statistics being displayed above his head; (2) the player's shot chart being overlaid on the basketball court; (3) a player indicator being displayed in a particular position relative to the image of a particular player; and/or (4) a trail illustrating the player's recent movement and speed being overlaid on the basketball court. In particular embodiments, the system may allow the user to select these types of statistics and visual features to be displayed for any combination of players independently. So, for example, the user may select to have shot percentages and shot charts displayed for two particular active players while the user is viewing video of a particular basketball game, but to not have the system display these augmentations for the other active players within the game. In various embodiments, the rendering engine may include a three-dimensional rendering engine that takes in information on the camera calibration, distortion parameters, image segmentation, and/or tracking coordinates of subjects in the video, so as to produce augmentations that are rendered in the 3D scene of the video, for example, so that such augmentations may be masked by objects closer to the camera. The augmentation data may be represented and transmitted using any suitable video or image format. For instance, the augmentation data may be represented as Portable Network Graphics (PNG) images, together with information specifying the video frame and location (relative to the video viewport) corresponding to each image. In various embodiments, the images may be cropped (e.g., by the server computer) into boxes containing visible assets surrounded by a transparent background, and/or may be collected into sprite sheets, for more efficient compression and transmission. The augmentation data may be sent from the server to the user's client device via any suitable communication channel, such as that provided by the Web Socket protocol. In various embodiments, specific augmentations are selectively transmitted to the client device based on user input events (e.g., a user tapping on a particular player) that are transmitted from the client device to the server.
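The client-side composition step described above might, purely as a sketch, look like the following, which pastes a server-rendered transparent PNG onto the corresponding decoded video frame at the location supplied with the augmentation data. Pillow is an assumed dependency, and the message layout (frame_index, x, y, png_bytes) is an assumption rather than a defined format.

import io
from PIL import Image  # Pillow is an assumed dependency for this sketch

def composite_augmentation(frame_image, augmentation_message):
    """Overlay one augmentation (a transparent PNG) onto a decoded video frame.

    `augmentation_message` is assumed to be a dict such as:
    {"frame_index": 1042, "x": 420, "y": 96, "png_bytes": b"..."}  # position is viewport-relative
    """
    overlay = Image.open(io.BytesIO(augmentation_message["png_bytes"])).convert("RGBA")
    frame = frame_image.convert("RGBA")
    # The PNG's own alpha channel masks the paste, so transparent pixels leave the frame untouched.
    frame.paste(overlay, (augmentation_message["x"], augmentation_message["y"]), overlay)
    return frame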
To allow a user to easily specify and interact with the augmentations, the system may define a respective "bounding box" for one or more objects shown in the video (e.g., each particular player shown within the video, a ball shown in the video, etc.). In particular embodiments, the system may define a bounding box only for the one or more objects in a video that have available associated augmentations. In various embodiments, the application on the client device is configured to augment a particular object when the user selects (e.g., clicks on, taps on, etc.) the particular object's respective bounding box. In various embodiments, rather than perform resource-intensive bounding box calculations on the client computer, data defining the bounding boxes may be calculated by a server computer that may then pass the bounding box data to the client computer along with (e.g., within) the augmentations themselves. To accomplish this, in various embodiments, the system may use a PNG file format to define the augmentations. This file format may include an alpha channel specifying transparency and/or opacity. Using tracking data and camera calibration, the rendering engine (which may be executed by the server) can demarcate the area of a frame of video containing an object with some predetermined or pre-defined color value and opacity, such as a red, green, blue (RGB) value with an alpha of 0 (e.g., an RGBA value indicating a certain color that is fully transparent), so that the bounding box is invisible to the user. In such embodiments, when the user selects a particular bounding area (e.g., a bounding box) associated with a particular object (e.g., a particular player) while the video is displayed on (e.g., paused or playing on) the user's client device, the client device will determine an RGBA value that corresponds to the selected pixel, and determine a corresponding bounding area by matching the RGBA value of the selected pixel to the RGBA value of a bounding area and provide an indicator of the determined bounding area to a renderer. The system (e.g., a renderer) will then determine which object the particular indicator of the determined bounding area corresponds to (e.g., a certain indicator may correspond to the bounding box around James Harden), and then, in response to each selection (e.g., click or tap) on the bounding area, cycle (e.g., toggle) through various augmentations that correspond to the selected object. This approach may allow the client computer to determine (e.g., on-the-fly) if a user's selection occurs within the bounding area, while keeping the bounding area invisible to the user. An example algorithm for this concept is provided below in Table 2.

Exemplary Algorithm

TABLE 2
Inputs:
    List of PNG images of augmentations for each relevant object in the frame ims
    A touch gesture t
    An RGB value indicator
Output:
    augmented_object, describing which object on the screen should be augmented

for each image in ims:
    if image[t.loc.x][t.loc.y].RGB == indicator:
        augmented_object = image.name
        return
# If we reach this line, no object was touched, so augmented_object should not change
return null

FIG.69illustrates a process that may be performed by a Client-side Augmentation Module6900that may be executed by a client device or system.
When executing Client-side Augmentation Module6900, the system begins, at Step6910, by receiving video data and displaying the associated video content on the client device, such as video content of a sporting event, other live event, a performance, or prerecorded video content. In particular embodiments, the system is configured to receive the video data from one or more third party servers5320, one or more spatiotemporal event analysis servers5360, or one or more interactive game servers5330. In particular embodiments, the received video data may define or otherwise indicate one or more bounding boxes, each of which may be associated with an object represented in video content of the video data (e.g., each particular player shown within the video, a ball shown in the video, a referee shown in the video, etc.) shown in the video content of the video data. In various embodiments, the received video data may define or otherwise indicate one or more RGBA values associated with each bounding box. For example, the received video data may include one or more bounding boxes, each demarcating an area of a frame of video containing an object and each associated with a predetermined RGB value and an alpha value of 0 (zero), such that each bounding box is transparent and therefore invisible to the user. At Step6920, the system may detect that a selection of some portion of the video content has been made by a user. For example, the system may detect that a user has clicked or tapped on a particular portion of the video content (e.g., by detecting pressure applied to a touchscreen). The system may translate or otherwise interpret this click or tap into one or more pixels. Based on the detected selection, the system may determine a bounding box within which the selection was made. To determine this bounding box, the system may determine which of one or more pre-defined RGBA values matches the RGBA value of the user's pixel selection. In particular embodiments, the system may determine that no bounding box is associated with the selection, in which case the system may not request any augmentation based on the detected selection. If the system determines that augmentation is needed, the system may transmit a request for augmentation to the server. Alternatively, even if the system determines that augmentation is needed, the system may determine that locally stored or otherwise available asset renditions are satisfactory and therefore determine not to request augmentations from the server. At Step6930, to determine a bounding box associated with the user selection, the system may first determine an associated RGBA value for a pixel associated with the user selection (e.g., the pixel at the portion of the display tapped or clicked by the user). At Step6940, the system may compare the RGBA value of this pixel to the RGBA values of one or more predetermined bounding boxes for that particular video frame to determine which particular bounding box has been selected (e.g., which particular bounding box has an associated RGBA value that matches the RGBA value of the pixel associated with the user selection). At Step6950, the system may transmit an indicator of this bounding box (which may or may not be the RGBA value itself) to a renderer as part of, or indicating, a request for any applicable augmentation data. In various embodiments, such a renderer may be a server (e.g., executing a rendering engine), client-side application, or any other suitable system. 
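The client-side selection handling just described (detecting a tap, reading the tapped pixel's RGBA value, matching it to a predetermined bounding-box color, and requesting augmentation data) might be sketched as follows. The color table, the send_to_renderer stand-in, and Pillow as the means of pixel access are all assumptions for illustration only.

from PIL import Image  # Pillow is assumed for pixel access in this sketch

# Hypothetical predetermined bounding-box colors for the current frame (alpha 0 = invisible).
BOUNDING_BOX_COLORS = {
    (200, 30, 30, 0): "bbox_player_13",
    (30, 200, 30, 0): "bbox_player_3",
    (30, 30, 200, 0): "bbox_ball",
}

def send_to_renderer(indicator):
    # Stand-in for the request for augmentation data transmitted at Step6950.
    print("Requesting augmentation data for", indicator)

def handle_tap(bounding_box_layer, tap_x, tap_y):
    """Return the bounding-box indicator under the tap, or None if no box was selected."""
    rgba = bounding_box_layer.getpixel((tap_x, tap_y))
    indicator = BOUNDING_BOX_COLORS.get(tuple(rgba))
    if indicator is not None:
        send_to_renderer(indicator)
    return indicator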
The renderer may, in response to receiving the bounding box indicator, determine an object corresponding to the received bounding box indicator (e.g., a player, a ball, other non-player object, etc.). The renderer may then select a suitable augmentation for the corresponding object. The renderer may select the augmentation by selecting a PNG image from one or more PNG images associated with the object, where each such PNG image represents an augmentation of the object. In particular embodiments, the suitable augmentation may be a single PNG for each frame, at full size resolution, with augmentations occupying pixels sparsely and with transparent pixels occupying areas with no augmentations. In other particular embodiments, the renderer may provide a PNG of lesser resolution (e.g., only encompassing the augmentations to be displayed) and a UV location to indicate where to place the PNG onto the video. Once an object is selected, its augmentations may persist across multiple frames. In various embodiments, there may be several augmentation options available for a particular object that may be toggled sequentially. For example, the system may associate the augmentation options of "name," "stats" (player statistics), and "none" with a player object. Upon each determined selection of that player, the system may toggle through the augmentation options. Thus, when the player object is initially in a "none" augmentation state, in response to receiving an indication that the player has been selected by a user, the system may generate an augmentation of the player object that includes the player's name, putting the player object in a "name" augmentation state. When that player object is in a "name" augmentation state, in response to receiving an indication that the player has been selected by a user, the system may generate an augmentation of the player object that includes the player's stats, putting the player object in a "stats" augmentation state, and so forth. At Step6960, the system may receive the augmentation data associated with the bounding box indicated by the RGBA value indicator transmitted to the server at Step6950. In various embodiments, the augmentation data may include a PNG image and information specifying the video frame and location corresponding to the PNG image. At Step6970, the system may display the video content to the user augmented based on the augmentation data. FIG.70illustrates a process that may be performed by an Augmentation Generation Module7000that may be executed by a server or system operating in communication with a client device to enable client-side augmentation. In various embodiments, the Augmentation Generation Module7000may be executed by one or more third party servers5320, one or more spatiotemporal event analysis servers5360, one or more interactive game servers5330, one or more mobile computing devices5310, and/or a rendering engine configured on any suitable system or server. At Step7010, the system may receive an indicator of a selected bounding box from a client device. This indicator may be received in any form of communication from the client device, such as, but not limited to, a request for augmentation data, a request for video data, etc. The system may then, at least partially in response to receiving the bounding box indicator, at Step7020, determine the object that is associated with the bounding box indicator. As noted above, an object may have one or more associated augmentation states, one of which may be no augmentation.
For example, a player object may have “name,” “stats,” and “none” associated augmentations. At Step7030, the system may determine the current augmentation state of the object determined from the indicator received at Step7010. In particular embodiments, the system may determine that the object is in one of a plurality of augmentation states or that the object has no currently defined augmentation state. In particular embodiments, the system may determine a current augmentation for an object rather than a current augmentation state, for example, by determining a current augmentation for the object or determining that the object is not currently associated with an augmentation. At Step7040, the system may select an augmentation for the object, in various embodiments, based on the current augmentation state of the object or an augmentation currently associated with the object. For example, for a player object having “name,” “stats,” and “none” associated augmentations, where the object is currently in the “name” augmentation state or associated with the “name” augmentation, the system may determine that the object should next be in the “stats” augmentation state or associated with the “stats” augmentation. In this way, the system may allow a user to toggle through several augmentations until the user's preferred augmentation is found. In various embodiments, the system may be configured to determine an augmentation or augmentation state for a particular object based on other criteria instead of, or in combination with, the object's current augmentation or augmentation state. In other embodiments, the system may display a user interface in response to an object's selection, which may allow the user to select from different types of associated augmentations (e.g., “name,” “stats,” or “none”). In various embodiments, the selected and/or available augmentations may be PNG images. At Step7050, the system may determine the video frame and object location data that will be used by the client device to apply the augmentation. The system may determine such information for the object and/or a bounding box associated with the object using any means described herein or any other effective means. At Step7060, the system may respond to the client device by transmitting the determined augmentation and the associated frame and location data to the client device. As noted above, the augmentation may be one or more PNG images. The system may transmit the augmentation data to the client device via any suitable communication channel, such as that provided by the Web Socket protocol. Exemplary Client-Side Augmentation User Interfaces FIG.71depicts an exemplary screen display and graphical user interface (GUI)7100presenting video content on a client device according to various embodiments of the system. In this particular example, the video data associated with the video content shown in GUI7100includes bounding boxes7110,7111,7112,7113,7114,7115, which are shown for illustrative purposes in the figure, but are transparent and therefore invisible to the user. Each of bounding boxes7110,7111,7112,7113,7114,7115, may have an associated RGB value and alpha value (an RGBA value). The alpha value of each such bounding box may be 0 in this example, rendering the respective bounding box transparent. Each of bounding boxes7110,7111,7112,7113,7114,7115may be selectable by a user, for example, by tapping or clicking any portion of the bounding box.FIG.72shows GUI7100along with only bounding box7113for illustrative purposes. 
In this example, bounding box7113has been selected by a user by a tap at portion7120of bounding box7113. In response to detecting the selection of portion7120, the system may determine the RGBA value of bounding box7113and transmit an indicator derived from that RGBA value to the server to request augmentation data for bounding box7113. The server, at least in part in response to receiving the particular indicator for bounding box7113, may determine or select an augmentation7300as shown inFIG.73. The augmentation7300may be a PNG image. The server may select the particular PNG image7300from one or more PNG images associated with the object that is associated with bounding box7113, where each such PNG image represents an augmentation of the object. The server may also select or otherwise determine information specifying the video frame and location (relative to the video viewport) corresponding to the particular PNG image7300. The server may transmit the particular PNG image7300to the client device. The system may then display the augmentation along with the video content, as shown on exemplary GUI7400inFIG.74. In GUI7400, the video content shown includes the visible portion of augmentation7300only, reflecting what would be presented to a user in this example. Systems and Methods for Enhanced Augmentation of Interactive Video Content In various embodiments, an interactive content system may be configured to augment (e.g., supplement) an experience of one or more viewers (e.g., users) that are viewing an event (e.g., on any suitable computing device). In particular embodiments, the interactive content system is configured to facilitate user interaction with objects and augmented elements in video content based on, for example, spatial, temporal, and/or spatiotemporal indexing (e.g., in two dimensions, in three dimensions) of one or more regions of pixels in one or more video frames. Such indexing may identify one or more semantically meaningful elements and one or more respective semantic contexts in each of the one or more video frames. In various embodiments, spatiotemporal event data corresponding to video content may include spatiotemporal index data associated with one or more pixels and/or regions of pixels in one or more video frames. Each such one or more pixels and/or pixel regions may correspond to an element included in the video content. In various embodiments, each such element may correspond to: (1) one or more persons (e.g., players, referees, coaches, etc.), objects (e.g., balls, scoreboards, baskets, nets, etc.); (2) environments (e.g., a field, an arena, a court, etc.); (3) parts of a person (e.g., head, foot, hand, etc.); (4) items attached to or worn by a person (e.g., shoes, jerseys, hats, etc.); (5) parts of an object (e.g., one score on a scoreboard, game time shown on a scoreboard, etc.); and/or (6) parts or portions of an environment (e.g., basketball baskets, goal posts, field lines, an unoccupied region in a field, portion of a court where a goal was recently made, etc.). A context associated with such an element may correspond to one or more semantically meaningful events in which the element may be involved. For example, a contextualized semantic element may be one or more shoes of a particular basketball player (element) who is in the process of making a dunk (context). In another example, a contextualized semantic element may be an unoccupied region of a court proximate to a portion of the court (element) in which a basket has recently been made (context). 
The various aspects of systems and methods described herein may be integrated into an enhanced augmentation system. For example, the enhanced augmentation system may use tracking (e.g., person tracking, object tracking), classifications, (e.g., object classification, event classification), a video augmentation pipeline, augmentation aspects (e.g., client-side augmentation), interactive user apps, and/or any other aspects described herein. In various embodiments, one or more semantic elements may be made available in a dynamic marketplace for advertising and/or e-commerce. A particular semantic element may be associated with one or more advertising augmentations and/or e-commerce links based on a particular context associated with the particular element. For example, the shoes of a basketball player may be an element that has a context of being worn by a player who has possession of the ball in a basketball game. In this example, one or more of the shoes of the player who has possession of the ball in the basketball game may be visually augmented (e.g., highlighted, augmented with superimposed graphics, etc.) to indicate, for example, that a user may purchase and/or obtain further information about the same type of shoes by tapping or clicking on the visually augmented shoes. Once the player loses possession of the ball, the augmentation may be removed or altered in response to the change of context associated with the element (e.g., the shoes are no longer associated with the context of a player in possession of the ball). In various embodiments, certain elements associated with particular contexts may have more value than other elements in different contexts. Accordingly, the system may demand a higher price for certain elements in specific contexts than it does for other combinations of elements and contexts. For example, the element of the shoes of a player in the context of that player having possession of the ball may have a higher advertising value (and, therefore, a higher asking price) than the element of the shoes of a player in the context of that player sitting on the bench. In various embodiments, multiple elements may be augmented in any particular video frame or multiple video frames, and each such element may have more than one augmentation applied. Each particular element may also have more than one context associated with it. To continue with the example of shoes of a basketball player, these shoes may have, at the same time, the context of being worn by the top scorer in the game (or league, tournament, etc.) and the context of being worn by the player with possession of the ball. Each particular combination of element and context may have its own respective advertising value and associated augmentation. Alternatively, or in addition, the system may calculate a value for a particular element based on the various contexts associated with that element. For example, the system may calculate a higher value for the element of the shoes worn by a particular player in the contexts of that player having possession of the ball and being the top scorer in the game than for the element of the shoes worn by another player in the contexts of that other player having possession of the ball but not being the top scorer in the game. The system may use any combination of contexts and/or other criteria to calculate a value of one or more augmentations available for any particular element. 
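As a purely illustrative sketch of the valuation idea described above, an element's asking price might be computed by combining a base value for the element with a multiplier for each context currently associated with it; the element names, contexts, weights, and function name below are hypothetical.

# Hypothetical base values and context multipliers used to price an augmentation slot.
ELEMENT_BASE_VALUE = {"shoes": 100.0, "jersey": 80.0, "court_region": 40.0}
CONTEXT_MULTIPLIER = {
    "worn_by_ball_handler": 2.5,
    "worn_by_top_scorer": 1.8,
    "worn_by_benched_player": 0.4,
}

def augmentation_value(element, contexts):
    """Combine the element's base value with every context currently associated with it."""
    value = ELEMENT_BASE_VALUE.get(element, 0.0)
    for context in contexts:
        value *= CONTEXT_MULTIPLIER.get(context, 1.0)
    return value

# Shoes worn by the top scorer who also has possession are priced higher than a benched player's shoes.
print(augmentation_value("shoes", ["worn_by_ball_handler", "worn_by_top_scorer"]))  # 450.0
print(augmentation_value("shoes", ["worn_by_benched_player"]))                      # 40.0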
The system may insert advertising elements as augmentations into the environment of a video (e.g., the three-dimensional environment of a video). In various embodiments, the system may dynamically augment one or more video frames with one or more such advertising elements in association with the movement of one or more real-world elements in the video. The system may use the semantic context of one or more various real-world elements in each video frame to determine, at least in part, the selection, appearance, and/or attributes of any advertising and/or e-commerce augmentations that may be applied to the video frame. For example, a three-dimensional model of an advertising element (e.g., an animated character, a placard, etc.) may be inserted in a video frame at an area of a soccer field that is unoccupied by players. This advertising element may be animated to react to player movement (e.g., animated to move away in response to players approaching its location, etc.). The system may also, or instead, use the semantic context of one or more various real-world elements in one or more video frames to calculate a value of one or more augmentations that may be associated with each such one or more real-world elements. In various embodiments, user context may be used to determine, at least in part, a choice and appearance of one or more advertising augmentations and/or other augmentations. Such user context may be associated with a user viewing the video content, such as a particular user logged into a video streaming service or into a particular device presenting video content to the user. Such user context may include a user's profile, a user's past interaction history, a user's social media data (e.g., online friends, social media postings, social media interactions), a user's online activity (e.g., frequently visited websites, subscribed streaming video services, etc.), a user's shopping data (e.g., frequently purchased items, frequently visited merchants (online and/or real-life), etc.), etc. For example, the system may determine, based on user context, that a particular user is a fan of a particular sports team. When presenting an advertising augmentation to be included in the video content being presented to that particular user for the sale of sports apparel (jersey, hat, shoes, T-shirt, etc.), the system may generate the augmentation to represent such apparel with logos, colors, etc. associated with that particular sports team. In particular embodiments, an interactive virtual experience for a particular user may be generated by integrating avatars of other users and/or friends of the particular user as augmentations to the three-dimensional environment of the video content (e.g., into one or more video frames of the video content) presented to the particular user (for example, in the audience of a sporting event). Such augmentations may be in addition to, or instead of, advertising-based augmentations. For example, the system may determine, using user context information, one or more friends of a particular user that have an account on the same streaming service that is providing video content to the particular user. The system may obtain or otherwise generate avatars for one or more of such friends and augment the video content presented to the user with such avatars.
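As a minimal, purely hypothetical sketch of the user-context-driven selection just described, an advertising augmentation might be chosen and branded as follows; the context fields and creative identifiers are assumptions and not part of any particular embodiment.

# Hypothetical creative catalog keyed by (product category, team affinity).
AD_CREATIVES = {
    ("jersey", "LA Clippers"): "creatives/jersey_clippers.png",
    ("jersey", None): "creatives/jersey_generic.png",
}

def choose_ad_augmentation(user_context, product_category="jersey"):
    """Pick an advertising augmentation branded for the user's favorite team, if known."""
    team = user_context.get("favorite_team")
    return (AD_CREATIVES.get((product_category, team))
            or AD_CREATIVES[(product_category, None)])

# A user known to be a Clippers fan sees team-branded apparel in the augmentation.
print(choose_ad_augmentation({"favorite_team": "LA Clippers"}))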
In particular embodiments, the system may use user context to generate an interactive virtual experience for a particular user by integrating advertising and/or e-commerce augmentations associated with online shopping sites frequented by the particular user into the three-dimensional environment of the video content presented to the particular user (for example, on the court of a sporting arena). The placement and/or appearance of such avatars and/or augmentations may be based, at least in part, on the particular user's context (online friends, other users known to the particular user, online interactions, online purchases, shopping memberships, viewing history, etc.). In particular embodiments, the system may use user context and/or other information to enable users to interact with one another via chat and/or augmentations chosen for insertion by the users and/or determined based on one or more of such users' contexts. For example, the system may generate a communications interface that facilitates communications between two or more users (e.g., augmented video content that represents two or more users chatting with one another via their avatars) that the system has augmented into the video content presented to each such user. In various embodiments, semantic elements and their associated contexts may provide a basis for improving the efficiency and power of a user interface. The system may use such elements and contexts to generate user-customized augmented content. In particular embodiments, the system may determine the means of presentation of augmentations and the order in which such augmentations are presented based on semantic elements and their associated semantic contexts. For example, when a particular user taps or clicks on a semantic element in a video frame, the system may use the element's attributes and context to determine one or more applicable augmentations and/or other content to present to the particular user. The system may order applicable augmentations and/or other content based on the relevance of one or more elements to the user and/or the situation. For example, a content editor clicking on a player who has just scored a goal may be presented with possible augmentations providing different statistics, such as ball speed, distance to goal, number of defenders, etc. Similarly, the system may present such options to a typical user so that the user can select the desired augmentation. The system may order augmentations based on the element and/or user context or user preferences. For example, the system may serially present various augmentations in a particular order based on a user configuration. In another example, the system may serially present various augmentations in a particular order based on the user's online shopping activity (e.g., first presenting advertising for the user's most frequently visited online merchant, and then, upon detecting expiration of a time period, presenting advertising for the user's next most frequently visited online merchant, and so forth). In this way, the system may serially present multiple augmentations to a particular user using a single particular element over a time period. The system may also, or instead, provide an editing function that allows the user to determine or influence the ordering and/or relevance of one or more augmentations that may be presented to the user or consumer of the content. For example, a particular user may indicate to the system that the particular user is interested in cars. 
Based on this information, the system may prioritize advertising for car retailers in determining advertising augmentations presented to that particular user. Any ordering and/or other determinations of the manner and means of augmenting content presented to a user may be preconfigured by the user and/or determined using machine learning techniques to analyze past user actions. In particular embodiments, one or more taps or clicks on augmented content may generate an interface that allows the user to cycle through different sets of options. When a user selects multiple semantic elements, the system may generate and present one or more options applicable to the combination of the selected elements. Although various embodiments herein will be described with respect to one or more sporting events (e.g., a soccer game, a basketball game, a tennis match, a football game, a cricket match, a volleyball game, etc.), it should be understood that embodiments of the system described herein may be implemented in the context of any other suitable system that facilitates user interaction during the presentation of any type of event. This may include, for example: (1) one or more e-sports events (e.g., one or more electronic sporting events); (2) one or more televised debates; (3) one or more table games (e.g., one or more poker tournaments); (4) one or more video games of any type; and/or (5) any other suitable event for which the system may facilitate user interaction and present augmentations in conjunction with video of the event. Particular embodiments of an interactive video content presentation and augmentation system are described more fully below and may be integrated into any other aspects set forth herein. Exemplary Enhanced Augmentation System Architecture FIG.75is a block diagram of an Enhanced Augmentation System7500according to particular embodiments. In various embodiments, the Enhanced Augmentation System7500may be configured to: (1) determine augmentation data for augmentations that may be applied to various elements presented in video content (e.g., a video presentation of a substantially live (e.g., live) sporting or other event) based on element contexts; (2) determine augmentation data for augmentations that may be applied to various elements presented in video content based on user contexts; (3) determine augmentation ordering and preferences based on user contexts and/or user preferences; and/or (4) enable a user to select one or more elements and/or augmentations presented in one or more frames of such video content. As may be understood fromFIG.75, the Enhanced Augmentation System7500may include one or more computer networks7515, One or More Mobile Computing Devices7510(e.g., tablet computer, smartphone, etc.), One or More Third Party Servers7520, One or More Enhanced Augmentation Servers7530, One or More Databases7540or other data structures, One or More Remote Computing Devices7550(e.g., a desktop computer, laptop computer, tablet computer, smart television, smartphone, etc.), and/or One or More Spatiotemporal Event Analysis Servers7560. In particular embodiments, the one or more computer networks7515facilitate communication between (e.g., and/or among) the One or More Mobile Computing Devices7510, One or More Third Party Servers7520, One or More Enhanced Augmentation Servers7530, One or More Databases7540, One or More Remote Computing Devices7550, and/or One or More Spatiotemporal Event Analysis Servers7560. 
Although in the embodiment shown inFIG.75, the One or More Mobile Computing Devices7510, One or More Third Party Servers7520, One or More Enhanced Augmentation Servers7530, One or More Databases7540, One or More Remote Computing Devices7550, and/or One or More Spatiotemporal Event Analysis Servers7560are depicted as separate servers and computing devices, it should be understood that in other embodiments, one or more of these servers and/or computing devices may comprise a single server, a plurality of servers, one or more cloud-based servers, or any other suitable configuration. The One or More Computer Networks7515may include any of a variety of types of wired or wireless computer networks such as the Internet, a private intranet, a public switched telephone network (PSTN), or any other type of network. The communication link between the One or More Mobile Computing Devices7510and the One or More Enhanced Augmentation Servers7530may be, for example, implemented via a Local Area Network (LAN) or via the Internet. In other embodiments, the One or More Databases7540may be stored either fully or partially on any suitable server or combination of servers described herein. In various other embodiments, an Enhanced Augmentation System7500may utilize one or more suitable cloud computing techniques in order to execute overlay software, underlying software, store and access one or more pieces of data, etc. The Enhanced Augmentation System7500may, for example, be configured to perform one or more processing steps on one or more remote servers (e.g., the One or More Enhanced Augmentation Servers7530and/or One or More Spatiotemporal Event Analysis Servers7560) prior to transmitting and displaying particular data on one or more interfaces on the One or More Mobile Computing Devices7510as described herein. For example, the One or More Computer Networks7515may facilitate communication between the One or More Enhanced Augmentation Servers7530and the One or More Spatiotemporal Event Analysis Servers7560in order to transmit spatiotemporal event data for a sporting or other event (e.g., during the event in substantially real time) to the One or More Enhanced Augmentation Servers7530, for example, in order to determine augmentation data (e.g., at the One or More Enhanced Augmentation Servers7530) for an element based on element context and/or user context. The system may then, for example, transmit any suitable data from the One or More Enhanced Augmentation Servers7530, via the One or More Computer Networks7515, to the One or More Mobile Computing Devices7510for display as part of a customized user interface for the user while the user is viewing the sporting or other event on the One or More Mobile Computing Devices7510. In various embodiments, a computer architecture such as computer architecture5400illustrated inFIG.54can be used within the Enhanced Augmentation System7500, for example, as a client computer (e.g., One or More Mobile Computing Devices7510shown inFIG.75), or as a server computer (e.g., One or More Enhanced Augmentation Servers7530, One or More Spatiotemporal Event Analysis Servers7560, etc.) shown inFIG.75. In particular embodiments, the computer5400may be suitable for use as a computer within the context of the Enhanced Augmentation System7500that is configured to receive input from a user, determine augmentation data for the user based on one or more contexts (e.g., user, element) and/or spatiotemporal event data associated with a particular sporting or other event, etc.
Any of the aspects of the computer5400as described herein may be integrated, in whole or in part, into the Enhanced Augmentation System7500. Exemplary Enhanced Augmentation System Platform Various embodiments of an Enhanced Augmentation System7500may be implemented in the context of any suitable system (e.g., as a software application running on One or More Mobile Computing Devices7510, as an overlay to an underlying software application running on the One or More Mobile Computing Devices7510, as a data processing system utilizing one or more servers to perform particular processing steps, or any other suitable combination thereof as discussed herein). For example, the Enhanced Augmentation System7500may be implemented to: (1) receive video presentation data for the particular sporting event; (2) determine one or more augmentation criteria (e.g., user context, user preferences, element context, etc.); (3) determine one or more augmentations (e.g., advertising augmentations, e-commerce links, user avatars, etc.) based on the augmentation criteria; (4) apply the one or more augmentations to the video presentation data (e.g., to one or more video frames) to generate interactive video content; and (5) display, transmit, or otherwise present the interactive video content including the one or more augmentations on a display device to a user (e.g., the One or More Mobile Computing Devices7510, One or More Remote Computing Devices7550, the video display5410, etc.). Various aspects of the system's functionality may be executed by certain system modules, including an Interactive Content Module7600. Although the modules described herein are presented as a series of steps, it should be understood in light of this disclosure that various embodiments of the Interactive Content Module7600and other modules described herein may perform the steps described below in an order other than that in which they are presented. In still other embodiments, the Interactive Content Module7600and other modules described herein may omit certain steps described below. In various other embodiments, the Interactive Content Module7600and other modules described herein may perform steps in addition to those described (e.g., one or more steps described with respect to one or more other modules, etc.). Interactive Content Module In particular embodiments, when executing an Interactive Content Module7600, the Enhanced Augmentation System7500(e.g., the One or More Enhanced Augmentation Servers7530) is configured to: (1) receive video content of an event (e.g., video content of a substantially live (e.g., live) sporting event or other type of event that may include one or more video frames); (2) identify semantic elements in the content (e.g., players, objects, environment, any parts thereof, etc.); (3) determine a semantic context for one or more of the elements; (4) determine a user context for one or more users of the interactive video content; (5) determine user customization data for one or more users of the interactive video content; (6) determine, based at least in part on one or more of the semantic contexts, the user contexts, and the user customization data, augmentations to be applied to the video content to generate the interactive video content; and/or (7) present the interactive video content to the one or more users.
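For illustrative purposes only, the seven operations listed above may be organized as in the following Python sketch of an Interactive Content Module skeleton; the class and method names are assumptions of this illustration, the bodies are placeholders, and, as noted above, steps may be reordered, omitted, or supplemented in various embodiments.

```python
# A schematic sketch of the seven operations listed above for the Interactive
# Content Module. Function bodies are placeholders; names are hypothetical.
from typing import Any, Dict, List


class InteractiveContentModule:
    """Mirrors steps (1)-(7); a real system may reorder, omit, or add steps."""

    def receive_video_content(self, source: Any) -> List[Any]:
        """(1) Receive video frames of a substantially live event."""
        return list(source)

    def identify_semantic_elements(self, frame: Any) -> List[Dict]:
        """(2) Identify players, objects, environment, or parts thereof."""
        return []  # e.g., detector output: [{"element_id": ..., "bbox": ...}]

    def determine_semantic_context(self, element: Dict) -> List[str]:
        """(3) Events the element is involved in (e.g., 'making_a_dunk')."""
        return []

    def determine_user_context(self, user_id: str) -> Dict:
        """(4) Profile, interaction history, social media, online activity."""
        return {}

    def determine_user_customization(self, user_id: str) -> Dict:
        """(5) User preferences / customization data."""
        return {}

    def determine_augmentations(self, elements, contexts, user_ctx, prefs) -> List[Dict]:
        """(6) Choose augmentations from element and user signals."""
        return []

    def present(self, frames, augmentations) -> None:
        """(7) Render or transmit the interactive video content."""
        pass
```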
Turning toFIG.76, in particular embodiments, when executing the Interactive Content Module7600, the system begins, at Step7610, by receiving video content (e.g., video content of an ongoing, substantially live (e.g., live) sporting event or other type of event). Such video content may include one or more video frames. At Step7620, the system identifies one or more elements present in the video content. The system may, for example, perform identification of elements for each frame of the video content (e.g., using any element identification means, including those set forth herein). The determined one or more elements may each be a real-life object and/or an augmented element in the video content determined based on, for example, spatial, temporal, and/or spatiotemporal indexing of one or more regions of pixels in one or more video frames. Such indexing may identify one or more semantically meaningful elements. As noted above, each such element may correspond to one or more persons, objects, environments, and/or parts or portions thereof. For example, a semantic element may be an unoccupied region of a court proximate to a portion of the court in which a basket has recently been made. Also as noted above, one or more semantic elements may be made available in a dynamic marketplace for advertising and/or e-commerce. In particular embodiments, one or more semantic elements may include or be associated with one or more advertising elements that may be inserted as augmentations into the three-dimensional environment of the video content. Systems and methods that may be used with components of this system, including person and object tracking, object and event classification, a video augmentation pipeline, and interactive user apps, are described in more detail herein. At Step7630, the system determines semantic contexts for the identified elements. A semantic context associated with an element may correspond to one or more semantically meaningful events in which the element may be involved. For example, a contextualized semantic element may be one or more shoes of a particular basketball player (element) in the process of making a dunk (semantic context). In another example, the shoes of a basketball player may be an element that has a semantic context of being worn by a player who has possession of the ball in a basketball game. At Step7640, the system determines a user context for a user that is the ultimate consumer of the interactive video content. As noted above, such user context may include a user's profile, a user's past interaction history (e.g., the user's previous clicks or taps on previously presented video frames), a user's social media data (e.g., online friends, social media postings, social media interactions), a user's online activity (e.g., frequently visited websites, etc.), etc. User context data may also, or instead, be determined based at least in part on using machine learning techniques to analyze past user actions. At Step7650, the system determines user preferences for the user that is the ultimate consumer of the interactive video content. The system may provide an editing function that allows the user to determine or influence the ordering and relevance of one or more augmentations that may be presented to the user. For example, a particular user may indicate to the system that the particular user is interested in a particular product.
Based on this information, the system may prioritize advertising for that product in determining advertising augmentations presented to that particular user. In particular embodiments, one or more taps or clicks on augmented content may generate an interface that allows the user to cycle through different sets of options. When a user selects multiple semantic elements, the system may generate and present one or more options applicable to the combination of the selected elements. The system may then use the options selected by the user to determine augmentations for future video frames (e.g., as user preferences and/or user context upon which such augmentation determinations may be based, at least in part). At Step7660, the system may determine the augmentations (e.g., advertising augmentations, e-commerce augmentations, user avatars, etc.) to integrate into the interactive video content to be presented to the user based, at least in part, on one or more of the semantic contexts, user contexts, and user preferences that may have been determined at Steps7630,7640, and/or7650. For example, the system may determine to visually augment (e.g., highlight, etc.) the shoes of a player who has possession of the ball in a basketball game to indicate that a user may purchase or obtain further information about the same shoes by tapping or clicking on the visually augmented shoes. In various embodiments, multiple elements may be augmented in any particular video frame or multiple video frames, and each such element may have more than one augmentation applied. In various embodiments, the system may be able to select particular augmentations and may not be able to select others. Further at Step7660, advertising elements may be inserted as augmentations dynamically on each video frame in association with the movement of real-world elements in the video. The system may use the semantic context of the various elements in each video frame to determine, at least in part, the selection, appearance, and/or attributes of any advertising and/or e-commerce augmentations that may be applied to a video frame. For example, a three-dimensional model of an advertising element may be inserted in a video frame at an area of a soccer field that is unoccupied by players. This model may change with each frame to generate an animated object when presented to the user over multiple, serialized frames. In various embodiments, the system may use user context to determine, at Step7660, a choice and appearance of one or more advertising augmentations and/or other augmentations. For example, the system may integrate one or more avatars of other users and/or friends of the particular user into the frames of the video content (for example, in the audience of a sporting event). The system may also, or instead, integrate advertising and/or e-commerce augmentations associated with online shopping sites frequented by a particular user into frames of the video content (for example, in the court of a sporting arena). In various embodiments, users of one or more of the disclosed systems may interact with one another via chat and/or augmentations chosen for insertion by the users or determined based on one or more of such users' contexts. At Step7660, the system may generate augmentations facilitating such chat and/or interaction based on user preferences and/or user context. 
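For illustrative purposes only, the Step7660determination described above might be sketched as follows: candidate augmentations are gated by the semantic context of their associated elements (e.g., a shoe highlight only while the wearer has possession) and then scored using user context and user preferences. The scoring scheme, field names, and example values are assumptions of this illustration rather than limitations of the described system.

```python
# An illustrative sketch of Step 7660 style logic: candidate augmentations are
# filtered by semantic context and ranked using user context and preferences.
from typing import Dict, List


def rank_augmentations(candidates: List[Dict],
                       semantic_context: Dict[str, List[str]],
                       user_context: Dict[str, float],
                       user_preferences: Dict[str, float]) -> List[Dict]:
    """Return candidate augmentations ordered by estimated relevance."""
    scored = []
    for cand in candidates:
        tags = semantic_context.get(cand["element_id"], [])
        # Only consider augmentations whose required context is present,
        # e.g., highlight shoes only while the wearer has possession.
        if cand.get("requires_tag") and cand["requires_tag"] not in tags:
            continue
        score = user_context.get(cand["category"], 0.0)
        score += user_preferences.get(cand["category"], 0.0)
        scored.append((score, cand))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [cand for _, cand in scored]


# Example: a shoe highlight outranks a generic banner for a user whose
# context indicates frequent athletic-shoe purchases.
candidates = [
    {"element_id": "player_23_shoes", "category": "shoes",
     "requires_tag": "has_possession", "overlay": "clickable_highlight"},
    {"element_id": "court_area_3", "category": "generic_ad",
     "requires_tag": "unoccupied", "overlay": "ad_banner"},
]
semantic_context = {"player_23_shoes": ["has_possession"],
                    "court_area_3": ["unoccupied"]}
print(rank_augmentations(candidates, semantic_context,
                         {"shoes": 0.8}, {"generic_ad": 0.1}))
```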
Further at Step7660, the system may determine the form of presentation of one or more augmentations and/or the order in which such augmentations may be presented, for example, based on semantic elements, advertising elements, semantic context, user context, and/or user preferences. For example, the system may use an element's attributes and context to select an augmentation from among one or more available applicable augmentations and/or other content to present to the user. The system may order applicable augmentations and/or other content based on the relevance of one or more elements to the user and/or the situation. For example, the system may determine, based on a particular user's context, that the particular user buys athletic shoes several times a year. Based on this information, the system may prioritize advertising for shoe retailers in determining advertising augmentations presented to that particular user. At Step7670, the interactive video content may be generated using the determined augmentations and the video content received at Step7610. The generated interactive video content may then be presented or otherwise provided to one or more users via any suitable means. Exemplary Enhanced Augmentation User Interfaces FIG.77depicts an exemplary screen display and graphical user interface (GUI)7700representing video content according to various embodiments of the system. In this particular example, the video data associated with the video content shown in GUI7700includes several semantic elements, shown circled in a dashed line for illustrative purposes. One such element is the element7710, which is a basketball hoop and net. Another such element is the element7720, which is a basketball player's shoes. Another such element is the element7730, which is an unoccupied section of a basketball court. Another such element is the element7740, which is a portion of the environment that includes spectators. GUI7700may include many other elements not described herein. These elements and their respective augmentations will be described for illustrative purposes. One skilled in the art will readily recognize that many other elements and augmentations of various types may be processed and generated by the disclosed embodiments. According to various embodiments, the system may identify the semantic context associated with each of the elements7710,7720,7730, and7740and any associated user context and/or user preferences as described above to determine one or more augmentations for one or more of these elements. For example, the system may determine that the portion of the court associated with the element7730has the context of being unoccupied, and therefore may generate an augmentation (that may, for example, also be based on user context and/or preferences) that includes an advertisement to be added to one or more video frames that include the element7730. Similarly, the system may determine that the hoop associated with the element7710has the context of being the most likely hoop to be used to score at this time (e.g., based on the element associated with the ball being closer to the hoop of the element7710than to the other hoop), and therefore may generate a clickable augmentation (that may, for example, also be based on user context and/or preferences) to be added to one or more video frames that include the element7710.
In another example, the system may determine that the shoes associated with the element7720have the context of being worn by the highest scoring player of this game, and therefore may generate an augmentation (that may, for example, also be based on user context and/or preferences) to be added to one or more video frames that include the element7720, where the augmentation includes clickable highlighting that links to a shopping website. In yet another example, the system may determine that the spectator area associated with the element7740has the context of being available for use with an avatar of another user, and therefore may generate an augmentation (that may, for example, also be based on user context and/or preferences) that includes an avatar and chat area to be added to one or more video frames that include the element7740. FIG.78depicts an exemplary screen display and GUI7800representing video content that includes the augmentations that the system generated for the video content represented inFIG.77. In this particular example, the system has generated the augmentation7810and inserted it into a video frame such that the hoop is shown as highlighted by the augmentation. Similarly, the system has generated the augmentation7820, which is a clickable highlighting of a player's shoes, and inserted that into the video frame such that the shoes are shown as highlighted by the augmentation and serve as a control that, when activated, directs the user's device to a web site that sells and/or provides more information on the shoes. As also shown in this figure, the system has generated the augmentation7830and inserted it into a video frame such that an advertisement is presented in an unoccupied portion of the court. Further as shown in this figure, the system has generated the augmentation7840and inserted it into a video frame such that an avatar of a user (e.g., someone known to the user, determined based on user context and/or user preference) is shown as a spectator. The system has also generated chat window7841that shows chat messages sent by the user associated with the avatar associated with the augmentation7840. The chat window7841may also be configured to allow the user to enter and send text messages and/or other communications to the user associated with the avatar associated with the augmentation7840. In particular embodiments, there may be several users represented by avatar augmentations (e.g., in a spectator section and/or one or more other sections of video). In such embodiments, one or more of such avatars may be configured with a chat window or other interface that allows a viewing user to communicate with each such configured avatar. CONCLUSION Although embodiments above are described in reference to various interactive game systems in the particular context of interactive game systems that augment a user's experience of viewing a live sporting event, it should be understood that various aspects of the system described above may be applicable to interactive game systems for non-sporting events as well as past or historical events (e.g., as opposed to substantially live events), or to other types of systems, in general. While this specification contains many specific embodiment details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions.
Certain features that are described in this specification in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment may also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination may, in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems may generally be integrated together in a single software product or packaged into multiple software products. Furthermore, in the course of the description above, reference is made to particular embodiments, various embodiments, some embodiments, other embodiments, etc. It should be understood in light of this disclosure that any feature of any embodiment described herein may be combined in any suitable manner with any other feature of any other embodiment described. For example, it should be understood that a feature described in a particular embodiment may be included in any other embodiment described herein. Similarly, any reference to various embodiments in the above description should be understood to encompass any embodiment described herein. Many modifications and other embodiments of the invention will come to mind to one skilled in the art to which this invention pertains having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the invention is not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for the purposes of limitation. | 427,487 |
11861907 | DESCRIPTION OF EMBODIMENTS Turning now toFIG.1, a segmentation30of a live game video feed into a plurality of player states32(32a-32d) and a plurality of projectile stages34(34a-34c) is shown. In general, the game (e.g., American football, rugby, baseball, ultimate FRISBEE) might involve players occasionally and/or frequently holding a projectile (e.g., ball, flying disc, etc.) in a manner that occludes the projectile (e.g., hides the projectile from view). In the illustrated example, a first player state32a("State-1") involves a first player ("Player 1", e.g., American football center) placing the ball on the ground and hiking the ball (e.g., in a "shotgun" formation). As a result, the ball enters a first projectile stage34a("Stage-1") in which it is airborne at a relatively low height. In a second player state32b("State-2"), a second player ("Player 2", e.g., American football quarterback) may then catch the ball and pass (e.g., throw) the ball, which causes the ball to enter a second projectile stage34b("Stage-2") in which it is airborne at a relatively high height. In a third player state32c("State-3"), a third player ("Player 3", e.g., American football receiver) receives the ball and runs. Accordingly, the ball enters a third projectile stage34c("Stage-3") in which it is held by the third player. In a fourth player state32d("State-4"), the third player is tackled or runs out of bounds. Of particular note is that while the ball is in the first projectile stage34aand the second projectile stage34b, traditional optical detection techniques may readily detect the location of the ball and track the trajectory of the ball over several frames. While the ball is in the third projectile stage34c, however, optical detection and/or tracking of the ball may be more difficult due to occlusion (e.g., by the body parts of the ball-holding player/BHP and/or other players), the relatively small size of the ball and/or the speed of the game. As will be discussed in greater detail, the location of the ball-holding player may be used to infer the location of the ball and track the ball over frames during which the ball is occluded or otherwise not visible to the array of cameras. Thus, the third player may be automatically designated as the BHP so that tracking of the ball is achieved during the third projectile stage34c. Such an approach enables more accurate ball tracking, which in turn enhances the performance of the underlying system and renders an improved immersive experience. FIG.2Ashows a method36of operating a performance-enhanced computing system. The method36may be implemented as one or more modules in a set of logic instructions stored in a non-transitory machine- or computer-readable storage medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), firmware, flash memory, etc., in configurable logic such as, for example, programmable logic arrays (PLAs), field programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), in fixed-functionality hardware logic using circuit technology such as, for example, application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS) or transistor-transistor logic (TTL) technology, or any combination thereof.
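Before turning to the details of the method36, the segmentation ofFIG.1may be summarized, for illustrative purposes only, in the following Python sketch; the enumeration names and the per-stage reliability flags are assumptions of this illustration.

```python
# An illustrative encoding of the FIG. 1 segmentation: player states 32a-32d,
# projectile stages 34a-34c, and whether direct optical ball detection is
# typically reliable in each stage (reliable while airborne, difficult while
# the ball is held and possibly occluded). Names and flags are assumptions.
from enum import Enum


class PlayerState(Enum):
    STATE_1 = "center hikes the ball"
    STATE_2 = "quarterback catches and passes the ball"
    STATE_3 = "receiver catches the ball and runs"
    STATE_4 = "receiver is tackled or runs out of bounds"


class ProjectileStage(Enum):
    STAGE_1 = "airborne at a relatively low height"
    STAGE_2 = "airborne at a relatively high height"
    STAGE_3 = "held by a player (possibly occluded)"


# In Stage-3, the ball-holding player (BHP) location is used to infer the
# ball location instead of relying on direct optical detection.
DIRECT_DETECTION_RELIABLE = {
    ProjectileStage.STAGE_1: True,
    ProjectileStage.STAGE_2: True,
    ProjectileStage.STAGE_3: False,
}
```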
For example, computer program code to carry out operations shown in the method36may be written in any combination of one or more programming languages, including an object oriented programming language such as JAVA, SMALLTALK, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. Additionally, logic instructions might include assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, state-setting data, configuration data for integrated circuitry, state information that personalizes electronic circuitry and/or other structural components that are native to hardware (e.g., host processor, central processing unit/CPU, microcontroller, etc.). Illustrated processing block38provides for selecting a player (e.g., BHP) from a plurality of players based on an automated analysis of 2D video data associated with a plurality of cameras, wherein the selected player is nearest to a projectile depicted in the 2D video data. As will be discussed in greater detail, block38may also include estimating, via an artificial neural network, the location of the projectile and locations of a plurality of body parts in a bounding box associated with the location of the selected player. In an embodiment, block40tracks the location of the selected player over a subsequent plurality of frames, where the automated analysis is conducted over a buffered initial plurality of frames occurring before the subsequent plurality of frames. Thus, the buffered initial plurality of frames might correspond to an airborne stage such as, for example, the first projectile stage34a(FIG.1) or the second projectile stage34b(FIG.1), whereas the subsequent plurality of frames would correspond to a ball-held stage such as, for example, the third projectile stage34c(FIG.1). In one example, the location of the projectile is estimated (e.g., fused with the location of the BHP) at block42based on the location of the selected player over the subsequent plurality of frames. The illustrated method36therefore enhances performance through more accurate projectile tracking. FIG.2Bshows a method44of detecting a ball-holding player in 2D video data. The method44may generally be incorporated into block38(FIG.2), already discussed. More particularly, the method44may be implemented as one or more modules in a set of logic instructions stored in a non-transitory machine- or computer-readable storage medium such as RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as, for example, PLAs, FPGAs, CPLDs, in fixed-functionality hardware logic using circuit technology such as, for example, ASIC, CMOS or TTL technology, or any combination thereof. Illustrated processing block45provides for determining whether the distance between the projectile and one or more of the plurality of players is less than a distance threshold. In an embodiment, block45includes comparing the 3D position of the projectile to the 3D position of one or more players for one or more frames corresponding to an airborne stage such as, for example, the first projectile stage34a(FIG.1) or the second projectile stage34b(FIG.1), already discussed. The distance threshold may generally be relatively low (e.g., on the order of a meter or two). Thus, the distance between the projectile and the player(s) being less than the distance threshold might be indicative of the ball being within reach of the player(s). 
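For illustrative purposes only, the block45check just described might be sketched as follows; the threshold value and the function names are assumptions of this illustration (a threshold on the order of a meter or two is suggested above).

```python
# A minimal sketch of a block 45 style check: compare the 3D ball position
# with each tracked player's 3D position and report whether any player is
# within the distance threshold. Numeric values are illustrative only.
import math
from typing import Dict, Optional, Tuple

Point3D = Tuple[float, float, float]

DISTANCE_THRESHOLD_M = 2.0  # on the order of a meter or two (illustrative)


def distance(a: Point3D, b: Point3D) -> float:
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))


def ball_within_reach(ball_3d: Point3D,
                      players_3d: Dict[str, Point3D]) -> Optional[str]:
    """Return the ID of the nearest player if the ball is within the distance
    threshold of any player, otherwise None."""
    nearest_id, nearest_dist = None, float("inf")
    for player_id, pos in players_3d.items():
        d = distance(ball_3d, pos)
        if d < nearest_dist:
            nearest_id, nearest_dist = player_id, d
    return nearest_id if nearest_dist < DISTANCE_THRESHOLD_M else None
```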
If the distance between the projectile and the players is less than the distance threshold, illustrated block47conducts the automated analysis of the 2D video data in response to that condition being satisfied and the method44terminates. In one example, block47includes determining which player is closest to the projectile by comparing the distance from the projectile to a bounding box around each detected player. If it is determined at block45that the distance between the projectile and the player(s) is not less than the distance threshold, illustrated block46provides for determining whether the height of the projectile is less than a height threshold. In an embodiment, block46includes performing object detection on one or more frames corresponding to an airborne stage such as, for example, the first projectile stage34a(FIG.1) or the second projectile stage34b(FIG.1), already discussed. The height threshold may generally be greater than an estimated maximum leaping ability across all players. Thus, the projectile height being less than the height threshold might be indicative of the ball being on a downward trajectory (e.g., after being airborne due to a pass). If the projectile height is less than the height threshold, illustrated block48conducts the automated analysis of the 2D video data in response to that condition being satisfied and the method44terminates. In one example, block48includes determining which player is closest to the projectile by comparing the distance from the projectile to a bounding box around each detected player. The method therefore further enhances performance by triggering the BHP detection in response to different conditions. FIG.3shows a more detailed method52of operating a performance-enhanced computing system. The method52may be implemented as one or more modules in a set of logic instructions stored in a non-transitory machine- or computer-readable storage medium such as RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as, for example, PLAs, FPGAs, CPLDs, in fixed-functionality hardware logic using circuit technology such as, for example, ASIC, CMOS or TTL technology, or any combination thereof. Illustrated processing block54provides for conducting multi-camera ball detection and tracking. In an embodiment, YOLO (You Only Look Once) techniques are used to find the ball position directly in single camera views (e.g., in Stage-1 and Stage-2 ofFIG.1). YOLO typically has a good tradeoff between accuracy and time. Based on video data from multiple cameras, the unique 3D ball location/position is determined in each frame. Additionally, block56conducts multi-camera player detection and tracking. For example, block56may involve detecting each player in each camera view, and then applying a deep-sort algorithm to track the players. After obtaining bounding boxes such as, for example, a bounding box64(FIG.4), for the players in the different camera views, an association is made at block58between all of the bounding boxes across the multiple cameras. The bounding boxes from different camera views that correspond to the same player are associated with one another, as will be discussed in greater detail. In an embodiment, block58associates a temporal tracking identifier (ID) across cameras, and adds a player bounding box and player 3D location to the video data from each camera. A determination is made at illustrated block60as to whether the 3D location of the ball is valid. 
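For illustrative purposes only, the bookkeeping implied by blocks54-60might be sketched as follows: per-camera player detections share a temporal tracking ID after cross-camera association, and the 3D ball location is treated as valid only when it is supported by a sufficient number of camera views. The data layout and the validity rule are assumptions of this illustration, not the described method.

```python
# A simplified sketch of the multi-camera bookkeeping described above.
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple


@dataclass
class PlayerDetection:
    camera_id: int
    track_id: str                    # temporal tracking ID shared across cameras
    bbox: Tuple[int, int, int, int]  # (x, y, w, h) in that camera's view


@dataclass
class FrameObservations:
    frame_index: int
    player_detections: List[PlayerDetection]
    ball_3d: Optional[Tuple[float, float, float]]
    ball_view_count: int             # number of cameras that detected the ball


def group_by_track(detections: List[PlayerDetection]) -> Dict[str, List[PlayerDetection]]:
    """Associate bounding boxes from different cameras that share a track ID."""
    grouped: Dict[str, List[PlayerDetection]] = {}
    for det in detections:
        grouped.setdefault(det.track_id, []).append(det)
    return grouped


def ball_location_is_valid(obs: FrameObservations, min_views: int = 2) -> bool:
    """Block 60 style check: require a reconstructed 3D position supported by
    at least `min_views` camera views (the threshold is illustrative)."""
    return obs.ball_3d is not None and obs.ball_view_count >= min_views
```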
If so, the 3D location of the ball and the player associations are used in block62to detect the BHP. Thus, when the ball descends from the sky, players might scramble for the ball, which may be invisible or heavily occluded. Block62identifies the BHP before the ball is held by a player. As already noted, once the height of the ball is less than a predetermined height threshold, candidates who are close to the ball may be selected. Block62then finds the most likely BHP in the following frames. In one example, the 3D position of the ball and the 3D position of the players are used to find the nearest player as a BHP candidate. A sliding window may be used as a buffer to determine whether the candidates are the same player. If the candidate is validated to be the same player in the buffer, that player is the BHP. The BHP tracking is then activated and initialized with the bounding box of the BHP in each camera. Thus, in the American football context, regardless of whether the center snaps the ball to the Quarterback (QB) or the QB passes the ball to a wide receiver (WR), the BHP is automatically identified. For example, inFIG.5, when the ball is airborne as in frames66,68and70, ball detection is precise. Additionally, a player (WR) is running toward the ball. When the distance between the ball and the player is less than, for example, 2.5 m, the nearest player is chosen as a BHP candidate. Because the ball is typically touched/snatched by multiple players, a buffer may be used to find the true BHP in several frames. In frame72, the player is selected as the BHP candidate and labeled/annotated with a player bounding box. If the same player is selected in subsequent frames, that player is validated as the BHP with high probability. Block62(FIG.3) will output the player bounding box and activate BHP tracking with the player bounding box in all camera views. In frames74and76, the BHP continues to be tracked and is labeled with a BHP bounding box. In another example, the quarterback might catch the ball from the center at the end of Stage-1 and run forward with the ball. In such a case, the quarterback is identified and tracked as the BHP. Returning now toFIG.3, a determination may be made at block78as to whether BHP tracking is to be re-initialized. Block78may involve detecting that a new player is the BHP or that the BHP was lost. If so, BHP tracking is restarted at block80and BHP tracking is conducted at block82to follow the BHP in each camera view. The output of the BHP tracking is a BHP bounding box in each camera view. In an embodiment, the foot centerline of the BHP is reconstructed in 3D based on the output of all cameras. Additionally, the 3D position of the BHP foot centerline may be used to estimate the 3D position of the ball (e.g., 1 meter higher than the intersection point between the foot centerline and the ground plane). If it is determined at block78that re-initialization of the BHP tracking may be bypassed, a determination is made at block84as to whether BHP tracking has been started. If so, the method52proceeds to block82to conduct the BHP tracking. If it is determined at block84that BHP tracking has not been started, the method52proceeds to block86, which estimates the ball location. The ball location may be estimated based on the BHP tracking output and/or 3D ball location (if available). Block88fuses the ball location based on the estimated ball location and the multi-camera ball detection and tracking data from block54. Thus, two moving trajectories are obtained.
One moving trajectory is the direct trajectory of the ball across all frames and the other moving trajectory is the inferred trajectory from the trajectory of the BHP. To determine the unique final trajectory of the ball, block88combines them together. Illustrated block90selects the next frame and the method52repeats. Turning now toFIG.6, a frame-by-frame analysis chart90is shown in which the first row is the ball location estimated from BHP detection/tracking, the second row is the direct ball detection/tracking, and the third row is the fusion result. The period P2_S to P1_S corresponds to Stage-1, the period P1_S to P1_E corresponds to Stage-2, and the period P1_E to P2_E corresponds to Stage-3. The "correct" points indicate that the distance between the 3D position of the ball and the ground truth is less than or equal to a threshold (e.g., 100 cm), the "incorrect" points indicate that the distance between the 3D position of the ball and the ground truth is greater than the threshold, and the star points indicate a false detection. The chart90demonstrates that ball detection and tracking is very good in Stage-2 because the ball is airborne. In Stage-3, however, the ball is held by the player. Accordingly, the ball detection and tracking result is poor during Stage-3. BHP tracking, however, enhances the fusion result. Overall, the fusion (with BHP) result is better than pure ball tracking. FIG.7Ashows a multi-camera player association result92in which foot centerlines (V1-V4) are associated with a player who is depicted in a plurality of camera views. In the illustrated example, a homography matrix is used to calculate corresponding principal lines (P1-P4) in the ground plane. An intersection point (C) represents the position of the same player from different camera views. Turning now toFIG.7B, a neural network training image94is shown in which a projectile (e.g., ball) and a plurality of body parts of a player are annotated. In general, a "pose-ball" model is used to locate the ball-holding player. Given a bounding box of a detected player, the 2D human pose estimation is responsible for predicting the joint locations, such as head, nose, left/right shoulder, left/right elbow, left/right wrist, left/right hip, left/right knee and left/right ankle, etc. The proposed pose-ball model is based on the trained 2D human pose estimation model, which is fine-tuned by annotation data that is modified for a new model in which the joints of the player and the ball in hand are labeled. Therefore, the fine-tuned pose-ball model can output the ball joint when it is in hand. There are many human pose estimation techniques such as, for example, Cascaded Pyramid Network (CPN), Stacked Hourglass Network and/or AlphaPose, that may be leveraged to conduct the estimation. In one example, CPN is adopted as the base model. The pose-ball model keeps almost the same deep neural network structure as CPN, where the weight of the pre-trained CPN model (e.g., trained on a Common Objects in Context/COCO dataset) is adopted to initialize the pose-ball model. The weight of the last output layer of the CPN model is discarded, however, because the original CPN model only outputs seventeen keypoints including neck, left/right eye, left/right ear, left/right shoulder, left/right elbow, left/right wrist, left/right hip, left/right knee, left/right ankle, which are well-defined by the COCO dataset. The pose-ball model outputs eighteen points including all the seventeen COCO defined keypoints plus the ball.
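For illustrative purposes only, the pose-ball output just described may be handled as in the following sketch, which enumerates the eighteen keypoints (the seventeen keypoints listed above plus the ball) and checks the ball keypoint's confidence against a pre-defined threshold; the keypoint ordering, the threshold value, and the function name are assumptions of this illustration.

```python
# A small sketch tied to the pose-ball output described above: the model emits
# eighteen keypoints, and a per-keypoint confidence can be checked to decide
# whether a player bounding box contains the ball.
from typing import Dict, Optional, Tuple

# Seventeen keypoints as enumerated in the description, plus "ball" as the
# eighteenth output of the fine-tuned pose-ball model.
POSE_BALL_KEYPOINTS = [
    "neck", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
    "ball",
]

BALL_CONFIDENCE_THRESHOLD = 0.5  # illustrative pre-defined threshold


def ball_in_bounding_box(prediction: Dict[str, Tuple[float, float, float]]
                         ) -> Optional[Tuple[float, float]]:
    """Given one bounding box's prediction mapping keypoint name ->
    (x, y, confidence), return the 2D ball position if it is detected with
    sufficient confidence, otherwise None. Invisible keypoints may be encoded
    as (-1, -1) with low confidence, mirroring the annotation convention."""
    x, y, conf = prediction.get("ball", (-1.0, -1.0, 0.0))
    return (x, y) if conf > BALL_CONFIDENCE_THRESHOLD else None
```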
Therefore, the last output layer of the pose-ball model will be set to output eighteen keypoints and initialized by the random weight and further fine-tuned by the annotated data. The annotated image data for fine-tuning the pose-ball model is collected beforehand. Additionally, the same human keypoint ordering and annotation specification as the COCO dataset are maintained and the ball point is added as the eighteenth keypoint. As already noted, the image94is an annotated training sample for the pose-ball model. In an embodiment, if some of the keypoints are invisible then their corresponding 2D positions will be set as (−1, −1). Therefore, every training sample has eighteen keypoint annotations. The remaining fine-tuned procedure may be the same as training a CPN model on the COCO dataset from scratch. After several epochs of fine tuning, the pose-ball model is ready for use. The pose-ball model may be used to estimate the ball position on each player bounding box. With a bounding box input, the pose-ball model predicts eighteen human joints including the ball and corresponding confidence levels for each point. If the confidence of the ball is higher than the pre-defined threshold, then the given bounding box is assumed to contain the ball with high confidence. By applying the pose-ball model on each detected bounding box, the ball in hand position may be detected. There may still be some false alarms, however, that are addressed by using multiple view geometry constraints. That is, if detected ball positions from different camera views are true positives, the ball positions may be successfully used, along with the projection matrix of the corresponding camera (e.g., calibrated before game starts), to initially build the 3D location of the ball. Then, the 3D location may be re-projected to the original camera views to obtain the re-projected ball 2D position. Additionally, the re-projection error may be estimated by calculating the Euclidian distance between the detected 2D position of the ball and the re-projected position on each camera view. If the re-projection error is within a certain threshold, then the detection is highly accurate. By checking all pairs of detected ball positions from different camera views, false alarms may be eliminated while retaining true positives. As a result, the player with the most bounding boxes containing true positives may be considered as a candidate for the BHP for the given frame. FIG.7Cshows a ball holding player detection result96obtained using the pose-ball model technology described herein. Experiments on large datasets indicate that the technology is effective in identifying the player with ball in hand. FIG.8shows a performance-enhanced computing system150that may generally be part of an electronic device/system having computing functionality (e.g., personal digital assistant/PDA, notebook computer, tablet computer, convertible tablet, server, data center, cloud computing infrastructure), communications functionality (e.g., smart phone), imaging functionality (e.g., camera, camcorder), media playing functionality (e.g., smart television/TV), wearable functionality (e.g., watch, eyewear, headwear, footwear, jewelry), vehicular functionality (e.g., car, truck, motorcycle), robotic functionality (e.g., autonomous robot), etc., or any combination thereof. 
In the illustrated example, the system150includes a graphics processor152(e.g., graphics processing unit/GPU) and a host processor154(e.g., central processing unit/CPU) having one or more processor cores156and an integrated memory controller (IMC)158that is coupled to a system memory160. Additionally, the illustrated system150includes an input output (IO) module162implemented together with the host processor154, and the graphics processor152on an SoC164(e.g., semiconductor die). In one example, the IO module162communicates with a display166(e.g., touch screen, liquid crystal display/LCD, light emitting diode/LED display), a network controller168(e.g., wired and/or wireless), mass storage170(e.g., hard disk drive/HDD, optical disk, solid state drive/SSD, flash memory), a camera array172, and an immersive video subsystem174. In one example, the network controller168receives 2D video data of a game. In another example, the camera array172generates the 2D video data of the game. In an embodiment, the system memory160and/or the mass storage170include a set of executable program instructions176, which when executed by the host processor154, the graphics processor152and/or the IO module162, cause the system150to perform one or more aspects of the method36(FIG.2A), the method44(FIG.2B), and/or the method52(FIG.3), already discussed. Thus, execution of the instructions176causes the system150to select a player from a plurality of players based on an automated analysis of the 2D video data, wherein the selected player is nearest to a projectile depicted in the 2D video data and track the location of the selected player over a subsequent plurality of frames in the 2D video data. Execution of the instructions176may further cause the system150to estimate the location of the projectile based on the location of the selected player over the subsequent plurality of frames. In an embodiment, the projectile is occluded from view in one or more of the subsequent plurality of frames. Additionally, the automated analysis may be conducted over a buffered initial plurality of frames occurring before the subsequent plurality of frames in the 2D video data. In one example, the automated analysis is conducted in response to the distance between the projectile and one or more of the plurality of players being less than a distance threshold. The automated analysis may also be conducted in response to a height of the projectile being less than a height threshold. Moreover, the automated analysis may include estimating, via an artificial neural network, the location of the projectile and locations of a plurality of body parts in a bounding box associated with the location of the selected player. The computing system150is therefore enhanced through more accurate projectile tracking. Moreover, concerns over altered projectile kinetics and/or costs may be eliminated by foregoing the use of RFID sensors mounted to the projectile. System Overview FIG.9is a block diagram of a processing system100, according to an embodiment. In various embodiments the system100includes one or more processors102and one or more graphics processors108, and may be a single processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors102or processor cores107. In one embodiment, the system100is a processing platform incorporated within a system-on-a-chip (SoC) integrated circuit for use in mobile, handheld, or embedded devices. 
In one embodiment the system100can include, or be incorporated within a server-based gaming platform, a game console, including a game and media console, a mobile gaming console, a handheld game console, or an online game console. In some embodiments the system100is a mobile phone, smart phone, tablet computing device or mobile Internet device. The processing system100can also include, couple with, or be integrated within a wearable device, such as a smart watch wearable device, smart eyewear device, augmented reality device, or virtual reality device. In some embodiments, the processing system100is a television or set top box device having one or more processors102and a graphical interface generated by one or more graphics processors108. In some embodiments, the one or more processors102each include one or more processor cores107to process instructions which, when executed, perform operations for system and user software. In some embodiments, each of the one or more processor cores107is configured to process a specific instruction set109. In some embodiments, instruction set109may facilitate Complex Instruction Set Computing (CISC), Reduced Instruction Set Computing (RISC), or computing via a Very Long Instruction Word (VLIW). Multiple processor cores107may each process a different instruction set109, which may include instructions to facilitate the emulation of other instruction sets. Processor core107may also include other processing devices, such a Digital Signal Processor (DSP). In some embodiments, the processor102includes cache memory104. Depending on the architecture, the processor102can have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory is shared among various components of the processor102. In some embodiments, the processor102also uses an external cache (e.g., a Level-3 (L3) cache or Last Level Cache (LLC)) (not shown), which may be shared among processor cores107using known cache coherency techniques. A register file106is additionally included in processor102which may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). Some registers may be general-purpose registers, while other registers may be specific to the design of the processor102. In some embodiments, one or more processor(s)102are coupled with one or more interface bus(es)110to transmit communication signals such as address, data, or control signals between processor102and other components in the system100. The interface bus110, in one embodiment, can be a processor bus, such as a version of the Direct Media Interface (DMI) bus. However, processor busses are not limited to the DMI bus, and may include one or more Peripheral Component Interconnect buses (e.g., PCI, PCI Express), memory busses, or other types of interface busses. In one embodiment the processor(s)102include an integrated memory controller116and a platform controller hub130. The memory controller116facilitates communication between a memory device and other components of the system100, while the platform controller hub (PCH)130provides connections to I/O devices via a local I/O bus. The memory device120can be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as process memory. 
In one embodiment the memory device120can operate as system memory for the system100, to store data122and instructions121for use when the one or more processors102executes an application or process. Memory controller116also couples with an optional external graphics processor112, which may communicate with the one or more graphics processors108in processors102to perform graphics and media operations. In some embodiments a display device111can connect to the processor(s)102. The display device111can be one or more of an internal display device, as in a mobile electronic device or a laptop device, or an external display device attached via a display interface (e.g., DisplayPort, etc.). In one embodiment the display device111can be a head mounted display (HMD) such as a stereoscopic display device for use in virtual reality (VR) applications or augmented reality (AR) applications. In some embodiments the platform controller hub130enables peripherals to connect to memory device120and processor102via a high-speed I/O bus. The I/O peripherals include, but are not limited to, an audio controller146, a network controller134, a firmware interface128, a wireless transceiver126, touch sensors125, a data storage device124(e.g., hard disk drive, flash memory, etc.). The data storage device124can connect via a storage interface (e.g., SATA) or via a peripheral bus, such as a Peripheral Component Interconnect bus (e.g., PCI, PCI Express). The touch sensors125can include touch screen sensors, pressure sensors, or fingerprint sensors. The wireless transceiver126can be a Wi-Fi transceiver, a Bluetooth transceiver, or a mobile network transceiver such as a 3G, 4G, or Long Term Evolution (LTE) transceiver. The firmware interface128enables communication with system firmware, and can be, for example, a unified extensible firmware interface (UEFI). The network controller134can enable a network connection to a wired network. In some embodiments, a high-performance network controller (not shown) couples with the interface bus110. The audio controller146, in one embodiment, is a multi-channel high definition audio controller. In one embodiment the system100includes an optional legacy I/O controller140for coupling legacy (e.g., Personal System2(PS/2)) devices to the system. The platform controller hub130can also connect to one or more Universal Serial Bus (USB) controllers142to connect input devices, such as keyboard and mouse143combinations, a camera144, or other USB input devices. It will be appreciated that the system100shown is exemplary and not limiting, as other types of data processing systems that are differently configured may also be used. For example, an instance of the memory controller116and platform controller hub130may be integrated into a discrete external graphics processor, such as the external graphics processor112. In one embodiment the platform controller hub130and/or memory controller116may be external to the one or more processor(s)102. For example, the system100can include an external memory controller116and platform controller hub130, which may be configured as a memory controller hub and peripheral controller hub within a system chipset that is in communication with the processor(s)102. FIG.10is a block diagram of an embodiment of a processor200having one or more processor cores202A-202N, an integrated memory controller214, and an integrated graphics processor208.
Those elements ofFIG.10having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. Processor200can include additional cores up to and including additional core202N represented by the dashed lined boxes. Each of processor cores202A-202N includes one or more internal cache units204A-204N. In some embodiments each processor core also has access to one or more shared cached units206. The internal cache units204A-204N and shared cache units206represent a cache memory hierarchy within the processor200. The cache memory hierarchy may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a Level 2 (L2), Level 3 (L3), Level 4 (L4), or other levels of cache, where the highest level of cache before external memory is classified as the LLC. In some embodiments, cache coherency logic maintains coherency between the various cache units206and204A-204N. In some embodiments, processor200may also include a set of one or more bus controller units216and a system agent core210. The one or more bus controller units216manage a set of peripheral buses, such as one or more PCI or PCI express busses. System agent core210provides management functionality for the various processor components. In some embodiments, system agent core210includes one or more integrated memory controllers214to manage access to various external memory devices (not shown). In some embodiments, one or more of the processor cores202A-202N include support for simultaneous multi-threading. In such embodiment, the system agent core210includes components for coordinating and operating cores202A-202N during multi-threaded processing. System agent core210may additionally include a power control unit (PCU), which includes logic and components to regulate the power state of processor cores202A-202N and graphics processor208. In some embodiments, processor200additionally includes graphics processor208to execute graphics processing operations. In some embodiments, the graphics processor208couples with the set of shared cache units206, and the system agent core210, including the one or more integrated memory controllers214. In some embodiments, the system agent core210also includes a display controller211to drive graphics processor output to one or more coupled displays. In some embodiments, display controller211may also be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated within the graphics processor208. In some embodiments, a ring based interconnect unit212is used to couple the internal components of the processor200. However, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques, including techniques well known in the art. In some embodiments, graphics processor208couples with the ring interconnect212via an I/O link213. The exemplary I/O link213represents at least one of multiple varieties of I/O interconnects, including an on package I/O interconnect which facilitates communication between various processor components and a high-performance embedded memory module218, such as an eDRAM module. In some embodiments, each of the processor cores202A-202N and graphics processor208use embedded memory modules218as a shared Last Level Cache. 
In some embodiments, processor cores202A-202N are homogenous cores executing the same instruction set architecture. In another embodiment, processor cores202A-202N are heterogeneous in terms of instruction set architecture (ISA), where one or more of processor cores202A-202N execute a first instruction set, while at least one of the other cores executes a subset of the first instruction set or a different instruction set. In one embodiment processor cores202A-202N are heterogeneous in terms of microarchitecture, where one or more cores having a relatively higher power consumption couple with one or more power cores having a lower power consumption. Additionally, processor200can be implemented on one or more chips or as an SoC integrated circuit having the illustrated components, in addition to other components. FIG.11is a block diagram of a graphics processor300, which may be a discrete graphics processing unit, or may be a graphics processor integrated with a plurality of processing cores. In some embodiments, the graphics processor communicates via a memory mapped I/O interface to registers on the graphics processor and with commands placed into the processor memory. In some embodiments, graphics processor300includes a memory interface314to access memory. Memory interface314can be an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory. In some embodiments, graphics processor300also includes a display controller302to drive display output data to a display device320. Display controller302includes hardware for one or more overlay planes for the display and composition of multiple layers of video or user interface elements. The display device320can be an internal or external display device. In one embodiment the display device320is a head mounted display device, such as a virtual reality (VR) display device or an augmented reality (AR) display device. In some embodiments, graphics processor300includes a video codec engine306to encode, decode, or transcode media to, from, or between one or more media encoding formats, including, but not limited to Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC, as well as the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1, and Joint Photographic Experts Group (JPEG) formats such as JPEG, and Motion JPEG (MJPEG) formats. In some embodiments, graphics processor300includes a block image transfer (BLIT) engine304to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, in one embodiment, 2D graphics operations are performed using one or more components of graphics processing engine (GPE)310. In some embodiments, GPE310is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations. In some embodiments, GPE310includes a 3D pipeline312for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.). The 3D pipeline312includes programmable and fixed function elements that perform various tasks within the element and/or spawn execution threads to a 3D/Media sub-system315. 
While 3D pipeline312can be used to perform media operations, an embodiment of GPE310also includes a media pipeline316that is specifically used to perform media operations, such as video post-processing and image enhancement. In some embodiments, media pipeline316includes fixed function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration in place of, or on behalf of video codec engine306. In some embodiments, media pipeline316additionally includes a thread spawning unit to spawn threads for execution on 3D/Media sub-system315. The spawned threads perform computations for the media operations on one or more graphics execution units included in 3D/Media sub-system315. In some embodiments, 3D/Media subsystem315includes logic for executing threads spawned by 3D pipeline312and media pipeline316. In one embodiment, the pipelines send thread execution requests to 3D/Media subsystem315, which includes thread dispatch logic for arbitrating and dispatching the various requests to available thread execution resources. The execution resources include an array of graphics execution units to process the 3D and media threads. In some embodiments, 3D/Media subsystem315includes one or more internal caches for thread instructions and data. In some embodiments, the subsystem also includes shared memory, including registers and addressable memory, to share data between threads and to store output data. Graphics Processing Engine FIG.12is a block diagram of a graphics processing engine410of a graphics processor in accordance with some embodiments. In one embodiment, the graphics processing engine (GPE)410is a version of the GPE310shown inFIG.11. Elements ofFIG.12having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. For example, the 3D pipeline312and media pipeline316ofFIG.11are illustrated. The media pipeline316is optional in some embodiments of the GPE410and may not be explicitly included within the GPE410. For example and in at least one embodiment, a separate media and/or image processor is coupled to the GPE410. In some embodiments, GPE410couples with or includes a command streamer403, which provides a command stream to the 3D pipeline312and/or media pipelines316. In some embodiments, command streamer403is coupled with memory, which can be system memory, or one or more of internal cache memory and shared cache memory. In some embodiments, command streamer403receives commands from the memory and sends the commands to 3D pipeline312and/or media pipeline316. The commands are directives fetched from a ring buffer, which stores commands for the 3D pipeline312and media pipeline316. In one embodiment, the ring buffer can additionally include batch command buffers storing batches of multiple commands. The commands for the 3D pipeline312can also include references to data stored in memory, such as but not limited to vertex and geometry data for the 3D pipeline312and/or image data and memory objects for the media pipeline316. The 3D pipeline312and media pipeline316process the commands and data by performing operations via logic within the respective pipelines or by dispatching one or more execution threads to a graphics core array414. 
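As an illustrative aside, the following C++ sketch mirrors the command-streaming behavior described above: directives are pulled from a ring buffer and routed to either the 3D pipeline or the media pipeline. The RingBuffer layout, the Command type, and the dispatch callbacks are hypothetical placeholders, not the actual command encoding of any graphics processor.

// Minimal sketch under the stated assumptions; all names are illustrative.
#include <cstdint>
#include <functional>
#include <vector>

enum class TargetPipeline : uint8_t { Pipeline3D, PipelineMedia };

struct Command {
    TargetPipeline target;
    std::vector<uint32_t> payload;   // e.g., references to vertex or image data
};

class CommandStreamer {
public:
    using Handler = std::function<void(const Command&)>;
    CommandStreamer(Handler d3d, Handler media)
        : dispatch_3d_(std::move(d3d)), dispatch_media_(std::move(media)) {}

    // Drain all pending commands between head and tail and route each one.
    void pump(std::vector<Command>& ring, std::size_t& head, std::size_t tail) {
        while (head != tail) {
            const Command& cmd = ring[head % ring.size()];
            if (cmd.target == TargetPipeline::Pipeline3D) dispatch_3d_(cmd);
            else dispatch_media_(cmd);
            ++head;   // advance the consumer position in the ring buffer
        }
    }
private:
    Handler dispatch_3d_, dispatch_media_;
};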
In one embodiment the graphics core array414includes one or more blocks of graphics cores (e.g., graphics core(s)415A, graphics core(s)415B), each block including one or more graphics cores. Each graphics core includes a set of graphics execution resources that includes general-purpose and graphics-specific execution logic to perform graphics and compute operations, as well as fixed function texture processing and/or machine learning and artificial intelligence acceleration logic. In various embodiments the 3D pipeline312includes fixed function and programmable logic to process one or more shader programs, such as vertex shaders, geometry shaders, pixel shaders, fragment shaders, compute shaders, or other shader programs, by processing the instructions and dispatching execution threads to the graphics core array414. The graphics core array414provides a unified block of execution resources for use in processing these shader programs. Multi-purpose execution logic (e.g., execution units) within the graphics core(s)415A-415B of the graphics core array414includes support for various 3D API shader languages and can execute multiple simultaneous execution threads associated with multiple shaders. In some embodiments the graphics core array414also includes execution logic to perform media functions, such as video and/or image processing. In one embodiment, the execution units additionally include general-purpose logic that is programmable to perform parallel general-purpose computational operations, in addition to graphics processing operations. The general-purpose logic can perform processing operations in parallel or in conjunction with general-purpose logic within the processor core(s)107ofFIG.9or core202A-202N as inFIG.10. Threads executing on the graphics core array414can output data to memory in a unified return buffer (URB)418. The URB418can store data for multiple threads. In some embodiments the URB418may be used to send data between different threads executing on the graphics core array414. In some embodiments the URB418may additionally be used for synchronization between threads on the graphics core array and fixed function logic within the shared function logic420. In some embodiments, graphics core array414is scalable, such that the array includes a variable number of graphics cores, each having a variable number of execution units based on the target power and performance level of GPE410. In one embodiment the execution resources are dynamically scalable, such that execution resources may be enabled or disabled as needed. The graphics core array414couples with shared function logic420that includes multiple resources that are shared between the graphics cores in the graphics core array. The shared functions within the shared function logic420are hardware logic units that provide specialized supplemental functionality to the graphics core array414. In various embodiments, shared function logic420includes but is not limited to sampler421, math422, and inter-thread communication (ITC)423logic. Additionally, some embodiments implement one or more cache(s)425within the shared function logic420. A shared function is implemented where the demand for a given specialized function is insufficient for inclusion within the graphics core array414. Instead, a single instantiation of that specialized function is implemented as a stand-alone entity in the shared function logic420and shared among the execution resources within the graphics core array414.
The precise set of functions that are shared between the graphics core array414and included within the graphics core array414varies across embodiments. In some embodiments, specific shared functions within the shared function logic420that are used extensively by the graphics core array414may be included within shared function logic416within the graphics core array414. In various embodiments, the shared function logic416within the graphics core array414can include some or all logic within the shared function logic420. In one embodiment, all logic elements within the shared function logic420may be duplicated within the shared function logic416of the graphics core array414. In one embodiment the shared function logic420is excluded in favor of the shared function logic416within the graphics core array414. FIG.13is a block diagram of hardware logic of a graphics processor core500, according to some embodiments described herein. Elements ofFIG.13having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. The illustrated graphics processor core500, in some embodiments, is included within the graphics core array414ofFIG.12. The graphics processor core500, sometimes referred to as a core slice, can be one or multiple graphics cores within a modular graphics processor. The graphics processor core500is exemplary of one graphics core slice, and a graphics processor as described herein may include multiple graphics core slices based on target power and performance envelopes. Each graphics processor core500can include a fixed function block530coupled with multiple sub-cores501A-501F, also referred to as sub-slices, that include modular blocks of general-purpose and fixed function logic. In some embodiments the fixed function block530includes a geometry/fixed function pipeline536that can be shared by all sub-cores in the graphics processor core500, for example, in lower performance and/or lower power graphics processor implementations. In various embodiments, the geometry/fixed function pipeline536includes a 3D fixed function pipeline (e.g., 3D pipeline312as inFIG.11andFIG.12), a video front-end unit, a thread spawner and thread dispatcher, and a unified return buffer manager, which manages unified return buffers, such as the unified return buffer418ofFIG.12. In one embodiment the fixed function block530also includes a graphics SoC interface537, a graphics microcontroller538, and a media pipeline539. The graphics SoC interface537provides an interface between the graphics processor core500and other processor cores within a system on a chip integrated circuit. The graphics microcontroller538is a programmable sub-processor that is configurable to manage various functions of the graphics processor core500, including thread dispatch, scheduling, and pre-emption. The media pipeline539(e.g., media pipeline316ofFIG.11andFIG.12) includes logic to facilitate the decoding, encoding, pre-processing, and/or post-processing of multimedia data, including image and video data. The media pipeline539implements media operations via requests to compute or sampling logic within the sub-cores501A-501F.
In one embodiment the SoC interface537enables the graphics processor core500to communicate with general-purpose application processor cores (e.g., CPUs) and/or other components within an SoC, including memory hierarchy elements such as a shared last level cache memory, the system RAM, and/or embedded on-chip or on-package DRAM. The SoC interface537can also enable communication with fixed function devices within the SoC, such as camera imaging pipelines, and enables the use of and/or implements global memory atomics that may be shared between the graphics processor core500and CPUs within the SoC. The SoC interface537can also implement power management controls for the graphics processor core500and enable an interface between a clock domain of the graphics core500and other clock domains within the SoC. In one embodiment the SoC interface537enables receipt of command buffers from a command streamer and global thread dispatcher that are configured to provide commands and instructions to each of one or more graphics cores within a graphics processor. The commands and instructions can be dispatched to the media pipeline539, when media operations are to be performed, or a geometry and fixed function pipeline (e.g., geometry and fixed function pipeline536, geometry and fixed function pipeline514) when graphics processing operations are to be performed. The graphics microcontroller538can be configured to perform various scheduling and management tasks for the graphics processor core500. In one embodiment the graphics microcontroller538can perform graphics and/or compute workload scheduling on the various graphics parallel engines within execution unit (EU) arrays502A-502F,504A-504F within the sub-cores501A-501F. In this scheduling model, host software executing on a CPU core of an SoC including the graphics processor core500can submit workloads to one of multiple graphics processor doorbells, which invokes a scheduling operation on the appropriate graphics engine. Scheduling operations include determining which workload to run next, submitting a workload to a command streamer, pre-empting existing workloads running on an engine, monitoring progress of a workload, and notifying host software when a workload is complete. In one embodiment the graphics microcontroller538can also facilitate low-power or idle states for the graphics processor core500, providing the graphics processor core500with the ability to save and restore registers within the graphics processor core500across low-power state transitions independently from the operating system and/or graphics driver software on the system. The graphics processor core500may have more or fewer than the illustrated sub-cores501A-501F, up to N modular sub-cores. For each set of N sub-cores, the graphics processor core500can also include shared function logic510, shared and/or cache memory512, a geometry/fixed function pipeline514, as well as additional fixed function logic516to accelerate various graphics and compute processing operations. The shared function logic510can include logic units associated with the shared function logic420ofFIG.12(e.g., sampler, math, and/or inter-thread communication logic) that can be shared by each of the N sub-cores within the graphics processor core500. The shared and/or cache memory512can be a last-level cache for the set of N sub-cores501A-501F within the graphics processor core500, and can also serve as shared memory that is accessible by multiple sub-cores.
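As an illustrative aid, the following C++ sketch captures the doorbell-style submission flow described above: host software places a workload on a queue and "rings" a doorbell, and a scheduling pass pulls the next workload for the appropriate engine. The Workload, DoorbellQueue, and schedule_once names are hypothetical and do not describe a real driver interface.

// Hedged sketch only; queue semantics and field names are assumptions.
#include <cstdint>
#include <deque>
#include <optional>

struct Workload { uint64_t id; uint32_t engine; };

class DoorbellQueue {
public:
    void submit(const Workload& w) { pending_.push_back(w); rung_ = true; }  // host side
    bool doorbell_rung() const { return rung_; }
    std::optional<Workload> pop_next() {                                     // scheduler side
        if (pending_.empty()) { rung_ = false; return std::nullopt; }
        Workload w = pending_.front();
        pending_.pop_front();
        return w;
    }
private:
    std::deque<Workload> pending_;
    bool rung_ = false;
};

// One scheduling pass: when a doorbell has been rung, take the next workload
// and hand it to the command streamer for its target engine (omitted here).
void schedule_once(DoorbellQueue& q) {
    if (!q.doorbell_rung()) return;
    if (auto w = q.pop_next()) {
        (void)w;   // submit *w to the appropriate command streamer (hypothetical step)
    }
}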
The geometry/fixed function pipeline514can be included instead of the geometry/fixed function pipeline536within the fixed function block530and can include the same or similar logic units. In one embodiment the graphics processor core500includes additional fixed function logic516that can include various fixed function acceleration logic for use by the graphics processor core500. In one embodiment the additional fixed function logic516includes an additional geometry pipeline for use in position-only shading. In position-only shading, two geometry pipelines exist, the full geometry pipeline within the geometry/fixed function pipeline514,536, and a cull pipeline, which is an additional geometry pipeline which may be included within the additional fixed function logic516. In one embodiment the cull pipeline is a trimmed-down version of the full geometry pipeline. The full pipeline and the cull pipeline can execute different instances of the same application, each instance having a separate context. Position-only shading can hide long cull runs of discarded triangles, enabling shading to be completed earlier in some instances. For example and in one embodiment the cull pipeline logic within the additional fixed function logic516can execute position shaders in parallel with the main application and generally generates critical results faster than the full pipeline, as the cull pipeline fetches and shades only the position attribute of the vertices, without performing rasterization and rendering of the pixels to the frame buffer. The cull pipeline can use the generated critical results to compute visibility information for all the triangles without regard to whether those triangles are culled. The full pipeline (which in this instance may be referred to as a replay pipeline) can consume the visibility information to skip the culled triangles to shade only the visible triangles that are finally passed to the rasterization phase. In one embodiment the additional fixed function logic516can also include machine-learning acceleration logic, such as fixed function matrix multiplication logic, for implementations including optimizations for machine learning training or inferencing. Each graphics sub-core501A-501F includes a set of execution resources that may be used to perform graphics, media, and compute operations in response to requests by the graphics pipeline, media pipeline, or shader programs. The graphics sub-cores501A-501F include multiple EU arrays502A-502F,504A-504F, thread dispatch and inter-thread communication (TD/IC) logic503A-503F, a 3D (e.g., texture) sampler505A-505F, a media sampler506A-506F, a shader processor507A-507F, and shared local memory (SLM)508A-508F. The EU arrays502A-502F,504A-504F each include multiple execution units, which are general-purpose graphics processing units capable of performing floating-point and integer/fixed-point logic operations in service of a graphics, media, or compute operation, including graphics, media, or compute shader programs. The TD/IC logic503A-503F performs local thread dispatch and thread control operations for the execution units within a sub-core and facilitates communication between threads executing on the execution units of the sub-core. The 3D sampler505A-505F can read texture or other 3D graphics related data into memory. The 3D sampler can read texture data differently based on a configured sample state and the texture format associated with a given texture.
The media sampler506A-506F can perform similar read operations based on the type and format associated with media data. In one embodiment, each graphics sub-core501A-501F can alternately include a unified 3D and media sampler. Threads executing on the execution units within each of the sub-cores501A-501F can make use of shared local memory508A-508F within each sub-core, to enable threads executing within a thread group to execute using a common pool of on-chip memory. Execution Units FIGS.14A-14Billustrate thread execution logic600including an array of processing elements employed in a graphics processor core according to embodiments described herein. Elements ofFIGS.14A-14Bhaving the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.FIG.14Aillustrates an overview of thread execution logic600, which can include a variant of the hardware logic illustrated with each sub-core501A-501F ofFIG.13.FIG.14Billustrates exemplary internal details of an execution unit. As illustrated inFIG.14A, in some embodiments thread execution logic600includes a shader processor602, a thread dispatcher604, instruction cache606, a scalable execution unit array including a plurality of execution units608A-608N, a sampler610, a data cache612, and a data port614. In one embodiment the scalable execution unit array can dynamically scale by enabling or disabling one or more execution units (e.g., any of execution unit608A,608B,608C,608D, through608N-1and608N) based on the computational requirements of a workload. In one embodiment the included components are interconnected via an interconnect fabric that links to each of the components. In some embodiments, thread execution logic600includes one or more connections to memory, such as system memory or cache memory, through one or more of instruction cache606, data port614, sampler610, and execution units608A-608N. In some embodiments, each execution unit (e.g.608A) is a stand-alone programmable general-purpose computational unit that is capable of executing multiple simultaneous hardware threads while processing multiple data elements in parallel for each thread. In various embodiments, the array of execution units608A-608N is scalable to include any number of individual execution units. In some embodiments, the execution units608A-608N are primarily used to execute shader programs. A shader processor602can process the various shader programs and dispatch execution threads associated with the shader programs via a thread dispatcher604. In one embodiment the thread dispatcher includes logic to arbitrate thread initiation requests from the graphics and media pipelines and instantiate the requested threads on one or more execution units in the execution units608A-608N. For example, a geometry pipeline can dispatch vertex, tessellation, or geometry shaders to the thread execution logic for processing. In some embodiments, thread dispatcher604can also process runtime thread spawning requests from the executing shader programs. In some embodiments, the execution units608A-608N support an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct 3D and OpenGL) are executed with minimal translation.
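As a purely illustrative aid to the thread-dispatch arbitration just described, the following C++ sketch assigns each incoming thread initiation request to the first execution unit with a free hardware-thread slot. The ExecutionUnit and ThreadRequest types, and the dispatch function itself, are assumptions made for the example only.

// Minimal sketch under the stated assumptions; not a real dispatcher interface.
#include <optional>
#include <vector>

struct ThreadRequest { int shader_id; };

struct ExecutionUnit {
    int capacity;                 // simultaneous hardware threads supported
    int active = 0;
    bool has_free_slot() const { return active < capacity; }
};

// Returns the index of the execution unit that accepted the request, if any.
std::optional<std::size_t> dispatch(std::vector<ExecutionUnit>& eus,
                                    const ThreadRequest& req) {
    for (std::size_t i = 0; i < eus.size(); ++i) {
        if (eus[i].has_free_slot()) {
            ++eus[i].active;      // instantiate the requested thread on this unit
            (void)req;            // a real dispatcher would carry shader state here
            return i;
        }
    }
    return std::nullopt;          // no resources free; the request stays pending
}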
The execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders), and general-purpose processing (e.g., compute and media shaders). Each of the execution units608A-608N is capable of multi-issue single instruction multiple data (SIMD) execution, and multi-threaded operation enables an efficient execution environment in the face of higher-latency memory accesses. Each hardware thread within each execution unit has a dedicated high-bandwidth register file and associated independent thread-state. Execution is multi-issue per clock to pipelines capable of integer, single- and double-precision floating-point operations, SIMD branch capability, logical operations, transcendental operations, and other miscellaneous operations. While waiting for data from memory or one of the shared functions, dependency logic within the execution units608A-608N causes a waiting thread to sleep until the requested data has been returned. While the waiting thread is sleeping, hardware resources may be devoted to processing other threads. For example, during a delay associated with a vertex shader operation, an execution unit can perform operations for a pixel shader, fragment shader, or another type of shader program, including a different vertex shader. Each execution unit in execution units608A-608N operates on arrays of data elements. The number of data elements is the "execution size," or the number of channels for the instruction. An execution channel is a logical unit of execution for data element access, masking, and flow control within instructions. The number of channels may be independent of the number of physical Arithmetic Logic Units (ALUs) or Floating Point Units (FPUs) for a particular graphics processor. In some embodiments, execution units608A-608N support integer and floating-point data types. The execution unit instruction set includes SIMD instructions. The various data elements can be stored as a packed data type in a register and the execution unit will process the various elements based on the data size of the elements. For example, when operating on a 256-bit wide vector, the 256 bits of the vector are stored in a register and the execution unit operates on the vector as four separate 64-bit packed data elements (Quad-Word (QW) size data elements), eight separate 32-bit packed data elements (Double Word (DW) size data elements), sixteen separate 16-bit packed data elements (Word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, different vector widths and register sizes are possible. In one embodiment one or more execution units can be combined into a fused execution unit609A-609N having thread control logic (607A-607N) that is common to the fused EUs. Multiple EUs can be fused into an EU group. Each EU in the fused EU group can be configured to execute a separate SIMD hardware thread. The number of EUs in a fused EU group can vary according to embodiments. Additionally, various SIMD widths can be executed per EU, including but not limited to SIMD8, SIMD16, and SIMD32. Each fused graphics execution unit609A-609N includes at least two execution units. For example, fused execution unit609A includes a first EU608A, a second EU608B, and thread control logic607A that is common to the first EU608A and the second EU608B.
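As a worked software analogy of the packed-data interpretation described above, the following C++ sketch views the same 256 bits of register storage as four 64-bit, eight 32-bit, sixteen 16-bit, or thirty-two 8-bit elements. The Vec256 type and view_as helper are illustrative assumptions and do not model actual register hardware.

// Software analogy only; the byte-copy view stands in for hardware element widths.
#include <cstdint>
#include <cstring>

struct Vec256 { uint8_t bytes[32]; };   // 256 bits of register storage

template <typename T, std::size_t N>
void view_as(const Vec256& v, T (&out)[N]) {
    static_assert(sizeof(T) * N == sizeof(Vec256), "must cover all 256 bits");
    std::memcpy(out, v.bytes, sizeof(Vec256));
}

void demo(const Vec256& v) {
    uint64_t qw[4];   view_as(v, qw);   // Quad-Word (QW) size data elements
    uint32_t dw[8];   view_as(v, dw);   // Double Word (DW) size data elements
    uint16_t w[16];   view_as(v, w);    // Word (W) size data elements
    uint8_t  b[32];   view_as(v, b);    // byte (B) size data elements
    // An add at DW width would then operate on eight channels simultaneously.
    (void)qw; (void)dw; (void)w; (void)b;
}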
The thread control logic607A controls threads executed on the fused graphics execution unit609A, allowing each EU within the fused execution units609A-609N to execute using a common instruction pointer register. One or more internal instruction caches (e.g.,606) are included in the thread execution logic600to cache thread instructions for the execution units. In some embodiments, one or more data caches (e.g.,612) are included to cache thread data during thread execution. In some embodiments, a sampler610is included to provide texture sampling for 3D operations and media sampling for media operations. In some embodiments, sampler610includes specialized texture or media sampling functionality to process texture or media data during the sampling process before providing the sampled data to an execution unit. During execution, the graphics and media pipelines send thread initiation requests to thread execution logic600via thread spawning and dispatch logic. Once a group of geometric objects has been processed and rasterized into pixel data, pixel processor logic (e.g., pixel shader logic, fragment shader logic, etc.) within the shader processor602is invoked to further compute output information and cause results to be written to output surfaces (e.g., color buffers, depth buffers, stencil buffers, etc.). In some embodiments, a pixel shader or fragment shader calculates the values of the various vertex attributes that are to be interpolated across the rasterized object. In some embodiments, pixel processor logic within the shader processor602then executes an application programming interface (API)-supplied pixel or fragment shader program. To execute the shader program, the shader processor602dispatches threads to an execution unit (e.g.,608A) via thread dispatcher604. In some embodiments, shader processor602uses texture sampling logic in the sampler610to access texture data in texture maps stored in memory. Arithmetic operations on the texture data and the input geometry data compute pixel color data for each geometric fragment, or discard one or more pixels from further processing. In some embodiments, the data port614provides a memory access mechanism for the thread execution logic600to output processed data to memory for further processing on a graphics processor output pipeline. In some embodiments, the data port614includes or couples to one or more cache memories (e.g., data cache612) to cache data for memory access via the data port. As illustrated inFIG.14B, a graphics execution unit608can include an instruction fetch unit637, a general register file array (GRF)624, an architectural register file array (ARF)626, a thread arbiter622, a send unit630, a branch unit632, a set of SIMD floating point units (FPUs)634, and in one embodiment a set of dedicated integer SIMD ALUs635. The GRF624and ARF626include the sets of general register files and architectural register files associated with each simultaneous hardware thread that may be active in the graphics execution unit608. In one embodiment, per-thread architectural state is maintained in the ARF626, while data used during thread execution is stored in the GRF624. The execution state of each thread, including the instruction pointers for each thread, can be held in thread-specific registers in the ARF626. In one embodiment the graphics execution unit608has an architecture that is a combination of Simultaneous Multi-Threading (SMT) and fine-grained Interleaved Multi-Threading (IMT).
The architecture has a modular configuration that can be fine-tuned at design time based on a target number of simultaneous threads and number of registers per execution unit, where execution unit resources are divided across logic used to execute multiple simultaneous threads. In one embodiment, the graphics execution unit608can co-issue multiple instructions, which may each be different instructions. The thread arbiter622of the graphics execution unit608can dispatch the instructions to one of the send unit630, branch unit632, or SIMD FPU(s)634for execution. Each execution thread can access 128 general-purpose registers within the GRF624, where each register can store 32 bytes, accessible as a SIMD 8-element vector of 32-bit data elements. In one embodiment, each execution unit thread has access to 4 Kbytes within the GRF624, although embodiments are not so limited, and greater or fewer register resources may be provided in other embodiments. In one embodiment up to seven threads can execute simultaneously, although the number of threads per execution unit can also vary according to embodiments. In an embodiment in which seven threads may access 4 Kbytes, the GRF624can store a total of 28 Kbytes. Flexible addressing modes can permit registers to be addressed together to build effectively wider registers or to represent strided rectangular block data structures. In one embodiment, memory operations, sampler operations, and other longer-latency system communications are dispatched via "send" instructions that are executed by the message passing send unit630. In one embodiment, branch instructions are dispatched to a dedicated branch unit632to facilitate SIMD divergence and eventual convergence. In one embodiment the graphics execution unit608includes one or more SIMD floating point units (FPU(s))634to perform floating-point operations. In one embodiment, the FPU(s)634also support integer computation. In one embodiment the FPU(s)634can SIMD execute up to M number of 32-bit floating-point (or integer) operations, or SIMD execute up to 2M 16-bit integer or 16-bit floating-point operations. In one embodiment, at least one of the FPU(s) provides extended math capability to support high-throughput transcendental math functions and double precision 64-bit floating-point. In some embodiments, a set of 8-bit integer SIMD ALUs635are also present, and may be specifically optimized to perform operations associated with machine learning computations. In one embodiment, arrays of multiple instances of the graphics execution unit608can be instantiated in a graphics sub-core grouping (e.g., a sub-slice). For scalability, product architects can choose the exact number of execution units per sub-core grouping. In one embodiment the execution unit608can execute instructions across a plurality of execution channels. In a further embodiment, each thread executed on the graphics execution unit608is executed on a different channel. FIG.15is a block diagram illustrating graphics processor instruction formats700according to some embodiments. In one or more embodiments, the graphics processor execution units support an instruction set having instructions in multiple formats. The solid lined boxes illustrate the components that are generally included in an execution unit instruction, while the dashed lines include components that are optional or that are only included in a subset of the instructions.
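The register-file sizing stated above can be checked with a short piece of compile-time arithmetic; the sketch below merely restates the assumptions given in the text (128 registers of 32 bytes per thread, up to seven resident threads) and is not a description of any particular hardware configuration.

// Arithmetic check only, under the assumptions quoted from the paragraph above.
#include <cstddef>

constexpr std::size_t kRegistersPerThread = 128;
constexpr std::size_t kBytesPerRegister   = 32;     // one SIMD 8-element vector of 32-bit data
constexpr std::size_t kThreadsPerEU       = 7;

constexpr std::size_t kBytesPerThread = kRegistersPerThread * kBytesPerRegister; // 4096
constexpr std::size_t kTotalGrfBytes  = kBytesPerThread * kThreadsPerEU;         // 28672

static_assert(kBytesPerThread == 4 * 1024, "4 Kbytes of GRF per thread");
static_assert(kTotalGrfBytes == 28 * 1024, "28 Kbytes of GRF in total");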
In some embodiments, the instructions of instruction format700described and illustrated are macro-instructions, in that they are instructions supplied to the execution unit, as opposed to micro-operations resulting from instruction decode once the instruction is processed. In some embodiments, the graphics processor execution units natively support instructions in a 128-bit instruction format710. A 64-bit compacted instruction format730is available for some instructions based on the selected instruction, instruction options, and number of operands. The native 128-bit instruction format710provides access to all instruction options, while some options and operations are restricted in the 64-bit format730. The native instructions available in the 64-bit format730vary by embodiment. In some embodiments, the instruction is compacted in part using a set of index values in an index field713. The execution unit hardware references a set of compaction tables based on the index values and uses the compaction table outputs to reconstruct a native instruction in the 128-bit instruction format710. For each format, instruction opcode712defines the operation that the execution unit is to perform. The execution units execute each instruction in parallel across the multiple data elements of each operand. For example, in response to an add instruction the execution unit performs a simultaneous add operation across each color channel representing a texture element or picture element. By default, the execution unit performs each instruction across all data channels of the operands. In some embodiments, instruction control field714enables control over certain execution options, such as channel selection (e.g., predication) and data channel order (e.g., swizzle). For instructions in the 128-bit instruction format710, an exec-size field716limits the number of data channels that will be executed in parallel. In some embodiments, exec-size field716is not available for use in the 64-bit compact instruction format730. Some execution unit instructions have up to three operands including two source operands, src0720, src1722, and one destination718. In some embodiments, the execution units support dual destination instructions, where one of the destinations is implied. Data manipulation instructions can have a third source operand (e.g., SRC2724), where the instruction opcode712determines the number of source operands. An instruction's last source operand can be an immediate (e.g., hard-coded) value passed with the instruction. In some embodiments, the 128-bit instruction format710includes an access/address mode field726specifying, for example, whether direct register addressing mode or indirect register addressing mode is used. When direct register addressing mode is used, the register address of one or more operands is directly provided by bits in the instruction. In some embodiments, the 128-bit instruction format710includes an access/address mode field726, which specifies an address mode and/or an access mode for the instruction. In one embodiment the access mode is used to define a data access alignment for the instruction. Some embodiments support access modes including a 16-byte aligned access mode and a 1-byte aligned access mode, where the byte alignment of the access mode determines the access alignment of the instruction operands.
For example, when in a first mode, the instruction may use byte-aligned addressing for source and destination operands and when in a second mode, the instruction may use 16-byte-aligned addressing for all source and destination operands. In one embodiment, the address mode portion of the access/address mode field726determines whether the instruction is to use direct or indirect addressing. When direct register addressing mode is used bits in the instruction directly provide the register address of one or more operands. When indirect register addressing mode is used, the register address of one or more operands may be computed based on an address register value and an address immediate field in the instruction. In some embodiments instructions are grouped based on opcode712bit-fields to simplify Opcode decode740. For an 8-bit opcode, bits4,5, and6allow the execution unit to determine the type of opcode. The precise opcode grouping shown is merely an example. In some embodiments, a move and logic opcode group742includes data movement and logic instructions (e.g., move (mov), compare (cmp)). In some embodiments, move and logic group742shares the five most significant bits (MSB), where move (mov) instructions are in the form of 0000xxxxb and logic instructions are in the form of 0001xxxxb. A flow control instruction group744(e.g., call, jump (jmp)) includes instructions in the form of 0010xxxxb (e.g., 0x20). A miscellaneous instruction group746includes a mix of instructions, including synchronization instructions (e.g., wait, send) in the form of 0011xxxxb (e.g., 0x30). A parallel math instruction group748includes component-wise arithmetic instructions (e.g., add, multiply (mul)) in the form of 0100xxxxb (e.g., 0x40). The parallel math group748performs the arithmetic operations in parallel across data channels. The vector math group750includes arithmetic instructions (e.g., dp4) in the form of 0101xxxxb (e.g., 0x50). The vector math group performs arithmetic such as dot product calculations on vector operands. Graphics Pipeline FIG.16is a block diagram of another embodiment of a graphics processor800. Elements ofFIG.16having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. In some embodiments, graphics processor800includes a geometry pipeline820, a media pipeline830, a display engine840, thread execution logic850, and a render output pipeline870. In some embodiments, graphics processor800is a graphics processor within a multi-core processing system that includes one or more general-purpose processing cores. The graphics processor is controlled by register writes to one or more control registers (not shown) or via commands issued to graphics processor800via a ring interconnect802. In some embodiments, ring interconnect802couples graphics processor800to other processing components, such as other graphics processors or general-purpose processors. Commands from ring interconnect802are interpreted by a command streamer803, which supplies instructions to individual components of the geometry pipeline820or the media pipeline830. In some embodiments, command streamer803directs the operation of a vertex fetcher805that reads vertex data from memory and executes vertex-processing commands provided by command streamer803. 
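As an illustrative aid, the following C++ sketch classifies an 8-bit opcode into the groups listed above (move/logic 0000xxxxb and 0001xxxxb, flow control 0010xxxxb, miscellaneous 0011xxxxb, parallel math 0100xxxxb, vector math 0101xxxxb) by examining its upper bits. The enum and function names are assumptions for the example only.

// Hedged decode sketch; the bit patterns are taken from the paragraph above.
#include <cstdint>

enum class OpcodeGroup { MoveLogic, FlowControl, Miscellaneous,
                         ParallelMath, VectorMath, Unknown };

OpcodeGroup classify(uint8_t opcode) {
    switch (opcode >> 4) {                // examine the top four opcode bits
        case 0x0: case 0x1: return OpcodeGroup::MoveLogic;     // 0000xxxxb, 0001xxxxb
        case 0x2:           return OpcodeGroup::FlowControl;   // 0010xxxxb (e.g., 0x20)
        case 0x3:           return OpcodeGroup::Miscellaneous; // 0011xxxxb (e.g., 0x30)
        case 0x4:           return OpcodeGroup::ParallelMath;  // 0100xxxxb (e.g., 0x40)
        case 0x5:           return OpcodeGroup::VectorMath;    // 0101xxxxb (e.g., 0x50)
        default:            return OpcodeGroup::Unknown;
    }
}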
In some embodiments, vertex fetcher805provides vertex data to a vertex shader807, which performs coordinate space transformation and lighting operations to each vertex. In some embodiments, vertex fetcher805and vertex shader807execute vertex-processing instructions by dispatching execution threads to execution units852A-852B via a thread dispatcher831. In some embodiments, execution units852A-852B are an array of vector processors having an instruction set for performing graphics and media operations. In some embodiments, execution units852A-852B have an attached L1 cache851that is specific for each array or shared between the arrays. The cache can be configured as a data cache, an instruction cache, or a single cache that is partitioned to contain data and instructions in different partitions. In some embodiments, geometry pipeline820includes tessellation components to perform hardware-accelerated tessellation of 3D objects. In some embodiments, a programmable hull shader811configures the tessellation operations. A programmable domain shader817provides back-end evaluation of tessellation output. A tessellator813operates at the direction of hull shader811and contains special purpose logic to generate a set of detailed geometric objects based on a coarse geometric model that is provided as input to geometry pipeline820. In some embodiments, if tessellation is not used, tessellation components (e.g., hull shader811, tessellator813, and domain shader817) can be bypassed. In some embodiments, complete geometric objects can be processed by a geometry shader819via one or more threads dispatched to execution units852A-852B, or can proceed directly to the clipper829. In some embodiments, the geometry shader operates on entire geometric objects, rather than vertices or patches of vertices as in previous stages of the graphics pipeline. If the tessellation is disabled the geometry shader819receives input from the vertex shader807. In some embodiments, geometry shader819is programmable by a geometry shader program to perform geometry tessellation if the tessellation units are disabled. Before rasterization, a clipper829processes vertex data. The clipper829may be a fixed function clipper or a programmable clipper having clipping and geometry shader functions. In some embodiments, a rasterizer and depth test component873in the render output pipeline870dispatches pixel shaders to convert the geometric objects into per pixel representations. In some embodiments, pixel shader logic is included in thread execution logic850. In some embodiments, an application can bypass the rasterizer and depth test component873and access un-rasterized vertex data via a stream out unit823. The graphics processor800has an interconnect bus, interconnect fabric, or some other interconnect mechanism that allows data and message passing amongst the major components of the processor. In some embodiments, execution units852A-852B and associated logic units (e.g., L1 cache851, sampler854, texture cache858, etc.) interconnect via a data port856to perform memory access and communicate with render output pipeline components of the processor. In some embodiments, sampler854, caches851,858and execution units852A-852B each have separate memory access paths. In one embodiment the texture cache858can also be configured as a sampler cache. In some embodiments, render output pipeline870contains a rasterizer and depth test component873that converts vertex-based objects into an associated pixel-based representation. 
In some embodiments, the rasterizer logic includes a windower/masker unit to perform fixed function triangle and line rasterization. An associated render cache878and depth cache879are also available in some embodiments. A pixel operations component877performs pixel-based operations on the data, though in some instances, pixel operations associated with 2D operations (e.g. bit block image transfers with blending) are performed by the 2D engine841, or substituted at display time by the display controller843using overlay display planes. In some embodiments, a shared L3 cache875is available to all graphics components, allowing the sharing of data without the use of main system memory. In some embodiments, graphics processor media pipeline830includes a media engine837and a video front-end834. In some embodiments, video front-end834receives pipeline commands from the command streamer803. In some embodiments, media pipeline830includes a separate command streamer. In some embodiments, video front-end834processes media commands before sending the command to the media engine837. In some embodiments, media engine837includes thread spawning functionality to spawn threads for dispatch to thread execution logic850via thread dispatcher831. In some embodiments, graphics processor800includes a display engine840. In some embodiments, display engine840is external to processor800and couples with the graphics processor via the ring interconnect802, or some other interconnect bus or fabric. In some embodiments, display engine840includes a 2D engine841and a display controller843. In some embodiments, display engine840contains special purpose logic capable of operating independently of the 3D pipeline. In some embodiments, display controller843couples with a display device (not shown), which may be a system integrated display device, as in a laptop computer, or an external display device attached via a display device connector. In some embodiments, the geometry pipeline820and media pipeline830are configurable to perform operations based on multiple graphics and media programming interfaces and are not specific to any one application programming interface (API). In some embodiments, driver software for the graphics processor translates API calls that are specific to a particular graphics or media library into commands that can be processed by the graphics processor. In some embodiments, support is provided for the Open Graphics Library (OpenGL), Open Computing Language (OpenCL), and/or Vulkan graphics and compute API, all from the Khronos Group. In some embodiments, support may also be provided for the Direct3D library from the Microsoft Corporation. In some embodiments, a combination of these libraries may be supported. Support may also be provided for the Open Source Computer Vision Library (OpenCV). A future API with a compatible 3D pipeline would also be supported if a mapping can be made from the pipeline of the future API to the pipeline of the graphics processor. Graphics Pipeline Programming FIG.17Ais a block diagram illustrating a graphics processor command format900according to some embodiments.FIG.17Bis a block diagram illustrating a graphics processor command sequence910according to an embodiment. The solid lined boxes inFIG.17Aillustrate the components that are generally included in a graphics command while the dashed lines include components that are optional or that are only included in a sub-set of the graphics commands. 
The exemplary graphics processor command format900ofFIG.17Aincludes data fields to identify a client902, a command operation code (opcode)904, and data906for the command. A sub-opcode905and a command size908are also included in some commands. In some embodiments, client902specifies the client unit of the graphics device that processes the command data. In some embodiments, a graphics processor command parser examines the client field of each command to condition the further processing of the command and route the command data to the appropriate client unit. In some embodiments, the graphics processor client units include a memory interface unit, a render unit, a 2D unit, a 3D unit, and a media unit. Each client unit has a corresponding processing pipeline that processes the commands. Once the command is received by the client unit, the client unit reads the opcode904and, if present, sub-opcode905to determine the operation to perform. The client unit performs the command using information in data field906. For some commands, an explicit command size908is expected to specify the size of the command. In some embodiments, the command parser automatically determines the size of at least some of the commands based on the command opcode. In some embodiments commands are aligned via multiples of a double word. The flow diagram inFIG.17Billustrates an exemplary graphics processor command sequence910. In some embodiments, software or firmware of a data processing system that features an embodiment of a graphics processor uses a version of the command sequence shown to set up, execute, and terminate a set of graphics operations. A sample command sequence is shown and described for purposes of example only, as embodiments are not limited to these specific commands or to this command sequence. Moreover, the commands may be issued as a batch of commands in a command sequence, such that the graphics processor will process the sequence of commands in at least partial concurrence. In some embodiments, the graphics processor command sequence910may begin with a pipeline flush command912to cause any active graphics pipeline to complete the currently pending commands for the pipeline. In some embodiments, the 3D pipeline922and the media pipeline924do not operate concurrently. The pipeline flush is performed to cause the active graphics pipeline to complete any pending commands. In response to a pipeline flush, the command parser for the graphics processor will pause command processing until the active drawing engines complete pending operations and the relevant read caches are invalidated. Optionally, any data in the render cache that is marked 'dirty' can be flushed to memory. In some embodiments, pipeline flush command912can be used for pipeline synchronization or before placing the graphics processor into a low power state. In some embodiments, a pipeline select command913is used when a command sequence requires the graphics processor to explicitly switch between pipelines. In some embodiments, a pipeline select command913is required only once within an execution context before issuing pipeline commands unless the context is to issue commands for both pipelines. In some embodiments, a pipeline flush command912is required immediately before a pipeline switch via the pipeline select command913. In some embodiments, a pipeline control command914configures a graphics pipeline for operation and is used to program the 3D pipeline922and the media pipeline924.
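The command layout and client routing described above can be pictured with a small C++ sketch; the field widths and the ClientUnit list below are illustrative assumptions rather than the actual encoding of any particular graphics processor.

// Minimal sketch under the stated assumptions; all field widths are illustrative.
#include <cstdint>
#include <vector>

enum class ClientUnit : uint8_t { Memory, Render, TwoD, ThreeD, Media };

struct GraphicsCommand {
    ClientUnit client;              // which client unit processes the command
    uint16_t   opcode;              // operation to perform
    uint16_t   sub_opcode;          // optional refinement of the operation
    uint32_t   command_size;        // explicit size, when the opcode requires it
    std::vector<uint32_t> data;     // command-specific payload
};

// Parser step: examine the client field and route to the matching client unit.
void route(const GraphicsCommand& cmd) {
    switch (cmd.client) {
        case ClientUnit::ThreeD:  /* hand to the 3D pipeline parser */    break;
        case ClientUnit::Media:   /* hand to the media pipeline parser */ break;
        default:                  /* memory, render, or 2D handling */    break;
    }
}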
In some embodiments, pipeline control command914configures the pipeline state for the active pipeline. In one embodiment, the pipeline control command914is used for pipeline synchronization and to clear data from one or more cache memories within the active pipeline before processing a batch of commands. In some embodiments, return buffer state commands916are used to configure a set of return buffers for the respective pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers into which the operations write intermediate data during processing. In some embodiments, the graphics processor also uses one or more return buffers to store output data and to perform cross thread communication. In some embodiments, the return buffer state916includes selecting the size and number of return buffers to use for a set of pipeline operations. The remaining commands in the command sequence differ based on the active pipeline for operations. Based on a pipeline determination920, the command sequence is tailored to the 3D pipeline922beginning with the 3D pipeline state930or the media pipeline924beginning at the media pipeline state940. The commands to configure the 3D pipeline state930include 3D state setting commands for vertex buffer state, vertex element state, constant color state, depth buffer state, and other state variables that are to be configured before 3D primitive commands are processed. The values of these commands are determined at least in part based on the particular 3D API in use. In some embodiments, 3D pipeline state930commands are also able to selectively disable or bypass certain pipeline elements if those elements will not be used. In some embodiments, 3D primitive932command is used to submit 3D primitives to be processed by the 3D pipeline. Commands and associated parameters that are passed to the graphics processor via the 3D primitive932command are forwarded to the vertex fetch function in the graphics pipeline. The vertex fetch function uses the 3D primitive932command data to generate vertex data structures. The vertex data structures are stored in one or more return buffers. In some embodiments, 3D primitive932command is used to perform vertex operations on 3D primitives via vertex shaders. To process vertex shaders, 3D pipeline922dispatches shader execution threads to graphics processor execution units. In some embodiments, 3D pipeline922is triggered via an execute934command or event. In some embodiments, a register write triggers command execution. In some embodiments execution is triggered via a ‘go’ or ‘kick’ command in the command sequence. In one embodiment, command execution is triggered using a pipeline synchronization command to flush the command sequence through the graphics pipeline. The 3D pipeline will perform geometry processing for the 3D primitives. Once operations are complete, the resulting geometric objects are rasterized and the pixel engine colors the resulting pixels. Additional commands to control pixel shading and pixel back end operations may also be included for those operations. In some embodiments, the graphics processor command sequence910follows the media pipeline924path when performing media operations. In general, the specific use and manner of programming for the media pipeline924depends on the media or compute operations to be performed. Specific media decode operations may be offloaded to the media pipeline during media decode. 
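As an illustrative summary of the 3D path of the command sequence described above, the following C++ sketch lists the ordering of flush, pipeline select, pipeline control, return buffer state, 3D state, 3D primitive, and execute. The Op enum is a placeholder for real command encodings, which are far more detailed.

// Ordering sketch only; the Op names paraphrase the commands described above.
#include <vector>

enum class Op { PipelineFlush, PipelineSelect3D, PipelineControl,
                ReturnBufferState, State3D, Primitive3D, Execute };

std::vector<Op> build_3d_command_sequence() {
    return {
        Op::PipelineFlush,       // complete pending work, invalidate read caches
        Op::PipelineSelect3D,    // needed once per context before 3D commands
        Op::PipelineControl,     // configure pipeline state, clear pipeline caches
        Op::ReturnBufferState,   // size and number of return buffers
        Op::State3D,             // vertex buffer/element, depth buffer state, etc.
        Op::Primitive3D,         // submit primitives; vertex fetch builds structures
        Op::Execute              // 'go'/'kick' command or register write triggers work
    };
}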
In some embodiments, the media pipeline can also be bypassed and media decode can be performed in whole or in part using resources provided by one or more general-purpose processing cores. In one embodiment, the media pipeline also includes elements for general-purpose graphics processor unit (GPGPU) operations, where the graphics processor is used to perform SIMD vector operations using computational shader programs that are not explicitly related to the rendering of graphics primitives. In some embodiments, media pipeline924is configured in a similar manner as the 3D pipeline922. A set of commands to configure the media pipeline state940are dispatched or placed into a command queue before the media object commands942. In some embodiments, commands for the media pipeline state940include data to configure the media pipeline elements that will be used to process the media objects. This includes data to configure the video decode and video encode logic within the media pipeline, such as encode or decode format. In some embodiments, commands for the media pipeline state940also support the use of one or more pointers to “indirect” state elements that contain a batch of state settings. In some embodiments, media object commands942supply pointers to media objects for processing by the media pipeline. The media objects include memory buffers containing video data to be processed. In some embodiments, all media pipeline states must be valid before issuing a media object command942. Once the pipeline state is configured and media object commands942are queued, the media pipeline924is triggered via an execute command944or an equivalent execute event (e.g., register write). Output from media pipeline924may then be post processed by operations provided by the 3D pipeline922or the media pipeline924. In some embodiments, GPGPU operations are configured and executed in a similar manner as media operations. Graphics Software Architecture FIG.18illustrates exemplary graphics software architecture for a data processing system1000according to some embodiments. In some embodiments, software architecture includes a 3D graphics application1010, an operating system1020, and at least one processor1030. In some embodiments, processor1030includes a graphics processor1032and one or more general-purpose processor core(s)1034. The graphics application1010and operating system1020each execute in the system memory1050of the data processing system. In some embodiments, 3D graphics application1010contains one or more shader programs including shader instructions1012. The shader language instructions may be in a high-level shader language, such as the High Level Shader Language (HLSL) or the OpenGL Shader Language (GLSL). The application also includes executable instructions1014in a machine language suitable for execution by the general-purpose processor core1034. The application also includes graphics objects1016defined by vertex data. In some embodiments, operating system1020is a Microsoft® Windows® operating system from the Microsoft Corporation, a proprietary UNIX-like operating system, or an open source UNIX-like operating system using a variant of the Linux kernel. The operating system1020can support a graphics API1022such as the Direct3D API, the OpenGL API, or the Vulkan API. When the Direct3D API is in use, the operating system1020uses a front-end shader compiler1024to compile any shader instructions1012in HLSL into a lower-level shader language. 
The compilation may be a just-in-time (JIT) compilation or the application can perform shader pre-compilation. In some embodiments, high-level shaders are compiled into low-level shaders during the compilation of the 3D graphics application1010. In some embodiments, the shader instructions1012are provided in an intermediate form, such as a version of the Standard Portable Intermediate Representation (SPIR) used by the Vulkan API. In some embodiments, user mode graphics driver1026contains a back-end shader compiler1027to convert the shader instructions1012into a hardware specific representation. When the OpenGL API is in use, shader instructions1012in the GLSL high-level language are passed to a user mode graphics driver1026for compilation. In some embodiments, user mode graphics driver1026uses operating system kernel mode functions1028to communicate with a kernel mode graphics driver1029. In some embodiments, kernel mode graphics driver1029communicates with graphics processor1032to dispatch commands and instructions. IP Core Implementations One or more aspects of at least one embodiment may be implemented by representative code stored on a machine-readable medium which represents and/or defines logic within an integrated circuit such as a processor. For example, the machine-readable medium may include instructions which represent various logic within the processor. When read by a machine, the instructions may cause the machine to fabricate the logic to perform the techniques described herein. Such representations, known as “IP cores,” are reusable units of logic for an integrated circuit that may be stored on a tangible, machine-readable medium as a hardware model that describes the structure of the integrated circuit. The hardware model may be supplied to various customers or manufacturing facilities, which load the hardware model on fabrication machines that manufacture the integrated circuit. The integrated circuit may be fabricated such that the circuit performs operations described in association with any of the embodiments described herein. FIG.19Ais a block diagram illustrating an IP core development system1100that may be used to manufacture an integrated circuit to perform operations according to an embodiment. The IP core development system1100may be used to generate modular, re-usable designs that can be incorporated into a larger design or used to construct an entire integrated circuit (e.g., an SOC integrated circuit). A design facility1130can generate a software simulation1110of an IP core design in a high-level programming language (e.g., C/C++). The software simulation1110can be used to design, test, and verify the behavior of the IP core using a simulation model1112. The simulation model1112may include functional, behavioral, and/or timing simulations. A register transfer level (RTL) design1115can then be created or synthesized from the simulation model1112. The RTL design1115is an abstraction of the behavior of the integrated circuit that models the flow of digital signals between hardware registers, including the associated logic performed using the modeled digital signals. In addition to an RTL design1115, lower-level designs at the logic level or transistor level may also be created, designed, or synthesized. Thus, the particular details of the initial design and simulation may vary. 
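As a purely illustrative aid to the two-stage shader compilation flow described above (a front-end compile from a high-level language to an intermediate form, followed by a back-end compile in the user mode driver to a hardware-specific representation), the following C++ sketch stubs out both stages. Every type and function here is hypothetical; it is not the interface of any real compiler or driver.

// Hedged sketch only; both compile stages are stubbed.
#include <cstdint>
#include <string>
#include <vector>

struct IntermediateShader { std::vector<uint32_t> words; };   // e.g., a SPIR-like form
struct HardwareShader     { std::vector<uint8_t>  isa;   };   // device-specific code

// Front-end stage: performed by the runtime/API layer, JIT or ahead of time.
IntermediateShader compile_front_end(const std::string& hlsl_or_glsl_source) {
    IntermediateShader ir;
    (void)hlsl_or_glsl_source;   // parse, validate, and lower the source (omitted)
    return ir;
}

// Back-end stage: performed by the user mode graphics driver for the target GPU.
HardwareShader compile_back_end(const IntermediateShader& ir) {
    HardwareShader hw;
    (void)ir;                    // instruction selection, register allocation (omitted)
    return hw;
}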
The RTL design1115or equivalent may be further synthesized by the design facility into a hardware model1120, which may be in a hardware description language (HDL), or some other representation of physical design data. The HDL may be further simulated or tested to verify the IP core design. The IP core design can be stored for delivery to a 3rd party fabrication facility1165using non-volatile memory1140(e.g., hard disk, flash memory, or any non-volatile storage medium). Alternatively, the IP core design may be transmitted (e.g., via the Internet) over a wired connection1150or wireless connection1160. The fabrication facility1165may then fabricate an integrated circuit that is based at least in part on the IP core design. The fabricated integrated circuit can be configured to perform operations in accordance with at least one embodiment described herein. FIG.19Billustrates a cross-section side view of an integrated circuit package assembly1170, according to some embodiments described herein. The integrated circuit package assembly1170illustrates an implementation of one or more processor or accelerator devices as described herein. The package assembly1170includes multiple units of hardware logic1172,1174connected to a substrate1180. The logic1172,1174may be implemented at least partly in configurable logic or fixed-functionality logic hardware, and can include one or more portions of any of the processor core(s), graphics processor(s), or other accelerator devices described herein. Each unit of logic1172,1174can be implemented within a semiconductor die and coupled with the substrate1180via an interconnect structure1173. The interconnect structure1173may be configured to route electrical signals between the logic1172,1174and the substrate1180, and can include interconnects such as, but not limited to bumps or pillars. In some embodiments, the interconnect structure1173may be configured to route electrical signals such as, for example, input/output (I/O) signals and/or power or ground signals associated with the operation of the logic1172,1174. In some embodiments, the substrate1180is an epoxy-based laminate substrate. The package substrate1180may include other suitable types of substrates in other embodiments. The package assembly1170can be connected to other electrical devices via a package interconnect1183. The package interconnect1183may be coupled to a surface of the substrate1180to route electrical signals to other electrical devices, such as a motherboard, other chipset, or multi-chip module. In some embodiments, the units of logic1172,1174are electrically coupled with a bridge1182that is configured to route electrical signals between the logic1172,1174. The bridge1182may be a dense interconnect structure that provides a route for electrical signals. The bridge1182may include a bridge substrate composed of glass or a suitable semiconductor material. Electrical routing features can be formed on the bridge substrate to provide a chip-to-chip connection between the logic1172,1174. Although two units of logic1172,1174and a bridge1182are illustrated, embodiments described herein may include more or fewer logic units on one or more dies. The one or more dies may be connected by zero or more bridges, as the bridge1182may be excluded when the logic is included on a single die. Alternatively, multiple dies or units of logic can be connected by one or more bridges. 
Additionally, multiple logic units, dies, and bridges can be connected together in other possible configurations, including three-dimensional configurations. Exemplary System on a Chip Integrated Circuit FIGS.20-22Billustrated exemplary integrated circuits and associated graphics processors that may be fabricated using one or more IP cores, according to various embodiments described herein. In addition to what is illustrated, other logic and circuits may be included, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores. FIG.20is a block diagram illustrating an exemplary system on a chip integrated circuit1200that may be fabricated using one or more IP cores, according to an embodiment. Exemplary integrated circuit1200includes one or more application processor(s)1205(e.g., CPUs), at least one graphics processor1210, and may additionally include an image processor1215and/or a video processor1220, any of which may be a modular IP core from the same or multiple different design facilities. Integrated circuit1200includes peripheral or bus logic including a USB controller1225, UART controller1230, an SPI/SDIO controller1235, and an I2S/I2C controller1240. Additionally, the integrated circuit can include a display device1245coupled to one or more of a high-definition multimedia interface (HDMI) controller1250and a mobile industry processor interface (MIPI) display interface1255. Storage may be provided by a flash memory subsystem1260including flash memory and a flash memory controller. Memory interface may be provided via a memory controller1265for access to SDRAM or SRAM memory devices. Some integrated circuits additionally include an embedded security engine1270. FIGS.21A-21Bare block diagrams illustrating exemplary graphics processors for use within an SoC, according to embodiments described herein.FIG.21Aillustrates an exemplary graphics processor1310of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment.FIG.21Billustrates an additional exemplary graphics processor1340of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. Graphics processor1310ofFIG.21Ais an example of a low power graphics processor core. Graphics processor1340ofFIG.21Bis an example of a higher performance graphics processor core. Each of the graphics processors1310,1340can be variants of the graphics processor1210ofFIG.20. As shown inFIG.21A, graphics processor1310includes a vertex processor1305and one or more fragment processor(s)1315A-1315N (e.g.,1315A,1315B,1315C,1315D, through1315N-1, and1315N). Graphics processor1310can execute different shader programs via separate logic, such that the vertex processor1305is optimized to execute operations for vertex shader programs, while the one or more fragment processor(s)1315A-1315N execute fragment (e.g., pixel) shading operations for fragment or pixel shader programs. The vertex processor1305performs the vertex processing stage of the 3D graphics pipeline and generates primitives and vertex data. The fragment processor(s)1315A-1315N use the primitive and vertex data generated by the vertex processor1305to produce a framebuffer that is displayed on a display device. 
In one embodiment, the fragment processor(s)1315A-1315N are optimized to execute fragment shader programs as provided for in the OpenGL API, which may be used to perform similar operations as a pixel shader program as provided for in the Direct 3D API. Graphics processor1310additionally includes one or more memory management units (MMUs)1320A-1320B, cache(s)1325A-1325B, and circuit interconnect(s)1330A-1330B. The one or more MMU(s)1320A-1320B provide for virtual to physical address mapping for the graphics processor1310, including for the vertex processor1305and/or fragment processor(s)1315A-1315N, which may reference vertex or image/texture data stored in memory, in addition to vertex or image/texture data stored in the one or more cache(s)1325A-1325B. In one embodiment the one or more MMU(s)1320A-1320B may be synchronized with other MMUs within the system, including one or more MMUs associated with the one or more application processor(s)1205, image processor1215, and/or video processor1220ofFIG.20, such that each processor1205-1220can participate in a shared or unified virtual memory system. The one or more circuit interconnect(s)1330A-1330B enable graphics processor1310to interface with other IP cores within the SoC, either via an internal bus of the SoC or via a direct connection, according to embodiments. As shown inFIG.21B, graphics processor1340includes the one or more MMU(s)1320A-1320B, caches1325A-1325B, and circuit interconnects1330A-1330B of the graphics processor1310ofFIG.21A. Graphics processor1340includes one or more shader core(s)1355A-1355N (e.g.,1355A,1355B,1355C,1355D,1355E,1355F, through1355N-1, and1355N), which provides for a unified shader core architecture in which a single core or type of core can execute all types of programmable shader code, including shader program code to implement vertex shaders, fragment shaders, and/or compute shaders. The exact number of shader cores present can vary among embodiments and implementations. Additionally, graphics processor1340includes an inter-core task manager1345, which acts as a thread dispatcher to dispatch execution threads to one or more shader cores1355A-1355N and a tiling unit1358to accelerate tiling operations for tile-based rendering, in which rendering operations for a scene are subdivided in image space, for example to exploit local spatial coherence within a scene or to optimize use of internal caches. FIGS.22A-22Billustrate additional exemplary graphics processor logic according to embodiments described herein.FIG.22Aillustrates a graphics core1400that may be included within the graphics processor1210ofFIG.20, and may be a unified shader core1355A-1355N as inFIG.21B.FIG.22Billustrates an additional general-purpose graphics processing unit1430, which is a highly-parallel general-purpose graphics processing unit suitable for deployment on a multi-chip module. As shown inFIG.22A, the graphics core1400includes a shared instruction cache1402, a texture unit1418, and a cache/shared memory1420that are common to the execution resources within the graphics core1400. The graphics core1400can include multiple slices1401A-1401N or partitions for each core, and a graphics processor can include multiple instances of the graphics core1400. The slices1401A-1401N can include support logic including a local instruction cache1404A-1404N, a thread scheduler1406A-1406N, a thread dispatcher1408A-1408N, and a set of registers1410A-1410N.
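Tile-based rendering, as accelerated by the tiling unit1358described above, subdivides rendering work in image space. The TypeScript sketch below shows only the basic binning step; the 32-pixel tile size and the triangle bounding-box representation are illustrative assumptions rather than details of any particular embodiment.

```typescript
// Illustrative sketch of image-space tiling: bin each primitive's screen-space
// bounding box into fixed-size tiles so every tile can later be rendered from
// a small local list of primitives.
interface Triangle { id: number; xMin: number; yMin: number; xMax: number; yMax: number; }

function binTriangles(tris: Triangle[], width: number, height: number, tile = 32): number[][] {
  const cols = Math.ceil(width / tile);
  const rows = Math.ceil(height / tile);
  const bins: number[][] = Array.from({ length: cols * rows }, () => []);
  for (const t of tris) {
    const c0 = Math.max(0, Math.floor(t.xMin / tile));
    const c1 = Math.min(cols - 1, Math.floor(t.xMax / tile));
    const r0 = Math.max(0, Math.floor(t.yMin / tile));
    const r1 = Math.min(rows - 1, Math.floor(t.yMax / tile));
    for (let r = r0; r <= r1; r++) {
      for (let c = c0; c <= c1; c++) {
        bins[r * cols + c].push(t.id);  // primitive overlaps this tile
      }
    }
  }
  return bins;
}
```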
To perform logic operations, the slices1401A-1401N can include a set of additional function units (AFUs1412A-1412N), floating-point units (FPU1414A-1414N), integer arithmetic logic units (ALUs1416A-1416N), address computational units (ACU1413A-1413N), double-precision floating-point units (DPFPU1415A-1415N), and matrix processing units (MPU1417A-1417N). Some of the computational units operate at a specific precision. For example, the FPUs1414A-1414N can perform single-precision (32-bit) and half-precision (16-bit) floating point operations, while the DPFPUs1415A-1415N perform double precision (64-bit) floating point operations. The ALUs1416A-1416N can perform variable precision integer operations at 8-bit, 16-bit, and 32-bit precision, and can be configured for mixed precision operations. The MPUs1417A-1417N can also be configured for mixed precision matrix operations, including half-precision floating point and 8-bit integer operations. The MPUs1417A-1417N can perform a variety of matrix operations to accelerate machine learning application frameworks, including enabling support for accelerated general matrix to matrix multiplication (GEMM). The AFUs1412A-1412N can perform additional logic operations not supported by the floating-point or integer units, including trigonometric operations (e.g., Sine, Cosine, etc.). As shown inFIG.22B, a general-purpose processing unit (GPGPU)1430can be configured to enable highly-parallel compute operations to be performed by an array of graphics processing units. Additionally, the GPGPU1430can be linked directly to other instances of the GPGPU to create a multi-GPU cluster to improve training speed for particularly deep neural networks. The GPGPU1430includes a host interface1432to enable a connection with a host processor. In one embodiment the host interface1432is a PCI Express interface. However, the host interface can also be a vendor specific communications interface or communications fabric. The GPGPU1430receives commands from the host processor and uses a global scheduler1434to distribute execution threads associated with those commands to a set of compute clusters1436A-1436H. The compute clusters1436A-1436H share a cache memory1438. The cache memory1438can serve as a higher-level cache for cache memories within the compute clusters1436A-1436H. The GPGPU1430includes memory1444A-1444B coupled with the compute clusters1436A-1436H via a set of memory controllers1442A-1442B. In various embodiments, the memory1434A-1434B can include various types of memory devices including dynamic random access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory. In one embodiment the compute clusters1436A-1436H each include a set of graphics cores, such as the graphics core1400ofFIG.22A, which can include multiple types of integer and floating point logic units that can perform computational operations at a range of precisions, including precisions suited for machine learning computations. For example, and in one embodiment, at least a subset of the floating point units in each of the compute clusters1436A-1436H can be configured to perform 16-bit or 32-bit floating point operations, while a different subset of the floating point units can be configured to perform 64-bit floating point operations. Multiple instances of the GPGPU1430can be configured to operate as a compute cluster.
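The mixed-precision integer paths noted above (for example, the 8-bit operations supported by the MPUs1417A-1417N) follow a common pattern: narrow inputs are multiplied and accumulated at a wider precision. The following is a minimal, purely software sketch of that pattern, not hardware code.

```typescript
// Purely software sketch of mixed precision: 8-bit integer inputs are
// multiplied and the running sum is kept at 32-bit integer precision,
// the pattern behind 8-bit dot-product style operations.
function dotProductInt8(a: Int8Array, b: Int8Array): number {
  if (a.length !== b.length) throw new Error("length mismatch");
  let acc = 0;
  for (let i = 0; i < a.length; i++) {
    acc = (acc + a[i] * b[i]) | 0;     // force the accumulator into 32-bit range
  }
  return acc;
}

// Example: accumulate two small int8 vectors.
const sum = dotProductInt8(new Int8Array([127, -128, 5]), new Int8Array([2, 3, 4]));
console.log(sum);  // -110
```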
The communication mechanism used by the compute cluster for synchronization and data exchange varies across embodiments. In one embodiment the multiple instances of the GPGPU1430communicate over the host interface1432. In one embodiment the GPGPU1430includes an I/O hub1439that couples the GPGPU1430with a GPU link1440that enables a direct connection to other instances of the GPGPU. In one embodiment the GPU link1440is coupled to a dedicated GPU-to-GPU bridge that enables communication and synchronization between multiple instances of the GPGPU1430. In one embodiment the GPU link1440couples with a high speed interconnect to transmit and receive data to other GPGPUs or parallel processors. In one embodiment the multiple instances of the GPGPU1430are located in separate data processing systems and communicate via a network device that is accessible via the host interface1432. In one embodiment the GPU link1440can be configured to enable a connection to a host processor in addition to or as an alternative to the host interface1432. While the illustrated configuration of the GPGPU1430can be configured to train neural networks, one embodiment provides alternate configuration of the GPGPU1430that can be configured for deployment within a high performance or low power inferencing platform. In an inferencing configuration the GPGPU1430includes fewer of the compute clusters1436A-1436H relative to the training configuration. Additionally, the memory technology associated with the memory1434A-1434B may differ between inferencing and training configurations, with higher bandwidth memory technologies devoted to training configurations. In one embodiment the inferencing configuration of the GPGPU1430can support inferencing specific instructions. For example, an inferencing configuration can provide support for one or more 8-bit integer dot product instructions, which are commonly used during inferencing operations for deployed neural networks. Advantageously, any of the above systems, processors, graphics processors, apparatuses, and/or methods may be integrated or configured with any of the various embodiments described herein (e.g., or portions thereof), including, for example, those described in the below Additional Notes and Examples. In one example, the processor(s)102(FIG.9) and/or the graphics processors108(FIG.9) receive image data from multiple cameras144(FIG.9), and implement one or more aspects of the method36(FIG.2A), the method44(FIG.2B), and/or the method52(FIG.3), already discussed, to achieve greater accuracy, greater cost effectiveness and/or an improved user experience. Additionally, the logic1172(FIG.19B) and/or the logic1174(FIG.19B) may implement one or more aspects of the method36(FIG.2A), the method44(FIG.2B), and/or the method52(FIG.3). Moreover, in some embodiments, the graphics processor instruction formats700may be adapted for use in the system150(FIG.8), with suitable instructions to implement one or more aspects of those embodiments. The technology described herein therefore enables the automated tracking of projectiles in 2D video data from a plurality of cameras. 
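As a rough illustration of the tracking approach summarized above (and elaborated in the Examples that follow), the sketch below selects the player nearest the projectile and substitutes that player's tracked location for the projectile while the projectile is occluded. All types, names, and the distance threshold are assumptions for illustration only, not a definitive implementation.

```typescript
// Illustrative sketch of the tracking idea: when the projectile is visible and
// close to a player, remember that player; when the projectile is occluded,
// use the remembered player's tracked location as the projectile estimate.
interface Point { x: number; y: number; }
interface FrameObservation { players: Map<string, Point>; projectile?: Point; }

function nearestPlayer(players: Map<string, Point>, target: Point): [string, number] {
  let bestId = "";
  let bestDist = Infinity;
  for (const [id, p] of players) {
    const d = Math.hypot(p.x - target.x, p.y - target.y);
    if (d < bestDist) { bestDist = d; bestId = id; }
  }
  return [bestId, bestDist];
}

function estimateProjectilePath(frames: FrameObservation[], distanceThreshold = 50): Point[] {
  const estimates: Point[] = [];
  let carrierId: string | null = null;
  for (const frame of frames) {
    if (frame.projectile) {
      estimates.push(frame.projectile);
      if (frame.players.size > 0) {
        const [id, dist] = nearestPlayer(frame.players, frame.projectile);
        carrierId = dist < distanceThreshold ? id : null;  // select the nearest player
      }
    } else if (carrierId && frame.players.has(carrierId)) {
      estimates.push(frame.players.get(carrierId)!);        // projectile occluded
    }
  }
  return estimates;
}
```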
Additional Notes and Examples Example 1 includes a performance-enhanced computing system comprising a plurality of cameras to generate two-dimensional (2D) video data, a processor coupled to the plurality of cameras, and a memory coupled to the processor, the memory including a set of instructions, which when executed by the processor, cause the computing system to select a player from a plurality of players based on an automated analysis of the 2D video data, wherein the selected player is to be nearest to a projectile depicted in the 2D video data, track a location of the selected player over a subsequent plurality of frames in the 2D video data, and estimate a location of the projectile based on the location of the selected player over the subsequent plurality of frames. Example 2 includes the computing system of Example 1, wherein the projectile is to be occluded from view in one or more of the subsequent plurality of frames. Example 3 includes the computing system of Example 1, wherein the instructions, when executed, further cause the computing system to conduct the automated analysis over a buffered initial plurality of frames occurring before the subsequent plurality of frames in the 2D video data. Example 4 includes the computing system of Example 1, wherein the instructions, when executed, further cause the computing system to conduct the automated analysis in response to a height of the projectile being less than a height threshold. Example 5 includes the computing system of Example 1, wherein instructions, when executed, further cause the computing system to conduct the automated analysis in response to a distance between the projectile and one or more of the plurality of players being less than a distance threshold. Example 6 includes the computing system of any one of Examples 1 to 5, wherein the instructions, when executed, further cause the computing system to estimate, via an artificial neural network, the location of the projectile and locations of a plurality of body parts in a bounding box associated with the location of the selected player. Example 7 includes a semiconductor apparatus comprising one or more substrates, and logic coupled to the one or more substrates, wherein the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates to select a player from a plurality of players based on an automated analysis of two-dimensional (2D) video data associated with a plurality of cameras, wherein the selected player is to be nearest to a projectile depicted in the 2D video data, track a location of the selected player over a subsequent plurality of frames in the 2D video data, and estimate a location of the projectile based on the location of the selected player over the subsequent plurality of frames. Example 8 includes the semiconductor apparatus of Example 7, wherein the projectile is to be occluded from view in one or more of the subsequent plurality of frames. Example 9 includes the semiconductor apparatus of Example 7, wherein the logic coupled to the one or more substrates is to conduct the automated analysis over a buffered initial plurality of frames occurring before the subsequent plurality of frames in the 2D video data. Example 10 includes the semiconductor apparatus of Example 7, wherein the logic coupled to the one or more substrates is to conduct the automated analysis in response to a height of the projectile being less than a height threshold. 
Example 11 includes the semiconductor apparatus of Example 7, wherein the logic coupled to the one or more substrates is to conduct the automated analysis in response to a distance between the projectile and one or more of the plurality of players being less than a distance threshold. Example 12 includes the semiconductor apparatus of any one of Examples 7 to 11, wherein the logic coupled to the one or more substrates is to estimate, via an artificial neural network, the location of the projectile and locations of a plurality of body parts in a bounding box associated with the location of the selected player. Example 13 includes at least one computer readable storage medium comprising a set of instructions, which when executed by a computing system, cause the computing system to select a player from a plurality of players based on an automated analysis of two-dimensional (2D) video data associated with a plurality of cameras, wherein the selected player is to be nearest to a projectile depicted in the 2D video data, track a location of the selected player over a subsequent plurality of frames in the 2D video data, and estimate a location of the projectile based on the location of the selected player over the subsequent plurality of frames. Example 14 includes the at least one computer readable storage medium of Example 13, wherein the projectile is to be occluded from view in one or more of the subsequent plurality of frames. Example 15 includes the at least one computer readable storage medium of Example 13, wherein the instructions, when executed, further cause the computing system to conduct the automated analysis over a buffered initial plurality of frames occurring before the subsequent plurality of frames in the 2D video data. Example 16 includes the at least one computer readable storage medium of Example 13, wherein the instructions, when executed, further cause the computing system to conduct the automated analysis in response to a height of the projectile being less than a height threshold. Example 17 includes the at least one computer readable storage medium of Example 13, wherein instructions, when executed, further cause the computing system to conduct the automated analysis in response to a distance between the projectile and one or more of the plurality of players being less than a distance threshold. Example 18 includes the at least one computer readable storage medium of any one of Examples 13 to 17, wherein the instructions, when executed, further cause the computing system to estimate, via an artificial neural network, the location of the projectile and locations of a plurality of body parts in a bounding box associated with the location of the selected player. Example 19 includes a method of operating a performance-enhanced computing system, comprising selecting a player from a plurality of players based on an automated analysis of two-dimensional (2D) video data associated with a plurality of cameras, wherein the selected player is nearest to a projectile depicted in the 2D video data, tracking a location of the selected player over a subsequent plurality of frames in the 2D video data, and estimating a location of the projectile based on the location of the selected player over the subsequent plurality of frames. Example 20 includes the method of Example 19, wherein the projectile is occluded from view in one or more of the subsequent plurality of frames. 
Example 21 includes the method of Example 19, further including conducting the automated analysis over a buffered initial plurality of frames occurring before the subsequent plurality of frames in the 2D video data. Example 22 includes the method of Example 19, further including conducting the automated analysis in response to a height of the projectile being less than a height threshold. Example 23 includes the method of Example 19, further including conducting the automated analysis in response to a distance between the projectile and one or more of the plurality of players being less than a distance threshold. Example 24 includes the method of any one of Examples 19 to 23, further including estimating, via an artificial neural network, the location of the projectile and locations of a plurality of body parts in a bounding box associated with the location of the selected player. Example 25 includes means for performing the method of any one of Examples 19 to 24. Embodiments are applicable for use with all types of semiconductor integrated circuit (“IC”) chips. Examples of these IC chips include but are not limited to processors, controllers, chipset components, programmable logic arrays (PLAs), memory chips, network chips, systems on chip (SoCs), SSD/NAND controller ASICs, and the like. In addition, in some of the drawings, signal conductor lines are represented with lines. Some may be different, to indicate more constituent signal paths, have a number label, to indicate a number of constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. This, however, should not be construed in a limiting manner. Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit. Any represented signal lines, whether or not having additional information, may actually comprise one or more signals that may travel in multiple directions and may be implemented with any suitable type of signal scheme, e.g., digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines. Example sizes/models/values/ranges may have been given, although embodiments are not limited to the same. As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured. In addition, well known power/ground connections to IC chips and other components may or may not be shown within the figures, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the embodiments. Further, arrangements may be shown in block diagram form in order to avoid obscuring embodiments, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the platform within which the embodiment is to be implemented, i.e., such specifics should be well within purview of one skilled in the art. Where specific details (e.g., circuits) are set forth in order to describe example embodiments, it should be apparent to one skilled in the art that embodiments can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting. 
The term “coupled” may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical or other connections. In addition, the terms “first”, “second”, etc. may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated. As used in this application and in the claims, a list of items joined by the term “one or more of” may mean any combination of the listed terms. For example, the phrase “one or more of A, B, and C” and the phrase “one or more of A, B, or C” both may mean A; B; C; A and B; A and C; B and C; or A, B and C. Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments can be implemented in a variety of forms. Therefore, while the embodiments have been described in connection with particular examples thereof, the true scope of the embodiments should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims. | 125,002 |
11861908 | DETAILED DESCRIPTION In accordance with various embodiments of the disclosed subject matter, mechanisms (which can include methods, systems, and media) for adaptive presentation of a video content item based on an area of interest are provided. Generally speaking, the mechanisms can alter the presentation of a video content item such that an area of interest within the video content item is selected for presentation. For example, in some embodiments, the mechanisms can determine that a video content item being presented as part of a webpage is partially out of view, such as when a user navigates away from the video content item by scrolling through a web page and causing a portion of the video content item to be partially out of view. In another example, in some embodiments, the mechanisms can determine that a video content item being presented as part of a webpage is partially obstructed, such as when another window, overlay, or other suitable object is positioned over a portion of the video content item. In response, the mechanisms can determine the size of the available area of the video content item (e.g., the dimensions of the portion of the video content item that remains in view), and cause the area of interest to be displayed within that available area. In continuing this example, the mechanisms can cause the area of interest that is selected for presentation to be centered within the available area. It should be noted that, in some embodiments, the area of interest can be zoomed in or otherwise increased in size to generate a modified video content item that is focused on the area of interest. Additionally, in some embodiments, the mechanisms can remove or otherwise inhibit the presentation of the portions of the video content item outside of the area of interest that does not fit within the available area (e.g., by cropping the frames of the video content item). In some embodiments, the mechanisms can alter the presentation of a video content item to accommodate a video viewport or any other suitable media content space designated for presenting the video content item, such as a relatively small video viewport in comparison to a default size of an originally presented viewport. For example, the mechanisms can determine that the dimensions of a video viewport that are available for presenting a video content item can dynamically change, such as when a user navigates away from the video content item and the video viewport reduces in size (e.g., by scrolling through the web page, which causes a portion of the video content item within the video viewport to be partially out of view). In response, the mechanisms can determine the current dimensions of the video window (e.g., the dimensions of the portion of the video content item within the video window that remains in view on the web page) and can cause a determined area of interest from the video content item to be displayed within the current dimensions of the remaining video window. In continuing this example, the mechanisms can cause the area of interest that is selected for presentation to be centered within the current dimensions of the remaining video window. It should be noted that, in some embodiments, the area of interest can be zoomed in or otherwise increased in size to generate a video content item that is focused on the area of interest and that is presented within the current dimensions of the remaining video window. 
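A minimal sketch of the behavior described above, assuming a browser environment, measures how much of the video element remains in view and offsets the underlying frame so the area of interest is centered in a clipping container. The element handles and the rectangle representation are illustrative assumptions.

```typescript
// Illustrative sketch: measure the portion of a video element still in view,
// then translate the video inside a clipping container so a rectangular area
// of interest sits at the container's center; overflow is cropped.
interface Rect { left: number; top: number; width: number; height: number; }

function visiblePortion(el: HTMLElement): Rect {
  const r = el.getBoundingClientRect();
  const left = Math.max(r.left, 0);
  const top = Math.max(r.top, 0);
  const right = Math.min(r.right, window.innerWidth);
  const bottom = Math.min(r.bottom, window.innerHeight);
  return { left, top, width: Math.max(0, right - left), height: Math.max(0, bottom - top) };
}

function centerAreaOfInterest(container: HTMLElement, video: HTMLVideoElement, aoi: Rect): void {
  // aoi is expressed in the video's own pixel coordinates.
  const dx = container.clientWidth / 2 - (aoi.left + aoi.width / 2);
  const dy = container.clientHeight / 2 - (aoi.top + aoi.height / 2);
  container.style.overflow = "hidden";                    // crop anything that spills out
  video.style.transform = `translate(${dx}px, ${dy}px)`;  // shift so the AOI is centered
}
```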
It should also be noted that, in some embodiments, multiple areas of interest can be selected for a video content item, where a first area of interest is selected from a first video frame (or a set of video frames) of the video content item and a second area of interest is selected from a second video frame (or a set of video frames) of the video content item. It should further be noted that, in some embodiments, the mechanisms can position the video content item within the video viewport such that the area of interest can be presented within the video viewport and portions of the video content item outside of the area of interest can be removed or otherwise inhibited from being presented within the video viewport. This can, for example, cause the presentation of the area of interest to be maximized or focused (e.g., by zooming into or increasing the scale of the area of interest) and cropping out the areas of the video content item that would not fit within the viewport. In some embodiments, the mechanisms can alter the presentation of the video content item by selecting a version of the video content item that is deemed appropriate for the dimensions of the currently provided video viewport (e.g., a version that fits within a video viewport of particular dimensions). For example, rather than selecting a version of the video content item that would fit completely within a small video viewport, that would have a low resolution, and that would leave black regions within the video viewport, the mechanisms can select a version of the video content item containing the region of interest having a higher resolution, where the area of interest can fit completely or nearly completely within the dimensions of the video viewport. In continuing this example, these mechanisms can also remove portions of the video content item outside of the area of interest such that there will be no black spaces or black regions in the video viewport (e.g., by cropping). It should be noted that, in some embodiments, the mechanisms can determine an area of interest for multiple frames of a video content item and dynamically modify the presentation of the video content item (e.g., each time the area of interest changes for a video content item). For example, the mechanisms can re-center and/or re-crop the video content item at each frame of the video content item having a different area of interest. In some embodiments, the mechanisms can cause a video content item being presented on a web page to continue being presented in response to the user navigating to another web page. For example, the mechanisms can cause an overlay element that includes the video content item to be presented over the original web page and, in response to determining that the other web page also includes the same video content item, the mechanisms can continue presenting the video content item while the content of the other web page is rendered around the video content item. Turning toFIG.1A,FIG.1Ashows an illustrative example of a presentation of a web page that includes a video content item that includes an area of interest in accordance with some embodiments of the disclosed subject matter. As shown inFIG.1A, in some embodiments, a web browser102can present a web page108that includes a video player104presenting a video content item110that includes an area of interest106. In some embodiments, web browser102can be any suitable web browser application. 
For example, web browser102can be an independent web browser application (e.g., an application configured particularly for web browsing), a dedicated web browser application (e.g., a web browser application for accessing specific web sites), an in-app web browser application (e.g., a web browser application that is integrated with the functionality of another application), or any other application suitable for presenting web content. In some embodiments, video player104can be any suitable video player. For example, video player104can be a HyperText Markup Language (HTML) video player, a video player included in web browser102, a video player plugin for web browser102, a native video player, or any other suitable video player. In some embodiments, video player104can present video content item110in one or more presentation modes. For example, as illustrated inFIGS.1A and1B, video player104can present video content item110in a static position mode (e.g., a mode where the video player remains in the same position relative to the other content on the web page as the user navigates through the web page). As another example, as illustrated inFIG.1C, video player104can present video content item110in a scrolling mode (e.g., a mode where the video player has a viewport having particular dimensions and remains in a particular position of the web browser as the user navigates through the web page). A viewport, such as the viewport of video player104shown inFIG.1C, can be relatively small in size when compared with the dimensions of web browser102. As yet another example, video player104can present video content item110in a full screen mode (e.g., a mode where the video player has a viewport that is relatively large in size such that the viewport occupies all or substantially all of the web browser). As a further example, video player104can present video content item110in a thumbnail mode (e.g., a mode where the video player has a viewport that is relatively small in size when compared with the dimensions of the web browser and remains in the same position in the web browser even as the web page is navigated). As a more particular example, video player104can present video content item110in a thumbnail mode when presenting video content item110in a list of search results (e.g., provided in a response to a search query). In some embodiments, video player104can be configured such that, when video player104is in a thumbnail mode or in a scrolling mode, a user selection of video player104(e.g., a mouse click within video player104) can cause the video player to enter a full screen mode (e.g., a full screen mode as illustrated inFIG.1D). In some embodiments, upon receiving such a user selection, video player104can expand across web page108and display progressively more of previously cropped areas of video content item110until video content item110is displayed in full. In some embodiments, video content item110can include an area of interest106. It should be noted that, although area of interest106is presented with a dashed boundary line inFIGS.1A-1Dto show an area of interest within a video content item, this is merely illustrative and area of interest106may be invisible and may not include a boundary line or a boundary box. In some embodiments, area of interest106can be any suitable area of interest in connection with video content item110. For example, area of interest106can be an area of interest as discussed below in connection withFIGS.2-4. 
As shown inFIGS.1A and1B, in some embodiments, the presentation of video content item110can be partially obstructed and/or cut off within web browser102due to, for example, the web page being scrolled down such that a top area of video player104has been cut off and is out of view. In some embodiments, in response to video content item110being partially obstructed, cut off, or otherwise out of view, the mechanisms described herein can cause the presentation of video content item110to be altered based on area of interest106. For example, as illustrated inFIG.1B, the mechanisms can cause the presentation of video content item110to be centered on area of interest106and/or shifted such that area of interest106is presented within the unobstructed portion of video player104. In such an example, as also illustrated inFIG.1B, areas at the top and bottom of video content item110can be cropped (e.g., as described below in connection withFIG.2) such that area of interest106is fully displayed within the unobstructed portion of video player104. Additionally or alternatively, in some embodiments, if area of interest106is too large to be displayed fully, the mechanisms can cause a top, bottom, left, and/or right portions of area of interest106to be cropped or otherwise removed (e.g., as described below in connection withFIG.2). For example, as shown inFIG.1C, video player104can present video content item110in a relatively small viewport where area of interest106is larger in size than the size of the viewport. In such an example, as illustrated inFIG.1C, the mechanisms can cause a left area and a right area of video content item110to be cropped or otherwise removed from being presented. In some embodiments, the mechanisms can use any suitable process for causing the presentation of video content item110to be altered based on area of interest106. For example, the mechanisms can use process200as described below in connection withFIG.2, process300as described below in connection withFIG.3, process400as described below in connection withFIG.4, and/or any other suitable process or any suitable combination thereof. FIG.2shows an illustrative example200of a process for altering presentation of a video content item based on an area of interest in accordance with some embodiments of the disclosed subject matter. At202, in some embodiments, process200can transmit a request for a video content item. In some embodiments, process200can transmit the request for a video content item using any suitable technique or combination of techniques. For example, process200can transmit the request for the video content item to a particular URL that is included in a received web page. In another example, process200can transmit a search query for a video content item. At204, in some embodiments, process200can receive area of interest information for the video content item. In some embodiments, the area of interest information can be associated with any suitable area of interest. For example, the area of interest information can be associated with an area of interest as described above in connection withFIGS.1A,1B,1C, and1D. In some embodiments, process200can receive area of interest information from any suitable source. For example, process200can receive area of interest information in connection with the requested video content item. 
As a more particular example, the requested video content item can be provided to process200with a video tag that includes the area of interest information (e.g., as described below in connection withFIG.4). As another example, process200can receive area of interest information in connection with a web page that includes the video content item. As a more particular example, the web page can include a video tag that includes the area of interest information (e.g., as described below in connection withFIG.4). As yet another example, process200can request the area of interest information from a database of area of interest information, where a query with a video content item identifier can be transmitted to a database of area of interest information and responsive area of interest information can be received from the database of area of interest information. In some embodiments, the area of interest information can include any suitable information related to the size and/or shape of the area of interest. For example, the area of interest information can include proportions associated with the video content item, such as a height proportion and/or a width proportion that define the size of an area of interest. As another example, the area of interest information can include a number of pixels that define the size of an area of interest. As yet another example, the area of interest information can include identifiers associated with pixels within and/or surrounding the area of interest, coordinates associated with the area of interest, any other suitable information related to the size and/or shape of the area of interest, or any suitable combination thereof. As still another example, the area of interest information can include information related to a shape for the area of interest. As a more particular example, the information can include the type of shape (e.g., square, rectangle, circle, ellipse, and/or any other suitable shape) and information about the size and/or character of the shape (e.g., a radius in connection with a circle, and/or foci in connection with an ellipse). In some embodiments, the area of interest information can include information related to the location of the area of interest. For example, the area of interest information can include a center point of the area of interest, a vertical offset of the area of interest, a horizontal offset of the area of interest, coordinates of the area of interest, any other information related to the location of the area of interest, or any suitable combination thereof. In some embodiments, the area of interest information can include information related to a minimum size of a video player viewport required to present the video content item. For example, the area of interest information can include a minimum proportion of the height, width, and/or area of the video content item required. In some embodiments, in response to the video player viewport being below such a minimum, process200can take any suitable action (e.g., an action as described below in connection with210ofFIG.2). In some embodiments, the area of interest information can include identification information related to an entity that appears within the video content item. For example, the area of interest information can include identifying information relating to a person. 
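One possible, purely illustrative shape for the area of interest information enumerated above is the following TypeScript record; the description does not prescribe a schema, so every field name here is an assumption.

```typescript
// Purely illustrative record for area of interest information; field names
// and units are assumptions, not a prescribed format.
type AoiShape =
  | { kind: "rect"; widthProportion: number; heightProportion: number }
  | { kind: "circle"; radiusProportion: number }
  | { kind: "ellipse"; radiusXProportion: number; radiusYProportion: number };

interface AreaOfInterestInfo {
  centerX: number;                       // horizontal center, as a proportion of frame width (0..1)
  centerY: number;                       // vertical center, as a proportion of frame height (0..1)
  horizontalOffsetPx?: number;           // optional pixel offsets from a reference corner
  verticalOffsetPx?: number;
  shape: AoiShape;                       // geometry of the area of interest
  minViewportWidthProportion?: number;   // minimum viewport needed to present the item
  minViewportHeightProportion?: number;
  entityId?: string;                     // identifier of a person or object tied to this area
}
```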
In continuing this example, the identifying information can be transmitted to an image recognition system that determines an area of interest within frames of the video content item in which the person appears. In another example, each video content item can have different areas of interest associated with different entities and, in response to determining which of the different entities that a viewer of the video content item may be interested in, the identifying information can be used to select one of the different areas of interest to present. In some embodiments, the area of interest information can include any suitable information related to an area of interest within the video content item. For example, the area of interest information can include area of interest parameters as described below in connection withFIG.3. At206, in some embodiments, process200can cause the video content item to be presented in a viewport of a video player. In some embodiments, process200can cause the video content item to be presented in any suitable video viewport. For example, process200can cause the video content item to be presented in a video viewport of a video player (e.g., a video player as described above in connection withFIGS.1A,1B,1C, and1D). At208, in some embodiments, process200can determine the current size of the video viewport that is being presented. In some embodiments, process200can determine the current size of the video viewport using any suitable technique or combination of techniques. For example, in a situation where the video viewport is being presented on a web page using a web browser, process200can query the web browser for the current size of the video viewport. As another example, in a situation where the video viewport is being presented using a native video player, process200can determine the size of the video viewport based on the pixels being used to present the video viewport. As yet another example, in a situation where process200is being executed by a web browser, process200can determine the current size of the video viewport based on the web browser's rendering of the web page and the location of web browser's user interface. In some embodiments, process200can determine the current size of the video viewport at predetermined intervals. For example, process200can transmit a query to the video player for the current size of the viewport at every second. In another example, process200can determine the current size of the video viewport based on the information received from a tag executing on a web browser (e.g., where the viewport information is received every one-tenth of a second). At210, process200can cause the presentation of the video content item to be altered based on the area of interest information and the current size of the viewport. In some embodiments, process200can cause the presentation of the video content item to be altered using any suitable technique or combination of techniques. For example, process200can transmit and/or pass the received area of interest information to a video player being used to present the video content item (e.g., a video player as described below in connection withFIGS.1A,1B,1C, and1D). In some embodiments, process200can cause the presentation to be altered by causing the presentation of the video content item to be shifted based on the area of interest information. For example, the presentation of the video content item can be shifted as described above in connection withFIGS.1A,1B,1C, and1D. 
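The periodic re-measurement described above (for example, querying the viewport every second, or every tenth of a second when driven by a tag) can be sketched as follows; the callback signature and default interval are assumptions.

```typescript
// Illustrative sketch of re-measuring the visible video viewport on a fixed
// interval and reporting only actual changes in its dimensions.
function watchViewportSize(
  el: HTMLElement,
  onChange: (width: number, height: number) => void,
  intervalMs = 1000
): () => void {
  let lastW = -1;
  let lastH = -1;
  const timer = window.setInterval(() => {
    const r = el.getBoundingClientRect();
    const w = Math.max(0, Math.round(Math.min(r.right, window.innerWidth) - Math.max(r.left, 0)));
    const h = Math.max(0, Math.round(Math.min(r.bottom, window.innerHeight) - Math.max(r.top, 0)));
    if (w !== lastW || h !== lastH) {
      lastW = w;
      lastH = h;
      onChange(w, h);                    // viewport dimensions changed
    }
  }, intervalMs);
  return () => window.clearInterval(timer);  // caller can stop watching
}
```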
As another example, process200can cause the video player to shift the presentation of the video content item such that the area of interest is fully displayed within the viewport. In some embodiments, process200can cause the presentation of the video content item to be shifted within the viewport of the video player based on any suitable criteria. For example, process200can cause the presentation of the video content item to be shifted by a minimum amount required to fully present the area of interest within the viewport. As another example, process200can cause the presentation of the video content item to be shifted based on a center point of the area of interest (e.g., by centering the center point within the viewport). In some embodiments, process200can cause the presentation to be altered by causing areas of the video content item to be cropped based on the area of interest in relation to the current size of the viewport. For example, process200can cause the video player to crop areas of the video content item as described above in connection withFIGS.1A,1B,1C, and1D. In some embodiments, process200can cause areas of the video content item to be cropped or otherwise removed based on any suitable criteria. For example, process200can cause areas of the video content item to be removed based on the shift of the presentation within the viewport (e.g., by cropping the area of the video content item that would lie outside the viewport following the shift). As another example, process200can cause the areas of the video content item to be cropped such that no blank space is produced in the viewport. As yet another example, process200can cause areas of the video content item to be cropped so as to maximize the amount of the area of interest that is presented. As a more particular example, in a situation where one dimension of the viewport is too small to fit a corresponding dimension of the area of interest, but another dimension of the viewport is larger than the corresponding dimension of the area of interest, the dimension of the area of interest that does not fit can be cropped such that it matches the corresponding dimension of the viewport (e.g., as described above in connection withFIG.1C). In some embodiments, process200can cause the presentation to be altered by transmitting the current size of the viewport to a video server (e.g., a video server as described below in connection withFIG.5). For example, process200can transmit the current size of the viewport to a video server such that the video server transmits an altered version of the video content item to the video player. In such an example, the video server can adapt the video content item to the available video viewport by cropping the video content item, shifting and/or centering the remaining portions of the video content item, altering the video content item as described above in connection withFIGS.1A,1B,1C, and1D, altering the video content item using any other suitable technique, or altering the video content item using any suitable combination thereof. In some embodiments, process200can cause the presentation to be altered by causing a different version of the video content item to be presented.
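The shifting and cropping criteria described above reduce to choosing a source rectangle of the frame: the crop is centered on the area of interest, clamped to the frame so no blank space appears, and trims the area of interest itself only along a dimension the viewport cannot fit. A sketch of that computation follows; units are pixels and all names are assumptions.

```typescript
// Illustrative crop computation: the source rectangle is as large as the
// viewport (but never larger than the frame), centered on the area of
// interest, and clamped so no blank space is produced.
interface Box { x: number; y: number; w: number; h: number; }

function computeCrop(frameW: number, frameH: number, aoi: Box, viewW: number, viewH: number): Box {
  const w = Math.min(viewW, frameW);
  const h = Math.min(viewH, frameH);
  let x = aoi.x + aoi.w / 2 - w / 2;          // center the crop on the AOI midpoint
  let y = aoi.y + aoi.h / 2 - h / 2;
  x = Math.min(Math.max(x, 0), frameW - w);   // clamp inside the frame: no blank space
  y = Math.min(Math.max(y, 0), frameH - h);
  return { x, y, w, h };                       // if the AOI exceeds the view, it is trimmed
}
```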
For example, process200can determine that the current size of the viewport would not be able to present all of the area of interest, and in response to the determination, cause an available smaller version (e.g., a lower definition version) of the video content item to be presented (e.g., by transmitting a request for the smaller version). In such an example, process200can then crop and/or shift the presentation of the smaller version. Additionally or alternatively, process200can determine that the current size of the viewport would be able to present all of the area of interest of an available larger version (e.g., a higher definition version) of the video content item, and in response to the determination, cause the available larger version of the video content item to be presented (e.g., by transmitting a request for the larger version). In some embodiments, process200can cause the presentation to be altered by causing the video player to enter a different presentation mode (e.g., a presentation mode as described above in connection withFIGS.1A,1B,1C, and1D). For example, in a situation where a video content item is being presented as part of a web page and the web page is navigated by scrolling past the video content item, process200can cause the video player to enter a scrolling mode as described above in connection withFIG.1C. Additionally or alternatively, in some embodiments, process200can determine a minimum size of the viewport and, in response to the viewport being reduced to below the minimum size, as a result of the web page being scrolled down, can cause the video player to enter a scrolling mode as described above in connection withFIG.1C. At212, in some embodiments, process200can determine that the viewport of the video player has changed. For example, process200can determine that the dimensions of the viewport have changed. In another example, process200can determine that the presentation mode of the viewport has changed (e.g., from full screen mode to scrolling mode). In some embodiments, process200can determine that the viewport of the video player has changed using any suitable technique or combination of techniques. For example, process200can receive a user request to scroll down on a window that is presenting the viewport (e.g., a window of a web browser that is presenting the viewport). As another example, process200can query the video player to determine that the viewport of the video player has changed. In some embodiments, in response to determining that the viewport of the video player has changed, process200can return to208and re-determine the current size of the viewport of the video player. FIG.3shows an illustrative example300of a process for determining an area of interest in a video content item and causing the video content item to be presented based on the area of interest in accordance with some embodiments of the disclosed subject matter. At302, in some embodiments, process300can determine an area of interest for one or more frames of a video content item. In some embodiments, process300can determine an area of interest by receiving a user input for the area of interest. For example, process300can receive a user selection of a portion of a screen (e.g., via a click, a click-and-drag, or any other suitable user input) that is presenting the video, and determine the area of interest based on the user selection. As another example, process300can receive a user input of area of interest information as discussed above in connection withFIG.2.
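The version selection described above (choosing a smaller or larger version of the video content item based on the current viewport) can be sketched as a simple policy: prefer the largest available version whose area of interest still fits the viewport, and otherwise fall back to the smallest version. The rendition record and the fallback rule below are assumptions.

```typescript
// Illustrative rendition-selection policy; assumes at least one rendition.
interface Rendition { width: number; height: number; aoiWidth: number; aoiHeight: number; url: string; }

function pickRendition(renditions: Rendition[], viewW: number, viewH: number): Rendition {
  const sorted = [...renditions].sort((a, b) => b.width - a.width);  // largest first
  for (const r of sorted) {
    if (r.aoiWidth <= viewW && r.aoiHeight <= viewH) {
      return r;                          // area of interest fits completely in the viewport
    }
  }
  return sorted[sorted.length - 1];      // nothing fits: use the smallest available version
}
```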
As a more particular example, process300can receive a user input of proportions, shapes, coordinates, and/or any other suitable information that can be used to determine the area of interest. In yet another example, when uploading a video content item to a video sharing service, process300can present a user interface for indicating particular portions from one or more video frames of the video content that are deemed important to the uploading user. In some embodiments, process300can determine an area of interest by presenting a user interface for browsing the frames of a video content item that is configured to receive a user input for determining the area of interest. For example, process300can present a user interface that allows a user to draw the area of interest on one or more frames of the video content item. As another example, process300can present a user interface that allows a user to select a shape, size, coordinates, any other suitable information related to an area of interest, or any suitable combination thereof. In some embodiments, process300can determine an area of interest using any suitable detection technique. For example, process300can determine an area of interest using an object recognition technique, a visual scene processing technique, a machine learning technique, and/or any other suitable technique. In some embodiments, process300can use such a technique to find objects of interest within a video content item and use the identified objects of interest to determine an area of interest. For example, process300can use an image recognition technique to identify a basketball in a video of a basketball game, and determine the area of interest based on the location of the basketball in a given frame. As another example, process300can use an image recognition technique to identify one or more faces within a video, and determine an area of interest based on the location of the one or more faces in a given frame of the video. In some embodiments, process300can determine an area of interest based on gaze tracking information. For example, in response to receiving affirmative authorization from a user of a user device to track the user's gaze using an imaging device associated with the user device, process300can use a gaze tracking technique to determine which areas of a video the user is focused on while the video is being presented, and determine an area of interest for one or more frames of the video based on the determined areas that the user was focused on. At304, in some embodiments, process300can generate tuples based on the areas of interest and the one or more frames of the video content item. In some embodiments, process300can generate tuples that include any suitable information related to the one or more frames of the video content item that correspond to an area of interest. For example, the tuples can include an identifier for each of the one or more frames that correspond to an area of interest determined at302. As another example, the tuples can include a time of the video that corresponds to each of the one or more frames that correspond to an area of interest determined at302. In some embodiments, process300can generate tuples that include any suitable information related to an area of interest determined at302. For example, process300can include area of interest information as described above in connection withFIG.2.
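A minimal sketch of the tuple generation at304follows, assuming each tuple pairs a frame index and time with a rectangular area of interest and keeping only the key frames at which the area of interest changes; the field names are assumptions.

```typescript
// Illustrative tuple generation: pair each frame with its area of interest,
// then keep only the key frames where the area of interest actually changes.
interface AoiRect { x: number; y: number; w: number; h: number; }
interface AoiTuple { frameIndex: number; timeSec: number; aoi: AoiRect; }

function keyFrameTuples(perFrame: AoiTuple[]): AoiTuple[] {
  const keys: AoiTuple[] = [];
  for (const t of perFrame) {
    const prev = keys[keys.length - 1];
    const changed = !prev ||
      prev.aoi.x !== t.aoi.x || prev.aoi.y !== t.aoi.y ||
      prev.aoi.w !== t.aoi.w || prev.aoi.h !== t.aoi.h;
    if (changed) keys.push(t);           // record only frames where the AOI changes
  }
  return keys;
}
```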
In some embodiments, each tuple can include information related to one of the frames that correspond to an area of interest and area of interest information associated with the frame. In some embodiments, each tuple can correspond to a key frame of the video content item. For example, each tuple can correspond to a frame in which the determined area of interest changes. As a more particular example, process 300 can determine an area of interest for a frame F_x that applies to each consecutive frame F_{x+n}, until a frame F_{x+n+1}, wherein frame F_{x+n+1} has a different determined area of interest from frame F_x and is a key frame. At 306, in some embodiments, process 300 can generate a video tag based on the generated tuples. In some embodiments, process 300 can generate a video tag that includes any suitable information related to the tuples. For example, process 300 can generate a video tag that includes each generated tuple. As another example, process 300 can generate a video tag that includes a URL where the tuples can be accessed. As another example, process 300 can generate a video tag that includes each generated tuple that corresponds to a key frame (e.g., a key frame as discussed above in connection with 304). At 308, in some embodiments, process 300 can associate the generated video tag with the video content item. In some embodiments, process 300 can associate the generated video tag with the video content item using any suitable technique or combination of techniques. For example, process 300 can include the generated tag in metadata associated with the video content item. As another example, process 300 can embed the generated video tag within the video content item. As yet another example, process 300 can store the generated video tag in a database in connection with an identifier for the video content item. As yet another example, process 300 can embed the generated video tag in a URL associated with the video content item. As still another example, the generated video tag can be included in a web page (e.g., included in the HTML code for the web page) that also includes the video content item. At 310, in some embodiments, process 300 can cause a presentation of the video content item to be altered based on the generated video tag. In some embodiments, process 300 can cause a presentation of the video content item to be altered based on the generated video tag using any suitable technique or combination of techniques. For example, process 300 can cause a presentation of the video content item to be altered using a technique as described above in connection with FIG. 2. In some embodiments, in a situation where the video content item is presented in connection with a web page, process 300 can present the video content item and, in some instances, an adapted or modified video content item by causing the video content item to be presented in an overlay that is positioned over the web page. For example, a web page can include a page layer that includes web page content associated with the web page. In continuing this example, using embedded tags or other suitable instructions, the web page can cause a video content layer to be rendered as an overlay that is positioned over the page layer including the web page content. In continuing this example, as the page layer including the web page content changes, the video content layer can be adapted and/or updated to reflect the changes to the page layer (e.g., due to navigation through the web page content in the page layer).
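The tuple and video tag generation of 304 to 308 could be sketched, for illustration only, as follows. The (frame index, time, area of interest) layout and the JSON tag format are assumptions rather than a format defined by any particular video service.

```python
# Hypothetical sketch of building per-key-frame tuples and a simple video tag.
import json

def build_tuples(aois, fps=30.0):
    """Keep one (frame_index, time_seconds, aoi) tuple per key frame, i.e. per AOI change."""
    tuples = []
    previous = None
    for index, aoi in enumerate(aois):
        if aoi != previous:                      # key frame: the AOI changed
            tuples.append((index, index / fps, aoi))
            previous = aoi
    return tuples

def build_video_tag(video_id, tuples):
    """Serialise the tuples into a tag that could be stored as metadata or in a URL."""
    return json.dumps({"video_id": video_id, "areas_of_interest": tuples})

if __name__ == "__main__":
    per_frame_aois = [(0, 0, 640, 360)] * 90 + [(320, 180, 960, 540)] * 60
    tuples = build_tuples(per_frame_aois)
    print(build_video_tag("abc123", tuples))
```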
In addition, the video content item can be adapted and/or modified in the video content layer based on area of interest information and based on the current parameters of the video viewport. In such an example, process 300 can transmit an altered version of the video content item (e.g., a version of the video content item that has been altered as described above in connection with FIGS. 1A, 1B, 1C, 1D, and 2) to the application being used to present the web page. In another suitable example, a page layer of a web page can include an area specified for presenting the video content item using a video player, such as a video viewport having particular dimensions, and process 300 can cause the application being used to present the web page to render the video content item in an overlay that is positioned over the specified area. In such an example, process 300 can transmit an altered version of the video content item (e.g., a version of the video content item that has been altered as described above in connection with FIGS. 1A, 1B, 1C, 1D, and 2) to the application being used to present the web page. In continuing this example, as the particular dimensions of the video viewport change (e.g., in response to switching to a different presentation mode, in response to the video viewport being navigated such that a portion of the viewport is out of view, etc.), an adapted or modified video content item can be presented in an overlay that is positioned over the currently available video viewport. FIG. 4 shows an illustrative example 400 of a process for continuously presenting a video content item across two webpages in accordance with some embodiments of the disclosed subject matter. At 402, in some embodiments, process 400 can receive a requested first web page that includes a first video identifier. In some embodiments, the first web page can include any suitable first video identifier. For example, the first video identifier can be a URL associated with a video content item, an alphanumeric code associated with the video content item, metadata associated with a video content item, any other suitable identifier, or any suitable combination thereof. In some embodiments, the first web page can be requested and/or received using any suitable application. For example, the first web page can be requested and/or received using a web browser, a native video player, a web-based video player, a social media application, an operating system, an advertising application (e.g., an application for selecting advertisements), any other suitable application, or any suitable combination thereof. At 404, in some embodiments, process 400 can receive a video content item associated with the first video identifier. In some embodiments, process 400 can receive the video content item using any suitable technique or combination of techniques. For example, process 400 can receive the video content item in response to transmitting a request for the video content item to a URL included in the first web page (e.g., as discussed above in connection with FIG. 2). At 406, in some embodiments, process 400 can receive area of interest information associated with the video content item. In some embodiments, process 400 can receive the area of interest information using any suitable technique or combination of techniques. For example, process 400 can receive the information in connection with the first web page (e.g., as discussed above in connection with FIG. 2).
As another example, process400can transmit a request for the area of interest information to a database of area of interest information (e.g., a database as described above in connection withFIG.2). As yet another example, in a situation as described above in connection withFIG.3where the area of interest information is embedded in the video content item and/or included in the metadata, process400can receive the information in connection with the received video content item. At408, in some embodiments, process400can present the video content item in connection with the first web page based on the area of interest parameters. In some embodiments, process400can present the video content item based on the area of interest information using any suitable technique or combination of techniques. For example, process400can use a technique as described above in connection withFIG.2and/orFIG.3. At410, in some embodiments, process400can request a second web page that includes a second video identifier. In some embodiments, the second video identifier can be a URL associated with a video content item, an alphanumeric code associated with the video content item, metadata associated with a video content item, any other suitable identifier, or any suitable combination thereof. At412, in some embodiments, process400can determine that the second video identifier matches the first video identifier using any suitable technique or combination of techniques. For example, process400can store the first video identifier and compare the stored first video identifier with the second video identifier upon receiving the second video identifier. At414, in some embodiments, in response to determining that the second video identifier matches the first video identifier, process400can continue to present the video content item and alter the presentation of the video content item based on the second webpage and the area of interest information. In some embodiments, process400can continue the presentation of the video content item using any suitable technique or combination of techniques. For example, process400can render the second web page around the viewport in which the video content item was being presented on the first web page. In such an example, process400can then alter the viewport based on information in the second web page (e.g., information indicating a viewport specified by the second web page). In some embodiments, process400can continue the presentation of the video content item without interruption. For example, process400can prevent the viewport being used to present the video content item from being interrupted or re-rendered, or reproduced in response to the request for the second web page. As another example, process400can prevent an interruption in the communication with a video server (e.g., a video server as described below in connection withFIG.5) being used to stream the video content item in response to the request for the second web page. In some embodiments, process400can alter the presentation of the video content item based on the area of interest and the second web page using any suitable technique or combination of techniques. 
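A minimal sketch of the identifier comparison at 412 and the continued presentation at 414 is given below. The player and page objects are simplified stand-ins introduced only for this illustration, not an actual player or browser interface.

```python
# Sketch: when the second page references the same video identifier, keep the
# existing stream and only adapt the viewport instead of re-creating the player.
class PlayerState:
    def __init__(self):
        self.current_video_id = None
        self.playback_position = 0.0

    def start(self, video_id):
        self.current_video_id = video_id
        self.playback_position = 0.0
        print("starting new stream for", video_id)

    def continue_playback(self, viewport):
        print("continuing", self.current_video_id, "at", self.playback_position,
              "seconds in viewport", viewport)

def navigate(player, second_page):
    """second_page: dict with the video identifier and the viewport of the new page."""
    if second_page["video_id"] == player.current_video_id:
        player.continue_playback(second_page["viewport"])   # identifiers match: no interruption
    else:
        player.start(second_page["video_id"])

if __name__ == "__main__":
    player = PlayerState()
    player.start("abc123")
    player.playback_position = 12.5
    navigate(player, {"video_id": "abc123", "viewport": (480, 270)})
```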
For example, process400can determine the size of a viewport for presenting the video content item on the second web page (e.g., using a technique as described above in connection withFIG.2) and alter the presentation of the video content item based on the size of the viewport and the area of interest using a technique as described above in connection withFIGS.1A,1B,1C,1D,2, and3. In some embodiments, at least some of the above described blocks of the processes ofFIG.2,FIG.3, and/orFIG.4can be executed or performed in any order or sequence not limited to the order and sequence shown in and described in connection with the figures. In some embodiments, the above described blocks of the processes ofFIG.2,FIG.3, and/orFIG.4can be executed by any suitable computer application. For example, the above described blocks of the processes ofFIG.2,FIG.3, and/orFIG.4can be executed by a web browser, a native video player, a web application, an operating system, a social media application, any other suitable computer application, or any suitable combination thereof. Also, some of the above blocks ofFIG.2,FIG.3, and/orFIG.4can be executed or performed substantially simultaneously where appropriate or in parallel to reduce latency and processing times. Additionally or alternatively, in some embodiments, some of the above described blocks of the processes ofFIG.2,FIG.3, and/orFIG.4can be omitted. FIG.5shows an example500of hardware that can be used in accordance with some embodiments of the disclosed subject matter for adaptive presentation of a video content item based on an area of interest. As illustrated, hardware500can include a web server502and a video server514connected to a communication network506via one or more communication links504, and/or one or more user devices, such as user devices510and512, connected to communication network506via one or more communication links508. In some embodiments, web server502and/or video server514can be any suitable server for storing media content items, delivering the media content items to user devices510, and/or512, receiving requests for media content items (e.g., video content items and/or web pages) from user devices510and/or512, storing information related to media content items (e.g., area of interest information as described above), and/or transmitting media content items to user devices510, and/or512. For example, web server502can be a server that transmits web pages to user devices510and/or512via communication network506. As another example, video server514can be a server that transmits video content items to user devices510and/or512via communication network506. Media content items provided by web server502can be any suitable media content, such as video content, audio content, image content, text content, and/or any other suitable type of media content. As a more particular example, media content items can include user-generated content, music videos, television programs, movies, cartoons, sound effects, streaming live content (e.g., a streaming radio show, a live concert, and/or any other suitable type of streaming live content), and/or any other suitable type of media content. Media content items can be created by any suitable entity and/or uploaded to web server502by any suitable entity. As another example, video server502can be a server that hosts one or more databases (e.g., databases for video metadata, databases for area of interest information, and/or databases for video tags). 
Communication network506can be any suitable combination of one or more wired and/or wireless networks in some embodiments. For example, communication network506can include any one or more of the Internet, an intranet, a wide-area network (WAN), a local-area network (LAN), a wireless network, a digital subscriber line (DSL) network, a frame relay network, an asynchronous transfer mode (ATM) network, a virtual private network (VPN), and/or any other suitable communication network. User devices510, and/or512can be connected by one or more communications links508to communication network506which can be linked via one or more communications links504to web server502and video server514. Web server502and video server514can be linked via one or more communication links504. Communications links504, and/or508can be any communications links suitable for communicating data among user devices510and512, web server502, and video server514, such as network links, dial-up links, wireless links, hard-wired links, any other suitable communications links, or any suitable combination of such links. User devices510, and/or512can include any one or more user devices suitable for executing applications (e.g., web browsers, native video players, and/or social media applications), requesting media content items, searching for media content items, presenting media content items, presenting advertisements, receiving input for presenting media content and/or any other suitable functions. For example, in some embodiments, user devices510, and/or512can be implemented as a mobile device, such as a mobile phone, a tablet computer, a laptop computer, a vehicle (e.g., a car, a boat, an airplane, or any other suitable vehicle) entertainment system, a portable media player, and/or any other suitable mobile device. As another example, in some embodiments, user devices510, and/or512can be implemented as a non-mobile device such as a desktop computer, a set-top box, a television, a streaming media player, a game console, and/or any other suitable non-mobile device. Although web server502and video server514are illustrated as two separate devices, the functions performed by web server502and video server514can be performed using any suitable number of devices in some embodiments. For example, in some embodiments, the functions performed by web server502and video server514can be performed on a single server. As another example, in some embodiments, multiple devices can be used to implement the functions performed by web server502and video server514. Although two user devices510and512are shown inFIG.5to avoid over-complicating the figure, any suitable number of user devices, and/or any suitable types of user devices, can be used in some embodiments. Web server502, video server514, and user devices510, and/or512can be implemented using any suitable hardware in some embodiments. For example, in some embodiments, devices502,510,512, and/or514can be implemented using any suitable general purpose computer or special purpose computer. As another example, a mobile phone may be implemented as a special purpose computer. Any such general purpose computer or special purpose computer can include any suitable hardware. For example, turning toFIG.6, user device510can include a hardware processor612, a memory and/or storage618, an input device616, and a display614. 
Hardware processor612can execute the mechanisms described herein for transmitting requests (e.g., for area of interest information as described above and/or for video content items), presenting video content items, determining area of interest information, and/or performing any other suitable functions in accordance with the mechanisms described herein for altering presentation of a video content item based on an area of interest. In some embodiments, hardware processor612can send and receive data through communications link508or any other communication links using, for example, a transmitter, a receiver, a transmitter/receiver, a transceiver, or any other suitable communication device. In some embodiments, memory and/or storage618can include a storage device for storing data received through communications link508or through other links. The storage device can further include one or more programs for controlling hardware processor612. In some embodiments, the one or more programs for controlling hardware processor612can cause hardware processor612to, for example, execute at least a portion of process200described below in connection withFIG.2, process300described below in connection withFIG.3, and/or process400described below in connection withFIG.4. Display614can include a touchscreen, a flat panel display, a cathode ray tube display, a projector, a speaker or speakers, and/or any other suitable display and/or presentation devices. Input device616can be a computer keyboard, a computer mouse, a touchpad, a voice recognition circuit, a touchscreen, and/or any other suitable input device. In some embodiments, user device512can be implemented using any of the above-described hardware for user device510. Web server502can include a hardware processor622, a display624, an input device626, and a memory and/or storage628, which can be interconnected. In some embodiments, memory and/or storage628can include a storage device for storing data received through communications link504or through other links. The storage device can further include a server program for controlling hardware processor622. In some embodiments, memory and/or storage628can include information stored as a result of user activity (e.g., area of interest information, etc.), and hardware processor622can receive requests for such information. In some embodiments, the server program can cause hardware processor622to, for example, execute at least a portion of process200described below in connection withFIG.2, process300described below in connection withFIG.3, and/or process400described below in connection withFIG.4. Hardware processor622can use the server program to communicate with user devices510, and/or512as well as provide access to and/or copies of the mechanisms described herein. It should also be noted that data received through communications links504and/or508or any other communications links can be received from any suitable source. In some embodiments, hardware processor622can send and receive data through communications link504or any other communication links using, for example, a transmitter, a receiver, a transmitter/receiver, a transceiver, or any other suitable communication device. In some embodiments, hardware processor622can receive commands and/or values transmitted by one or more user devices510and/or512, such as a user that makes changes to adjust settings associated with the mechanisms described herein for altering presentation of a video content item based on an area of interest. 
Display624can include a touchscreen, a flat panel display, a cathode ray tube display, a projector, a speaker or speakers, and/or any other suitable display and/or presentation devices. Input device626can be a computer keyboard, a computer mouse, a touchpad, a voice recognition circuit, a touchscreen, and/or any other suitable input device. Any other suitable components can be included in hardware600in accordance with some embodiments. In some embodiments, video server514can be implemented using any of the above-described hardware for video server502. In some embodiments, any suitable computer readable media can be used for storing instructions for performing the functions and/or processes herein. For example, in some embodiments, computer readable media can be transitory or non-transitory. For example, non-transitory computer readable media can include media such as magnetic media (e.g., hard disks, floppy disks, and/or any other suitable magnetic media), optical media (e.g., compact discs, digital video discs, Blu-ray discs, and/or any other suitable optical media), semiconductor media (e.g., flash memory, electrically programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and/or any other suitable semiconductor media), any suitable media that is not fleeting or devoid of any semblance of permanence during transmission, and/or any suitable tangible media. As another example, transitory computer readable media can include signals on networks, in wires, conductors, optical fibers, circuits, any suitable media that is fleeting and devoid of any semblance of permanence during transmission, and/or any suitable intangible media. Accordingly, methods, systems, and media for altering presentation of a video content item based on an area of interest are provided. Although the invention has been described and illustrated in the foregoing illustrative embodiments, it is understood that the present disclosure has been made only by way of example, and that numerous changes in the details of implementation of the invention can be made without departing from the spirit and scope of the invention, which is limited only by the claims that follow. Features of the disclosed embodiments can be combined and rearranged in various ways. | 53,702 |
11861909 | DETAILED DESCRIPTION The current embodiments solve these and/or other problems in the current solutions. Through unsupervised video summarization with piecewise linear interpolation, the issue of high variance is alleviated and the video summarization is produced naturally. Also, through an unsupervised video summarization network which only needs to anticipate the lacking importance scores, learning speed is enhanced and computational complexity is reduced. Referring now to FIG. 1, a process of the video summarization system is illustrated according to an embodiment of the present invention. As shown, the video summarization system selects keyframes with high importance scores to summarize the video based on reinforcement learning. To alleviate high variance and generate a natural sequence of summarized frames, an unsupervised video summarization network applying piecewise linear interpolation is offered. With reference to FIG. 1, a method to select keyframes to summarize the video using the importance score of each frame is outlined. The video summarization system can receive video through input data. The video summarization system can use a convolution neural network to extract visual characteristics of the video frame images. For example, GoogleNet can be used to extract the visual characteristics. The video summarization system can provide a video summarization network (VSN) based on a Linear Attention Recurrent Neural Network (LARNN). The LARNN can be a multi-head attention variant of an LSTM. The video summarization network, in order to learn the video expression, uses multi-head attention and one or more LSTM cells to record the recent status. The last few sequences of the video summarization network output are passed through a fully connected layer and a sigmoid function to calculate a set of candidate importance scores. The candidate importance scores can be interpolated to obtain the importance score of each video frame. Particularly, in a high-dimensional action space, various actions may be selected, which changes the reward during training. As such, the gradient estimate, calculated from the log probability of the action and the reward, disperses increasingly. When a frame is selected using the interpolated importance score, a nearby frame with a similar score can be selected together, at a high probability. As such, the variation of actions can be reduced, which effectively reduces the action space. Referring to FIG. 6, a natural sequence of nearby frames with high importance scores can be produced. Also, high variance can be alleviated by the reduced action space. The video summarization system, after interpolating the set of candidate importance scores, can convert each importance score into a 0 or 1 frame selection action using a Bernoulli distribution. Utilizing this action, the summarization selects keyframes and then, using a reward function, the summarization can be evaluated for diversity and representativeness. After this, the reward and the log probability of the action are used to calculate the objective function offered in UREX (under-appreciated reward exploration). Lastly, it is possible to calculate a reconstruction loss, modified to enhance the summarization's representativeness. The video summarization system, in order to alleviate high variance and to produce a natural sequence of summarization frames, offers an unsupervised video summarization network with piecewise linear interpolation. As such, the size of the output layer of the unsupervised video summarization network can be reduced. Also, the unsupervised video summarization network only needs to anticipate the lacking importance scores, which enables faster learning by the network.
The video summarization system, in order to promote the summarization's representativeness, can offer a reconstruction loss modified by random masking. The video summarization system, in order to enhance the outcome, can adopt an underappreciated reward search. Referring now to FIG. 2, a video summarization network within the video summarization system is depicted according to an embodiment of the present invention. As illustrated in FIG. 2, a video summarization framework based on piecewise linear interpolation is shown. The video summarization network can be trained to anticipate a set of candidate importance scores using the video's frame-level features. The candidate importance scores can be interpolated into importance scores to allow the summarization to select frames. A reward function can calculate a reward from each frame's selected action. An underappreciated reward function can be calculated as a reward to learn accumulative reward maximization. The output of the reward function, the reconstruction loss, and the regularization loss can be summed as a summary loss. The video summarization system can formulate video summarization with respect to frame selection from the importance scores trained by the summarization network. In order to anticipate each candidate importance score, which is to be interpolated into an importance score, the system can offer a parametrized policy, namely the video summarization network with piecewise linear interpolation. In other words, the importance score means the frame selection probability. As shown in FIG. 2, the Bernoulli distribution is used to convert the importance score into a frame selection action to select keyframes. The video summarization system can use GoogleNet, which is a convolution neural network trained on the ImageNet dataset, in order to extract visual features {x_t}_{t=1}^{N} of the video frames. The extracted feature information is used to give a visual explanation of a frame and to capture the visual difference between frames. As described in FIG. 2, in order to train the video summarization network, a sequence of frame-level features can be input. The video summarization network can be a LARNN. The LARNN is a variation of the LSTM network, using batch normalization as well as a hidden LSTM cell and a multi-head attention mechanism. Batch normalization speeds up the network's training; however, in the illustrations, the batch size is set to 1, which should not affect the speed. The multi-head attention maintains the recent cell values and, through a query, can gain the hidden cell value. The video summarization network, through the multi-head attention mechanism and the LSTM cell, can learn a video's expression and the visual differences between frames. The video summarization network's hidden states {h_t}_{t=1}^{N} can be used as the input to a function trained to anticipate each candidate importance score. The video summarization network's output is the candidate importance score C = {c_t}_{t=1}^{l}, interpolated into the importance score S = {s_t}_{t=1}^{N}, which is the frame selection probability to be chosen by the video summarization. Within the output, the sequence of the last l steps can be selected. Subsequently, a fully connected layer and a sigmoid function can be used to convert the multi-dimensional featured output; the candidate importance score can be converted to a probability between 0 and 1. Interpolation is known to predict a new data point within the scope of discrete data points; it is a form of estimation methodology. The video summarization network can align the candidate importance scores at equal distances to fit the input size N of the frame sequence.
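A simplified NumPy sketch of this output head is shown below: the last l hidden states are mapped to candidate scores in (0, 1) by a fully connected layer and a sigmoid, and the candidates are then stretched to N frame-level scores by piecewise linear interpolation. The random weights stand in for the trained LARNN and fully connected parameters, so this is a sketch of the mechanism only, not the trained network.

```python
# Sketch of candidate score prediction and piecewise linear interpolation.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def candidate_scores(hidden_states, weight, bias, output_length):
    """hidden_states: (N, hidden_dim); returns l candidate scores in (0, 1)."""
    last = hidden_states[-output_length:]            # last l steps of the sequence
    return sigmoid(last @ weight + bias).ravel()     # shape (l,)

def interpolate_scores(candidates, num_frames):
    """Piecewise linear interpolation of l candidate scores to N frame-level scores."""
    xs = np.linspace(0, num_frames - 1, num=len(candidates))   # equally spaced anchors
    return np.interp(np.arange(num_frames), xs, candidates)

if __name__ == "__main__":
    N, hidden_dim, l = 300, 64, 35
    rng = np.random.default_rng(0)
    h = rng.normal(size=(N, hidden_dim))             # placeholder LARNN hidden states
    W, b = rng.normal(size=(hidden_dim, 1)) * 0.1, 0.0
    c = candidate_scores(h, W, b, l)
    s = interpolate_scores(c, N)
    print(c.shape, s.shape, float(s.min()), float(s.max()))
```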
As illustrated in FIG. 3, the system can interpolate the candidate importance scores (C) into the importance scores (S) by using piecewise linear interpolation. The piecewise linear interpolation connects the candidate importance scores with straight line segments and calculates the intermediate values. The video summarization system, after interpolation, can obtain an importance score for each frame of the video, that is, a frame selection probability. The video summarization system reduces the calculation complexity of the video summarization network and offers interpolation to improve the network's learning. The video summarization network only needs to estimate a particular number l of candidate importance scores, and not all of the sequence. Interpolation alleviates high variance and promotes a natural sequence of summarization frames. Particularly, in a high-dimensional action space, the reward of an action can change at different phases. This is because a Bernoulli distribution is used when selecting a frame. Therefore, the gradient estimate, calculated from the reward, can increasingly disperse. At this time, when a frame is selected using the interpolated importance score, a nearby frame with a similar score can be selected as well, as illustrated in FIG. 6. The video summarization system, after interpolating, can take the importance scores (S) and convert the frame selection probabilities into the frame selection actions A = {a_t | a_t ∈ {0, 1}, t = 1, . . . , N} using a Bernoulli distribution. At each frame, if the frame selection action is sampled as in Example 1, this frame can be selected as one keyframe to summarize the video.

Example 1:
A \sim \mathrm{Bernoulli}(a_t; s_t) = \begin{cases} s_t, & \text{for } a_t = 1 \\ 1 - s_t, & \text{for } a_t = 0 \end{cases}

The video summarization system, in order to evaluate the policy effectively, uses a Bernoulli distribution over the importance scores and can sample the frame selection actions. The reward function is used to evaluate the quality of the summarization variations produced by the frame selection actions of the various episodes; at the end of an episode it can obtain the action's log probability and reward. In an illustration, non-patent document 1 suggests using a diversity and representativeness reward function. During training, a combination of the diversity reward (R_div) and the representativeness reward (R_rep) can be maximized. The diversity reward estimates the differences between the selected keyframes by their frame-level features. Depending on the reward, the policy can be trained to produce frame selection actions that select each keyframe from among the various frames. At this time, in order to prevent the reward function from calculating the difference between frames that are spaced far apart from each other, the temporal distance can be limited to a pre-determined distance (e.g., to 20). This temporal distance helps to maintain the storyline of the video. This will also reduce computational complexity. If the set of indices of the selected frames is \mathcal{Y} = \{ y_k \mid a_{y_k} = 1,\ k = 1, 2, \ldots, |\mathcal{Y}| \}, the diversity reward can be expressed as the following:

R_{div} = \frac{1}{|\mathcal{Y}|(|\mathcal{Y}|-1)} \sum_{t \in \mathcal{Y}} \sum_{\substack{t' \in \mathcal{Y} \\ t' \neq t}} \left(1 - \frac{x_t^T x_{t'}}{\|x_t\|_2 \|x_{t'}\|_2}\right)   (Formula 2)

The representativeness reward estimates the similarity between the selected frame-level features and all of the frame-level features of the original video and produces a video summarization that represents the original video.

R_{rep} = \exp\left(-\frac{1}{N} \sum_{t=1}^{N} \min_{t' \in \mathcal{Y}} \|x_t - x_{t'}\|_2\right)   (Formula 3)

To train the parametrized policy π_θ, which is the video summarization network, a search strategy that searches under-appreciated rewards can be used as well as a policy gradient method. The policy gradient method is a popular and powerful policy-based reinforcement learning method.
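For illustration, a NumPy sketch of the Bernoulli frame selection (Example 1) and the two rewards (Formulas 2 and 3) is given below. The feature matrix and scores are random placeholders for the GoogleNet features and the interpolated importance scores, and treating pairs beyond the temporal distance as fully dissimilar is an assumption about how the distance limit is applied.

```python
# Sketch of the frame selection action and the diversity / representativeness rewards.
import numpy as np

def sample_actions(scores, rng):
    """Frame selection actions: a_t = 1 with probability s_t."""
    return (rng.random(scores.shape) < scores).astype(int)

def diversity_reward(features, actions, max_distance=20):
    idx = np.flatnonzero(actions)
    if len(idx) < 2:
        return 0.0
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    total = 0.0
    for i in idx:
        for j in idx:
            if i == j:
                continue
            if abs(i - j) > max_distance:
                total += 1.0                 # distant pairs treated as fully dissimilar
            else:
                total += 1.0 - float(normed[i] @ normed[j])
    return total / (len(idx) * (len(idx) - 1))

def representativeness_reward(features, actions):
    idx = np.flatnonzero(actions)
    if len(idx) == 0:
        return 0.0
    # mean distance from every frame to its nearest selected frame
    dists = np.linalg.norm(features[:, None, :] - features[None, idx, :], axis=2)
    return float(np.exp(-dists.min(axis=1).mean()))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(120, 1024))          # placeholder frame-level features
    s = rng.uniform(0.2, 0.8, size=120)       # placeholder interpolated importance scores
    a = sample_actions(s, rng)
    print(diversity_reward(x, a), representativeness_reward(x, a))
```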
The parametrized policy is optimized using the reward (accumulative reward) based on a gradient descent method, such as SGD. Using a policy gradient method increases the log probability of the actions that maximize the reward; the policy is updated by receiving a reward through an action produced by the policy. However, the policy gradient method has a few problems, such as low sample efficiency. An agent, as opposed to a human, requires more samples (experience) to learn an action in an environment (state). Another issue is the estimated gradient's high variance. In our illustration, the video summarization network with piecewise linear interpolation is offered to reduce the action space and the variance. Under the current policy, if an action's log probability underappreciates its reward, the action can be further searched under the suggested search strategy. In order to calculate the objective function, first, the log probability log π_θ(a_t | h_t) of the action under the policy π_θ(a_t | h_t) and the rewards r(a_t | h_t) = R_div + R_rep of J episodes are calculated. At the end of each episode, the action's log probability and reward used to calculate the objective function are maintained. For the same video, the rewards due to the variations of the frame selection actions of many episodes are calculated, and the estimate is then calculated. Because the expected value over all of the variations of frame selection actions is hard to calculate in a short period of time, when the frame sequence input becomes longer, this calculation becomes even more difficult. O_UREX is the objective function to train the video summarization network. The objective function is the sum of the expected reward and R_AML. R_AML is the reward-augmented maximum-likelihood objective function used to optimize the reward in the traditional technology. At the early stage of training, the normalized importance weights can disperse too highly, so the term is combined with the expected reward. To approximate R_AML, J actions are sampled and, using a softmax, a set of normalized importance weights is calculated. τ is a normalization factor to avoid excessive random search.

\mathcal{O}_{UREX}(\theta; \tau) = \mathbb{E}_{h \sim p(h_t)}\left\{ R(a_t \mid h_t) \right\}   (Formula 4)

R(a_t \mid h_t) = \pi_\theta(a_t \mid h_t)\, r(a_t \mid h_t) + \mathcal{R}_{AML}   (Formula 5)

\mathcal{R}_{AML} = \tau\, \pi_\tau^{*}(a_t \mid h_t) \log \pi_\theta(a_t \mid h_t)   (Formula 6)

For the policy gradient, an important method to reduce the variance and improve the computational efficiency is the use of a baseline. The baseline can be a moving average of all of the rewards observed up to the current point, updated at the end of each episode. The baseline is formed from the moving-average reward b_1 = \bar{r}[v_i] of each video v_i and the average b_2 = \frac{1}{\nu_{all}} \sum_{i=1}^{\nu_{all}} \bar{r}[v_i] of the moving-average rewards over all \nu_{all} videos. Because the moving averages of diverse videos are used, diversity is enhanced.

B = 0.7 \times b_1 + 0.3 \times b_2   (Formula 7)

L_{rwd} = \mathcal{O}_{UREX}(\theta; \tau) - B   (Formula 8)

The video summarization network can be trained as the parametrized policy, with L_rwd maximized based on the policy gradient. The video summarization network can use a regularization term L_reg, as suggested in non-patent document 1, to control the frame selection actions. If more frames are selected as keyframes of the video summarization, the reward will increase based on the reward function. Without the regularization term, during training to maximize the reward, the frame selection probability, that is, the interpolated importance score, can be maximized to 1 or reduced to 0.
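A compact sketch of how the reward term of Formulas 4 to 8 might be assembled from a few sampled episodes is given below. It is a simplification of the UREX objective (a Monte-Carlo estimate of the expected reward plus a self-normalized RAML-style term) minus the combined moving-average baseline; the rewards, log probabilities, and reward history are placeholders.

```python
# Simplified sketch of the UREX-style reward term and the 0.7*b1 + 0.3*b2 baseline.
import numpy as np

def urex_reward_term(log_probs, rewards, tau=0.1):
    """log_probs, rewards: one value per sampled episode for the same video."""
    log_probs = np.asarray(log_probs, dtype=float)
    rewards = np.asarray(rewards, dtype=float)
    expected = float(rewards.mean())                 # Monte-Carlo expected reward
    logits = rewards / tau - log_probs               # under-appreciated actions get large weights
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                         # softmax-normalised importance weights
    raml = float(tau * np.sum(weights * log_probs))
    return expected + raml

def combined_baseline(per_video_rewards, video_index):
    """b1: moving average for this video; b2: average of the per-video moving averages."""
    b1 = float(np.mean(per_video_rewards[video_index]))
    b2 = float(np.mean([np.mean(r) for r in per_video_rewards]))
    return 0.7 * b1 + 0.3 * b2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    log_probs = rng.uniform(-5.0, -1.0, size=5)      # J = 5 sampled episodes
    rewards = rng.uniform(0.4, 0.9, size=5)
    history = [rng.uniform(0.4, 0.9, size=10) for _ in range(3)]   # reward history of 3 videos
    l_rwd = urex_reward_term(log_probs, rewards) - combined_baseline(history, 0)
    print(l_rwd)
```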
In order to avoid overfitting, a value (0.001) is utilized, and the selected frame percentage can use a selection value (0.5).

L_{reg} = 0.01 \times \left(\frac{1}{N} \sum_{t=1}^{N} s_t - 0.5\right)^2   (Formula 9)

To train the video summarization network, the summarization's representativeness is promoted by a reconstruction loss modified by random masking. Using the importance score (S), the input frame-level feature x_t is multiplied by the score s_t; the representativeness of the frame-level feature at time t is thereby calculated. At time t, if the score is high, the frame-level feature at time t can represent the video. In order to prevent the score s_t from being too close to 1, for example, 20% of the input features x_t can randomly be masked as 0 to obtain x_t^M. D represents the dimensional size (1024) of the input feature and regulates the value of L_rec, because the squared difference between the input feature x_t^M and x_t × s_t may prove to be too large to use directly.

L_{rec} = \frac{1}{D} \times \sum \left(x_t^{M} - x_t \times s_t\right)^2   (Formula 10)

After all of the loss functions are calculated, the loss L_summary is calculated and then backpropagated.

L_{summary} = L_{reg} + L_{rec} - L_{rwd}   (Formula 11)

The video summarization system, in order to test the video summarization network, averages the frame-level importance scores within each shot, which enables a shot-level importance score to be calculated. In order to produce the main shots within the dataset, KTS (Kernel Temporal Segmentation) is used to detect change points such as shot boundaries. In order to produce the video summarization, the main shots can be aligned based on their scores and then selected so that the total video length meets a predefined standard (for example, the top 15%). Algorithm 1 is an illustration of training the video summarization network.

Algorithm 1 Training Video Summarization Network.
1: Input: frame-level features of video
2: Output: VSN parameter (θ)
3:
4: for number of iterations do
5:   x_t ← Frame-level features of video
6:   C ← VSN(x_t)  % Generate candidate
7:   S ← Piecewise linear interpolation of C
8:   A ← Bernoulli(S)  % Action A from the score S
9:   % Calculate Rewards and Loss using A and S
10:  % Update using policy gradient method:
11:  {θ} ← −∇(L_reg + L_rec − L_rwd)  % Minimization
12: end for

Referring now to FIG. 4, a result for the size of the candidate importance score set, interpolated into the importance scores of the dataset, within the video summarization system is depicted according to the invention. For example, the SumMe as well as the TVSum dataset can be used, and if the candidate importance score size is set to 35, the network shows optimal performance on the two datasets. Referring now to FIG. 5, a visualization of sample images and the frames' importance scores, selected by the video summarization system, is depicted according to the invention. As illustrated, the SumMe dataset's video "St Maarten Landing" is summarized, and the selected sample images as well as the frames' importance scores are visualized. The gray bars are the actual importance scores, and the red bars are the frames selected by the invention. The keyframes are properly selected by the video summarization network so as to gain the highest importance scores. The video summarization network without the underappreciated reward method selects the video's main contents, similar to the full video summarization network; for the video summarization network without the reconstruction loss, because it lacks the process to calculate representativeness, frames with a high importance score can be selected less. The representativeness calculation is performed, through the reconstruction loss, from the difference between the selected frame features weighted by the anticipated importance scores and the original frame features. The video summarization network using an LSTM can result in the network not being sufficiently trained.
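Returning to the training losses, a NumPy sketch of Formulas 9 to 11 as they would be combined in Algorithm 1 is shown below. The element-wise 20% masking is one reading of the random masking described above, and the feature matrix and scores are placeholders for the network's inputs and predicted scores.

```python
# Sketch of the regularization, reconstruction and summary losses (Formulas 9-11).
import numpy as np

def regularization_loss(scores, target_ratio=0.5, weight=0.01):
    return weight * float(scores.mean() - target_ratio) ** 2

def reconstruction_loss(features, scores, rng, mask_ratio=0.2):
    masked = features.copy()
    drop = rng.random(features.shape) < mask_ratio    # randomly mask 20% of feature values
    masked[drop] = 0.0
    d = features.shape[1]                             # feature dimension D (e.g., 1024)
    return float(((masked - features * scores[:, None]) ** 2).sum() / d)

def summary_loss(l_reg, l_rec, l_rwd):
    return l_reg + l_rec - l_rwd

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(120, 1024))
    s = rng.uniform(0.0, 1.0, size=120)
    l_reg = regularization_loss(s)
    l_rec = reconstruction_loss(x, s, rng)
    print(summary_loss(l_reg, l_rec, l_rwd=0.8))
```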
Referring now toFIG.6, a difference between an uninterpolated network and an interpolated network within video summarization system is depicted according to the invention. Referring toFIG.6(b), if piecewise linear interpolation is not applied, within video's main contents, one can find differing importance scores of adjacent frames. As such, it is difficult to select nearby frames from the frame with high scores. With respect to diagram 6(a), because adjacent frames share similar scores, an adjacent frame from a keyframe anticipated with a high score, can be selected. This is a strength of interpolation, where while keyframe selection training is going on, the relationship between adjacent frames can be maintained. Also, based on interpolation, a natural sequence of summarization frames is made possible, compared to the network without interpolation; a keyframe of main contents can be selected. Referring now toFIG.7, a block view to explain components of video summarization system is depicted according to an embodiment of the invention. Referring additionally toFIG.8a flowchart of a video summarization method within video summarization system is depicted according to the invention. As illustrated, Video summarization system (100) processor can include importance score anticipator (710), selection probability obtainer (720) as well summarization generator (730). Components of the processor are expressions of different functions executed by processor based on commands by programming codes stored in video summarization system. Processor as well as processing components can control video summarization system in such a way to execute steps (810thru830) included in video summarization method as prescribed in diagram 8. At this time, processor as well as processing components are designed to execute instructions of operating system's code included in memory and of at least one of program's code. Processor can load program code, saved in program file for video summarization method, to Memory. For example, when program is executed in video summarization system, based on operating system's control, processor can control video summarization system to load program code to memory, from program file. At this time, processor as well as anticipator (710), obtainer (720) and generator (730) included in processor can each execute instructions which correspond to portions of program code loaded onto Memory; therefore these are respective functional expressions to execute subsequent steps (810through830). At step (810), candidate importance score anticipator (710) uses video frame to train video summarization network to anticipate the candidate. The anticipator (710), using convolution neural network trained by image dataset, extracts feature information from each video frame; using the feature information, anticipator can train video summarization network. Anticipator (710) can output candidate importance score, which is interpolated in importance score—which is frame selection probability to select a set of frames to summarize video, using video summarization network's output data. At step (820), selection probability obtainer (720) interpolates anticipated candidate importance scores to obtain a frame selection probability of each frame of the video. 
Obtainer (720), based on sequence input size of video frame, aligns each candidate importance score in equal distance, uses piecewise linear interpolation to interpolate candidate scores to importance score, and obtains frame selection probability of each frame of the video. Obtainer (720), after interpolating importance candidate score to importance score, uses a Bernoulli distribution to convert frame selection action which is either 0 or 1 (used to select keyframe) to frame selection probability, and then selects any frame with frame selection probability of 1. Obtainer (720), using a Bernoulli Distribution to convert to a frame selection probability, samples frame selection action, and uses a reward function to evaluate representativeness of the video summarization produced by the frame selection action, and obtains video frame action's log probability and reward. Obtainer (720), using reward and log probability of frame selection action, can calculate an objective function as suggested by underappreciated reward. Obtainer (720), in order to train a parametrized policy, can use a search strategy to search any underappreciated reward method, as well as a policy gradient method. At step (830), summarization generator (730), based on obtained frame selection probability, can generate a video summarization using the set of selected frames. Generator (730), in order to promote summarization's representativeness, trains the video summarization network by using reconstruction loss modified by random masking. The above-mentioned device can be implemented through hardware component, software component and/or a combination of hardware components and software components. For example, devices as well as components explained in illustrations can utilize processor, controller, ALU (Arithmetic Logic Unit), DSP (Digital Signal Processor), microcomputer, FPGA (field programmable gate array), PLU (Programmable Logic Unit), microprocessor, or any other device including one or multiple all-purpose or special-purpose computer, capable of executing and responding to instructions. Processor can execute OS as well as one or multiple software applications run on the OS. Also, processor can respond to software's execution and can approach, store, modify, process or generate data. To help understand, there can be cases where there is only a single processor, but a person with common knowledge in the industry can assume that processor can take on a multiple number of processing elements and/or multiple forms of processing elements. For example, processor can take on multiple processors; or singular processor and singular controller. In addition, other processing configurations such as parallel processing are possible as well. In terms of software, the invention can include computer program, code, instructions, or a combination thereof, and it can be freely configured and can independently or collectively command the processor. Software and/or data can be interpreted by the processor or in order to offer a set of commands or data to processors, software and/or data can be embodied within some type of machine, component, physical device, computer storage medium or device. Software can be spread throughout network-enabled computer system and can be stored and executed in such a dispersed fashion. Software and data can be stored in one or more computer-readable device. 
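As an illustration of how the generator (730) might turn the frame-level scores into a summary at test time, the sketch below averages the frame-level scores within each shot and then greedily keeps the highest-scoring shots within roughly 15% of the video length. The shot boundaries are assumed to come from a change-point detector such as KTS and are stubbed here.

```python
# Sketch: shot-level scores and greedy selection under a 15% length budget.
import numpy as np

def shot_scores(frame_scores, boundaries):
    """boundaries: list of (start, end) frame indices per shot (end exclusive)."""
    return [float(np.mean(frame_scores[s:e])) for s, e in boundaries]

def select_shots(boundaries, scores, total_frames, budget_ratio=0.15):
    budget = int(total_frames * budget_ratio)
    order = np.argsort(scores)[::-1]                  # highest-scoring shots first
    chosen, used = [], 0
    for k in order:
        start, end = boundaries[k]
        if used + (end - start) <= budget:
            chosen.append(int(k))
            used += end - start
    return sorted(chosen)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scores = rng.uniform(size=300)                    # placeholder frame-level scores
    shots = [(i * 30, (i + 1) * 30) for i in range(10)]   # stubbed KTS shot boundaries
    s = shot_scores(scores, shots)
    print(select_shots(shots, s, total_frames=300))
```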
The invention's method according to the embodiments can be implemented through programming command types executed by various computing means and can be recorded on a computer readable medium. At this point, medium can permanently store computer executable programs or temporarily store for execution or download. Also, medium can take the form of one or multiple hardware combinations, being able to record or store. The medium does not limit itself to a medium directly accessible to computing system and can take on variance throughout a network. Examples of medium can include magnetic media like hard disk, floppy disk and magnetic tape; optical recording mediums like CD-ROM and DVD; magneto-optical medium like floptical disk; mediums like ROM, RAM and Flash Memory that are composed of programming commands. Also, as examples of another form of medium, it can include recording or storing mediums managed by app stores (that distributes applications), or websites or servers that distributes or supplies various software. As such, the above illustrations were described on a limited capacity but for an individual with common knowledge in the relevant industry can modify the illustrations on a various level. For example, even if the technology is implemented in a different order than what is written above and/or the described components like system, structure, equipment, circuits are combined or associated in a different manner that what is written above and/or are substituted or replaced by other components or equivalents, it can result in proper outcome at the end. As such, other implementations and illustrations that are equal to the scope of the patent's claim, will fall within the realm of the following patent claim scope. | 25,774 |
11861910 | DETAILED DESCRIPTION Embodiments of the present disclosure will be described with reference to the drawings below. However, it should be understood that this description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for ease of explanation, many specific details are set forth to provide a comprehensive understanding of the embodiments of the present disclosure. However, obviously, one or more embodiments may also be implemented without these specific details. In addition, in the following, descriptions of well-known structures and technology are omitted to avoid unnecessarily obscuring the concept of the present disclosure. The terms used herein are only for describing specific embodiments, and are not intended to limit the present disclosure. The terms “comprising”, “including”, etc. used herein indicate presence of the features, steps, operations and/or components described, but do not exclude presence or addition of one or more other features, steps, operations or components. All terms used herein (including technical and scientific terms) have meanings commonly understood by those skilled in the art, unless otherwise defined. It should be noted that the terms used herein should be interpreted as having meanings consistent with the context of this specification, and should not be explained in an idealized or overly rigid manner. In the case where an expression similar to “at least one of A, B, C, etc.” is used, generally speaking, it should be interpreted according to the meaning of the expression commonly understood by those skilled in the art (for example, “a system with at least one of A, B and C” should include but is not limited to systems with A alone, B alone, C alone, with A and B, A and C, B and C, and/or a system with A, B, C, etc.). In the case where an expression similar to “at least one of A, B, or C, etc.” is used, generally speaking, it should be interpreted according to the meaning of the expression commonly understood by those skilled in the art (for example, “a system with at least one of A, B or C” should include but is not limited to systems with A alone, B alone, C alone, with A and B, A and C, B and C, and/or a system with A, B, C, etc.). Those skilled in the art should also understand that essentially any adversative conjunctions and/or phrases representing two or more optional items, whether in the specification, claims or drawings, should be understood to include the possibilities of one of these items, any of these items, or two items. For example, the phrase “A or B” should be understood to include the possibilities of “A” or “B”, or “A and B”. The embodiment of the present disclosure provides a method for acquiring an item placing state. The method includes sending an acquisition request. The acquisition request includes one or more association relationships, and each of the association relationships includes an association relationship between a hot zone and a camera equipment. One hot zone corresponds to one kind of item. The method also includes receiving a placing state of an item corresponding to at least one hot zone. The placing state of the item corresponding to at least one hot zone is determined according to a current image and a standard image of the at least one hot zone in response to the acquisition request. Herein, the camera equipment is configured to acquire the current image of the hot zone having an association relationship with the camera equipment. 
The embodiment of the present disclosure further provides a processing method. The method includes receiving an acquisition request. The acquisition request includes one or more association relationships, and each of the association relationships includes an association relationship between a hot zone and a camera equipment. One hot zone corresponds to one kind of item. The method also includes acquiring a current image and a standard image of at least one hot zone in response to the acquisition request, and determining a placing state of an item corresponding to the at least one hot zone according to the current image and the standard image of the at least one hot zone. Herein, the camera equipment is configured to acquire the current image of the hot zone having an association relationship with the camera equipment. FIG. 1 shows a schematic application scenario of a method and a system for acquiring an item placing state according to an embodiment of the present disclosure. It should be noted that FIG. 1 is only an example of an application scenario where the embodiment of the present disclosure may be applied, so as to help those skilled in the art understand technical contents of the present disclosure, but it does not mean that the embodiment of the present disclosure cannot be used in other devices, systems, environments or scenarios. As shown in FIG. 1, an application scenario 100 according to the embodiment includes a terminal equipment 101, a cloud system 102, a camera equipment 103, and a shelf 104. The camera equipment 103 and the cloud system 102, the cloud system 102 and the terminal equipment 101, or the camera equipment 103 and the terminal equipment 101 are connected through a medium of a communication link. This connection may include various connection types, such as a wired or a wireless communication link or a fiber optic cable, etc. The terminal equipment 101 may interact with the cloud system 102 to receive an item placing state, etc. On the terminal equipment 101, there may be installed various operation applications to send an acquisition request to the cloud system 102. The terminal equipment 101 may also interact with the camera equipment 103, to adjust a shooting frequency of the camera equipment 103. The terminal equipment 101 may be any electronic device having a display, so as to show an item placing state to a user, including but not limited to a desktop computer, a portable laptop computer, a tablet, a smart phone, etc. According to an embodiment of the present disclosure, the terminal equipment 101 may further, for example, acquire a current image 200 of the shelf (a current image including a hot zone) shot by the camera equipment 103 and stored in the cloud system 102, and a standard image of the shelf (a standard image including the hot zone) for display by interacting with the cloud system 102, so as to better display the placing state of the item corresponding to each hot zone on the shelf 104. According to an embodiment of the present disclosure, a hot zone corresponds to an area having the same merchandise placed on the shelf 104. For the shelf 104 in FIG. 1, the hot zone may be, for example, an area having the merchandise "Coca-Cola" placed at the bottom of the shelf 104. It can be understood that the above description of the hot zone is only used as an example to better understand the present disclosure, which is not limited by the present disclosure. The cloud system 102 may be a virtual terminal equipment (such as a virtual personal computer, etc.)
at the cloud having storage and calculation functions, to store the current image 200 of the shelf 104 shot by the camera equipment 103 and the standard image of the shelf, and in response to the acquisition request from the terminal equipment 101, to process and analyze the current image and the standard image to obtain placing states of items in one or more hot zones on the shelf 104 (only as an example). As an example, the cloud system 102 further has an image processing function to mark and obtain an area corresponding to the hot zone in the current image of the shelf shot by the camera equipment 103, according to a coordinate range of an area corresponding to the hot zone in the standard image. As an example, the camera equipment 103 may communicate with the cloud system 102 by using a set communication protocol, and the camera equipment 103 may be a terminal equipment having a shooting function for shooting a real-time image of the shelf 104, and may send a shot image 200 to the cloud system 102 to store the shot image 200 in the cloud system 102. According to an embodiment of the present disclosure, the camera equipment 103 may also be, for example, a camera equipment 103 having a calculation processing function. The camera equipment 103 may continuously capture the shelf 104 at an interval to obtain a plurality of real-time images, and then the camera equipment may perform identification processing on the plurality of real-time images using the calculation processing function, to filter a best image from the plurality of real-time images as the current image (the current image of the shelf including the hot zone). According to an embodiment of the present disclosure, the camera equipment 103 may also have, for example, an image processing function similar to that of the cloud system. According to an embodiment of the present disclosure, the camera equipment 103 may be, for example, a camera equipment such as a monitoring camera having a real-time or a periodic shooting function, to acquire an image of the shelf 104 in real time or periodically, so as to obtain an image of the shelf 104 (an image including one or more hot zones the shelf 104 includes) in real time or periodically. It should be noted that, generally, the method for acquiring an item placing state provided in the embodiments of the present disclosure may be performed by the terminal equipment 101. Correspondingly, the system for acquiring an item placing state provided in the embodiments of the present disclosure may generally be set in the terminal equipment 101. According to an embodiment of the present disclosure, with reference to the application scenario in FIG. 1, the cloud system 102 may also be excluded, and the camera equipment 103 may communicate with the terminal equipment 101. The terminal equipment 101 also has storage and calculation functions, to store the current image 200 of the shelf 104 (the current image including the hot zone) and the standard image of the shelf 104 (the standard image including the hot zone) shot by the camera equipment 103, and to analyze and identify the current image 200 and the standard image to obtain the placing state of the item in one or more hot zones the shelf 104 includes. It can be understood that the number and the type of the terminal equipment, the cloud system, the camera equipment, the shelf, and the items placed on the shelf in FIG. 1 are only illustrative. According to implementation needs, there can be any number and any types of terminal equipment, cloud systems, camera equipment, shelves and items placed.
FIGS.2A to2Bshow schematic flowcharts of a method for acquiring an item placing state according to an embodiment of the present disclosure. As shown inFIG.2A, the method for acquiring an item placing state includes operations S210to S220. In an operation S210, an acquisition request is sent, wherein the acquisition request includes one or more association relationships, each of the association relationships includes an association relationship between a hot zone and a camera equipment, and one hot zone corresponds to one kind of item. Herein, the one or more association relationships may be for example preset. As shown inFIG.2B, the method may further include an operation S230, in which the one or more association relationships are set. As an example, an operation S230may be performed before an operation S210. According to an embodiment of the present disclosure, the hot zone may be an area having the same kind of items placed on the shelf104, and the camera equipment may be a camera equipment used for acquiring the current image of the hot zone having an association relationship with the camera equipment. Specifically, for example, the camera equipment may be the camera equipment103used for acquiring the current image of the shelf104where the hot zone having an association relationship with the camera equipment103is located. According to an embodiment of the present disclosure, the camera equipment103may be for example located in the same space with the shelf104having a preset hot zone. Specifically, for example, the camera equipment103may be arranged on a wall opposite to the shelf104, on another shelf opposite to the shelf104, or on a top wall of a space where the shelf104is located, so as to shoot a real-time image of the shelf104. The shot real-time image of the shelf104includes a real-time image of the hot zone the shelf104includes, and therefore also includes a current image of the hot zone the shelf104includes. According to an embodiment of the present disclosure, the camera equipment may acquire the current image of the hot zone having an association relationship with the camera equipment (specifically, for example, may shoot the current image of the shelf where the hot zone is located) when the terminal equipment101sends a shooting instruction to the camera equipment in response to a user's operation, or the camera equipment may periodically acquire the current image of the hot zone having an association relationship with the camera equipment according to a set period. For example, the period may be set through the terminal equipment101. The method for acquiring an item placing state may further include a corresponding operation: setting a period for the camera equipment to acquire the current image of the hot zone having an association relationship with the camera equipment, wherein the period set for acquiring may be, for example, 5 min. According to an embodiment of the present disclosure, the acquisition request may be sent by the terminal equipment101in response to a user's operation, or sent according to a preset period, so as to periodically receive the placing state of the item corresponding to at least one hot zone. The method for acquiring an item placing state may further include a corresponding operation: setting a period for receiving the placing state of the item corresponding to at least one hot zone, wherein the period set for receiving may be the same 5 min as the period for acquiring. 
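As an illustrative sketch only, and not part of the disclosed embodiments, the acquisition request and its association relationships described above could be represented by simple data structures such as the following Python classes; every class name, field name, and default value here is an assumption introduced for illustration, including the constraint between the two periods that is discussed in the next paragraph.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AssociationRelationship:
        hot_zone_id: str    # hot zone number; one hot zone holds one kind of item
        item_name: str      # name of the item placed in the hot zone
        camera_id: str      # camera equipment bound to the hot zone

    @dataclass
    class AcquisitionRequest:
        associations: List[AssociationRelationship] = field(default_factory=list)
        # period (seconds) at which the camera acquires the current image of its hot zone
        acquire_period_s: int = 300   # e.g. 5 min
        # period (seconds) at which the placing state is received
        receive_period_s: int = 300   # e.g. the same 5 min

        def validate(self) -> None:
            # the receiving period must not be shorter than the acquiring period
            if self.receive_period_s < self.acquire_period_s:
                raise ValueError("receiving period must not be shorter than acquiring period")

    # usage
    request = AcquisitionRequest(
        associations=[AssociationRelationship("hotzone-01", "Coca-Cola", "camera-A")])
    request.validate()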
According to an embodiment of the present disclosure, the period for receiving the placing state of the item corresponding to the at least one hot zone is not shorter than the period for the camera equipment to acquire the current image of the hot zone having an association relationship with the camera equipment, thereby ensuring that the placing state received by using the method for acquiring an item placing state is updated in real time according to the state of the item actually placed in the hot zone. According to an embodiment of the present disclosure, the one or more association relationships may be set by the user by using the terminal equipment101. Specifically, a process of setting the association relationship between the hot zone and the camera equipment may be, for example, a process of binding camera equipment information and hot zone information by using the terminal equipment101. Herein, the camera equipment information may, for example, include a camera equipment number, a camera equipment model, and/or a camera equipment manufacturer, etc., and the hot zone information may, for example, include a hot zone number and item information (for example, an item name, an item number, etc.) of the item placed in the hot zone. According to an embodiment of the present disclosure, setting the association relationship between the hot zone and the camera equipment may also be performed, for example, by registering the camera equipment, the shelf, and the hot zone sequentially by using the terminal equipment101. Herein, a process of registering the camera equipment may be, for example, a process of filling in camera equipment information (for example, a camera equipment number, a camera equipment model, and/or a camera equipment manufacturer, etc.). A process of registering the shelf may, for example, include first filling in shelf information (for example, a shelf number and a camera equipment number, and optionally a name of the store where the shelf is located, a shelf length, a shelf width, etc.), and then determining whether a standard image of the shelf exists in the cloud system or locally on the terminal equipment101. If the standard image of the shelf exists, the registration of the shelf is completed. If the standard image of the shelf does not exist, the placing of the items on the shelf or the shooting angle of the camera equipment is readjusted, it is then confirmed whether a real-time image of the shelf re-shot by the camera equipment qualifies as a standard image, and the registration of the shelf can only be completed after the existence of a standard image of the shelf has been confirmed. A process of registering the hot zone may, for example, include: filling in the shelf information first, and then determining whether the registration of the shelf corresponding to the shelf information is completed. If the registration of the shelf is completed, a standard image of the shelf may be read, the area range corresponding to the hot zone may be marked in the standard image, and the hot zone information (for example, a hot zone number, a name and/or a number of the item placed in the hot zone, user information such as a name of a person in charge of the hot zone, etc.) may be filled in. If the registration of the shelf is not completed, the process returns to registering the shelf, after which the process of registering the hot zone is performed again.
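The sequential registration just described could be sketched, under loose assumptions, as follows; the in-memory dictionaries, function names, and error handling are illustrative stand-ins for whatever registration backend an implementation would actually use.

    # Hypothetical sketch of the camera -> shelf -> hot zone registration flow.
    registered_cameras = {}    # camera_id -> camera information
    registered_shelves = {}    # shelf_id  -> shelf information
    registered_hot_zones = {}  # hot_zone_id -> hot zone information
    standard_images = {}       # shelf_id -> confirmed standard image (e.g. a numpy array)

    def register_camera(camera_id, model=None, manufacturer=None):
        registered_cameras[camera_id] = {"model": model, "manufacturer": manufacturer}

    def register_shelf(shelf_id, camera_id, store_name=None):
        if camera_id not in registered_cameras:
            raise ValueError("register the camera equipment first")
        if shelf_id not in standard_images:
            # per the description: readjust the placing or the shooting angle,
            # re-shoot, and confirm a standard image before completing registration
            raise RuntimeError("no standard image of the shelf; adjust and re-shoot first")
        registered_shelves[shelf_id] = {"camera_id": camera_id, "store": store_name}

    def register_hot_zone(hot_zone_id, shelf_id, item_name, coord_range, person_in_charge=None):
        if shelf_id not in registered_shelves:
            raise RuntimeError("complete the shelf registration first")
        # coord_range marks the hot zone area in the shelf's standard image,
        # e.g. (x_min, y_min, x_max, y_max)
        registered_hot_zones[hot_zone_id] = {
            "shelf_id": shelf_id, "item_name": item_name,
            "coord_range": coord_range, "person_in_charge": person_in_charge}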
In summary, by performing registration processes in above embodiments, binding between the camera equipment, the shelf and the hot zone can be achieved, so that setting an association relationship between the hot zone and the camera equipment is achieved. It can be understood that the method for setting the association relationship between the hot zone and the camera equipment is only used as an example to facilitate understanding the present disclosure, which is not limited by the present disclosure. According to an embodiment of the present disclosure, the standard image of the shelf mentioned above may be for example an image meeting a requirement and shot at a specific moment by the camera equipment. The standard image may be an image shot and obtained when the shelf has just been filled with items, and placing of the items meets a standard condition (placing of the items is at an angle, from which an item identification of the item placed is fully exposed to the camera equipment). Specifically, for example, the terminal equipment101may acquire images of the shelf shot in a plurality of time periods from the cloud system102and/or the camera equipment103and display them to the user. The user may select an image from the images of the shelf shot in a plurality of time periods as the standard image of the shelf, and may upload the standard image of the shelf to the cloud system102. As an example, the standard image of the shelf may also be the image filtered by the cloud system102from the images of the shelf shot in a plurality of time periods, according to a preset standard. According to an embodiment of the present disclosure, the one or more association relationships included in the acquisition request and the current image of the hot zone having an association relationship with the camera equipment acquired by the camera equipment103are capable of providing conditions for acquiring the placing state of the item corresponding to at least one hot zone. In an operation S220, a placing state of an item corresponding to at least one hot zone is received, wherein the placing state of the item corresponding to the at least one hot zone is determined according to a current image and a standard image of the at least one hot zone in response to the acquisition request. According to an embodiment of the present disclosure, the standard image of the hot zone mentioned above may be for example an image of an area corresponding to the hot zone among standard images of the shelf. In the standard image of the hot zone, items placed in the hot zone satisfy a standard condition: it is filled with items and an item identification is fully exposed to the camera equipment. According to an embodiment of the present disclosure, the current image of the hot zone may be for example an image including an area corresponding to the hot zone in the current image of the shelf. The placing state of the item may be determined for example according to the current image of the shelf and the standard image of the shelf, and the current image of the shelf may be obtained by filtering from a plurality of real-time images of the shelf. According to an embodiment of the present disclosure, the current image of the hot zone may also be obtained specifically through the following ways. 
According to the coordinate range of the area corresponding to the hot zone marked in the standard image of the shelf104when registering the hot zone, an image of the area corresponding to the coordinate range is obtained by dividing the current image of the shelf104, and the image obtained through this division is the current image of the hot zone. Then, as an example, the placing state of the item may be determined specifically according to the current image of the hot zone obtained through the division and the standard image of the hot zone. It can be understood that the current image of the hot zone may be obtained, for example, through the camera equipment103having an image processing function, and may also be obtained through the cloud system102or through the terminal equipment101, which is not limited in the present disclosure. According to an embodiment of the present disclosure, the current image of the hot zone may also be obtained, for example, by filtering from a plurality of real-time images. According to an embodiment of the present disclosure, the received placing state of the item corresponding to the hot zone may include, for example, a normal state and an abnormal state. Herein, for example, determining the placing state of the item according to the current image and the standard image of the at least one hot zone in response to the acquisition request may include the following steps. First, the current image and the standard image of the hot zone are identified to obtain an image feature currently included in the hot zone and a standard image feature included in the hot zone. Then, an item feature and a standard item feature corresponding to the hot zone are obtained according to the image feature and the standard image feature, and the placing state of the item is determined by comparing the item feature with the standard item feature corresponding to the hot zone. Herein, the normal state may be, for example, a state where the comparison result of the item feature and the standard item feature corresponding to the hot zone satisfies a plurality of preset conditions, and the abnormal state may be, for example, a state where the comparison result of the item feature and the standard item feature corresponding to the hot zone fails to satisfy at least one of the plurality of preset conditions. According to an embodiment of the present disclosure, the item feature and the standard item feature corresponding to the hot zone may also be obtained, for example, in the following way. The standard image feature obtained from the standard image of the hot zone and the image feature obtained from the current image of the hot zone are used as an input, and the item feature and the standard item feature corresponding to the hot zone are obtained by using a pre-trained model. For example, the item feature corresponding to the hot zone may include an item name, a quantity of items corresponding to the item name, an item identification and/or an item location (i.e., a coordinate location in the current image of the hot zone) corresponding to the item name. Correspondingly, for example, the standard item feature may include a standard item name, a quantity of items corresponding to the standard item name, an item identification and/or a standard item location (i.e., a coordinate location in the standard image of the hot zone) corresponding to the standard item name.
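The division described above, in which the current image of the hot zone is cut out of the current image of the shelf using the coordinate range marked at registration, amounts to a simple crop. A minimal sketch, assuming images are numpy arrays and that the current shelf image is aligned with the standard image, is shown below; the function name and coordinate convention are illustrative assumptions.

    import numpy as np

    def crop_hot_zone(shelf_image: np.ndarray, coord_range) -> np.ndarray:
        """coord_range is (x_min, y_min, x_max, y_max) in pixel coordinates of the shelf image."""
        x_min, y_min, x_max, y_max = coord_range
        return shelf_image[y_min:y_max, x_min:x_max]

    # The same crop applied to the standard image of the shelf yields the standard
    # image of the hot zone, so the two images can then be compared per hot zone:
    # current_hot_zone = crop_hot_zone(current_shelf_image, (120, 560, 640, 820))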
According to an embodiment of the present disclosure, the item feature corresponding to the hot zone may also be obtained, for example, in the following way. Image features obtained from the current image of the shelf to which the hot zone pertains, together with the coordinate range of the hot zone in the standard image of that shelf, are used as an input, and an item feature within that coordinate range of the real-time image of the shelf is obtained by using a pre-trained model; this is the item feature corresponding to the hot zone. Correspondingly, the standard item feature corresponding to the hot zone may be obtained, for example, in the following way. Areas corresponding to one or more hot zones are marked in advance in the standard image of the shelf to which the hot zone pertains, the marked standard image is used as an input, and the item feature corresponding to the one or more hot zones in the standard image of the shelf is obtained by using a pre-trained model. According to an embodiment of the present disclosure, as shown inFIG.2B, the method for acquiring an item placing state may further include, for example, an operation S240. In the operation S240, the placing state of the item corresponding to one or more hot zones among the at least one hot zone is displayed. According to an embodiment of the present disclosure, the terminal equipment101may have a display, the method for acquiring an item placing state may be performed by the terminal equipment101, and the placing state of the item corresponding to one or more hot zones is displayed through the display of the terminal equipment101, so that a user can learn in real time the state of items placed in each hot zone on the shelf from the displayed content. According to an embodiment of the present disclosure, the display of the terminal equipment101may further display, for example, at least one of a name of the store where the shelf to which the hot zone pertains is placed, a shelf number of the shelf to which the hot zone pertains, a hot zone number, an item name, an item number, a standard image of the hot zone, a real-time image of the hot zone, and user information (for example, a name of a person in charge of the hot zone), etc. In summary, in the method for acquiring an item placing state of the embodiment of the present disclosure, by sending an acquisition request, the placing state of the item corresponding to the hot zone may be received, so that the user can learn the placing state of the item periodically or in real time and judge whether a tally is required. Since the method does not require the user to inspect the area where the shelf to which the hot zone pertains is located, it can effectively improve tally efficiency, avoid the serious delay and poor timeliness of a usual tally, and reduce the tally cost. Furthermore, since the placing state of the item is acquired based on the standard image and the current image of the hot zone rather than on a user's judgment drawn from tally experience, the tally accuracy may be improved to a certain extent. FIGS.3A to3Bshow schematic flowcharts of a method for acquiring an item placing state according to another embodiment of the present disclosure.FIG.4shows a schematic displaying screen obtained according to the method for acquiring an item placing state described with reference toFIG.3A. As shown inFIG.3A, in addition to operations S210to S220described with reference toFIG.2A, the method for acquiring an item placing state may also include operations S350and S360.
According to an embodiment of the present disclosure, for example, an operation S350may be performed after operation S220, or performed when a user inputs a filtering condition. In the operation S350, according to a filtering condition input by a user, a placing state of an item corresponding to one or more hot zones and matching the filtering condition is obtained for display by filtering from the placing states of the items corresponding to the at least one hot zone. Correspondingly, in an operation S360, the placing state of the item obtained by the filtering, which corresponds to one or more hot zones and matches the filtering condition, is displayed. According to an embodiment of the present disclosure, the terminal equipment having a display may also display, for example, a filtering condition input window, so as to make it easier for the user to input the filtering condition. As an example, the filtering condition may include at least one of a camera equipment number, a hot zone number, an item name, and/or a placing state of an item. In the case where a user inputs a camera equipment number, the filtering result of operation S350is the placing state of the items corresponding to all hot zones having an association relationship with the camera equipment corresponding to that camera equipment number. In the case where the user inputs an item name, the filtering result of operation S350is the placing state of the items in all hot zones where the item corresponding to that item name is placed. According to an embodiment of the present disclosure, for example, the placing state of the item acquired in each time period may be stored in a local storage space (for example, in the terminal equipment shown inFIG.1); the filtering condition input by the user may then further include a time period, and the filtering result of operation S350is the placing states of all items stored in the local storage space within that time period. If the association relationship between the hot zone and the camera equipment is set by using the registering method mentioned above, the filtering condition input by the user may further include a store name, a shelf number, etc., and the filtering result of operation S350is the placing state of the items stored in the local storage space and corresponding to the hot zones included in the store/shelf corresponding to the store name/shelf number. According to an embodiment of the present disclosure, for example, the method for acquiring an item placing state may further periodically clear the placing states stored in the local storage space, so that the local storage space only stores the placing state of the most recent time period, reducing the load of the local storage space. According to an embodiment of the present disclosure, for actual application scenarios, such as a situation where a user is in charge of the placing state of items in a plurality of hot zones, an association relationship among the user information, the plurality of hot zones and the camera equipment having an association relationship with the plurality of hot zones may also be set. The filtering condition input by the user may then also include the user information, and the filtering result of operation S350is the placing state of the items corresponding to the plurality of hot zones that the user corresponding to the user information is in charge of. Herein, the user information may include, for example, a name, a position, and/or an employee number of the person in charge.
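A minimal sketch of the filtering step of operation S350, under the assumption that each stored placing state is a flat record (a dictionary), is given below; the field names, the record layout, and the idea of ignoring unspecified conditions are illustrative assumptions rather than the disclosed implementation.

    from typing import Dict, List, Optional

    def filter_placing_states(records: List[Dict],
                              camera_id: Optional[str] = None,
                              hot_zone_id: Optional[str] = None,
                              item_name: Optional[str] = None,
                              state: Optional[str] = None,
                              person_in_charge: Optional[str] = None) -> List[Dict]:
        """Each record is assumed to look like
        {"camera_id": ..., "hot_zone_id": ..., "item_name": ..., "state": ...,
         "person_in_charge": ..., "timestamp": ...}; a condition left as None is ignored."""
        conditions = {"camera_id": camera_id, "hot_zone_id": hot_zone_id,
                      "item_name": item_name, "state": state,
                      "person_in_charge": person_in_charge}
        active = {k: v for k, v in conditions.items() if v is not None}
        return [r for r in records if all(r.get(k) == v for k, v in active.items())]

    # e.g. all hot zones watched by camera-A whose state is "out of stock":
    # filter_placing_states(records, camera_id="camera-A", state="out of stock")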
It can be understood that the filtering conditions mentioned above are only examples to facilitate understanding, and the present disclosure is not limited thereto. The filtering condition input may also combine two or more of the filtering conditions listed above. For example, the input filtering condition may be a store name together with user information, etc. According to an embodiment of the present disclosure, the displaying screen obtained by the method for acquiring an item placing state described with reference toFIG.3Ais shown inFIG.4, and the user may input a filtering condition through the displaying screen inFIG.4. Specifically, the displaying screen displays a filtering condition option and a displaying result of the placing state of the item, which is obtained by filtering in response to the user's clicking on the "query" control widget after a filtering condition is entered. The placing state of the item may include, for example, the states "normal", "out of stock", "irregular" and "misplaced", etc. The specific meaning of these states will be described in the following content. In summary, in the method for acquiring an item placing state of the embodiment of the present disclosure, the user can input a filtering condition as needed to flexibly retrieve the placing state of the item he or she is concerned with, so that the displayed result is more in line with the user's needs and the efficiency with which the user checks the placing state of the item is improved. As shown inFIG.3B, in addition to operations S210to S220described with reference toFIG.2A, the method for acquiring an item placing state of the embodiment of the present disclosure may also include, for example, an operation S370, and the operation S370may be performed after the operation S220. In the operation S370, an alarm signal is sent to the terminal equipment in the case where the placing state of the item corresponding to one or more hot zones among the at least one hot zone is an abnormal state. According to an embodiment of the present disclosure, the terminal equipment herein may be, for example, the terminal equipment101described with reference toFIG.1, or a portable electronic device such as a smart phone or a tablet computer. The alarm signal or information may be, for example, a message sent to the terminal equipment101, an instruction to play specific music or a ringtone sent to the terminal equipment, or an instruction to pop up a specific window sent to the terminal equipment, so that a holder of the terminal equipment can notice that the placing state of the item is abnormal and perform corresponding tally operations according to the abnormal state. Therefore, by using the method for acquiring an item placing state mentioned above, the user can find in time that the placing state of the item does not meet a requirement or that the item is out of stock, so that a tally operation can be performed immediately, thus avoiding the poor customer shopping experience and the decrease in store sales caused by poor tally timeliness in the prior art. FIG.5shows a schematic flowchart of acquiring a current image of a hot zone according to an embodiment of the present disclosure. As shown inFIG.5, acquiring a current image of the hot zone having an association relationship with the camera equipment includes operations S510to S530.
In an operation S510, a plurality of real-time images of the hot zone having an association relationship with the camera equipment are acquired. Herein, the plurality of real-time images are continuously acquired according to a time interval. According to an embodiment of the present disclosure, the operation may be specifically, for example, that the camera equipment continuously acquires a plurality of real-time images of the shelf where the hot zone having an association relationship with the camera equipment is located according to a set time interval, and performs the following operations S520to S530by using the plurality of real-time images of the shelf as the plurality of real-time images of the hot zone. According to an embodiment of the present disclosure, the operation may specifically be, for example, that after a plurality of real-time images of the shelf is acquired, according to a coordinate range in the standard image of the shelf where the hot zone having an association relationship with the camera equipment is located, an image corresponding to the coordinate range is acquired from the plurality of real-time images of the shelf, that is, a plurality of real-time images of the hot zone having an association relationship with the camera equipment is acquired, and the following operations S520to S530are performed on the basis of the plurality of real-time images of the hot zone acquired from the plurality of real-time images of the shelf. According to an embodiment of the present disclosure, a value of the time interval may be any value from 1 to 30 seconds, and the time interval may be set by the terminal equipment101with reference toFIG.1. In an operation S520, the plurality of real-time images of the hot zone is identified to obtain the quantity of items included in the plurality of real-time images. According to an embodiment of the present disclosure, identifying the plurality of real-time images of the hot zone may be, for example, performed through an object detection algorithm, so as to obtain the quantity of items included in each of the plurality of real-time images. Specifically, for example, a Faster-RCNN algorithm may be used to detect the items included in the plurality of real-time images. It can be understood that the method for identifying the plurality of real-time images of the hot zone mentioned above is only used as an example to facilitate understanding the present disclosure, and is not limited by the present disclosure. In an operation S530, the current image of the hot zone having an association relationship with the camera equipment is determined, and the current image is a real-time image that includes the largest quantity of items among the plurality of real-time images. According to an embodiment of the present disclosure, operations of acquiring the current image mentioned above may be performed by a camera equipment having a data processing function, and an output of the camera equipment is directly the current image of the hot zone having an association relationship with the camera equipment. According to an embodiment of the present disclosure, since the camera equipment having a processing function is expensive, operations of acquiring the current image mentioned above may also be performed by the cloud system102or the terminal equipment101described with reference toFIG.1. 
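Operations S510to S530amount to keeping, out of a burst of periodically captured frames, the frame in which the detector finds the most items. A minimal sketch is shown below; detect_items() is a placeholder for the object detector mentioned in the text (for example a Faster-RCNN based detector), and its return format is an assumption made for illustration.

    from typing import Callable, List, Sequence

    def select_current_image(real_time_images: Sequence,
                             detect_items: Callable[[object], List[dict]]):
        """Return the real-time image containing the largest quantity of detected items."""
        best_image, best_count = None, -1
        for image in real_time_images:
            # e.g. detections = [{"name": "Coca-Cola", "box": (x1, y1, x2, y2)}, ...]
            detections = detect_items(image)
            if len(detections) > best_count:
                best_image, best_count = image, len(detections)
        return best_image, best_count

Choosing the most-populated frame is what mitigates the occlusion problem discussed in the following paragraph: a frame in which a customer blocks part of the shelf tends to yield fewer detections and is therefore discarded.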
In this case, the plurality of real-time images of the shelf are shot by the camera equipment, and the camera equipment may interact with the cloud system102or the terminal equipment101to send the shot real-time images of the shelf to the cloud system102or the terminal equipment101, thereby reducing the cost of performing the method for acquiring the placing state of the item to a certain extent. According to an embodiment of the present disclosure, since a shelf may have various kinds of items placed thereon, the shelf may have a plurality of hot zones, and the space that the camera equipment can photograph is not limited to one hot zone, or even to one shelf. Thus, one camera equipment may have an association relationship with one or more hot zones; specifically, for example, the camera equipment may establish an association relationship with all pre-set hot zones it shoots. According to an embodiment of the present disclosure, when the camera equipment acquires a real-time image of the shelf, there may be a customer standing in front of the shelf, and some items on the shelf may be occluded by the customer in the real-time image shot in such a case; the placing state of the item determined from such a real-time image will therefore be inaccurate. To avoid this, the embodiment of the present disclosure acquires a plurality of real-time images continuously shot by the camera equipment, and selects the real-time image having the largest quantity of items from the plurality of real-time images as the current image for determining the placing state of the item, which can improve the accuracy of the acquired placing state to a certain extent. FIG.6shows a schematic operation flowchart of a processing method according to an embodiment of the present disclosure. As shown inFIG.6, the processing method includes operations S610to S630. In an operation S610, an acquisition request is received, wherein the acquisition request includes one or more association relationships, each of the association relationships includes an association relationship between a hot zone and a camera equipment, and one hot zone corresponds to one kind of item. According to an embodiment of the present disclosure, the camera equipment is used to acquire a current image of a hot zone having an association relationship with the camera equipment. According to an embodiment of the present disclosure, specifically, the current image of the hot zone herein may be a current image of the shelf where the hot zone is located, and the current image may be filtered from a plurality of real-time images; or the current image of the hot zone herein may be obtained by dividing the current image of the shelf where the hot zone is located. According to an embodiment of the present disclosure, specifically, the acquisition request may be, for example, an acquisition request sent with reference to operation S210inFIG.2A, and will not be repeated here. In an operation S620, a current image and a standard image of at least one hot zone are acquired in response to the acquisition request. According to an embodiment of the present disclosure, the current image and the standard image may be determined from real-time images of the shelf shot by the camera equipment at various time points.
Among them, the current image of the hot zone may be obtained through the method for acquiring the current image of the hot zone described with reference toFIG.5, and the standard image of the hot zone may be, for example, a real-time image, obtained by filtering, that meets a standard condition. As an example, the standard condition may be that the quantity of items placed in the hot zone is saturated (the hot zone is filled), and the items are placed so that the item identification is fully exposed to the camera equipment. According to an embodiment of the present disclosure, the current image and the standard image of the at least one hot zone may be stored in a cloud storage space interacting with the camera equipment. For example, the cloud storage space may store all images shot by a plurality of camera equipment, from which the operation S620may acquire the current image and the standard image of the at least one hot zone. According to an embodiment of the present disclosure, the cloud storage space may also, for example, periodically clear the current image of the at least one hot zone it stores, so that the cloud storage space only stores the current image of the at least one hot zone for a recent time period (for example, within a week). In an operation S630, a placing state of an item corresponding to the at least one hot zone is determined according to the current image and the standard image of the at least one hot zone. According to an embodiment of the present disclosure, the operation S630may be implemented, for example, through image recognition and a comparison algorithm. Specifically, the current image and the standard image of the at least one hot zone are identified and compared, and an item placing state judgment logic is applied to obtain the placing state of the item corresponding to the at least one hot zone. According to an embodiment of the present disclosure, the image recognition may be implemented mainly through an object detection algorithm (specifically, a Faster-RCNN algorithm), so as to obtain an item feature included in the current image of the at least one hot zone. The item feature may include at least one of an item name, the quantity of items corresponding to the item name, an item identification and an item location corresponding to the item name, etc. Herein, a standard item feature may be obtained by identifying the standard image of the at least one hot zone in advance by using the image recognition. The standard item feature may include at least one of a standard item name, the quantity of standard items corresponding to the standard item name, a standard item identification and a standard item location corresponding to the standard item name. According to an embodiment of the present disclosure, specifically, for example, the judgment logic may determine, for the current image of the at least one hot zone obtained through the image recognition: whether each item name is the same as a standard item name; whether the ratio of the quantity of items whose item name is the same as the standard item name to the quantity of standard items corresponding to that standard item name is greater than a first ratio; and, among the items whose item name is the same as the standard item name, whether the ratio of the quantity of item identifications whose similarity with the standard item identification corresponding to the standard item name is lower than a pre-set similarity, to the total quantity of the item identifications corresponding to that item name, is less than a second ratio.
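An illustrative sketch of this judgment logic is given below. The thresholds and the feature layout are assumptions: first_ratio and second_ratio stand for the first and second ratios of the text, min_similarity for the pre-set similarity, and each feature entry is assumed to carry the detected quantity and the identification similarities for one item name within one hot zone.

    def judge_placing_state(item_features, standard_features,
                            first_ratio=0.5, second_ratio=0.5, min_similarity=0.5):
        """item_features / standard_features: {item_name: {"count": int,
        "id_similarities": [float, ...]}} for a single hot zone."""
        # any detected item name absent from the standard features -> "misplaced"
        if any(name not in standard_features for name in item_features):
            return "misplaced"
        for name, std in standard_features.items():
            current = item_features.get(name, {"count": 0, "id_similarities": []})
            # too few items of the expected kind relative to the standard -> "out of stock"
            if current["count"] < first_ratio * std["count"]:
                return "out of stock"
            # too many items whose identification is poorly exposed -> "irregular"
            sims = current["id_similarities"]
            if sims:
                poorly_exposed = sum(1 for s in sims if s < min_similarity)
                if poorly_exposed / len(sims) > second_ratio:
                    return "irregular"
        return "normal"

With the worked numbers used later in the text, a detected quantity of 2 against a standard quantity of 6 yields 2/6 < 0.5 and the sketch returns "out of stock", and 4 poorly exposed identifications out of 6 yield 4/6 > 0.5 and it returns "irregular".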
It can be understood that the judgment logic above is only an example to facilitate understanding, and the present disclosure is not limited thereto. FIG.7shows a schematic flowchart of determining an item placing state according to an embodiment of the present disclosure. As shown inFIG.7, with reference toFIG.6, an operation S630for determining the placing state of an item may specifically include operations S631to S633. In an operation S631, the current image of the at least one hot zone is identified to obtain an image feature included in the current image of the at least one hot zone. In an operation S632, an item feature corresponding to the at least one hot zone is determined based on the image feature included in the current image of the at least one hot zone. In an operation S633, the item feature corresponding to the at least one hot zone is compared with a standard item feature corresponding to the at least one hot zone to determine the placing state of the item corresponding to the at least one hot zone. According to an embodiment of the present disclosure, the operations S631to S632described above may be specifically performed by using an image recognition algorithm (for example, a Faster-RCNN algorithm), or performed through a deep learning model trained by using the image recognition algorithm as a basic algorithm, using a large quantity of images including different items as training data, and using real-time images of the shelf as test data. An input of the model may be, for example, a divided current image of the shelf obtained through the operation described with reference toFIG.5, or may be a current image of the shelf obtained through the operation described with reference toFIG.5together with the coordinate range of the area corresponding to one or more hot zones in the standard image of the shelf. An output of the model is the item feature corresponding to the hot zone obtained in operation S632, or the item features corresponding to one or more hot zones included in the shelf. In the case where the item features corresponding to a plurality of hot zones are acquired, the item features may be grouped by hot zone according to the item location corresponding to the item name in the item features. According to an embodiment of the present disclosure, specifically, for example, an operation S633may obtain the placing state of the item corresponding to the at least one hot zone by using the judgment logic described above, and will not be repeated here. FIGS.8A to8Cshow schematic diagrams of an item placing state being an abnormal state according to an embodiment of the present disclosure. The placing state of the item corresponding to the hot zone determined with reference toFIGS.6to7includes a normal state and an abnormal state. Herein, the abnormal state may include at least one of the following situations. Among the item features corresponding to the hot zone, there exists an item name that is different from the standard item name in the corresponding standard item features. Specifically, for the standard image of the shelf shown on the left inFIG.8A, the standard item name corresponding to a hot zone (an area enclosed by a dotted box) included in the shelf is, for example, "Coca-Cola". For the real-time image (current image) of the shelf shown on the right inFIG.8A, the item names corresponding to the hot zone obtained with reference to operation S632inFIG.7include "Coca-Cola" and "Wang Lao Ji".
Since the obtained item name "Wang Lao Ji" is different from the standard item name "Coca-Cola", the placing state of the item corresponding to the hot zone is the abnormal state "misplaced". It can be understood that the standard image, the real-time image, the standard item information, and the item information corresponding to the hot zone are only examples to facilitate understanding, and the present disclosure is not limited thereto. Among the item features corresponding to the hot zone, the ratio of the quantity of items corresponding to an item name that is the same as a standard item name in the corresponding standard item features, to the quantity of standard items corresponding to that standard item name, is less than a first ratio. Specifically, for example, the first ratio is 0.5. For the standard image of the shelf shown on the left inFIG.8B, the standard item name corresponding to a hot zone (an area enclosed by a dotted box) included in the shelf is, for example, "Coca-Cola", and the quantity of standard items corresponding to the standard item name is 6. For the real-time image (current image) of the shelf shown on the right inFIG.8B, among the item names corresponding to the hot zone obtained with reference to operation S632inFIG.7, the item name that is the same as the standard item name is "Coca-Cola", and the quantity of items corresponding to that item name is 2. The ratio of this quantity to the quantity 6 of standard items corresponding to the standard item name is 2/6=⅓, which is less than the first ratio 0.5. The placing state of the item corresponding to the hot zone is therefore the abnormal state "out of stock". It can be understood that the standard image, the real-time image, the standard item information, the item information corresponding to the hot zone and the value of the first ratio are only examples to facilitate understanding, and the present disclosure is not limited thereto. According to an embodiment of the present disclosure, if the ratio of the quantity of the items to the quantity of the standard items corresponding to the standard item name is not an integer, the ratio may be compared with the first ratio after being rounded up. Among the item identifications corresponding to an item name that is the same as a standard item name in the corresponding standard item features, the ratio of the quantity of item identifications whose similarity with the standard item identification corresponding to that standard item name is lower than a pre-set similarity, to the total quantity of the item identifications corresponding to that item name, is greater than a second ratio. Specifically, the pre-set similarity is 50%, and the second ratio is 0.5. For the standard image of the shelf shown on the left inFIG.8C, the standard item name corresponding to a hot zone (an area enclosed by a dotted box) included in the shelf is "Coca-Cola", and the standard item identification corresponding to the standard item name is a vertically arranged "Coca-Cola" pattern completely exposed on the surface of the can body of the item in the image. For the real-time image (current image) of the shelf shown on the right inFIG.8C, among the item names corresponding to the hot zone obtained with reference to operation S632inFIG.7, the item name that is the same as the standard item name is "Coca-Cola", and the quantity of items corresponding to that item name is 6.
The item identifications corresponding to the item name include not only a completely exposed "Coca-Cola" pattern but also a partially exposed "Coca-Cola" pattern, and the similarity between the partially exposed "Coca-Cola" pattern and the completely exposed "Coca-Cola" pattern is only 30%, i.e., the exposed proportion of the word "Coca-Cola" in the partially exposed pattern is less than 30%. The quantity of partially exposed "Coca-Cola" patterns is 4, and the ratio of this quantity to the quantity 6 of items is 4/6=⅔, which is greater than the second ratio 0.5. As a result, the placing state of the item corresponding to the hot zone is the abnormal state "irregular". It can be understood that the standard image, the real-time image, the standard item information, the item information corresponding to the hot zone, the value of the pre-set similarity and the value of the second ratio are only examples to facilitate understanding, and the present disclosure is not limited thereto. According to an embodiment of the present disclosure, if the ratio of the quantities mentioned above is not an integer, the ratio may be compared with the second ratio after being rounded up. In a situation that does not fall into any of the abnormal states, the placing state of the item corresponding to the hot zone is the normal state. FIGS.9A to9Eshow schematic structural block diagrams of a system for acquiring an item placing state according to an embodiment of the present disclosure. As shown inFIG.9A, the system900for acquiring an item placing state may interact with a processing system (such as the cloud system102described with reference toFIG.1), and the system900for acquiring an item placing state includes an acquisition request sending module910and a placing state receiving module920. Herein, the acquisition request sending module910is used to send an acquisition request. The acquisition request includes one or more association relationships, and each of the association relationships includes an association relationship between a hot zone and a camera equipment. One hot zone corresponds to one kind of item. Herein the camera equipment is configured to acquire a current image of a hot zone having an association relationship with the camera equipment. According to an embodiment of the present disclosure, the acquisition request sending module910may perform the operation S210described with reference toFIG.2A, and will not be repeated here. Herein, the placing state receiving module920is used to receive a placing state of an item corresponding to at least one hot zone. The placing state of the item corresponding to the at least one hot zone is determined according to a current image and a standard image of the at least one hot zone by the processing system in response to the acquisition request. Herein, according to an embodiment of the present disclosure, the placing state receiving module920may perform the operation S220described with reference toFIG.2A, and will not be repeated here. According to an embodiment of the present disclosure, as shown inFIG.9B, the system900for acquiring an item placing state may further include, for example, a parameter setting module930. The parameter setting module930is used to set the one or more association relationships included in the acquisition request.
According to an embodiment of the present disclosure, the parameter setting module930may perform the operation S230described with reference toFIG.2B, and will not be repeated here. According to an embodiment of the present disclosure, the parameter setting module930may also for example be used to set a period for the camera equipment acquiring the current image of the hot zone having an association relationship with the camera equipment, and/or the placing state receiving module920may also for example be used to set a period for receiving the placing state of the item corresponding to the at least one hot zone. Herein, the period for receiving the placing state of the item corresponding to the at least one hot zone is not shorter than a period for the camera equipment to acquire the current image of the hot zone having an association relationship with the camera equipment. According to an embodiment of the present disclosure, as shown inFIG.9C, the system900for acquiring an item placing state may further include a displaying module940. The displaying module940is used to display a placing state of an item corresponding to one or more of the at least one hot zone. According to an embodiment of the present disclosure, the displaying module940may perform the operation S240described with reference toFIG.2B, and will not be repeated here. According to an embodiment of the present disclosure, as shown inFIG.9D, the system900for acquiring an item placing state may further include a filtering module950. The filtering module950is used to obtain a placing state of an item matching a filtering condition and corresponding to one or more hot zones for display, by filtering from the placing state of the item corresponding to the at least one hot zone, according to a filtering condition input by a user. According to an embodiment of the present disclosure, the filtering module950may be used to perform the operation S350described with reference toFIG.3A, and correspondingly, the displaying module940may also be used to perform the operation S360described with reference toFIG.3A, and will not be repeated here. According to an embodiment of the present disclosure, the placing state of the item corresponding to the at least one hot zone includes a normal state and an abnormal state. As shown inFIG.9E, the system900for acquiring an item placing state may further include for example an alarming module960. The alarming module960is used to send an alarm signal to the terminal equipment under the case where the placing state of the item corresponding to one or more of the at least one hot zone is an abnormal state. According to an embodiment of the present disclosure, the alarming module960may perform the operation S370described with reference toFIG.3B, and will not be repeated here. 
According to an embodiment of the present disclosure, acquiring of the current image of the hot zone having an association relationship with the camera equipment includes: acquiring a plurality of real-time images of the hot zone having an association relationship with the camera equipment, wherein the plurality of real-time images are continuously acquired according to a time interval; identifying the plurality of real-time images to acquire the quantity of items included in the plurality of real-time images; and determining a current image of the hot zone having an association relationship with the camera equipment, wherein the current image is the real-time image having the largest quantity of items in the plurality of real-time images. Herein, the camera equipment has an association relationship with one or more hot zones. According to an embodiment of the present disclosure, acquiring of the current image of the hot zone having an association relationship with the camera equipment may be obtained through operations S510to S530described with reference toFIG.5, and will not be repeated here. According to an embodiment of the present disclosure, the system900for acquiring an item placing state may be disposed in the terminal equipment101described with reference toFIG.1. Functions of two or more of modules, sub-modules, units, and subunits according to the embodiments of the present disclosure, or at least a part thereof, may be implemented in one module. One or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be split into multiple modules for implementation. One or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be at least partially implemented as hardware circuits, such as field programmable gate array (FPGA), programmable logic array (PLA), system-on-chip, system-on-substrate, system-on-package, application specific integrated circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable way that integrates or encapsulates the circuit, or by any one of the three implementation modes of software, hardware and firmware or in an appropriate combination of any of them. Alternatively, one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be at least partially implemented as a computer program module, and when the computer program module is executed, it may perform corresponding functions. For example, two or more of the acquisition request sending module910, the placing state receiving module920, the parameter setting module930, the displaying module940, the filtering module950and the alarming module960may be combined into one module for implementation, or one of the acquisition request sending module910, the placing state receiving module920, the parameter setting module930, the displaying module940, the filtering module950and the alarming module960may be split into multiple modules. Alternatively, at least part of the functions of the one or more of these modules may be combined with at least part of the functions of other modules and implemented in one module. 
According to an embodiment of the present disclosure, at least one of the acquisition request sending module910, the placing state receiving module920, the parameter setting module930, the displaying module940, the filtering module950and the alarming module960may be at least partially implemented as a hardware circuit, such as field programmable gate array (FPGA), programmable logic array (PLA), system-on-chip, system-on-substrate, system-on-package, application specific integrated circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable way that integrates or encapsulates the circuit, or by any one of the three implementation modes of software, hardware and firmware or in an appropriate combination of any of them. Alternatively, at least one of the acquisition request sending module910, the placing state receiving module920, the parameter setting module930, the displaying module940, the filtering module950and the alarming module960may be at least partially implemented as a computer program module, and when the computer program module is executed, it may perform corresponding functions. FIGS.10A to10Cshow schematic structural block diagrams of a processing system according to an embodiment of the present disclosure. As shown inFIG.10A, the processing system1000may for example interact with the system for acquiring an item placing state described with reference toFIG.9. The processing system1000includes an acquisition request receiving module1010, an image acquiring module1020and a placing state determining module1030. Among them, the acquisition request receiving module1010is used to receive an acquisition request sent by the system for acquiring an item placing state. Herein, the acquisition request includes one or more association relationships, and each of the association relationships includes an association relationship between a hot zone and a camera equipment. One hot zone corresponds to one kind of item. Herein, the camera equipment is configured to acquire a current image of a hot zone having an association relationship with the camera equipment. According to an embodiment of the present disclosure, the acquisition request receiving module1010may for example be used to perform the operation S610described with reference toFIG.6, and will not be repeated here. Herein, the image acquiring module1020is used to acquire a current image and a standard image of at least one hot zone in response to the acquisition request. According to an embodiment of the present disclosure, the image acquiring module1020may for example be used to perform the operation S620described with reference toFIG.6, and will not be repeated here. According to an embodiment of the present disclosure, acquiring of the current image of the hot zone having an association relationship with the camera equipment includes: acquiring a plurality of real-time images of the hot zone having an association relationship with the camera equipment, wherein the plurality of real-time images are continuously acquired according to a time interval; identifying the plurality of real-time images to acquire the quantity of items included in the plurality of real-time images; and determining a current image of the hot zone having an association relationship with the camera equipment, wherein the current image is the real-time image having the largest quantity of items in the plurality of real-time images. Herein, the camera equipment has an association relationship with one or more hot zones. 
According to an embodiment of the present disclosure, the current image may be acquired through the operations described with reference toFIG.5, and the description will not be repeated here. Herein, the placing state determining module1030is used to determine a placing state of an item corresponding to the at least one hot zone according to the current image and the standard image of the at least one hot zone. According to an embodiment of the present disclosure, the placing state determining module1030may, for example, be used to perform the operation S630described with reference toFIG.6, and will not be repeated here. According to an embodiment of the present disclosure, as shown inFIG.10B, the placing state determining module1030may, for example, include an image feature identifying sub-module1031, an item feature determining sub-module1032and a placing state determining sub-module1033. The image feature identifying sub-module1031is used to identify the current image of the at least one hot zone to obtain an image feature included in the current image of the at least one hot zone. The item feature determining sub-module1032is used to determine an item feature corresponding to the at least one hot zone based on the image feature included in the current image of the at least one hot zone. The placing state determining sub-module1033is used to compare the item feature corresponding to the at least one hot zone with a standard item feature corresponding to the at least one hot zone, to determine the placing state of the item corresponding to the at least one hot zone. Herein, the standard item feature corresponding to the at least one hot zone may be obtained based on the standard image of the at least one hot zone by using the image feature identifying sub-module1031and the item feature determining sub-module1032described above. According to an embodiment of the present disclosure, the image feature identifying sub-module1031, the item feature determining sub-module1032, and the placing state determining sub-module1033may be respectively used to perform operations S631to S633described with reference toFIG.7, and will not be repeated here. According to an embodiment of the present disclosure, the placing state of the item includes a normal state and an abnormal state, and the abnormal state includes at least one of the following: among the item features corresponding to the hot zone, there exists an item name that is different from the standard item name in the corresponding standard item features; among the item features corresponding to the hot zone, the ratio of the quantity of items corresponding to an item name that is the same as a standard item name in the corresponding standard item features, to the quantity of standard items corresponding to that standard item name, is less than a first ratio; or, among the item identifications corresponding to an item name that is the same as a standard item name in the corresponding standard item features, the ratio of the quantity of item identifications whose similarity with the standard item identification corresponding to that standard item name is lower than a pre-set similarity, to the total quantity of the item identifications corresponding to that item name, is greater than a second ratio. Herein, the item feature includes at least one of the item name, the quantity of items corresponding to the item name, the item identification and/or the item location corresponding to the item name.
According to an embodiment of the present disclosure, the abnormal state may include, for example, the abnormal state “misplaced”, “out of stock”, and “irregular” described with reference toFIGS.8A to8C, and will not be repeated here. According to an embodiment of the present disclosure, the processing system may be a cloud system102described with reference toFIG.1, and the cloud system102may interact with a camera equipment to receive a real-time image of a hot zone uploaded by the camera equipment. According to an embodiment of the present disclosure, as shown inFIG.100, the processing system may further include an image storing module1040. The image storing module1040is used to store the current image of the hot zone having an association relationship with the camera equipment acquired by the camera equipment and the standard image of the at least one hot zone, for an acquisition by the image acquiring module1020. Functions of two or more of modules, sub-modules, units, and subunits according to the embodiments of the present disclosure, or at least a part thereof, may be implemented in one module. One or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be split into multiple modules for implementation. One or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be at least partially implemented as hardware circuits, such as field programmable gate array (FPGA), programmable logic array (PLA), system-on-chip, system-on-substrate, system-on-package, application specific integrated circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable way that integrates or encapsulates the circuit, or by any one of the three implementation modes of software, hardware and firmware or in an appropriate combination of any of them. Alternatively, one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be at least partially implemented as a computer program module, and when the computer program module is executed, it may perform corresponding functions. For example, two or more of the acquisition request receiving module1010, the image acquiring module1020, the placing state determining module1030, the image storing module1040, the image feature identifying sub-module1031, the item feature determining sub-module1032and the placing state determining sub-module1033may be combined into one module for implementation, or one of the acquisition request receiving module1010, the image acquiring module1020, the placing state determining module1030, the image storing module1040, the image feature identifying sub-module1031, the item feature determining sub-module1032and the placing state determining sub-module1033may be split into multiple modules. Alternatively, at least part of the functions of the one or more of these modules may be combined with at least part of the functions of other modules and implemented in one module. 
According to an embodiment of the present disclosure, at least one of the acquisition request receiving module1010, the image acquiring module1020, the placing state determining module1030, the image storing module1040, the image feature identifying sub-module1031, the item feature determining sub-module1032and the placing state determining sub-module1033may be at least partially implemented as a hardware circuit, such as field programmable gate array (FPGA), programmable logic array (PLA), system-on-chip, system-on-substrate, system-on-package, application specific integrated circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable way that integrates or encapsulates the circuit, or by any one of the three implementation modes of software, hardware and firmware or in an appropriate combination of any of them. Alternatively, at least one of the acquisition request receiving module1010, the image acquiring module1020, the placing state determining module1030, the image storing module1040, the image feature identifying sub-module1031, the item feature determining sub-module1032and the placing state determining sub-module1033may be at least partially implemented as a computer program module, and when the computer program module is executed, it may perform corresponding functions. FIG.11shows a schematic block diagram of a computer system suitable for implementing a method for acquiring an item placing state or a processing method according to an embodiment of the present disclosure. The computer system shown inFIG.11is only an example, and should not bring any limitation to the functions and application scope of the embodiment of the present disclosure. As shown inFIG.11, a computer system1100according to an embodiment of the present disclosure includes a processor1101, wherein the processor1101may execute various appropriate actions and processing according to a program stored in a read-only memory (ROM)1102or a program loaded from a storage part1108to a random access memory (RAM)1103. The processor1101may include, for example, a general-purpose microprocessor (e.g., a CPU), an instruction set processor and/or a related chipset and/or a special-purpose microprocessor (e.g., an application specific integrated circuit (ASIC)), and so on. The processor1101may also include on-board memory for caching purposes. The processor1101may include a single processing unit for executing different actions of the method flow according to the embodiments of the present disclosure or multiple processing units. In the RAM1103, various programs and data required for the operation of the system1100are stored. The processor1101, the ROM1102, and the RAM1103are connected to each other through a bus1104. The processor1101executes various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM1102and/or RAM1103. It should be noted that the program may also be stored in one or more memories other than the ROM1102and the RAM1103. The processor1101may also execute various operations of the method flow according to the embodiment of the present disclosure by executing programs stored in the one or more memories. According to an embodiment of the present disclosure, the system1100may further include an input/output (I/O) interface1105, and the input/output (I/O) interface1105is also connected to the bus1104. 
The system1100may also include one or more of the following components connected to the I/O interface1105: an input part1106including a keyboard, a mouse, etc.; an output part1107including a cathode ray tube (CRT), a liquid crystal display (LCD), etc., and a speaker, etc.; a storage part1108including a hard disk, etc.; and a communication part1109including a network interface card such as a LAN card, a modem, etc. The communication part1109performs communication processing via a network such as the Internet. The driver1110is also connected to the I/O interface1105as needed. A removable medium1111, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, etc., is installed on the driver1110as needed, so that the computer program read from the removable medium1111is installed into the storage part1108as needed. According to the embodiments of the present disclosure, the method flow according to the embodiments of the present disclosure may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program item, which includes a computer program carried on a computer-readable medium, and the computer program includes program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from the network through the communication part1109, and/or installed from the removable medium1111. When the computer program is executed by the processor1101, the above functions defined in the system of the embodiments of the present disclosure are executed. According to the embodiments of the present disclosure, the systems, devices, apparatus, modules, units, etc. described above may be implemented by computer program modules. The present disclosure also provides a computer-readable medium. The computer-readable medium may be included in the device/apparatus/system described in the above embodiments; or it may exist alone without being assembled into the device/apparatus/system. The above computer-readable medium carries one or more programs, and when the one or more programs are executed, the computer-readable medium realizes the method according the embodiments of the present disclosure. According to an embodiment of the present disclosure, the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or apparatus, or any combination of the above. More specific examples of computer-readable storage medium may include, but are not limited to: electrical connections with one or more wires, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above. In the present disclosure, the computer-readable storage medium may be any tangible medium that includes or stores a program, and the program may be used by or in combination with an instruction execution system, device, or apparatus. 
In the present disclosure, a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein. The propagated data signal may take various forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of them. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, the computer-readable medium may send, propagate, or send the program for use or in combination with the instruction execution system, device, or apparatus. The program code included in the computer-readable medium may be sent by any suitable medium, including but not limited to: wireless, wired, optical cable, radio frequency signals, etc., or any suitable combination of the above. For example, according to an embodiment of the present disclosure, the computer-readable medium may include one or more memories other than the ROM1102and/or RAM1103and/or ROM1102and RAM1103described above. The flowcharts and block diagrams in the accompanying drawings illustrate the possible architecture, functions, and operations of the system, method, and computer program product according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagram may represent a module, program segment, or part of code, and the module, program segment, or part of code includes one or more executable instructions for realizing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the block may also occur in a different order from the order marked in the drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagram or flowchart, and the combination of blocks in the block diagram or flowchart, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or may be realized by a combination of dedicated hardware and computer instructions. Those skilled in the art may understand that the features described in the various embodiments and/or the claims of the present disclosure may be combined and/or integrated in various ways, even if such combinations or integrations are not explicitly described in the present disclosure. In particular, without departing from the spirit and teaching of the present disclosure, the features described in the various embodiments and/or the claims of the present disclosure may be combined and/or integrated in various ways. All these combinations and/or integrations fall within the scope of the present disclosure. The embodiments of the present disclosure have been described above. However, these embodiments are only for illustrative purposes, and are not intended to limit the scope of the present disclosure. Although the embodiments are described respectively above, it does not mean that the measures in the respective embodiments cannot be advantageously used in combination. The scope of the present disclosure is defined by the appended claims and their equivalents. 
Without departing from the scope of the present disclosure, those skilled in the art may make various substitutions and modifications, and these substitutions and modifications should fall within the scope of the present disclosure. | 82,122 |
11861911 | The figures are not intended to be exhaustive or to limit the invention to the precise form disclosed. It should be understood that the invention can be practiced with modification and alteration, and that the disclosed technology be limited only by the claims and the equivalents thereof. DETAILED DESCRIPTION OF THE EMBODIMENTS Various components of a video analytics platform are described herein. The video analytics platform may be configured to provide real-time monitoring and assessment of airplane safety processes. Video Analytics Platform FIG.1may comprise real-time transaction engine110that communicates via network132to both video camera unit150and client device160. Each of real-time transaction engine110, video camera unit150, and client device160may comprise processors, memory, antennas, and other components of computing devices to enable these computing devices to communicate via network130and execute machine-readable instructions. Real-Time Transaction Engine Real-time transaction engine110comprises video processing circuit112, machine learning circuit114, and reporting circuit116. Additional circuits and components may be included without diverting from the scope of the disclosure. Real-time transaction engine110is configured to receive video, audio, and other media from video camera unit150via network130. The media data may comprise footage captured from around an airport facility regarding ramp operations, safety checks, or other information used to validate compliance with safety rules, policies, and procedures. The video data may be wirelessly transmitted or otherwise conveyed from video camera unit150to real-time transaction engine110. Real-time transaction engine110is configured to provide video streaming (e.g., WebRTC) and related capabilities. The capabilities may include transcoding inbound and outbound videos from/into various video and audio codecs and formats, recording and storing videos or individual frames on the server, transcoding and compressing videos, ingesting and processing multiple video streams concurrently, applying ML and non-ML filters to video streams and individual frames, providing SFU (Selective Forwarding Unit) and MCU (Multipoint Conferencing Unit) functionalities enabling videoconferencing and video streams mixing, providing TURN and STUN servers functionalities allowing better routing of WebRTC connections and NAT traversal, supporting combined media pipelines allowing insertion of Computer Vision, statistical and other filters and modules into media processing pipeline, cross-platform WebRTC video streaming to iOS, Android, React Native, Web and desktop applications, the capability of applying augmented reality visualizations, masks and effects in 2D and 3D to real-time video streams, the capabilities of mixing and blending video streams as well as video calling, group communication and broadcasting. Video processing circuit112is configured to perform analytics on the video. For example, the analytics may identify metadata of the video, including location, originating video camera unit150, date or time stamp, and other information. The information may be stored with images data store120. Video processing circuit112is configured to recognize one or more objects in the video. For example, video processing circuit112may compare known images stored with images120with received images from video camera unit150. 
Video processing circuit112may tag the one or more objects with an object identifier based on the comparison between the known images and the identified image in the received video. The object identifier, timestamp, video portion, metadata, or other information (e.g., generated by optical character recognition (OCR), etc.) may be stored in images data store120. Video processing circuit112is configured to augment or add to the media data. For example, video processing circuit112is configured to provide video, image, and web interface augmentation capabilities in relation to Artificial Intelligence, Computer Vision detection, and related user interfaces such as preconfigured and pre-developed capabilities to visually highlight detected objects and scenes, to highlight movements of most importance, to overlay characteristics of detected objects and events. Video processing circuit112may compare the object identifier, timestamp, video portion, metadata, or other information from images data store120with rules from rules data store122. Rules data store122may comprise various content relating to safety processes and procedures. Video processing circuit112may compare the object identifier generated from the video analytics to stored rules from rules data store122. As an illustrative example, when a vest is required at a particular location, video processing circuit112may determine that the object identifier from the tagged media identifies a vest in the video. Video processing circuit112may confirm that the stored rule is satisfied based on the object identifier found in the media. The results of the comparison may be stored in results data store124. Additional information and illustrative rules stored with rules data store122are provided herein with the subsection labeled “illustrative examples in rules data store.” Machine learning circuit114is configured to receive media input, provide the media input to one or more trained machine learning (ML) models, and generate output in view of an objective function corresponding with each trained ML model. Additional information regarding the machine learning process is provided withFIG.4. Reporting circuit116is configured to generate one or more reports, alerts, notifications, or other information based on the comparisons or determinations described herein. The reports and alerts may be generated in real-time to correspond with the real-time streaming video from video camera unit150. If certain conditions are detected, reporting circuit116may output real-time notifications and/or provide alerts in real-time to address safety concerns. Reporting circuit116is configured to transmit generated reports on a predetermined basis (e.g., daily, etc.) to designated client devices160. In some examples, the reports may be transmitted in accordance with an event (e.g., Daily Safety Briefings, Monthly Station Safety Meetings, one-on-one briefings, Safety Huddles, etc.). Reporting circuit116is configured to provide other data to a user interface. For example, reporting circuit116may provide a smart timeline, dashboard, and/or video player. 
The other capabilities may include viewing the video stream in real-time in a web application, mobile application, or desktop application video player interface allowing to stream one or more videos simultaneously; playing, stopping, pausing, rewinding, fast-forwarding the videos; switching between camera views and/or open pre-recorded videos; capability to highlight important detections with color coding and other interface elements in the video player timeline; capability to skip/fast-forward intervals of no importance as determined by the system or user settings; capability to search through stored videos and highlight in search results and in the video player timeline the occurrences of the detection or event user is searching for; capability to display and store the internal console showing lower level detections, logging and other system information as part of user interface; dashboard capability allowing to display multiple video streams at the time, overlay, and augment videos, zoom in and zoom out videos, including capabilities of automated triage and detection of most important detections or events among displayed and non-displayed video streams and ability to automatically zoom in and alert human operator of the displayed or non-displayed video where highest ranking detections or events are happening. Reporting circuit116is configured to provide smart feedback. The capabilities may include providing an overlay of the user interface to provide end users with an easy way to communicate false positive and false negative detections to the system; automated ML and non-ML based methods to infer false positive and false negative detections from user interactions or inactivity; automated re-training and re-learning loop system automatically applying false negative and false positive detections to re-learning or transfer learning of ML models. Reporting circuit116is configured to transmit electronic messages via network130. The messaging system may be based on various communication protocols (e.g., Extensible Messaging and Presence Protocol (XMPP) instant messaging protocol, Ejabberd® server, accompanying custom-made services and APIs, etc.). The messaging may allow end users and automated systems (such as chat bots, ML and non-ML detects, alerting system, etc.) to carry out text-based communication between each other via existing interfaces of the system and/or integrated into 3rd party mobile, web, and desktop applications. Reporting circuit116is configured to transmit alerts or statistics. The capabilities may include push notifications, e-mail, user interface audio and visual alerts providing the designated contacts with real-time and summary alerts regarding detections, conclusions, and other information from the system. Reporting circuit116is configured to generate a compliance score. The compliance score corresponds with a number of rules that are satisfied based on a comparison between the media data and one or more rules from rules data store122. The compliance score may be of value (e.g., 100) and each image that identifies that a rule is not satisfied may reduce the compliance score. In some examples, the goal may be to achieve 100% compliance or satisfaction of each of the rules for the location from rules data store122. The compliance score may be compared with a score threshold. Corrective action may be identified based on the comparison of the score to the score threshold. In some examples, different score thresholds may correspond with different actions. 
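As a concrete illustration of the scoring logic described above, the following Python sketch starts from a full score, deducts an equal share for each unsatisfied rule, and then compares the result against score thresholds to select an action. The equal per-rule penalty, the threshold values, and the action names are assumptions made here for illustration and are not taken from the disclosure.

```python
# Minimal sketch with assumed thresholds and action names; not the patented implementation.
from typing import Dict, List, Optional


def compliance_score(rule_results: List[bool], starting_score: float = 100.0) -> float:
    """Start from a full score and deduct an equal share for each unsatisfied rule."""
    if not rule_results:
        return starting_score
    penalty = starting_score / len(rule_results)
    failures = sum(1 for satisfied in rule_results if not satisfied)
    return starting_score - penalty * failures


def corrective_action(score: float,
                      thresholds: Optional[Dict[float, str]] = None) -> str:
    """Map the score to an action; different score thresholds correspond with different actions."""
    if thresholds is None:
        thresholds = {
            90.0: "no action required",
            70.0: "review at daily safety briefing",
            0.0: "corrective action plan required",
        }
    for threshold in sorted(thresholds, reverse=True):
        if score >= threshold:
            return thresholds[threshold]
    return thresholds[min(thresholds)]


# Example: ten rule checks for a turn, two of which were not satisfied.
score = compliance_score([True] * 8 + [False] * 2)   # 80.0
action = corrective_action(score)                    # "review at daily safety briefing"
```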
Reporting circuit116is configured to generate one or more reports based on different stages, timestamps, locations, and the like. An illustrative example is shown herein.

Stages | Goal | Average Compliance | Action Plan Required
Arrival | 100 | 100 | N
Post-Arrival | 100 | 93.1 | N
Post-Departure | 100 | 56.52 | Y
Pre-Arrival | 100 | 74.03 | Y
Pre-Departure Preparations | 100 | 79.82 | Y
Upload | 100 | 57.14 | Y

Reporting circuit116is configured to generate a corrective action plan. When developing the corrective action plan, the following template may be used to drive consistency and to ensure the plan meets various rules stored in rules data store122. The action plan may be designed to address each individual element that has been identified (e.g., media data, rule, etc.), describe what will be implemented or put into place to improve compliance level(s), establish a timeline for when the plan objectives are expected to be achieved, indicate the person(s) responsible for implementation, and provide for follow-up and verification of objective effectiveness. Reporting circuit116is configured to generate and update a dashboard to display information in real-time. Historical media may also be accessible and provided via the dashboard. The dashboard may be accessible by client device160and provided for display at user interface162. Illustrative examples of reports and reporting data are provided withFIGS.5-15. Authentication circuit118is configured to authenticate client device160and/or a user with access to real-time transaction engine110. Various reports and/or data may be limited based on a user type associated with the user accessing the system. A username, user type, or other information may be stored with a user profile in user profile data store126. In some examples, real-time transaction engine110may be configured to monitor and analyze the system itself. Monitoring and analytics may correspond with API, server, and infrastructure parameters that allow monitoring and visualizing overall system performance, such as uptime and latency of APIs, throughput of media processing and machine learning pipelines, activity and availability of sources (e.g., video camera unit150, other video, audio, and data streams, etc.), load diagrams for CPU and GPU servers, data storages, queues, etc., including an uptime widget, detailed monitoring dashboard, master dashboard, and alerts via e-mail, SMS, phone calls, chat messages, and push notifications; underlying architecture, storage systems, and server-side logic (e.g., MongoDB®, Apache® Cassandra®, MySQL®, Redis®, IPFS®, HDFS®, Apache® Kafka® and Zookeeper®, Kurento®, Janus®, Ejabberd®, Apache® Spark®, Apache® Flink®, Zabbix®, Grafana®, Prometheus®), APIs, backend (server-side) code, and documentation. This includes logic for creating and manipulating user accounts, etc. Client device160may access one or more reports generated by reporting circuit116and/or reporting data from results data store124via network130. The reporting data may be displayed for presentation at the reporting dashboard by user interface162. In some examples, client device160may receive push notifications from reporting circuit116via network130. Custom-Designed Video Camera Units Video camera unit150may comprise an off-the-shelf or custom designed video camera assembly. The video camera assembly may include a video capture device (e.g., a video camera) encased in a ruggedized housing.
The assembly may further include a transmitter (e.g., a wireless, cellular or other transmitter) to receive the video and transmit it in real-time to one or more designated locations (e.g., a server associated with an airport, a central server remote from the airport and/or other locations). The assembly may also include a storage device to store the video captured by the video camera. In some examples, video camera unit150may comply with rules according to Ingress Protection (IP) numbers. For example, video camera unit150may comply with IP67 (e.g., totally protected against dust and protected against the effects of temporary immersion between 15 cm and 1 m, or duration of test 30 minutes) and IP68 (e.g., totally protected against dust and protected against long periods of immersion under pressure). Video camera unit150may be environmentally tested. Video camera unit150may be encased in a durable, rugged utility case for stability and protection from environmental factors. The case may be equipped with a handle and wheels used to manually roll the unit to one or more locations around the aircraft facility. Video camera unit150may comprise one or more arrows on the case to facilitate proper orientation and alignment. For example, the case may be directed at an angle and location such that the arrow may be pointed toward the aircraft. The position of the arrow may align video camera unit150so that the camera lens may capture media corresponding with the aircraft from the particular location and perspective. Video camera unit150may generate a real-time video stream. For example, the camera incorporated with video camera unit150may capture images, audio, or other media and store the data locally with video camera unit150or transmit it to real-time transaction engine110via network130. Video cam unit150may comprise an antenna (e.g., for wireless communications) and/or physical port (e.g., for wired communications). Video camera unit150may comprise a Wi-Fi or cellular network interface for communicating with network130and/or uploading the data. Video camera unit150may comprise a camera lens. For example, the camera lens may comprise a wide angle lens for capturing images adjacent to camera unit150. Video camera unit150may be static or stationary. In some examples, video camera unit150may be uniquely placed based on aircraft type and/or other factors. This may comprise internal placements of a particular aircraft, including a first video camera unit150on a first floor of an aircraft and a second video camera unit150on a second floor of an aircraft when the aircraft corresponds with more than one floor. Video camera unit150may be placed externally to an aircraft. Video camera unit150may be positioned each morning or prior to the first operation at designated gates. The units may be placed outside the operational safety zone (OSZ) and are left in position until end of day and/or end of operations at a designated gate. The units may then be retrieved and returned to the docking station for recharge and video data upload. Two cameras are deployed at each designated gate; one at the wingtip, and one at the nose of the aircraft, well behind and/or to the driver's side of the pushback unit. Deployment And Placement Guidance Video camera unit150may be movable and manually rolled to a designated location. 
The location to place the one or more video camera units150may be determined based on the type of aircraft being monitored and other factors, including the services being provided to the aircraft (e.g., filling gasoline to the aircraft, adding food, safety checks, etc.). In some examples, video camera unit150may be manually moved to align with a marking on the ground, which may be placed at a different location for each airport and gate. The markings may help align the aircraft to the gate in order to properly service the aircraft while the aircraft is stationary at the gate. In an illustrative example, a Boeing 737 may align with Marking 10 and Marking 12, while a Cessna 172 may align with Marking 30 and 31. Video camera unit150may be manually rolled to a location corresponding with each marking that is utilized by the aircraft (e.g., 10 feet from Marking 10 and Marking 12, 5 feet above Marking 30, etc.) in order to capture and transmit the video, audio, and other media. In another example, video camera unit150may be manually moved to align with a wingtip of the aircraft and a nose of the aircraft in accordance with rules from rules data store122, unrelated to the markings. In some examples, video camera unit150may be stationary and affixed to correspond with the markings used by the aircraft. For example, when forty markings are incorporated with a gate at an airport, forty cameras may be aligned with the markings. The markings that the aircraft uses to align with the gate may also activate the camera corresponding with the marking. As described with the example above, the two video camera units150associated with Marking 10 and Marking 12 may be activated when a Boeing 737 arrives at the gate, which also aligns with Marking 10 and Marking 12. In another example, the two video camera units150associated with Marking 30 and 31 may be activated when a Cessna 172 arrives at the gate, which also aligns with Marking 30 and 31. In other examples, each video camera unit150may be continually active and stationary to capture video of the aircraft, without the activation process and without diverting from the essence of the disclosure. In some examples, servicing of the aircraft may focus on the port or left side of the aircraft as illustrated on regional aircraft such as the CRJ700. Two or more video camera units150may be assigned to a single aircraft. Thus, video camera unit150may be positioned off the left wing outside the operational safety zone (OSZ) or Safety Diamond in the general vicinity as shown in theFIG.2. Illustrative locations for a plurality of video camera units310(illustrated as video camera unit310A,310B,310C,310D,310E,310F) and particular aircrafts320(illustrated as aircraft320A,320B,320C) are provided. In some instances, video camera unit150may be placed where it would have a vantage point covering the entire operational safety zone (OSZ) or Safety Diamond in order to see all of the operational activities that occur while servicing the aircraft, as illustrated inFIG.3. For example, the camera placement may be on the opposite wing side if the observed aircrafts cargo compartments exist on the port side versus the starboard side. The markings may be placed in other locations as well, including a ramp, front/back of an aircraft, security door, or other location may correspond with a rule from rules data store122. In some examples, the rules (e.g., location of one or more video camera units150) may correspond with an airplane type. 
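Since the placement rules described above tie aircraft types to gate markings and to the camera units associated with those markings, a simple lookup is enough to sketch the activation logic. The following Python example is only illustrative: the marking numbers reuse the Boeing 737 and Cessna 172 example above, while the camera unit identifiers and the mapping structure are hypothetical.

```python
# Minimal sketch with a hypothetical mapping; not taken from the disclosure.
from typing import Dict, List

# Example data modeled on the Boeing 737 / Cessna 172 illustration above.
MARKINGS_BY_AIRCRAFT: Dict[str, List[int]] = {
    "Boeing 737": [10, 12],
    "Cessna 172": [30, 31],
}

# Hypothetical association of each gate marking with a stationary camera unit id.
CAMERA_BY_MARKING: Dict[int, str] = {10: "cam-010", 12: "cam-012",
                                     30: "cam-030", 31: "cam-031"}


def cameras_for_aircraft(aircraft_type: str) -> List[str]:
    """Return the camera units associated with the markings this aircraft aligns to."""
    markings = MARKINGS_BY_AIRCRAFT.get(aircraft_type, [])
    return [CAMERA_BY_MARKING[m] for m in markings if m in CAMERA_BY_MARKING]


# Example: a Boeing 737 arriving at the gate activates the cameras at Markings 10 and 12.
print(cameras_for_aircraft("Boeing 737"))   # ['cam-010', 'cam-012']
```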
Arrows incorporated with each video camera unit150may facilitate proper orientation and alignment. For example, an arrow may be printed on top of the rugged casing. Generally, the arrows will be pointed towards the aircraft. The lenses are wide angle and capture a lot of area. Fuel tenders and catering trucks may block the entire view for an extended period of time depending on service requirements. Thus, be cognizant of the camera position when such ground equipment is used. Illustrative Examples In Rules Data Store Rules data store122may comprise a plurality of rules. An illustrative rule may comprise a 3-stop brake check. The rule may assume the vehicle is clearly visible at all times. Image data may be compared with images corresponding with this rule to determine satisfaction of the rule. Satisfaction of the rule may confirm that (1) the vehicle has carried out three stops, (2) the approximate distance between stops, and (3) whether the distance falls under compliance parameters (around 50% distance between each stop, etc.). An illustrative rule may comprise handrails on Ground Service Equipment (GSE) being used. The camera setup may be adjusted to determine satisfaction of this rule (e.g., the camera line of sight may confirm that the image data clearly distinguishes between person gripping the handrail and person simply moving their arm as they are walking beside the handrail, etc.). Satisfaction of the rule may confirm object detection (hand gripping handrail) and pose estimation (detecting person's skeletal and hand movements and trying to infer whether they are leaning on or gripping the handrail, detection based on 2 knees present, belt loader+working area tracking, etc.). An illustrative rule may comprise pre-arrival safety huddle conducted at huddle cone. Satisfaction of the rule may confirm detection of 3 or more persons stopping for 20-60 seconds near to a cone. An illustrative rule may comprise a lead marshaller and wing marshallers/walkers in correct position. The camera setup may be adjusted to confirm media data is captured of marshallers, including lead marshaller, and wing marshallers positions. The camera setup may be adjusted to detect wing marshallers positions that correspond with the aircraft size. The camera setup may be adjusted to detect safety zone markings on the ground. The camera setup may be adjusted to determine aircraft pose estimation. In some examples, aircraft and workers coordinates may be connected with a digital map (e.g., 2-dimensional) or geolocation device. An illustrative rule may comprise detection of employees wearing safety vests secured to their body. Satisfaction of the rule may confirm the vest is “Secured to body” (e.g., zipped, over two shoulders, across the back and front of the body, etc.) or detection of an unsecured vest (e.g., unzipped or unbuttoned vests, etc.). An illustrative rule may comprise cones placed in proper positions and timely. The camera setup may be adjusted to detect cones between the camera and the aircraft, including cones behind aircraft. Satisfaction of the rule may confirm that (1) detect all cones visible in the scene and (2) demonstrate the approach allowing to detect wing cones, tail cone, and front wheel cone being in position. An illustrative rule may comprise a detection of a belt loader as an object and/or detect stops they are making. The rule may detect that a belt loader is making a stop at a stop sign or other required stop location. 
The analysis may utilize a combination of ML and non-ML methods to detect belt loaders as objects and to detect the stops they are making. For example, real-time transaction engine110may determine where the loader is (e.g., geolocation, relative or adjacent objects, etc.) and transfer the coordinates into an array. The analysis may comprise smoothing the data with one of several filters (e.g., arithmetic mean, running average, or Kalman filter); the data averaged over 5 passes is then passed through a function that compares how much the coordinate has shifted compared to the last pass, followed by a small logical function that checks whether the coordinate has moved since the last check. This may confirm a new stop. Machine Learning Pipeline The system may incorporate one or more machine learning (ML) models. In some examples, the ML model may be a pre-built and/or pre-configured platform with processing pipelines and infrastructures consisting of libraries (e.g., pre-configured TensorFlow, Yolo, PyTorch libraries), weights, models, and software code (e.g., C, C++, Python, Erlang and Node.js). The models may vary from gate to gate, airport to airport, and may vary by aircraft type. In some examples, the models may vary by gate and airport based on where the video capture units will be placed. For the models to be trained to a high degree of precision, the ML models may require consistent placement zones, which will not be identical across every gate and airport. Multiple machine learning models may be implemented. For example, a first ML model may identify an aircraft and a second ML model may identify the services required for the identified aircraft. For example, a first ML model may identify a model of aircraft that is entering the operational safety zone (OSZ) or Safety Diamond and a second ML model may apply the correct camera locations and/or markings for services. The ML model may correspond with a linear or non-linear function. For example, the ML model may comprise a supervised learning algorithm that accepts the one or more input features associated with video data (e.g., streaming file, etc.) to provide a score. In some examples, when a nonlinear machine learning model is used, the weightings of fields corresponding with the video data may vary according to one or more object identifiers corresponding to the media data. This may be illustrated by a presence of an object (e.g., a vest, a safety cone, etc.) at a location corresponding with satisfying a safety rule for that location. In some examples, the weight may be decided through an iterative training process for each ML model. In some examples, the ML model may comprise a Deep Learning Neural Network, consisting of more than one layer of processing elements between the input layer and the output layer. The ML model may further be a Convolutional Neural Network, in which successive layers of processing elements contain particular hierarchical patterns of connections with the previous layer. In some examples, the ML model may comprise an unsupervised learning method, such as k-nearest neighbors, to classify inputs based on observed similarities among the multivariate distribution densities of independent variables in a manner that may correlate with activity that does not correspond with the safety rules or regulations. Prior to receiving the input features associated with the video data, the ML model may be trained using a training data set of historical video data or standardized segment data for a particular aircraft.
For example, the training data set may comprise a plurality of images and locations of an airplane that identify compliance with a safety rule. The correlation of the image to the safety rule may help determine one or more weights assigned to each of these input features. In some examples, the ML model may be incorporated with a service function call. For example, real-time transaction engine110may transmit input data to a third party that generates the trained ML model. The third-party may provide the input data to the trained ML model and transmit output back to real-time transaction engine110. Real-time transaction engine110may incorporate the output as, for example, correlating media data with a prediction of compliance with a safety rule. An illustrative ML model and data pipeline is provided withFIG.4. In some examples, the ML model and data pipeline is executed by real-time transaction engine110illustrated inFIG.1. At block410, input may be received. Input may be received from various sources, including real-time video stream data410A, historical observation data410B, or third-party data410C. At block412, the input may be provided to real-time transaction engine. In some examples, the ML models may be previously trained ML models and the data may be provided to the ML models in real-time. At block414, the input may be stored in a data store. Data from the real-time transaction engine may be synchronized and/or stored with the data store. At block416, the data may be provided to the ML pipeline. The ML pipeline may comprise online or off-line models. The ML pipeline may comprise data collection, data transformation, feature engineering, feature selection, model training, tuning and validation, and testing. At block418, output from the ML pipeline may be provided to a data science review. The output may comprise ranking and/or scoring. As an illustrative example, the input may include a video stream of a walk around a plane with one or more safety issues identified in the video stream. The output may correlate the images found in the video stream with rules, predefined safety issues, or other data that is feedback for future machine learning models. The ML models can use the output from the data science review to identify additional safety issues in future videos. At block420, observation results, reporting, and/or real-time notifications may be provided. For example, the safety issues may be identified by the ML model and additional information may be included with each safety issue. The safety issues may be ranked and/or scored in order of importance and the like. The list of safety issues, observation results, reporting, and/or real-time notifications may be provided to a graphical user interface (GUI). An illustrative example is provided withFIGS.5,12,15,15A, and15B. Reporting Dashboard And Notifications The reporting dashboard may be generated by reporting circuit116of real-time transaction engine110and provide analytics obtained from the analysis of video data uploaded from the plurality of video camera units150. The dashboard may be used to present the information in the form of charts, graphs, and tables. This visual display enables the user to more easily identify areas of concern and negative trends. Accessibility of the dashboard may correspond with different levels of authentication. For example, each user may be issued a unique username and password. 
The username may correspond with a particular level of access and user type, such that some reports may only be accessible to a particular user type. For example, client device160may transmit a username and profile via network130. Authentication circuit118may compare the username with a stored user profile to identify a user type. Reports, data, and other information accessible by the particular user type may be displayed for presentation at user interface162. User types may correspond with an airport level, a customer level, and an executive level. The airport level users may be provided access only to a particular station's dashboard and analytics. This level may be assigned to general managers, station managers, station safety managers, and the like. The customer level users may be provided access only to observations assigned to a specific customer. The executive level users may be provided access to dashboards and analytics for every location associated with an airport facility. This level may be assigned to corporate executive leaders and regional vice presidents (RVPs). A home screen may comprise access to welcome screens for each of the different user levels. The home screen may provide access to three dashboards, including executive dashboard, monitoring report, and monitoring details. Illustrative reports are provided inFIGS.5-15. FIG.5illustrates a graphical user interface providing an executive dashboard, in accordance with some embodiment discussed herein. The executive dashboard may be accessible when an authenticated user corresponding with an executive level accesses the system. In some examples, the executive dashboard may provide a search tool that can identify safety compliance averages between a time range and/or at a particular location (e.g., Airport A, etc.). The compliance averages may be sorted by stages (e.g., arrival, post-arrival, post-departure, pre-arrival, pre-departure preparations, upload, etc.). In some examples, the executive dashboard may provide a compliance percentage in a graph, table, chart, geographical location map, or other graphical representation. In some examples, the executive dashboard may also display the safety compliance issues that were included with the safety compliance averages. The issues may be limited in accordance with the time range and/or the particular location selected in the executive dashboard. Each issue may correspond with a predetermined value (e.g., illustrated as "count"). The predetermined value may be higher for more critical safety issues and lower for less critical safety issues. FIG.6illustrates a graphical user interface providing a monitoring report dashboard, in accordance with some embodiment discussed herein. In some examples, the executive dashboard may provide a search tool that can identify safety compliance averages between a time range, a particular location, and/or at one or more airport gates (e.g., Airport A, Terminal 2, Gates 100, 101, 102, and 103, etc.). FIG.7illustrates a graphical user interface providing a monitoring details dashboard, in accordance with some embodiment discussed herein. In some examples, the executive dashboard may provide a search tool that can identify safety compliance averages between a time range, a particular location, one or more airport gates, and other filters available in the system, including turns, categories, answers, and the like. FIG.8illustrates a graphical user interface for providing a search function within an executive dashboard, in accordance with some embodiment discussed herein.
In some examples, access to this GUI is limited to the executive dashboard. In this illustrative example, a select date tool, a selected airport tool, and a select gate tool are shown. The “Select Date” window allows for filtering the data via a selected date range (the default is the most recent 7-day period ending with the present day. The “Select Airports” window shows only the local airport for users with Airport Access Level. Executive Level users are able to choose any airport where real-time transaction engine110and/or video camera unit150is deployed. The “Select Gates” window allows for filtering the data relative to gate location. FIG.9illustrates a graphical user interface for providing compliance averages, in accordance with some embodiment discussed herein. This panel may show the average compliance rate per “stage of operation” observed for the selected date range. Access to this GUI may not be limited to the executive level authentication as illustrated inFIG.5. FIG.10illustrates a graphical user interface for providing a total average errors per turn panel, in accordance with some embodiment discussed herein. For example, a user operating client device160may access this dashboard panel. The user may hover a mouse cursor over a dot representing a particular airport to display an average errors per turn. In some examples, access to this GUI may not be limited to the executive level authentication as illustrated inFIG.5. Other methods of presenting the average errors per turn may be provided without diverting from the scope of the disclosure. FIG.11illustrates a graphical user interface for providing a compliance percentage panel, in accordance with some embodiment discussed herein. This panel shows a graphical illustration of the compliance percentage each day in the selected range. For example, a user operating client device160may access this dashboard panel. The user may hover a mouse cursor over any data point to display the actual value. In some examples, access to this GUI may not be limited to the executive level authentication as illustrated inFIG.5. Other methods of presenting the average errors per turn may be provided without diverting from the scope of the disclosure. FIG.12illustrates a graphical user interface for providing a total error panel, in accordance with some embodiment discussed herein. This panel provides a total error count for each violation observed for the selected date range. In some examples, access to this GUI may not be limited to the executive level authentication as illustrated inFIG.5. FIGS.13-15may correspond with illustrative examples of monitoring reporting dashboard panels. FIG.13illustrates a graphical user interface for providing a search filters panel, in accordance with some embodiment discussed herein. In this illustrative example, a select date tool, a selected airport tool, and a select gate tool are shown. The “Select Date” window allows for filtering the data via a selected date range (the default is the most recent 7-day period ending with the present day. The “Select Airports” window shows only the local airport for users with Airport Access Level. Executive Level users are able to choose any airport where real-time transaction engine110and/or video camera unit150is deployed. The “Select Gates” window allows for filtering the data relative to gate location. FIG.14illustrates a graphical user interface for providing a monitoring details panel, in accordance with some embodiment discussed herein. 
The “Select Date” window allows for filtering the data via a selected date range (the default is the most recent 7-day period ending with the present day. The “Select Airports” window shows only the local airport for users with Airport Access Level. Executive Level users are able to choose any airport where real-time transaction engine110and/or video camera unit150is deployed. The “Select Gates” window allows for filtering the data relative to gate location. The “Select Turns” window allows for filtering based on which turn of the day; 1st, 2nd, 3rd, etc. The “Select Categories” window allows for filtering based on operational stages. FIG.15illustrates a graphical user interface for providing a monitoring report, in accordance with some embodiment discussed herein. This report may be populated according to the filtering selections made in the Search Filters panel. In some examples, a checkmark appears in the Camera column when a photo is available by clicking on the icon in the Picture column. FIG.15Aillustrates a graphical user interface for providing an observation improvement report, in accordance with some embodiment discussed herein. This report may compare an operating phase or individual observations compliance score to another to illustrate the differences between the two. The report may be limited to a selected date range. In some examples, the report may show the difference in improvement or non-improvement between a date range in the effort to drive observation and/or safety issue improvement. FIG.15Billustrates a graphical user interface for providing a real-time alert dashboard, in accordance with some embodiment discussed herein. This report may include a set of real-time alerts by phase to indicate where an observation is being observed. The report may include information in real-time or offline. The dashboard may provide functionality to allow a user to interact with the dashboard and drill down capabilities to determine (e.g., based on the level of authentication access) which airport, customer, and gate the safety issue and/or alert is originating from. Where components, logical circuits, or engines of the technology are implemented in whole or in part using software, in one embodiment, these software elements can be implemented to operate with a computing or logical circuit capable of carrying out the functionality described with respect thereto. One such example logical circuit is shown inFIG.16. Various embodiments are described in terms of this example logical circuit1600. After reading this description, it will become apparent to a person skilled in the relevant art how to implement the technology using other logical circuits or architectures. Referring now toFIG.16, computing system1600may represent, for example, computing or processing capabilities found within desktop, laptop, and notebook computers; hand-held computing devices (PDA's, smart phones, cell phones, palmtops, etc.); mainframes, supercomputers, workstations, or servers; or any other type of special-purpose or general-purpose computing devices as may be desirable or appropriate for a given application or environment. Logical circuit1600might also represent computing capabilities embedded within or otherwise available to a given device. 
For example, a logical circuit might be found in other electronic devices such as, for example, digital cameras, navigation systems, cellular telephones, portable computing devices, modems, routers, WAPs, terminals and other electronic devices that might include some form of processing capability. Computing system1600might include, for example, one or more processors, controllers, control engines, or other processing devices, such as a processor1604. Processor1604might be implemented using a general-purpose or special-purpose processing engine such as, for example, a microprocessor, controller, or other control logic. In the illustrated example, processor1604is connected to a bus1602, although any communication medium can be used to facilitate interaction with other components of logical circuit1600or to communicate externally. Computing system1600might also include one or more memory engines, simply referred to herein as main memory1608. For example, preferably random-access memory (RAM) or other dynamic memory, might be used for storing information and instructions to be executed by processor1604. Main memory1608might also be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor1604. Logical circuit1600might likewise include a read only memory (“ROM”) or other static storage device coupled to bus1602for storing static information and instructions for processor1604. The computing system1600might also include one or more various forms of information storage mechanism1610, which might include, for example, a media drive1612and a storage unit interface1620. The media drive1612might include a drive or other mechanism to support fixed or removable storage media1614. For example, a hard disk drive, a floppy disk drive, a magnetic tape drive, an optical disk drive, a CD or DVD drive (R or RW), or other removable or fixed media drive might be provided. Accordingly, storage media1614might include, for example, a hard disk, a floppy disk, magnetic tape, cartridge, optical disk, a CD or DVD, or other fixed or removable medium that is read by, written to, or accessed by media drive1612. As these examples illustrate, the storage media1614can include a computer usable storage medium having stored therein computer software or data. In alternative embodiments, information storage mechanism1640might include other similar instrumentalities for allowing computer programs or other instructions or data to be loaded into logical circuit1600. Such instrumentalities might include, for example, a fixed or removable storage unit1622and an interface1620. Examples of such storage units1622and interfaces1620can include a program cartridge and cartridge interface, a removable memory (for example, a flash memory or other removable memory engine) and memory slot, a PCMCIA slot and card, and other fixed or removable storage units1622and interfaces1620that allow software and data to be transferred from the storage unit1622to logical circuit1600. Logical circuit1600might also include a communications interface1624. Communications interface1624might be used to allow software and data to be transferred between logical circuit1600and external devices. 
Examples of communications interface1624might include a modem or soft modem, a network interface (such as an Ethernet, network interface card, WiMedia, IEEE 802.XX or other interface), a communications port (such as, for example, a USB port, IR port, RS232 port, Bluetooth® interface, or other port), or other communications interface. Software and data transferred via communications interface1624might typically be carried on signals, which can be electronic, electromagnetic (which includes optical) or other signals capable of being exchanged by a given communications interface1624. These signals might be provided to communications interface1624via a channel1628. This channel1628might carry signals and might be implemented using a wired or wireless communication medium. Some examples of a channel might include a phone line, a cellular link, an RF link, an optical link, a network interface, a local or wide area network, and other wired or wireless communications channels. In this document, the terms “computer program medium” and “computer usable medium” are used to generally refer to media such as, for example, memory1608, storage unit1620, media1614, and channel1628. These and other various forms of computer program media or computer usable media may be involved in carrying one or more sequences of one or more instructions to a processing device for execution. Such instructions embodied on the medium are generally referred to as “computer program code” or a “computer program product” (which may be grouped in the form of computer programs or other groupings). When executed, such instructions might enable the logical circuit1600to perform features or functions of the disclosed technology as discussed herein. AlthoughFIG.16depicts a computer network, it is understood that the disclosure is not limited to operation with a computer network, but rather, the disclosure may be practiced in any suitable electronic device. Accordingly, the computer network depicted inFIG.16is for illustrative purposes only and thus is not meant to limit the disclosure in any respect. While various embodiments of the disclosed technology have been described above, it should be understood that they have been presented by way of example only, and not of limitation. Likewise, the various diagrams may depict an example architectural or other configuration for the disclosed technology, which is done to aid in understanding the features and functionality that can be included in the disclosed technology. The disclosed technology is not restricted to the illustrated example architectures or configurations, but the desired features can be implemented using a variety of alternative architectures and configurations. Indeed, it will be apparent to one of skill in the art how alternative functional, logical, or physical partitionings and configurations can be used to implement the desired features of the technology disclosed herein. Also, a multitude of different constituent engine names other than those depicted herein can be applied to the various partitions. Additionally, with regard to flow diagrams, operational descriptions and method claims, the order in which the steps are presented herein shall not mandate that various embodiments be implemented to perform the recited functionality in the same order unless the context dictates otherwise. 
Although the disclosed technology is described above in terms of various exemplary embodiments and implementations, it should be understood that the various features, aspects and functionality described in one or more of the individual embodiments are not limited in their applicability to the particular embodiment with which they are described, but instead can be applied, alone or in various combinations, to one or more of the other embodiments of the disclosed technology, whether or not such embodiments are described and whether or not such features are presented as being a part of a described embodiment. Thus, the breadth and scope of the technology disclosed herein should not be limited by any of the above-described exemplary embodiments. Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. As examples of the foregoing: the term “including” should be read as meaning “including, without limitation” or the like; the term “example” is used to provide exemplary instances of the item in discussion, not an exhaustive or limiting list thereof; the terms “a” or “an” should be read as meaning “at least one,” “one or more” or the like; and adjectives such as “conventional,” “traditional,” “normal,” “standard,” “known” and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. Likewise, where this document refers to technologies that would be apparent or known to one of ordinary skill in the art, such technologies encompass those apparent or known to the skilled artisan now or at any time in the future. The presence of broadening words and phrases such as “one or more,” “at least,” “but not limited to” or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent. The use of the term “engine” does not imply that the components or functionality described or claimed as part of the engine are all configured in a common package. Indeed, any or all of the various components of an engine, whether control logic or other components, can be combined in a single package or separately maintained and can further be distributed in multiple groupings or packages or across multiple locations. Additionally, the various embodiments set forth herein are described in terms of exemplary block diagrams, flow charts and other illustrations. As will become apparent to one of ordinary skill in the art after reading this document, the illustrated embodiments and their various alternatives can be implemented without confinement to the illustrated examples. For example, block diagrams and their accompanying description should not be construed as mandating a particular architecture or configuration. | 50,869 |
11861912 | DETAILED DESCRIPTION In order to more clearly illustrate the technical solutions related to the embodiments of the present disclosure, a brief introduction of the drawings referred to the description of the embodiments is provided below. Obviously, the drawings described below are only some examples or embodiments of the present disclosure. Those having ordinary skills in the art, without further creative efforts, may apply the present disclosure to other similar scenarios according to these drawings. Unless obviously obtained from the context or the context illustrates otherwise, the same numeral in the drawings refers to the same structure or operation. It should be understood that the “system,” “device,” “unit,” and/or “module” used herein are one method to distinguish different components, elements, parts, sections, or assemblies of different levels. However, if other words can achieve the same purpose, the words can be replaced by other expressions. As used in the disclosure and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless the content clearly dictates otherwise; the plural forms may be intended to include singular forms as well. In general, the terms “comprise,” “comprises,” and/or “comprising,” “include,” “includes,” and/or “including,” merely prompt to include steps and elements that have been clearly identified, and these steps and elements do not constitute an exclusive listing. The methods or devices may also include other steps or elements. The flowcharts used in the present disclosure illustrate operations that the system implements according to the embodiment of the present disclosure. It should be understood that the foregoing or following operations may not necessarily be performed exactly in order. Instead, the operations may be processed in reverse order or simultaneously. Besides, one or more other operations may be added to these processes, or one or more operations may be removed from these processes. FIG.1is a schematic diagram illustrating an exemplary application scenario of an Internet of Things system for counting and regulating pedestrian volume in a public place of a smart city according to some embodiments of the present disclosure. An application scenario100may include a server110, a network120, a database130, a terminal device140, a user150, and a place160. The server110may include a processing device112. In some embodiments, the application scenario100of the Internet of Things system for counting and regulating pedestrian volume in a public place may obtain a query result for a query request of a user by implementing methods and/or processes disclosed in the present disclosure. 
For example, the processing device may receive, based on a user platform, the query request for an intended place initiated by the user, transmit, based on a service platform, the query request to a management platform, and generate, based on the management platform, a query instruction, issue, based on the management platform, the query instruction to a sensor network sub-platform of a sensor network platform corresponding to the management platform according to a regional location, send, based on the sensor network sub-platform, the query instruction to an object platform corresponding to the sensor network sub-platform, obtain, based on the object platform, a query result according to the query instruction, and feed back, based on the object platform, the query result to the user platform through the sensor network sub-platform, the management platform, and the service platform corresponding to the object platform respectively. The server110may be connected to the terminal device140through the network120. The server110may be connected to the database130through the network120. The server110may be configured to manage resources and process data and/or information from at least one component of the system or an external data source (e.g., a cloud data center). In some embodiments, the query request for the intended place initiated by the user may be received through the server110. The server110may obtain data in the database130or save the data to the database130during processing. In some embodiments, the server110may be a single server or a server group. In some embodiments, the server110may be local or remote. In some embodiments, the server110may be implemented on a cloud platform or provided in a virtual way. In some embodiments, the server110may include a processing device112. The processing device112may process data and/or information obtained from other devices or system components. The processor may execute program instructions based on the data, information, and/or processing results to perform one or more of functions described in the present disclosure. In some embodiments, the processing device112may include one or more sub-processing device (for example, a single-core processing device or a multi-core processing device). Merely by way of example, the processing device112may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), or the like, or any combination thereof. The network120may connect components of the application scenario100and/or connect the system to an external resource. The network120may enable communication between the components and with other components outside the system, facilitating exchange of data and/or information. In some embodiments, the network120may be a wired network, a wireless network, or any combination thereof. For example, the network120may include a cable network, an optical network, or the like, or any combination thereof. The network connection between components may be in one of above ways, or in various ways. In some embodiments, the network may include various topological structures, such as a point-to-point, a shared, a centralized topological structure, or the like, or any combination thereof. In some embodiments, the network120may include one or more network access points. In some embodiments, relevant data of the user150and the place160may be transmitted through the network120. The database130may be configured to store data and/or instructions. 
The database130may be directly connected to the server110or in an interior of the server110. In some embodiments, the database130may be configured to store the relevant data of the user150and the place160. The database130may be implemented in a single central server, a plurality of servers or a plurality of personal devices connected through a communication link. In some embodiments, the server110, the terminal device140, and other possible system components may include the database130. The terminal device140may refer to one or more terminal devices or software. In some embodiments, the terminal device140may serve as a user platform. For example, when a user of the terminal device is a tourist, the terminal device140may be used as a user platform to input a query request of the user. In some embodiments, the terminal device140may serve as a management platform. For example, when the user of the terminal device is a regulation agency of pedestrian volume, the terminal device140may be used as a management platform to summarize data to make a plan. In some embodiments, a user of the terminal device140may be one or more users. In some embodiments, the terminal device140may be other devices with input and/or functions, such as a mobile device140-1, a tablet computer140-2, a laptop computer140-3, or the like, or any combination thereof. In some embodiments, the terminal device140and other possible system components may include the processing device112. The user150may be a user consumer of the user terminal140, and the user may be a tourist, a visitor, a person regulating the pedestrian volume, or the like. In some embodiments, the user may issue a query request. For example, the user may query a location of a certain supermarket, locations of other supermarkets near a certain supermarket, a location of a retail department, etc. In some embodiments, the user may receive information fed back by the user terminal140, such as receiving a query result, recommendation information, etc. In some embodiments, a count of users may be one or more. The place160may be a specific location in a certain area. For example, the place may be any location such as an office building160-1, a supermarket160-2, an administrative building160-3, a restaurant, a hairdresser, a station, a parking lot, or the like. In some embodiments, places may have a similarity in a function. For example, a shopping mall, a retail department, and a supermarket may belong to locations for shopping. A restaurant, a canteen, and a snack street may be locations for eating and drinking. Places may be classified according to differences and similarities of functions of the places. For example, types of the places may be classified into shopping, catering, attraction, medical care, etc. It should be noted that the application scenario100is merely provided for the purpose of illustration, and not intended to limit the scope of the present disclosure. For those skilled in the art, multiple variations and modifications may be made to the processes under the teachings of the present disclosure. For example, the application scenario100may also include an information source. However, those variations and modifications do not depart from the scope of the present disclosure. The Internet of Things system may be an information processing system that includes a user platform, a service platform, a management platform, a sensor network platform, or any combination thereof. 
The user platform may be a leader of the entire Internet of Things operation system, which may be used to obtain a user request. The user request may be a foundation and premise of the formation of the Internet of Things operation system. The connections between the platforms of the Internet of Things system may be established to meet the user request. The service platform may be a bridge between the user platform and the management platform to realize connection between the user platform and the management platform. The service platform may provide an input and output service for a user. The management platform may realize overall planning and coordination of connection and cooperation between various functional platforms (such as the user platform, the service platform, the sensor network platform, and the object platform). The management platform may gather information of the Internet of Things operation system, and may provide functions of perception management and control management for the Internet of Things operation system. The sensor network platform may realize a function of connecting the management platform and the object platforms, and may perform the functions of perception information sensing communication and control information sensing communication. The object platform may be a functional platform for generating perception information and executing control information. Information processing in the Internet of Things system may be divided into a processing process of the perception information and a processing process of the control information. The control information may be information generated based on the perception information. The processing of the perception information may be that the object platform obtains the perception information and transmits the perception information to the management platform through the sensor network platform. The management platform may transmit calculated perception information to the service platform, and finally to the user platform. The user may generate control information after judging and analyzing the perception information. The control information may be generated by the user platform and sent to the service platform. The service platform may transmit the control information to the management platform. The management platform may calculate the control information, and send the control information to the object platform through the sensor network platform, thereby controlling an object corresponding to the object platform. In some embodiments, when applied to city management, the Internet of Things system may be called an Internet of Things system in a smart city. FIG.2is a block diagram illustrating an exemplary Internet of Things system for counting and regulating pedestrian volume in a public place of a smart city according to some embodiments of the present disclosure. As shown inFIG.2, an Internet of Things system200for counting and regulating pedestrian volume in a public place of a smart city may include a user platform210, a service platform220, a management platform230, a sensor network platform240, and an object platform250. In some embodiments, the Internet of Things system200for counting and regulating pedestrian volume in a public place may be a part of the server110or implemented by the server110. In some embodiments, the Internet of Things system200for counting and regulating pedestrian volume in a public place of a smart city may be applied to various scenarios of pedestrian volume counting and regulation. 
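The perception-information and control-information paths described above can be pictured as a simple relay chain between the five platforms. The following sketch is only a schematic illustration of that relay; the class, method, and field names are assumptions and do not come from the disclosure.

```python
# Minimal relay sketch of the two information paths: perception information
# travels object -> sensor network -> management -> service -> user platform,
# and control information travels the opposite way. All names are assumptions.
class Platform:
    def __init__(self, name):
        self.name = name
        self.upstream = None      # toward the user platform
        self.downstream = None    # toward the object platform

    def relay_perception(self, info):
        print(f"{self.name} forwards perception information: {info}")
        if self.upstream:
            self.upstream.relay_perception(info)

    def relay_control(self, command):
        print(f"{self.name} forwards control information: {command}")
        if self.downstream:
            self.downstream.relay_control(command)

# Build the five-platform chain.
names = ["object platform", "sensor network platform", "management platform",
         "service platform", "user platform"]
platforms = [Platform(n) for n in names]
for lower, higher in zip(platforms, platforms[1:]):
    lower.upstream, higher.downstream = higher, lower

platforms[0].relay_perception({"place": "supermarket", "pedestrian_volume": 312})
platforms[-1].relay_control({"place": "supermarket", "measure": "limit pedestrian volume"})
```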
In some embodiments, the Internet of Things system200for counting and regulating pedestrian volume in a public place may obtain a query instruction based on a query request for an intended place initiated by a user and obtain a query result according to the query instruction. In some embodiments, the Internet of Things system200for counting and regulating pedestrian volume in a public place of a smart city may determine a management and control strategy for the intended place based on the query request for the intended place and current information of the intended place. Various scenarios of counting and regulating pedestrian volume in a public place may include, for example, a pedestrian volume monitoring in a place scenario, a municipal construction planning scenario, an urban population distribution prediction scenario, etc. It should be noted that the above scenarios are merely examples, which do not limit the application scenarios of the Internet of Things system200for counting and regulating pedestrian volume in a public place of a smart city. Those skilled in the art may apply the Internet of Things system200for counting and regulating pedestrian volume in a public place of a smart city to any other appropriate scenarios on the basis of the content disclosed in the embodiment. In some embodiments, the Internet of Things system200for counting and regulating pedestrian volume in a public place of a smart city may be applied to the pedestrian volume monitoring in a place. When the Internet of Things system200is applied to the pedestrian volume monitoring in a place, the object platform may be configured to collect a query request for the intended place and current information of the intended place, and determine a management and control strategy for the intended place based on the above information. In some embodiments, the Internet of Things system200for counting and regulating pedestrian volume in a public place of a smart city may be applied to the municipal construction planning. For example, a user demand for a place in an area corresponding to the place may be determined based on the pedestrian volume in the place and a management and control strategy corresponding to the pedestrian volume. Whether a new relevant place is to be built nearby may be determined based on the user demand. In some embodiments, the Internet of Things system200for counting and regulating pedestrian volume in a public place of a smart city may be applied to the urban population distribution prediction. For example, the user platform may receive the query request for the intended place initiated by the user. The service platform may transmit the query request to the management platform and generate a query instruction based on the management platform. The query instruction may be issued, based on the management platform, to a sensor network sub-platform of the sensor network platform corresponding to the management platform according to the regional location. The query instruction may be sent, based on the sensor network sub-platform, to the object platform corresponding to the sensor network sub-platform. The query instructions may be counted based on the object platform and a population distribution may be obtained. 
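For the urban population distribution prediction scenario, the counting step mentioned above can be as simple as tallying query instructions per regional location and normalizing. The sketch below is illustrative only; the sample instructions and field names are invented.

```python
# Sketch of the counting step in the urban population distribution prediction
# scenario: tally query instructions per regional location and normalize. The
# sample instructions and field names are assumptions.
from collections import Counter

query_instructions = [
    {"regional_location": "district_A", "intended_place": "supermarket 160-2"},
    {"regional_location": "district_A", "intended_place": "office building 160-1"},
    {"regional_location": "district_B", "intended_place": "administrative building 160-3"},
]

counts = Counter(q["regional_location"] for q in query_instructions)
total = sum(counts.values())
population_distribution = {region: n / total for region, n in counts.items()}
print(population_distribution)   # share of queries per regional location
```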
Taking the Internet of Things system200for counting and regulating pedestrian volume in a public place of a smart city applied to the pedestrian volume monitoring in a place scenario as an example, details of Internet of Things system200for counting and regulating pedestrian volume in a public place is specifically illustrated as follows. The user platform210may be a user-oriented service interface. In some embodiments, the user platform210may receive a query request for an intended place initiated by a user. In some embodiments, the user platform210may be configured to feed back a query result to the user. In some embodiments, the user platform210may send the query request to the service platform. In some embodiments, the user platform210may receive a management and control strategy, the query result, etc. sent by the service platform. The service platform220may be a platform for preliminary processing of the query request. In some embodiments, the service platform220may transmit the query request to the management platform, and generate a query instruction based on the management platform. The query instruction may include a regional location of the intended place. In some embodiments, the service platform220may receive the management and control strategy, the query result, etc. sent by the management platform. The management platform230may refer to an Internet of Things platform that overall plans and coordinates the connection and cooperation between various functional platforms, and provides perception management and control management. In some embodiments, the management platform may generate the query instruction. In some embodiments, the management platform230may issue the query instruction to a sensor network sub-platform of the sensor network platform corresponding to the management platform according to the regional location. In some embodiments, the management platform230may receive the query request sent by the service platform. The sensor network platform240may be a platform that realizes an interaction between the management platform and the object platform. In some embodiments, the sensor network platform240may receive the query instruction sent by the management platform. In some embodiments, the sensor network platform240may send the query instruction to the object platform corresponding to the sensor network platform. In some embodiments, the sensor network platform240may be configured as an independent structure. The independent structure may refer to that the sensor network platform may perform data storage, data processing, and/or data transmission for data of different object platforms by using different sensor network sub-platforms. For example, each sensor network sub-platform may be in one-to-one correspondence with an object sub-platform of each object platform. The sensor network platform240may obtain query requests for intended places and relevant places uploaded by each object sub-platform, and upload the query requests to the management platform. The object platform250may be a functional platform for generating perception information and executing control information. The object platform250may be configured to obtain a query result according to the query instruction. The query result may include current information of the intended place and recommendation information of the relevant place. 
In some embodiments, the object platform250may also feed back the query result to the user platform through the sensor network platform, the management platform, and the service platform corresponding to the object platform, respectively. In some embodiments, the object platform250may be configured to include a plurality of object sub-platforms, and different object sub-platforms may obtain information of places in different areas correspondingly. For example, the object platform250may upload the query requests for the intended place and the relevant place to each sensor network platform corresponding to the object platform. In some embodiments, the object platform250may be further configured to obtain information of the intended places and the relevant place in a regional place map and generate the query result according to the query requests. Different nodes in the regional place map may represent different places. Attributes of the nodes in the regional place map may include place real-time information and place basic information. An edge in the regional place map may be configured to connect two nodes, a mutual relationship of which meets a preset condition. In some embodiments, the object platform250may be further configured to divide, based on a preset algorithm, the regional place map into several sub-maps; determine, based on the query request, a target sub-map from the several sub-maps; and determine, based on the target sub-map, a recommendation node, and determine a place corresponding to the recommendation node as the relevant place. In some embodiments, the object platform250may be further configured to determine a management and control strategy for the intended place based on the query request for the intended place and the current information of the intended place. In some embodiments, the object platform250may be further configured to determine, based on a current flow of the intended place and a count of users querying, a flow management and control strategy of the intended place. In some embodiments, the object platform250may be further configured to predict a flow of the intended place at a future time. When the flow at the future time is greater than a preset threshold, flow management and control may be performed in the intended place. In some embodiments, the object platform250may be further configured to determine, based on the count of users querying the intended place, popularity of the intended place; and adjust, based on the popularity, the current flow of the intended place to determine a flow at the future time. Detailed descriptions regarding the object platform250may be found inFIG.4-FIG.7and relevant descriptions. It will be understood that for those skilled in the art, after understanding the principle of the system, it is possible to apply the Internet of Things system200for counting and regulating pedestrian volume in a public place of a smart city to any other appropriate scenario without departing from this principle. It should be noted that the above description of the system and its components is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. It will be understood that for those skilled in the art, after understanding the principle of the system, it is possible to arbitrarily combine various components, or form subsystems to connect with other components without departing from this principle. For example, each component may share a storage device. 
Each component may have its own storage device. Those variations are still within the scope of the present disclosure. FIG.3is a flowchart illustrating an exemplary process for counting and regulating pedestrian volume in a public place of a smart city according to some embodiments of the present disclosure. As shown inFIG.3, in some embodiments, the process300may be performed by a processing device. In310, a query request for an intended place initiated by a user may be received based on a user platform. The intended place may be a place that a user intends to go to. For example, the intended place may be a place such as a shopping mall, a restaurant, a school, an administrative center, an office building, or the like. In some embodiments, the intended place may be a place input by the user on a user terminal; for example, the user may input a specific restaurant name, etc. on the mobile phone. A query request may be content through which the user searches for relevant information of the intended place. For example, the query request may include query content such as a location, a function, whether it is controlled or not, information of pedestrian volume, etc. of the intended place. In some embodiments, if the query request is obtained by input of the user on the user terminal, the user terminal may be used as a user platform to obtain the query request. In320, the query request may be transmitted, based on a service platform, to a management platform, a query instruction may be generated based on the management platform, and the query instruction may include a regional location of the intended place. The query instruction may be information extracted from the query request that may be identified by a system. For example, the query instruction may include information such as a regional location, a query time, a query method, etc. of the intended place. The regional location may be information that reflects a geographical location of the intended place. For example, the regional location may be a latitude and a longitude, a coordinate, a relative distance from a current location of the user, etc. In some embodiments, the query request received from the user platform may be sent to the management platform for preliminary processing through the service platform to form a query instruction that may be identified by the system. For example, the query instruction may be a matrix, a data table, etc. composed of information such as the regional location, the query time, the query method, etc. of the intended place. In330, the query instruction may be issued, based on the management platform, to a sensor network sub-platform of the sensor network platform corresponding to the management platform according to the regional location. In some embodiments, the sensor network platform may include a plurality of sensor network sub-platforms, and different sensor network sub-platforms may be configured to receive query instructions of different regional locations issued by the management platform. The sensor network platform may perform data storage, data processing, and/or data transmission for data of different object platforms by using different sensor network sub-platforms. Different sensor network sub-platforms may correspond to different regional locations. In340, the query instruction may be sent, based on the sensor network sub-platform, to the object platform corresponding to the sensor network sub-platform. 
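Operations 320 to 340 amount to extracting a machine-readable query instruction and routing it by regional location. The sketch below illustrates one possible shape of that routing; the class, field, and region names are assumptions made for illustration, not the disclosure's implementation.

```python
# Sketch of operations 320-340: build a query instruction from the user's query
# request and route it, by regional location, to the matching sensor network
# sub-platform and its object sub-platform. All names are assumptions.
from datetime import datetime

class ObjectSubPlatform:
    def query(self, instruction):
        # Placeholder: a real object sub-platform would read sensors and the
        # regional place map to build the query result of operation 350.
        return {"intended_place": instruction["intended_place"],
                "current_info": {}, "recommendations": []}

class SensorNetworkSubPlatform:
    def __init__(self, region, object_sub_platform):
        self.region = region
        self.object_sub_platform = object_sub_platform

    def handle(self, instruction):
        # Operation 340: forward to the corresponding object sub-platform.
        return self.object_sub_platform.query(instruction)

def build_query_instruction(query_request):
    """Operation 320: extract system-readable fields from the query request."""
    return {"regional_location": query_request["region"],
            "intended_place": query_request["place_name"],
            "query_time": datetime.now().isoformat(timespec="seconds"),
            "query_method": query_request.get("method", "app")}

def issue(instruction, sub_platforms_by_region):
    """Operation 330: pick the sensor network sub-platform for the regional location."""
    return sub_platforms_by_region[instruction["regional_location"]].handle(instruction)

# One sensor network sub-platform (and object sub-platform) per district.
sub_platforms = {"district_A": SensorNetworkSubPlatform("district_A", ObjectSubPlatform())}
request = {"region": "district_A", "place_name": "supermarket 160-2"}
result = issue(build_query_instruction(request), sub_platforms)
```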
In some embodiments, the object platform may include a plurality of object sub-platforms, and different object sub-platforms may correspond to different regional locations. For example, if different object sub-platforms are set in different regional locations, the sensor network sub-platform and the object sub-platform corresponding to a same regional position may have a corresponding relationship. The sensor network sub-platform may send the query instruction to an object sub-platform corresponding to the sensor network sub-platform. In350, a query result may be obtained based on the object platform according to the query instruction, and the query result may include current information of the intended place and recommendation information of a relevant place. The query result may be information related to the intended place. For example, the query result may include current information of the intended place and recommendation information of a relevant place. The current information may be real-time information of the intended place. For example, the current information may include information on real-time pedestrian volume of the intended place, information on whether the intended place is currently under control (access restriction), information on restrictions on roads around the intended place, or the like. The relevant place may be a place with similar features to the intended place. For example, the relevant place may be a place with similar functions. If the intended place is a shopping mall, the relevant place may be a pedestrian street, a supermarket, etc. If the intended place is a hospital, the relevant place may be an outpatient department, a pharmacy, or the like. As another example, the relevant place may be a place that has a similar regional location. If the intended place is a certain parking lot, the relevant place may be other parking lots nearby. In some embodiments, a spatial distance between the relevant place and the intended place may be within a preset range. The recommendation information may be information by which the relevant place is recommended to the user. For example, the recommendation information may be information in forms such as text, voice, an image, or the like, or any combination thereof. In some embodiments, the query result may be further determined in ways such as based on a sensor, manual input, a preset rule, or the like. For example, the object platform may obtain information on real-time pedestrian volume as the query result through a pedestrian volume monitoring sensor. As another example, the object platform may use the response of a human customer service agent to the query request as the query result. As yet another example, the object platform may determine the query result by comparing regional locations of each place and the intended place, or by classifying the places based on function, or the like. For example, by comparing a coordinate, a longitude and a latitude, etc., the relevant place of the intended place may be determined, or the relevant place related to the function of the intended place may be determined through a preset place function classification table. In360, the query result may be fed back, based on the object platform, to the user platform through the sensor network sub-platform, the management platform, and the service platform corresponding to the object platform respectively. 
The process of feeding back the query result from the object platform to the user platform is an inverse process of the above transmission process of the user request and the query instruction, which will not be repeated herein. In some embodiments, the process300may also include an operation370in which the object platform determines, based on the query request for the intended place and the current information of the intended place, a management and control strategy for the intended place. The number of the operation370is merely for convenience of description, and does not mean that the sequence of the operations is limited. For example, the operation370may be performed between the operation350and the operation360. In some embodiments, the operation370in the process300may be optional, that is, the operation370may not be included. The management and control strategy may be a plan to limit the pedestrian volume, limit the movement, and limit the access to the intended place. For example, the management and control strategy may include information such as a control time range, a control space range, a control operator arrangement, a pedestrian volume diversion planning, etc. In some embodiments, the management and control strategy may be determined based on a preset of the system. For example, when the current pedestrian volume of the intended place exceeds a threshold of pedestrian volume preset by the system, the management and control strategy may be determined to be measures such as limiting the pedestrian volume and diverting the pedestrian volume. The object platform may obtain the management and control strategy generated by the system based on a sensor network sub-platform corresponding to the object platform. In some embodiments, the management and control strategy may be determined based on a user setting. For example, the user may make specific settings for the management and control strategies based on emergencies such as a road construction, an epidemic, etc. The object platform may obtain the management and control strategy manually input based on the sensor network sub-platform corresponding to the object platform. Further descriptions regarding determining the management and control strategy of the intended place may be found inFIG.7and relevant descriptions. Through the method for counting and regulating pedestrian volume in a public place described in some embodiments of the present disclosure, recommending a relevant place and a management and control strategy to the user can be realized based on the current pedestrian volume in a certain place. Through the intelligent recommendation based on pedestrian volume, on the premise of preventing the user from going to a place with high pedestrian volume, the request of the user may be met as much as possible, which can improve the user experience. It should be noted that the descriptions of the above process300are merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For those skilled in the art, multiple variations and modifications may be made to the process300under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. For example, the process300may also include a preprocessing operation. FIG.4is a schematic diagram illustrating an exemplary process for determining a relevant place according to some embodiments of the present disclosure. 
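Before turning to the relevant-place determination of FIG.4, the system-preset rule described for operation 370 — restrict or divert pedestrian volume once a preset threshold is exceeded — can be sketched as follows. The threshold value, the 0.5 weighting of querying users, and every field name are assumptions, not values from the disclosure.

```python
# Sketch of the system-preset rule of operation 370: when the current (or
# predicted) pedestrian volume of the intended place exceeds a preset
# threshold, limiting and diverting measures are selected.
def control_strategy(current_volume, querying_users, preset_threshold=1000):
    # Treat the number of users currently querying the place as a rough
    # popularity signal pushing the expected near-term flow upward.
    predicted_volume = current_volume + 0.5 * querying_users
    if predicted_volume > preset_threshold:
        return {"measures": ["limit pedestrian volume", "divert pedestrian volume"],
                "control_time_range": "next 2 hours",
                "predicted_volume": predicted_volume}
    return {"measures": ["no restriction"], "predicted_volume": predicted_volume}

print(control_strategy(current_volume=900, querying_users=400))
# -> limiting/diverting measures, since 900 + 0.5 * 400 exceeds the threshold
```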
As shown inFIG.4, a relevant place in a query result may be determined through the following process400. Based on an object platform, information of the intended place and the relevant place in a regional place map420may be obtained according to a query request410, and a query result430may be generated. In some embodiments, the query request may include an intended place. The object platform may extract information related to the intended place in the query request, and find a node corresponding to the intended place in the regional place map. For example, the query request of the user may include information such as a name, a position coordinate, a function type, etc. of the intended place. Based on the above information, the object platform may find the node corresponding to the intended place related to the above information in the regional place map. In some embodiments, the object platform may determine a node corresponding to the relevant place based on the node corresponding to the intended place. In some embodiments, the object platform may determine the node corresponding to the relevant place based on attributes of the node corresponding to the intended place. For example, the node corresponding to the relevant place may be determined based on a place location, a place type, etc. of the node corresponding to the intended place. In some embodiments, the relevant place (the corresponding node) may be determined through the operations of S1to S3. In some embodiments, the object platform may generate the query result. For example, the object platform may input the attributes of the nodes corresponding to the intended place and the relevant place into a preset table of query results as the query result. The regional place map420may represent mutual relationship among all places and features of each place included in a certain region. A regional size may be set according to the query request of the user, for example, a region may be a city, a district/county, a township, etc. The place in the regional map may be represented based on the node in the regional place map. The features of the place may be obtained based on the attributes of the node. The mutual relationship between the places may be obtained based on edges in the regional place map. In some embodiments, the regional place map may be further obtained by importing existing map data as basic data and based on manual marking, or the like. The regional place map may include a plurality of nodes, and different nodes in the regional place map may represent different places. For example, node1may represent a certain parking lot in the regional place map, node2may represent a certain hotel in the regional place map. In some embodiments, each node may include the attributes of the node. The attributes of the node may be parameters of the node. For example, the attributes of the node may include place real-time information, place basic information, etc. of the place corresponding to the node. The place real-time information may be information that dynamically changes according to a real-time situation of the place. For example, the place real-time information may include current pedestrian volume, current management and control information, or any combination thereof. The current pedestrian volume may be passenger flow volume in the place at a current time or within a certain time period close to the current time. 
In some embodiments, the current pedestrian volume may be detected by a sensor installed in a place or a relevant road section. The current management and control information may be management information of the pedestrian volume in the place at a current time or within a certain time period close to the current time. For example, the current management and control information may include information such as limiting, diverting, complete opening, etc. of pedestrian volume. In some embodiments, the current management and control information may be input by a user or obtained through a network. The place basic information may be relatively fixed information that does not change over time. For example, the place basic information may include a place type (that is, a place function), a place location (such as a coordinate or a latitude and a longitude), or any combination thereof. The place type may be a function type of the place. For example, the place type may include a type such as catering, shopping, school, accommodation, medical care, etc. The place location may be a geographical location to which the place specifically relates. For example, the place location may be information such as a latitude, a longitude, a coordinate, etc. of where the place is located. An edge424in the regional place map may be configured to connect two nodes, a mutual relationship of which meets a preset condition. Attributes of the edge may represent a relationship between different nodes in the regional place map. The attributes of the edge may include an edge weight. The edge weight may reflect a correlation between the two nodes that are connected or to be connected. For example, the edge weight may be 1, 2, 10, etc. The larger the edge weight is, the weaker the correlation between the two nodes is. In some embodiments, the preset condition that needs to be met between the two connected nodes may be related to the edge weight between the two nodes. For example, the preset condition may be that the edge weight is lower than or equal to a preset threshold, or the like. In some embodiments, the edge weight may be determined by a place type difference and a spatial distance of places corresponding to two connected nodes. The place type difference may be a type difference of places corresponding to two nodes. In some embodiments, the place type difference may be represented by a place type difference value within a range of 0 to 5. The larger the place type difference value is, the greater the difference in function type between the two places is. For example, the place type difference value between different shopping malls may be 0. The place type difference value of a shopping mall and a retail department may be 1. The place type difference value of a shopping mall and a parking lot may be 5, or the like. The spatial distance may be a straight line distance between places. In some embodiments, the spatial distance may be represented by a spatial score within a range of 0 to 5. The larger the spatial score is, the closer a distance between two places is. For example, for two places with a spatial distance ranging from 0 to 200 meters, the spatial score may be 5. For two places with a spatial distance of more than 2000 meters, the spatial score may be 1. In some embodiments, an edge weight of an edge between two nodes may be calculated through a place type difference value and a spatial score. 
For example, an edge weight of two nodes A and B may be calculated through the following equation (1): Q_AB = 10 − (X_AB − D_AB) (1), where Q_AB denotes the edge weight between the two nodes A and B, X_AB denotes the spatial score between the node A and the node B, and D_AB denotes the place type difference value between the node A and the node B. In some embodiments, the spatial score and the place type difference value may be weighted for calculation, that is, the edge weight of an edge between two nodes may be a weighted combination of the spatial score and the place type difference value of the two nodes. For example, the edge weight of the edge between the two nodes A and B may be calculated through the following equation (2): Q_AB = 10 − (m1·X_AB − m2·D_AB) (2), where m1 denotes a weight of the spatial score, m2 denotes a weight of the place type difference value, and m1 + m2 = 1. In some embodiments, the weight of the spatial score and the weight of the place type difference value may be determined by manual setting, for example, determined by a query request of a user. In some embodiments, the preset condition that is met between two connected nodes may be that a place type difference of places corresponding to the two connected nodes is smaller than a preset difference value, or a spatial distance of places corresponding to the two connected nodes is less than a distance threshold. Merely by way of example, the preset conditions may be that a place type difference value of the places corresponding to the two connected nodes is less than 2, or the spatial distance of places corresponding to the two connected nodes is less than 200 m, or the spatial score is greater than 4. In some embodiments, the relevant place may be determined through the following process. Candidate places may be determined based on degree of association and an edge weight. The relevant place may be determined, based on a query request, from the candidate places. The degree of association may be degree of similarity and correlation between nodes. For example, the degree of association may be degree of correlation between node locations (i.e., place locations) and node types (i.e., place types). In some embodiments, the degree of association may be divided into different levels. For example, the degree of association may include different levels such as a primary association, a secondary association, etc. The primary association may be a level of the degree of association with a highest degree of similarity and correlation between nodes. The secondary association may be a level of the degree of association with a relatively high degree of similarity and correlation between nodes. For example, as shown inFIG.4, the node A and the node B are connected by only one edge, and the two nodes may be a primary association. The node A and the node D are connected by at least two edges, and the two nodes may be a secondary association, or the like. The candidate place may be a place whose degree of association and edge weight meet a preset condition. For example, the preset condition may be that the degree of association between the candidate place and the intended place is a primary association or a secondary association, and the edge weight corresponding to the edge between the candidate place and the intended place is less than or equal to 4. In some embodiments, the relevant place (i.e., the corresponding node) may also be determined through the following operations. 
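Before the operations S1 to S3 below, it may help to see equations (1) and (2) and the example connection condition in code. The sketch uses networkx only to hold node attributes and edges; the sample places, the place type difference table, and the intermediate spatial-score bins are assumptions (only the 0-200 m and over-2000 m bins are given above).

```python
# Sketch of the edge-weight calculation of equations (1)/(2) and of adding an
# edge only when the example preset condition is met (place type difference
# value < 2 or spatial distance < 200 m). Sample data are assumptions.
import math
import networkx as nx

def spatial_score(distance_m):
    """Map a straight-line distance to a 0-5 spatial score (5 = closest)."""
    bins = [(200, 5), (800, 4), (1400, 3), (2000, 2)]   # intermediate bins are assumed
    for upper, score in bins:
        if distance_m <= upper:
            return score
    return 1

def edge_weight(x_ab, d_ab, m1=1.0, m2=1.0):
    """Equation (2): Q_AB = 10 - (m1*X_AB - m2*D_AB); m1 = m2 = 1 reduces to equation (1)."""
    return 10 - (m1 * x_ab - m2 * d_ab)

# Hypothetical places with basic information (type, coordinates) and real-time volume.
places = {
    "mall_1":   {"place_type": "shopping", "xy": (0, 0),    "current_volume": 850},
    "retail_1": {"place_type": "shopping", "xy": (0, 150),  "current_volume": 120},
    "clinic_1": {"place_type": "medical",  "xy": (0, 1800), "current_volume": 60},
}
# Assumed place type difference values (0-5), keyed by sorted type pairs.
type_diff = {("shopping", "shopping"): 0, ("medical", "shopping"): 4, ("medical", "medical"): 0}

G = nx.Graph()
for name, attrs in places.items():
    G.add_node(name, **attrs)

names = list(places)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        dist = math.dist(places[a]["xy"], places[b]["xy"])
        d_ab = type_diff[tuple(sorted((places[a]["place_type"], places[b]["place_type"])))]
        if d_ab < 2 or dist < 200:                      # example preset condition
            G.add_edge(a, b, weight=edge_weight(spatial_score(dist), d_ab))

print(G.edges(data=True))   # mall_1-retail_1 connected with Q = 10 - (5 - 0) = 5
```

The operations S1 to S3 that follow then divide such a map into sub-maps and pick the relevant place from the sub-map containing the intended place.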
In S1, the regional place map may be divided, based on a preset algorithm, into several sub-maps. The preset algorithm may be the algorithm for dividing the regional place map according to certain rules. Based on the preset algorithm, each node in the regional place map may be clustered and divided based on certain features. Detailed descriptions regarding the preset algorithm may be found inFIG.5and relevant descriptions thereof. The sub-map may be a sub-map formed by nodes with similar features and edges between the nodes in the regional place map. For example, the sub-map may be a set of a certain part of nodes and edges in the regional place map. The sub-map may include at least one node. For example, based on the preset algorithm, the regional place map420inFIG.4may be divided into two sub-maps, for example, such as a sub-map421and a sub-map425. In S2, the target sub-map may be determined, based on the query request, from the several sub-maps. The target sub-map may be a sub-map that contains an intended place. In some embodiments, the target sub-map may include a target node (i.e., an intended place). In some embodiments, the object platform may designate the sub-map where the intended place is located as the target sub-map. For example, if the target node inFIG.4is a node422, the sub-map421may be determined as the target sub-map. In S3, a recommendation node may be determined based on the target sub-map, and a place corresponding to the recommendation node may be determined as the relevant place. A recommended node423may be a node that meets a preset requirement, for example, a node that is similar to the target node422. In some embodiments, one or more nodes in a same sub-map as the target node (such as a node in the target sub-map421inFIG.4) may be used as recommendation nodes. Through the regional place map described in the some embodiments of the present disclosure, the visual processing of the place function and the place location can be realized, which is convenient for the judgment of the relevant place. In addition, the candidate place may be determined by the degree of association and the edge weight between the nodes corresponding to the place, which improves the accuracy of the judgment process. FIG.5is a schematic diagram illustrating a preset algorithm according to some embodiments of the present disclosure. In some embodiments, the process500may be performed by an object platform. In510, one of n nodes included in the regional place map may be designated as a benchmark node to determine a shortest path from the benchmark node to other nodes. The benchmark node may be any node of all the nodes included in the regional place map. In some embodiments, in a division of the regional place map, each node (n) in the regional place map may be calculated as a benchmark node, respectively. The shortest path may be a shortest edge connecting the benchmark node to other nodes through edges. For example, inFIG.4, when a node A is the benchmark node, a shortest path from A to B may be an edge AB. A shortest path from A to C may be an edge AC. A shortest path from A to D may be an edge ABD (i.e., an edge AB+an edge BD) or an edge ACD (i.e., an edge AC+an edge CD). A shortest path from A to E may be an edge ABDE or an edge ACDE. A shortest path from A to F may be an edge ABDEF or an edge ACDEF. A shortest path from A to G may be an edge ABDEG or an edge ACDEG. A shortest path from A to H may be an edge ABDEH or an edge ACDEH. 
A shortest path from A to J may be an edge ABDEFJ, an edge ACDEFJ, an edge ABDEGJ, an edge ACDEGJ, an edge ABDEHJ, or an edge ACDEHJ. For a shortest path when another node is used as the benchmark node, please refer to the shortest path when the node A is the benchmark node. In520, based on the shortest path, an edge betweenness centrality value of all edges in the regional place map may be calculated. The edge betweenness centrality value (EBC value) may be a parameter representing, with a certain node in the regional place map as the benchmark node, the proportion of shortest paths passing through a certain edge among the shortest paths from the benchmark node to the other nodes. For example, when the node A is the benchmark node, the edge betweenness centrality value of the edge AB may be: in shortest paths from the node A to other nodes (X), a sum of a proportion of paths passing through the edge AB in all paths from node A to other nodes X. For example, from the node A to the node J, there are six paths of ABDEFJ, ACDEFJ, ABDEGJ, ACDEGJ, ABDEHJ, and ACDEHJ. The three paths of ABDEFJ, ABDEGJ, and ABDEHJ pass through the edge AB, so a value of 3/6 may be obtained. From the node A to the node F, there are two paths of ABDEF and ACDEF. The path ABDEF passes through the edge AB, so a value of 1/2 may be obtained. In the same way, from the node A to the node H, a value of 1/2 may be obtained. From the node A to the node G, a value of 1/2 may be obtained. From the node A to the node E, there are two paths of ABDE and ACDE. The path ABDE passes through the edge AB, so a value of 1/2 may be obtained. From the node A to the node D, there are two paths of ABD and ACD. The path ABD passes through the edge AB, so a value of 1/2 may be obtained. From the node A to the node B, there is only one path of AB, so a value of 1 may be obtained. From the node A to the node C, there is only one path of AC without passing through the edge AB, so a value of 0 may be obtained. Therefore, the edge betweenness centrality value of the edge AB may be 1/2+1/2+1/2+1/2+1/2+1/2+1+0=4. Taking A as the benchmark node, a calculation method of edge betweenness centrality values of other edges may refer to the calculation method of the edge AB. An edge betweenness centrality value of an edge AC may be 4. An edge betweenness centrality value of an edge BD may be 3. An edge betweenness centrality value of an edge CD may be 3. An edge betweenness centrality value of an edge DE may be 5. An edge betweenness centrality value of an edge EF may be 4/3. An edge betweenness centrality value of an edge EH may be 4/3. An edge betweenness centrality value of an edge EG may be 4/3. An edge betweenness centrality value of an edge FJ may be 1/3. An edge betweenness centrality value of an edge HJ may be 1/3. An edge betweenness centrality value of an edge GJ may be 1/3. An edge betweenness centrality value of an edge BC may be 0. An edge betweenness centrality value of an edge FH may be 0. In530, each node in the regional place map may be designated as a benchmark node in turn, and the edge betweenness centrality value of each edge in the regional place map may be calculated by repeating the above operations when each node is designated as a benchmark node. For example, when the node H is used as the benchmark node, shortest paths from the node H to other nodes may be obtained: HF, HE, HJ, HED, HEG, HJG, HEDB, HEDC, HEDBA, and HEDCA. Edge betweenness centrality values of each edge may be obtained. 
An edge betweenness centrality value of an edge FH may be 1. An edge betweenness centrality value of an edge EH may be 11/2. An edge betweenness centrality value of an edge HJ may be 3/2. An edge betweenness centrality value of an edge DE may be 4. An edge betweenness centrality value of an edge EG may be 1/2. An edge betweenness centrality value of an edge GJ may be 1/2. An edge betweenness centrality value of an edge BD may be 3/2. An edge betweenness centrality value of an edge CD may be 3/2. An edge betweenness centrality value of an edge AB may be 1/2. An edge betweenness centrality value of an edge AC may be 1/2. An edge betweenness centrality value of an edge BC may be 0. An edge betweenness centrality value of an edge EF may be 0. An edge betweenness centrality value of an edge FJ may be 0. For calculation when other nodes are used as benchmark nodes, please refer to the above calculation descriptions. In540, n edge betweenness centrality values of each edge in the regional place map may be obtained based on the above operations. For example, after the edge betweenness centrality value is calculated for each edge by sequentially taking the node A to the node J inFIG.4as benchmark nodes, 9 edge betweenness centrality values may be obtained for each edge. In some embodiments, as shown inFIG.6, results obtained from the above calculation may be made into a Table600to obtain the edge betweenness centrality value corresponding to each edge when each node is used as the benchmark node. In550, a total value of edge betweenness centrality of each edge may be obtained by summing the n edge betweenness centrality values of each edge. The total value of edge betweenness centrality may be a sum of the edge betweenness centrality values. In some embodiments, the total value of edge betweenness centrality may be obtained by directly adding each edge betweenness centrality value. For example, as shown inFIG.6, the total value of the edge betweenness centrality of the edge AB may be a sum of 9 edge betweenness centrality values of the edge AB. In the same way, the total value of edge betweenness centrality of each other edge may be calculated respectively. In some embodiments, the total value of edge betweenness centrality may be obtained by a weighted sum of the edge betweenness centrality values. In some embodiments, the object platform may determine a second weight of each edge betweenness centrality value among n edge betweenness centrality values of each edge in combination with a user requirement. Based on the second weight, a weighted sum value of the n edge betweenness centrality values of an edge may be taken as a total value of edge betweenness centrality of the edge. The second weight may be a contribution degree (importance degree) of each edge betweenness centrality value to the total value of edge betweenness centrality. In some embodiments, the second weight may be determined by a user request. For example, when the user request includes a higher degree of attention to real-time pedestrian volume information, the difference in pedestrian volume between two nodes corresponding to each edge may be used as a weight of the edge. The larger the difference is, the smaller the weight corresponding to the edge is.
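Operations 510 through 550 can be checked numerically. The Python sketch below reconstructs the graph ofFIG.4from the worked example (an assumption, since the figure itself is not reproduced here), enumerates shortest paths by hop count with a breadth-first search, computes the per-benchmark edge betweenness centrality values, and sums them over all n benchmark nodes; exact fractions are kept so values such as 1/3 and 4/3 above can be verified directly. A weighted variant using the second weights described above would simply scale each benchmark's contribution before the addition.

```python
from collections import defaultdict, deque
from fractions import Fraction

# Adjacency list reconstructed from the worked example around FIG. 4
# (an assumption; the figure itself is not reproduced here).
GRAPH = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C", "E"],
    "E": ["D", "F", "G", "H"],
    "F": ["E", "H", "J"],
    "G": ["E", "J"],
    "H": ["E", "F", "J"],
    "J": ["F", "G", "H"],
}

def all_shortest_paths(graph, benchmark):
    """Operation 510: enumerate every shortest path (by hop count) from the
    benchmark node to each other node, using breadth-first search."""
    dist = {benchmark: 0}
    paths = {benchmark: [[benchmark]]}
    queue = deque([benchmark])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in dist:                    # first time this node is reached
                dist[nbr] = dist[node] + 1
                paths[nbr] = [p + [nbr] for p in paths[node]]
                queue.append(nbr)
            elif dist[nbr] == dist[node] + 1:      # an equally short alternative route
                paths[nbr].extend(p + [nbr] for p in paths[node])
    return paths

def edge_betweenness_for_benchmark(graph, benchmark):
    """Operation 520: for one benchmark node, each destination contributes, per
    edge, (number of its shortest paths using the edge) / (number of its
    shortest paths). Edges on no shortest path simply stay at 0."""
    ebc = defaultdict(Fraction)
    for target, routes in all_shortest_paths(graph, benchmark).items():
        if target == benchmark:
            continue
        for route in routes:
            for u, v in zip(route, route[1:]):
                ebc[frozenset((u, v))] += Fraction(1, len(routes))
    return ebc

def total_edge_betweenness(graph):
    """Operations 530-550: take each of the n nodes in turn as the benchmark
    node and sum the resulting values per edge (a plain, unweighted sum)."""
    totals = defaultdict(Fraction)
    for benchmark in graph:
        for edge, value in edge_betweenness_for_benchmark(graph, benchmark).items():
            totals[edge] += value
    return totals

if __name__ == "__main__":
    single = edge_betweenness_for_benchmark(GRAPH, "A")
    print(single[frozenset("AB")], single[frozenset("DE")])   # 4 and 5, as in the text
    totals = total_edge_betweenness(GRAPH)
    print(totals[frozenset("AB")])   # 8; a 0.5 score coefficient then gives the score value 4
```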
Applying weights to each edge betweenness centrality value based on the user request may amplify the impact of the user request on the result when the total value of edge betweenness centrality is calculated, so that subsequent division of the regional place map may be more in line with user experience. In560, a score value of edge betweenness centrality of each edge may be obtained based on the total value of the edge betweenness centrality of each edge. The score value of edge betweenness centrality may be a parameter obtained by evaluating the total value of edge betweenness centrality. In some embodiments, the score value of edge betweenness centrality may be a product of the total value of edge betweenness centrality and a score coefficient. The score coefficient may be set by a user or take a default value. For example, the score coefficient may be 0.5. As shown inFIG.6, if the total value of edge betweenness centrality of an edge AB is 8, the score value of edge betweenness centrality of the edge AB may be 4. In some embodiments, the recommendation information of the relevant place in the operation350may also include a recommendation index. The recommendation index may be determined by processing the current pedestrian volume information of each node and the relationship between each node and the intended place in the target sub-map through a recommendation model. The recommendation model may be a model that determines a recommendation index. The recommendation model may be a machine learning model. For example, the recommendation model may be a convolutional neural network model. An input of the recommendation model may include the current pedestrian volume information of each node and the relationship between each node and the intended place. An output of the recommendation model may include a recommendation index of each node. The relationship between each node and the intended place may be any data related to the intended place. For example, the relationship between each node and the intended place may be an edge weight of each edge connecting the corresponding node and the node corresponding to the intended place in the target sub-map, a score value of edge betweenness centrality of each edge connecting the corresponding node and the node corresponding to the intended place in the target sub-map, or the like. The recommendation index may be a parameter that reflects the degree to which the system recommends a certain relevant node. For example, the recommendation index may be a specific value such as 6, 9, etc., or the recommendation index may also be a recommendation level such as relatively recommended, strongly recommended, not recommended, or the like. In some embodiments, the recommendation index may be an integer from 0 to 10. In some embodiments, the recommendation model may be trained and obtained based on a large number of training samples with labels. In some embodiments, the training samples may be the historical pedestrian volume information of each node and the relationship between each node and the intended place. The labels may be recommendation indexes corresponding to each node. The labels may be obtained by manual annotation. The recommendation index may be determined by the recommendation model, which can quantify the recommendation degree of the node, reduce the unnecessary cost caused by manual recommendation, and improve the recommendation accuracy.
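The recommendation model above is described only as a trained machine learning model (for example a convolutional neural network); no architecture or weights are given. The snippet below is therefore only a hypothetical stand-in that illustrates the input/output contract — per-node pedestrian volume plus a relationship measure to the intended place in, an integer recommendation index in 0 to 10 out. The capacity value and the assumption that the relationship is an edge weight scaled to [0, 1] are illustrative choices, not taken from the text.

```python
def recommendation_index(pedestrian_volume, relationship, capacity=500):
    """Hypothetical stand-in for the trained recommendation model: combines a
    node's current pedestrian volume with its relationship to the intended
    place (an edge weight assumed to lie in [0, 1]) into an integer
    recommendation index from 0 to 10. Less crowding and a stronger
    relationship yield a higher index."""
    crowding = min(pedestrian_volume / capacity, 1.0)      # 0 = empty, 1 = at capacity
    return max(0, min(10, round(10 * (1.0 - crowding) * relationship)))

if __name__ == "__main__":
    # A node in the target sub-map: moderately busy, strongly related to the intended place.
    print(recommendation_index(pedestrian_volume=150, relationship=0.8))   # 6
```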
In570, the regional place map may be divided based on the score value of edge betweenness centrality of each edge. In some embodiments, the division process may be performed based on the score value of edge betweenness centrality. For example, an edge with a highest score value of edge betweenness centrality may be used as a segmentation edge, and the regional place map including the two nodes corresponding to the edge may be divided into two sub-maps. For example, based on the aforementioned calculation, score values of the edge betweenness centrality of each edge inFIG.4may be as follows. A score value of edge betweenness centrality of an edge FH may be 1. A score value of edge betweenness centrality of an edge EH may be 34/6. A score value of edge betweenness centrality of an edge HJ may be 19/6. A score value of edge betweenness centrality of an edge DE may be 20. A score value of edge betweenness centrality of an edge EG may be 14/3. A score value of edge betweenness centrality of an edge GJ may be 11/3. A score value of edge betweenness centrality of an edge BD may be 9. A score value of edge betweenness centrality of an edge CD may be 9. A score value of edge betweenness centrality of an edge AB may be 4. A score value of edge betweenness centrality of an edge AC may be 4. A score value of edge betweenness centrality of an edge BC may be 1. A score value of edge betweenness centrality of an edge EF may be 34/6. A score value of edge betweenness centrality of an edge FJ may be 19/6. In the regional place map420, the score value of edge betweenness centrality of the edge DE is the largest, that is, the edge DE may be used as a segmentation edge to divide the regional place map420, and a sub-map421and a sub-map425may be obtained. In some embodiments, each edge whose score value of edge betweenness centrality exceeds a threshold of score value may be used as a segmentation edge, and the regional place map including the two nodes corresponding to each such edge may be divided accordingly to obtain a plurality of sub-maps. The threshold of score value may be determined based on a user setting. In some embodiments, the target sub-map may be further divided according to the foregoing method based on the division result of the operations to obtain a better division result, and based on the final division result, places corresponding to other nodes in a same sub-map as the target node may be designated as recommended places. In some embodiments, the aforementioned division operations may be continuously repeated until each sub-map includes only one node. If one division corresponds to one stage, and after an original map is divided y times, each sub-map includes only one node, then the entire division process may have y stages, and the modularity value (represented by a letter Q in the equation below) corresponding to each stage may be calculated. The division result corresponding to the stage with the largest modularity value (Q value) may be taken as the optimal division result, and in the sub-maps obtained from that division result, the places corresponding to other nodes located in the same sub-map as the target node may be designated as recommended places. The modularity value may be understood as a difference between a network and a random network under a certain clustering division. Because the random network may not have a sub-map structure, the larger the difference corresponding to a certain clustering division is, the better the sub-map division result is.
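Operation 570 with the worked score values can be sketched directly: pick the edge with the highest score value of edge betweenness centrality as the segmentation edge, remove it, and read off the connected pieces as sub-maps. The node labels and score values below are taken from the example above; the component search is a plain union-find and is not something the text prescribes.

```python
from collections import defaultdict

def connected_components(nodes, edges):
    """Group nodes into components using the remaining edges (simple union-find)."""
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    groups = defaultdict(set)
    for n in nodes:
        groups[find(n)].add(n)
    return list(groups.values())

def divide_once(nodes, edges, scores):
    """Use the edge with the highest score value of edge betweenness centrality
    as the segmentation edge and split the map at that edge."""
    cut = max(edges, key=lambda e: scores[frozenset(e)])
    return cut, connected_components(nodes, [e for e in edges if e != cut])

if __name__ == "__main__":
    nodes = list("ABCDEFGHJ")
    edges = [("A","B"),("A","C"),("B","C"),("B","D"),("C","D"),("D","E"),("E","F"),
             ("E","G"),("E","H"),("F","H"),("F","J"),("G","J"),("H","J")]
    scores = {frozenset(e): s for e, s in [
        (("F","H"), 1), (("E","H"), 34/6), (("H","J"), 19/6), (("D","E"), 20),
        (("E","G"), 14/3), (("G","J"), 11/3), (("B","D"), 9), (("C","D"), 9),
        (("A","B"), 4), (("A","C"), 4), (("B","C"), 1), (("E","F"), 34/6),
        (("F","J"), 19/6)]}
    cut, parts = divide_once(nodes, edges, scores)
    print(cut, [sorted(p) for p in parts])   # DE is cut: {A,B,C,D} and {E,F,G,H,J}
```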
The modularity value (Q value) may be obtained based on the following equation (3):

Q = \frac{1}{2m} \sum_{vw} \left[ A_{vw} - \frac{k_v k_w}{2m} \right] \delta(c_v, c_w), (3)

where m denotes a count of edges in the original map, v and w denote any two nodes in the map, A_{vw} denotes whether there is an edge between the two nodes (if there is an edge, the value is 1, otherwise the value is 0), k_v and k_w denote the degrees of the node v and the node w, and \delta(c_v, c_w) denotes whether the two nodes are in a same sub-map (if the two nodes are in the same sub-map, the value is 1, otherwise the value is 0). In some embodiments, a theoretical size range of the Q value is [−0.5, 1). Through the preset algorithm described in some embodiments of the present disclosure, a sub-map with a higher degree of association may be obtained. When a user initiates a query request, the result may be directly queried in a sub-map with a higher degree of association, which avoids the huge amount of calculation caused by querying the entire regional place map and improves the query efficiency. FIG.7is a schematic diagram illustrating an exemplary process for determining a management and control strategy of an intended place according to some embodiments of the present disclosure. In some embodiments, the process700may be performed by an object platform. In some embodiments, the object platform may determine a flow management and control strategy of the intended place730based on a current pedestrian volume of the intended place710and a count of users querying the intended place720. The count of users querying the intended place720may be a count of users who issue a query request within a certain time period. In some embodiments, the count of users who query within a certain time period may be less than or equal to a count of query requests. For example, a same user may issue a plurality of query requests within the time period. By determining the count of users who query, the count of users may be used as a basis for estimating a current or future pedestrian volume of the intended place. The flow management and control strategy may be a solution to manage the pedestrian volume. For example, when the pedestrian volume in a certain place exceeds a threshold of pedestrian volume, the flow management and control strategy may include a measure such as limiting pedestrian volume in the place, diverting pedestrian volume to a relevant place, increasing management and control personnel and resources, etc. When the pedestrian volume in a certain place is less than a threshold of pedestrian volume, the flow management and control strategy may include a measure such as fully opening the place, reducing management and control personnel and resources, etc. In some embodiments, the flow management and control strategy may be determined based on a preset threshold of pedestrian volume. For example, when a sum of the current pedestrian volume and the count of users who query is greater than the preset threshold of pedestrian volume, the flow management and control strategy may include a measure such as flow limiting and flow diverting, etc. When a sum of the current pedestrian volume and the count of users who query is less than the preset threshold of pedestrian volume, the flow management and control strategy may include a measure such as opening, etc. In some embodiments, it may be determined whether to divert the flow to other places based on edge weights of edges between places corresponding to other nodes and the intended place in the sub-map.
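Equation (3) can be checked numerically for the division obtained above. The function below follows the equation for an undirected, unweighted map; the two-community assignment corresponds to cutting the edge DE in the worked example, and the exact coordinates of that check are an illustration rather than anything mandated by the text.

```python
def modularity(nodes, edges, community_of):
    """Equation (3): Q = (1/2m) * sum over v, w of [A_vw - k_v*k_w/(2m)] * delta(c_v, c_w),
    for an undirected, unweighted map with m edges."""
    m = len(edges)
    adjacency = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    degree = {n: sum(1 for a, _ in adjacency if a == n) for n in nodes}
    q = 0.0
    for v in nodes:
        for w in nodes:
            if community_of[v] != community_of[w]:
                continue                                   # delta(c_v, c_w) = 0
            a_vw = 1.0 if (v, w) in adjacency else 0.0
            q += a_vw - degree[v] * degree[w] / (2.0 * m)
    return q / (2.0 * m)

if __name__ == "__main__":
    nodes = list("ABCDEFGHJ")
    edges = [("A","B"),("A","C"),("B","C"),("B","D"),("C","D"),("D","E"),("E","F"),
             ("E","G"),("E","H"),("F","H"),("F","J"),("G","J"),("H","J")]
    community_of = {n: 0 if n in "ABCD" else 1 for n in nodes}   # the DE cut
    print(round(modularity(nodes, edges, community_of), 3))      # about 0.41, inside [-0.5, 1)
```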
For example, when there is no relevant place in the target sub-map corresponding to the intended place, no flow diverting may be performed. When there is at least one relevant place in the target sub-map corresponding to the intended place, the pedestrian volume may be diverted to the relevant place. The pedestrian volume may be diverted to a plurality of relevant places at the same time, or may be preferentially diverted to a relevant place corresponding to a node connected by an edge with a low edge weight. The pedestrian volume may also be diverted by sending recommendation information of the place corresponding to a node whose edge weight with the intended place is smaller than a preset threshold to a user terminal corresponding to a user who initiates a query. Through the above process of diverting the pedestrian volume, local management and control of the pedestrian volume can be realized, and on the premise of meeting the requests of users, unnecessary dangers caused by the accumulation of pedestrian volume can be avoided. In some embodiments, the method for counting and regulating pedestrian volume in a public place may further include determining whether to control the flow of the intended place currently based on predicted flow of the intended place at a future time. For example, when the predicted flow at the future time is greater than a preset threshold, the flow control may be performed on the intended place. The flow at the future time may be the pedestrian volume in the intended place at a future time. In some embodiments, the flow at the future time may be determined based on popularity of the intended place. For detailed description regarding the flow management and control for the intended place at a future time, please refer to the determination and implementation of the above-mentioned flow management and control strategy. In some embodiments, the object platform may determine the popularity of the intended place based on the count of users who query the intended place. The popularity may be a preference of the user for the intended place. The popularity may be determined in a plurality of ways. For example, the popularity may be determined based on a count of times the place is regarded as an intended place in the query requests of users. The popularity may be determined based on user reviews of the place, such as a count of online positive reviews. The popularity may be determined based on real-time pedestrian volume information, or the like. In some embodiments, the popularity may be represented by a specific value, for example, a value from 1 to 5; the larger the value is, the more popular the intended place may be. In some embodiments, the object platform may adjust the current real-time flow of the intended place based on popularity to determine a flow at the future time. The flow at the future time may be calculated through the following equation (4):

L_W = L_D + kH, (4)

where L_W denotes the flow at the future time, L_D denotes the current real-time flow, k denotes a constant (which may be any value from 60% to 100%), and H denotes a value of popularity. In some embodiments, an adjustment factor may be determined based on a confidence level of the recommendation model, and the flow at the future time may be adjusted through the adjustment factor to obtain the adjusted flow at the future time. The recommendation model may also include a confidence level.
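A minimal sketch of the flow prediction and strategy selection described above, assuming k = 0.8 (the text only bounds it between 60% and 100%) and keeping the adjustment factor as a fraction rather than a percentage; the adjustment step uses equations (5) and (6) that are spelled out in the next passage.

```python
def predicted_future_flow(current_flow, popularity, k=0.8):
    """Equation (4): L_W = L_D + k*H, with k assumed to be 0.8 here and H the
    popularity value of the intended place (e.g., a value from 1 to 5)."""
    return current_flow + k * popularity

def adjusted_future_flow(future_flow, recommendation_index, confidence):
    """Equations (5) and (6) from the next passage: y = (n/10) * Z and
    L = L_W * y, with the confidence level Z given here as a fraction of 1."""
    return future_flow * (recommendation_index / 10) * confidence

def flow_control_strategy(current_flow, querying_users, threshold):
    """Strategy selection: compare the current pedestrian volume plus the count
    of querying users against the preset threshold of pedestrian volume."""
    if current_flow + querying_users > threshold:
        return "limit and divert flow; add management and control personnel and resources"
    return "open the place; reduce management and control personnel and resources"

if __name__ == "__main__":
    lw = predicted_future_flow(current_flow=300, popularity=4)                  # 303.2
    print(adjusted_future_flow(lw, recommendation_index=9, confidence=0.9))     # about 245.6
    print(flow_control_strategy(current_flow=300, querying_users=80, threshold=350))
```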
The confidence level may be a parameter reflecting the confidence degree of a recommendation index output by the recommendation model. In some embodiments, the confidence level may be represented as a percentage from 0 to 100%. The higher the confidence level is, the higher the confidence degree of the recommendation index output by the recommendation model is. The adjustment factor may be a parameter for correcting the flow at a future time. In some embodiments, the adjustment factor may be calculated through the following equation (5):

y = (n / 10) \times Z \times 100\%, (5)

where y denotes the adjustment factor, n denotes a recommendation index output by the recommendation model, and Z denotes a confidence level of the recommendation model. The adjusted flow at the future time may be obtained through the following equation (6):

L = L_W \times y, (6)

where L denotes the adjusted flow at the future time. By correcting the flow at the future time through the adjustment factor, a prediction result that is more in line with the actual situation may be obtained. The flow management and control strategy of the intended place may be determined by the method described in some embodiments of the present disclosure. The flow management and control strategy may be adjusted according to the change of the pedestrian volume, so as to meet the real-time management and control requirements of the dynamic change of the pedestrian volume. In addition, the future pedestrian volume may be predicted through popularity, so that future management and control can be planned in advance to improve current and future travel experience of users. Some embodiments of the present disclosure also disclose a computer-readable storage medium storing computer instructions. The computer instructions may be executed by a processor to perform the method for counting and regulating pedestrian volume in a public place. Having thus described the basic concepts, it may be rather apparent to those skilled in the art after reading this detailed disclosure that the foregoing detailed disclosure is intended to be presented by way of example only and is not limiting. Although not explicitly stated here, those skilled in the art may make various modifications, improvements and amendments to the present disclosure. These alterations, improvements, and modifications are intended to be suggested by this disclosure, and are within the spirit and scope of the exemplary embodiments of this disclosure. Moreover, certain terminology has been used to describe embodiments of the present disclosure. For example, the terms “one embodiment,” “an embodiment,” and/or “some embodiments” mean that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Therefore, it is emphasized and should be appreciated that two or more references to “an embodiment” or “one embodiment” or “an alternative embodiment” in various parts of this specification are not necessarily all referring to the same embodiment. In addition, some features, structures, or characteristics of one or more embodiments in the present disclosure may be appropriately combined. Furthermore, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefore, is not intended to limit the claimed processes and methods to any order except as may be specified in the claims.
Although the above disclosure discusses through various examples what is currently considered to be a variety of useful embodiments of the disclosure, it is to be understood that such detail is solely for that purpose, and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover modifications and equivalent arrangements that are within the spirit and scope of the disclosed embodiments. For example, although the implementation of various components described above may be embodied in a hardware device, it may also be implemented as a software only solution, e.g., an installation on an existing server or mobile device. Similarly, it should be appreciated that in the foregoing description of embodiments of the present disclosure, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure aiding in the understanding of one or more of the various embodiments. However, this disclosure does not mean that the present disclosure object requires more features than the features mentioned in the claims. Rather, claimed subject matter may lie in less than all features of a single foregoing disclosed embodiment. In some embodiments, the numbers expressing quantities or properties used to describe and claim certain embodiments of the present disclosure are to be understood as being modified in some instances by the term “about,” “approximate,” or “substantially.” For example, “about,” “approximate,” or “substantially” may indicate ±20% variation of the value it describes, unless otherwise stated. Accordingly, in some embodiments, the numerical parameters set forth in the written description and attached claims are approximations that may vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the present disclosure are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable. Each of the patents, patent applications, publications of patent applications, and other material, such as articles, books, specifications, publications, documents, things, and/or the like, referenced herein is hereby incorporated herein by this reference in its entirety for all purposes, excepting any prosecution file history associated with same, any of same that is inconsistent with or in conflict with the present document, or any of same that may have a limiting affect as to the broadest scope of the claims now or later associated with the present document. By way of example, should there be any inconsistency or conflict between the description, definition, and/or the use of a term associated with any of the incorporated material and that associated with the present document, the description, definition, and/or the use of the term in the present document shall prevail. In closing, it is to be understood that the embodiments of the present disclosure disclosed herein are illustrative of the principles of the embodiments of the present disclosure. Other modifications that may be employed may be within the scope of the present disclosure. 
Thus, by way of example, but not of limitation, alternative configurations of the embodiments of the present disclosure may be utilized in accordance with the teachings herein. Accordingly, embodiments of the present disclosure are not limited to that precisely as shown and described. | 75,743 |
11861913 | DETAILED DESCRIPTION Currently, the technology supporting autonomous vehicles continues to improve. Improvements in digital camera technology, light detection and ranging (LIDAR), and other technologies have enabled vehicles to navigate roadways independent of drivers or with limited assistance from drivers. In some environments, such as factories, autonomous vehicles operate without any human intervention whatsoever. While autonomous technology is primarily focused on controlling the movement of vehicles in a traditional sense, little emphasis has been placed on alternative applications that may be implemented on top of these autonomous systems. Indeed, application-level systems generally tend to reinforce existing uses of autonomous systems. For example, experimental uses of autonomous technology have been utilized to perform functions such as returning vehicles to a known location after delivering a passenger or performing refueling of vehicles while not utilized by passengers. However, these approaches fail to fully utilize the hardware and processing power being implemented in autonomous vehicles. Thus, there currently exists a need in the state of the art of autonomous vehicles to provide additional services leveraging the existing hardware installed within such vehicles. In particular, there is a need to solve the technical problem of determining an operating status of a vehicle, such as an autonomous vehicle, during operation. In particular, this technical problem includes the need to determine whether the various computing devices and systems of a vehicle are properly detecting objects around the vehicle during navigation and/or other operation of the vehicle. In some cases, the operating status of the vehicle needs to be determined in real-time. At least some embodiments disclosed herein provide a technological solution to the above technical problem by using a map that stores physical objects previously detected by other vehicles (e.g., the hardware of these other vehicles is used to collect sensor data regarding objects encountered during travel). For example, these other vehicles can be vehicles that have previously traveled over the same road that a current vehicle is presently traveling on. By storing data regarding the previously-detected physical objects, new data received from the current vehicle regarding objects that are being encountered during travel can be compared to the previous object data stored in the map. Based on this comparison, an operating status of the current vehicle can be determined. The current vehicle is, for example, a manually-driven vehicle or an autonomous vehicle (e.g., a car, truck, aircraft, drone, watercraft, etc.). For example, a map can store data regarding a stop sign detected by one or more prior vehicles. The map includes a location of the stop sign. Data received from a current vehicle traveling at or near this same location is compared to data stored in the map. In one example, based on comparing the data received from the current vehicle to the stored map data, the operating status of the current vehicle is determined. For example, it may be determined that the current vehicle is failing to navigate properly based on a failure to detect the stop sign. 
In a different example, it may be determined that the current vehicle is failing to navigate properly based on a detection of the stop sign that does not properly match data that is stored in the map regarding the stop sign as collected from prior vehicles traveling on the same road. For example, the current vehicle may detect a location of the stop sign, but the newly-detected location does not match the location of the stop sign as stored in the map (e.g., does not match within a predetermined distance tolerance, such as for example, within 5-50 meters). In such a case, the current vehicle is determined as failing to operate properly (even though the object itself was detected, at least to some extent). Various embodiments as described below are used to determine the operating status of a vehicle (e.g., a status of normal operation or abnormal operation). Data is received regarding physical objects detected by prior vehicles. A map is stored that includes locations for each of these detected objects. For example, the map can include data collected by the prior vehicles. For example, the locations of the physical objects can be based on data received from the prior vehicles. In other examples, the locations of the physical objects can be based, at least in part, on other data. Subsequent to receiving the data regarding objects detected by the prior vehicles, new data is received regarding a new object detected by the current vehicle. The new data can include location data for the new object. The new data also may include an object type for the new object. In one embodiment, the map is stored in a cloud storage or other service (sometimes referred to herein simply as the “cloud”). A server having access to the map determines, based on comparing the received new data to the map data, whether the current vehicle is operating properly. For example, the server can determine based on this comparison that the newly-received data fails to match data for at least one object stored in the map. In response to this determination, the server can perform one or more actions. For example, the server can send a communication to the current vehicle. In one case, the communication can cause the current vehicle to take corrective actions, such as terminating an autonomous navigation mode. In various embodiments, a cloud service is used to determine a health status of an autonomous vehicle based on crowdsourced objects that are stored in a map at the cloud service. More specifically, objects detected by prior vehicles (e.g., passive objects, such as traffic signs, traffic lights, etc.) are transmitted to the cloud service. The cloud service creates a dynamic map containing the type of object detected and its location (e.g., the map stores data that a stop sign is located at a position x, y). The cloud service stores the map (e.g. in a database or other data repository). Vehicles in a normal or proper operating status that pass a passive object are expected to reliably detect the object and send its position (and optionally its type) to the cloud service. If the current vehicle fails to detect an existing object or has a false detection, this indicates an abnormal operating state of the current vehicle. In one embodiment, the cloud service determines that there is a system health issue with the current vehicle. 
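A small sketch of the location comparison just described, assuming object positions are expressed as planar coordinates in meters and using a 25 m tolerance taken from inside the 5-50 meter range given above as an example; neither the coordinate representation nor the specific tolerance is mandated by the text.

```python
import math

def location_matches(detected_xy, stored_xy, tolerance_m=25.0):
    """Return True if a newly reported object position falls within the
    predetermined distance tolerance of the position stored in the map."""
    return math.hypot(detected_xy[0] - stored_xy[0],
                      detected_xy[1] - stored_xy[1]) <= tolerance_m

if __name__ == "__main__":
    stored_stop_sign = (120.0, 45.0)                           # location recorded by prior vehicles
    print(location_matches((128.0, 49.0), stored_stop_sign))   # True: about 8.9 m apart
    print(location_matches((220.0, 45.0), stored_stop_sign))   # False: 100 m apart -> improper detection
```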
The cloud service makes this determination by comparing the position of the current vehicle and any position data regarding the existing object that may be received from the current vehicle with data stored in the crowdsourced object map (e.g., this map was generated based on data previously received from prior vehicles that encountered the same existing object). If there is a mismatch (e.g., new and stored object location, type, and/or other data fail to match within a predetermined tolerance), then the cloud service determines that there is a system health problem with the current vehicle. If the cloud service determines that a system health problem exists, then the cloud service may determine and control one or more actions performed in response. In one embodiment, the actions performed can include signaling the current vehicle that it has system reliability problem. In one example, a communication to the current vehicle provides data regarding how the current vehicle should respond to the determination of the system health problem. For example, in response to receiving the communication, the current vehicle can switch off its autonomous driving mode, use a backup system, and/or activate a braking system to stop the vehicle. In another example, the cloud service can send a communication to a server or other computing device that monitors an operating status for other vehicles (e.g., a central monitoring service). For example, the cloud service can send a communication to a server operated by governmental authorities. The communication can, for example, identify that the current vehicle has one or more system health issues. In some cases, in response to a determination that the current vehicle has been in an accident, the communication can be sent to the server or other computing device. In such a case, one or more indications provided to the server or other computing device can include data obtained from the current vehicle (e.g., data stored by the vehicle regarding operating functions and/or state of the vehicle prior to the accident, such as within a predetermined time period prior to the accident). In one embodiment, the determination whether the current vehicle has been in an accident can be based on data from one or more sensors of the vehicle. For example, data from an accelerometer of the vehicle can indicate a rapid deceleration of the vehicle (e.g., deceleration exceeding a threshold). In another case, data can indicate that an emergency system of the vehicle has been activated, such as for example, an airbag, an emergency braking system, etc. In one embodiment, a route (e.g., data for the current location of the vehicle) taken by a current vehicle being monitored is sent periodically to a cloud service. One or more sensors on the current vehicle are used to obtain data regarding objects in the environment of the current vehicle as it travels along the route. Data from the sensors and/or data generated based on analysis of sensor data and/or other data can be, for example, transmitted to the cloud service wirelessly (e.g., using a 3G, 4G, or 5G network or other radio-based communication system). In one embodiment, in response to determining an operating status or state of a vehicle, one or more actions of the vehicle are configured. For example, an over-the-air firmware update can be sent to the vehicle for updating firmware of a computing device of the vehicle. In one example, the firmware updates a navigation system of the vehicle. 
The updated configuration is based at least in part on analysis of data that is collected from the vehicle. In various other embodiments, the configuration of one or more actions performed by the vehicle may include, for example, actions related to operation of the vehicle itself and/or operation of other system components mounted in the vehicle and/or otherwise attached to the vehicle. For example, the actions may include actions implemented via controls of an infotainment system, a window status, a seat position, and/or driving style of the vehicle. In some embodiments, the analysis of data collected by the current or prior vehicles includes providing the data as an input to a machine learning model. The current vehicle is controlled by performing one or more actions that are based on an output from the machine learning model. In one example, a machine learning model is trained and/or otherwise used to configure a vehicle (e.g., tailor actions of the vehicle). For example, the machine learning model may be based on pattern matching in which prior patterns of sensor inputs or other data is correlated with desired characteristics or configuration(s) for operation of the vehicle. In one embodiment, data received from the current vehicle may include sensor data collected by the vehicle during its real world services (e.g., when the user is a driver or a passenger). In one embodiment, the data is transmitted from the vehicles to a centralized server (e.g., of a cloud service), which performs machine learning/training, using a supervised method and the received sensor data and/or other data, to generate an updated ANN model that can be subsequently loaded into the vehicle to replace its previously-installed ANN model. The model is used to configure the operation of the vehicle. In some embodiments, the driver can take over certain operations from the vehicle in response to the vehicle receiving a communication that an operating status is abnormal. One or more cameras of the vehicle, for example, can be used to collect image data that assists in implementing this action. In one example, the vehicle is configured in real-time to respond to the received object data. FIG.1illustrates a system to determine an operating status of a vehicle using a crowdsourced object map, according to one embodiment. The system uses an Artificial Neural Network (ANN) model in some embodiments. The system ofFIG.1includes a centralized server101in communication with a set of vehicles111, . . . ,113via a communications network102. For example, vehicle113can be one of a plurality of prior vehicles that has detected objects during travel. These objects can include, for example, object155and object157. Sensors of vehicle113and the other prior vehicles collect and/or generate data regarding the objects that have been detected. Data regarding the detected objects is sent, via communications network102, to a computing device such as server101(e.g., which may be part of a cloud service). Server101receives the object data from vehicle113and the other prior vehicles. Server101stores a map including map data160, which may include a number of records for each object. In one example, map data160includes an object type162and object location164for each object. Subsequent to receiving the data regarding detected objects from the prior vehicles, a current vehicle111transmits data regarding new objects that are being detected during travel. For example, object155can be a new object from the perspective of vehicle111. 
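The map data records just described (an object type and an object location per detected object) can be sketched as below. The merge radius and the averaging of repeated reports are assumptions added only to show one way reports from multiple prior vehicles might be folded into a single record; the text does not specify how duplicates are handled.

```python
import math
from dataclasses import dataclass
from typing import List

@dataclass
class ObjectRecord:
    object_type: str      # e.g. "stop_sign" or "traffic_light"
    x: float              # location on a local plane, in meters (assumed representation)
    y: float

def add_report(map_data: List[ObjectRecord], report: ObjectRecord,
               merge_radius_m: float = 10.0) -> None:
    """Fold one prior-vehicle report into the crowdsourced map: reports of the
    same type within the merge radius are treated as the same physical object,
    and the stored location is averaged with the new report."""
    for record in map_data:
        if (record.object_type == report.object_type and
                math.hypot(record.x - report.x, record.y - report.y) <= merge_radius_m):
            record.x = (record.x + report.x) / 2.0
            record.y = (record.y + report.y) / 2.0
            return
    map_data.append(report)

if __name__ == "__main__":
    map_data: List[ObjectRecord] = []
    add_report(map_data, ObjectRecord("stop_sign", 120.0, 45.0))   # report from one prior vehicle
    add_report(map_data, ObjectRecord("stop_sign", 121.0, 44.0))   # report from another prior vehicle
    print(map_data)   # a single merged stop sign record near (120.5, 44.5)
```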
Server101receives data regarding object155from vehicle111. Server101determines, based on comparing the data regarding object155that is received from vehicle111to data regarding object155that is stored in map data160, whether vehicle111has failed to properly detect at least one object. In some cases, server101may determine that vehicle111has failed to properly detect object155. For example, even though vehicle111may recognize object155(at least to some extent), the object location data received from vehicle111may fail to correspond within a predetermined tolerance or threshold to the object location164that was previously stored in map data160. Other types of discrepancies in received data and stored data may alternatively and/or additionally be identified. In other cases, vehicle111sends its current location to server101. The location of vehicle111is compared to object location164for object155. Server101determines that vehicle111has failed to detect the presence of object155. In one example, this determination may be made based on a failure of vehicle111to report any object data for object155. In response to determining that vehicle111has failed to detect object155, or has failed to properly detect at least a portion of data associated with object155, server101performs one or more actions. For example, server101can transmit a communication to vehicle111that causes a termination of an autonomous driving mode. In one embodiment, sensor data103can be collected in addition to map data160. Sensor data103can be, for example, provided by the current vehicle111and/or prior vehicles113(e.g., sensor data103may be for data other than object data, such as temperature, acceleration, audio, etc.). Sensor data103can be used in combination with map data160and/or other new data received from current vehicle111to perform an analysis of the operating status of vehicle111. In some cases, some or all of the foregoing data can be used to train artificial neural network model119. Additionally, in some cases, an output from artificial neural network model119can be used as part of making a determination that vehicle111has failed to properly detect object155and/or another object. In some embodiments, at least a portion of map data160can be transmitted to vehicle111and a determination regarding operating status of vehicle111can be locally determined by a computing device mounted on or within vehicle111. In some embodiments, artificial neural network model119itself and/or associated data can be transmitted to and implemented on vehicle111and/or other vehicles. An output from artificial neural network model119can be used to determine actions performed in response to determining that the vehicle has failed to properly detect an object. In one embodiment, data from vehicle111is collected by sensors located in vehicle111. The collected data is analyzed, for example, using a computer model such as an artificial neural network (ANN) model. In one embodiment, the collected data is provided as an input to the ANN model. For example, the ANN model can be executed on server101and/or vehicle111. The vehicle111is controlled based on at least one output from the ANN model. For example, this control includes performing one or more actions based on the output. These actions can include, for example, control of steering, braking, acceleration, and/or control of other systems of vehicle111such as an infotainment system and/or communication device. 
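Building on the previous sketch (it reuses ObjectRecord), the server-side comparison can be outlined as follows. The sensing range, the tolerance, and the rule that every in-range map object must be matched by type and location are assumptions used to make the "failed to detect" decision concrete; the returned status could then drive an action such as the communication that terminates the autonomous driving mode.

```python
import math
from typing import List, Tuple

def check_operating_status(map_data: List["ObjectRecord"],
                           reported: List["ObjectRecord"],
                           vehicle_xy: Tuple[float, float],
                           tolerance_m: float = 25.0,
                           sensing_range_m: float = 60.0) -> str:
    """Every stored object within the assumed sensing range of the current
    vehicle must appear in the vehicle's report with the same type and a
    location inside the tolerance; otherwise the operating status is abnormal."""
    for obj in map_data:
        if math.hypot(obj.x - vehicle_xy[0], obj.y - vehicle_xy[1]) > sensing_range_m:
            continue                                  # too far away to expect a detection
        matched = any(r.object_type == obj.object_type and
                      math.hypot(r.x - obj.x, r.y - obj.y) <= tolerance_m
                      for r in reported)
        if not matched:
            return "abnormal"                         # missed detection or location mismatch
    return "normal"

if __name__ == "__main__":
    stored = [ObjectRecord("stop_sign", 120.0, 45.0)]
    vehicle_at = (100.0, 40.0)
    print(check_operating_status(stored, [ObjectRecord("stop_sign", 122.0, 46.0)], vehicle_at))  # normal
    print(check_operating_status(stored, [], vehicle_at))                                        # abnormal
```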
In one embodiment, the server101includes a supervised training module117to train, generate, and update ANN model119that includes neuron biases121, synaptic weights123, and activation functions125of neurons in a network used for processing collected data regarding a vehicle and/or sensor data generated in the vehicles111, . . . ,113. In one embodiment, once the ANN model119is trained and implemented (e.g., for autonomous driving and/or an advanced driver assistance system), the ANN model119can be deployed on one or more of vehicles111, . . . ,113for usage. In various embodiments, the ANN model is trained using data as discussed above. The training can be performed on a server and/or the vehicle. Configuration for an ANN model as used in a vehicle can be updated based on the training. The training can be performed in some cases while the vehicle is being operated. Typically, the vehicles111, . . . ,113have sensors, such as a visible light camera, an infrared camera, a LIDAR, a RADAR, a sonar, and/or a set of peripheral sensors. The sensors of the vehicles111, . . . ,113generate sensor inputs for the ANN model119in autonomous driving and/or advanced driver assistance system to generate operating instructions, such as steering, braking, accelerating, driving, alerts, emergency response, etc. During the operations of the vehicles111, . . . ,113in their respective service environments, the vehicles111, . . . ,113encounter items, such as events or objects, that are captured in the sensor data. The ANN model119is used by the vehicles111, . . . ,113to provide the identifications of the items to facilitate the generation of commands for the operations of the vehicles111, . . . ,113, such as for autonomous driving and/or for advanced driver assistance. For example, a vehicle111may communicate, via a wireless connection115to an access point (or base station)105, with the server101to submit the sensor input to enrich the sensor data103as an additional dataset for machine learning implemented using the supervised training module117. The wireless connection115may be made via a wireless local area network, a cellular communications network, and/or a communication link107to a satellite109or a communication balloon. In one example, user data collected from a vehicle can be similarly transmitted to the server. Optionally, the sensor input stored in the vehicle111may be transferred to another computer for uploading to the centralized server101. For example, the sensor input can be transferred to another computer via a memory device, such as a Universal Serial Bus (USB) drive, and/or via a wired computer connection, a BLUETOOTH connection or WiFi connection, a diagnosis tool, etc. Periodically, the server101runs the supervised training module117to update the ANN model119based on updated data that has been received. The server101may use the sensor data103enhanced with the other data based on prior operation by similar vehicles (e.g., vehicle113) that are operated in the same geographical region or in geographical regions having similar traffic conditions (e.g., to generate a customized version of the ANN model119for the vehicle111). Optionally, the server101uses the sensor data103along with object data received from a general population of vehicles (e.g.,111,113) to generate an updated version of the ANN model119. 
The updated ANN model119can be downloaded to the current vehicle (e.g., vehicle111) via the communications network102, the access point (or base station)105, and communication links115and/or107as an over-the-air update of the firmware/software of the vehicle. Optionally, the vehicle111has a self-learning capability. After an extended period on the road, the vehicle111may generate a new set of synaptic weights123, neuron biases121, activation functions125, and/or neuron connectivity for the ANN model119installed in the vehicle111using the sensor inputs it collected and stored in the vehicle111. As an example, the centralized server101may be operated by a factory, a producer or maker of the vehicles111, . . . ,113, or a vendor of the autonomous driving and/or advanced driver assistance system for vehicles111, . . . ,113. FIG.2shows an example of a vehicle configured using an Artificial Neural Network (ANN) model, according to one embodiment. The vehicle111ofFIG.2includes an infotainment system149, a communication device139, one or more sensors137, and a computer131that is connected to some controls of the vehicle111, such as a steering control141for the direction of the vehicle111, a braking control143for stopping of the vehicle111, an acceleration control145for the speed of the vehicle111, etc. The computer131of the vehicle111includes one or more processors133, memory135storing firmware (or software)127, the ANN model119(e.g., as illustrated inFIG.1), and other data129. In one example, firmware127is updated by an over-the-air update in response to a determination by server101that vehicle111is failing to properly detect objects during travel. Alternatively, and/or additionally, other firmware of various computing devices or systems of vehicle111can be updated. The one or more sensors137may include a visible light camera, an infrared camera, a LIDAR, RADAR, or sonar system, and/or peripheral sensors, which are configured to provide sensor input to the computer131. A module of the firmware (or software)127executed in the processor(s)133applies the sensor input to an ANN defined by the model119to generate an output that identifies or classifies an event or object captured in the sensor input, such as an image or video clip. Data from this identification and/or classification can be included in object data sent from current vehicle111to server101to determine if an object is being properly detected. Alternatively, and/or additionally, the identification or classification of the event or object generated by the ANN model119can be used by an autonomous driving module of the firmware (or software)127, or an advanced driver assistance system, to generate a response. The response may be a command to activate and/or adjust one of the vehicle controls141,143, and145. In one embodiment, the response is an action performed by the vehicle where the action has been configured based on an update command from server101(e.g., the update command can be generated by server101in response to determining that vehicle111is failing to properly detect objects). In one embodiment, prior to generating the control response, the vehicle is configured. In one embodiment, the configuration of the vehicle is performed by updating firmware of vehicle111. In one embodiment, the configuration of the vehicle includes updating of the computer model stored in vehicle111(e.g., ANN model119). 
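The ANN model is characterized above only by its neuron biases, synaptic weights, and activation functions applied to sensor inputs; the toy layer below shows how those three ingredients combine in a single forward pass. The numbers are placeholders, not values from any trained model, and the choice of ReLU as the activation function is an assumption.

```python
def relu(x):
    """A common activation function; the text does not prescribe a particular one."""
    return max(0.0, x)

def dense_layer(inputs, synaptic_weights, neuron_biases, activation):
    """One fully connected layer: each neuron applies its activation function to
    a weighted sum of the inputs plus its bias."""
    return [activation(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(synaptic_weights, neuron_biases)]

if __name__ == "__main__":
    sensor_features = [0.8, 0.1]                       # toy sensor-derived inputs
    weights = [[0.5, -0.2], [0.3, 0.9], [-0.4, 0.7]]   # one row of weights per neuron
    biases = [0.1, 0.0, 0.05]
    print(dense_layer(sensor_features, weights, biases, relu))   # approximately [0.48, 0.33, 0.0]
```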
The server101stores the received sensor input as part of the sensor data103for the subsequent further training or updating of the ANN model119using the supervised training module117. When an updated version of the ANN model119is available in the server101, the vehicle111may use the communication device139to download the updated ANN model119for installation in the memory135and/or for the replacement of the previously installed ANN model119. These actions may be performed in response to determining that vehicle111is failing to properly detect objects. In one example, the outputs of the ANN model119can be used to control (e.g.,141,143,145) the acceleration of a vehicle (e.g.,111), the speed of the vehicle111, and/or the direction of the vehicle111, during autonomous driving or provision of advanced driver assistance. Typically, when the ANN model is generated, at least a portion of the synaptic weights123of some of the neurons in the network is updated. The update may also adjust some neuron biases121and/or change the activation functions125of some neurons. In some instances, additional neurons may be added in the network. In other instances, some neurons may be removed from the network. In one example, data obtained from a sensor of vehicle111may be an image that captures an object using a camera that images using lights visible to human eyes, or a camera that images using infrared lights, or a sonar, radar, or LIDAR system. In one embodiment, image data obtained from at least one sensor of vehicle111is part of the collected data from the current vehicle that was analyzed. In some instances, the ANN model is configured for a particular vehicle111based on the sensor and other collected data. FIG.3shows a method to determine an operating status of a vehicle (e.g., vehicle111) based on object data (e.g., object location and type) received from prior vehicles and stored in a map (e.g., map data160), according to one embodiment. In block601, data is received regarding objects detected by prior vehicles. The detected objects include a first object (e.g., a stop sign). In block603, a map is stored that includes the detected objects. For example, each object has an object type and a location (e.g., a geographic position). In block605, subsequent to receiving the data regarding objects detected by the prior vehicles, new data is received regarding one or more objects detected by a new vehicle (e.g., vehicle111). In block607, based on comparing the new object data from the new vehicle to data stored in the map, a determination is made that the new vehicle has failed to detect the first object. In block609, in response to determining that the new vehicle has failed to detect the first object, an action is performed. For example, the action can include sending at least one communication to a computing device other than the new vehicle. In one example, the computing device is a server that monitors an operating status for each of two or more vehicles. 
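One reading of the "perform an action" step in block 609 is sketched here: notify the vehicle so it can take a corrective action, and notify a separate server that monitors vehicle operating status. The Endpoint class and the message fields are placeholders rather than any real vehicle or server API.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class Endpoint:
    """Stand-in for a network peer (the current vehicle or a monitoring server);
    send() just records messages so the sketch stays runnable."""
    name: str
    inbox: List[Dict[str, Any]] = field(default_factory=list)

    def send(self, message: Dict[str, Any]) -> None:
        self.inbox.append(message)

def respond_to_abnormal_status(vehicle: Endpoint, monitor: Endpoint,
                               accident_suspected: bool = False) -> None:
    """Block 609, sketched under assumptions: tell the vehicle to take a
    corrective action and notify the server that monitors operating status."""
    vehicle.send({"command": "deactivate_autonomous_mode"})   # or switch to a backup system
    notice = {"vehicle": vehicle.name, "status": "abnormal"}
    if accident_suspected:
        notice["note"] = "include operating data from before the accident"
    monitor.send(notice)

if __name__ == "__main__":
    v, m = Endpoint("vehicle 111"), Endpoint("monitoring server")
    respond_to_abnormal_status(v, m)
    print(v.inbox, m.inbox)
```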
In one embodiment, a method includes: receiving, by at least one processor, data regarding objects detected by a plurality of vehicles, the detected objects including a first object; storing, by the at least one processor, a map comprising the detected objects, each object having an object type and a location; subsequent to receiving the data regarding objects detected by the plurality of vehicles, receiving first data regarding objects detected by a first vehicle; determining, based on comparing the received first data to the map, that the first vehicle has failed to detect the first object; and in response to determining that the first vehicle has failed to detect the first object, performing an action. In one embodiment, the first object is a traffic sign, a traffic light, a road lane, or a physical structure. In one embodiment, the method further comprises determining a location of the first vehicle, and determining that the first vehicle has failed to detect the first object includes comparing the location of the first vehicle to the location of the first object stored in the map. In one embodiment, the first vehicle is a vehicle other than the plurality of vehicles. In another embodiment, the first vehicle is included in the plurality of vehicles. In one embodiment, the method further comprises analyzing the first data, wherein performing the action comprises configuring, based on analyzing the first data, at least one action performed by the first vehicle. In one embodiment, the first vehicle is an autonomous vehicle comprising a controller and a storage device, the action comprises updating firmware of the controller, and the updated firmware is stored in the storage device. In one embodiment, the method further comprises training a computer model using at least one of supervised or unsupervised learning, wherein the training is done using data collected from the plurality of vehicles, and wherein determining that the first vehicle has failed to detect the first object is based at least in part on an output from the computer model. In one embodiment, the first data comprises image data obtained from at least one sensor of the first vehicle. In one embodiment, the method further comprises analyzing the first data, wherein the first data comprises image data, and analyzing the first data comprises performing pattern recognition using the image data to determine a type of object detected by the first vehicle. In one embodiment, the method further comprises providing the first data as an input to an artificial neural network model, and the action performed is based on an output from the artificial neural network model. FIG.4shows a method to perform an action for a vehicle (e.g., vehicle111) based on a comparison of object data received from the vehicle with object data stored in a map, according to one embodiment. In block611, data is received regarding objects detected by prior vehicles. For example, the data is received by server101. In block613, a map is stored that includes locations for each of the objects detected by the prior vehicles. For example, the stored map includes map data160and is stored in the cloud. In block615, after receiving the data regarding objects detected by the prior vehicles, data is received regarding at least one new object detected by a new vehicle. The received data includes location data for the at least one new object. In block617, a computing device compares the received new object data to the prior object data stored in the map. 
Based on this comparison, the computing device determines that the new object data fails to match the stored map data for at least one object. In block619, in response to determining that the new object data fails to match data stored in the map, one or more actions are performed. In one embodiment, a non-transitory computer storage medium stores instructions which, when executed on a computing device, cause the computing device to perform a method comprising: receiving data regarding objects detected by a plurality of vehicles; storing a map including respective locations for each of the detected objects; subsequent to receiving the data regarding objects detected by the plurality of vehicles, receiving first data regarding at least one object detected by a first vehicle, the first data comprising location data for the at least one object; determining, based on comparing the received first data to the map, that the first data fails to match data for at least one object stored in the map; and in response to determining that the first data fails to match data for at least one object stored in the map, performing an action. In one embodiment, the first data comprises data obtained from an artificial neural network model of the first vehicle. In one embodiment, a system includes: at least one processor; and memory storing instructions configured to instruct the at least one processor to: receive data regarding objects, each object detected by at least one of a plurality of vehicles, and the detected objects including a first object; store, based on the received data, a map including the detected objects, each of the detected objects associated with a respective location; receive first data regarding at least one object detected by a first vehicle; determine, based on comparing the received first data to the map, that the first vehicle has failed to detect the first object; and in response to determining that the first vehicle has failed to detect the first object, performing at least one action. In one embodiment, performing the at least one action comprises sending a communication to the first vehicle, the communication causing the first vehicle to perform at least one of deactivating an autonomous driving mode of the first vehicle or activating a backup navigation device of the first vehicle. In one embodiment, performing the at least one action comprises sending at least one communication to a computing device other than the first vehicle. In one embodiment, the computing device is a server that monitors a respective operating status for each of a plurality of vehicles. In one embodiment, the instructions are further configured to instruct the at least one processor to determine that an accident involving the first vehicle has occurred, and wherein the at least one communication to the computing device comprises data associated with operation of the first vehicle prior to the accident. In one embodiment, the instructions are further configured to instruct the at least one processor to compare a location of an object detected by the first vehicle to a location of the first object, wherein determining that the first vehicle has failed to detect the first object is based at least in part on comparing the location of the object detected by the first vehicle to the location of the first object. In one embodiment, the received data regarding objects detected by the plurality of vehicles includes data collected by a plurality of sensors for each of the vehicles. 
In one embodiment, performing the at least one action is based on an output from a machine learning model, and the machine learning model is trained using training data, the training data comprising data collected by sensors of the plurality of vehicles. FIG.5shows an autonomous vehicle303configured in response to determining an operating status of the vehicle, according to one embodiment. In one embodiment, a system controls a display device308(or other device, system, or component) of an autonomous vehicle303. For example, a controller307controls the display of images on one or more display devices308. Server301may store, for example, map data160. Server301may determine, using map data160, that vehicle303is failing to properly detect objects. In response to this determination, server301may cause the controller307to terminate an autonomous navigation mode. Other actions can be performed in response to this determination including, for example, configuring a vehicle303by updating firmware304, updating computer model312, updating data in database310, and/or updating training data314. The controller307may receive data collected by one or more sensors306. The sensors306may be, for example, mounted in the autonomous vehicle303. The sensors306may include, for example, a camera, a microphone, and/or a motion detector. At least a portion of the sensors may provide data associated with objects newly detected by vehicle303during travel. The sensors306may provide various types of data for collection by the controller307. For example, the collected data may include image data from the camera and/or audio data from the microphone. In one embodiment, the controller307analyzes the collected data from the sensors306. The analysis of the collected data includes providing some or all of the collected data as one or more inputs to a computer model312. The computer model312can be, for example, an artificial neural network trained by deep learning. In one example, the computer model is a machine learning model that is trained using training data314. The computer model312and/or the training data314can be stored, for example, in memory309. An output from the computer model312can be transmitted to server301as part of object data for comparison to map data160. In one embodiment, memory309stores a database310, which may include data collected by sensors306and/or data received by a communication interface305from a computing device, such as, for example, a server301(server301can be, for example, server101ofFIG.1in some embodiments). In one example, this communication may be used to wirelessly transmit collected data from the sensors306to the server301. The received data may include configuration, training, and other data used to configure control of the display devices308by controller307. For example, the received data may include data collected from sensors of autonomous vehicles other than autonomous vehicle303. This data may be included, for example, in training data314for training of the computer model312. The received data may also be used to update a configuration of a machine learning model stored in memory309as computer model312. InFIG.5, firmware304controls, for example, the operations of the controller307in controlling the display devices308and other components of vehicle303. The controller307also can, for example, run the firmware304to perform operations responsive to communications from the server301. 
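As a rough sketch of the on-vehicle side of this analysis, the following fragment collects a frame of sensor data, runs it through a stand-in for computer model312, and builds the object-data payload that could be transmitted to the server for comparison with map data160. The stub classes and the payload layout are assumptions made only for illustration.

```python
import random

class CameraStub:
    """Stand-in for a camera among sensors 306; returns a fake image frame."""
    def read(self):
        return [[random.random() for _ in range(8)] for _ in range(8)]

class ComputerModelStub:
    """Stand-in for computer model 312 (e.g., an ANN trained by deep learning)."""
    def predict(self, frame):
        # A real model would classify objects appearing in the frame; a fixed
        # detection is returned here to show the shape of the reported data.
        return [("traffic_sign", (12.0, 3.5))]

def build_object_report(camera, model, vehicle_id="vehicle_303"):
    """Collect sensor data, analyze it with the on-vehicle model, and package
    the detections for wireless transmission to the server."""
    frame = camera.read()
    detections = model.predict(frame)
    return {"vehicle_id": vehicle_id, "objects": detections}

print(build_object_report(CameraStub(), ComputerModelStub()))
```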
The autonomous vehicle303includes volatile Dynamic Random-Access Memory (DRAM)311for the storage of run-time data and instructions used by the controller307. In one embodiment, memory309is implemented using various memory/storage technologies, such as NAND gate based flash memory, phase-change memory (PCM), magnetic memory (MRAM), resistive random-access memory, and 3D XPoint, such that the memory309is non-volatile and can retain data stored therein without power for days, months, and/or years. In one embodiment, server301communicates with the communication interface305via a communication channel. In one embodiment, the server301can be a computer having one or more Central Processing Units (CPUs) to which vehicles, such as the autonomous vehicle303, may be connected using a computer network. For example, in some implementations, the communication channel between the server301and the communication interface305includes a computer network, such as a local area network, a wireless local area network, a cellular communications network, or a broadband high-speed always-connected wireless communication connection (e.g., a current or future generation of mobile network link). In one embodiment, the controller307performs data-intensive, in-memory processing using data and/or instructions organized in memory309or otherwise organized in the autonomous vehicle303. For example, the controller307can perform a real-time analysis of a set of data collected and/or stored in the autonomous vehicle303. In some embodiments, the set of data further includes collected or configuration update data obtained from server301. At least some embodiments of the systems and methods disclosed herein can be implemented using computer instructions executed by the controller307, such as the firmware304. In some instances, hardware circuits can be used to implement at least some of the functions of the firmware304. The firmware304can be initially stored in non-volatile storage media, such as by using memory309, or another non-volatile device, and loaded into the volatile DRAM311and/or the in-processor cache memory for execution by the controller307. In one example, the firmware304can be configured to use the techniques discussed herein for controlling display or other devices of a vehicle as configured based on collected user data. FIG.6shows a vehicle703configured via a communication interface using a cloud service, according to one embodiment. For example, vehicle703is configured in response to a determination by server701that vehicle703is failing to properly detect objects during navigation. The vehicle703includes a communication interface705used to receive a configuration update, which is based on analysis of collected object data. For example, the update can be received from server701and/or client device719. Communication amongst two or more of the vehicle703, a server701, and a client device719can be performed over a network715(e.g., a wireless network). This communication is performed using communication interface705. In one embodiment, the server701controls the loading of configuration data (e.g., based on analysis of collected data) of the new configuration into the memory709of the vehicle. Server701includes memory717. In one embodiment, data associated with usage of vehicle703is stored in a memory721of client device719. A controller707controls one or more operations of the vehicle703. For example, controller707controls user data714stored in memory709. 
Controller707also controls loading of updated configuration data into memory709and/or other memory of the vehicle703. Controller707also controls display of information on display device(s)708. Sensor(s)706provide data regarding operation of the vehicle703. At least a portion of this operational data can be communicated to the server701and/or the client device719. Memory709can further include, for example, configuration data712and/or database710. Configuration data712can be, for example, data associated with operation of the vehicle703as provided by the server701. The configuration data712can be, for example, based on collected and/or analyzed object data. Database710can store, for example, configuration data for a user and/or data collected by sensors706. Database710also can store, for example, navigational maps and/or other data provided by the server701. In one embodiment, when a vehicle is being operated, data regarding object detection activity of vehicle703can be communicated to server701. This activity may include navigational and/or other operational aspects of the vehicle703. As illustrated inFIG.6, controller707also may control the display of images on one or more display devices708(e.g., an alert to the user can be displayed in response to a determination by server701and/or controller707that vehicle703is failing to properly detect objects). Display device708can be a liquid crystal display. The controller707may receive data collected by one or more sensors706. The sensors706may be, for example, mounted in the vehicle703. The sensors706may include, for example, a camera, a microphone, and/or a motion detector. The sensors706may provide various types of data for collection and/or analysis by the controller707. For example, the collected data may include image data from the camera and/or audio data from the microphone. In one embodiment, the image data includes images of one or more new objects encountered by vehicle703during travel. In one embodiment, the controller707analyzes the collected data from the sensors706. The analysis of the collected data includes providing some or all of the object data to server701. In one embodiment, memory709stores database710, which may include data collected by sensors706and/or configuration data received by communication interface705from a computing device, such as, for example, server701. For example, this communication may be used to wirelessly transmit collected data from the sensors706to the server701. The data received by the vehicle may include configuration or other data used to configure control of navigation, display, or other devices by controller707. InFIG.6, firmware704controls, for example, the operations of the controller707. The controller707also can, for example, run the firmware704to perform operations responsive to communications from the server701. The vehicle703includes volatile Dynamic Random-Access Memory (DRAM)711for the storage of run-time data and instructions used by the controller707to improve the computation performance of the controller707and/or provide buffers for data transferred between the server701and memory709. DRAM711is volatile. 
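A configuration update received over communication interface705might be applied along the lines of the sketch below. The dictionary standing in for memory709(configuration data712, database710, and the stored computer model) and the update format are assumptions of the sketch, not a format used by the embodiments.

```python
import json

def apply_configuration_update(memory, update):
    """Merge a server-provided update into the vehicle's stored configuration."""
    if "configuration_data" in update:            # stands in for configuration data 712
        memory["configuration_data"].update(update["configuration_data"])
    if "navigation_maps" in update:               # stands in for maps kept in database 710
        memory["database"]["maps"] = update["navigation_maps"]
    if "model_update" in update:                  # stands in for an updated computer model
        memory["computer_model"] = update["model_update"]
    return memory

vehicle_memory = {
    "configuration_data": {"display_alerts": False},
    "database": {"maps": None},
    "computer_model": None,
}
server_payload = json.loads('{"configuration_data": {"display_alerts": true}}')
print(apply_configuration_update(vehicle_memory, server_payload))
```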
FIG.7is a block diagram of an autonomous vehicle including one or more various components and/or subsystems, each of which can be updated in various embodiments to configure the vehicle and/or perform other actions associated with the vehicle (e.g., configuration and/or other actions performed in response to a determination by server101that the vehicle is failing to properly detect objects). The system illustrated inFIG.7may be installed entirely within a vehicle. The system includes an autonomous vehicle subsystem402. In the illustrated embodiment, autonomous vehicle subsystem402includes map database402A, radar devices402B, Lidar devices402C, digital cameras402D, sonar devices402E, GPS receivers402F, and inertial measurement units402G. Each of the components of autonomous vehicle subsystem402comprises standard components provided in most current autonomous vehicles. In one embodiment, map database402A stores a plurality of high-definition three-dimensional maps used for routing and navigation. Radar devices402B, Lidar devices402C, digital cameras402D, sonar devices402E, GPS receivers402F, and inertial measurement units402G may comprise various respective devices installed at various positions throughout the autonomous vehicle as known in the art. For example, these devices may be installed along the perimeter of an autonomous vehicle to provide location awareness, collision avoidance, and other standard autonomous vehicle functionality. Vehicular subsystem406is additionally included within the system. Vehicular subsystem406includes various anti-lock braking systems406A, engine control units406B, and transmission control units406C. These components may be utilized to control the operation of the autonomous vehicle in response to the streaming data generated by autonomous vehicle subsystem402. The standard autonomous vehicle interactions between autonomous vehicle subsystem402and vehicular subsystem406are generally known in the art and are not described in detail herein. The processing side of the system includes one or more processors410, short-term memory412, an RF system414, graphics processing units (GPUs)416, long-term storage418and one or more interfaces420. The one or more processors410may comprise central processing units, FPGAs, or any range of processing devices needed to support the operations of the autonomous vehicle. Memory412comprises DRAM or other suitable volatile RAM for temporary storage of data required by processors410. RF system414may comprise a cellular transceiver and/or satellite transceiver. Long-term storage418may comprise one or more high-capacity solid-state drives (SSDs). In general, long-term storage418may be utilized to store, for example, high-definition maps, routing data, and any other data requiring permanent or semi-permanent storage. GPUs416may comprise one or more high-throughput GPU devices for processing data received from autonomous vehicle subsystem402. Finally, interfaces420may comprise various display units positioned within the autonomous vehicle (e.g., an in-dash screen). The system additionally includes a reporting subsystem404which performs data collection (e.g., collection of data obtained from sensors of the vehicle that is used to drive the vehicle). The reporting subsystem404includes a sensor monitor404A which is connected to bus408and records sensor data transmitted on the bus408as well as any log data transmitted on the bus. 
The reporting subsystem404may additionally include one or more endpoints to allow system components to transmit log data directly to the reporting subsystem404. The reporting subsystem404additionally includes a packager404B. In one embodiment, packager404B retrieves the data from the sensor monitor404A or endpoints and packages the raw data for transmission to a central system (illustrated inFIG.8). In some embodiments, packager404B may be configured to package data at periodic time intervals. Alternatively, or in conjunction with the foregoing, packager404B may transmit data in real-time and may compress data to facilitate real-time communications with a central system. The reporting subsystem404additionally includes a batch processor404C. In one embodiment, the batch processor404C is configured to perform any preprocessing on recorded data prior to transmittal. For example, batch processor404C may perform compression operations on the data prior to packaging by packager404B. In another embodiment, batch processor404C may be configured to filter the recorded data to remove extraneous data prior to packaging or transmittal. In another embodiment, batch processor404C may be configured to perform data cleaning on the recorded data to conform the raw data to a format suitable for further processing by the central system. Each of the devices is connected via a bus408. In one embodiment, the bus408may comprise a controller area network (CAN) bus. In some embodiments, other bus types may be used (e.g., a FlexRay or MOST bus). Additionally, each subsystem may include one or more additional busses to handle internal subsystem communications (e.g., LIN busses for lower bandwidth communications). FIG.8is a block diagram of a centralized autonomous vehicle operations system, according to various embodiments. As illustrated, the system includes a number of autonomous vehicles502A-502E. In one embodiment, each autonomous vehicle may comprise an autonomous vehicle such as that depicted inFIG.7. Each autonomous vehicle502A-502E may communicate with a central system514via a network516. In one embodiment, network516comprises a global network such as the Internet. In one example, central system514is implemented using one or more of servers101,301, and/or701. In one example, one or more of autonomous vehicles502A-502E are autonomous vehicle703. The system additionally includes a plurality of client devices508A,508B. In the illustrated embodiment, client devices508A,508B may comprise any personal computing device (e.g., a laptop, tablet, mobile phone, etc.). Client devices508A,508B may issue requests for data from central system514. In one embodiment, client devices508A,508B transmit requests for data to support mobile applications or web page data, as described previously. In one embodiment, central system514includes a plurality of servers504A. In one embodiment, servers504A comprise a plurality of front-end webservers configured to serve responses to client devices508A,508B. The servers504A may additionally include one or more application servers configured to perform various operations to support one or more vehicles. In one embodiment, central system514additionally includes a plurality of models504B. In one embodiment, models504B may store one or more neural networks for classifying autonomous vehicle objects. The models504B may additionally include models for predicting future events. In some embodiments, the models504B may store a combination of neural networks and other machine learning models. 
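The filtering, compression, and packaging performed by batch processor404C and packager404B of the reporting subsystem described above can be sketched as follows; the record fields, the filtering rule, and the use of JSON with zlib compression are assumptions chosen only to make the example concrete.

```python
import json
import zlib

def batch_process(records, keep_keys=("timestamp", "source", "value")):
    """Illustrative batch processing: drop extraneous fields and normalize
    raw records into a consistent format before packaging."""
    return [{k: r[k] for k in keep_keys if k in r} for r in records]

def package(records):
    """Illustrative packaging: serialize and compress a batch of recorded
    sensor/log data for transmission to the central system."""
    return zlib.compress(json.dumps(records).encode("utf-8"))

raw_records = [
    {"timestamp": 1.00, "source": "lidar", "value": 0.83, "debug": "raw frame 0x1F"},
    {"timestamp": 1.05, "source": "camera", "value": 0.51},
]
packet = package(batch_process(raw_records))
print(len(packet), "bytes ready for transmission")
```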
Central system514additionally includes one or more databases504C. The databases504C may include database records for vehicles504D, personalities504E, and raw data504F. Raw data504F may comprise an unstructured database for storing raw data received from sensors and logs as discussed previously. The present disclosure includes methods and apparatuses which perform these methods, including data processing systems which perform these methods, and computer readable media containing instructions which when executed on data processing systems cause the systems to perform these methods. Each of the server101and the computer131of a vehicle111, . . . , or113can be implemented as one or more data processing systems. A typical data processing system may include an inter-connect (e.g., bus and system core logic), which interconnects a microprocessor(s) and memory. The microprocessor is typically coupled to cache memory. The inter-connect interconnects the microprocessor(s) and the memory together and also interconnects them to input/output (I/O) device(s) via I/O controller(s). I/O devices may include a display device and/or peripheral devices, such as mice, keyboards, modems, network interfaces, printers, scanners, video cameras and other devices known in the art. In one embodiment, when the data processing system is a server system, some of the I/O devices, such as printers, scanners, mice, and/or keyboards, are optional. The inter-connect can include one or more buses connected to one another through various bridges, controllers and/or adapters. In one embodiment, the I/O controllers include a USB (Universal Serial Bus) adapter for controlling USB peripherals, and/or an IEEE-1394 bus adapter for controlling IEEE-1394 peripherals. The memory may include one or more of: ROM (Read Only Memory), volatile RAM (Random Access Memory), and non-volatile memory, such as a hard drive, flash memory, etc. Volatile RAM is typically implemented as dynamic RAM (DRAM) which requires power continually in order to refresh or maintain the data in the memory. Non-volatile memory is typically a magnetic hard drive, a magnetic optical drive, an optical drive (e.g., a DVD RAM), or other type of memory system which maintains data even after power is removed from the system. The non-volatile memory may also be a random access memory. The non-volatile memory can be a local device coupled directly to the rest of the components in the data processing system. A non-volatile memory that is remote from the system, such as a network storage device coupled to the data processing system through a network interface such as a modem or Ethernet interface, can also be used. In the present disclosure, some functions and operations are described as being performed by or caused by software code to simplify description. However, such expressions are also used to specify that the functions result from execution of the code/instructions by a processor, such as a microprocessor. Alternatively, or in combination, the functions and operations as described here can be implemented using special purpose circuitry, with or without software instructions, such as an Application-Specific Integrated Circuit (ASIC) or a Field-Programmable Gate Array (FPGA). Embodiments can be implemented using hardwired circuitry without software instructions, or in combination with software instructions. 
Thus, the techniques are limited neither to any specific combination of hardware circuitry and software, nor to any particular source for the instructions executed by the data processing system. While one embodiment can be implemented in fully functioning computers and computer systems, various embodiments are capable of being distributed as a computing product in a variety of forms and are capable of being applied regardless of the particular type of machine or computer-readable media used to actually effect the distribution. At least some aspects disclosed can be embodied, at least in part, in software. That is, the techniques may be carried out in a computer system or other data processing system in response to its processor, such as a microprocessor, executing sequences of instructions contained in a memory, such as ROM, volatile RAM, non-volatile memory, cache or a remote storage device. Routines executed to implement the embodiments may be implemented as part of an operating system or a specific application, component, program, object, module or sequence of instructions referred to as “computer programs.” The computer programs typically include one or more instructions set at various times in various memory and storage devices in a computer that, when read and executed by one or more processors in the computer, cause the computer to perform operations necessary to execute elements involving the various aspects. A machine readable medium can be used to store software and data which when executed by a data processing system causes the system to perform various methods. The executable software and data may be stored in various places including, for example, ROM, volatile RAM, non-volatile memory and/or cache. Portions of this software and/or data may be stored in any one of these storage devices. Further, the data and instructions can be obtained from centralized servers or peer-to-peer networks. Different portions of the data and instructions can be obtained from different centralized servers and/or peer-to-peer networks at different times and in different communication sessions or in a same communication session. The data and instructions can be obtained in their entirety prior to the execution of the applications. Alternatively, portions of the data and instructions can be obtained dynamically, just in time, when needed for execution. Thus, it is not required that the data and instructions be on a machine readable medium in their entirety at a particular instance of time. Examples of computer-readable media include but are not limited to non-transitory, recordable and non-recordable type media such as volatile and non-volatile memory devices, read only memory (ROM), random access memory (RAM), flash memory devices, floppy and other removable disks, magnetic disk storage media, optical storage media (e.g., Compact Disk Read-Only Memory (CD ROM), Digital Versatile Disks (DVDs), etc.), among others. The computer-readable media may store the instructions. The instructions may also be embodied in digital and analog communication links for electrical, optical, acoustical or other forms of propagated signals, such as carrier waves, infrared signals, digital signals, etc. However, propagated signals, such as carrier waves, infrared signals, digital signals, etc., are not tangible machine readable media and are not configured to store instructions. 
In general, a machine readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form accessible by a machine (e.g., a computer, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.). In various embodiments, hardwired circuitry may be used in combination with software instructions to implement the techniques. Thus, the techniques are neither limited to any specific combination of hardware circuitry and software nor to any particular source for the instructions executed by the data processing system. The above description and drawings are illustrative and are not to be construed as limiting. Numerous specific details are described to provide a thorough understanding. However, in certain instances, well-known or conventional details are not described in order to avoid obscuring the description. References to “one embodiment” or “an embodiment” in the present disclosure are not necessarily references to the same embodiment; such references mean at least one embodiment. In the foregoing specification, the disclosure has been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. | 59,451 |
11861914 | DETAILED DESCRIPTION First Embodiment FIG.1is a block diagram showing a configuration example of an object recognition device100according to a first embodiment of the present invention. The object recognition device100in this embodiment is applied to, for example, an automobile. The object recognition device100in this embodiment includes a sensor1and a controller10that processes information acquired by the sensor1. The controller10includes a point clouds grouping unit11, a polygon approximation unit12, and an approximate polygon point clouds belongingness determination unit13. The sensor1acts as a three-dimensional point clouds acquisition unit that acquires three-dimensional point clouds data of the surrounding environment of the sensor1, that is, of objects existing in the surrounding environment of the vehicle on which the object recognition device100is mounted in this embodiment. The sensor1is assumed to be, for example, a light detection and ranging (LiDAR) sensor, a radar, or a stereo camera, and LiDAR is adopted as the sensor1in this embodiment. The acquired three-dimensional point clouds data (hereinafter, simply referred to as “point clouds”) is input to the controller10. An example of the acquired point clouds will be described later with reference toFIG.2. The controller10includes, for example, a central processing unit (CPU), a read-only memory (ROM), a random-access memory (RAM), and an input/output interface (I/O interface). The ROM included in the controller10stores a program for executing each function of each functional unit described below. In other words, the controller10implements functions of the point clouds grouping unit11, the polygon approximation unit12, and the approximate polygon point clouds belongingness determination unit13described below by executing various programs stored in the ROM. The point clouds grouping unit11projects point clouds acquired by the sensor1onto a two-dimensional plane parallel to the ground, and groups the point clouds according to a proximity of the point clouds. As such a grouping method, a method called Euclidean Clustering is used in this embodiment, but the method is not limited to this method, and another method of grouping according to the proximity of the point clouds may be used. The polygon approximation unit12performs so-called polygon fitting in which the point clouds grouped by the point clouds grouping unit11are approximated to a predetermined polygon. The approximate polygon point clouds belongingness determination unit13determines, on the basis of a positional relation between the sensor1and the approximate polygon approximated by the polygon approximation unit12, whether or not the sides corresponding to the grouped point clouds among the sides constituting the approximate polygon are in blind zones when viewed from the sensor1. When the approximate polygon point clouds belongingness determination unit13determines that the sides corresponding to the grouped point clouds are in blind zones when viewed from the sensor1, it determines that the point clouds corresponding to those sides belong to a plurality of objects (plural objects) and recognizes that the point clouds indicate positions of the plural objects. 
Meanwhile, when the approximate polygon point clouds belongingness determination unit13determines that the sides corresponding to the grouped point clouds are not in a blind zone when viewed from the sensor1, it determines that the point clouds constituting those sides belong to one object (a single object) and recognizes that the point clouds indicate the position of the single object corresponding to the approximate polygon generated by the polygon approximation. With such a configuration, the object recognition device100can determine whether an object indicated by acquired point clouds indicates a single object or plural objects. Subsequently, details of a method for determining whether the acquired point clouds indicate a single object or plural objects will be described with reference toFIGS.2and3. FIG.2is a diagram illustrating an example of point clouds acquired by the object recognition device100. In the example shown inFIG.2, a situation is shown in which a vehicle20is parked adjacent to a planting30on a side of a road (near a boundary between the road and a sidewalk) in front of a vehicle on which the sensor1is mounted. A fan-shaped line extending from the sensor1indicates a viewing angle of the sensor1, and the direction in which the viewing angle expands when viewed from the sensor1is the front of the vehicle. Here, the sensor1adopted as the three-dimensional point clouds acquisition unit in this embodiment is LiDAR, that is, a sensor that outputs laser light (emission waves) in a plurality of directions within the viewing angle, detects the laser light (reflected waves) reflected at a plurality of reflection points on a surface of an object existing within the viewing angle, and thereby acquires the positions, relative to the sensor1, of plural reflection points2(hereinafter, simply referred to as the reflection points2) corresponding to positions on the side surface of the object facing the sensor1. The object recognition device100recognizes a position of the object existing in the surrounding environment of the object recognition device100on the basis of point clouds including plural detection points acquired as the plural reflection points2by using the sensor1. The sensor1may be any sensor as long as it can acquire a position of an object surface as point clouds within a viewing angle, and is not limited to a radar or LiDAR. The sensor1may be, for example, a stereo camera. That is, the object recognition device100can calculate a position of an object surface for each pixel corresponding to an object existing within a predetermined viewing angle (an angle of view) imaged by, for example, the stereo camera, and recognize a position of the object existing in the surrounding environment of the object recognition device100on the basis of point clouds having positions corresponding to each pixel as detection points. In the following description, it is assumed that the sensor1adopted as the three-dimensional point clouds acquisition unit is LiDAR or a radar. Here, in the scene shown inFIG.2, as a result of grouping the point clouds acquired by the sensor1according to the proximity of the point clouds, point clouds at a side surface of the planting30on the sensor1side and point clouds at a rear end of the parked vehicle20may be grouped together. That is, in reality, point clouds corresponding to the plural objects including the planting30and the parked vehicle20may be grouped together as point clouds corresponding to one object (a single object). 
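The projection onto a two-dimensional plane parallel to the ground and the grouping according to proximity performed by the point clouds grouping unit11can be illustrated with a minimal sketch. This is a simple Euclidean-clustering-style grouping written for clarity rather than efficiency, and the distance threshold is an assumed parameter, not a value given by the embodiment.

```python
import math

def project_to_ground(points_3d):
    """Project 3-D reflection points onto a 2-D plane parallel to the ground
    by dropping the height component (a simplifying assumption)."""
    return [(x, y) for x, y, _z in points_3d]

def group_by_proximity(points_2d, threshold=0.5):
    """Group points whose distance to some member of a group is below `threshold`."""
    groups = []
    unassigned = list(points_2d)
    while unassigned:
        cluster = [unassigned.pop()]
        grew = True
        while grew:
            grew = False
            for p in list(unassigned):
                if any(math.dist(p, q) <= threshold for q in cluster):
                    cluster.append(p)
                    unassigned.remove(p)
                    grew = True
        groups.append(cluster)
    return groups

points = [(0.0, 0.0, 0.4), (0.3, 0.1, 0.5), (5.0, 5.0, 0.6), (5.2, 5.1, 0.4)]
print(group_by_proximity(project_to_ground(points)))  # two groups expected
```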
Thus, in the related art, rectangular approximation is performed as the polygon approximation on the basis of the point clouds grouped as a single object; as a result, the dotted rectangular approximate polygon (approximate rectangle) shown inFIG.2is generated. When such an approximate rectangle is generated, it is recognized that a single object corresponding to the approximate rectangle exists at the position where the approximate rectangle is generated. However, the position where the approximate rectangle is generated is actually a recess formed by the plural objects, the planting30and the parked vehicle20, arranged in an L-shape, and thus there is a problem in that the position of the object recognized from the approximate rectangle acquired by the polygon approximation differs from the positions of the actual objects. In this embodiment, in order not to cause such a difference, it is determined whether or not the point clouds acquired by using the sensor1are point clouds corresponding to plural objects. If it can be determined that the point clouds correspond to plural objects, even if an approximate polygon is once generated by the polygon approximation, it can be correctly recognized that the position where the approximate polygon is generated is a recess formed by the plural objects and that no object exists at that position. Hereinafter, the details of the method for determining whether or not the point clouds acquired by using the sensor1correspond to plural objects will be described. FIG.3is a flowchart illustrating an object recognition method by the object recognition device100according to this embodiment. Processes illustrated in the flowchart are programmed in the controller10so as to be repeatedly executed at regular intervals while the object recognition device100is activated. In step S101, the controller10acquires the point clouds including plural reflection points2by using the sensor1. When the point clouds are acquired, a process in the following step S102is executed. In step S102, the controller10projects the point clouds acquired in step S101onto the two-dimensional plane parallel to the ground, and groups the point clouds according to the proximity of the point clouds. In step S103, the controller10performs the polygon approximation (polygon fitting) based on the point clouds grouped in step S102. The polygon approximated in this embodiment is a quadrangle (rectangle), but may be a triangle or another polygon. The approximate polygon is fitted so that an error between positions of the sides constituting the approximate polygon and positions of the point clouds is the smallest. In step S104, the controller10performs a blind side determination with respect to the approximate polygon. In the blind side determination, the controller10identifies, among the sides constituting the approximate polygon approximated in step S103, a side that is located in a blind zone of the approximate polygon when viewed from the sensor1(a side which includes the reflection points2located in the blind zone; also referred to as a blind side below). In other words, a side of the approximate polygon that does not face the sensor1is identified. Details of a method for identifying the side that is located in a blind zone will be described with reference toFIG.4. FIG.4is a diagram illustrating a method for determining the blind sides with respect to the approximate polygon according to this embodiment. 
Rectangles whose four corners are indicated by A to D in the diagram are the rectangles (approximate rectangles) approximated based on the point clouds acquired in step S101. FIG.4(a)is a diagram illustrating an example of the method for determining the blind sides with respect to the approximate polygon. In the present example, the side that can be observed by the sensor1(the side that is not located in a blind zone when viewed from the sensor1and faces the sensor1) is first identified on the basis of a relation between the sensor1and the sides constituting the approximate rectangle generated in the polygon approximation, and then the other sides are identified as blind sides. Specifically, first, the point closest to the sensor1and the points on both sides of that point are identified among the points A to D at the four corners of the approximate rectangle, so that a total of three points are identified. Then, the combination of two of the identified three points that maximizes the angle formed at the sensor1by the lines connecting the sensor1to each of the two points is examined. Referring to the diagram on the left side ofFIG.4(a), the point closest to the sensor1is the point C, and the points on both sides of the point C are the points A and D. As shown in the diagram, the combination of two points among the three points A, C, and D that maximizes the angle formed by the lines connecting the two points and the sensor1is the points A and D. Referring to the diagram on the right side ofFIG.4(a), the point closest to the sensor1is the point C, and the points on both sides of the point C are the points A and D. As shown in the diagram, the combination of two points among the three points A, C, and D that maximizes the angle formed by the lines connecting the two points and the sensor1is the points C and D. If the respective distances from the points C and D to the sensor1are the same and they are the closest points, either the point C or the point D can be selected. Among the sides constituting the approximate rectangle, all the line segments that connect the point closest to the sensor1with either of the two points selected as the angle-maximizing combination are identified as the sides that can be observed by the sensor1, and the other sides are identified as blind sides. With reference to the diagram on the left side ofFIG.4(a), the observable sides are identified as the side A-C and the side C-D, which are the sides surrounded by thick solid lines in the diagram, and the blind sides are identified as the side A-B and the side B-D, which are the sides other than the observable sides. With reference to the diagram on the right side ofFIG.4(a), the observable side is identified as the side C-D, which is the side surrounded by the thick solid lines in the diagram, and the blind sides are identified as the side A-B, the side A-C, and the side B-D, which are the sides other than the observable side. FIG.4(b)is a diagram illustrating another example of the method for determining the blind sides with respect to the approximate polygon. In the present example, the blind sides are directly identified without identifying the observable side. Specifically, one of the four points A to D of the approximate rectangle is selected, and when a straight line connecting the selected point and the sensor1intersects a side other than the sides connected to the selected point, the sides connected to the selected point are identified as blind sides. 
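A minimal sketch of this direct identification rule ofFIG.4(b)follows. The proper-segment-intersection test, the cyclic ordering of the corners, and the example coordinates are assumptions of the sketch, not details of the embodiment.

```python
def _ccw(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def _segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2."""
    d1, d2 = _ccw(q1, q2, p1), _ccw(q1, q2, p2)
    d3, d4 = _ccw(p1, p2, q1), _ccw(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def blind_sides(corners, sensor):
    """Identify blind sides per the FIG.4(b) rule: if the line of sight from the
    sensor to a corner crosses a side not connected to that corner, the sides
    connected to that corner are blind.  Corners are given in cyclic order."""
    n = len(corners)
    sides = [(i, (i + 1) % n) for i in range(n)]
    blind = set()
    for i, corner in enumerate(corners):
        other_sides = [s for s in sides if i not in s]
        if any(_segments_intersect(sensor, corner, corners[a], corners[b])
               for a, b in other_sides):
            blind.update(s for s in sides if i in s)
    return sorted(blind)

# Corners in cyclic order A, B, D, C with the sensor closest to corner C.
corners = [(0.0, 2.0), (4.0, 2.0), (4.0, 0.0), (0.0, 0.0)]   # A, B, D, C
print(blind_sides(corners, sensor=(-3.0, -3.0)))              # expected: sides A-B and B-D
```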
By examining all of the points A to D, the blind sides can be identified from all the sides constituting the approximate rectangle. With reference to the diagram on the left side ofFIG.4(b), a straight line connecting the point B and the sensor1intersects the side C-D, which is a side other than the side A-B and the side B-D connected to the point B, and thus the side A-B and the side B-D, which are the sides connected to the point B and surrounded by the thick solid lines in the diagram, are identified as blind sides. With reference to the diagram on the right side ofFIG.4(b), a straight line connecting the point A and the sensor1intersects the side C-D, which is a side other than the side A-B and the side A-C connected to the point A, and thus the side A-B and the side A-C, which are the sides connected to the point A and surrounded by the thick solid lines in the diagram, are identified as blind sides. Further, a straight line connecting the point B and the sensor1intersects the side C-D, which is a side other than the side A-B and the side B-D connected to the point B, and thus the side A-B and the side B-D connected to the point B are also identified as blind sides. It should be noted that the method described with reference toFIG.4(b)can be applied not only to a rectangular approximate polygon (approximate rectangle) but also to all other polygonal shapes. In this way, when the sides located in blind zones when viewed from the sensor1among the sides constituting the approximate rectangle have been identified, the process in the subsequent step S105is performed (seeFIG.3). In step S105, the controller10performs a determination of the belongingness of reflection points to the approximate polygon. In the determination of the belongingness of reflection points to the approximate polygon, the controller10determines which of the sides constituting the approximate rectangle each of the plural reflection points2constituting the point clouds on which the approximate rectangle generated by the polygon approximation is based corresponds to (belongs to). More specifically, the controller10determines whether or not the plural reflection points2constituting the point clouds acquired in step S101belong to a side that is not a blind side (that is, a side that can be observed by the sensor1) identified in step S104. A method for determining the side to which the reflection points2belong will be described with reference toFIG.5. FIG.5is a diagram illustrating an example of a method for determining the belongingness of reflection points to an approximate polygon according to this embodiment. A rectangle represented by four corner points A to D indicates the approximate rectangle approximated in step S103. Reflection points2a,2b, and2cindicate a part of the plurality of reflection points2constituting the point clouds acquired in step S101. In the determination of the belongingness of reflection points to the approximate polygon, based on a part of the plurality of reflection points2constituting the acquired point clouds, it is determined which of the sides constituting the approximate rectangle the point clouds formed by the reflection points2belong to. In the present example, first, perpendicular lines are drawn from the reflection points2to the sides constituting the approximate rectangle. In this case, when no perpendicular line can be drawn from a reflection point2to any of the sides constituting the approximate rectangle, it is determined that there is no side to which that reflection point2belongs. 
Meanwhile, when there are intersections between the perpendicular lines drawn from a reflection point2and the sides of the approximate rectangle, it is determined that the reflection point2belongs to the side, among the sides where the intersections exist, whose distance from the reflection point2to the intersection is the smallest. With reference toFIG.5, for example, perpendicular lines can be drawn from the reflection point2ato the side A-B and the side C-D. Of the two perpendicular lines drawn from the reflection point2a, the one with the minimum length intersects the side A-B. Thus, the reflection point2ais determined to belong to the side A-B. Since no perpendicular line can be drawn from the reflection point2bto any side, it is determined that there is no side to which the reflection point2bbelongs. Perpendicular lines can be drawn from the reflection point2cto all the sides, but among these perpendicular lines, the one with the smallest length intersects the side A-C. Therefore, the reflection point2cis determined to belong to the side A-C. In this way, it is possible to determine which of the sides constituting the approximate rectangle the reflection points2constituting the point clouds acquired in step S101belong to. When the side to which each of the reflection points2belongs has been determined, the process in the following step S106is performed. In step S106, the controller10determines whether or not the reflection points2constituting the point clouds acquired in step S101belong to a side that is not located in a blind zone when viewed from the sensor1among the sides constituting the approximate rectangle. That is, the controller10determines whether or not the side, determined in step S105, to which the reflection points2belong is a side (observable side) other than the blind sides. When it is determined that reflection points2belonging to a side other than the blind sides exist among the plural reflection points2constituting the point clouds acquired in step S101, it is determined that the reflection points2are not located in the blind zones of the approximate rectangle with respect to the sensor1, and then the process in step S107is performed. Meanwhile, when it is determined that there are no reflection points2belonging to a side other than the blind sides, that is, when it is determined that the reflection points2belong to the blind sides, it is determined that the reflection points2are located in the blind zones of the approximate rectangle with respect to the sensor1, and then the process in step S108is performed. In step S107, since the reflection points2constituting the acquired point clouds belong to a side that is not located in a blind zone, the controller10determines that the reflection points2are not located in the blind zones of the approximate rectangle with respect to the sensor1, and determines that the object indicated by the point clouds acquired in step S101is a single object. As a result, the object recognition device100recognizes that an object having an outer shape corresponding to the approximate rectangle actually exists in a top view at the position of the approximate rectangle generated by the polygon approximation based on the point clouds acquired in step S101. 
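The belongingness determination described with reference toFIG.5can be sketched as follows: a perpendicular is dropped from a reflection point onto each side, only feet that fall on the side are kept, and the point is assigned to the side with the shortest such perpendicular (None is returned when no perpendicular can be drawn, as for the reflection point2b). The coordinates and helper names below are illustrative assumptions.

```python
import math

def perpendicular_foot(point, a, b):
    """Foot of the perpendicular from `point` onto side a-b, or None if the
    foot does not fall on the segment (no perpendicular can be drawn)."""
    (ax, ay), (bx, by), (px, py) = a, b, point
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    if 0.0 <= t <= 1.0:
        return (ax + t * dx, ay + t * dy)
    return None

def side_of_point(point, sides):
    """Return the side (pair of corner coordinates) the reflection point
    belongs to, i.e., the side with the shortest valid perpendicular."""
    best_side, best_dist = None, math.inf
    for a, b in sides:
        foot = perpendicular_foot(point, a, b)
        if foot is not None and math.dist(point, foot) < best_dist:
            best_side, best_dist = (a, b), math.dist(point, foot)
    return best_side

# Sides A-B, B-D, D-C, and C-A of an example approximate rectangle.
A, B, D, C = (0.0, 2.0), (4.0, 2.0), (4.0, 0.0), (0.0, 0.0)
sides = [(A, B), (B, D), (D, C), (C, A)]
print(side_of_point((1.0, -0.2), sides))   # expected: the side D-C (the bottom side)
```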
Meanwhile, in step S108, since the reflection points2constituting the acquired point clouds belong to the blind sides, the controller10determines that the reflection points2are located in the blind zone of the approximate rectangle with respect to the sensor1, and determines that the object indicated by the point clouds acquired in step S101is plural objects. As a result, the object recognition device100recognizes that the actual location corresponding to the position where the approximate rectangle is generated by the polygon approximation based on the point clouds acquired in step S101is a recess formed by plural objects (for example, in the example shown inFIG.2, the planting30and the parked vehicle20), and that an object corresponding to the approximate rectangle does not actually exist. When it has been determined by the above processes whether the object indicated by the point clouds is configured with the reflection points2of a single object or the reflection points2of plural objects, the controller10ends the series of processes related to object recognition. In addition, it should be noted that it is not always necessary, after performing the polygon approximation, to perform the process of determining which side of the approximate polygon the reflection points2constituting the acquired point clouds belong to (the processes after step S104). For example, when the object indicated by the point clouds is an elongated object, it may be difficult to determine the side to which the point clouds belong. Moreover, an elongated object existing on the side of a road may have little influence on the running of the vehicle and may be negligible. Therefore, the processes after step S104can be performed only when the length of the shortest side among the sides constituting the approximate polygon is equal to or greater than a predetermined value. As a result, the processes after step S104are performed only for objects other than an elongated object whose side to which the reflection points2belong is difficult to determine or an elongated object that is so elongated that it does not need to be recognized, and thus a calculation load can be reduced. By tracking the acquired point clouds in a time series, it may be possible to identify an attribute of the object indicated by the point clouds on the basis of a movement manner of the point clouds. More specifically, for example, by measuring the position of an object existing in the surrounding environment in a time series using a so-called time series tracking technique, it may be possible to add an attribute to the object indicated by the point clouds on the basis of the size and the movement manner of the point clouds. In this case, when the object can be clearly identified as a single object based on the attribute of the object indicated by the point clouds, the processes after step S104can be omitted. As a result, the processes after step S104are performed only when the object indicated by the point clouds cannot be clearly determined to be a single object, and thus the calculation load can be reduced. 
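The determination in steps S106through S108, together with the optional side-length and attribute checks described above, could be sketched as follows; the threshold value, the attribute gate, and the data structures are assumptions of this sketch rather than requirements of the embodiment.

```python
def indicates_plural_objects(assigned_sides, blind_sides, side_lengths,
                             min_side_length=0.3, attribute=None):
    """Decide whether the grouped point clouds indicate plural objects.

    assigned_sides -- side assigned to each reflection point (None if no side)
    blind_sides    -- sides of the approximate rectangle hidden from the sensor
    side_lengths   -- lengths of the sides of the approximate rectangle
    attribute      -- attribute from time-series tracking, if already identified
    """
    # Skip the blind-zone check for very elongated rectangles or for objects
    # whose attribute already identifies them; treat them as a single object.
    if min(side_lengths) < min_side_length or attribute is not None:
        return False
    assigned = [s for s in assigned_sides if s is not None]
    # Plural objects only if every assigned side is a blind side (step S108);
    # any point on an observable side means a single object (step S107).
    return bool(assigned) and all(s in blind_sides for s in assigned)
```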
As described above, by performing the processes described with reference toFIG.3, the object recognition device100in this embodiment can appropriately determine whether the approximate polygon generated by the polygon approximation on the grouped point clouds correctly indicates the position of an actual object, or whether the approximate polygon does not indicate an actual object because the corresponding location is actually a recess formed by plural objects, so that the approximate polygon does not correctly indicate the position of an actual object. As a result, when the polygon approximation is performed based on the point clouds acquired by recognizing the position of the object existing in the surrounding environment, it is possible to reliably determine whether or not the generated approximate polygon correctly indicates the position of an actual object. Meanwhile, when it is determined whether the object indicated by the point clouds acquired by using the sensor1is a single object or plural objects, it is not always necessary to perform the polygon approximation based on the point clouds. For example, when the distances between the plurality of respective reflection points2constituting the point clouds and the sensor1can be detected with a high accuracy, the object recognition device100can determine, without performing the polygon approximation, that the plural reflection points2which would constitute the point clouds when the polygon approximation is performed are located in the blind zones of the approximate polygon. As a result, it can determine whether or not the object indicated by the point clouds includes plural objects. More specifically, when there are reflection points2closer to the sensor1on both sides of the reflection point2farthest from the sensor1among the plural reflection points2constituting the acquired point clouds, the object recognition device100determines, without performing the polygon approximation, that the reflection points2would be located in the blind zones of the approximate polygon if the polygon approximation were performed on the point clouds, and then determines that plural objects are indicated by the point clouds. Also, when there are reflection points2farther from the sensor1on both sides of the reflection point2closest to the sensor1, the object recognition device100may be configured to determine, without performing the polygon approximation, that the reflection points2would not be located in the blind zones of the approximate polygon if the polygon approximation were performed on the point clouds, and then may determine that the object indicated by the point clouds is a single object. However, a measurement error normally occurs in the distances between the plurality of respective reflection points2and the sensor1, and thus it is preferable to determine whether the object indicated by the point clouds is a single object or plural objects by determining, from the result acquired by the polygon approximation on the point clouds as described above, whether or not the reflection points2are located in the blind zone of the approximate polygon. As described above, the object recognition device100in the first embodiment performs the object recognition method using the sensor1that acquires the position of the object existing in the surrounding environment as point clouds including the plurality of reflection points2(detection points) in the top view. 
The method includes: grouping the point clouds according to a proximity; determining, when performing polygon approximation on the grouped point clouds, whether or not at least part of the detection points constituting the grouped point clouds are located in a blind zone, with respect to the sensor, of an approximate polygon acquired by the polygon approximation on the point clouds; recognizing the grouped point clouds as point clouds corresponding to plural objects when it is determined that the detection points are located in the blind zone with respect to the sensor; and recognizing the grouped point clouds as point clouds corresponding to a single object of the approximate polygon when it is determined that the detection points are not located in the blind zone with respect to the sensor. Therefore, it is possible to determine whether or not the object indicated by the approximate polygon acquired by the polygon approximation on the grouped point clouds actually exists. Since it is possible to determine that the object indicated by the grouped point clouds is plural objects, it is possible to correctly recognize that the grouped point clouds are point clouds corresponding to a recess formed by the plural objects and that there is no object at that position. When the length of the shortest side of the sides constituting the approximate polygon is longer than the predetermined value, the object recognition device100in the first embodiment determines whether or not at least part of the reflection points2constituting the point clouds corresponds to a side which is located in a blind zone with respect to the sensor1among the sides constituting the approximate polygon. In this way, it is possible to determine whether or not the reflection points2(detection points) are located in the blind zone only for objects other than an elongated object whose side to which the point clouds belong is difficult to determine or an elongated object that is so elongated that it does not need to be recognized, and thus the calculation load can be reduced. The object recognition device100in the first embodiment measures the position of the object existing in the surrounding environment in the time series, identifies the attribute of the object measured in the time series, and, when the grouped point clouds correspond to an object whose attribute is not identified, determines whether or not at least part of the reflection points2(detection points) constituting the point clouds are located in the blind zone of the approximate polygon with respect to the sensor1. As a result, it can be determined whether or not the reflection points2are located in the blind zone of the approximate polygon only when the object indicated by the point clouds cannot be clearly identified to be a single object, and thus the calculation load can be reduced. Second Embodiment Hereinafter, an object recognition device200according to a second embodiment of the present invention will be described. FIG.6is a block diagram showing a configuration example of the object recognition device200according to this embodiment. The object recognition device200is different from the object recognition device100in the first embodiment in that a point clouds reduction unit21and a division unit22are further provided. The point clouds reduction unit21reduces the number of points (the number of reflection points2) in the point clouds acquired by the sensor1. 
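One common way to perform such a reduction is a voxel-grid filter, of the kind mentioned as an example in the description of step S201below; a minimal sketch, assuming a fixed voxel size and using the centroid of each voxel as its representative point, is given here.

```python
def voxel_downsample(points, voxel_size=0.2):
    """Reduce a 3-D point cloud by keeping one representative point
    (the centroid) per voxel of edge length `voxel_size`."""
    voxels = {}
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        voxels.setdefault(key, []).append((x, y, z))
    return [tuple(sum(coord) / len(pts) for coord in zip(*pts))
            for pts in voxels.values()]

dense = [(0.01 * i, 0.0, 0.5) for i in range(100)]   # 100 points spread over 1 m
print(len(voxel_downsample(dense)))                   # roughly 1 m / 0.2 m = 5 points
```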
When it is determined that the object indicated by the acquired point clouds includes plural objects, the division unit22recognizes each of the plurality of sides constituting the approximate polygon approximated based on the point clouds as a separate single object. Processes performed by the point clouds reduction unit21and the division unit22will be described with reference toFIG.7. FIG.7is a flowchart illustrating an object recognition method by the object recognition device200according to this embodiment. Processes illustrated in the flowchart are programmed in the controller10so as to be repeatedly performed at regular intervals while the object recognition device200is activated. It is different from the object recognition method in the first embodiment described above with reference toFIG.3in that step S201and step S202are added. Hereinafter, the difference from the first embodiment will be mainly described, and descriptions of the same steps as in the first embodiment will be omitted. In step S201, the controller10reduces the point clouds acquired in step S101. A reduction method is not particularly limited, and, for example, a voxel filter may be used. By reducing the point clouds in this step, the calculation load of later processes performed based on the point clouds can be reduced. If the calculation load does not need to be reduced, the reduction in step S201may be omitted; it is not an essential process. Step S202is a process performed when it is determined that the object indicated by the acquired point clouds includes plural objects. The controller10divides each side to which the reflection points2belong, that is, each blind side, from the sides constituting the approximate rectangle. The controller10performs a process of cutting out and recognizing (dividing and recognizing) each blind side. That is, the controller10recognizes that the reflection points2corresponding to each blind side are the reflection points2corresponding to a single object. Details of the method of cutting out and recognizing an object based on blind sides (object division method) will be described with reference toFIG.8. For the sake of simplicity, in the following description, recognizing the reflection points2as the reflection points2corresponding to a single object for each blind side is expressed as “cutting out” or “object division”. FIG.8is a diagram illustrating an object division method performed by the object recognition device200according to the embodiment. Rectangles whose four corners are indicated by A to D in the diagram are the rectangles (approximate rectangles) approximated based on the point clouds acquired in step S101. FIG.8(a)is a diagram illustrating an object division method performed when it is determined that the object indicated by the point clouds includes plural objects. As described with reference toFIG.4in the first embodiment, it is identified in step S106that the reflection points2constituting the point clouds belong to the blind sides, and then it is determined that the object indicated by the point clouds includes plural objects. According to the object division in this embodiment, the reflection points2corresponding to the blind sides are recognized as the reflection points2corresponding to a single object for each blind side to which the reflection points2belong. 
For example, in a diagram on a left side ofFIG.8(a), by object division, for each blind side (side A-B and side B-D) to which the reflection points2belong, the reflection points2corresponding to each blind side are cut out as two rectangular single objects (each object having a shape shown by a thick line frame in the diagram) according to the distribution of the point clouds. As another example, in a diagram on a right side ofFIG.8(a), by the object division, for each blind side (the side A-B, the side A-C, and the side B-D) to which the reflection points2belong, the reflection points2corresponding to each blind side are cut out as three rectangular single objects (each object having the shape shown by the thick line frame in the diagram) according to the distribution of the point clouds. These cut-out quadrangular single objects (hereinafter, simply rectangular objects) correspond to a position of an actual object existing in the surrounding environment, and thus, the object recognition device200can more correctly recognize the position of the object existing in the surrounding environment based on the rectangular objects. For example, when the diagram on the left side ofFIG.8(a)is a result of detecting the surrounding environment shown inFIG.2, a rectangular object related to the side A-B corresponds to the planting30, and a rectangular object related to the side B-D corresponds to the rear end of the parked vehicle20. As shown inFIG.8(a), it is not always necessary to cut out all the blind sides to which the reflection points2belong, and a part of at least one or more blind sides may be cut out as specified sides. In this case, for example, a side having a large number of reflection points2belonging to the blind side, or a side having a large ratio of the number of reflection points2to the length of the blind side, may be preferentially cut out as the specified side. When it is determined that an object exists in the surrounding environment of the object recognition device200, not only the sensor1but also at least one or more other sensors different from the sensor1may be used, and the object existing in the surrounding environment may be simultaneously detected by using a plurality of sensors. In this case, the matched object can be recognized as an object that actually exists in the surrounding environment only when the object detected by the sensor1matches the object detected by the other sensors different from the sensor1. As a result, the position of an object existing in the surrounding environment can be detected with a higher accuracy than when only the sensor1is used. When the object recognition device200is configured in this way, a plurality of rectangular objects corresponding to the plural objects detected by the sensor1are generated by performing the above object division, and thus, it is possible to more easily determine matching between the object detected by the sensor1and the object detected by the other sensors different from the sensor1. An example of a method for determining the matching of objects detected by a plurality of sensors will be described later in a description of a third embodiment. FIG.8(b)is a diagram illustrating a case where it is determined that the object indicated by the point clouds is not plural objects.
As shown in the diagram, when the reflection points2belong to a side (observable side) that is not a blind side, the object indicated by the point clouds is determined to be a single object (step S107), and thus, the object division is not performed. As described above, according to the object recognition device200in the second embodiment, when an object is recognized as plural objects, each side corresponding to the reflection points2among the sides constituting the approximate polygon is recognized as a single object. Therefore, the position of the object existing in the surrounding environment can be correctly recognized. According to the object recognition device200in the second embodiment, the side recognized as a single object is determined according to the number of corresponding reflection points2. Therefore, for example, an object that is close to the sensor1and reflects more laser lights output from the sensor1can be preferentially recognized as a single object. Third Embodiment Hereinafter, an object recognition device300according to the third embodiment of the present invention will be described. FIG.9is a block diagram showing a configuration example of the object recognition device300according to this embodiment. The object recognition device300is different from the object recognition device200in the second embodiment in that a camera3, an attribute determination unit32, and an information integration unit33are further provided. Hereinafter, the difference from the second embodiment will be mainly described. The camera3acts as an attribute identification source acquisition unit that acquires information for determining an attribute of an object existing in the surrounding environment in the attribute determination unit32described later. The attribute here is information representing a characteristic of the object, which is mainly identified from a shape of the object, such as a person (pedestrian), a car, a guardrail, and a planting. The camera3captures an image of the surrounding environment and provides captured video data (camera images) to the attribute determination unit32as attribute identification source. It should be noted that a configuration adopted as the attribute determination source acquisition unit is not limited to the camera3. The attribute identification source acquisition unit may be another sensor capable of acquiring information that can identify an attribute by subsequent processes. The attribute determination unit32identifies an attribute of each object existing in the surrounding environment (surrounding object) based on the camera image acquired by the camera3, and adds the identified attribute to the surrounding object. The information integration unit33integrates the surrounding object to which the attribute is added by the attribute determination unit32and the information about the object detected by using the sensor1. Details of the processes performed by the respective functional units will be described with reference toFIGS.10and11. First, a scene in which the information about the object acquired by the sensor1and the camera3is integrated will be described with reference toFIG.10.FIG.10is a diagram illustrating a method for causing the sensor1and the camera3according to the third embodiment to acquire information about an object existing in the surrounding environment. 
First, the method for causing the sensor1to acquire the information about the object existing in the surrounding environment is the same as the method described with reference toFIG.2in the first embodiment. That is, the sensor1acquires the point clouds corresponding to the position of the side surface, on the sensor1side, of the object existing in the fan-shaped sensor viewing angle extending from the sensor1. The point clouds acquired by the sensor1are identified as a single object or a plurality of rectangular objects cut out by the object division through the polygon approximation. As shown in the diagram, the camera3acquires an image of the surrounding environment in the same direction as the sensor1. That is, according to a scene shown inFIG.10, the camera3acquires an object similar to the object indicated by the point clouds acquired by the sensor1, that is, a camera image including the side surface of the planting30and the rear end of the parked vehicle20as information about the object existing in the surrounding environment. FIG.11is a flowchart illustrating an object recognition method by the object recognition device300according to this embodiment. Processes illustrated in the flowchart are programmed in the controller10so as to be constantly performed at regular intervals while the object recognition device300is activated. It is different from the object recognition method in the second embodiment described above with reference toFIG.7in that step S301to step S303are added. Hereinafter, the difference from the second embodiment will be mainly described, and the descriptions of the same steps as in the second embodiment will be omitted. In step S301, the controller10identifies an attribute of the surrounding object based on the image acquired by the camera3. In step S302, the controller10determines whether or not the information about the object acquired by the sensor1matches the information about the object acquired by the camera3. In this embodiment, matching of the information is determined based on a coincidence degree of the information about the object. The coincidence degree may be calculated based on, for example, a positional relation between the object detected by the sensor1and the object detected by the camera3. Specifically, for example, a distance from the sensor1to an object existing in the surrounding environment is detected and a distance from the camera3to the surrounding object is detected, and the coincidence degree may be calculated based on a difference between the distances between the respective objects from the sensor1and the camera3. It can be determined that the closer the distances are, the higher the coincidence degree between the objects detected by the sensor1and the camera3is. When the calculated coincidence degree exceeds a predetermined threshold value, it is determined that the information about the respective objects matches. In addition to or in place of such a calculation method, another calculation method may be adopted. Another calculation method for the coincidence degree will be described with reference toFIG.12. FIG.12is a diagram illustrating a method for determining whether or not the information about the object acquired by the sensor1matches the information about the object acquired by the camera3. First, in the camera image acquired by the camera3, an occupied frame of the object in the camera image (a frame surrounding an outer shape of the object in the image) is extracted. 
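Before the occupied-frame comparison ofFIG.12is continued, the distance-based coincidence degree described for step S302 can be illustrated with a short Python sketch. The mapping from the distance difference to a score, the scale value, and the threshold are illustrative assumptions; the text only requires that closer distances yield a higher coincidence degree and that a predetermined threshold is applied.

def coincidence_degree(dist_sensor, dist_camera, scale=5.0):
    """Map the difference between the distance measured by the sensor and the
    distance estimated from the camera to a score in [0, 1]; the closer the two
    distances, the higher the coincidence degree (illustrative mapping only)."""
    diff = abs(dist_sensor - dist_camera)
    return max(0.0, 1.0 - diff / scale)

def is_same_object(dist_sensor, dist_camera, threshold=0.7):
    """Step S302 sketch: the objects are treated as matching when the
    coincidence degree exceeds a predetermined threshold value (assumed here)."""
    return coincidence_degree(dist_sensor, dist_camera) > threshold

# Example: a parked vehicle measured at 12.3 m by the sensor and 12.6 m by the
# camera yields a coincidence degree of 0.94, so the detections are treated as
# the same object under the assumed threshold.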
For example, when the parked vehicle20shown inFIG.10is shown in the image acquired by the camera3, a figure B which substantially matches an outer shape of the rear end surface of the parked vehicle20is extracted as the occupied frame of the parked vehicle20that appears when the parked vehicle20is captured from behind. Then, the attribute “car” identified in step S301is given to the figure B. Meanwhile, from the information about the object generated based on the point clouds acquired by the sensor1, a figure A is extracted as a rectangular projection frame showing the outer shape of the object when viewed from a horizontal direction. Specifically, when the object indicated by the point clouds acquired by the sensor1is a single object, the approximate rectangle acquired by the polygon approximation in step S103is projected onto the camera image and made two-dimensional, thereby generating the figure A as a projection frame. When the object indicated by the point clouds acquired by the sensor1includes plural objects, a rectangular object cut out by the object division is projected onto the camera image and made two-dimensional, thereby generating the figure A. In the camera image, the figure A is projected at a position and a size that substantially coincide with a position and a size of the point clouds acquired by the sensor1. However, since the approximate polygon on which the figure A is based is information generated on a two-dimensional (horizontal) plane, it does not include information about the height of the figure A. Therefore, an appropriate constant value is set as the size of the figure A in the height direction, on the assumption that the object exists on the road. In this way, the figure A generated based on the point clouds acquired by the sensor1is projected into the camera image from which the figure B is extracted. In the camera image, a shared range (a matching range) between the figure B as the occupied frame of the imaged surrounding object and the figure A as the projection frame of the approximate polygon generated based on the point clouds or of the cut-out rectangular object is calculated. When the calculated shared range is equal to or greater than the threshold value, that is, when the following Formula (1) is satisfied, the objects acquired separately by the sensor1and the camera3are determined to match (to be the same object). The threshold value is set appropriately, in consideration of the performance, etc., of the sensor1and the camera3to be adopted, as a value that reliably indicates matching of the objects. [Formula 1] (A∧B)/(A∨B)>threshold value (1) Here, A∧B denotes the area shared by the figure A and the figure B in the camera image, and A∨B denotes the area of their union. When it is determined in step S302that the information about the object acquired by the sensor1matches the information about the object acquired by the camera3, that is, that these objects are the same object, the process in the following step S303is performed. When the above Formula (1) is not satisfied and the information about the object acquired by the sensor1does not match the information about the object acquired by the camera3, one cycle of the flow ends, and the processes from step S101are repeated. In step S303, the controller10integrates the information about the object acquired by the sensor1and the information about the object in the camera image determined to match that object. Therefore, the attribute identified based on the information acquired by the camera3is added to the object acquired by the sensor1.
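A minimal Python sketch of the shared-range determination of Formula (1) follows. It treats the figure A (projection frame) and the figure B (occupied frame) as axis-aligned boxes in image coordinates given as (x_min, y_min, x_max, y_max); the box format and the threshold value of 0.5 are illustrative assumptions.

def shared_range_match(figure_a, figure_b, threshold=0.5):
    """Formula (1) sketch: decide whether the projection frame (figure A) and
    the occupied frame (figure B) indicate the same object."""
    ax0, ay0, ax1, ay1 = figure_a
    bx0, by0, bx1, by1 = figure_b
    # Overlapping area A∧B
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    # Combined area A∨B
    area_a = (ax1 - ax0) * (ay1 - ay0)
    area_b = (bx1 - bx0) * (by1 - by0)
    union = area_a + area_b - inter
    return union > 0.0 and inter / union > threshold

# Example: a projected rectangular object and a camera occupied frame that
# overlap over most of their extent satisfy Formula (1), so their information
# would be integrated in step S303.
print(shared_range_match((100, 80, 220, 200), (110, 90, 230, 210)))  # True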
As a result, the amount, accuracy, and reliability of the information about the object acquired by the sensor1can be improved. As described above, according to the object recognition device300in the third embodiment, a recognition unit (camera3) different from the sensor1is used to recognize an object existing in the surrounding environment and identify an attribute of the object, and it is then determined whether or not the plural objects or the single object recognized by using the sensor1matches the object recognized by using the camera3. Then, when it is determined that the plural objects or the single object matches the object recognized by using the recognition unit, the attribute is applied to the plural objects or the single object. As a result, an attribute can be added to an object whose attribute cannot be identified from the information acquired by the sensor1alone. Since the attribute is added in response to the same object being recognized by a plurality of recognition units and being determined to match, the reliability of the information about the object acquired by the sensor1can be improved. According to the object recognition device300in the third embodiment, the distance from the sensor1to the object existing in the surrounding environment is detected, the distance from the camera3to the object existing in the surrounding environment is detected, and it is determined whether or not the plural objects or the single object recognized by using the sensor1and the object recognized by using the camera3are the same object based on the distance from the sensor1to the object and the distance from the camera3to the object. As a result, whether or not the respective objects match can be determined based on the positional relation between the object detected by the sensor1and the object detected by the camera3. According to the object recognition device300in the third embodiment, the camera3acquires the image including the object existing in the surrounding environment, the plural objects or the single object recognized by the sensor1is projected onto the image, the shared range between the object included in the image and the plural objects or the single object projected onto the image is calculated, and, based on the calculated shared range, it is determined whether or not the plural objects or the single object recognized by using the sensor1matches the object recognized by using the camera3. Therefore, it can be determined whether or not the respective objects match based on the shared range in the surface or space of the object detected by the sensor1and the object detected by the camera3. As a result, information can be integrated only for objects that are likely to be the same object. Although the embodiments of the present invention are described above, the above embodiments merely show some application examples of the present invention and are not intended to limit the technical scope of the present invention to the specific configurations of the above embodiments. The above embodiments can be appropriately combined as long as there is no contradiction.
11861915 | DETAILED DESCRIPTION Operating a vehicle in an autonomous driving mode involves evaluating information about the vehicle's external environment. A perception system of the vehicle, which has one or more sensors such as lidar, radar and/or cameras, detects surrounding objects. There can be dynamic objects such as vehicles, bicyclists, joggers or pedestrians, or other road users moving around the environment. In addition to identifying dynamic objects, the perception system also detects static objects such as buildings, trees, signage, crosswalks or stop lines on the roadway, the presence of parked vehicles on a side of the roadway, etc. Detecting and appropriately responding to traffic control devices such as signage can be particularly important when operating in an autonomous driving mode. However, there are many different road sign types used for different purposes, including regulatory signs (e.g., a stop, yield, no turn or speed limit sign), warning signs (e.g., notifying about an upcoming road condition such as a sharp turn or a no passing zone), school zone signs (e.g., identifying a school crossing or slow zone), guide signs (e.g., that provide information about a state or local route marker), emergency management and civil defense signs, motorist service and recreational signs (e.g., that provide information about nearby facilities), as well as temporary traffic control signs (which may be positioned on or adjacent to a roadway). In the United States, the Manual on Uniform Traffic Control Devices (MUTCD) provides standards as to the size, shape, color, etc., for such signage. In many situations the signage may be readily visible and simple to understand. However, other situations such as alternatives for a given sign, signs that indicate multiple conditions (e.g., permitted turns from different lanes), location-specific signs or non-standard signs can be challenging not only to detect, but also to understand and react to. By way of example, no-turn signage may have text that states “NO TURN ON RED”, a right-turn arrow inside a crossed-out red circle without any text, both text and the arrow indicator, date and/or time restrictions, etc. In order to avoid undue delay, the vehicle needs to correctly identify the sign and respond appropriately. Different approaches can be employed to detect and evaluate signage. For instance, images from camera sensors could be applied to a detector that employs machine learning (ML) to identify what the sign is. This could be enhanced by adding template matching to the ML approach. Imagery and lidar data could be employed to find high intensity patches, using an ML classifier to detect, e.g., speed limit signs. For non-standard or region-specific signage, camera and lidar information may be used to try to identify what the sign is. Alternatively, ray tracing may be applied to camera imagery to perform text detection to infer what the sign says. However, such specific approaches may be computationally intensive (e.g., have a high computation “cost” to the onboard computing system), may be difficult to maintain, and may not be scalable or extensible to new signs or variations of known signs. According to aspects of the technology, sensor information such as camera imagery and lidar depth, intensity and height (elevation) information is applied to a sign detector module. This enables the system to detect the presence of a given sign. A modular classification approach is applied to the detected sign.
This can include selective application of one or more trained machine learning classifiers, as well as a text and symbol detector. An annotator can be used to arbitrate between the results to identify a specific sign type. Additional enhancements can also be applied, such as identifying the location (localization) of the signage in the surrounding3D scene, and associating the sign with other nearby objects in the driving environment. And should the system not be able to determine what the specific sign type is or what it means, the vehicle could send the details to a remote assistance service to determine how to handle the sign (e.g., by updating an electronic map). Example Vehicle Systems The technology may be employed in all manner of vehicles configured to operate in an autonomous driving mode, including vehicles that transport passengers or items such as food deliveries, packages, cargo, etc. While certain aspects of the disclosure may be particularly useful in connection with specific types of vehicles, the vehicle may be one of many different types of vehicles including, but not limited to, cars, vans, motorcycles, cargo vehicles, buses, recreational vehicles, emergency vehicles, construction equipment, etc. FIG.1Aillustrates a perspective view of an example passenger vehicle100, such as a minivan or sport utility vehicle (SUV).FIG.1Billustrates a perspective view of another example passenger vehicle120, such as a sedan. The passenger vehicles may include various sensors for obtaining information about the vehicle's external environment. For instance, a roof-top housing unit (roof pod assembly)102may include one or more lidar sensors as well as various cameras (e.g., optical or infrared), radar units, acoustical sensors (e.g., microphone or sonar-type sensors), inertial (e.g., accelerometer, gyroscope, etc.) or other sensors (e.g., positioning sensors such as GPS sensors). Housing104, located at the front end of vehicle100, and housings106a,106bon the driver's and passenger's sides of the vehicle may each incorporate lidar, radar, camera and/or other sensors. For example, housing106amay be located in front of the driver's side door along a quarter panel of the vehicle. As shown, the passenger vehicle100also includes housings108a,108bfor radar units, lidar and/or cameras also located towards the rear roof portion of the vehicle. Additional lidar, radar units and/or cameras (not shown) may be located at other places along the vehicle100. For instance, arrow110indicates that a sensor unit (not shown) may be positioned along the rear of the vehicle100, such as on or adjacent to the bumper. Depending on the vehicle type and sensor housing configuration(s), acoustical sensors may be disposed in any or all of these housings around the vehicle. Arrow114indicates that the roof pod102as shown includes a base section coupled to the roof of the vehicle. And arrow116indicated that the roof pod102also includes an upper section raised above the base section. Each of the base section and upper section may house different sensor units configured to obtain information about objects and conditions in the environment around the vehicle. The roof pod102and other sensor housings may also be disposed along vehicle120ofFIG.1B. 
By way of example, each sensor unit may include one or more sensors of the types described above, such as lidar, radar, camera (e.g., optical or infrared), acoustical (e.g., a passive microphone or active sound emitting sonar-type sensor), inertial (e.g., accelerometer, gyroscope, etc.) or other sensors (e.g., positioning sensors such as GPS sensors). FIGS.1C-Dillustrate an example cargo vehicle150, such as a tractor-trailer truck. The truck may include, e.g., a single, double or triple trailer, or may be another medium or heavy-duty truck such as in commercial weight classes4through8. As shown, the truck includes a tractor unit152and a single cargo unit or trailer154. The trailer154may be fully enclosed, open such as a flat bed, or partially open depending on the type of goods or other cargo to be transported. In this example, the tractor unit152includes the engine and steering systems (not shown) and a cab156for a driver and any passengers. As seen inFIG.1D, the trailer154includes a hitching point, known as a kingpin,158, as well as landing gear159for when the trailer is detached from the tractor unit. The kingpin158is typically formed as a solid steel shaft, which is configured to pivotally attach to the tractor unit152. In particular, the kingpin158attaches to a trailer coupling160, known as a fifth-wheel, that is mounted rearward of the cab. For a double or triple tractor-trailer, the second and/or third trailers may have simple hitch connections to the leading trailer. Or, alternatively, each trailer may have its own kingpin. In this case, at least the first and second trailers could include a fifth-wheel type structure arranged to couple to the next trailer. As shown, the tractor may have one or more sensor units162,163and164disposed therealong. For instance, one or more sensor units162and/or163may be disposed on a roof or top portion of the cab156(e.g., centrally as in sensor unit162or a pair mounted on opposite sides such as sensor units163), and one or more side sensor units164may be disposed on left and/or right sides of the cab156. Sensor units may also be located along other regions of the cab156, such as along the front bumper or hood area, in the rear of the cab, adjacent to the fifth-wheel, underneath the chassis, etc. The trailer154may also have one or more sensor units166disposed therealong, for instance along one or both side panels, front, rear, roof and/or undercarriage of the trailer154. As with the sensor units of the passenger vehicles ofFIGS.1A-B, each sensor unit of the cargo vehicle may include one or more sensors, such as lidar, radar, camera (e.g., optical or infrared), acoustical (e.g., microphone or sonar-type sensor), inertial (e.g., accelerometer, gyroscope, etc.) or other sensors such as geolocation-based (e.g., GPS) positioning sensors, load cell or pressure sensors (e.g., piezoelectric or mechanical), inertial (e.g., accelerometer, gyroscope, etc.). There are different degrees of autonomy that may occur for a vehicle operating in a partially or fully autonomous driving mode. The U.S. National Highway Traffic Safety Administration and the Society of Automotive Engineers have identified different levels to indicate how much, or how little, the vehicle controls the driving. For instance, Level 0 has no automation and the driver makes all driving-related decisions. The lowest semi-autonomous mode, Level 1, includes some drive assistance such as cruise control. 
At this level, the vehicle may operate in a strictly driver-information system without needing any automated control over the vehicle. Here, the vehicle's onboard sensors, relative positional knowledge between them, and a way for them to exchange data, can be employed to implement aspects of the technology as discussed herein. Level 2 has partial automation of certain driving operations, while Level 3 involves conditional automation that can enable a person in the driver's seat to take control as warranted. In contrast, Level 4 is a high automation level where the vehicle is able to drive without assistance in select conditions. And Level 5 is a fully autonomous mode in which the vehicle is able to drive without assistance in all situations. The architectures, components, systems and methods described herein can function in any of the semi or fully-autonomous modes, e.g., Levels 1-5, which are referred to herein as autonomous driving modes. Thus, reference to an autonomous driving mode includes both partial (levels 1-3) and full autonomy (levels 4-5). FIG.2illustrates a block diagram200with various components and systems of an exemplary vehicle, such as passenger vehicle100or120, to operate in an autonomous driving mode. As shown, the block diagram200includes one or more computing devices202, such as computing devices containing one or more processors204, memory206and other components typically present in general purpose computing devices. The memory206stores information accessible by the one or more processors204, including instructions208and data210that may be executed or otherwise used by the processor(s)204. The computing system may control overall operation of the vehicle when operating in an autonomous driving mode. The memory206stores information accessible by the processors204, including instructions208and data210that may be executed or otherwise used by the processors204. For instance, the memory may include illumination-related information to perform, e.g., occluded vehicle detection. The memory206may be of any type capable of storing information accessible by the processor, including a computing device-readable medium. The memory is a non-transitory medium such as a hard-drive, memory card, optical disk, solid-state, etc. Systems may include different combinations of the foregoing, whereby different portions of the instructions and data are stored on different types of media. The instructions208may be any set of instructions to be executed directly (such as machine code) or indirectly (such as scripts) by the processor(s). For example, the instructions may be stored as computing device code on the computing device-readable medium. In that regard, the terms “instructions”, “modules” and “programs” may be used interchangeably herein. The instructions may be stored in object code format for direct processing by the processor, or in any other computing device language including scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. The data210, such as map (e.g., roadgraph) information, may be retrieved, stored or modified by one or more processors204in accordance with the instructions208. In one example, some or all of the memory206may be an event data recorder or other secure data storage system configured to store vehicle diagnostics and/or detected sensor data, which may be on board the vehicle or remote, depending on the implementation. 
The processors204may be any conventional processors, such as commercially available CPUs, GPUs, etc. Alternatively, each processor may be a dedicated device such as an ASIC or other hardware-based processor. AlthoughFIG.2functionally illustrates the processors, memory, and other elements of computing devices202as being within the same block, such devices may actually include multiple processors, computing devices, or memories that may or may not be stored within the same physical housing. Similarly, the memory206may be a hard drive or other storage media located in a housing different from that of the processor(s)204. Accordingly, references to a processor or computing device will be understood to include references to a collection of processors or computing devices or memories that may or may not operate in parallel. In one example, the computing devices202may form an autonomous driving computing system incorporated into vehicle100. The autonomous driving computing system may be capable of communicating with various components of the vehicle. For example, the computing devices202may be in communication with various systems of the vehicle, including a driving system including a deceleration system212(for controlling braking of the vehicle), acceleration system214(for controlling acceleration of the vehicle), steering system216(for controlling the orientation of the wheels and direction of the vehicle), signaling system218(for controlling turn signals), navigation system220(for navigating the vehicle to a location or around objects) and a positioning system222(for determining the position of the vehicle, e.g., including the vehicle's pose, e.g., position and orientation along the roadway or pitch, yaw and roll of the vehicle chassis relative to a coordinate system). The autonomous driving computing system may employ a planner/trajectory module223, in accordance with the navigation system220, the positioning system222and/or other components of the system, e.g., for determining a route from a starting point to a destination, for identifying a stop location at an intersection, for adjusting a short-term trajectory in view of a specific traffic sign, or for making modifications to various driving aspects in view of current or expected traction conditions. The computing devices202are also operatively coupled to a perception system224(for detecting objects in the vehicle's environment), a power system226(for example, a battery and/or internal combustion engine) and a transmission system230in order to control the movement, speed, etc., of the vehicle in accordance with the instructions208of memory206in an autonomous driving mode which does not require or need continuous or periodic input from a passenger of the vehicle. Some or all of the wheels/tires228are coupled to the transmission system230, and the computing devices202may be able to receive information about tire pressure, balance and other factors that may impact driving in an autonomous mode. The computing devices202may control the direction and speed of the vehicle, e.g., via the planner/trajectory module223, by causing actuation of various components. By way of example, computing devices202may navigate the vehicle to a destination location completely autonomously using data from map information and navigation system220. Computing devices202may use the positioning system222to determine the vehicle's location and the perception system224to detect and respond to objects when needed to reach the location safely. 
In order to do so, computing devices202may cause the vehicle to accelerate (e.g., by increasing fuel or other energy provided to the engine by acceleration system214), decelerate (e.g., by decreasing the fuel supplied to the engine, changing gears, and/or by applying brakes by deceleration system212), change direction (e.g., by turning the front or other wheels of vehicle100by steering system216), and signal such changes (e.g., by lighting turn signals of signaling system218). Thus, the acceleration system214and deceleration system212may be a part of a drivetrain or other type of transmission system230that includes various components between an engine of the vehicle and the wheels of the vehicle. Again, by controlling these systems, computing devices202may also control the transmission system230of the vehicle in order to maneuver the vehicle autonomously. Navigation system220may be used by computing devices202in order to determine and follow a route to a location. In this regard, the navigation system220and/or memory206may store map information, e.g., highly detailed maps that computing devices202can use to navigate or control the vehicle. While the map information may be image-based maps, the map information need not be entirely image based (for example, raster). For instance, the map information may include one or more roadgraphs, graph networks or road networks of information such as roads, lanes, intersections, and the connections between these features which may be represented by road segments. Each feature in the map may also be stored as graph data and may be associated with information such as a geographic location and whether or not it is linked to other related features, for example, signage (e.g., a stop, yield or turn sign) or road markings (e.g., stop lines or crosswalks) may be linked to a road and an intersection, etc. In some examples, the associated data may include grid-based indices of a road network to allow for efficient lookup of certain road network features. In this regard, the map information may include a plurality of graph nodes and edges representing road or lane segments that together make up the road network of the map information. In this case, each edge may be defined by a starting graph node having a specific geographic location (e.g., latitude, longitude, altitude, etc.), an ending graph node having a specific geographic location (e.g., latitude, longitude, altitude, etc.), and a direction. This direction may refer to a direction the vehicle must be moving in in order to follow the edge (i.e., a direction of traffic flow). The graph nodes may be located at fixed or variable distances. For instance, the spacing of the graph nodes may range from a few centimeters to a few meters and may correspond to the speed limit of a road on which the graph node is located. In this regard, greater speeds may correspond to greater distances between graph nodes. Thus, the maps may identify the shape and elevation of roadways, lane markers, intersections, stop lines, crosswalks, speed limits, traffic signal lights, buildings, signs, real time traffic information, vegetation, or other such objects and information. The lane markers may include features such as solid or broken double or single lane lines, solid or broken lane lines, reflectors, etc. A given lane may be associated with left and/or right lane lines or other lane markers that define the boundary of the lane. Thus, most lanes may be bounded by a left edge of one lane line and a right edge of another lane line. 
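As a rough illustration of the roadgraph representation described above, the following Python sketch models graph nodes with geographic locations, directed road or lane segments, and features linked to those segments. The class and field names are illustrative assumptions and do not reflect the actual map schema.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class GraphNode:
    """A roadgraph node with a specific geographic location (sketch)."""
    node_id: int
    latitude: float
    longitude: float
    altitude: float

@dataclass
class RoadEdge:
    """A road or lane segment defined by a starting node, an ending node and a
    direction, i.e., the direction of traffic flow along the segment."""
    start: GraphNode
    end: GraphNode
    heading_deg: float        # direction the vehicle must move in to follow the edge
    speed_limit_mph: float    # greater speeds may correspond to greater node spacing

@dataclass
class RoadGraph:
    """Minimal container for roadgraph edges and linked map features."""
    edges: List[RoadEdge] = field(default_factory=list)
    # e.g., a stop sign or crosswalk feature could be linked to the edges of the
    # road and intersection it applies to (keyed here by a feature name).
    linked_features: Dict[str, List[int]] = field(default_factory=dict)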
The perception system224includes sensors232for detecting objects external to the vehicle. The detected objects may be other vehicles, obstacles in the roadway, traffic signals, signs, road markings (e.g., crosswalks and stop lines), objects adjacent to the roadway such as sidewalks, trees or shrubbery, etc. The sensors232may also detect certain aspects of weather conditions, such as snow, rain or water spray, or puddles, ice or other materials on the roadway. By way of example only, the sensors of the perception system may include light detection and ranging (lidar) sensors, radar units, cameras (e.g., optical imaging devices, with or without a neutral-density filter (ND) filter), positioning sensors (e.g., gyroscopes, accelerometers and/or other inertial components), infrared sensors, and/or any other detection devices that record data which may be processed by computing devices202. The perception system224may also include one or more microphones or other acoustical arrays, for instance arranged along the roof pod102and/or other sensor assembly housings, as well as pressure or inertial sensors, etc. Such sensors of the perception system224may detect objects in the vehicle's external environment and their characteristics such as location, orientation (pose) relative to the roadway, size, shape, type (for instance, vehicle, pedestrian, bicyclist, etc.), heading, speed of movement relative to the vehicle, etc., as well as environmental conditions around the vehicle. The perception system224may also include other sensors within the vehicle to detect objects and conditions within the vehicle, such as in the passenger compartment. For instance, such sensors may detect, e.g., one or more persons, pets, packages, etc., as well as conditions within and/or outside the vehicle such as temperature, humidity, etc. Still further sensors232of the perception system224may measure the rate of rotation of the wheels228, an amount or a type of braking by the deceleration system212, and other factors associated with the equipment of the vehicle itself. The raw data obtained by the sensors (e.g., camera imagery, lidar point cloud data, radar return signals) can be processed by the perception system224and/or sent for further processing to the computing devices202periodically or continuously as the data is generated by the perception system224. Computing devices202may use the positioning system222to determine the vehicle's location and perception system224to detect and respond to objects and roadway information (e.g., signage or road markings) when needed to reach the location safely, such as by adjustments made by planner/trajectory module223, including adjustments in operation to deal with occlusions and other issues. As illustrated inFIGS.1A-B, certain sensors of the perception system224may be incorporated into one or more sensor assemblies or housings. In one example, these may be integrated into front, rear or side perimeter sensor assemblies around the vehicle. In another example, other sensors may be part of the roof-top housing (roof pod)102. The computing devices202may communicate with the sensor assemblies located on or otherwise distributed along the vehicle. Each assembly may have one or more types of sensors such as those described above. Returning toFIG.2, computing devices202may include all of the components normally used in connection with a computing device such as the processor and memory described above as well as a user interface subsystem234. 
The user interface subsystem234may include one or more user inputs236(e.g., a mouse, keyboard, touch screen and/or microphone) and one or more display devices238(e.g., a monitor having a screen or any other electrical device that is operable to display information). In this regard, an internal electronic display may be located within a cabin of the vehicle (not shown) and may be used by computing devices202to provide information to passengers within the vehicle. Other output devices, such as speaker(s)240may also be located within the passenger vehicle to provide information to riders, or to communicate with users or other people outside the vehicle. The vehicle may also include a communication system242. For instance, the communication system242may also include one or more wireless configurations to facilitate communication with other computing devices, such as passenger computing devices within the vehicle, computing devices external to the vehicle such as in other nearby vehicles on the roadway, and/or a remote server system. Connections may include short range communication protocols such as Bluetooth™, Bluetooth™ low energy (LE), cellular connections, as well as various configurations and protocols including the Internet, World Wide Web, intranets, virtual private networks, wide area networks, local networks, private networks using communication protocols proprietary to one or more companies, Ethernet, WiFi and HTTP, and various combinations of the foregoing. FIG.3Aillustrates a block diagram300with various components and systems of a vehicle, e.g., vehicle150ofFIGS.1C-D. By way of example, the vehicle may be a truck, farm equipment or construction equipment, configured to operate in one or more autonomous modes of operation. As shown in the block diagram300, the vehicle includes a control system of one or more computing devices, such as computing devices302containing one or more processors304, memory306and other components similar or equivalent to components202,204and206discussed above with regard toFIG.2. For instance, the data may include map-related information (e.g., roadgraphs) to perform a stop line determination. The control system may constitute an electronic control unit (ECU) of a tractor unit of a cargo vehicle. As with instructions208, the instructions308may be any set of instructions to be executed directly (such as machine code) or indirectly (such as scripts) by the processor. Similarly, the data310may be retrieved, stored or modified by one or more processors304in accordance with the instructions308. In one example, the computing devices302may form an autonomous driving computing system incorporated into vehicle150. Similar to the arrangement discussed above regardingFIG.2, the autonomous driving computing system of block diagram300may be capable of communicating with various components of the vehicle in order to perform route planning and driving operations. For example, the computing devices302may be in communication with various systems of the vehicle, such as a driving system including a deceleration system312, acceleration system314, steering system316, signaling system318, navigation system320and a positioning system322, each of which may function as discussed above regardingFIG.2. The computing devices302are also operatively coupled to a perception system324, a power system326and a transmission system330. 
Some or all of the wheels/tires228are coupled to the transmission system230, and the computing devices202may be able to receive information about tire pressure, balance, rotation rate and other factors that may impact driving in an autonomous mode. As with computing devices202, the computing devices302may control the direction and speed of the vehicle by controlling various components. By way of example, computing devices302may navigate the vehicle to a destination location completely autonomously using data from the map information and navigation system320. Computing devices302may employ a planner/trajectory module323, in conjunction with the positioning system322, the perception system324and other subsystems to detect and respond to objects when needed to reach the location safely, similar to the manner described above forFIG.2. Similar to perception system224, the perception system324also includes one or more sensors or other components such as those described above for detecting objects external to the vehicle, objects or conditions internal to the vehicle, and/or operation of certain vehicle equipment such as the wheels and deceleration system312. For instance, as indicated inFIG.3Athe perception system324includes one or more sensor assemblies332. Each sensor assembly232includes one or more sensors. In one example, the sensor assemblies332may be arranged as sensor towers integrated into the side-view mirrors on the truck, farm equipment, construction equipment or the like. Sensor assemblies332may also be positioned at different locations on the tractor unit152or on the trailer154, as noted above with regard toFIGS.1C-D. The computing devices302may communicate with the sensor assemblies located on both the tractor unit152and the trailer154. Each assembly may have one or more types of sensors such as those described above. Also shown inFIG.3Ais a coupling system334for connectivity between the tractor unit and the trailer. The coupling system334may include one or more power and/or pneumatic connections (not shown), and a fifth-wheel336at the tractor unit for connection to the kingpin at the trailer. A communication system338, equivalent to communication system242, is also shown as part of vehicle system300. Similar toFIG.2, in this example the cargo truck or other vehicle may also include a user interface subsystem339. The user interface subsystem339may be located within the cabin of the vehicle and may be used by computing devices202to provide information to passengers within the vehicle, such as a truck driver who is capable of driving the truck in a manual driving mode. FIG.3Billustrates an example block diagram340of systems of the trailer, such as trailer154ofFIGS.1C-D. As shown, the system includes a trailer ECU342of one or more computing devices, such as computing devices containing one or more processors344, memory346and other components typically present in general purpose computing devices. The memory346stores information accessible by the one or more processors344, including instructions348and data350that may be executed or otherwise used by the processor(s)344. The descriptions of the processors, memory, instructions and data fromFIGS.2and3Aapply to these elements ofFIG.3B. The trailer ECU342is configured to receive information and control signals from the tractor unit, as well as information from various trailer components. 
The on-board processors344of the ECU342may communicate with various systems of the trailer, including a deceleration system352, signaling system354, and a positioning system356. The ECU342may also be operatively coupled to a perception system358with one or more sensors arranged in sensor assemblies364for detecting objects in the trailer's environment. The ECU342may also be operatively coupled with a power system360(for example, a battery power supply) to provide power to local components. Some or all of the wheels/tires362of the trailer may be coupled to the deceleration system352, and the processors344may be able to receive information about tire pressure, balance, wheel speed and other factors that may impact driving in an autonomous mode, and to relay that information to the processing system of the tractor unit. The deceleration system352, signaling system354, positioning system356, perception system358, power system360and wheels/tires362may operate in a manner such as described above with regard toFIGS.2and3A. The trailer also includes a set of landing gear366, as well as a coupling system368. The landing gear may provide a support structure for the trailer when decoupled from the tractor unit. The coupling system368, which may be a part of coupling system334, provides connectivity between the trailer and the tractor unit. Thus, the coupling system368may include a connection section370(e.g., for communication, power and/or pneumatic links to the tractor unit). The coupling system also includes a kingpin372configured for connectivity with the fifth-wheel of the tractor unit. Example Implementations As noted above, there can be any number of reasons why it is challenging to detect and act on signs. View400ofFIG.4Aillustrates a number of examples. In particular,FIG.4Ashows a roadway402at which there is a stop sign404at the intersection. Stop line406is painted on the roadway402. The roadway402may also include lane lines408and/or “STOP” text or another graphic410indicating that vehicles should come to a stop at the intersection. In this example, a separate crosswalk412is present. A pedestrian crossing sign414is positioned beneath the stop sign404. Due to its placement, the sign414may be obscured by pedestrians walking in front of it. A no right turn sign416is also positioned near the intersection. Here, shrub418may at least partly obscure that sign from oncoming vehicles. Finally, a portable no parking sign420is placed along the curb. This sign may not comply with MUTCD standards, and thus may be hard to recognize, especially if it is placed at an angle relative to the roadway402. FIG.4Billustrates another view450, in which each sign applies to multiple lanes. Here, there are 3 northbound lanes452L,452C and452R, in which each lane must either go left, straight or have the option to go straight or right. While arrows454may be painted on the roadway, sign456indicates the direction limitation(s) for each respective lane. Similarly, westbound lanes458L and458R also have their own constraints. Here, the left lane458L must turn left, while the right lane458R can go either left or straight. These limitations are shown by arrows460painted on the roadway, as well as by sign462. For an autonomously driven vehicle, it may be hard to detect the arrows painted on the road surface due to other vehicles. It may be easier to detect the signs456and460, which may be suspended above the roadway. 
However, it can be challenging to identify the requirements for each specific lane, and how the listed turn actions correlate to the lane the vehicle is in. In order to address these and other signage situations, a pipeline architecture is provided.FIG.5Aillustrates view500of the pipeline, which employs an asynchronous, computational graph architecture. Initially, a set of sensor data for objects in the vehicle's driving environment is obtained from the perception system (e.g., perception system224ofFIG.2or perception system324ofFIG.3A). As shown, the set of sensor data includes camera imagery502, lidar depth information504, lidar intensity information506and lidar height (elevation) information508. The camera imagery may come from one or more cameras or other imaging devices disposed along the vehicle. The lidar information may come from lidar point cloud data obtained by one or more lidar units disposed along the vehicle. In some instances, imagery from one camera is processed as stand-alone imagery. In contrast, in other instances, imagery from multiple cameras of the perception system may be fused or otherwise integrated for processing. Some sensor information, e.g., secondary lidar returns, may be discarded prior to processing. Information from other sensors may also be utilized to augment the evaluation process. At block510, the input sensor data (e.g., each of502-508) is received by a generic sign detector module. Employing a separate detector for every sign type is computationally inefficient and not scalable, since there are hundreds of sign types and adding a new sign type can require deploying an entirely new model. In addition, labels for each sign type may be independently collected through different labeling frameworks and policies, which further complicates an approach that employs separate detectors. Thus, according to aspects of the technology, the generic detection approach results in detections for signs even if the sign type is not yet supported by the vehicle operating in the autonomous driving mode. This can provide useful information even without knowing the sign type. For instance, the density of signs can indicate a construction zone, or a large intersection or a highway interchange where there are many lanes that have different turning rules, weight limits, etc. Knowing that signs are present can enable the vehicle to request remote assistance to understand signs with interesting properties (e.g., a sign located where no sign is expected to be, a sign with a non-standard color and/or shape, or other interesting properties). The system can have different operating points for different applications (e.g., high recall to feed into the classifiers, since the classifiers can filter out false positives (and false negatives), and another high precision operating point for other downstream applications such as segmentation). For instance, a machine learning detector has many possible operating points, each with a corresponding recall and precision. Recall equals the percentage of true positive objects that the detector detects while precision equals the percentage of detected objects which are true positives. Since the detected output is fed to downstream classifiers, these can serve to filter out false positives (detected objects which are not really signs). 
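The trade-off between operating points can be illustrated with a short Python sketch. It computes precision and recall as defined above and then selects, from a set of evaluated operating points, either a recall-oriented point for feeding the downstream classifiers or a precision-oriented point for other applications; the selection rule and the example numbers are illustrative assumptions.

def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN), as described above."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def pick_operating_point(points, min_recall=None, min_precision=None):
    """Choose a detector operating point from evaluated candidates.

    points: list of (threshold, precision, recall) tuples, e.g. measured on a
    validation set. A high-recall point can feed the downstream classifiers
    (which filter out false positives), while a high-precision point can serve
    other applications such as segmentation.
    """
    candidates = [p for p in points
                  if (min_recall is None or p[2] >= min_recall)
                  and (min_precision is None or p[1] >= min_precision)]
    if not candidates:
        return None
    # Among qualifying points, prefer the best precision/recall balance.
    return max(candidates, key=lambda p: p[1] * p[2])

# Example: one recall-oriented and one precision-oriented choice.
ops = [(0.2, 0.55, 0.97), (0.5, 0.80, 0.90), (0.8, 0.95, 0.70)]
print(pick_operating_point(ops, min_recall=0.95))    # (0.2, 0.55, 0.97)
print(pick_operating_point(ops, min_precision=0.9))  # (0.8, 0.95, 0.70)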
However, if other downstream applications need to use the raw generic sign detection output, in that situation a higher precision operating point may be employed, which does not result in too many false positive detections (e.g., false positives that exceed some threshold). The input to the detector is the entire camera image, while the input to classifiers is the detected patch (the portion of the image where the detector thinks there's a sign). Thus, another benefit to the generic detector approach is that it permits the system to train the detector less often, while retraining classifiers more often as new signs are surfaced. In addition, this approach provides an extensible system because splitting detection and classification makes the addition of new sign types easier. For example, this should only necessitate retraining the classifier(s) on image patches, but should not require retraining the detector. Also, the system can predict rich attributes as additional heads of the detector and benefit from the entire camera context as opposed to a camera patch, which for example can help with predicting sign placement (e.g., where in the scene the sign is located, and whether it is handheld, temporary or permanent, etc.). Here, some attributes such as sign placement require more context than just the patch. Consider a stop sign, which could be handheld (e.g., by a crossing guard or construction worker), on a school bus, on a permanent post, or on a temporary fixture such as a barricade or a cone. By only looking at the sign patch, it may be difficult or impossible to infer what kind of fixture to which the stop sign is attached. However, the full camera image can provide enough context to predict that. Multi-task learning has also proven to improve the performance across tasks. Thus, a neural network trained to predict sign attributes on top of the regular detection task can outperform one that does not predict attributes on the original detection problem. In view of this, one aspect of the generic sign detector module is to identify the presence of any signs in the vicinity of the vehicle. Another aspect of the module is to predict sign properties such as background color (e.g., white/black, white/red, red, yellow, green, blue, etc.), shape (e.g., rectangle, octagon, etc.), placement, depth, and heading. In particular, this module is used to detect any signs, irrespective of type (e.g., stop sign, speed limit sign, etc.). At an initial detection stage, the system may generate and store (and/or output) a set of details regarding the detected objects, the camera model, and a timestamp with the camera readout time. The set of details can include one or more of the following: (i) depth information (e.g., linear distance between the camera and the object), (ii) sign properties (e.g., sign type, confidence value for the sign type, placement (e.g., permanent, portable, handheld, on a school bus, on another vehicle type, unknown), etc.), (iii) the location of the detected object in the image frame, (iv) background color (e.g., white or black, red, yellow, orange, unknown), (v) speed limit sign properties (e.g., the speed limit value of the sign in miles per hour or kilometers per hour, a speed limit sign history of, e.g., the last observed speed limit sign, etc.) Other details may include, by way of example, sign shape and/or sign content. A unique identifier may be associated with the set of details for each detected object. 
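The set of details generated at the initial detection stage might be represented as in the following Python sketch. The field names, types, and example values (taken loosely from sign 552 ofFIG.5B) are illustrative assumptions rather than the actual onboard data structure.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SignDetectionDetails:
    """Per-object details produced by the generic sign detector (sketch)."""
    object_id: str                            # unique identifier for the detection
    depth_m: float                            # linear distance between camera and object
    image_box: Tuple[int, int, int, int]      # location of the object in the image frame
    placement: str                            # e.g., "permanent", "portable", "handheld"
    placement_score: float                    # prediction score for that placement
    background_color: str                     # e.g., "white_or_black", "red", "orange"
    shape: Optional[str] = None               # e.g., "rectangle", "octagon", "diamond"
    sign_type: Optional[str] = None           # filled in later by the classifiers
    sign_type_confidence: Optional[float] = None
    speed_limit_value: Optional[int] = None   # mph or km/h value for speed limit signs
    camera_timestamp: Optional[float] = None  # camera readout time

# Example roughly corresponding to the UTILITY WORK AHEAD sign 552 of FIG. 5B:
detail = SignDetectionDetails(object_id="sign-552", depth_m=27.0,
                              image_box=(410, 120, 470, 180), placement="temporary",
                              placement_score=0.9, background_color="orange",
                              shape="diamond")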
Each sign placement may be assigned its own prediction score for how likely that placement is to be correct (e.g., a percentage value between 0-100%, a ranking of 1, 2 or 3, or some other score type). Similarly, the background color may or may not include a prediction score or other ranking on the likelihood for a given color. And the sign shape may or may not be associated with a confidence value. FIG.5Bshows an exemplary scenario540for generic sign detection, in which a vehicle542is approaching a block that has buildings including a pizza parlor544, a post office546and a hair salon548. As shown, there is a NO RIGHT TURN sign550at the corner, and a UTILITY WORK AHEAD sign552on the sidewalk. The dashed boxes around the signs indicate that they have been detected in the received imagery (e.g., via return signals indicated by the dash-dot lines from the boxes to the sensor module on the roof of the vehicle). In this scenario, from the input sensor data the generic sign detector module may identify the sign550as being a white rectangle permanent fixture, which is 53 meters from the vehicle and at a 24° angle. It may also identify the sign552as being an orange diamond temporary fixture 27 meters from the vehicle and at a 14° angle. By way of example only, the sign550may be determined to be permanent due to the single central pole contacting the ground, while the sign552may be determined to be temporary due to the identification of a set of legs extending from the base of the sign support. Following the initial detection stage, once the system generates the set of details regarding the detected objects, the generic sign detector module performs a sign dispatching operation. In particular, the generic sign detector module takes in detections and corresponding attributes from the detection stage discussed above, and routes these detections to relevant classifiers in block512ofFIG.5A. For example, a detection deemed to have a red background can be routed to a stop sign classifier514but not to a speed limit sign classifier516, a yellow and orange sign classifier518, or a white regulatory sign classifier520. Here, it may also route to other classifiers522and/or to a text and symbol detector524. In another example, the text and symbol detector524may comprise separate detectors for text and symbols. This approach can significantly help with resource management in order to avoid having too many classifiers running at the same time on the same detections. Thus, using the NO RIGHT TURN sign550ofFIG.5B, in example560ofFIG.5C, the generic sign detector510may pass the sign's information on to the stop sign classifier514, the white regulatory sign classifier520, and the text and symbol detector524. In contrast, for the UTILITY WORK AHEAD sign552ofFIG.5B, in example580ofFIG.5D, the generic sign detector510may pass the sign's information on to the yellow and orange sign classifier518, another classifier522(e.g., a construction warning classifier), and the text and symbol detector524. In addition to routing the detections to various classifiers, the dispatcher stage of operation by the generic sign detector is responsible for creating a batched input from the image patch detections. This involves cropping a region around each detected sign (as specified by the config file) and batching the various detections into one input which will then go to the sign type classifier(s). The output of the dispatcher operation comprises image patches with corresponding object IDs.
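For illustration only, one way such a dispatcher might be structured is sketched below in Python. The attribute names, routing rules, crop margin and classifier names are assumptions, not details of the described system; the sketch simply shows routing by predicted attributes followed by batching of cropped patches per classifier.

# Minimal sketch (assumed attribute names and routing rules) of the dispatcher
# stage: route each detection to relevant classifiers based on its predicted
# attributes, crop a patch around it, and batch the patches by classifier.
from typing import Dict, List

def route(detection: Dict) -> List[str]:
    """Pick classifiers for one detection; the rules here are illustrative only."""
    targets = ["text_and_symbol_detector"]
    color = detection.get("background_color", "unknown")
    if color == "red":
        targets.append("stop_sign_classifier")
    if color in ("white", "white/black"):
        targets += ["speed_limit_classifier", "white_regulatory_classifier"]
    if color in ("yellow", "orange"):
        targets.append("yellow_orange_classifier")
    return targets

def dispatch(image, detections: List[Dict], margin: int = 8) -> Dict[str, List[Dict]]:
    """Crop a region around each detected sign and batch the crops per classifier.
    `image` is assumed to be a NumPy-style array indexed [row, col]."""
    batches: Dict[str, List[Dict]] = {}
    for det in detections:
        x0, y0, x1, y1 = det["bbox"]
        patch = image[max(0, y0 - margin):y1 + margin, max(0, x0 - margin):x1 + margin]
        for name in route(det):
            batches.setdefault(name, []).append({"object_id": det["object_id"], "patch": patch})
    return batches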
In one scenario, the output is a set of patches from one image, taken by one camera, where the generic sign detector indicated there could be a sign. For instance, the system may crop all the regions in a given image where the generic sign detector found a possible sign. This allows the system to trace a particular detection back to the corresponding imagery obtained by the perception system. Every classifier in block512that receives an input from the dispatcher from the generic sign detector block runs its underlying deep neural network, e.g., a convolutional neural network (CNN), on the given input. The output of the sign classification stage is a mapping from object ID to the predicted scores over the classifier's classes. For example, speed limit sign classifier516may output predicted scores over the following classes:
Class 0: 15 mph
Class 1: 20 mph
Class 2: 25 mph
Class 3: 30 mph
Class 4: 35 mph
Class 5: 40 mph
Class 6: 45 mph
Class 7: 50 mph
Class 8: Other speed limit
Class 9: Not a speed limit
In this particular example, for every object ID, the speed limit sign classifier516would output 10 predicted scores (i.e., one for each class). The text and symbol detector524detects individual components from a fixed vocabulary of keywords and symbols. For instance, as shown in example600ofFIG.6A, the detector identifies the words “Work” and “Ahead”, which may be accounted for by the system (e.g., the planner/trajectory module) to adjust the vehicle's speed and/or to change lanes from a prior planned path. This separate detector is particularly helpful for long-tail cases and rare examples. For instance, as shown in the upper half of example620inFIG.6B, there are many different ways to indicate no turn on red. And as shown in the lower half of this example, the text and symbol detector is able to parse out both text and symbols from different signs to arrive at a determination of “No Right Turn on Red”. Returning toFIG.5A, after the classifiers and text/symbol detector in block512operate on the information for the detected sign(s), the results of those operations are sent to a sign type annotator block526. Given the classifications from all sign type classifiers (as well as information from the text and symbol detector), the sign type annotator is responsible for creating an annotation regarding the particular type of sign it is. If an object is only classified by one classifier, the procedure is straightforward, since the object would be labeled as being of the type of that classifier. Thus, as shown in example700ofFIG.7A, if a stop sign was classified only by the stop sign classifier, with the text detected as “STOP”, then the annotation would be “Stop Sign”. However, as shown in example720ofFIG.7B, if an object is classified by multiple classifiers (e.g., a white regulatory sign classifier and a turn restriction classifier), then merging the two classification results can be more complicated. Here, the information from the text and symbol detector (e.g., “ONLY” and “ONLY” as the two recognized words, and multiple turning arrows as the symbols) can be used in conjunction with the classifications from the white regulatory sign classifier and the turn restriction classifier to annotate it as a turn sign for multiple lanes. In one scenario, the system may retain the history of all predicted sign types over a track (e.g., a given period of time along a particular section of roadway), in order to avoid one-frame misclassifications.
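By way of illustration only, the following Python sketch shows the shape of the classification-stage output, a mapping from object ID to predicted scores over the classifier's classes, here for a speed limit sign classifier with the ten classes listed above. The model interface and stand-in model are assumptions for illustration.

# Minimal sketch (assumed class list and model interface) of the classification
# stage output for a speed limit sign classifier.
from typing import Callable, Dict, List, Sequence

SPEED_LIMIT_CLASSES = ["15 mph", "20 mph", "25 mph", "30 mph", "35 mph",
                       "40 mph", "45 mph", "50 mph", "other speed limit",
                       "not a speed limit"]

def classify_batch(model: Callable[[Sequence], Sequence[Sequence[float]]],
                   batch: List[Dict]) -> Dict[str, Dict[str, float]]:
    """Run the underlying network on the batched patches and key the scores by object ID."""
    patches = [item["patch"] for item in batch]
    scores = model(patches)  # one score vector (length 10) per patch
    return {item["object_id"]: dict(zip(SPEED_LIMIT_CLASSES, vec))
            for item, vec in zip(batch, scores)}

# Example with a stand-in model that returns uniform scores:
fake_model = lambda patches: [[0.1] * len(SPEED_LIMIT_CLASSES) for _ in patches]
print(classify_batch(fake_model, [{"object_id": "abc123", "patch": None}]))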
This history can be used to get rid of most inconsistencies in the classification results. Any remaining inconsistencies after considering the text/symbol detector information and the history data can be resolved via a priority list for signage. By way of example, if both the stop sign and speed limit sign classification scores are above their respective thresholds, indicating that the sign could be both a stop sign and a speed limit sign, the system may select the stop sign as the proper classification because that type of sign has more critical behavioral implications for vehicle operation. In addition, if permanent signs are present, then once signs are added to the map (e.g., as updates to the roadgraph data) the system can use this information as a priori data. Here, for instance, the system could use such data to prefer predictions that are consistent with the map. In one scenario, if separate detectors were employed, then every supported sign type could be published on the vehicle's internal communication bus (e.g., a Controller Area Network (CAN) bus or a FlexRay bus) by the respective detector as an object with its own type (e.g., a potential stop sign or a potential slow sign). However, because the pipelined approach discussed herein has one generic sign detector with multiple classifiers, the detector can publish sign-related objects, and each classifier has the ability to modify these objects by adding type information. Thus, sign types can be treated as modifiable attributes. This will allow the system to avoid one-off misclassification mistakes, and keep richer history and information about sign type prediction, which for example can in turn allow us to correct a misclassification that happened at a first distance once the vehicle is closer to the sign and the perception system has a clearer view of it. Upon performing any annotation, the system may then further evaluate and process the sign-related data.FIG.8illustrates one example800. For instance, as shown and in accordance with the discussion ofFIG.5A, sensor information from block802is used in generic sign detection at block804. The output from the generic sign detection is selectively provided to one or more of the classifiers, and to a text/symbol detection module, which are in block806. The results from block806are then annotated with a (likely) sign type at block808. Next, the system may perform sign localization at block810and/or sign-object association at block812. While shown in series, these may be performed in parallel or in the opposite order. These operations may include revising or otherwise modifying the sign annotations. Localization involves identifying where in the real world the sign is, since this may impact driving decisions made by the vehicle. This can include combining lidar inputs projected to the image views to understand where the sign is in the vehicle's surrounding environment. In particular, the system estimates the sign's position in the3D world by estimating its coordinates in a global coordinate system. This can be done using a combination of approaches including the depth prediction from the sign detection stage and using elevation map data. Alternatively or additionally, this can also include using other prior knowledge about the sign type and the sizes it can exist in (e.g., a permanent stop sign may only have a few permissible physical sizes), and fusing context information from the roadgraph or other objects in the vehicle's environment. 
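A minimal Python sketch of this idea follows, assuming a hypothetical priority ordering and track length: per-frame type predictions are accumulated per track, one-frame misclassifications are outvoted by the history, and any remaining tie is resolved by the safety-priority list.

# Minimal sketch (assumed priority ordering and track length) of resolving the
# sign type over a track using prediction history plus a priority list.
from collections import Counter, deque
from typing import Deque, Optional

# Higher-priority types have stronger behavioral implications for the vehicle.
PRIORITY = ["stop", "yield", "do_not_enter", "speed_limit", "other"]

class SignTypeTrack:
    def __init__(self, max_frames: int = 30):
        self.history: Deque[str] = deque(maxlen=max_frames)

    def add_frame_prediction(self, sign_type: str) -> None:
        self.history.append(sign_type)

    def resolved_type(self) -> Optional[str]:
        if not self.history:
            return None
        counts = Counter(self.history).most_common()
        top_count = counts[0][1]
        tied = [t for t, c in counts if c == top_count]
        # One-frame misclassifications are outvoted; ties resolve by priority.
        return min(tied, key=lambda t: PRIORITY.index(t) if t in PRIORITY else len(PRIORITY))

track = SignTypeTrack()
for t in ["speed_limit", "stop", "stop", "stop"]:
    track.add_frame_prediction(t)
print(track.resolved_type())  # "stop"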
The localization information can be added to the existing information about the sign. Sign-object association associates the sign with other objects in the environment. This includes associating signs with existing mapped signs, and for unmapped signs with other objects that hold them. For instance, if a sign is already in the map, the detected sign may be marked as a duplicate. If it is not a duplicate, the system can react to the new sign, including modifying a current driving operation, updating the onboard map and/or notifying a back-end service about the new sign. The sign-object association at block812can also associate the sign with other detections from other models. This can include a pedestrian detection model, where there may be a construction worker, police officer or a crossing guard holding a stop sign. It could also include a vehicle detection model, such as identifying whether another vehicle is a school bus, a construction vehicle, an emergency vehicle, etc. By way of example,FIG.9Aillustrates a scene900where the system may detect a first barricade902and a ROAD CLOSED sign904, and a second barricade906and a DO NOT ENTER sign908. Here, the system may associate the ROAD CLOSED sign with the first barricade and the DO NOT ENTER sign with the second barricade. As this information may indicate that there is ongoing construction along the roadway, the vehicle's map may be updated accordingly and a notification may be sent to a back-end system, for instance so that other vehicles may be notified of the road closure. FIG.9Billustrates another scene910, in which the system may detect a STOP sign912in the roadway and a construction sign914adjacent to the roadway. The construction sign may be determined to be a temporary sign due to its placement on the side of the road and/or due to the recognition of a set of legs extending from the base of the sign support. In this scene, the pedestrian detection model may identify a person916as a construction worker (e.g., due to a determination that the person is wearing a hard hat or a reflective vest). The system may recognize that the stop sign is adjacent to and being held by the construction worker. In this situation, the system may react to the stop sign by modifying the planned driving trajectory in order to come to a stop. FIG.9Cillustrates yet another scene920, in which the sign pipeline of the system detects stop sign922and a vehicle model determines that the adjacent vehicle924is a school bus. This may be done based on the overall shape of the vehicle, its color (e.g., yellow), text926(e.g., “SCHOOL BUS” or “REGIONAL DISTRICT #4”) and/or other indicia along the vehicle (e.g., the presence of red or yellow flashing lights). Here, once the system determines the presence of a stop sign associated with a school bus, and that the sign is extended and not retracted, the planner/trajectory module may cause the vehicle to come to a stop. There may be situations where a sign is detected but due to the association with another object, the system determines there is no need to react to the sign. For instance,FIG.9Dillustrates a scene930where there is a road with two lanes,932L and932R, and a vehicle934in the left lane932L. Here, the sign pipeline system detects a set of signs936which have instructions for other vehicles to keep right.
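For illustration only, the duplicate check against mapped signs might look like the following Python sketch; the distance threshold, coordinate fields and matching rule are assumptions rather than details of the described system.

# Minimal sketch (assumed distance threshold and record fields) of sign-object
# association against the map: a detected sign is marked as a duplicate if a
# mapped sign of the same type lies within a small radius of its estimated position.
import math
from typing import Dict, List, Optional

def associate_with_map(detected: Dict, mapped_signs: List[Dict],
                       radius_m: float = 3.0) -> Optional[Dict]:
    """Return the mapped sign this detection duplicates, or None if it is new."""
    for mapped in mapped_signs:
        dx = detected["x"] - mapped["x"]
        dy = detected["y"] - mapped["y"]
        if mapped["sign_type"] == detected["sign_type"] and math.hypot(dx, dy) <= radius_m:
            return mapped
    return None

roadgraph_signs = [{"sign_type": "stop", "x": 100.0, "y": 50.0}]
new_detection = {"sign_type": "stop", "x": 101.2, "y": 49.5}
duplicate = associate_with_map(new_detection, roadgraph_signs)
print("duplicate of mapped sign" if duplicate else "new sign: react, update map, notify back end")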
However, because the system associates the set of signs with the vehicle, which may include determining that the signs are loaded onto the rear of the vehicle, it may be determined (e.g., by the planner/trajectory module) that there is no need to move into the right lane932R or otherwise alter the current trajectory. Returning toFIG.8, once annotation is complete and any subsequent processing including localization or object association has been performed with corresponding modifications to the annotations, the information about the detected signs is published by the system on the vehicle's internal communication bus. At this point, various onboard systems, such as the planner/trajectory module, may use the annotated sign information to make decisions related to autonomous driving. Sign-related information, including the observed presence of a new sign not on a map, a sign that the pipeline was unable to classify, or an interesting feature of a sign (e.g., a non-standard color or shape), can be transmitted to a back-end system for evaluation or further processing. For instance, offboard processing may be performed for one or more of the classifiers. In one scenario, a back-end system may perform fleet management operations for multiple autonomous vehicles, and may be capable of real time direct communication with some or all of the autonomous vehicles in the fleet. The back-end system may have more processing resources available to it than individual vehicles. Thus, in some situations the back-end system may be able to quickly perform the processing for road sign evaluation in real time, and relay that information to the vehicle so that it may modify its planned driving (e.g., stopping) operations accordingly. The back-end system may also use the received sign information to train new sign classifiers or to update existing sign classifiers, as well as to train the generic sign detector. In some examples, machine learning models for sign classifiers, which may include neural networks, can be trained on sign information, map data and/or additional human labeled data. The training may be based on gathered real-world data (e.g., that is labeled according to road environment, intersection type, signage such as stop or yield signs, etc.). From this, one or more models may be developed and used in real-time evaluation by the autonomous vehicles, after the fact (e.g., post-processing) evaluation by the back-end system, or both. By way of example, the model structure may be a deep net, where the exact structure and parameters can be searched through automated machine learning, e.g., using a Neural Architecture Search (NAS) type model. Based on this, the onboard system (e.g., planner/trajectory module and/or navigation system of the vehicle's autonomous driving system) can utilize the model(s) in the parallel architecture approach discussed herein. By way of example, a model may take the characteristics of a traffic sign and outputs a traffic sign type. The model may be for a specific type of sign, such that different models are used for different classifiers (e.g., sign classifiers514-522ofFIG.5A). As noted above, traffic sign types may include regulatory, warning, guide, services, recreation, construction, school zone, etc. In some instances, certain signs such as stop signs or railroad crossing signs may be considered sign types. 
In order to be able to use the model(s) to classify traffic sign types, the model(s) may first be trained “offline,” that is, ahead of time and/or at a remote computing device and thereafter sent to the vehicle via a network or otherwise downloaded to the vehicle. One or more server computing devices may generate the model parameter values by first retrieving training data from a storage system. For instance, the one or more server computing devices may retrieve a set of imagery. The imagery may include camera images corresponding to locations where traffic signs are likely to be visible, such as images that are a predetermined distance from and oriented towards known traffic signs. For instance, images captured by cameras or other sensors mounted on vehicles, such as vehicle100,120or150, where the cameras are within a certain distance of a traffic sign and are oriented towards the traffic sign, may be retrieved and/or included in the set. The camera image may be processed and used to generate initial training data for the model. As noted above, the imagery may be associated with information identifying the location and orientation at which the image was captured. Initial training data for the model may be generated from imagery in various ways. For instance, human operators may label images of traffic signs as well as the type of traffic sign by reviewing the images, drawing bounding boxes around traffic signs, and identifying the types of traffic signs. In addition or alternatively, existing models or image processing techniques may be used to label images of traffic signs as well as the type of traffic sign. Given an image of a traffic sign, which may be considered a training input, and a label indicating the type of traffic sign, which may be considered a training output, the model for a given classifier may be trained to output the type of traffic sign found in a captured image. In other words, the training input and training output are used to train the model on what input it will be getting and what output it is to generate. As an example, the model may receive images containing signs, such as shown in the dashed boxes inFIG.5B. The model may also receive labels indicating the type of sign each image shows including “regulatory sign”, “construction sign”, etc. In some instances, the type of sign may be specific, such as “no right turn sign” and “utility work ahead”. Based on this training data, the model may learn to identify similar traffic signs. In this regard, the training may increase the precision of the model such that the more training data (input and output) used to train the model, the greater the precision of the model at identifying sign types. In some instances, the model may be configured to provide additional labels indicative of the content of the sign. In this regard, during the training of the machine learning models, the training data may include labels corresponding to the attributes of the traffic signs. For instance, labels indicative of the attributes of a service sign including “rectangular shape,” “blue color,” and “text” stating “rest area next right”, may be input into the machine learning model along with a label indicating the sign type as a service sign. As such, when the training model is run on an image of the service sign and the label, the model may learn that the sign is a service sign indicating a rest area ahead.
Based on this determination, the model may learn that other signs which include attributes such as a “rectangular shape,” “blue color,” and “text” stating “rest area next right” may also be service signs. Once the model for a given classifier is trained, it may be sent or otherwise loaded into the memory of a computing system of an autonomous vehicle for use, such as memory of vehicle100,120or150. For example, as a vehicle drives around, that vehicle's perception system may capture sensor data of its surroundings. This sensor data, including any images including traffic signs, may be periodically, or continuously, sent to the back-end system to be used as input into the model. The model may then provide a corresponding sign type for each traffic sign in the images. For example, a vehicle may capture an image containing sign550and/or552as shown inFIG.5B. The model may output a label indicating the sign type is a regulatory or construction sign. In some instances, the model may also provide the specific type of sign. For example, the model may output “warning sign” and “railroad crossing ahead” sign types. The provided sign type and attributes may then be used to determine how to control the vehicle in order to respond appropriately to the detected signs as described herein. Labels annotated by humans comprise bounding boxes of where there are signs in an image, along with a sign type annotation (e.g., stop sign, yield sign, etc.), as well as attributes, including but not limited to color (e.g., red, green, orange, white, etc.), placement (handheld, permanent, temporary, school bus), content (text, figures, etc.), depth, etc. The detector is trained by feeding it full images with the bounding boxes and the attribute annotations. The detector will learn to predict bounding boxes as well as the extra attributes such as color and shape. To train a classifier, the detector is run to obtain detected signs. Those detections are joined with the labels. If a detected sign overlaps significantly with a given label, then the sign type of that label is assigned to it (e.g., stop sign). If the detected sign does not overlap significantly with that label, then the system deems it as not being a sign. The patch is then cropped around the detection, and so the system has image patches plus their labels as input to the training model. For a given classifier, the system only keeps the classes that that classifier predicts (e.g., all speed limits) and marks everything else as “unknown”. One example of a back-end system for fleet-type operation is shown inFIGS.10A and10B. In particular,FIGS.10A and10Bare pictorial and functional diagrams, respectively, of an example system1000that includes a plurality of computing devices1002,1004,1006,1008and a storage system1010connected via a network1016. System1000also includes vehicles1012and1014configured to operate in an autonomous driving mode, which may be configured the same as or similarly to vehicles100and150ofFIGS.1A-Band1C-D, respectively. Vehicles1012and/or vehicles1014may be parts of one or more fleets of vehicles that provide rides for passengers or deliver packages, groceries, cargo or other items to customers. Although only a few vehicles and computing devices are depicted for simplicity, a typical system may include significantly more. As shown inFIG.10B, each of computing devices1002,1004,1006and1008may include one or more processors, memory, data and instructions. 
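A minimal Python sketch of the label-joining step described above follows, assuming an illustrative IoU threshold and box format: each detection that overlaps a human label sufficiently inherits that label's sign type, non-overlapping detections are deemed not to be signs, and types outside a given classifier's vocabulary are marked "unknown".

# Minimal sketch (assumed IoU threshold and box format) of joining detector output
# with human labels to build classifier training examples.
from typing import Dict, List, Tuple

Box = Tuple[int, int, int, int]  # (x0, y0, x1, y1)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def label_detections(detections: List[Dict], labels: List[Dict],
                     classifier_classes: set, iou_threshold: float = 0.5) -> List[Dict]:
    """Assign each detection the overlapping label's type, else 'not_a_sign';
    classes outside this classifier's vocabulary become 'unknown'."""
    examples = []
    for det in detections:
        matched = [lab for lab in labels if iou(det["bbox"], lab["bbox"]) >= iou_threshold]
        sign_type = matched[0]["sign_type"] if matched else "not_a_sign"
        if sign_type not in classifier_classes and sign_type != "not_a_sign":
            sign_type = "unknown"
        examples.append({"bbox": det["bbox"], "label": sign_type})
    return examples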
Such processors, memories, data and instructions may be configured similarly to the ones described above with regard toFIG.2or3A. The various computing devices and vehicles may communicate directly or indirectly via one or more networks, such as network1016. The network1016, and intervening nodes, may include various configurations and protocols including short range communication protocols such as Bluetooth™, Bluetooth LE™, the Internet, World Wide Web, intranets, virtual private networks, wide area networks, local networks, private networks using communication protocols proprietary to one or more companies, Ethernet, WiFi and HTTP, and various combinations of the foregoing. Such communication may be facilitated by any device capable of transmitting data to and from other computing devices, such as modems and wireless interfaces. In one example, computing device1002may include one or more server computing devices having a plurality of computing devices, e.g., a load balanced server farm, that exchange information with different nodes of a network for the purpose of receiving, processing and transmitting the data to and from other computing devices. For instance, computing device1002may include one or more server computing devices that are capable of communicating with the computing devices of vehicles1012and/or1014, as well as computing devices1004,1006and1008via the network1016. For example, vehicles1012and/or1014may be a part of a fleet of autonomous vehicles that can be dispatched by a server computing device to various locations. In this regard, the computing device1002may function as a dispatching server computing system which can be used to dispatch vehicles to different locations in order to pick up and drop off passengers or to pick up and deliver cargo or other items. In addition, server computing device1002may use network1016to transmit and present information to a user of one of the other computing devices or a passenger of a vehicle. In this regard, computing devices1004,1006and1008may be considered client computing devices. As shown inFIGS.10A-Beach client computing device1004,1006and1008may be a personal computing device intended for use by a respective user1018, and have all of the components normally used in connection with a personal computing device including a one or more processors (e.g., a central processing unit (CPU), graphics processing unit (GPU) and/or tensor processing unit (TPU)), memory (e.g., RAM and internal hard drives) storing data and instructions, a display (e.g., a monitor having a screen, a touch-screen, a projector, a television, or other device such as a smart watch display that is operable to display information), and user input devices (e.g., a mouse, keyboard, touchscreen or microphone). The client computing devices may also include a camera for recording video streams, speakers, a network interface device, and all of the components used for connecting these elements to one another. Although the client computing devices may each comprise a full-sized personal computing device, they may alternatively comprise mobile computing devices capable of wirelessly exchanging data with a server over a network such as the Internet. By way of example only, client computing devices1006and1008may be mobile phones or devices such as a wireless-enabled PDA, a tablet PC, a wearable computing device (e.g., a smartwatch), or a netbook that is capable of obtaining information via the Internet or other networks. 
In some examples, client computing device1004may be a remote assistance workstation used by an administrator or operator to communicate with riders of dispatched vehicles. Although only a single remote assistance workstation1004is shown inFIGS.10A-B, any number of such workstations may be included in a given system. Moreover, although the operations workstation is depicted as a desktop-type computer, operations workstations may include various types of personal computing devices such as laptops, netbooks, tablet computers, etc. By way of example, the remote assistance workstation may be used by a technician or other user to help process sign-related information, including labeling of different types of signs. Storage system1010can be of any type of computerized storage capable of storing information accessible by the server computing devices1002, such as a hard-drive, memory card, ROM, RAM, DVD, CD-ROM, flash drive and/or tape drive. In addition, storage system1010may include a distributed storage system where data is stored on a plurality of different storage devices which may be physically located at the same or different geographic locations. Storage system1010may be connected to the computing devices via the network1016as shown inFIGS.10A-B, and/or may be directly connected to or incorporated into any of the computing devices. Storage system1010may store various types of information. For instance, the storage system1010may store autonomous vehicle control software which is to be used by vehicles, such as vehicles1012or1014, to operate such vehicles in an autonomous driving mode. Storage system1010may also store one or more models and data for training the models, such as imagery, parameter values for the model, and a data structure of, e.g., labeled sign attributes. The storage system1010may also store a training subsystem to train the model(s), as well as resultant information such as trained classifiers, the generic sign detector, and the text and symbol detector. The trained classifiers and detectors may be shared with specific vehicles or across the fleet as needed. They may be updated in real time, periodically, or off-line as additional sign-related information is obtained. The storage system1010can also include route information, weather information, etc. This information may be shared with the vehicles1012and1014, for instance to help with operating the vehicles in an autonomous driving mode. FIG.11illustrates a flow diagram1100according to one aspect of the technology, which provides a method of controlling a vehicle operating in an autonomous driving mode. At block1102, the method includes receiving, by one or more sensors of a perception system of the vehicle, sensor data associated with objects in an external environment of the vehicle, the sensor data including camera imagery and lidar data. At block1104, one or more processors of a computing system of the vehicle apply a generic sign detector to the sensor data to identify whether one or more road signs are present in an external environment of the vehicle. At block1106, the method includes identifying, by the one or more processors according to the generic sign detector, that a road sign is present in the external environment of the vehicle. At block1108, properties of the road sign are predicted according to the generic sign detector.
At block1110, the method includes routing, based on the predicted properties of the road sign, an image of the road sign to one or more selected sign classifiers of a group of sign classifiers to perform a sign type specific evaluation of the image. At block1112, the image of the road sign is also routed to a text and symbol detector to identify any text or symbols in the image. At block1114, the method includes annotating a sign type to the road sign based on (i) classification results from the sign type specific evaluation by each selected sign classifier and (ii) any text or symbol information identified by the text and symbol detector. And at block1116, the method includes determining, based on annotating the sign type, whether to cause the vehicle to perform a driving action in the autonomous driving mode. Although the technology herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present technology. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of the present technology as defined by the appended claims. | 73,680 |
11861916 | DETAILED DESCRIPTION In this description, references to “an embodiment”, “one embodiment” or the like, mean that the particular feature, function, structure or characteristic being described is included in at least one embodiment of the technique introduced here. Occurrences of such phrases in this specification do not necessarily all refer to the same embodiment. On the other hand, the embodiments referred to also are not necessarily mutually exclusive. I. Overview A driver monitoring system that can automatically detect gradual decrease in driver alertness and warn the driver early enough is an important step in reducing accidents on the roadway. Introduced here, therefore, is a camera-based driver monitoring system (DMS) that continuously monitors the head and eye movements of the driver and tracks the attention level of the driver. When the attention level progressively decreases with time, the DMS introduced here (hereinafter also called “the system”) can interact with the driver to improve the driver's attention level. The system continuously captures images of the driver and applies computer vision and machine learning techniques to captured images to extract facial and eye features that can be used to track changes in the head and eye movement. Implementing such a camera-based DMS that is cost effective and suitable for real time driving environment has many challenges. Some of the significant challenges are the impact of harsh lighting conditions, vibration due to vehicle movement and natural driver body movement on the captured images that are used to extract facial and eye features. The DMS introduced here includes a method for eye and face movement tracking that is robust and invariant to lighting, vibration and body movement. The system uses computer vision and machine learning techniques to improve the safety and convenience of the in-car experience. The system includes three primary detection functions: drowsiness detection, distraction detection and gaze detection. These functions are developed by monitoring and tracking changes in the driver facial images. The system continuously tracks head, eyelid and pupil movements. It converts changes in head, eye and pupil movement to time series events. Machine learning and computer vision techniques are used to convert images captured by a Near InfraRed (NIR) camera to time series head and eye movement data for predicting drowsiness, distraction and gaze levels. Computer vision and machine learning techniques are used to determine the location of landmarks in the face and eye region. The locations of the landmarks in the eye region are used to measure the openness level of the eye. This measured openness level is then converted to fully open, partially open, or closed eye state per image. The eye state is tracked continuously to detect measured parameters, such as eye blink count, blink rate, eye closure duration, eye open duration, speed of eyelid closure, and/or speed of eyelid opening. The system continuously monitors these parameters for any significant change from established constant threshold values or driver specific calibrated (profile) values. Environmental changes, such as changing lighting condition, vehicle vibration, and body movement, can introduce errors in the detected location of the landmarks. These errors can significantly impact the accuracy of the measured openness level of the eye. The proposed system describes a method that can produce robust and invariant measurement of eye openness level. 
The proposed approach is robust to variation in distance of the driver's head from camera; yaw, roll and pitch orientation of the head; and human variations such as race, ethnicity, gender, and age. The system can use deep learning and computer vision techniques to measure the eye openness level to improve robustness against lighting and vibration changes, road environment, and human head movements. II. Example Implementation The system measures the alertness level of the driver by evaluating a sequence of images captured by a camera and measuring the openness level of the eye, and head orientation. The openness level of the eye is converted to eye state. Eye state in turn is used to detect a collection of parameters such as blink count, blink duration, eye open duration, eye close duration, speed of eye opening, and speed of eye closure. These measured parameters along with head orientation (yaw, pitch and roll) are used to determine the alertness level of the driver. FIG.1illustrates at a high level the hardware configuration of the system according to at least some embodiments. As shown, the system1includes an image capture and store subsystem (also called camera subsystem)2, a processing subsystem3and an input/output (I/O) subsystem4. The camera subsystem2includes at least one camera (e.g., a near-IR camera), to capture images of the driver's head, face and eyes. The I/O subsystem4includes one or more devices to provide a user interface5capable of receiving inputs such as commands and preferences from the driver (e.g., touchscreen, microphone with voice recognition software and hardware) and capable of outputting information such as alerts and operating instructions to the driver (e.g., speaker, display, seat vibrator). The processing subsystem3is responsible for all of the major processing functions described herein. The processing subsystem3can be implemented to include programmable circuitry programmed/configured by software and/or firmware, or entirely by special-purpose circuitry, or by a combination of such forms. Such special-purpose circuitry (if any) can be in the form of, for example, one or more application-specific integrated circuits (ASICs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), system-on-a-chip systems (SOCs), etc. The processing subsystem3further includes facilities to store digital data, such as one or more memories (which may be read-write, read-only, or a combination thereof), hard drives, flash drives, or the like. Software or firmware to implement the various functions and steps described herein may be stored on a machine-readable storage medium and may be executed by one or more general-purpose or special-purpose programmable microprocessors. A “machine-readable medium”, as the term is used herein, includes any mechanism that can store information in a form accessible by a machine (a machine may be, for example, a computer, network device, cellular phone, personal digital assistant (PDA), manufacturing tool, any device with one or more processors, etc.). For example, a machine-accessible medium includes recordable/non-recordable media, e.g., read-only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; etc. 
At a functional level, in at least some embodiments, and as shown inFIG.2, the system1includes three major subsystems: the image capture and store subsystem (or camera subsystem)2, a driver behavior tracking subsystem6and an alerting subsystem7, The system1further includes four ancillary functional modules, namely: a system startup module8, a detect head orientation and feature landmarks module9, an analyze driver behavior data module10and a dynamic threshold update module11. Each of these subsystems and modules is described in detail below. The system further creates and maintains several datasets, including a stored images dataset12, a landmarks dataset13, an eye state dataset14and a blinks dataset15. Each of the elements shown inFIGS.1and2can be implemented entirely within a given vehicle (e.g., an automobile), although in some embodiments, at least portions of one or more elements (e.g., storage and/or processing resources) may be implemented remotely from the vehicle. The system startup module8is responsible for initializing the system1. As part of the initialization sequence, it requests the dynamic threshold update module11to perform calibration. Once the dynamic threshold update module11completes the calibration process, the system startup module8initializes the other modules. The camera subsystem2continuously captures images of the driver at a rate of, for example, approximately 30 frames per second, and stores the images for further processing. The images can be stored in a local storage device (not shown), such as a memory or hard drive, or remotely in the cloud. A. Head Orientation and Feature Landmarks Detection Module FIG.3shows an example of the overall process performed by the Head Orientation and Feature Landmarks Detection Module9. After getting the next frame at step301, the process at step302analyzes the captured image and applies computer vision techniques for image enhancement to minimize the impact of environmental challenges such as vibration and lighting. Some image correction techniques applied include contrast limited adaptive histogram equalization (CLAHE) and Gaussian blurring. Once the image quality is improved (step303), the process detects the head region of interest (ROI) of the driver's head at step305, and then detects the driver's face at step306. If the driver's face is not detected, a record is inserted into the stored images dataset12at step304, indicating frame number and a no-face-detected flag. If a face was detected, the top and bottom coordinates of the bounding box containing the face are predicted by the module. At step307the process uses the previous image's eye region bounding box coordinates to extract the eye region of the current image and performs an image similarity check for both left and right eyes. If the extracted left and right eye image comparisons are within the measure of threshold for similarity (step308), then the previous image's landmark data are used to add an entry to the landmarks dataset13at step315; otherwise the process continues to step309. At step309the process uses deep learning techniques to detect the roll, yaw, and pitch of the driver's head from a 2D gray scale image. At step310the process uses deep learning techniques to detect a fixed number of landmarks on the face.FIG.4shows an example of facial landmarks on the face of a person such as may be detected at step310, where each dot represents a landmark. 
Step311uses the eye corner landmarks from the facial landmarks to extract the eye ROI for both the left and right eye. Step312uses deep learning techniques to detect a fixed number of landmarks around the eyelid contours and the center of the pupil for both left and right eyes, as shown inFIG.5. Step313treats each predicted coordinate as a time-series datum and applies one or more smoothing techniques such as exponential smoothing, moving average, convolutional smoothing, Kalman smoothing, and polynomial smoothing. The full data are then written to the landmarks dataset at step314. After step314, the process loops back to step301. B. Driver Behavior Data Analysis Module Referring again toFIG.2, the Driver Behavior Data Analysis Module10processes the data stored in the landmarks dataset to determine the driver's behavior. An example of the process performed by this module is shown inFIG.6. Initially, the process gets the next landmark data from the landmarks dataset at step601. The primary purpose of steps602and603is to ensure the driver is consistently maintaining a good view of the surrounding road environment necessary for safe driving by taking into consideration driver characteristics, driver behavior, and road environment. Hence, at step602the process analyzes the head movement of the driver. Lack of head movement over a prolonged period of time can be considered an indicator of gradual onset of decreased alertness level. Prolonged head orientation that is outside a "safe viewing cone" area can indicate distraction, such as due to devices, road activities, or interaction with vehicle occupants. At step603the process analyzes the eyelid movement of each eye of the driver and calculates the eye openness measure (EOM) for each eye. In particular, step603calculates the robust eye openness measurement angle based on predicted eyelid landmarks. The alert records generated by this step and step602are among several types of alert records generated by the system. These alert records may be stored in the system for further processing and/or they may be used to directly alert the driver. Step604converts the measured EOM value to one of three eye states, namely open, partially open, and closed, as discussed below in greater detail in connection withFIG.10. Step605detects any blinks represented in the data. Step606determines the driver attention level based on the eye movement data. In particular, step606tracks the following scenarios and generates an alert record for the alert module to process:
Continuous tracking of eye closure duration.
Prolonged partial eye closure with minimal change in head orientation as an indicator of drowsiness onset due to lack of head movement and partially open eye.
Change in driver's eyelid movement pattern over a prolonged period of time. For example, decrease in blink rate, increase in eye closure time, or decrease in velocity of eye closure can indicate onset of driver fatigue or drowsiness.
After step606, the process loops back to step601. C. Analysis of Driver Attention Level Based on Head Movement Step602inFIG.6, analyzing the driver's attention level based on head movement, will now be described in greater detail.FIG.12shows an example of the process of determining the driver's attention level based on head movement. At step1201a new data record is retrieved from the landmarks dataset. The dynamic profile and threshold criteria parameters that are used in evaluating whether to create an alert record or not are calculated at step1202.
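Referring back to the smoothing applied at step313, the following Python sketch illustrates exponential smoothing of per-landmark coordinates across frames, one of the smoothing options named above. The smoothing factor and landmark indexing are assumptions for illustration only.

# Minimal sketch (assumed smoothing factor) of treating each predicted landmark
# coordinate as a time series and applying exponential smoothing.
from typing import Dict, Optional, Tuple

class LandmarkSmoother:
    """Exponentially smooths (x, y) coordinates per landmark index across frames."""
    def __init__(self, alpha: float = 0.4):
        self.alpha = alpha
        self.state: Dict[int, Tuple[float, float]] = {}

    def smooth(self, landmarks: Dict[int, Tuple[float, float]]) -> Dict[int, Tuple[float, float]]:
        smoothed = {}
        for idx, (x, y) in landmarks.items():
            prev: Optional[Tuple[float, float]] = self.state.get(idx)
            if prev is None:
                smoothed[idx] = (x, y)
            else:
                smoothed[idx] = (self.alpha * x + (1 - self.alpha) * prev[0],
                                 self.alpha * y + (1 - self.alpha) * prev[1])
            self.state[idx] = smoothed[idx]
        return smoothed

smoother = LandmarkSmoother()
frame1 = {0: (100.0, 50.0), 1: (120.0, 52.0)}
frame2 = {0: (104.0, 51.0), 1: (118.0, 53.0)}
print(smoother.smooth(frame1))
print(smoother.smooth(frame2))  # frame-to-frame jitter is damped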
The alert tracking parameters such as head pitch and yaw are set to initial state at step1212if the vehicle is stationary (step1203), and the process then loops back to step1201. The head orientation and feature landmarks detection module has a prioritized alert event checking mechanism. The system alerts the driver if his/her head orientation is outside the safe cone viewing area (as shown inFIG.11). When the vehicle is moving, prolonged head orientation that can indicate onset of decrease in driver attention is tracked. If the measured yaw or pitch angle is greater than the dynamic threshold angles at step1204, then the corresponding alert frame counter is incremented at step1205. At step1206, if the yaw alert counter exceeds the dynamic yaw alert threshold counter or the pitch alert counter exceeds the dynamic pitch alert threshold counter, then an alert record is generated and the record is inserted into the alerts dataset at step1211. If neither threshold is exceeded at step1206, then at step1207a batch of head orientation data of size k is retrieved, where parameter k is a dynamic value retrieved from the profile and threshold data module. If the standard deviation, calculated for the batch, of the yaw or pitch angles indicates very low head movement (by being less than a corresponding threshold), this implies that the driver is not routinely scanning the road environment. Therefore, if the calculated standard deviation value is less than the dynamically calculated standard deviation threshold values from the profile and threshold data module at step1208, then an alert record is generated and inserted into alerts dataset at step1211. If no alert is detected, then a check is made to see if a previous head orientation-based alert was created. If no clear alert record exists for that alert record, then a clear alert record is generated and inserted into the alerts dataset at step1210to reset the alert records. D. Calculation of EOM A method for calculating the EOM, per step603inFIG.6, is now discussed in greater detail in relation toFIGS.7A,7B and9. The eye landmark record from the landmarks dataset is retrieved at step901(FIG.9), and the upper and lower contours of the eye are plotted programmatically at step902. At step903the left and right intersection points of the upper eyelid contour71and lower eyelid contour72are identified as landmarks P0and P1, respectively, as shown inFIGS.7A,7B and8, and a line segment P0P1that connects P0to P1is then plotted. The line segment P0P1is then segmented into four equal segments at step904, such that it produces points PA, PBand PC, where P0PA=PAPB=PBPC=PCP1 Line segments perpendicular to P0P1and at PA, PBand PCintersecting upper and lower eyelid contours are then plotted at step905. The intersection points of the perpendicular lines at PA, PBand PCwith the upper and lower eyelid contours, designated as P2, P4, P6, P3, P5and P7inFIG.7A, are then identified at steps906and907, respectively. The process then defines a plurality of triangles, such that the vertices of each triangle are formed by a different 3-tuple of landmarks on the upper and lower eyelid contours, and where two of those landmarks in the 3-tuple are located on a different eyelid contour (upper or lower) from the third landmark in the 3-tuple. 
Using the properties of a triangle, therefore, the angles at each of eyelid landmarks P2, P4, P6, P3, P5and P7(i.e., excluding the endpoints P0and P1) are calculated at step908in accordance with equations (1) through (3) below, and the average of the angles is calculated at step909, per equation (4) below. Consider for example the triangle formed by the eyelid landmarks P4, P3and P7as shown inFIG.7B, with side lengths a = distance(P3, P7), b = distance(P4, P3), and c = distance(P4, P7):

\( a^2 = b^2 + c^2 - 2 \cdot b \cdot c \cdot \cos \alpha(P_4) \)  (1)

\( \cos \alpha(P_4) = \frac{b^2 + c^2 - a^2}{2 \cdot b \cdot c} \)  (2)

\( \alpha(P_4) = \arccos\left( \frac{b^2 + c^2 - a^2}{2 \cdot b \cdot c} \right) \)  (3)

\( \mathrm{EOM} = \frac{\sum_{k=1}^{n} \alpha_k}{n} \)  (4)

where α(P4) is the angle at landmark P4, EOM is the Eye Openness Measurement, n is the number of angles calculated, and α is the angle at each eyelid landmark with respect to other eyelid landmarks. An alternate method used for measuring eye openness level is illustrated schematically inFIG.8. This alternate method involves measuring the height of the eyelid at selected points and dividing the sum of those height measurements by the number of such measurements multiplied by the width of the eye, in accordance with equation (5):

\( \mathrm{EHWR} = \frac{\lVert P_2 - P_3 \rVert + \lVert P_4 - P_5 \rVert + \lVert P_6 - P_7 \rVert}{3 \cdot \lVert P_1 - P_0 \rVert} \)  (5)

In a real-time environment, the eyelid landmarks detected by the system can vary from frame to frame. These variations have a negative impact on the accuracy of eyelid movement tracking. The accuracy of the detection of landmarks at the edge of the eyes is more prone to variation than the landmarks on the eyelid. These variations on successive images can have significant negative impact on the accuracy of the eyelid tracking. Therefore, the EOM calculation method ofFIGS.7A,7B and9attempts to reduce the impact of eye edge landmarks by avoiding them when measuring angles among the eyelid landmarks. The alternate method ofFIG.8is more susceptible to variation in the eye edge landmarks because the ratio is formed by dividing the height of the eyelid by the width of the eyelid (i.e., the distance between the edges of the eye). Since the width of the eye is almost twice the height of the eye, the ratio produced by this alternate EOM method is often less than 0.5 when the eye is fully open and tends towards zero as the eyelid closes. As a result, the inaccuracies in landmark detection on eye angle ratio (EAR) tend to have significantly more negative impact as the eyelid closes. The EOM calculation method ofFIGS.7A,7B and9on the other hand tends to have larger range of angles from 30° to 180° for fully open eyelid to closed eyelid. The angle between the landmarks on the eyelid increases as the eyelid closes. This behavior of the EOM calculation method ofFIGS.7A,7B and9reduces the impact of inaccuracies in eyelid landmark detection in comparison to at least the method ofFIG.8. E. Calculation of Eye State Step604inFIG.6, eye state calculation, will now be described in greater detail.FIG.10illustrates an example of the overall process that may be performed in step604. The process begins by retrieving the next EOM value from the EOM dataset (step1001), and based on pre-configured thresholds (step1002and step1003) representing percentage of eye closure, categorizing the EOM value into open eye (step1009), partially closed eye (step1010) or closed eye state (step1011), which is then inserted into the eye state dataset14at step1004. Partially closed eye state (step1010) is used to determine the driver alertness level when the driver is looking down such that the eye is not fully closed, but the percentage of open eye is not sufficient for safe driving.
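The EOM of equations (1)-(4) and the EHWR of equation (5) above can be illustrated with the following Python sketch. The specific 3-tuple of landmarks used for each triangle, other than the (P4, P3, P7) example given above, is an assumption here (each eyelid landmark is paired with two landmarks on the opposite contour), as are the example coordinates.

# Minimal sketch of the EOM of equations (1)-(4) and the EHWR of equation (5).
import math
from typing import Dict, Tuple

Point = Tuple[float, float]

def angle_at(vertex: Point, q: Point, r: Point) -> float:
    """Angle (degrees) at `vertex` in the triangle (vertex, q, r), per equations (1)-(3)."""
    a = math.dist(q, r)       # side opposite the vertex
    b = math.dist(vertex, q)
    c = math.dist(vertex, r)
    cos_alpha = (b * b + c * c - a * a) / (2 * b * c)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_alpha))))

def eye_openness_measure(p: Dict[str, Point]) -> float:
    """Average the angles at P2, P4, P6 (upper lid) and P3, P5, P7 (lower lid), equation (4).
    The triangle choices below are illustrative assumptions."""
    triangles = [("P2", "P3", "P5"), ("P4", "P3", "P7"), ("P6", "P5", "P7"),
                 ("P3", "P2", "P4"), ("P5", "P4", "P6"), ("P7", "P4", "P6")]
    angles = [angle_at(p[v], p[q], p[r]) for v, q, r in triangles]
    return sum(angles) / len(angles)

def eye_height_width_ratio(p: Dict[str, Point]) -> float:
    """Alternate measure, equation (5)."""
    heights = math.dist(p["P2"], p["P3"]) + math.dist(p["P4"], p["P5"]) + math.dist(p["P6"], p["P7"])
    return heights / (3 * math.dist(p["P0"], p["P1"]))

open_eye = {"P0": (0, 0), "P1": (40, 0), "P2": (10, 8), "P4": (20, 10), "P6": (30, 8),
            "P3": (10, -8), "P5": (20, -10), "P7": (30, -8)}
print(round(eye_openness_measure(open_eye), 1), round(eye_height_width_ratio(open_eye), 2))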
That is, with a partially closed eye, the driver does not meet the criteria for “safe cone viewing area”. The “safe cone viewing area” is the area of the windshield between the driver door and the rear-view mirror, as shown inFIG.11. Once the eye state is determined, the time series eye state data set is smoothed at step1005using, for example, any of the smoothing methods described above. Next, the process checks for eye blinks at step1006. If a blink is detected, blink related parameters such as blink duration, eye closure duration, eyelid closure speed, and/or eyelid open speed are calculated, and a blink record is created and stored in the blinks dataset15. The blinks dataset15includes two subsets, called the blinks-per-second dataset and the blinks-per-minute dataset (not shown). If the eye closure duration exceeds the microsleep duration threshold, a flag is set to indicate a microsleep event in the blink record. Next the process checks whether one second of blink data has been collected at step1007. If so, a set of per-second parameters are calculated, a per-second record is created, and the record is stored in a blinks-per-second dataset at step1007. The parameters can include, for example, blink rate per second, average blink duration, average eye closure, average speed of eyelid closure, average speed of eyelid opening, and/or count of eye closures that exceed the microsleep duration threshold. Similarly, a similar set of parameters are calculated for each minute of record and stored in a blinks-per-minute dataset at step1008. F. Analysis of Driver Attention Level Based on Eye Movement: FIG.13shows in greater detail, an example of step606(FIG.6), determining the driver's attention level based on eye movement. This process is responsible for detecting unsafe driving based on a collection of analyses applied on eye movement datasets. As in the process ofFIG.12, when an unsafe event is detected, an alert record representing the type of unsafe event is created and inserted into the alerts dataset. Similarly, when the unsafe event clears, a corresponding clear alert record is also inserted to the alerts dataset. Step1301gets the eye state record, calculated in step606ofFIG.6. Step1302continuously analyzes the eye state data to detect any eye closure duration that is higher than the dynamic eye closed state threshold value. Step1303detects continuous partially closed eye state in conjunction with head orientation. It is possible to have partially closed eye states due to reclined seat and head orientation. These scenarios should not create an alarm state. Therefore, step1303takes into consideration the head orientation, duration, and eye state for analysis. Step1304detects changes in eye closure behavior over time that indicate onset of decrease in driver attention level. This step analyzes parameters such as blink rate, speed of eyelid movement, and duration of eye closure. G. Detection of Recent Prolonged Full Eye Closure Event FIG.14shows in greater detail, an example of the process included in step1303, detecting a recent prolonged full eye closure event. This process at step1401initially retrieves driver profile and threshold data (e.g., dynamic threshold parameters k, m and n, discussed below) from the dynamic threshold update module11(FIG.2). At steps1402and1403, the most recent batch of eye states is analyzed and the number of continuous frames for which the eye state is closed is counted. 
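By way of illustration only, the conversion of EOM values to eye states (steps1002-1003) and the per-second aggregation of blink statistics might be sketched as follows in Python. The angle thresholds, the 30 fps frame rate and the microsleep duration are assumptions, not the system's configured values.

# Minimal sketch (threshold values and frame rate are assumptions) of converting
# EOM values to eye states and aggregating simple per-second blink statistics,
# including a microsleep flag for long closures.
from typing import List

def eom_to_state(eom_deg: float, open_max: float = 90.0, partial_max: float = 140.0) -> str:
    """Smaller angles mean a wider-open eye; the angle grows as the eyelid closes."""
    if eom_deg <= open_max:
        return "open"
    if eom_deg <= partial_max:
        return "partially_open"
    return "closed"

def per_second_stats(eye_states: List[str], microsleep_frames: int = 15) -> dict:
    """Count blinks (runs of closed frames) and microsleeps over ~one second of frames."""
    blinks, microsleeps, run = 0, 0, 0
    for state in eye_states + ["open"]:        # sentinel to close a trailing run
        if state == "closed":
            run += 1
        else:
            if run:
                blinks += 1
                if run >= microsleep_frames:   # e.g., a closure of 0.5 s or longer at 30 fps
                    microsleeps += 1
            run = 0
    closed = sum(s == "closed" for s in eye_states)
    return {"blink_count": blinks, "microsleep_count": microsleeps,
            "eye_closure_fraction": closed / max(1, len(eye_states))}

states = [eom_to_state(e) for e in [60] * 24 + [150] * 6]   # one short blink within a second
print(per_second_stats(states))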
For each frame where the eye state is closed, the closed eye frame count is incremented (step1404). The closed eye frame count is compared at step1405against the dynamic eye state closed count threshold (retrieved from profile and threshold data at step1401), and if the closed eye frame count is greater, a recent prolonged eye closure alert record is created at step1406and inserted into the alerts dataset at step1407; otherwise, the process loops back to step1401. If during the process of looking for continuous eye closed state at step1403, a partial closed or open eye state is encountered, the closed eye frame count is reset at step1408and checked for a previous recent prolonged eye closure alert record without a clear alert record at step1409. If it exists, a clear recent prolonged eye closure alert record is created and inserted into the alerts dataset at step1410; otherwise, the process loops back to step1401. H. Detection of Recent Prolonged Partial Eye Closure Event FIG.15shows in greater detail, an example of the process included in step1303, detecting a recent prolonged partial eye closure event. This step counts the number of continuous partially closed eye state frames. Continuous partial eye closure state may also happen due to a reclined driver seat. When the seat is reclined, a driver tends to adjust their head such that the head remains within the safe driving cone. Such an orientation, by itself, is not a concern for safety as long as the driver maintains safe driving behavior. Therefore, it is important to take into consideration not only the duration of partial eye closure state but also the head orientation and corresponding duration of the head orientation. This process (FIG.15) at step1501initially retrieves driver profile and threshold data for partial eye closed state (e.g., dynamic threshold parameters k, m and n, discussed below) from the dynamic threshold update module11(FIG.2). The most recent batch of eye states is analyzed and the number of continuous frames for which the eye state is partially closed is counted at steps1502and1503. For each frame where the eye state is partially closed, the partially closed eye frame count is incremented (step1504). If at step1505the partially closed eye frame count is greater than the dynamic eye state partially closed count threshold retrieved from profile and threshold data at step1501, then at step1506the process calculates the standard deviation of head pitch over n multiples of partial eye closure frame count, where parameter n is determined by the profile and threshold module. Otherwise, the process loops back to step1501. After step1506, if the standard deviation is less than the corresponding threshold for head pitch standard deviation (indicating very little or narrow head movement) at step1507, then an alert record indicating prolonged partial eye closure event is created at step1508and is inserted into alerts dataset at step1509. Otherwise, the process loops back to step1501. If a partial eye state is not detected at step1503, then the partially closed eye frame count is reset at step1510, and if a previous alert exists with a clear alert message (step1511), then a clear alert message for prolonged partial eye closure event is created and inserted into the alerts dataset at step1512. I. Eye Behavior Trend Analysis FIG.16shows in greater detail, an example of the process included in step1304(FIG.13), eye behavior trend analysis. This analysis involves projecting future behavior based on historical behavioral data. 
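Before turning to the trend analysis, the prolonged partial eye closure check ofFIG.15described above can be illustrated with the following Python sketch: a long run of partially closed frames only raises an alert when head pitch also shows very little movement. The threshold values standing in for the dynamic parameters k, m and n are assumptions for illustration.

# Minimal sketch (assumed threshold values) of the prolonged partial eye closure
# check: count the most recent continuous run of partially closed frames and
# require low head pitch variation before raising an alert.
import statistics
from typing import List, Optional

def prolonged_partial_closure_alert(eye_states: List[str], head_pitch_deg: List[float],
                                    partial_count_threshold: int = 60,
                                    n_multiple: int = 2,
                                    pitch_std_threshold: float = 1.0) -> Optional[str]:
    run = 0
    for state in reversed(eye_states):          # most recent continuous run
        if state != "partially_open":
            break
        run += 1
    if run <= partial_count_threshold:
        return None
    window = head_pitch_deg[-(n_multiple * run):]
    if len(window) > 1 and statistics.pstdev(window) < pitch_std_threshold:
        return "prolonged_partial_eye_closure"
    return None

states = ["partially_open"] * 90
pitch = [5.0] * 180                              # essentially no head movement
print(prolonged_partial_closure_alert(states, pitch))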
The module analyzes the trend in the eye closure duration stored in blinks-per-minute dataset. The process initially retrieves driver profile and threshold data (e.g., dynamic threshold parameters k, m and n, discussed below) from the dynamic threshold update module11(FIG.2) at step1601, and retrieves data from the blinks-per-minute dataset at step1602. If the eye closure duration shows a trend of k % increase for a sample of m minutes at step1603, that is deemed indicative of the onset of drowsiness. In that event, an alert of type ‘change in eye closure trend’ is created and the record is inserted into alerts dataset at step1604. The parameter k is a specified threshold percentage value, and parameter m is a specified number of minutes. Similarly, if the microsleep count does not decrease by at least one count in a sample of n minutes at step1605, that is deemed indicative of an ongoing drowsiness state. In that event, therefore, an alert of type ‘change in microsleep trend’ is created and the record is inserted into alerts dataset at step1606. If no alert condition is detected at step1603or step1605, then a check is made for a previous alert of type ‘change in eye closure trend’ or ‘change in microsleep trend’ at step1607or1609, respectively. Otherwise the process loops back to step1601. If a previous alert exists without a corresponding clear alert request, then a record is created to clear the ‘change in eye closure trend’ and is inserted into alerts dataset. After step314, the process loops back to step301. J. Dynamic Threshold Update Module The dynamic threshold update module11(FIG.2) is responsible for providing the thresholds used for eye state and head movement comparison to determine driver alertness level. The thresholds can be broadly classified into time-based threshold values and angle-based threshold values. Time-based threshold values are used in deciding how long the driver can remain inattentive safely before such inattentiveness represents a decrease in alertness. Time-based threshold values are also of two types. One type deals with immediate events and the other deals with trends over a longer period of time. Angle-based threshold values can be broadly categorized into two types, one based on head orientation (roll, yaw, and pitch) and the other based on eye movement (eyelid openness measure). The time-based threshold values deal with reaction or response time required for a safe level of driver attention. The required reaction time can vary depending on the influence of various environmental parameters. As the driving environment changes, the comparison thresholds are updated proportionately to keep up with the changing environment. Parameters that may influence the driver's reaction or response time to an event while driving can be broadly categorized into the following types:Driver parameters, for example: age, physical limitations, EOM, blink rate, eye closure time, head orientation anglesVehicle parameters, for example: length, weight, speed, route mapsRoad parameters, for example: traffic congestion, road type (straight, curvy)Weather parameters, for example: time of day, season These parameters tend to have significant impact on a driver's reaction time to a driving event. Therefore, the dynamic threshold update module can take into consideration some or all of these parameters in calculating the threshold parameters. 
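To make the weighted combination concrete, the following minimal Python sketch computes a time-based threshold as a weighted average of per-parameter safe response times, in the form given by the equation and tables that follow. The function name, the example response times, and the weights are illustrative assumptions, not values taken from this description.

def threshold_response_time(params):
    # params: list of (response_time_ms, weight) pairs, one entry per input
    # parameter (e.g., speed category, driver age bracket, traffic congestion).
    n = len(params)
    weighted_sum = sum(weight * response_time for response_time, weight in params)
    return weighted_sum / n

# Assumed example values only: response times for one bracket of each parameter,
# with fractional weights standing in for ωS, ωA and ωT.
example_params = [
    (250.0, 0.4),   # speed: "High" bracket
    (300.0, 0.3),   # driver age: "41 to 50" bracket
    (200.0, 0.3),   # traffic congestion: "Low" bracket
]
print(threshold_response_time(example_params))   # threshold response time in ms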
For example, consider the following scenario: A 35-year-old male is driving a sedan at 100 km/hour on a summer Sunday morning on a highway with low traffic. Typical response time for a young adult is around 200 ms. At 100 km/hour the sedan travels about 27 meters per second. A typical sedan length is 4.5 meters. Therefore, in 200 ms the vehicle would have travelled 5.4 meters, which is more than the length of a sedan car. Therefore, it is important to take into consideration the above-mentioned input parameters when deciding how long a driver can remain inattentive before it becomes unsafe. The reaction time also increases due to the influence of parameters such as age, drowsiness/fatigue, and distraction. The dynamic threshold update module input data sources include:
Vehicle parameters: Vehicle status information including but not limited to speed, turn signal, distance to objects in front of the vehicle, and camera frame rate.
Driver parameters: Driver parameters stored locally on a device, driver mobile phone, and/or cloud services.
Road parameters: Road parameters can be obtained from navigation systems, route maps, road infrastructure, cloud services, and/or driver mobile phone.
Weather parameters: Weather parameters can be obtained from road infrastructure, driver mobile phone, and/or cloud services.
Each input parameter has a preconfigured safe response time. For example, Tables 1, 2 and 3 describe examples of the input parameters and corresponding safe response times and weights.

TABLE 1 - Speed
Speed (km/h):    Low (0 to 30)    Medium (31 to 50)    High (51 to 100)    Influence fractional weight
Response (ms):   S1               S2                   S3                  ωS

TABLE 2 - Driver Age
Age (years):     20 to 40         41 to 50             51 to 100           Influence fractional weight
Response (ms):   A1               A2                   A3                  ωA

TABLE 3 - Traffic Congestion
Traffic congestion:   Low         Medium               High                Influence fractional weight
Response (ms):        T1          T2                   T3                  ωT

Each reaction time threshold parameter can be a weighted sum of these input parameters. The general equation for a time-based threshold parameter is:

TRT = (Σ_{k=1..n} ωk * response_timek) / n

where
TRT is the threshold response time,
ωk is the weight contribution of parameter k,
response_timek is the independent response time of parameter k, and
n is the number of input parameters.

Angle threshold values for head orientation can be based on generic data supplied with the system or calibrated roll, yaw, and pitch values for the driver. These values can be measured from calibration. They represent a head orientation suitable for the safe cone viewing area. The eye openness level measure-based thresholds can also be obtained either from custom driver calibration or from generic data supplied with the system. An example of a process for calculating and monitoring the driver's response/reaction times is shown inFIG.17. After the driver begins driving the car (step1701), the system at step1702gets input parameters from the driver's profile as well as vehicle, road and weather conditions. The system then calculates response/reaction times and updates the thresholds continuously at step1703. At step1704the system determines whether or not the driver is driving within the “safe cone viewing area” and whether a recent prolonged partial eye closure event has been detected. If either condition is detected, the system proceeds to step1705, in which the system updates the count of unsafe driving time. If neither condition is detected, the process loops back to step1703. If the unsafe driving time exceeds a corresponding threshold at step1706, then the system creates an alert record and inserts it into the alerts dataset at step1707. K.
Driver Parameter Calibration The Dynamic Threshold Update Module11is also responsible for obtaining the driver parameters, such as age, physical limitations, head orientation angles, and eye openness level measures through calibration. An example of a spatial configuration for driver's parameter calibration inside a vehicle is shown inFIG.18, and an example of the process of driver's parameter calibration is shown inFIG.19. To perform calibration (e.g., when the car is first started), the process initially performs a facial recognition to identify the driver at step1901. If the driver is identified (step1902), then the driver profile data are retrieved from local storage at step1903. A check is performed for a newer version of the driver profile by requesting an update from the driver's mobile device and/or a cloud service. If an update is available then it is stored locally. If the driver is not identified, then at step1904the process performs a calibration to create a profile for the driver. At step1905the process gets the driver's age and physical limitations (if any). The process then at step1906provides audio and visual instructions to the driver to look at specific locations inside the vehicle, examples of which are indicated by the diamond shapes inFIG.18. At each location, the module at step1907captures a batch of images for a period of, for example, five seconds. The captured images are then used to determine head orientation angles and eye openness level measures using facial and eye landmarks at step1908. The measured values are then stored at step1909in local storage system, cloud service, and/or the driver's mobile device for future access. L. Alerting Subsystem The Alerting Subsystem7(FIG.2) generates the alerts. An example of the overall process performed by this subsystem is shown inFIG.20. All alerts recorded in the alerts dataset are processed (step2001) by the Alerting Subsystem7. The process inFIG.20checks whether the alert level recorded in the dataset is high at step2002. If the alert level is high, the process informs the driver at step2003by any one or more methods, such as seat/steering wheel vibration, visual display, and/or audible alert. If the alert level is not high, the process stops all previously generated alerts at step2004. After step2003, the process loops back to step2001. III. Examples of Certain Embodiments Certain embodiments of the technology introduced herein are summarized in the following numbered examples:1. A method comprising:obtaining image data representing images of an eye of a driver, the images having been captured while the driver is operating a vehicle;computing an eye openness measure of the eye of the driver, based on the image data;determining that the driver is in an unsafe state for driving based on the eye openness measure; andgenerating an alert, while the driver is operating the vehicle, based on determining that the driver is in the unsafe state.2. A method as recited in example 1, wherein computing an eye openness measure of the eye of the driver comprises determining an eye state for the eye, and wherein the eye state is from the set of eye states consisting of: a fully open state, a partially open state, and a closed state.3. 
A method as recited in example 1 or example 2, further comprising:determining a parameter set based on the eye state, the parameter set including at least one of a blink count, blink rate, eye closure duration, eye open duration, speed of eyelid closure, or speed of eyelid opening; andapplying at least one parameter of the parameter set to a threshold or profile to identify an amount of deviation.4. A method as recited in any of examples 1 through 3, further comprising:determining a plurality of thresholds;wherein said determining that the driver is in an unsafe state for driving comprises applying acquired or computed data to the plurality of thresholds, and wherein the plurality of thresholds includes a plurality of time-based thresholds and a plurality of angle-based thresholds.5. A method as recited in any of examples 1 through 4, wherein at least one of the time-based thresholds relates to driver reaction time.6. A method as recited in any of examples 1 through 5, wherein at least a first one of the angle-based thresholds relates to the eye openness measure.7. A method as recited in any of examples 1 through 6, wherein at least a second one of the angle-based thresholds relates to a head orientation of the driver.8. A method as recited in any of examples 1 through 7, wherein:at least one of the time-based thresholds relates to driver reaction time;at least a first one of the angle-based thresholds relates to the eye openness measure; andat least a second one of the angle-based thresholds relates to a head orientation of the driver.9. A method as recited in any of examples 1 through 8, further comprising dynamically updating the plurality of thresholds while the driver is operating the vehicle.10. A method as recited in any of examples 1 through 9, wherein computing the eye openness measure of the eye of the driver comprises:generating an upper eyelid contour of the eye and a lower eyelid contour of the eye based on the image data representing images of the eye;identifying a plurality of landmarks on each of the upper eyelid contour and the lower eyelid contour;defining a plurality of triangles, such that each of the plurality of triangles has vertices formed by a different 3-tuple of landmarks among the plurality of landmarks;computing interior angles of the plurality of triangles; andcomputing the eye openness measure of the eye as a function of the interior angles.11. A method as recited in any of examples 1 through 10, wherein two of the three vertices of each triangle are located on a different one of the upper eyelid contour and the lower eyelid contour than the third vertex of the triangle.12. 
A method as recited in any of examples 1 through 9, wherein computing an eye openness measure of the eye of the driver comprises:obtaining landmark data of the eye;generating an upper eyelid contour of the eye and a lower eyelid contour of the eye based on the landmark data;plotting a first line between a first intersection point of the upper eyelid contour and the lower eyelid contour and a second intersection point of the upper eyelid contour and the lower eyelid contour;dividing the first line into at least four equal segments;plotting a plurality of second lines, wherein each of the plurality of second lines passes through an intersection point between a different two of the at least four equal segments of the first line, perpendicularly to the first line;identifying a plurality of landmarks on each of the upper eyelid contour and the lower eyelid contour, the plurality of landmarks comprising each intersection point between a line of the plurality of second lines and either the upper eyelid contour or the lower eyelid contour;defining a plurality of triangles, such that each of the plurality of triangles has vertices formed by a different 3-tuple of landmarks among the plurality of landmarks, wherein two of the landmarks in each said 3-tuple are located on a different one of the upper eyelid contour and the lower eyelid contour than the third landmark in the 3-tuple;computing each interior angle of each of the plurality of triangles; andcomputing the eye openness measure of the eye as a function of the average of all of the interior angles of the plurality of angles.13. A method as recited in any of examples 1 through 12, further comprising:detecting a prolonged partial eye closure based on a head orientation angle and an eye state of the eye.14. A method as recited in any of examples 1 through 13, further comprising:calculating an eye behavior trend for the driver based on detected eye closure and microsleep patterns of the driver.15. A non-transitory machine-readable storage medium having instructions stored thereon, execution of which by a processing system in a vehicle causes the processing system to perform a process comprising:obtaining image data representing images of an eye of a driver of the vehicle, the images having been captured while the driver is operating the vehicle;computing an eye openness measure of the eye of the driver, based on the image data;determining that the driver is in an unsafe state for driving based on the eye openness measure; andgenerating an alert, while the driver is operating the vehicle, based on determining that the driver is in the unsafe state.16. A non-transitory machine-readable storage medium as recited in example 15, wherein computing an eye openness measure of the eye of the driver comprises determining an eye state for the eye, and wherein the eye state is from the set of eye states consisting of: a fully open state, a partially open state, and a closed state.17. A non-transitory machine-readable storage medium as recited in example 15 or example 16, said process further comprising:determining a parameter set based on the eye state, the parameter set including at least one of a blink count, blink rate, eye closure duration, eye open duration, speed of eyelid closure, or speed of eyelid opening; andapplying at least one parameter of the parameter set to a threshold or profile to identify an amount of deviation.18. 
A non-transitory machine-readable storage medium as recited in any of examples 15 through 17, said process further comprising:determining a plurality of thresholds;wherein said determining that the driver is in an unsafe state for driving comprises applying acquired or computed data to the plurality of thresholds, and wherein the plurality of thresholds includes a plurality of time-based thresholds and a plurality of angle-based thresholds.19. A non-transitory machine-readable storage medium as recited in any of examples 15 through 18, wherein:at least one of the time-based thresholds relates to driver reaction time;at least a first one of the angle-based thresholds relates to the eye openness measure; andat least a second one of the angle-based thresholds relates to a head orientation of the driver.20. A non-transitory machine-readable storage medium as recited in any of examples 15 through 19, wherein computing the eye openness measure of the eye of the driver comprises:generating an upper eyelid contour of the eye and a lower eyelid contour of the eye based on the image data representing images of the eye;identifying a plurality of landmarks on each of the upper eyelid contour and the lower eyelid contour;defining a plurality of triangles, such that each of the plurality of triangles has vertices formed by a different 3-tuple of landmarks among the plurality of landmarks;computing interior angles of the plurality of triangles; andcomputing the eye openness measure of the eye as a function of the interior angles.21. A system for monitoring driver alertness in a vehicle, the system comprising:a camera to capture image data representing images of an eye of a driver of the vehicle while the driver is operating the vehicle;an output device; anda processing subsystem, coupled to the camera and the output device, to receive the image data from the camera;compute an eye openness measure of the eye of the driver, based on the image data;determine that the driver is in an unsafe state for driving based on the eye openness measure;generate an alert, while the driver is operating the vehicle, based on determining that the driver is in the unsafe state; andcause the output device to output to the driver a signal indicative of the alert.22. A system as recited in example 21, wherein to compute an eye openness measure of the eye of the driver comprises to determine an eye state for the eye, and wherein the eye state is from the set of eye states consisting of: a fully open state, a partially open state and a closed state.23. A system as recited in example 21 or example 22, wherein the processing subsystem further is to:determine a parameter set based on the eye state, the parameter set including at least one of a blink count, blink rate, eye closure duration, eye open duration, speed of eyelid closure or speed of eyelid opening; andapply at least one parameter of the parameter set to a threshold or profile to identify an amount of deviation.24. A system as recited in any of examples 21 through 23, wherein the processing subsystem further is to:determine a plurality of thresholds;wherein to determine that the driver is in an unsafe state for driving comprises to apply acquired or computed data to the plurality of thresholds, and wherein the plurality of thresholds include a plurality of time-based thresholds and a plurality of angle-based thresholds.25. 
A system as recited in any of examples 21 through 24, wherein:at least one of the time-based thresholds relates to driver reaction time;at least a first one of the angle-based thresholds relates to the eye openness measure; andat least a second one of the angle-based thresholds relates to a head orientation of the driver.26. A system as recited in any of examples 21 through 25, wherein the processing subsystem further is to update the plurality of thresholds while the driver is operating the vehicle.27. A system as recited in any of examples 21 through 26, wherein to compute the eye openness measure of the eye of the driver comprises:generating an upper eyelid contour of the eye and a lower eyelid contour of the eye based on the image data representing images of the eye;identifying a plurality of landmarks on each of the upper eyelid contour and the lower eyelid contour;defining a plurality of triangles, such that each of the plurality of triangles has vertices formed by a different 3-tuple of landmarks among the plurality of landmarks;computing interior angles of the plurality of triangles; andcomputing the eye openness measure of the eye as a function of the interior angles. Unless contrary to physical possibility, it is envisioned that (i) the methods/steps described herein may be performed in any sequence and/or in any combination, and that (ii) the components of respective embodiments may be combined in any manner. The machine-implemented operations described above can be implemented by programmable circuitry programmed/configured by software and/or firmware, or entirely by special-purpose circuitry, or by a combination of such forms. Such special-purpose circuitry (if any) can be in the form of, for example, one or more application-specific integrated circuits (ASICs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), system-on-a-chip systems (SOCs), etc. Software or firmware to implement the techniques introduced here may be stored on a machine-readable storage medium and may be executed by one or more general-purpose or special-purpose programmable microprocessors. A “machine-readable medium”, as the term is used herein, includes any mechanism that can store information in a form accessible by a machine (a machine may be, for example, a computer, network device, cellular phone, personal digital assistant (PDA), manufacturing tool, any device with one or more processors, etc.). For example, a machine-accessible medium includes recordable/non-recordable media (e.g., read-only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; etc.), etc. The term “logic”, as used herein, means: a) special-purpose hardwired circuitry, such as one or more application-specific integrated circuits (ASICs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), or other similar device(s); b) programmable circuitry programmed with software and/or firmware, such as one or more programmed general-purpose microprocessors, digital signal processors (DSPs) and/or microcontrollers, system-on-a-chip systems (SOCs), or other similar device(s); or c) a combination of the forms mentioned in a) and b). Any or all of the features and functions described above can be combined with each other, except to the extent it may be otherwise stated above or to the extent that any such embodiments may be incompatible by virtue of their function or structure, as will be apparent to persons of ordinary skill in the art. 
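As a rough illustration of the eye openness computation recited in examples 10, 12, 20 and 27 above (eyelid-contour landmarks, triangles whose vertices are 3-tuples of landmarks with two vertices on one contour and one on the other, and a measure derived from the interior angles), the following Python sketch computes interior angles with the law of cosines. Because the three interior angles of any triangle always sum to 180 degrees, the sketch averages only the angle at the vertex on the opposite contour; that choice, the landmark layout, and the function names are assumptions for illustration, not the claimed method itself.

import math

def interior_angle(apex, b, c):
    # Interior angle at 'apex' of triangle (apex, b, c), in degrees, by the law of cosines.
    ab = math.dist(apex, b)
    ac = math.dist(apex, c)
    bc = math.dist(b, c)
    cos_a = (ab**2 + ac**2 - bc**2) / (2 * ab * ac)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

def eye_openness(upper_landmarks, lower_landmarks):
    # upper_landmarks / lower_landmarks: lists of (x, y) points on the eyelid contours.
    # Each triangle uses two adjacent landmarks on one contour plus one landmark on the
    # other contour; the angle at the lone (opposite-contour) vertex is collected.
    angles = []
    for i in range(len(upper_landmarks) - 1):
        for lone in lower_landmarks:
            if lone in (upper_landmarks[i], upper_landmarks[i + 1]):
                continue  # skip degenerate triangles at shared eye-corner points
            angles.append(interior_angle(lone, upper_landmarks[i], upper_landmarks[i + 1]))
    for i in range(len(lower_landmarks) - 1):
        for lone in upper_landmarks:
            if lone in (lower_landmarks[i], lower_landmarks[i + 1]):
                continue
            angles.append(interior_angle(lone, lower_landmarks[i], lower_landmarks[i + 1]))
    return sum(angles) / len(angles) if angles else 0.0

# The mean apex angle shrinks as the eyelids separate, so an openness score could be
# taken as a decreasing function of this mean (e.g., 180 minus the mean); averaging
# the base angles instead would yield a measure that grows with eyelid separation.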
Although the subject matter has been described in language specific to structural features and/or acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as examples of implementing the claims and other equivalent features and acts are intended to be within the scope of the claims. | 51,645
11861917 | DETAIL DESCRIPTION Apparatuses, systems and methods for generating data representative of a vehicle driving environment may include the following capabilities: 1) determine whether a vehicle driver is looking at a road (i.e., tracking the driver's face/eyes, with emphasis on differentiating between similar actions, such as a driver who is adjusting a radio while looking at the road versus adjusting the radio while not looking at the road at all); 2) determine whether a driver's hands are empty (e.g., including determining an approximate size/shape of object in a driver's hands to, for example, differentiate between a cell phone and a large cup, for example); 3) identify a finite number of driver postures; and 4) logging rotated and scaled postures that are normalized for a range of different drivers. An associated mobile application may accommodate all popular platforms, such as iOS, Android and Windows, to connect an onboard device to a cell phone. In addition, to act as data connection provider to remote servers, the mobile application may provide a user friendly interface for reporting and troubleshooting. Accordingly, associated memory, processing, and related data transmission requirements are reduced compared to previous approaches. Turning toFIG.1, an example report100, representative of vehicle in-cabin insurance risk evaluation, is depicted. The report100may include a title105(e.g., In-Cabin Risk Evaluation Report), a photograph of a driver110, a name of a driver111, and a drive identification115including, for example, a calendar date116and a time117. The report100may also include value121(e.g., 67 centimeters) for a range of movement of most distinct postures120. The report100is a chronological diagram130of various driver postures129,131,132,133,134,135including details of a driver posture that the driver was in for the longest total time125. The driver posture that the driver was in for the longest total time125may include a skeletalFIG.126representing the posture, a total elapsed time127, and a number of individual occurrences of the posture128. The report100may further include a graph (e.g., a posture vs. time distribution bar chart) including a title (e.g., posture vs. time distribution), a number of times a given posture was determined to have been gained, and a total time in a given posture. The report100may also include the top five postures during an associated ride including skeletal figures representative of the respective postures150,155,160,165,170, a time in any given posture during an associated ride151,156,161,166,171, and a number of occurrences of any given posture during an associated ride152,157,162,167,172. With reference toFIG.2, a high-level block diagram of vehicle in-cabin system200is illustrated that may implement communications between a vehicle in-cabin device205and a remote computing device210(e.g., a remote server) to provide vehicle in-cabin device205location and/or orientation data, and vehicle interior occupant position data to, for example, an insurance related database270. The vehicle in-cabin system200may acquire data from a vehicle in-cabin device205and generate three dimensional (3D) models of a vehicle interior and occupants within the vehicle interior. The vehicle in-cabin system200may also acquire data from a microphone251,252and determine a source of sound and volume of sound within a vehicle interior. For clarity, only one vehicle in-cabin device205is depicted inFIG.2. 
WhileFIG.2depicts only one vehicle in-cabin device205, it should be understood that any number of vehicle in-cabin devices205may be supported. The vehicle in-cabin device205may include a memory220and a processor232for storing and executing, respectively, a module221. The module221, stored in the memory220as a set of computer-readable instructions, may be related to a vehicle interior and occupant position data collecting application that, when executed on the processor232, causes vehicle in-cabin device location data to be stored in the memory220. Execution of the module221may also cause the processor232to generate at least one 3D model of at least a portion of a vehicle occupant (e.g., a driver and/or passenger) within the vehicle interior. Execution of the module221may further cause the processor232to associate the vehicle in-cabin device location data with a time and, or date. Execution of the module221may further cause the processor232to communicate with the processor255of the remote computing device210via the network interface230, the vehicle in-cabin device communications network connection231and the wireless communication network215. The vehicle in-cabin device205may also include a compass sensor227, a global positioning system (GPS) sensor229, and a battery223. The vehicle in-cabin device205may further include an image sensor input235communicatively connected to, for example, a first image sensor236and a second image sensor237. While two image sensors236,237are depicted inFIG.2, any number of image sensors may be included within a vehicle interior monitoring system and may be located within a vehicle interior. The vehicle in-cabin device205may also include an infrared sensor input240communicatively connected to a first infrared sensor241and a second infrared sensor242. While two infrared sensors241,242are depicted inFIG.2, any number of infrared sensors may be included within a vehicle interior monitoring system and may be located within a vehicle interior. The vehicle in-cabin device205may further include an ultrasonic sensor input245communicatively connected to a first ultrasonic sensor246and a second ultrasonic sensor247. While two ultrasonic sensors246,247are depicted inFIG.2, any number of ultrasonic sensors may be included within a vehicle interior monitoring system and may be located within a vehicle interior. The vehicle in-cabin device205may also include a microphone input250communicatively connected to a first microphone251and a second microphone252. While two microphones251,252are depicted inFIG.2, any number of microphones may be included within a vehicle interior monitoring system and may be located within a vehicle interior. The vehicle in-cabin device205may further include a display/user input device225. As one example, a first image sensor236may be located in a driver-side A-pillar, a second image sensor237may be located in a passenger-side A-pillar, a first infrared sensor241may be located in a driver-side B-pillar, a second infrared sensor242may be located in a passenger-side B-pillar, first and second ultrasonic sensors246,247may be located in a center portion of a vehicle dash and first and second microphones251,252may be located on a bottom portion of a vehicle interior rearview mirror. The processor232may acquire position data from any one of, or all of, these sensors236,237,241,242,246,247,251,252and generate at least one 3D model (e.g., a 3D model of at least a portion of a vehicle driver) based on the position data. 
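A rough sketch of how position data gathered from the several in-cabin sensor inputs above might be packaged before a 3D model is generated; the class and field names here are illustrative assumptions and do not correspond to elements of FIG.2.

from dataclasses import dataclass
from typing import List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class PositionSample:
    sensor_id: str          # e.g., an image, infrared, or ultrasonic input
    points: List[Point3D]   # 3-D positions reported by that sensor
    timestamp_ms: int

def merge_position_data(samples: List[PositionSample]) -> List[Point3D]:
    # Combine the point sets from whichever sensors reported, producing one point
    # cloud that a downstream routine could use to build a 3D model of the driver.
    cloud: List[Point3D] = []
    for sample in samples:
        cloud.extend(sample.points)
    return cloud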
The processor232may transmit data representative of at least one 3D model to the remote computing device210. Alternatively, the processor232may transmit the position data to the remote computing device210and the processor255may generate at least one 3D model based on the position data. In either event, the processor232or the processor255retrieve data representative of a 3D model of a vehicle operator and compare the data representative of the 3D model of at least a portion of the vehicle driver with data representative of at least a portion of the 3D model vehicle operator. The processor232and, or the processor255may generate a vehicle driver warning based on the comparison of the data representative of the 3D model of at least a portion of the vehicle driver with data representative of at least a portion of the 3D model vehicle operator to warn the vehicle operator that his position is indicative of inattentiveness. Alternatively, the processor232and/or the processor255may generate an advisory based on the comparison of the data representative of the 3D model of at least a portion of the vehicle driver with data representative of at least a portion of the 3D model of a vehicle operator to advise the vehicle operator how to correct her position to improve attentiveness. The network interface230may be configured to facilitate communications between the vehicle in-cabin device205and the remote computing device210via any hardwired or wireless communication network215, including for example a wireless LAN, MAN or WAN, WiFi, the Internet, or any combination thereof. Moreover, the vehicle in-cabin device205may be communicatively connected to the remote computing device210via any suitable communication system, such as via any publicly available or privately owned communication network, including those that use wireless communication structures, such as wireless communication networks, including for example, wireless LANs and WANs, satellite and cellular telephone communication systems, etc. The vehicle in-cabin device205may cause insurance risk related data to be stored in a remote computing device210memory260and/or a remote insurance related database270. The remote computing device210may include a memory260and a processor255for storing and executing, respectively, a module261. The module261, stored in the memory260as a set of computer-readable instructions, facilitates applications related to determining a vehicle in-cabin device location and/or collecting insurance risk related data. The module261may also facilitate communications between the computing device210and the vehicle in-cabin device205via a network interface265, a remote computing device network connection266and the network215and other functions and instructions. The computing device210may be communicatively coupled to an insurance related database270. While the insurance related database270is shown inFIG.2as being communicatively coupled to the remote computing device210, it should be understood that the insurance related database270may be located within separate remote servers (or any other suitable computing devices) communicatively coupled to the remote computing device210. Optionally, portions of insurance related database270may be associated with memory modules that are separate from one another, such as a memory220of the vehicle in-cabin device205. Turning toFIG.3A, a vehicle device300ais depicted. The vehicle device300amay be similar to, for example, the vehicle device205ofFIG.2. 
The vehicle device300amay include a vehicle device registration module310a, a reference image data receiving module315a, an image sensor data receiving module320a, a geographic information system (GIS) data receiving module325a, a compass data receiving module327a, a vehicle device location/orientation module329a, a day/time data receiving module330a, a skeletal pose data generation module335a, a vehicle telemetry system data receiving module340a, a driver action prediction data generation module345a, a driver action time stamp data generation module350a, a driver action time stamp data transmission module355a, a driver warning generation module360a, and a report generation module365astored on a memory305aas, for example, computer-readable instructions. With reference toFIG.3B, a vehicle device300bis depicted. The vehicle device300bmay be similar to, for example, vehicle device205ofFIG.2. The vehicle device300bmay include a previously classified image data receiving module310b, a current image data receiving module315b, and a vehicle occupant action detection module320bstored on a memory305bas, for example, computer-readable instructions. Turning toFIG.4a remote computing device400is depicted. The remote computing device400may be similar to the remote computing device210ofFIG.2. The remote computing device400may include a reference image data transmission module410, a driver action time stamp data receiving module415, a driver action time stamp data analysis module420, a report generation module425, and a driver action time stamp data storage module430stored on a memory405. With reference toFIG.5A, a flow diagram for an example method of registering a vehicle device (e.g., vehicle device205,300a,300b) within a vehicle500ais depicted. The method500amay be implemented by a processor (e.g., processor232) executing, for example, a portion of the modules310a-365aofFIG.3A. In particular, the processor232may execute a vehicle device registration module310aand a reference image data receiving module315ato cause the processor232to acquire image data from, for example, an image sensor (e.g., image sensor265,270236237ofFIG.2) (block505a). The processor232may further execute the vehicle device registration module310ato cause the processor232to analyze the image sensor data to determine reference position of the vehicle device205,300a,300b(block510a). The processor232may further execute the vehicle device registration module310ato cause the processor232to store data representative of the determined reference position of the vehicle driver (block515a). The method500amay be implemented, for example, in response to a driver of a vehicle placing a vehicle device205,300a,300bwithin an associated vehicle (e.g., a driver may place the vehicle device205,300a,300bon a dash of the vehicle near a passenger side A-pillar). Thereby, a generic vehicle module205,300a,300bmay be installed by a vehicle driver in any vehicle. Vehicle driver postures may be rotated and scaled to be standardized (or normalized) vehicle device205,300a,300blocations within a vehicle and standardized (or normalized) to an average human (i.e., applicable to all drivers). Subsequent to being registered within a given vehicle, a vehicle device205,300a,300bmay use image sensors265,270to detect driver movements and record/categorize distinct driver postures (e.g., skeletal diagrams125,150,155,160,165,170. 
The methods and systems of the present disclosure may present results in two ways: 1) via detailed report of different postures; and 2) via graphical representation of the postures detected with timeframe (e.g., as in report100ofFIG.1). With reference toFIG.5B, a flow diagram for an example method of generating data representative of vehicle occupant actions500ais depicted. The method500amay be implemented by a processor (e.g., processor232) executing, for example, a portion of the modules310b-320bofFIG.3B. In particular, the processor232may execute the previously classified image data receiving module310bto cause the processor232to, for example, receive previously classified image data (block505b). The previously classified image data may be, for example, representative of images and/or extracted image features that have been previously classified as being indicative of degrees of vehicle operator risk. More particularly, the previously classified image data may include images and/or extracted image features that have previously been classified as being representative of a vehicle operator using a cellular telephone, a vehicle occupant looking out a vehicle side window, a vehicle occupant adjusting a vehicle radio, a vehicle occupant adjusting a vehicle heating, ventilation and air conditioning system, two vehicle occupants talking with one-another, a vehicle occupant reading a book or magazine, a vehicle occupant putting on makeup, a vehicle occupant looking at themselves in a mirror, etc. Alternatively, or additionally, the previously classified image data may, for example, be representative of known vehicle occupant locations/orientations, known cellular telephone locations/orientations, known vehicle occupant eye locations/orientations, known vehicle occupant head location/orientation, known vehicle occupant hand location/orientation, a known vehicle occupant torso location/orientation, a known seat belt location, a known vehicle seat location/orientation, etc. The processor232may execute the current image data receiving module315bto cause the processor232to, for example, receive current image data (block510b). For example, the processor232may receive current image data from at least one vehicle sensor (e.g., at least one of a compass sensor327, a GPS sensor329, an image sensor336,337, an infrared sensor341,342, an ultrasonic sensor346,347, and/or a microphone351,352). The current image data may include images and/or extracted image features that are representative of a vehicle occupant using a cellular telephone, a vehicle occupant looking out a vehicle side window, a vehicle occupant adjusting a vehicle radio, a vehicle occupant adjusting a vehicle heating, ventilation and air conditioning system, two vehicle occupants talking with one-another, a vehicle occupant reading a book or magazine, a vehicle occupant putting on makeup, a vehicle occupant looking at themselves in a mirror, etc. Alternatively, or additionally, the current image data may, for example, be representative of vehicle occupant locations/orientations, cellular telephone locations/orientations, vehicle occupant eye locations/orientations, vehicle occupant head location/orientation, vehicle occupant hand location/orientation, a vehicle occupant torso location/orientation, a seat belt location, a vehicle seat location/orientation, etc. The processor232may execute the vehicle occupant action detection module320bto cause the processor232to, for example, detect a vehicle occupant action (block515b). 
For example, the processor232may compare the current image data with the previously classified image data and may determine that a current image and/or extracted image feature is representative of one of the previously classified images and/or extracted image features. A vehicle occupant action may be detected using, for example, a probability function where each term may be a weighted factor derived from image data, and may include images and/or extracted image features that are representative of a vehicle occupant using a cellular telephone, a vehicle occupant looking out a vehicle side window, a vehicle occupant adjusting a vehicle radio, a vehicle occupant adjusting a vehicle heating, ventilation and air conditioning system, two vehicle occupants talking with one-another, a vehicle occupant reading a book or magazine, a vehicle occupant putting on makeup, a vehicle occupant looking at themselves in a mirror, vehicle occupant locations/orientations, cellular telephone locations/orientations, vehicle occupant eye locations/orientations, vehicle occupant head location/orientation, vehicle occupant hand location/orientation, a vehicle occupant torso location/orientation, a seat belt location, a vehicle seat location/orientation, etc. As a specific example, if the current image data has a higher likelihood of being representative of a vehicle occupant using a mobile telephone, the vehicle occupant action may be representative of use of a mobile telephone. When the current image data has a higher likelihood to be representative of the vehicle occupant looking out a side window, the vehicle occupant action may be representative of the vehicle occupant looking at an object of interest out the side window. Any given image or image feature may be weighted individually based upon, for example, a likelihood that the particular image or image feature is representative of a particular vehicle occupant action. Systems and methods of the present disclosure may include detecting, transmitting, and categorizing in aggregate. There may be times when the system encounters previously-unclassified behaviors. For example, a device may detect driver movements from the current image data. The device may attempt to classify the current image data to previously-classified image data. Based on the uniqueness of the current image data, the device may determine that the probability of a match to a known behavior is below an acceptable threshold. The system onboard the individual device may create a sample of the 3D or 2D image data and stores on the device storage medium. When the behavior logs are uploaded to an external server, the sample image data of the unique behavior may be uploaded. Thereby, at a central data repository, a sample of a unique behavior may be collected along with other samples of unique behaviors (sent from other individual systems). From the collection of samples, pattern recognition algorithms may be applied in order to categorize the previously-uncategorized behaviors. As new categories are developed, these new classifications may be sent to update other devices in the field so that their classification systems may be even more robust for all the possible behaviors that may occur. Turning toFIG.6, a flow diagram of a method of generating data representative of a driver's action along with data representative of a time stamp600is depicted. The method600may be implemented by a processor (e.g., processor232ofFIG.2) executing, for example, at least a portion of the modules310-365ofFIG.3. 
In particular, the processor232may execute an image sensor data receiving module320to cause the processor232to receive image sensor data from an image sensor (e.g., image sensor265,270236237ofFIG.2) (block605). The processor232may further execute the image sensor data receiving module320to cause the processor232to receive point cloud data from an image sensor (e.g., image sensor236,237ofFIG.2) (block610). The processor232may execute a skeletal pose data generation module335to cause the processor232to process the point cloud data through, for example, a pose estimator to generate skeletal diagram data (block615). The processor232may execute a reference image data receiving module315to cause the processor232to receive data representative of trained prediction modules (block620). The processor232may execute a driver action prediction data generation module345to cause the processor232to compare the skeletal diagram data with the trained prediction models (block620). The processor232may execute a day/time data receiving module330to cause the processor232to receive data representative of a day and/or time associated with a particular drive day/time (block625) The processor232may execute a driver action time stamp data generation module350to cause the processor232to generate data representative of driver actions along with a timestamp of the action based on the driver action data generated in block620and further based on the data representative of the day/time (block625). The processor232may execute a driver action time stamp data transmission module360to cause the processor232to transfer the driver action time stamp data to, for example, a driver's cellular telephone via, for example, a Bluetooth communication (e.g., wireless transceiver275ofFIG.2) (block630). The method600may use skeleton tracking and face tracking technologies to identify different driver postures. Driver joints data points (e.g., joints data points1806-1813ofFIG.18) may be clustered to create entries which represent a unique driver posture. These postures may then be used for making predictions about the subject's driving habits. With reference toFIG.7, and for prototype purposes, the system may implement a method to make predictions for a single driver700. The method700may be implemented by a processor (e.g., processor232ofFIG.2) executing, for example, a portion of the modules310-365ofFIG.3. In particular, the processor232may execute an image sensor data receiving module320to cause the processor232to collect image data (block705). The processor232may execute a skeletal pose data generation module335to cause the processor232to generate cluster data (block710). The processor232may execute a driver action prediction data generation module345to predict driver's actions (block715). Turning toFIG.8, a flow diagram for an example method of registering (or training) a vehicle device (e.g., vehicle device205,300) in a vehicle800. The method may be implemented by a processor (e.g., processor232ofFIG.2) executing, for example, at least a portion of the modules310-365ofFIG.3. The method800may include receiving data points for a driver's skeletal diagram (block805), initiating sensors and related programs (block810), setting a sensor range to “near mode” (block815), setting positioning to a “seated mode” (block820), and instructing a driver on proper position for calibration (block825) (e.g., driver should lean forward or move their hands/body (block826)). 
The method800may also include polling the sensors (e.g., image sensors236,237) for driver initial position (block830) and obtaining ten tracked points (e.g., points1806-1813ofFIG.18) (block835). The method may further include instructing a driver to move to a normal seated position (block840) and storing vehicle device registration data (block845). With reference toFIG.9, a flow diagram for a method categorizing various driver's joints points (e.g., points1806-1813ofFIG.18)900is depicted. The method900may include registering initial data points of a driver's skeleton diagram (block905), saving all ten triplets associated with a driver's skeleton diagram and associated timestamp (block910), finding nearest points for each point (block915) (e.g., select nearest two vertical and nearest two horizontal points (block916)). The method900may also include categorizing the highest points as a drivers head (e.g., point1807ofFIG.18) (block920), categorizing the lowest two points as the driver's hands (e.g., points1811,1813ofFIG.18) (block925), and storing the categorized points (block930). Turning toFIG.10, a flow diagram for an example method of predicting driver actions1000is depicted. The method1000may include tracking changes in the skeleton data points which are different than initial data points and record the changes in a database (block1005), calculating average depth of ten initial points (block1010), calculating variability percentage (block1015) (e.g., variability sensitivity may differ depending on point and algorithms (block1016)), draw ranges for ten joint positions (block1020), and determine if an trip ended (block1025). If the trip is determined to have ended (block1025), the method includes saving the last position and ending the method1000(block1030). If the trip is determined to not have ended (block1025), the method1000checks a driver's current position vs. last logged position (range) (block1035), and determines whether the driver's current position is new (block1040). If the driver's current position is determined to be new (block1040), the method1000saves all ten triplets and timestamps the triplets (block1045), and then returns to block1020. If the driver's current position is determined to not be new (block1040), the method1000returns to block1035. BR1 and TR1.1, 1.2 and 1.3 may be used to identify a new driver (e.g., an algorithm for recognizing the driver being a new driver). The system may use the detailed algorithm mentioned as described inFIGS.8-10. BR2 and TR2.1, 2.2 and 2.3 may be used to track movement of driver's upper body (e.g., an algorithm for tracking the movement of the driver's upper body is detailed inFIGS.8-10). BR3 and TR3.1, 3.2 and 3.3 may be used to log driver's clearly distinct postures at different times (e.g., an algorithm is to identify and log distinct postures from the movements tracked as part of BR2). The methods and systems of the present disclosure may be implemented using C++. Associated application programming interfaces (APIs) and software development kits (SDKs) may support these platforms. Source code for the system may be controlled with, for example, versioning software available from Tortoise SVN. With reference toFIG.11, a sequence diagram for generating a vehicle in-cabin insurance risk evaluation report1100is depicted. A report generator1105may record/log a trigger1111at instance1110. A data stream reader1115may identify a driver1120and record/log a trigger1121. 
A data manipulation1125may match/create and entry1126and return a driver ID1127. The data stream reader1115may read image sensor data1130and record/log a trigger1131. The data manipulation1125may store snapshot data1135and record/log a trigger1136. Cluster data1140may match a snapshot with an already registered cluster1145and may update cluster details1146at instance1150. The report generator1105may get data and create a report1156at instance1155. The data manipulation1125may return report data1161at instance1160, and the report generator1105may print the report1165. Turning toFIG.12, a detailed entity relationship (E-R) diagram1200is depicted. As depicted inFIG.12, a driver1230and a device1240may be connected to a has info block1205. The driver1230may be connected to a name1231, a driver ID1232, position coordinates1233(e.g., a face), a time stamp1235, and a device ID1236. The Device may be connected to a device ID1241, a model1242, and a manufacturer1243. The driver1230and a ride1245may be connected to a takes block1210. The ride1245may be connected to a ride ID1246, an end time1247, a vehicle1248(e.g., a car), a risk1249, and a start time1250. The ride1245and snapshots1255may be connected to a contains block1215. The snapshots1255may be connected to a snapshots ID1256, a ride ID1257, and a time stamp1258. The snapshots1255and a posture1260may be connected to a label with block1220. The posture126may be connected to a posture ID1261and a time stamp1262. The snapshots1255and joints1275may be connected to a consists of block1225. The joints1275may be connected to a x-value1266, a y-value1267, a z-value1268, a snapshot ID1269, and a joint ID1270. With reference toFIGS.13A and13B, a method for creating a read-only database account1300is depicted. A database layer1300may be developed in MySQL server. The method1300may start (block1305). All the rows in the database may be labeled as belonging to distribution G1(block1310). Block1310may include a data set1312and an initial covariance matrix, mean vector, H, and H, constant α, G11311. The database creation1300may restart from a first row (block1315). A probability that the row (1) dataset falls under distribution G1is obtained (blocks1320,1321). A probability that the row (2) dataset falls under distribution G1is obtained (blocks1325,1326). A categorization process1330may include finding a maximum probability1331. If a probability that the row (1) is found to be highest (block1330), the row is labeled with distribution G1(block1332). If a probability that the row (2) is found to be highest (block1330), a new G2is created and the row is labeled with distribution G2(block1333) and the updated G2and associated parameters are stored in the database as a cluster (block1334). The method1300proceeds to the next row in the database (block1335). A probability that the row (1) dataset falls under distribution G1, G2, . . . Gn, is obtained (blocks1340,1341). A probability that the row (2) dataset falls under a new distribution is obtained (blocks1345,1346). A categorization process1350may be similar to the categorization process1330. A determination as to whether the current row is the end of the database is made (block1355). If the current row is determined to not be the last row (block1355), the method1300returns to block1335. If the current row is determined to be the last row (block1355), the method1300proceeds to determine if the process discovered a new Gs or updated existing ones (block1360). 
If the process is determined to have discovered a new Gs or updated existing ones (block1360), the method1300returns to block1315. If the process is determined to not have discovered a new Gs or updated existing ones (block1360), all the existing clusters may be identified and results may be printed (block1365) and the method1300ends (block1370). Turning toFIG.14, a high-level block diagram of a development environment1400is depicted. The development environment1400may include an image sensor1410and a server1405hosting a database1415and VC++ implementation for collecting and clustering data. A user interface of the development environment may have a model car, parked car, or a dummy setup for a user to act as a driver. The system may analyze the movements of the driver during a trial period and may generate two sets of reports: 1) A live video of the skeleton frames with start, end and total time for the ride demo; and 2) A report shown also as charts of different postures and time spent for each posture as depicted, for example, inFIG.1. The development environment is focused on building a working model of the concept. The end-to-end system uses Microsoft Kinect, Microsoft Visual Studio C++, MySQL database and Microsoft Windows as platform. With reference toFIG.15, a system diagram1500is depicted for a development environment ofFIG.14. The system1500may include HTML and/or GUI APIs1505, a MYSQL database1510, and SigmaNI+Open NI SDKs1515. The system diagram1500depicts different C++ modules for different functionalities of the project. The system1500may also include an AppComponents::iDataManipulation component1525to interact with the MYSQL database1510. All other components may use APIs in this component to interact with MYSQL database. The system1500may further include an AppComponents::iReadDataStream component1535to interact with Sensor hardware via KinectSDK middleware (e.g., SigmaNI+Open NI SDKs1515). The iReadDataStream component1535may read a data stream from the sensor (e.g., image sensor236,237ofFIG.1) and may store the data structure in a Snapshot table for further clustering and processing. The system1500may also include an AppComponents::iClusterData component1530that may read snapshot data stored by the iReadDataStream component1535and may cluster the data to identify driver postures. The AppComponents::iClusterData component1530may begin to function once new data is stored in a database by the iReadDataStream component1535. The system1500may further include an AppComponents::iPredictionModule component1540that may function as a prediction engine, and may have algorithms to implement driving habit analysis for the captured data. The system1500may also include an AppComponents::iReportGenerator component1520that, for successful demonstration, a report will be generated. The AppComponents::iReportGenerator component1520may have APIs to read the data via iDataManipulation component1525from the database and generate report. This component will also display the live video of the participant on the screen. For the live video, it will capture the data directly from iReadDataStream component1535. An AppComponents::iDataManipulation1525may include input related to business objects acquired from or required by various business methods in other components. Output/Service may be provided for business objects extracted from a database via data access objects and methods. 
Depending on which component is calling, the iDataManipulation component1525may have generic and client-specific APIs for serving various business objects. Component/Entity process: Data connection; Connection pool; DAOs for the following entities: Driver; Snapshot Object; RideDetails; and PosturesDetails. Constraints may include an initial connection pool size of ten and a maximum size of thirty. An AppComponents::iReadDataStream component1535may include input for an event to start and stop reading a video and sensor data stream from hardware. SDK APIs may be used for reading skeleton, face and hand tracking data. Output/Service may be provided via snapshot objects, and relevant joint coordinates may be output and stored in the database using the data manipulation component1525. Live data may be transported to the ReportGenerator component1520. Component/Entity process may work as a batch process to start and stop logging the read data in the database when triggered. The component also needs to be able to transmit live data to the iReportGenerator component1520to show it on screen. Constraints may include appropriate buffering and error handling to make sure appropriate error messages are displayed/captured for downstream components. An AppComponents::iClusterData component1530may take as input snapshot data read from iReadDataStream and a database. Output/Service may be provided to assign a postureID to a snapshot and update the posture database. Component/Entity process may include: Retrieving snapshot and posture information from the database; Matching snapshots with postures; Inserting new snapshot/posture information into the database; Implementations of unsupervised clustering algorithms. Constraints may include a limit on the number of clusters generated. An AppComponents::iPredictionModule component1540may serve to take in data from a database, and turn the data into information to leverage. The AppComponents::iPredictionModule component1540may identify risky drivers, review their in-cabin driving habits, and eventually act to curb these risky habits. This section explains how the data may be modeled to better understand which factors correlate to a defined risk metric and how certain behavior patterns contribute to a higher insurance risk rating. An AppComponents::iReportGenerator component1520may include input information taken from a database, the ten coordinates taken from the data stream during a demo, a start time, an elapsed time and some dummy information. Output/Service may be provided including a video of skeleton frames with start time and elapsed time and a report that displays charts that may illustrate what happened during the demo. The report may include a picture of the driver, the driver's name, and the range of movement of most distinct postures. The report may also have a line graph and a bar graph that show how much time the driver spent in each posture. The report may display the skeleton coordinates of the five postures the driver was in the most along with the time and number of occurrences of each. Component/Entity process may include: a Generator; a Report; a Video; DAOs for the following entities: a Ride; a Posture; and a Joint. Constraints may include a demo that may have at least five different postures. The number of postures and the number of occurrences should not exceed the maximum array length. Turning toFIG.16, a system for generating data representative of a vehicle in-cabin insurance risk evaluation1600is depicted. 
The system1600may include a plurality of vehicle devices1605communicatively coupled to a data processing, filtering and load balancing server1615via a wireless webservice port1610to send and receive data. The system1600may also include a database server1620and database1621to store in-cabin data, and an algorithm server1625to continuously refine algorithm data and an associated prediction engine. When multiple sensors are used, a SigmaNI wrapper may be used as an abstraction layer for code. This may ensure that if a sensor is changed, or different sensors are used, minimal code changes are required. With reference toFIG.17, when SigmaNI is not approved software, an implementation1700may directly interact with a SDK1710to get the driver data from a sensor1705for generating data representative of vehicle in-cabin insurance risk evaluations1715and storing the data in a database1720. The system1700may use sensors (e.g., image sensor236,237ofFIG.1) for detecting the driving postures, such as provided by Microsoft Kinect for Windows, Carmine 1.09 and/or Softkinect DS325. The following SDKs may be used with the above hardware: a Kinect SDK, an OpenNI, a Softkinect SDK and/or a SigmaNI. Turning toFIG.18, a posture (or skeletal diagram)1800may include ten joint positions1806-1813for a driver's upper body1805. An associated cluster may include ten balls with a radius of 10 cm centered at ten 3-dimensional points. A match (posture p, cluster c) may return true if all ten joint positions of the posture are contained in the corresponding ten balls of the cluster; otherwise it returns false. The distance between two points may be measured using the Euclidean distance. For example, given a pair of 3-D points, p=(p1, p2, p3) and q=(q1, q2, q3): distance(p, q)=sqrt((p1−q1)^2+(p2−q2)^2+(p3−q3)^2). A cube in three dimensions consists of all points (x, y, z) satisfying the following conditions: a<=x<=b, c<=y<=d, e<=z<=f, where b−a=d−c=f−e. At initialization, a first cluster may be defined by the ten joint positions of the first posture and added to the initial cluster list, denoted CL. Loop: for each subsequent posture, say P, and for each cluster in CL, say C, if Match(P, C), label P with C and break; End For. If P does not have a cluster label, create a new cluster C′ and add C′ to CL; End For. BR4 [TR 4.1, 4.2, 4.3 and 4.4]: create a risk profile of the driver. With reference toFIG.19, an object design for a detailed entity relationship (E-R) diagram1900is depicted. An associated database layer may be developed in a MySQL server. The entity relationship1900may include a device1905connected to a driver1910, connected to a ride1920, connected to a snapshot1930which is connected to both joints1935and a posture1945. The device1905may include a device ID1906, a model1907, and a manufacturer1908. The driver1910may include a driver ID1911, a device ID1912, a name1913, face coordinates1914, and a time stamp1915. The ride1920may include a ride ID1921, a driver ID1922, a start time1923, an end time1924, a car1925, and a risk1926. The snapshot1930may include a snapshot ID1931, a ride ID1932, and a time stamp1933. The joints1935may include a joint ID1936, a snapshot ID1937, an x-value1938, a y-value1939, and a z-value1940. The posture1945may include a posture ID1946, a snapshot ID1947, and a time stamp1948. Turning toFIG.20, a class diagram2000may include a BaseDAO2005, a DeviceDAO2010, a DriverDAO2015, a SnapshotDAO2025, a JointDAO2035, a PostureDAO2045, and a RideDAO2050. 
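The Match function and the cluster-assignment loop described in connection withFIG.18can be sketched minimally in Python as follows, assuming that joint coordinates are expressed in centimetres and that a cluster is represented simply by the ten joint positions of the posture that created it; both assumptions go beyond what is stated above.

import math

RADIUS_CM = 10.0  # each cluster is ten balls of radius 10 cm around ten 3-D centers

def distance(p, q):
    # Euclidean distance between two 3-D points, as in the formula above.
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 + (p[2] - q[2]) ** 2)

def match(posture, cluster):
    # True only if every one of the ten joint positions lies inside the
    # corresponding ball of the cluster.
    return all(distance(j, c) <= RADIUS_CM for j, c in zip(posture, cluster))

def cluster_postures(postures):
    # postures: list of postures, each a list of ten (x, y, z) joint positions.
    cluster_list = [list(postures[0])]          # first cluster from the first posture
    labels = [0]
    for posture in postures[1:]:
        for idx, cluster in enumerate(cluster_list):
            if match(posture, cluster):
                labels.append(idx)              # label P with C and stop searching
                break
        else:
            cluster_list.append(list(posture))  # no match: create a new cluster C'
            labels.append(len(cluster_list) - 1)
    return labels, cluster_list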
The BaseDAO2005may include a con: DBConnection2006and a #getConnection( )2007. The DeviceDAO2010may include a DeviceID: String2011, a Model: String2012, a Manufacturer: String2013, and a getters/setters2014. The DriverDAO2015may include a DriverID: String2016, a Name: String2017, a FaceCoordinates: Array(int100)2018, a Device Obj: Device DAO2019, a timestamp: timestamp2020, and a getters/setters2021. The SnapshotDAO2025may include a SnapshotID: String2026, a RideID: String2027, a TimeStamp: timestamp2028, a Joints: Array (jointDAO10)2029, and a getters/setters2030. The JointDAO2035may include a JointID: String2036, a X: int2037, a Y: int2038, a Z: int2039, and a getters/setters2040. The PostureDAO2045may include a PostureID: String2046, a SnapshotID: String2047, a getters/setters2048, and a fetTopPostureIDs (Postures)2049. The RideDAO2050may include a RideID: String2051, a DriverObj: DriverDAO2052, a StartTime: timestamp2053, an EndTime: timestamp2054, and a getters/setters2055. With reference toFIG.21, a class diagram2100may include a ReadSensorData component2105, a ReadKinectSensorData component2115, and a ReadCarmineSensorData component2120. The ReadSensorData component2105may include a Snapshot: SnapshotDAO2106, an initialize( ) parameter2107, a readDataStream( ) parameter2108, a saveSnapshot( ) parameter2109, a transmitLive( ) parameter2110, and a getters/setters parameter2111. The ReadKinectSensorData component2115may include an initializeKinect( ) parameter2116. The ReadCarmineSensorData component2120may include an initializeCarmine( ) parameter2121. Turning toFIG.22, a class diagram2200may include a ClusterData component2205, a Posture component2225, a Snapshot component2230, and a Joint component2235. The ClusterData component2205may include a surSS: SnapShot2206, a postures: ArrayList(Postures)2207, a postureID: integer2208, a Match_Posture( )2209, a Update_DB( )2210, and a getters/setters2211. The Posture component2225may include a postureID: integer2226, a Centers: Array(Joint)2227, a Radius: Number2228, and a getters/setters2229. The Snapshot component2230may include a SnapshotID: Integer2231, a Joints: Array(Joint)2232, a timestamp: Number2233, and a getters/setters2234. The Joint component2235may include a x: Number2236, a y: Number2237, a z: Number2238, and a getters/setters2239. With reference toFIG.23, a class diagram2300may include a ReportGenerator component2305, a Report component2310, and a Video component2320. The ReportGenerator component2305may include a Report: Report2306, a Video: Video2307, and a getters/setters2308. The Report component2310may include a DriverName: String2311, a RideObj: RideDAO2312, a getters/setters2313, a drawLineGraph(RideObj)2314, a drawBarGraph(RideObj)2315, and a drawTopPostures(RideObj)2316. The Video component2320may include a CurrentPosture: PostureDAO2321, a StartTime: timestamp2322, a CurrentTime: timestamp2323, a getters/setters2324, a displayPosture(CurrentPosture)2325, and a displayTimes(startTime, currentTime)2326. A car-sharing insurance product could more specifically insure the driver, regardless of the car. Traditional underwriting looks at the driver-vehicle combination. What car-sharing would allow you to do is to more heavily weight the risk of the driver alone. The methods and systems of the present disclosure may allow car-sharing to get that risk information on the driver and carry it forward to whatever car they use. This would be tailored for that particular driver's behavior, rather than demographic and vehicle-use factors. 
This would allow certain car-sharing entities to have a cost advantage. If they are paying for less insurance, or more specific insurance, they could pass those savings to their customers and have a retention strategy. The methods and systems of the present disclosure may assist emergency responders by, for example, using gesture recognition systems from an aftermarket/insurance device to provide an estimate to first responders about the severity of the crash and what kinds of resources/equipment/expertise are required in order to extricate. The gesture recognition systems from an aftermarket/insurance device may also be used to provide an estimate to first responders about the severity of the crash and what kinds of resources/equipment/expertise are required in order to triage, that is, to have some idea of what the emergency medical needs could be upon arrival. Since the "golden hour" is so critical, and it's not always known how much of that hour has already expired, even a preliminary or broad clue could be helpful in the triage process. The aftermarket gesture recognition device is already operating at the time of the crash. It is collecting data about the driver's position/posture and the location of the arms relative to the body and structures in the vehicle (e.g., the steering wheel). Accelerometers in the device are able to recognize that a crash has occurred (if a pre-determined acceleration threshold has been reached). Upon crash detection the device could transmit via the driver's phone (which is already connected via Bluetooth) or perhaps transmit using an onboard transmitter that uses emergency frequencies (and therefore does not require the consumer to pay data fees). Gesture recognition from any original equipment or aftermarket gesture tracking device may be used, whether or not for insurance purposes. The methods and systems of the present disclosure may allow for a transition from automated to manual driving mode in the case of vehicle automation systems operating the piloting functions with the human in a supervisory role. The vehicle encounters a situation where it needs to transfer control to the driver, but the driver may or may not be ready to resume control. The methods and systems of the present disclosure may allow gesture recognition systems, or any gesture recognition system, to be used to determine if the driver is ready to resume control. If he/she is not ready, the system may act to get his/her attention quickly. The gesture recognition would be used to ascertain whether the driver is ready to resume control by evaluating the driver's posture, the location of the hands, the orientation of the head, and body language. Machine learning may be used to evaluate driver engagement/attention/readiness-to-engage based on those variables. The gesture recognition could be any original in-vehicle equipment or aftermarket device. The methods and systems of the present disclosure may distinguish between automated and manual driving modalities for variable insurance rating in a scenario where there are many vehicles that are capable of automatically operating the piloting functions and of the driver manually operating the piloting functions. The driver can elect to switch between automated and manual driving modes at any point during a drive. Gesture recognition would be utilized to distinguish whether a driver is operating the vehicle manually, or whether the vehicle is operating automatically. This could be determined through either OEM or aftermarket hardware. 
The sensors and software algorithms are able to differentiate between automatic and manual driving based on hand movements, head movements, body posture, and eye movements. They can distinguish between the driver making hand contact with the steering wheel (to show that he/she is supervising) while acting as a supervisor, versus the driver providing steering input for piloting purposes. Who/what is operating the vehicle would determine what real-time insurance rates the customer is charged. The methods and systems of the present disclosure may provide a tool for measuring driver distraction where gesture recognition may be used to identify, distinguish and quantify driver distraction for safety evaluation of vehicle automation systems. This would be used to define metrics and evaluate safety risk for the vehicle human-machine interface as a whole, or individual systems in the case where vehicles have automation and vehicle-to-vehicle/vehicle-to-infrastructure communication capabilities. With vehicle automation, the vehicle is capable of performing piloting functions without driver input. With vehicle-to-vehicle/vehicle-to-infrastructure communication, the vehicle is capable of communicating data about the first vehicle's dynamics or environmental traffic/weather conditions around the first vehicle. For any entity looking to evaluate the safety or risk presented by a vehicle with automated driving capabilities, DRIVES gesture recognition could be useful to quantify risk presented by driver distraction resulting from any vehicle system in the cabin (e.g., an entertainment system, a feature that automates one or more functions of piloting, or a convenience system). With the rise of vehicle automation systems and capabilities, tools will be needed to evaluate the safety of individual systems in the car, or the car as a whole. Much uncertainty remains about how these systems will be used by drivers (especially those who are not from the community of automotive engineering or automotive safety). Determining whether they create a net benefit to drivers is a big question. The methods and systems of the present disclosure may allow gesture recognition to be used to identify the presence of distracted driving behaviors that are correlated with the presence of vehicle automation capabilities. The distraction could be quantified by the duration that the driver engages in certain behaviors. Risk quantification may also be measured by weighting certain behaviors with higher severity than other behaviors, so the duration times are weighted. Risk quantification may also differentiate subcategories of behaviors based on the degree of motion of the hands, head, eyes, and body. For example, the methods and systems of the present disclosure may distinguish texting with the phone on the steering wheel from texting with the phone in the driver's lap requiring frequent glances up and down. The latter would be quantified with greater risk in terms of severity of distraction. The purpose of this risk evaluation could be for reasons including, but not limited to, adhering to vehicle regulations, providing information to the general public, vehicle design testing, or insurance purposes. This detailed description is to be construed as exemplary only and does not describe every possible embodiment, as describing every possible embodiment would be impractical, if not impossible. One may implement numerous alternate embodiments, using either current technology or technology developed after the filing date of this application. 
| 52,534 |
11861918 | DETAILED DESCRIPTION Illustrative embodiments will be described herein with reference to exemplary information processing systems and associated computers, servers, storage devices and other processing devices. It is to be appreciated, however, that embodiments are not restricted to use with the particular illustrative system and device configurations shown. Accordingly, the term “information processing system” as used herein is intended to be broadly construed, so as to encompass, for example, processing systems comprising cloud computing and storage systems, as well as other types of processing systems comprising various combinations of physical and virtual processing resources. An information processing system may therefore comprise, for example, at least one data center or other type of cloud-based system that includes one or more clouds hosting tenants that access cloud resources. Such systems are considered examples of what are more generally referred to herein as cloud-based computing environments. Some cloud infrastructures are within the exclusive control and management of a given enterprise, and therefore are considered “private clouds.” The term “enterprise” as used herein is intended to be broadly construed, and may comprise, for example, one or more businesses, one or more corporations or any other one or more entities, groups, or organizations. An “entity” as illustratively used herein may be a person or system. On the other hand, cloud infrastructures that are used by multiple enterprises, and not necessarily controlled or managed by any of the multiple enterprises but rather respectively controlled and managed by third-party cloud providers, are typically considered “public clouds.” Enterprises can choose to host their applications or services on private clouds, public clouds, and/or a combination of private and public clouds (hybrid clouds) with a vast array of computing resources attached to or otherwise a part of the infrastructure. Numerous other types of enterprise computing and storage systems are also encompassed by the term “information processing system” as that term is broadly used herein. As used herein, “real-time” refers to output within strict time constraints. Real-time output can be understood to be instantaneous or on the order of milliseconds or microseconds. Real-time output can occur when the connections with a network are continuous and a user device receives messages without any significant time delay. Of course, it should be understood that depending on the particular temporal nature of the system in which an embodiment is implemented, other appropriate timescales that provide at least contemporaneous performance and output can be achieved. As used herein, “natural language” is to be broadly construed to refer to any language that has evolved naturally in humans. Non-limiting examples of natural languages include, for example, English, Spanish, French and Hindi. As used herein, “natural language processing (NLP)” is to be broadly construed to refer to interactions between computers and human (natural) languages, where computers are able to derive meaning from human or natural language input, and respond to requests and/or commands provided by a human using natural language. 
As used herein, “natural language understanding (NLU)” is to be broadly construed to refer to a sub-category of natural language processing in artificial intelligence (AI) where natural language input is disassembled and parsed to determine appropriate syntactic and semantic schemes in order to comprehend and use languages. NLU may rely on computational models that draw from linguistics to understand how language works, and comprehend what is being said by a user. As used herein, “image” is to be broadly construed to refer to a visual representation which is, for example, produced on an electronic display such as a computer screen or other screen of a device. An image as used herein may include, but is not limited to, a screen shot, window, message box, error message or other visual representation that may be produced on a device. Images can be in the form of one or more files in formats including, but not necessarily limited to, Joint Photographic Experts Group (JPEG), Portable Network Graphics (PNG), Graphics Interchange Format (GIF), and Tagged Image File (TIFF). In an illustrative embodiment, machine learning (ML) techniques are used to extract knowledge from images associated with system problems to predict the issues corresponding to the problems and provide users with targeted resolution steps. One or more embodiments leverage historical support case information comprising images and videos, and use the historical support case information as training data for one or more ML models. The trained ML models receive images corresponding to a problem from a user and determine matching images and support cases to automatically recommend resolutions which are specific to the problem. Although the embodiments herein are discussed in terms of images, the embodiments may alternatively apply to videos produced on a device in one or more formats such as, but not necessarily limited to, Moving Picture Experts Group (MPEG), Audio Video Interleave (AVI) and Windows Media Video (WMV). FIG.1shows an information processing system100configured in accordance with an illustrative embodiment. The information processing system100comprises user devices102-1,102-2, . . .102-M (collectively “user devices102”). The user devices102communicate over a network104with an image analysis and resolution platform110. The information processing system further comprises an assisted support channel170, which may communicate over the network with the user devices102and the image analysis and resolution platform110. The user devices102can comprise, for example, Internet of Things (IoT) devices, desktop, laptop or tablet computers, mobile telephones, or other types of processing devices capable of communicating with the image analysis and resolution platform110and/or the assisted support channel170over the network104. Such devices are examples of what are more generally referred to herein as “processing devices.” Some of these processing devices are also generally referred to herein as “computers.” The user devices102may also or alternately comprise virtualized computing resources, such as virtual machines (VMs), containers, etc. The user devices102in some embodiments comprise respective computers associated with a particular company, organization or other enterprise. The variable M and other similar index variables herein such as K and L are assumed to be arbitrary positive integers greater than or equal to two. 
The assisted support channel170comprises an interface layer171, a customer relationship management (CRM) system173and a file store175. According to one or more embodiments, a CRM system173includes technical support personnel (e.g., agents) tasked with assisting users that experience issues with their devices, systems, software, firmware, etc. Users such as, for example, customers, may contact the technical support personnel when they have device and/or system problems and require technical assistance to solve the problems. Users may access the assisted support channel170through one or more interfaces supported by the interface layer171. The interfaces include multiple communication channels, for example, websites, email, live chat, social media, mobile applications and telephone sources. Users can access the assisted support channel170through their user devices102. In response to user inquiries and/or requests for assistance, technical support personnel may create support tickets and/or cases summarizing the issues and the steps taken to resolve the issues. As part of agent-assisted support tickets and/or cases, screen shots and images related to the issues are collected along with any textual log files from the user (e.g., customer) and stored in the file store175. These images, as well as any textual log files, can be used as reference data for technical support personnel to help diagnose and fix that specific case. After the case is complete, this data and these images remain in the file store175as historical records. The terms "client," "customer" or "user" herein are intended to be broadly construed so as to encompass numerous arrangements of human, hardware, software or firmware entities, as well as combinations of such entities. Image analysis and problem resolution services may be provided for users utilizing one or more machine learning models, although it is to be appreciated that other types of infrastructure arrangements could be used. At least a portion of the available services and functionalities provided by the image analysis and resolution platform110in some embodiments may be provided under Function-as-a-Service ("FaaS"), Containers-as-a-Service ("CaaS") and/or Platform-as-a-Service ("PaaS") models, including cloud-based FaaS, CaaS and PaaS environments. Although not explicitly shown inFIG.1, one or more input-output devices such as keyboards, displays or other types of input-output devices may be used to support one or more user interfaces to the image analysis and resolution platform110, as well as to support communication between the image analysis and resolution platform110and connected devices (e.g., user devices102), between the assisted support channel170and connected devices and/or between other related systems and devices not explicitly shown. In some embodiments, the user devices102are assumed to be associated with repair technicians, system administrators, information technology (IT) managers, software developers, release management personnel or other authorized personnel configured to access and utilize the image analysis and resolution platform110. The image analysis and resolution platform110and the assisted support channel170in the present embodiment are assumed to be accessible to the user devices102, and vice-versa, over the network104. In addition, the image analysis and resolution platform110is accessible to the assisted support channel170, and vice-versa, over the network104. 
The network104is assumed to comprise a portion of a global computer network such as the Internet, although other types of networks can be part of the network104, including a wide area network (WAN), a local area network (LAN), a satellite network, a telephone or cable network, a cellular network, a wireless network such as a WiFi or WiMAX network, or various portions or combinations of these and other types of networks. The network104in some embodiments therefore comprises combinations of multiple different types of networks each comprising processing devices configured to communicate using Internet Protocol (IP) or other related communication protocols. As a more particular example, some embodiments may utilize one or more high-speed local networks in which associated processing devices communicate with one another utilizing Peripheral Component Interconnect express (PCIe) cards of those devices, and networking protocols such as InfiniBand, Gigabit Ethernet or Fibre Channel. Numerous alternative networking arrangements are possible in a given embodiment, as will be appreciated by those skilled in the art. The image analysis and resolution platform110, on behalf of respective infrastructure tenants each corresponding to one or more users associated with respective ones of the user devices102, provides a platform for analyzing incoming image inputs and recommending appropriate resolutions. Referring toFIG.1, the image analysis and resolution platform110comprises an interface layer120, an image analysis and annotation engine130, an image matching and recommendation engine140, a data repository150and a search engine160. The image analysis and annotation engine130includes a text extraction component131and an intent analysis component133. The image matching and recommendation engine140includes an image analysis component141and a recommendation component143. The search engine160includes a knowledge base161. When there are system issues and/or outages, users may be presented with certain images on their devices (e.g., user devices102). For example, in connection with a blue screen of death ("BSOD") problem, a user may see an image similar to the image400inFIG.4, but without the borders around the different textual portions. In another example, a new device may fail to function due to a missing device driver, and the user may encounter an image including one or more textual phrases about missing and/or uninstalled components. Referring to the system100inFIG.1and to the operational flow200inFIG.2, the image analysis and annotation engine130and the image matching and recommendation engine140provide end-to-end capability for enhanced self-support. The image analysis and annotation engine130uses a mask region-based convolutional neural network (Mask-R-CNN) algorithm, and the image matching and recommendation engine140uses natural language processing (NLP) and distance similarity algorithms. Referring to blocks201and202of operational flow200, following a start of the operational flow200, images and case resolution metadata are collected through assisted support channels (e.g., assisted support channel170) from an assisted support channel file store275(or175inFIG.1) and passed to the image analysis and annotation engine130, which semantically analyzes the image and extracts text from the image. 
For example, referring toFIGS.1and2, at block203, the images are analyzed by the text extraction component131, and at block204, the intent analysis component133analyzes the extracted text to determine semantic intent from an image. Many images include useful text that is typically present in the image to convey important information to a user. For example, important textual objects and entities found in the image text can be critical to understanding the context and semantic intent of an issue and may assist with providing automatic case resolution or guidance to a customer. In keeping with the example above, where a device may fail to function due to a missing device driver, text in an image associated with that failure may convey to a user that a device driver is not installed. Based on the identified intents, the extracted text is annotated with intent identifiers and, at block205, the identified intents and annotated extracted text are stored along with the image and case resolution data in the platform data repository250(or150inFIG.1), which forms a foundation for an image-based search by users. The data repository150/250includes mapped relationships between images, extracted text, intents and case resolutions to be used in connection with image-based analysis performed by the image analysis and resolution platform110upon receipt of one or more images from a user/customer in connection with a scenario where a user/customer is accessing a self-support portal to resolve issues and/or problems without assistance from support personnel. According to one or more embodiments, a self-support portal provides a user with access to the image analysis and resolution platform110via the interface layer120. Similar to the interface layer171, the interface layer120supports multiple communication channels, for example, websites, email, live chat, social media and mobile applications. Users can access the image analysis and resolution platform110through their user devices102. Referring to block210inFIG.2, a user seeking to solve a problem or address an issue without help from technical support personnel can upload an image file (e.g., JPEG, GIF, TIFF, PNG or other type of image file) associated with the issue or problem via the interface layer120and the image is received by the image analysis and resolution platform110. As noted herein, the image file may correspond to an image that appears on the screen of a user device in the event of a problem or issue such as, for example, a BSOD or an image including a message about a missing or uninstalled component. Similar to blocks203and204discussed above, at blocks211and212, the image analysis component141of the image matching and recommendation engine140leverages the text extraction component131to extract text from the uploaded image (or images if a user uploads multiple images), and leverages the intent analysis component133to analyze the extracted text to determine semantic intent from the image. Referring to block213, based on the identified intent, the recommendation component143predicts a resolution to the problem by matching the identified intent with corresponding historical images and associated resolutions stored in the platform data repository250(or150inFIG.1). In addition, referring to block214, a search engine (e.g., search engine160inFIG.1) searches a knowledge base (e.g., knowledge base161inFIG.1) with data annotated with intent identifiers based on the identified intent. 
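A minimal sketch of how one historical record could be assembled for the platform data repository per blocks201through205 is shown below; the helper names extract_text and classify_intent are placeholders standing in for the text extraction and intent analysis components rather than names taken from the source.

def build_repository_record(image_path, case_resolution, extract_text, classify_intent):
    # Blocks 201-205: take a support-case image and its resolution metadata,
    # extract the embedded text, determine the semantic intent(s), and store
    # the mapped relationship used later for image-based search.
    text = extract_text(image_path)
    intents = classify_intent(text)
    return {
        "image": image_path,
        "extracted_text": text,
        "intents": intents,             # annotation with intent identifiers
        "resolution": case_resolution,  # case resolution data from the support channel
    }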
The knowledge base161includes, for example, articles, product manuals and guided flows for resolving the given problem, which can be returned to a user as search results in connection with a recommended resolution. As per block215, the recommendation component143provides the predicted issue, recommended resolution and search results to a user. The text extraction component131of the image analysis and annotation engine130extracts text and semantics from an uploaded customer image comprising structured and unstructured data. In order to parse through unstructured text, the embodiments utilize a combination of a Mask-R-CNN algorithm with optical character recognition (OCR), which accomplishes object detection, text object segmentation and text extraction for an inputted image. The pseudocode300inFIG.3corresponds to utilization of OCR in connection with recognizing, reading and extracting text embedded in an image. The code includes, for example, Python® code. Referring to the image400inFIG.4, the combined Mask-R-CNN and OCR algorithm creates boundary boxes401-1and401-2around areas in the image that include text objects. Mask-R-CNN is an object detection model used in the embodiments to provide a flexible mechanism to identify regions of interest (RoIs) in the images. In the images, the model identifies two types of objects, text objects and non-text objects. The Mask-R-CNN model is trained with OCR-related data to identify RoIs in an image that are highly likely to contain text. This identification of RoIs is referred to herein as text localization. In addition to text localization, the model reads and extracts the text in a process referred to herein as text recognition. Referring toFIG.5, the Mask-R-CNN model is part of a multi-task network500that predicts multiple outputs from a single input image to achieve both text localization and text recognition. The model comprises three heads503,504and506, where a first head (bounding-box regression head503) proposes boundary boxes that are likely to contain objects of interest, a second head (classification head504) classifies which type of objects, for example, text or non-text (e.g., graphics), are contained in each box, and the third head (text recognition head506) recognizes the text using, for example, OCR. The bounding-box regression head503implements a region proposal network followed by a bounding-box regression network. The output of the bounding-box regression head503predicts RoIs/locations507in the image that might contain text. The classification head504comprises a binary classification component that estimates a class of an object inside an RoI as text versus everything else (e.g., non-text). The text recognition head506receives feature maps as an input from a convolutional stack502and RoI coordinates generated from the bounding-box regression head503. Convolutional neural networks (CNNs) apply a filter to an input to generate a feature map summarizing the presence of detected features in the input. The stacking of convolutional layers in the convolutional stack502allows for a hierarchical decomposition of the input, components of which are input to the heads503,504and506. The multi-task network500uses the identified RoI/locations507and fetches the relevant representations for each region from the convolutional stack502. According to an illustrative embodiment, a convolutional machine learning method with short-range kernel width is utilized. 
At each spatial step, the output of convolutions is used to predict an output letter and the overall sequence is collapsed through a Connectionist Temporal Classification (CTC) layer to output a final sequence for an RoI. The embodiments differ from conventional Mask-R-CNN algorithms at least by introducing text localization and recognition. Leveraging the Mask-R-CNN and OCR model and using labeled (annotated) images for training permits text extraction with higher performance and efficiency than conventional OCR-based mechanisms. Referring toFIGS.1and6, the intent analysis component133/633uses NLU to analyze text and classify intent. The embodiments model a text message, where the words come one after another over a period of time, as a time series, and leverage a Recurrent Neural Network (RNN). In order to efficiently analyze a message, the embodiments use a bi-directional RNN, which uses two separate processing sequences, one from left to right and another from right to left. In order to address RNNs having exploding or vanishing gradient issues for longer and more complex dialogs or messages, the embodiments utilize a bi-directional RNN with long short-term memory (LSTM) for the NLU. The machine learning model used by the ML layer637is a bi-directional RNN with LSTM model. Unlike a traditional neural network, where input and output are independent, in an RNN the output from a previous step feeds into the input of a current step. As a result, when performing language processing, previous words are taken into account when predicting subsequent words of a sentence. An RNN includes a hidden state which remembers one or more words in the sentence. The bi-directional RNN of the embodiments performs bi-directional processing of a sentence (from past and from future in two directions in parallel). A bi-directional RNN addresses problems where sentences are too long, and some previous words in the sentence are not available due to limited hidden states. In addition, LSTM utilized by the embodiments introduces advanced memory units and gates to an RNN to improve accuracy and performance of the machine learning model. Referring to the operational flow600inFIG.6, intent analysis by the intent analysis component633(which is the same or similar to the intent analysis component133) uses intent corpus data636to train the machine learning model. This corpus data contains words and/or phrases and the corresponding intent associated with each of the words and/or phrases. A small sample of the intent corpus data used to train the machine learning model is shown in the table700inFIG.7. Referring to the table700, textual message samples are shown as corresponding to semantic intents for drivers ("System driver; Device driver installation"), blue screen of death ("BSOD") and corruption ("File corruption; Registry file corruption; Hard drive failure"). The training data is input to the training component638of the ML layer637to train the machine learning model. Referring toFIG.6, according to an embodiment, a pre-processing component634cleans any unwanted characters and stop words from the corpus data. The pre-processing further comprises stemming and lemmatization, as well as changing text to lower case, removing punctuation, and removing incorrect or unnecessary characters. Once pre-processing and data cleanup are performed, the feature engineering component635tokenizes the input list of words in the sentences and/or phrases. 
Tokenization can be performed using, for example, a Keras library or a natural language toolkit (NLTK) library. A Keras tokenizer class can be used to index the tokens. After tokenization is performed, the resulting word sequences are padded to equal lengths so that they can be used in the machine learning model. For output encoding, tokenization and padding are performed on the intent list in the corpus636. After tokenization and padding, a list of intents is indexed and fed into the machine learning model for training. The intents may be one-hot encoded before being input to the model. Some features and/or parameters used in connection with the creation of the bi-directional RNN with LSTM model include an Adam optimizer, a Softmax activation function, a batch size and a number of epochs. These parameters or features are tuned to get the best performance and accuracy of the model. After the model is trained with the intent corpus training data, the model is used to predict the intent for incoming dialogs and/or messages. The accuracy of the model is calculated for hyperparameter tuning. Referring to the operational flow600inFIG.6, extracted text631(e.g., from an uploaded image) is pre-processed and engineered by the pre-processing and feature engineering components634and635, and then input to the ML layer637so that semantic intent can be classified (e.g., by the classification component639) using the trained machine learning model. Predicted intents680-1,680-2and680-3, which may be outputs of the operational flow600, include, for example, driver installation, BSOD and hard drive failure. FIG.8depicts example pseudocode800for intent analysis and classification according to an illustrative embodiment. For the implementation of the intent analysis component133/633, the Python language and the NumPy, Pandas, Keras and NLTK libraries can be used. The semantic intent classified from the extracted text of an image is stored in the data repository150along with associated case resolution information for future scenarios when a customer uploads an image in connection with a request for resolution of a problem, outage or other issue.FIG.9depicts a table900including semantic intent and associated case resolution information that can be stored in the data repository150. As shown in the table900, semantic intent 1 ("System driver; Device driver installation") corresponds to a resolution of "Installed driver by uploading from www.company.com/drivers/system", semantic intent 2 ("BSOD") corresponds to a resolution of "Bootstrap was corrupt; system was re-imaged" and semantic intent 3 ("File corruption; Registry file corruption; Hard drive failure") corresponds to a resolution of "Dispatch for replacing hard drive at customer site". Referring to the operational flow1000inFIG.10, when a user attempts to look for help and/or support via a customer support portal with an image1001captured by the user, the image analysis component1041(same or similar to the image analysis component141inFIG.1) performs a reverse image search. This search is achieved by leveraging the text extraction component1031(same or similar to the text extraction component131inFIG.1) and intent analysis component1033(same or similar to the intent analysis component133inFIG.1or the intent analysis component633inFIG.6) to extract and analyze text from the user-provided image1001and determine the intent associated with the image1001. 
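A compact sketch of the bi-directional RNN with LSTM intent classifier described above is given here, assuming TensorFlow/Keras. The vocabulary size, sequence length, layer widths, batch size and epoch count are illustrative assumptions rather than values from the source, and the intent labels are indexed with a plain dictionary instead of a second tokenizer for brevity.

import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense

def train_intent_classifier(texts, intents, vocab_size=5000, max_len=32, embed_dim=64):
    # Tokenize and index the cleaned corpus text, then pad to equal lengths.
    tokenizer = Tokenizer(num_words=vocab_size, oov_token="<unk>")
    tokenizer.fit_on_texts(texts)
    x = pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=max_len)

    # Index and one-hot encode the intent labels.
    label_index = {name: i for i, name in enumerate(sorted(set(intents)))}
    y = to_categorical([label_index[i] for i in intents], num_classes=len(label_index))

    # Bi-directional RNN with LSTM, an Adam optimizer and a softmax output layer.
    model = Sequential([
        Embedding(vocab_size, embed_dim, input_length=max_len),
        Bidirectional(LSTM(64)),
        Dense(len(label_index), activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    model.fit(x, y, batch_size=16, epochs=20, verbose=0)
    return model, tokenizer, label_index

An incoming message would then be classified by tokenizing and padding it with the same tokenizer and taking the argmax of the model's softmax output.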
Then, using the predicted intent, the recommendation component1043(same or similar to the recommendation component143inFIG.1) attempts to find a match with historical image and resolution data from the platform data repository (e.g., repository150inFIG.1or250inFIG.2) to recommend a resolution. In some use cases, images comprising error messages can have very similar features, but the error messages or error codes may be different. For example, an error dialog image for a "device driver not found" message may look very similar to an error dialog image for a hard drive failure message from an image feature perspective. However, the textual messages are different, and returning case resolution information based on the image features alone may not give a user an adequate solution to the problem or issue. Accordingly, instead of matching image features, the embodiments leverage the text extraction component1031to extract text and/or messages from an image, and the intent analysis component1033to determine the intent(s) of the text and/or messages in an image. The ML layer1032of the text extraction component1031applies the combined Mask-R-CNN and OCR algorithm as described herein above to extract the text and/or messages from an image1001. The identified intent(s) associated with the image can then be passed to the recommendation component1043for recommending a resolution based on the text and/or messages in the image and not the visual image features. Advantageously, recommending a support solution based on intent, not the image itself, avoids returning irrelevant resolution information associated with similar visual images. In addition, the techniques of the embodiments are useful in situations where there are no existing historical images with which to form a match, but support tickets, case descriptions and/or notes from past resolutions are available to be compared with the determined intent to find a historical match to a current scenario. As an additional advantage, the embodiments create a foundation for potentially analyzing user textual searches and providing resolution recommendations from historical data by matching intents from the textual searches. The recommendation component1043provides a support resolution recommendation based on a comparison of historical case data with a user's problem or issue derived from a conclusion about the intent of the user-provided image1001. An ML layer1044of the recommendation component1043recommends a resolution to a given issue or problem by using NLP and distance similarity algorithms (e.g., Cosine or Euclidean) to identify similar intents from historical data in a repository (e.g., repository150or250) and recommend a resolution from the historical data. The repositories150and250store historical case data and metadata with corresponding resolutions and intents as obtained from previously uploaded and/or analyzed images and/or from support case descriptions and notes. The historical data is also used to train an ML model used by the ML layer1044that will receive intents (e.g., predicted intents1080-1,1080-2and/or1080-3), match the received intents to similar intents in the historical case data and metadata, and recommend the most appropriate resolution based on the matching intents and their resolutions. The predicted intents1080-1,1080-2and1080-3can be the same or similar to the predicted intents680-1,680-2and680-3discussed in connection withFIG.6. 
According to one or more embodiments, the ML layer1044of the recommendation component1043creates a term frequency-inverse document frequency (TF-IDF) vectorizer of the semantic intents of historical training data. At least some (or all) of the intent entries in the data repository150or250may comprise the historical training data for the intent analysis component1033(or633inFIGS.6and133inFIG.1). TF-IDF is a numerical statistic in NLP that reflects how important a word is to a document in a collection. In general, the TF-IDF algorithm is used to weigh a keyword in any document and assign an importance to that keyword based on the number of times the keyword appears in the document. Each word or term has its respective TF and IDF score. The product of the TF and IDF scores of a term is referred to as the TF-IDF weight of that term. The higher the TF-IDF weight (also referred to herein as "TF-IDF score"), the rarer and more important the term, and vice versa. It is to be understood that the embodiments are not limited to the use of TF-IDF, and there are alternative methodologies for text vectorization. In illustrative embodiments, the TF-IDF vectorizer is generated and used to build a TF-IDF matrix, which includes each word and its TF-IDF score for each intent entry in the historical training data (e.g., each intent entry in the data repository150or250). According to an embodiment, a TfidfVectorizer function from a SciKitLearn library is used to build the vectorizer. When a user provides an image in connection with an issue or problem (e.g., image1001), the intent(s) determined from that image will be used to build another TF-IDF matrix based on the determined intents. The TF-IDF matrices ignore stop words. The recommendation component1043uses a similarity distance algorithm between the two generated TF-IDF matrices to find matching intents between the uploaded image and the historical data. The similarity functions are as follows:
tf=TfidfVectorizer(analyzer='word', ngram_range=(1,3), min_df=0, stop_words='english')
tfidf_matrix_history=tf.fit_transform(ds['intents'])
In a similarity distance algorithm approach, using a vector space model where intents are stored as vectors of their attributes in an n-dimensional space, the angles between the vectors are calculated to determine the similarities between the vectors. The embodiments may utilize different distance algorithms such as, for example, Cosine and Euclidean distance. The application of a Cosine distance algorithm for different intents (e.g., system driver, device driver installation and BSOD error) is illustrated by the graph1100inFIG.11, where angles between the vectors are represented as θ1and θ2. According to one or more embodiments, an algorithm such as K-Nearest Neighbor (KNN) may be utilized to compute the distance similarity using Cosine distance by passing a metric parameter with the value "cosine." Once the intent associated with the uploaded image1001is matched with the intent(s) in the historical data, the recommendation component1043(or143inFIG.1) returns the associated resolution information to the user as a recommendation. As noted above, in addition to the recommendation, the intent is also passed to a textual search engine (e.g., search engine160) to return additional information from a knowledge base (e.g., knowledge base161) to the user. 
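A minimal sketch of the TF-IDF and cosine-similarity matching described above, assuming scikit-learn; the vectorizer parameters mirror the tf call shown above (with min_df written as the float 0.0), while the function and variable names are illustrative rather than taken from the source.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def recommend_resolution(historical_intents, historical_resolutions, image_intent):
    # Vectorize the historical intent entries, as in the tfidf_matrix_history
    # call above, then project the intent derived from the uploaded image
    # into the same vector space.
    tf = TfidfVectorizer(analyzer="word", ngram_range=(1, 3), min_df=0.0,
                         stop_words="english")
    tfidf_matrix_history = tf.fit_transform(historical_intents)
    tfidf_matrix_image = tf.transform([image_intent])

    # Cosine similarity between the two matrices; the best-scoring historical
    # intent supplies the recommended resolution.
    scores = cosine_similarity(tfidf_matrix_image, tfidf_matrix_history)[0]
    best = int(scores.argmax())
    return historical_resolutions[best], float(scores[best])

Passing the table900 intents and resolutions as the historical lists and the intent predicted for image1001 as image_intent returns the stored resolution with the highest cosine score.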
The additional information comprises, but is not necessarily limited to, articles, manuals and guided flows.FIG.12depicts example pseudocode1200for using the similarity distance algorithm between the two generated TF-IDF matrices to find matching intents between the uploaded image and the historical data and for providing resolution information to users. According to one or more embodiments, databases (e.g., knowledge base161), repositories (e.g., repositories150and250), stores (e.g., file stores175and275) and/or corpuses (e.g., corpus636) used by the image analysis and resolution platform110and/or assisted support channel170can be configured according to a relational database management system (RDBMS) (e.g., PostgreSQL). Databases, repositories, stores and/or corpuses in some embodiments are implemented using one or more storage systems or devices associated with the image analysis and resolution platform110and/or assisted support channel170. In some embodiments, one or more of the storage systems utilized to implement the databases comprise a scale-out all-flash content addressable storage array or other type of storage array. The term “storage system” as used herein is therefore intended to be broadly construed, and should not be viewed as being limited to content addressable storage systems or flash-based storage systems. A given storage system as the term is broadly used herein can comprise, for example, network-attached storage (NAS), storage area networks (SANs), direct-attached storage (DAS) and distributed DAS, as well as combinations of these and other storage types, including software-defined storage. Other particular types of storage products that can be used in implementing storage systems in illustrative embodiments include all-flash and hybrid flash storage arrays, software-defined storage products, cloud storage products, object-based storage products, and scale-out NAS clusters. Combinations of multiple ones of these and other storage products can also be used in implementing a given storage system in an illustrative embodiment. Although shown as elements of the image analysis and resolution platform110, the interface layer120, the image analysis and annotation engine130, the image matching and recommendation engine140, the data repository150and the search engine160in other embodiments can be implemented at least in part externally to the image analysis and resolution platform110, for example, as stand-alone servers, sets of servers or other types of systems coupled to the network104. For example, the interface layer120, the image analysis and annotation engine130, the image matching and recommendation engine140, the data repository150and the search engine160may be provided as cloud services accessible by the image analysis and resolution platform110. The interface layer120, the image analysis and annotation engine130, the image matching and recommendation engine140, the data repository150and the search engine160in theFIG.1embodiment are each assumed to be implemented using at least one processing device. Each such processing device generally comprises at least one processor and an associated memory, and implements one or more functional modules for controlling certain features of the interface layer120, the image analysis and annotation engine130, the image matching and recommendation engine140, the data repository150and/or the search engine160. 
At least portions of the image analysis and resolution platform110and the components thereof may be implemented at least in part in the form of software that is stored in memory and executed by a processor. The image analysis and resolution platform110and the components thereof comprise further hardware and software required for running the image analysis and resolution platform110, including, but not necessarily limited to, on-premises or cloud-based centralized hardware, graphics processing unit (GPU) hardware, virtualization infrastructure software and hardware, Docker containers, networking software and hardware, and cloud infrastructure software and hardware. Although the interface layer120, the image analysis and annotation engine130, the image matching and recommendation engine140, the data repository150, the search engine160and other components of the image analysis and resolution platform110in the present embodiment are shown as part of the image analysis and resolution platform110, at least a portion of the interface layer120, the image analysis and annotation engine130, the image matching and recommendation engine140, the data repository150, the search engine160and other components of the image analysis and resolution platform110in other embodiments may be implemented on one or more other processing platforms that are accessible to the image analysis and resolution platform110over one or more networks. Such components can each be implemented at least in part within another system element or at least in part utilizing one or more stand-alone components coupled to the network104. It is assumed that the image analysis and resolution platform110in theFIG.1embodiment and other processing platforms referred to herein are each implemented using a plurality of processing devices each having a processor coupled to a memory. Such processing devices can illustratively include particular arrangements of compute, storage and network resources. For example, processing devices in some embodiments are implemented at least in part utilizing virtual resources such as virtual machines (VMs) or Linux containers (LXCs), or combinations of both as in an arrangement in which Docker containers or other types of LXCs are configured to run on VMs. The term “processing platform” as used herein is intended to be broadly construed so as to encompass, by way of illustration and without limitation, multiple sets of processing devices and one or more associated storage systems that are configured to communicate over one or more networks. As a more particular example, the interface layer120, the image analysis and annotation engine130, the image matching and recommendation engine140, the data repository150, the search engine160and other components of the image analysis and resolution platform110, and the elements thereof can each be implemented in the form of one or more LXCs running on one or more VMs. Other arrangements of one or more processing devices of a processing platform can be used to implement the interface layer120, the image analysis and annotation engine130, the image matching and recommendation engine140, the data repository150and the search engine160, as well as other components of the image analysis and resolution platform110. Other portions of the system100can similarly be implemented using one or more processing devices of at least one processing platform. 
Distributed implementations of the system100are possible, in which certain components of the system reside in one datacenter in a first geographic location while other components of the system reside in one or more other data centers in one or more other geographic locations that are potentially remote from the first geographic location. Thus, it is possible in some implementations of the system100for different portions of the image analysis and resolution platform110to reside in different data centers. Numerous other distributed implementations of the image analysis and resolution platform110are possible. Accordingly, one or each of the interface layer120, the image analysis and annotation engine130, the image matching and recommendation engine140, the data repository150, the search engine160and other components of the image analysis and resolution platform110can each be implemented in a distributed manner so as to comprise a plurality of distributed components implemented on respective ones of a plurality of compute nodes of the image analysis and resolution platform110. It is to be appreciated that these and other features of illustrative embodiments are presented by way of example only, and should not be construed as limiting in any way. Accordingly, different numbers, types and arrangements of system components such as the interface layer120, the image analysis and annotation engine130, the image matching and recommendation engine140, the data repository150, the search engine160and other components of the image analysis and resolution platform110, and the elements thereof can be used in other embodiments. It should be understood that the particular sets of modules and other components implemented in the system100as illustrated inFIG.1are presented by way of example only. In other embodiments, only subsets of these components, or additional or alternative sets of components, may be used, and such components may exhibit alternative functionality and configurations. For example, as indicated previously, in some illustrative embodiments, functionality for the image analysis and resolution platform can be offered to cloud infrastructure customers or other users as part of FaaS, CaaS and/or PaaS offerings. The operation of the information processing system100will now be described in further detail with reference to the flow diagram ofFIG.13. With reference toFIG.13, a process1300for analyzing incoming image inputs and recommending appropriate resolutions as shown includes steps1302through1310, and is suitable for use in the system100but is more generally applicable to other types of information processing systems comprising an image analysis and resolution platform configured for analyzing incoming image inputs and recommending appropriate resolutions. In step1302, an input of at least one image associated with an issue is received, and in step1304, text from the at least one image is extracted. According to one or more embodiments, the extracting comprises identifying a plurality of regions in the at least one image comprising one or more text objects, and generating respective boundaries around respective ones of the plurality of regions. The extracting may also comprise classifying objects in respective ones of the plurality of regions as one of text objects and non-text objects, and recognizing text in the one or more text objects of each of the plurality of regions. 
In step1306, an intent is determined from the extracted text, and in step1308, a response to the issue is recommended based at least in part on the determined intent. In step1310, the recommended response is transmitted to a user. The extracting, determining and recommending are performed at least in part using one or more machine learning models. In accordance with an embodiment, the one or more machine learning models comprises a Mask-R-CNN and/or bi-directional RNN with LSTM for NLU. The method may comprise training the one or more machine learning models with training data comprising a plurality of text entries and a plurality of intents corresponding to the plurality of text entries. In an illustrative embodiment, the method comprises training the one or more machine learning models with training data comprising a plurality of intents and a plurality of issue resolutions corresponding to the plurality of intents. A TF-IDF vectorizer of the plurality of intents is created from the training data, and a TF-IDF matrix is built, the TF-IDF matrix comprising a plurality of words corresponding to the plurality of intents and a plurality of TF-IDF scores for the plurality of words. According to an embodiment, recommending the response to the issue comprises building an additional TF-IDF matrix comprising a plurality of words corresponding to the determined intent and a plurality of TF-IDF scores for the plurality of words corresponding to the determined intent, comparing the TF-IDF matrix with the additional TF-IDF matrix to determine a matching intent from the plurality of intents to the determined intent, and recommending the issue resolution corresponding to the matching intent. In an illustrative embodiment, a data repository is maintained, the data repository comprising respective ones of a plurality of intents associated with respective ones of a plurality of images and respective ones of a plurality of issue resolutions corresponding to the respective ones of the plurality of intents. Recommending the response to the issue comprises comparing the determined intent with the plurality of intents, identifying a matching intent of the plurality of intents to the determined intent based on the comparing, and recommending the issue resolution corresponding to the matching intent. It is to be appreciated that theFIG.13process and other features and functionality described above can be adapted for use with other types of information systems configured to execute image analysis and problem resolution services in an image analysis and resolution platform or other type of platform. The particular processing operations and other system functionality described in conjunction with the flow diagram ofFIG.13is therefore presented by way of illustrative example only, and should not be construed as limiting the scope of the disclosure in any way. Alternative embodiments can use other types of processing operations. For example, the ordering of the process steps may be varied in other embodiments, or certain steps may be performed at least in part concurrently with one another rather than serially. Also, one or more of the process steps may be repeated periodically, or multiple instances of the process can be performed in parallel with one another. Functionality such as that described in conjunction with the flow diagram ofFIG.13can be implemented at least in part in the form of one or more software programs stored in memory and executed by a processor of a processing device such as a computer or server. 
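By way of a non-limiting illustration, steps 1302 through 1310 can be sketched as the following Python pipeline; the detector, intent model and matcher objects stand in for the Mask-R-CNN, LSTM-based NLU and TF-IDF components described above, and their method names are hypothetical assumptions rather than an actual API.

```python
# End-to-end sketch of process 1300 (steps 1302-1310). The detector,
# intent_model and matcher objects and their methods are hypothetical
# stand-ins for the components described in the text.
from dataclasses import dataclass

@dataclass
class Recommendation:
    intent: str
    resolution: str

def process_1300(image_bytes: bytes, detector, intent_model, matcher) -> Recommendation:
    # Step 1304: identify text regions, bound them, and recognize their text.
    regions = detector.detect_text_regions(image_bytes)        # hypothetical API
    extracted = " ".join(detector.recognize(r) for r in regions)
    # Step 1306: determine the intent from the extracted text.
    intent = intent_model.predict_intent(extracted)             # hypothetical API
    # Step 1308: compare against historical intents and pick a resolution.
    resolution = matcher.best_resolution(intent)                # hypothetical API
    # Step 1310: the caller transmits the recommendation to the user.
    return Recommendation(intent=intent, resolution=resolution)
```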
As will be described below, a memory or other storage device having executable program code of one or more software programs embodied therein is an example of what is more generally referred to herein as a “processor-readable storage medium.” Illustrative embodiments of systems with an image analysis and resolution platform as disclosed herein can provide a number of significant advantages relative to conventional arrangements. For example, unlike conventional techniques, the embodiments advantageously provide techniques to perform image-based searches in a self-help portal that leverage existing support system knowledge and historical case data. The embodiments provide functionality for extracting text and/or messages from uploaded images and analyzing the extracted text to determine the intent associated with an image. Advantageously, instead of comparing visual image features to find matching images from historical data, the embodiments provide a framework to compare the determined intent with intents from images in the historical data, and to recommend corresponding resolutions associated with the matching intents to users. Conventional approaches typically use text-based searches which are not very effective when supporting and/or troubleshooting complex problem or issues. Text-based searches also undesirably rely on the use of keywords that must be entered to understand semantics and return useful content to a user. The embodiments advantageously provide intelligent programmatic analysis of customer image data for object detection, segmentation, text detection and text extraction. The embodiments combine computer vision techniques and machine learning techniques such as, for example, OCR and neural networks to semantically analyze images associated with cases in assisted support channels and to build a repository with the images, annotations, description and case resolution metadata for addressing the issues. The data in the repository is used to train an intent classification component that utilizes deep learning to identify intents of customer provided images. The identified intents are matched with intents of corresponding images in the repository, so that a recommendation component can predict a customer issue and recommend a resolution based on historical case information. Advantageously, the embodiments provide an optimized machine learning framework that combines select machine learning and image analysis techniques to extract the nuances of different images that may be visually similar, but convey different messages and intents. The image analysis and resolution platform performs a combination of image text extraction, intent analysis, intent comparison to historical support data and resolution recommendation to ensure that user attempts to resolve problems and/or issues are effectively and efficiently handled without need for agent intervention. The use of the text extraction, intent analysis and intent comparison techniques of the embodiments allows for the generation of accurate and useful case resolution information that is not achieved by visual image feature analysis, and is not provided by conventional text-based searches. It is to be appreciated that the particular advantages described above and elsewhere herein are associated with particular illustrative embodiments and need not be present in other embodiments. 
Also, the particular types of information processing system features and functionality as illustrated in the drawings and described above are exemplary only, and numerous other arrangements may be used in other embodiments. As noted above, at least portions of the information processing system100may be implemented using one or more processing platforms. A given such processing platform comprises at least one processing device comprising a processor coupled to a memory. The processor and memory in some embodiments comprise respective processor and memory elements of a virtual machine or container provided using one or more underlying physical machines. The term “processing device” as used herein is intended to be broadly construed so as to encompass a wide variety of different arrangements of physical processors, memories and other device components as well as virtual instances of such components. For example, a “processing device” in some embodiments can comprise or be executed across one or more virtual processors. Processing devices can therefore be physical or virtual and can be executed across one or more physical or virtual processors. It should also be noted that a given virtual device can be mapped to a portion of a physical one. Some illustrative embodiments of a processing platform that may be used to implement at least a portion of an information processing system comprise cloud infrastructure including virtual machines and/or container sets implemented using a virtualization infrastructure that runs on a physical infrastructure. The cloud infrastructure further comprises sets of applications running on respective ones of the virtual machines and/or container sets. These and other types of cloud infrastructure can be used to provide what is also referred to herein as a multi-tenant environment. One or more system components such as the image analysis and resolution platform110or portions thereof are illustratively implemented for use by tenants of such a multi-tenant environment. As mentioned previously, cloud infrastructure as disclosed herein can include cloud-based systems. Virtual machines provided in such systems can be used to implement at least portions of one or more of a computer system and an image analysis and resolution platform in illustrative embodiments. These and other cloud-based systems in illustrative embodiments can include object stores. Illustrative embodiments of processing platforms will now be described in greater detail with reference toFIGS.14and15. Although described in the context of system100, these platforms may also be used to implement at least portions of other information processing systems in other embodiments. FIG.14shows an example processing platform comprising cloud infrastructure1400. The cloud infrastructure1400comprises a combination of physical and virtual processing resources that may be utilized to implement at least a portion of the information processing system100. The cloud infrastructure1400comprises multiple virtual machines (VMs) and/or container sets1402-1,1402-2, . . .1402-L implemented using virtualization infrastructure1404. The virtualization infrastructure1404runs on physical infrastructure1405, and illustratively comprises one or more hypervisors and/or operating system level virtualization infrastructure. The operating system level virtualization infrastructure illustratively comprises kernel control groups of a Linux operating system or other type of operating system. 
The cloud infrastructure1400further comprises sets of applications1410-1,1410-2, . . .1410-L running on respective ones of the VMs/container sets1402-1,1402-2, . . .1402-L under the control of the virtualization infrastructure1404. The VMs/container sets1402may comprise respective VMs, respective sets of one or more containers, or respective sets of one or more containers running in VMs. In some implementations of theFIG.14embodiment, the VMs/container sets1402comprise respective VMs implemented using virtualization infrastructure1404that comprises at least one hypervisor. A hypervisor platform may be used to implement a hypervisor within the virtualization infrastructure1404, where the hypervisor platform has an associated virtual infrastructure management system. The underlying physical machines may comprise one or more distributed processing platforms that include one or more storage systems. In other implementations of theFIG.14embodiment, the VMs/container sets1402comprise respective containers implemented using virtualization infrastructure1404that provides operating system level virtualization functionality, such as support for Docker containers running on bare metal hosts, or Docker containers running on VMs. The containers are illustratively implemented using respective kernel control groups of the operating system. As is apparent from the above, one or more of the processing modules or other components of system100may each run on a computer, server, storage device or other processing platform element. A given such element may be viewed as an example of what is more generally referred to herein as a “processing device.” The cloud infrastructure1400shown inFIG.14may represent at least a portion of one processing platform. Another example of such a processing platform is processing platform1500shown inFIG.15. The processing platform1500in this embodiment comprises a portion of system100and includes a plurality of processing devices, denoted1502-1,1502-2,1502-3, . . .1502-P, which communicate with one another over a network1504. The network1504may comprise any type of network, including by way of example a global computer network such as the Internet, a WAN, a LAN, a satellite network, a telephone or cable network, a cellular network, a wireless network such as a WiFi or WiMAX network, or various portions or combinations of these and other types of networks. The processing device1502-1in the processing platform1500comprises a processor1510coupled to a memory1512. The processor1510may comprise a microprocessor, a microcontroller, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a central processing unit (CPU), a graphical processing unit (GPU), a tensor processing unit (TPU), a video processing unit (VPU) or other type of processing circuitry, as well as portions or combinations of such circuitry elements. The memory1512may comprise random access memory (RAM), read-only memory (ROM), flash memory or other types of memory, in any combination. The memory1512and other memories disclosed herein should be viewed as illustrative examples of what are more generally referred to as “processor-readable storage media” storing executable program code of one or more software programs. Articles of manufacture comprising such processor-readable storage media are considered illustrative embodiments. 
A given such article of manufacture may comprise, for example, a storage array, a storage disk or an integrated circuit containing RAM, ROM, flash memory or other electronic memory, or any of a wide variety of other types of computer program products. The term “article of manufacture” as used herein should be understood to exclude transitory, propagating signals. Numerous other types of computer program products comprising processor-readable storage media can be used. Also included in the processing device1502-1is network interface circuitry1514, which is used to interface the processing device with the network1504and other system components, and may comprise conventional transceivers. The other processing devices1502of the processing platform1500are assumed to be configured in a manner similar to that shown for processing device1502-1in the figure. Again, the particular processing platform1500shown in the figure is presented by way of example only, and system100may include additional or alternative processing platforms, as well as numerous distinct processing platforms in any combination, with each such platform comprising one or more computers, servers, storage devices or other processing devices. For example, other processing platforms used to implement illustrative embodiments can comprise converged infrastructure. It should therefore be understood that in other embodiments different arrangements of additional or alternative elements may be used. At least a subset of these elements may be collectively implemented on a common processing platform, or each such element may be implemented on a separate processing platform. As indicated previously, components of an information processing system as disclosed herein can be implemented at least in part in the form of one or more software programs stored in memory and executed by a processor of a processing device. For example, at least portions of the functionality of one or more components of the image analysis and resolution platform110as disclosed herein are illustratively implemented in the form of software running on one or more processing devices. It should again be emphasized that the above-described embodiments are presented for purposes of illustration only. Many variations and other alternative embodiments may be used. For example, the disclosed techniques are applicable to a wide variety of other types of information processing systems and image analysis and resolution platforms. Also, the particular configurations of system and device elements and associated processing operations illustratively shown in the drawings can be varied in other embodiments. Moreover, the various assumptions made above in the course of describing the illustrative embodiments should also be viewed as exemplary rather than as requirements or limitations of the disclosure. Numerous other alternative embodiments within the scope of the appended claims will be readily apparent to those skilled in the art. | 61,645 |
11861919 | DETAILED DESCRIPTION In the following description, numerous details of the embodiments of the present disclosure, which should be deemed merely as exemplary, are set forth with reference to accompanying drawings to provide a thorough understanding of the embodiments of the present disclosure. Therefore, those skilled in the art will appreciate that modifications or replacements may be made in the described embodiments without departing from the scope and spirit of the present disclosure. Further, for clarity and conciseness, descriptions of known functions and structures are omitted. First Embodiment As shown inFIG.1, the present disclosure provides in this embodiment a text recognition method, which includes the following steps. Step S101: acquiring an image including text information, the text information including M characters, M being a positive integer greater than 1. In this embodiment of the present disclosure, the text recognition method relates to the field of artificial intelligence, in particular to a computer vision technology and a deep learning technology, and it may be applied to an electronic device. The electronic device may be a server or a terminal, which will not be particularly defined herein. The image may be an image including the text information, i.e., it may also be called a text image. The text information may include at least two characters. The image may be a block image, i.e., it may include at least one image block, and each image block may include at least one character. The M characters may be all or a part of characters in the text information in the image, e.g., the M characters may be characters in one or more image blocks of the image, which will not be particularly defined herein. The image may be an image collected in real time, or a pre-stored image, or an image from another device, or an image acquired from a network. For example, the image including the text information may be collected by a mobile phone or a computer in real time, e.g., an image of a shop sign, an image of a shop in a mall or an image of a traffic sign; or the image taken previously and including the text information may be stored in a device; or the image including the text information may be received from another device or acquired from the network. Step S102: performing text recognition on the image to acquire character information about the M characters. In this step, a purpose of performing the text recognition on the image is to position and recognize the characters in the image, so as to acquire the character information about the M characters in the image. The character information about each character may include character position information and character category information about the character, and the character position information is used to represent a position of the character in the image. The position of the character in the image may be represented through a position of a center of the character and geometrical information about the character jointly. The geometrical information about the character refers to information about a bounding box (i.e., an enclosure) for the character. The bounding box for the character refers to a region surrounding the character, and it may be of a square shape, a rectangular shape or any other shape. When the bounding box for the character is of a rectangular shape, the geometrical information about the character may include values of a width and a length of the bounding box for the character. 
The character category information about the character may represent a category of the character, and different characters may have different categories. In this way, it is able to determine the character in accordance with the character category information about the character. For example, character category information about one Chinese character may represent that the Chinese character is "" (meaning "crossing"), and character category information about another Chinese character may represent that the Chinese character is "" (meaning "bridge"). The text recognition may be performed on the image using a character positioning and recognition module, and the character positioning and recognition module may be implemented in various ways. For example, the text recognition may be performed on the image through an existing or new target detection method, e.g., You Only Look Once (YOLO), Single Shot MultiBox Detector (SSD), or Faster Region-based Convolutional Neural Network (Faster R-CNN). For another example, the text recognition may be performed on the image through Fully Convolutional Networks (FCN). The following description will be given when the text recognition is performed on the image using the FCN. To be specific, pretreatment may be performed on the image to acquire an image with a predetermined size, and then the acquired image may be inputted into the FCN. The predetermined size may be set according to the practical need, e.g., the image may have a pixel size of 256*256. After the image with a pixel size of 256*256 has been inputted, as an input image, into the FCN, feature extraction may be performed on the input image by the FCN, so as to finally generate a feature map about the input image. The feature map may have a size smaller than the input image, e.g., it may be ⅛ of the size of the input image, i.e., it may have a pixel size of 32*32. The feature extraction may be performed by the FCN using ResNet, e.g., ResNet50 or the like, or Feature Pyramid Networks (FPN), which will not be particularly defined herein. Next, the feature map may pass through two convolutional branches, one of which is used for character recognition and the other of which is used for character positioning. The convolutional branch for character recognition may recognize a character through determining a category of the character. To be specific, with respect to each position in the feature map, it may determine whether there is a character at a current position through determining a category of the character, and when there is any character, it may recognize the category of the character. The quantity of channels for determining the category of the character, e.g., 3000 or 6763, may be set for the convolutional branch in accordance with the practical needs. Taking a commonly-used Chinese character set GB2312 as an example, 6763 channels may be set for the convolutional branch. When the character recognition is performed through the convolutional branch, the character category information at the current position in the image may be determined in accordance with an excitation response from a channel of the convolutional branch. For example, when there is an excitation response from a target channel in the 6763 channels for determining the character category, it may be determined that there is a character at the current position, and then the character category information about the character may be determined in accordance with a character category corresponding to the target channel. 
For example, when the target channel corresponds to a Chinese character "" (meaning "crossing") and there is an excitation response from the target channel, it may be determined that the character at the current position is "" (meaning "crossing"). For the convolutional branch for positioning a character, the information about the bounding box for the character may be determined through regression of the position of the center of the character. The quantity of channels for position regression may be set for the convolutional branch in accordance with the practical needs. For example, when the bounding box is of a square or rectangular shape, the quantity of channels may be set as four(4). When the character positioning is performed using the convolutional branch with four(4) channels and there is a character at the current position, the regression to the current position may be performed. A coordinate offset between the current position and an upper left vertex of a bounding box corresponding to the character and a coordinate offset between the current position and a lower right vertex of the bounding box corresponding to the character may be predicted through the four channels, or a coordinate offset between the current position and an upper right vertex of the bounding box corresponding to the character and a coordinate offset between the current position and a lower left vertex of the bounding box corresponding to the character may be predicted through the four channels. A coordinate offset between the current position and one vertex of the bounding box corresponding to the character may be predicted on one dimension through each channel. The dimension may be a first dimension which is called the x dimension, or a second dimension which is called the y dimension. In the case that there is a character at the current position, the information about the bounding box of the character may be determined in accordance with coordinate information about the current position and the four coordinate offsets acquired through prediction, and the geometrical information about the character may be determined accordingly. For example, when the coordinate information about the current position is (10, 10), the coordinate offset between the current position and the upper left vertex of the bounding box of the character at the current position is (10, 10) and the coordinate offset between the current position and the lower right vertex of the bounding box of the character at the current position is (5, 5), the coordinate information about the upper left vertex of the bounding box may be (0, 20), the coordinate information about the lower right vertex may be (15, 5), and the width and the length of the bounding box may both be fifteen(15). Finally, through the two convolutional branches in the FCN, it is able to recognize the character category information about the M characters, and acquire, through positioning, the character position information about the M characters. It should be appreciated that, regardless of the FCN or a model using the target detection method, prior to the text recognition, usually training needs to be performed. To be specific, the FCN or the model using the target detection method may be trained through a large quantity of training images including the text information, and determining and marking out information about a position of a center of each character and information about a bounding box in the training image. 
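By way of a non-limiting illustration (not part of the original disclosure), the two convolutional branches and the offset-to-bounding-box arithmetic of the worked example above can be sketched as follows, assuming PyTorch; the backbone, channel sizes and the upward-increasing y convention are assumptions chosen to reproduce the example's numbers.

```python
# Illustrative sketch (PyTorch assumed) of the two convolutional branches on
# top of the FCN feature map, plus the offset-to-bounding-box arithmetic from
# the numeric example above. Channel sizes and the backbone are assumptions.
import torch
import torch.nn as nn

class TwoBranchHead(nn.Module):
    def __init__(self, feat_channels: int = 256, num_classes: int = 6763):
        super().__init__()
        # Character-recognition branch: one channel per character category.
        self.cls_branch = nn.Conv2d(feat_channels, num_classes, kernel_size=1)
        # Character-positioning branch: four channels, one coordinate offset
        # (x or y) to each of two opposite bounding-box vertices.
        self.reg_branch = nn.Conv2d(feat_channels, 4, kernel_size=1)

    def forward(self, feature_map):
        return self.cls_branch(feature_map), self.reg_branch(feature_map)

# For a 256*256 input the feature map is 1/8 of the input size, i.e. 32*32.
scores, offsets = TwoBranchHead()(torch.randn(1, 256, 32, 32))
print(scores.shape, offsets.shape)   # (1, 6763, 32, 32) and (1, 4, 32, 32)

def decode_box(cx, cy, off_ul, off_lr):
    """Recover the bounding box from the current position and the predicted
    offsets to the upper-left and lower-right vertices (y is treated as
    increasing upward, matching the example's arithmetic)."""
    ulx, uly = cx - off_ul[0], cy + off_ul[1]
    lrx, lry = cx + off_lr[0], cy - off_lr[1]
    return (ulx, uly), (lrx, lry), lrx - ulx, uly - lry

print(decode_box(10, 10, (10, 10), (5, 5)))
# -> ((0, 20), (15, 5), 15, 15): a bounding box of width and length fifteen(15)
```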
After the training, the text recognition may be performed on the image through the FCN or the model using the target detection method. Step S103: recognizing reading direction information about each character in accordance with the character information about the M characters, the reading direction information being used to indicate a next character corresponding to a current character in a semantic reading order. In this step, the reading direction information refers to a next character corresponding to the current character in the semantic reading order. The semantic reading order refers to a reading order in accordance with text semantics. For example, when the text information includes “” (i.e., a name of a kind of traditional Chinese noodle made of rice), an arrangement order of the four Chinese characters is just the semantic reading order of the text information. A Chinese character next to the Chinese character “” (meaning “crossing”) is “” (meaning “bridge”), so the reading direction information about the character may be the Chinese character “” (meaning “bridge”). The reading direction information about each character may be recognized by a reading order decoding module in accordance with the character information about the M characters in various ways. For example, the reading direction information about each character may be recognized using a Graph Neural Network in accordance with the character information about the M characters. For another example, the reading direction information about each character may be recognized using a text semantic recognition technology in accordance with the character information about the M characters. A procedure of recognizing the reading direction information about each character in accordance with the character information about the M characters will be described hereinafter briefly taking the Graph Neural Network as an example. It should be appreciated that, the M characters may belong to a same image block or different image blocks. When the M characters belong to a same image block, the reading direction information about each character may be recognized using the Graph Neural Network in accordance with the character information about the M characters. When the M characters belong to different image blocks, with respect to each target image block, the reading direction information about a target character in the target image block may be recognized using the Graph Neural Network in accordance with character information about the target character in the target image block, and finally the reading direction information about each of the M characters may be acquired. The target image block may be an image block including at least two characters. A procedure of recognizing the reading direction information about each character using the Graph Neural Network in accordance with the character information about the M characters will be described hereinafter in more details when the M characters belong to a same image block. To be specific, an input of the Graph Neural Network consists of two important pieces of information, i.e., nodes and edges. The node corresponds to characters in a two-dimensional space respectively. In actual use, the node may be represented by a specific data structure or data object. 
With respect to each character, a text recognition device may create a node corresponding to the character in the form of creating a data object in accordance with the character information about the character, and an attribute of the node corresponding to the character may include the character information about the character. Correspondingly, subsequent to the creation of the node corresponding to the character, the data object representing the node may be acquired, and the data object of the node may be just node information about character. The edge refers to a connection relation between nodes, or an incidence matrix consisting of connection relations between the nodes. For example, when a node i is connected to a node j, an edge may be formed between the two nodes, and in the case that there is an edge between the nodes, the connection relation may be represented by a numerical value “1”. In the case that two nodes are not connected to each other, i.e., there is no edge between the two nodes, the connection relation may be represented by a numeral value “0”. With respect to each node, a connection relation between it and the other node may be set, so as to acquire edge connection information about the node. When setting the connection relation between a node and the other node, the node may be connected to all of M nodes corresponding to the M characters other than the node, i.e., there may exist edges between the node and any one of the other nodes. With respect to each node, when setting the connection relation between a node and the other node, the node may also be connected to a part of the M nodes corresponding to the M characters other than the node, but not be connected to the other part of the M nodes. For example, when the M nodes include a node1, a node2, a node3and a node4, the node1may be set to be connected to the node2and the node3, but fail to be connected to the node4. In addition, a loop connection between a node and the node itself may also be set. When a node is connected to the node itself, there may exist the loop between the node and the node itself, and a connection relation may be represented by a numerical value “1”, and when there is no loop, the connection relation may be represented by a numerical value “0”. Finally, with respect to each node, the edge connection information about the node may include M numerical values, and the edge connection information about all the nodes may be aggregated to form an M*M incidence matrix. An element at a position (i, j) in the incidence matrix may indicate whether the node i is connected to the node j. When a numerical value of the element is 1, it means that the node i is connected to the node j, and when the numerical value of the element is 0, it means that the node i is not connected to the node j. The node information and the edge connection information acquired previously may be inputted into the Graph Neural Network, so as to predict a node direction, thereby to acquire direction information about each node. An output of the Graph Neutral Network is also an M*M target incidence matrix, and an element at a position (i, j) of the target incidence matrix may represent whether the node i points to the node j. A relative position between the characters in the text information is fixed, and each character merely includes one ingoing edge and one outgoing edge. 
Hence, the element at the position (i, j) of the target incidence matrix may also represent whether a character next to a character corresponding to the node i in the semantic reading order is a character corresponding to the node j. When the element at this position has a numerical value “1”, it means that the character next to the character corresponding to the node i in the semantic reading order is the character corresponding to the node j, and when the element at this position has a numerical value “0”, it means that the character next to the character corresponding to the node i in the semantic reading order is not the character corresponding to the node j. In addition, as for a first character and a last character in the text information, direction information about the corresponding nodes may be represented through loop connection, i.e., in the case that an element at a position, e.g., (5, 5) in the target incidence matrix has a numerical value “1”, it means that there is a loop between a fifth node and the fifth node itself, and a character corresponding to this node is the first or last character. Finally, the reading direction information about each character may be determined in accordance with the target incidence matrix outputted by the Graph Neural Network. For example, with respect to the first node, when an element at a position (1, 2) in the target incidence matrix has a numerical value “1”, it means that the reading direction information about the character corresponding to the first node is a character corresponding to a second node. For another example, with respect to the second node, when an element at a position (2, 4) in the target incidence matrix has a numerical value “1”, it means that the reading direction information about the character corresponding to the second node is a character corresponding to a fourth node. It should be appreciated that, in order to enable the Graph Neural Network to have a capability of recognizing the reading direction information about each character, it is necessary to restrain and guide the Graph Neural Network in accordance with a large quantity of training text information and label information about the reading direction of the characters in the training text information, i.e., it is necessary to train the Graph Neural Network when using it. Step S104: ranking the M characters in accordance with the reading direction information about the M characters to acquire a text recognition result of the text information. In this step, the M characters may be ranked in accordance with the reading direction information about the M characters, so as to finally acquire the text recognition result of the text information. FIG.2shows a specific implementation of the text recognition method. As shown inFIG.2, an acquired image includes text information “” (i.e., a name of a kind of traditional Chinese noodle made of rice). Some artistic designs have been introduced into the text information about the image, so that the Chinese characters cannot be read in an order from left to right and from top to down directly. The image may be inputted to the character positioning and recognition module, and the text recognition may be performed on the image by the character positioning and recognition module to acquire character information about the four Chinese characters. The character information may include character category information and character position information. 
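By way of a non-limiting illustration, decoding a target incidence matrix into a character order can be sketched as follows, assuming (as in the FIG.2 example described below) that only the last character carries the loop connection to itself; the example matrix is hypothetical.

```python
# Sketch of turning an M*M target incidence matrix into a reading order:
# entry (i, j) = 1 means the character of node j follows the character of
# node i, and a 1 on the diagonal marks the last character (self-loop).
import numpy as np

def order_from_incidence(target):
    m = target.shape[0]
    nxt = {i: int(np.argmax(target[i])) for i in range(m) if target[i].any()}
    has_incoming = {j for i, j in nxt.items() if i != j}
    start = next(i for i in range(m) if i not in has_incoming)
    order, cur = [start], start
    while nxt.get(cur, cur) != cur:       # stop at the self-looping last node
        cur = nxt[cur]
        order.append(cur)
    return order

# Hypothetical example: node 1 -> node 2, node 2 -> node 0, node 0 -> node 3,
# and node 3 points to itself (last character).
target = np.array([[0, 0, 0, 1],
                   [0, 0, 1, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 1]])
print(order_from_incidence(target))  # [1, 2, 0, 3]
```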
Based on the character position information in an order from top to down and from left to right, a first Chinese character is “” (meaning “rice”), a second Chinese character is “” (meaning “crossing”), a third Chinese character is “” (meaning “bridge”) and a fourth Chinese character is “” (meaning “thread”), i.e., an output result may be “” (which fails to constitute a meaningful name any more). At this time, a semantic error may occur. In this application scenario, as shown inFIG.2, the reading direction information about each character may be recognized by the reading order decoding module. For example, the first Chinese character “” (meaning “rice”) points to the fourth Chinese character “” (meaning “thread”), the second Chinese character “” (meaning “crossing”) points to the third Chinese character “” (meaning “bridge”), the third Chinese character “” (meaning “bridge”) points to the first Chinese character “” (meaning “rice”), and the fourth Chinese character points to the character itself. The four Chinese characters may be ranked in accordance with the reading direction information about each character, so as to determine the text recognition result of the text information as “” (i.e., a name of a kind of traditional Chinese noodle made of rice). In this embodiment of the present disclosure, the text recognition may be performed on the image to acquire the character information about the M characters, the reading direction information about each character may be recognized in accordance with the character information about the M characters, and then the M characters may be ranked in accordance with the reading direction information about the M characters to acquire the text recognition result of the text information. In this regard, no matter whether the text information in the image is a regular text or an irregular text, it is able to acquire the text recognition result conforming to the semantics, thereby to improve a recognition effect of the text in the image. In a possible embodiment of the present disclosure, the character information may include character position information. Prior to Step S104, the text recognition method may further include dividing the image into at least two image blocks in accordance with the character position information about the M characters, and the at least two image blocks may include the M characters. Step S104may specifically include determining reading direction information about a target character in a target image block in accordance with character information about the target character in the target image block, and the target image block may be an image block including at least two characters in the at least two image blocks. During the implementation, whether the M characters belong to a same image block may be determined in accordance with the character position information about the M characters. When the M characters belong to different image blocks, the image may be divided into at least two image blocks in accordance with the character position information about the M characters. As a division principle, the image may be divided into blocks in accordance with a distance between nodes, i.e., the nodes at a large distance from each other may be separated into different image blocks, and the nodes at a small distance from each other may be aggregated in a same image block. To be specific, the distance between two characters may be determined in accordance with the character position information about the M characters. 
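By way of a non-limiting illustration of the division principle just stated, the following simplified sketch groups characters into image blocks by single-linkage over their center distances; it uses a single illustrative threshold, whereas the embodiment described below uses a first and a second predetermined threshold, and the centers and threshold value are assumptions.

```python
# Simplified single-threshold sketch of the block-division principle:
# characters whose centers are close end up in the same image block, and
# characters far from every block start a new block.
import math

def split_into_blocks(centers, threshold=40.0):
    blocks = []
    for idx, (x, y) in enumerate(centers):
        merged = None
        for block in blocks:
            if any(math.hypot(x - cx, y - cy) < threshold
                   for cx, cy in (centers[i] for i in block)):
                if merged is None:
                    block.append(idx)        # join the first nearby block
                    merged = block
                else:
                    merged.extend(block)     # bridge two existing blocks
                    block.clear()
        if merged is None:
            blocks.append([idx])             # start a new image block
    return [b for b in blocks if b]

centers = [(10, 10), (30, 12), (200, 15), (220, 18)]
print(split_into_blocks(centers))  # [[0, 1], [2, 3]]
```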
The nodes at a distance smaller than a first predetermined threshold may be aggregated, and the nodes at a distance greater than a second predetermined threshold may be spaced apart from each other, so as to acquire at least two image blocks. Each image block may include at least one character. After the M characters have been spaced apart from each other with respect to different image blocks, in the case that the image includes at least two target image blocks, with respect to each target image block, reading direction information about a target character in the target image block may be determined in accordance with character information about the target character in the target image block, so as to acquire the reading direction information about each character. The target image block may be an image block including at least two characters in the at least two image blocks. In addition, in the case that the image block merely includes one character, the reading direction information about the character may point to the character itself by default, or the reading direction information about the character may be zero(0). In this embodiment of the present disclosure, the image may be divided into at least two image blocks in accordance with the character position information about the M characters, and the reading direction information about the target character in the target image block may be determined in accordance with the character information about the target character in the target image block. Hence, the reading direction information about the characters may be recognized on a basis of the target image block, and then the text recognition result of the text information in the image may be determined in accordance with the recognized reading direction information about the M characters. As a result, it is able to improve the accuracy of the text semantic recognition, thereby to further improve the text recognition effect. In a possible embodiment of the present disclosure, the determining the reading direction information about the target character in the target image block in accordance with the character information about the target character in the target image block may include: creating a node corresponding to each target character in the target image block, and acquiring node information about each target character, the node information including the character information; acquiring edge connection information about each node, the edge connection information representing a connection relation between the nodes; and determining the reading direction information about the target character in the target image block in accordance with the acquired node information and the acquired edge connection information. During the implementation, a way for the creation of the node corresponding to each target character in the target image block may be similar to the way for the creation of the node corresponding to each character mentioned hereinabove. The way for the creation of the node corresponding to each character is applied to a scenario where the M characters belong to a same image block, and during the creation, the node corresponding to each character in the M characters may be created. 
However, in this embodiment of the present disclosure, the way for the creation of the node is applied to a scenario where the M characters belong to different image blocks, and during the creation, the node corresponding to each target character in the target image block may be created with respect to each target image block. Correspondingly, the node information about a node corresponding to each target character may be acquired. After the creation of the node corresponding to each target character, the edge connection information about each node may be acquired. In this embodiment of the present disclosure, the edge connection information may represent a connection relation between the nodes corresponding to the target characters in the target image blocks. Next, the reading direction information about each target character may be recognized by the reading order decoding module in accordance with the acquired node information and the acquired edge connection information. A recognition mode of the reading order decoding module may be the Graph Neural Network, a text semantic recognition technology or the others, which will not be particularly defined herein. In this embodiment of the present disclosure, the node corresponding to each target character in the target image block may be created, and the node information about each target character and including the character information may be acquired, and then the edge connection information about each node and representing the connection relation between the nodes may be acquired. As a result, it is able to determine the reading direction information about the target character in the target image block in accordance with the acquired node information and the acquired edge connection information, thereby to recognize the text information in the semantic reading order. In a possible embodiment of the present disclosure, the determining the reading direction information about the target character in the target image block in accordance with the acquired node information and the acquired edge connection information may include inputting the acquired node information and the acquired edge connection information into the Graph Neural Network to predict the reading direction information, thereby to determine the reading direction information about the target character in the target image block. The Graph Neural Network may be a Graph Neutral Network with an existing structure or a new structure, which will not be particularly defined herein. Taking the Graph Neural Network with the existing structure as an example, to be specific, it may include a plurality of Graph Neural Network layers, and each Graph Neural Network layer may be any of common Graph Neural Network layers. After the acquired node information and the acquired edge connection information have been inputted into the Graph Neural Network, the plurality of Graph Neural Network layers may be stacked one on another to perform fusion and inference on the information. Next, an incidence matrix at a last layer of the Graph Neural Network may be restrained and guided during the training, so as to finally output a target incidence matrix representing the reading direction information about each target character. During the implementation, the characters may be arranged in various modes in a two-dimensional space, and distances between the characters may be different. 
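By way of a non-limiting illustration, a generic message-passing computation of the kind described above can be sketched as follows; this is not the patent's specific Graph Neural Network, and the row normalization, layer count and pairwise score head are assumptions made for the sketch.

```python
# Generic message-passing sketch: node features are mixed along the edges of
# the incidence matrix over stacked layers, and a pairwise score head produces
# an M*M matrix whose (i, j) entry scores "the character of node j follows the
# character of node i". All weights and sizes are illustrative assumptions.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gnn_reading_order_scores(node_feats, incidence, layer_ws, w_q, w_k):
    h = node_feats                                         # (M, d) node information
    a = incidence / np.maximum(incidence.sum(1, keepdims=True), 1)  # row-normalize
    for w in layer_ws:                                     # stacked graph layers
        h = relu(a @ h @ w)                                # aggregate, then project
    return (h @ w_q) @ (h @ w_k).T                         # (M, M) directed scores

rng = np.random.default_rng(0)
M, d = 4, 8
feats = rng.normal(size=(M, d))
inc = np.ones((M, M))                                      # fully connected example
layers = [rng.normal(size=(d, d)) for _ in range(2)]
w_q, w_k = rng.normal(size=(d, d)), rng.normal(size=(d, d))
print(gnn_reading_order_scores(feats, inc, layers, w_q, w_k).shape)  # (4, 4)
```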
Accordingly, with respect to each target image block, the reading direction information about the target character in the target image block may be recognized using an advanced Graph Neural Network in accordance with the node information about the node corresponding to the target character in the target image block and the edge connection information, so as to recognize the text in the target image block in the semantic reading order. In a possible embodiment of the present disclosure, the created nodes may include a first target node, and the first target node may be any of the created nodes. The acquiring the edge connection information about each node may include: determining a second target node corresponding to the first target node in the created nodes in accordance with the character position information about the target character in the target image block, a distance between the second target node and the first target node being smaller than a distance between a node of the created nodes other than the second target node and the first target node; and creating a first connection relation between the first target node and the second target node, and creating a second connection relation between the first target node and the node of the created nodes other than the second target node, to acquire edge connection information about the first target node. The first connection relation may represent that the two nodes are connected to each other, and the second connection relation may represent that the two nodes are not connected to each other. In a default case, usually there is no semantic relation between the characters at a large distance from each other, so in this embodiment of the present disclosure, when creating the connection relation between the nodes, the distance between the nodes may be taken into consideration, and the connection relation between the nodes may be processed using a k-Nearest Neighbor algorithm. Taking a 5-Nearest Neighbor algorithm as an example, with respect to each node, the node may be connected to five(5) nodes nearest to the node, but it may not be connected to the other nodes. To be specific, with respect to any of the created nodes, i.e., the first target node, a distance between the first target node and each of the other nodes may be determined in accordance with the character position information about the target character in the target image block, and then the second target node corresponding to the first target node in the created nodes may be determined using a Nearest Neighbor algorithm in accordance with the determined distance. Next, the first connection relation between the first target node and the second target node and the second connection relation between the first target node and the node of the created nodes other than the second target node and the first target node may be created. In addition, a first connection relation between the first target node and itself may be further created, so as to finally acquire the edge connection information about the first target node. 
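By way of a non-limiting illustration of the 5-Nearest Neighbor edge construction just described (shown here with an adjustable k), each node can be connected to its k nearest nodes plus itself as follows, assuming NumPy; the character centers are hypothetical.

```python
# Sketch of k-Nearest-Neighbour edge construction: each node's row of the
# incidence matrix gets a 1 for its k closest nodes (by character-center
# distance) and for the loop connection to the node itself; all other entries
# stay 0.
import numpy as np

def knn_edges(centers, k=5):
    centers = np.asarray(centers, dtype=float)
    n = len(centers)
    dists = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    incidence = np.zeros((n, n), dtype=int)
    for i in range(n):
        order = np.argsort(dists[i])        # nearest first; order[0] is i itself
        neighbours = order[1:k + 1]         # the k nearest other nodes
        incidence[i, neighbours] = 1
        incidence[i, i] = 1                 # loop connection to the node itself
    return incidence

centers = [(0, 0), (1, 0), (2, 0), (10, 0), (11, 0), (12, 0), (13, 0)]
print(knn_edges(centers, k=2))
```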
In this embodiment of the present disclosure, the second target node corresponding to the first target node in the created nodes may be determined in accordance with the character position information about the target character in the target image block, and then the first connection relation between the first target node and the second target node and the second connection relation between the first target node and the node of the created nodes other than the second target node and the first target node may be created. As a result, it is able to reduce the quantity of edges between the nodes, thereby to reduce a processing difficulty of the Graph Neural Network after the information has been inputted into the Graph Neural Network. Second Embodiment As shown inFIG.3, the present disclosure provides in this embodiment a text recognition device300, which includes: an acquisition module301configured to acquire an image including text information, the text information including M characters, M being a positive integer greater than 1; a text recognition module302configured to perform text recognition on the image to acquire character information about the M characters; a reading direction recognition module303configured to recognize reading direction information about each character in accordance with the character information about the M characters, the reading direction information being used to indicate a next character corresponding to a current character in a semantic reading order; and a ranking module304configured to rank the M characters in accordance with the reading direction information about the M characters to acquire a text recognition result of the text information. In a possible embodiment of the present disclosure, the character information may include character position information. The text recognition device may further include a division module configured to divide the image into at least two image blocks in accordance with the character position information about the M characters, and the at least two image blocks may include the M characters. The reading direction recognition module303is specifically configured to determine the reading direction information about a target character in a target image block in accordance with character information about the target character in the target image block, and the target image block may be an image block including at least two characters in the at least two image blocks. In a possible embodiment of the present disclosure, the reading direction recognition module303may include: a creation unit configured to create a node corresponding to each target character in the target image block; a first acquisition unit configured to acquire node information about each target character, and the node information including the character information; a second acquisition unit configured to acquire edge connection information about each node, the edge connection information representing a connection relation between the nodes; and a determination unit configured to determine the reading direction information about the target character in the target image block in accordance with the acquired node information and the acquired edge connection information. 
In a possible embodiment of the present disclosure, the determination unit is specifically configured to input the acquired node information and the acquired edge connection information into a Graph Neural Network for predicting the reading direction information, to determine the reading direction information about the target character in the target image block. In a possible embodiment of the present disclosure, the created nodes may include a first target node. The acquisition unit is specifically configured to: determine a second target node corresponding to the first target node in the created nodes in accordance with the character position information about the target character in the target image block, a distance between the second target node and the first target node being smaller than a distance between a node in the created nodes other than the second target node and the first target node; and create a first connection relation between the first target node and the second target node, and a second connection relation between the first target node and the node in the created nodes other than the second target node, to acquire edge connection information about the first target node. The first connection relation may represent that the two nodes are connected to each other, and the second connection relation may represent that the two nodes are not connected to each other. In this embodiment of the present disclosure, the text recognition device300may be used to implement the steps of the above-mentioned text recognition method with a same beneficial effect, which will not be particularly defined herein. The present disclosure further provides in some embodiments an electronic device, a computer-readable storage medium and a computer program product. FIG.4is a schematic block diagram of an exemplary electronic device400in which embodiments of the present disclosure may be implemented. The electronic device is intended to represent all kinds of digital computers, such as a laptop computer, a desktop computer, a work station, a personal digital assistant, a server, a blade server, a main frame or other suitable computers. The electronic device may also represent all kinds of mobile devices, such as a Personal Digital Assistant (PDA), a cell phone, a smart phone, a wearable device and other similar computing devices. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the present disclosure described and/or claimed herein. As shown inFIG.4, the electronic device400includes a computing unit401configured to execute various processings in accordance with computer programs stored in a Read Only Memory (ROM)402or computer programs loaded into a Random Access Memory403via a storage unit408. Various programs and data desired for the operation of the electronic device400may also be stored in the RAM403. The computing unit401, the ROM402and the RAM403may be connected to each other via a bus404. In addition, an input/output (I/O) interface405may also be connected to the bus404. Multiple components in the electronic device400are connected to the I/O interface405. 
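Returning to the determination unit described at the start of this passage, the following Python sketch gives a rough idea of how node information and edge connection information could be fed to a single graph-message-passing layer that scores, for each node, which connected node is most likely to follow it. The weights are random and untrained, so this is only a conceptual illustration, not the disclosed Graph Neural Network.

```python
# Conceptual sketch (random, untrained weights; not the disclosed network):
# feed node information and edge connection information to one round of
# message passing, then score connected pairs (i, j) as "j follows i".
import numpy as np

rng = np.random.default_rng(0)

def gnn_predict_next(node_features, adjacency):
    """node_features: (n, d) array of node information (e.g. position, class).
    adjacency: (n, n) 0/1 array of edge connection information."""
    n, d = node_features.shape
    w_self, w_neigh = rng.normal(size=(d, d)), rng.normal(size=(d, d))
    w_edge = rng.normal(size=(2 * d,))
    # One round of message passing: mix each node with its neighbours.
    deg = np.maximum(adjacency.sum(axis=1, keepdims=True), 1)
    h = np.tanh(node_features @ w_self + (adjacency @ node_features) / deg @ w_neigh)
    # Score each connected pair; unconnected pairs are masked out.
    scores = np.full((n, n), -np.inf)
    for i in range(n):
        for j in range(n):
            if i != j and adjacency[i, j]:
                scores[i, j] = np.concatenate([h[i], h[j]]) @ w_edge
    return scores.argmax(axis=1)  # predicted next-node index for every node

features = rng.normal(size=(4, 6))   # e.g. encoded position + character class
adj = np.ones((4, 4), dtype=int)     # fully connected toy example
print(gnn_predict_next(features, adj))
```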
The multiple components include: an input unit406, e.g., a keyboard, a mouse and the like; an output unit407, e.g., a variety of displays, loudspeakers, and the like; a storage unit408, e.g., a magnetic disk, an optic disk and the like; and a communication unit409, e.g., a network card, a modem, a wireless transceiver, and the like. The communication unit409allows the electronic device400to exchange information/data with other devices through a computer network and/or other telecommunication networks, such as the Internet. The computing unit401may be any general purpose and/or special purpose processing components having a processing and computing capability. Some examples of the computing unit401include, but are not limited to: a central processing unit (CPU), a graphic processing unit (GPU), various special purpose artificial intelligence (AI) computing chips, various computing units running a machine learning model algorithm, a digital signal processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit401carries out the aforementioned methods and processes, e.g., the text recognition method. For example, in some embodiments of the present disclosure, the text recognition method may be implemented as a computer software program tangibly embodied in a machine readable medium such as the storage unit408. In some embodiments of the present disclosure, all or a part of the computer program may be loaded and/or installed on the electronic device400through the ROM402and/or the communication unit409. When the computer program is loaded into the RAM403and executed by the computing unit401, one or more steps of the foregoing text recognition method may be implemented. Optionally, in some other embodiments of the present disclosure, the computing unit401may be configured in any other suitable manner (e.g., by means of firmware) to implement the text recognition method. Various implementations of the aforementioned systems and techniques may be implemented in a digital electronic circuit system, an integrated circuit system, a field-programmable gate array (FPGA), an application specific integrated circuit (ASIC), an application specific standard product (ASSP), a system on a chip (SOC), a complex programmable logic device (CPLD), computer hardware, firmware, software, and/or a combination thereof. The various implementations may include an implementation in form of one or more computer programs. The one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor. The programmable processor may be a special purpose or general purpose programmable processor, may receive data and instructions from a storage system, at least one input device and at least one output device, and may transmit data and instructions to the storage system, the at least one input device and the at least one output device. Program codes for implementing the methods of the present disclosure may be written in one programming language or any combination of multiple programming languages. These program codes may be provided to a processor or controller of a general purpose computer, a special purpose computer, or other programmable data processing device, such that the functions/operations specified in the flow diagram and/or block diagram are implemented when the program codes are executed by the processor or controller. 
The program codes may be run entirely on a machine, run partially on the machine, run partially on the machine and partially on a remote machine as a standalone software package, or run entirely on the remote machine or server. In the context of the present disclosure, the machine readable medium may be a tangible medium, and may include or store a program used by an instruction execution system, device or apparatus, or a program used in conjunction with the instruction execution system, device or apparatus. The machine readable medium may be a machine readable signal medium or a machine readable storage medium. The machine readable medium includes, but is not limited to: an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, device or apparatus, or any suitable combination thereof. A more specific example of the machine readable storage medium includes: an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read only memory (ROM), an erasable programmable read only memory (EPROM or flash memory), an optic fiber, a portable compact disc read only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination thereof. To facilitate user interaction, the system and technique described herein may be implemented on a computer. The computer is provided with a display device (for example, a cathode ray tube (CRT) or liquid crystal display (LCD) monitor) for displaying information to a user, a keyboard and a pointing device (for example, a mouse or a track ball). The user may provide an input to the computer through the keyboard and the pointing device. Other kinds of devices may be provided for user interaction, for example, a feedback provided to the user may be any manner of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received by any means (including sound input, voice input, or tactile input). The system and technique described herein may be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middle-ware component (e.g., an application server), or that includes a front-end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the system and technique), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), an Internet, and a blockchain network. The computer system can include a client and a server. The client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on respective computers and having a client-server relationship to each other. The server may be a cloud server, also called as cloud computing server or cloud server, which is a host product in a cloud calculating service system, so as to overcome such defects as large management difficulty and insufficient service extensibility in a conventional physical host and a Virtual Private Server (VPS). The server may also be a server of a distributed system, or a server combined with blockchain. 
It should be appreciated that all forms of processes shown above may be used, and steps thereof may be reordered, added or deleted. For example, as long as expected results of the technical solutions of the present disclosure can be achieved, steps set forth in the present disclosure may be performed in parallel, performed sequentially, or performed in a different order, and there is no limitation in this regard. The foregoing specific implementations constitute no limitation on the scope of the present disclosure. It is appreciated by those skilled in the art that various modifications, combinations, sub-combinations and replacements may be made according to design requirements and other factors. Any modifications, equivalent replacements and improvements made without deviating from the spirit and principle of the present disclosure shall be deemed as falling within the scope of the present disclosure. | 48,491 |
11861920 | DETAILED DESCRIPTION OF THE INVENTION In many countries, processed foods that are said to be gluten free may actually contain a small amount of gluten, and a maximum amount of gluten must be adhered to before they can be sold as “gluten free”. In Brazil, for a processed food to be sold as “gluten free”, it must have a maximum of 20 parts per million (ppm) of gluten. People with celiac disease need to control the consumption of processed foods to avoid the immune system reaction, but even when consuming “gluten-free” products, they end up ingesting amounts of gluten which, although minimal, can become significant when considering the effect of cumulative consumption. Thus, when consuming “gluten-free” processed foods, the individual needs to pay attention to the amount of processed foods consumed in order not to exceed the maximum amount of gluten indicated for the patient with celiac disease, which is up to 10 mg of gluten daily in processed foods. For example, if an individual were to consume 300 g of a food known as “gluten free” comprising in its composition a value of 20 ppm of gluten, approximately 6 mg of gluten would be consumed. Thus, if this same food has 50 ppm, when consuming 300 g, the individual would be ingesting 15 mg of gluten, that is, 5 mg above the recommended maximum, and may, therefore, present a clinical condition of inflammation of the small intestine and other symptoms associated with celiac disease. Controlling the food consumed daily by individuals with celiac disease is quite laborious, and it is necessary to control and balance the food in order not to exceed a maximum of 10 mg of gluten daily. In case the individual has consumed more than 10 mg of gluten and is sick due to the immune system reaction, he will have to assess which food caused it and, in case he ate outside the home, where that food was consumed. However, the immune response may take a few days, thus making it difficult to identify the cause of such indisposition/inflammation. Furthermore, another problem that celiac individuals face is the frequent uncertainty about whether an indisposition is associated with gluten consumption. This is because many of the symptoms of a patient with celiac disease are similar to those of other common illnesses, such as diarrhea, malaise, headache and stomachache, and there may be confusion about the exact cause of the indisposition. Thus, rapid antibody tests are used in order to be sure that gluten consumption is the cause of the indisposition. There are some solutions on the market that test foods for gluten in their composition. These devices use technologies based on reagents, which change color in contact with gluten-containing foods, or on mass spectrometry, measuring the components present in foods. Another technology for detecting the amount of gluten in food is image analysis, which, through artificial intelligence techniques, performs the analysis of food and makes a prediction of the amount of gluten and calories it may have. Thus, the present invention provides an effective solution for individuals with celiac disease to monitor the food they are consuming and balance their gluten consumption. As avoiding gluten completely is practically impossible, a methodology that can be applied in a device with software to help the celiac individual to monitor such consumption is extremely necessary.
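The arithmetic behind the 6 mg and 15 mg figures above can be made explicit with a short calculation (ppm is milligrams of gluten per kilogram of food); the snippet below is only a worked example of that conversion.

```python
# Worked example of the arithmetic above: ppm is milligrams of gluten per
# kilogram of food, so 300 g at 20 ppm yields 6 mg and at 50 ppm yields 15 mg.
def gluten_mg(food_grams, gluten_ppm):
    return food_grams / 1000.0 * gluten_ppm  # (kg of food) * (mg gluten / kg)

DAILY_LIMIT_MG = 10  # maximum daily gluten in processed foods for a celiac patient

for ppm in (20, 50):
    mg = gluten_mg(300, ppm)
    print(f"300 g at {ppm} ppm -> {mg:.1f} mg "
          f"({'within' if mg <= DAILY_LIMIT_MG else 'above'} the 10 mg daily limit)")
```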
The system of the present invention comprises, for data acquisition and information processing, a device configured to capture information related to the consumption of a food, such information allowing the identification, estimation and/or association of the gluten content of the food. The device is also configured to receive, through an input interface, information about an individual's food consumption. The device of the system according to the present invention is provided with a set of sensors, so that the device performs data acquisition and information processing and can, when available, be fed with data from the gluten sensors. The system can also be fed with data from antibody tests that are carried out for the individual. The detection of gluten and its quantities can also be performed, in addition to artificial intelligence techniques applied to image recognition, by any technologies of the spectrometry type, or even with the use of chemical reagents. In this way, the device of the system of the present invention works as a data repository, so that an individual makes records of meals (food, quantities, days and places), which are stored in the system to constitute the database, which is continuously fed. Such data start to compose a database that receives and stores information related to food consumption and information on the individual's food consumption, the database also storing a maximum limit for the consumption of gluten admitted for the individual. Thus, it is possible, for example, to screen the food and the place where it was consumed that may have provoked the immune system reaction due to having a high amount of gluten. The device, with its interface and set of sensors for data acquisition and information processing, according to the present invention comprises a set of detection sensors to identify quantities of gluten in food, which may be part of the device itself or attached to it, for example sensors for the detection of gluten, such as chemical sensors or spectrometry-based sensors. The device interface is any interface capable of receiving information input from an individual. The set of sensors comprises at least one barcode reader, for obtaining data from industrialized food labels, as well as photo sensors capable of, through images, identifying consumed foods, as well as their quantities, and measuring the amount of gluten involved by associating such foods and amounts with data from the system's database according to the present invention. In the case of capturing information through artificial intelligence techniques (Machine Learning, Neural Networks, Support Vector Machines), cloud software with Internet access performs the analysis of the food, accessing a previously available database with tens of thousands of photos of food in different types of combinations and dishes. This database can be continuously updated, so that several other types of food can be detected (for example, if the artificial intelligence algorithm does not correctly detect the food, the user manually enters the data, and from that moment onwards the database will be updated). However, by comprising a set of sensors (as part of the device or attached to it), the device of the system according to the present invention makes an assessment of the amount consumed that goes far beyond bar code reading alone.
Thus, based on the data obtained by the set of sensors of the system according to the present invention, an estimate of gluten consumption is carried out using statistical analysis. Thus, through the interface for data acquisition and information processing, data that form a database about food and its gluten content are inserted. Once the food and its consumed quantities are inserted into the device, a probabilistic analysis is performed taking into account each food consumed, so that, in a state of indisposition of the individual, a probability that such an indisposition is or is not the result of an immune system reaction is calculated. The data is processed by software on the processor included in the device of the system of the present invention or by software on the Internet that the processor has access to. In this sense,FIG.1represents a flowchart of information insertion in the device of the system according to the present invention at the beginning of a meal, in which the data can be inserted manually, or even by images, by the user through an application/software on mobile devices. Another option is for the user to take a picture of the food on the plate and for the device to automatically identify, through the software, the consumed foods and their respective quantities. As shown inFIG.1, after this step of detecting the types of food in the meal, the software will carry out the analysis to estimate the amount of food and gluten in each food in the meal, taking into account a database with the amount of gluten in each food available in a library, whether online or offline. This database will always be updated to include new types of food with or without gluten. Also, as shown in the flowchart inFIG.1, it is necessary to indicate whether any of the foods being consumed is considered to be gluten-free, that is, with less than 20 ppm in its composition. This information can also be read using the barcode reader or QR code on the product packaging. A database of registered gluten free food labels will be made available to check if the product is registered and if it is really gluten free. If the product is not registered, this information will be sent to an external support team for checking and obtaining information. The system of the present invention can also, through sensors and identification means, identify the presence of gluten in foods, even those that are natural, which may include traces of gluten arising, for example, from cross contamination. Thus, all the information acquired will be used to estimate the amount of daily gluten consumption of an individual and also for a probabilistic analysis and consequent statistical prediction about whether an individual's indisposition is associated with gluten consumption or not. Thus,FIG.2represents a flowchart for the statistical prediction of ingested gluten consumption, the determination of whether the accumulated gluten consumption of processed foods called gluten free (that is, comprising an amount of gluten less than 20 ppm) during a pre-determined amount of time, for example the last 24 hours, has exceeded the maximum quantity of gluten that is safe for celiac patients, and the estimation of whether any food consumed has a value equal to or greater than 20 ppm in its composition (maximum limit established).
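As a minimal sketch of the meal registration and 24-hour accumulation steps of the flowcharts above, the following Python code (illustrative data model; the 20 ppm label check and the 10 mg daily limit are the thresholds stated in the text) records food entries and raises the warnings described.

```python
# Minimal sketch (illustrative data model, not the disclosed software) of the
# meal registration and 24-hour accumulation step described above.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class FoodEntry:
    name: str
    grams: float
    gluten_ppm: float      # estimated from the image, label or gluten sensor
    when: datetime
    place: str

    @property
    def gluten_mg(self):
        return self.grams / 1000.0 * self.gluten_ppm

def gluten_last_24h(entries, now=None):
    now = now or datetime.now()
    window = now - timedelta(hours=24)
    return sum(e.gluten_mg for e in entries if e.when >= window)

def check_meal(entries, new_entry, limit_mg=10.0):
    total = gluten_last_24h(entries + [new_entry], now=new_entry.when)
    if new_entry.gluten_ppm >= 20:
        return "warning: this food is not gluten free (>= 20 ppm)"
    if total > limit_mg:
        return f"warning: accumulated gluten in 24 h is {total:.1f} mg (> {limit_mg} mg)"
    return f"ok: accumulated gluten in 24 h is {total:.1f} mg"

log = [FoodEntry("gluten-free bread", 200, 18, datetime(2024, 1, 1, 8, 0), "home")]
print(check_meal(log, FoodEntry("gluten-free pasta", 300, 19, datetime(2024, 1, 1, 13, 0), "restaurant")))
```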
Typical celiac disease reaction symptoms will be monitored 24 hours a day and a statistical prediction will be performed to check the likelihood of these reactions in the individual's body being due to gluten consumption above what is allowed for a person with celiac disease. For the statistical estimation of gluten consumption and prediction of possible side effects due to gluten consumption, a Poisson distribution probability analysis is performed, according to equation 1, which is a probability distribution for a random variable that expresses the probability of a given number of events occurring in a certain period of time if these events occur regardless of when the last event occurred. In equation (1), which makes it possible to estimate the probability of occurrence of a given event x, λ is the average number of events over a period of time, or the average rate of occurrence per measured unit; the random variable X is the count of the number of events occurring in the interval.

ℙ(X = x) = (e^(−λ) · λ^x) / x!,  x = 0, 1, 2, 3, …  (Equation 1)

Thus, in real time, the system according to the present invention updates the amount of daily gluten consumed by the individual and makes this estimate available continuously. In the case of a meal in which a “gluten-free” processed food is consumed (thus having a gluten amount of less than 20 ppm), which, cumulatively, can generate an autoimmune response due to gluten consumption, the system generates warnings to the user (through a notification, such as a sound, visual or message alert on the mobile device) that the food to be consumed may generate the said autoimmune response due to the consumption of gluten. The cumulative Poisson distribution function of gluten consumption over 24 hours that is initially used for the probability calculation can be seen inFIG.3. This function is adjusted over time, according to the number of times symptoms are experienced by the patient and according to the antibody tests that the patient performs. To adjust this curve, neural network or error minimization techniques can be used (the least squares method, for example). In any case, a celiac individual may or may not eventually exceed this amount and still have an indisposition. Thus, the present invention also relates to a prediction system for the association of indisposition in an individual with inadequate gluten consumption. In this sense, if a celiac individual experiences an indisposition, with symptoms such as diarrhea, vomiting, headache, fever or malaise, the data and information processing system is activated to carry out a step of evaluation and calculation of the probability of the cause of the indisposition, in order to predict whether the indisposition is a result of the immune system's reaction to gluten. InFIG.3, the accumulated gluten density function in the last 24 hours is shown. From 10 mg of gluten consumption, the probability of the patient presenting symptoms increases more rapidly. This accumulated probability function graph will be readjusted over time according to the data collected from the patient. The curve can be adjusted and a new estimated curve obtained after adjustment processes using, for example, least squares techniques, neural networks, support vector machines or another artificial intelligence technique. As an example, consider the case of accumulated gluten consumption in the last 24 hours given a Poisson distribution with rate λ=15, with the probability distribution as a function of gluten consumption shown inFIG.3.
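As a purely illustrative sketch of the probability analysis in Equation 1 and of the curve adjustment mentioned above (the least squares option), the following Python code evaluates the Poisson probability and cumulative probability for λ=15 and refits λ against hypothetical symptom observations. The specific probabilities quoted in the table that follows depend on how the system maps accumulated gluten onto the distribution and are not reproduced here.

```python
# Illustrative sketch (not the disclosed software): evaluate Equation 1 and the
# 24-hour cumulative curve for lambda = 15, then refit lambda by least squares
# against hypothetical (accumulated mg, reaction 0/1) observations of the kind
# collected from symptoms and antibody tests.
import math

def poisson_pmf(x, lam):
    return math.exp(-lam) * lam ** x / math.factorial(x)      # Equation 1

def poisson_cdf(x, lam):
    return sum(poisson_pmf(k, lam) for k in range(x + 1))

lam = 15
for mg in (10, 12, 15, 20):
    print(f"x = {mg} mg: P(X = {mg}) = {poisson_pmf(mg, lam):.3f}, "
          f"P(X <= {mg}) = {poisson_cdf(mg, lam):.3f}")

def refit_lambda(observations, candidates=range(5, 31)):
    """Least-squares choice of lambda over integer candidates."""
    def sse(c):
        return sum((poisson_cdf(mg, c) - reaction) ** 2 for mg, reaction in observations)
    return min(candidates, key=sse)

# Hypothetical follow-up data: accumulated gluten (mg) and whether a reaction
# was confirmed (1) or not (0) by symptoms / antibody tests.
observed = [(6, 0), (8, 0), (9, 0), (11, 0), (12, 1),
            (14, 0), (16, 1), (18, 1), (20, 1), (22, 1)]
print("readjusted lambda:", refit_lambda(observed))
```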
The table below shows the minimum quantity of gluten Qmin in milligrams and the minimum probability Pmin of the patient having symptoms, and the maximum quantity of gluten Qmax in milligrams and the maximum probability Pmax of the patient having symptoms, for this Poisson distribution.

λ     Qmin (mg); Pmin     Qmax (mg); Pmax
15    12; 0.29            20; 0.62

The examples above demonstrate the calculation of the probability that the patient will develop an immune system response given a particular Poisson distribution with λ=15. This is just an example, and the algorithm can define other values of λ depending on the patient's history of symptoms and the antibody tests to be performed, in such a way as to obtain a Poisson probability distribution that is the most suitable for a given patient. With the symptom data to be provided, the likelihood of the gluten reaction can be validated. A temporal follow-up regarding the amount of accumulated gluten should be carried out, and adjustments over time can refine the process so that the system's response regarding the reactions of the immune system due to the consumption of gluten by the patient with celiac disease becomes increasingly accurate. A particular case, which can happen occasionally, is the patient undergoing an antibody test for celiac disease. If the antibody test gives a positive result for the amount of gluten that was estimated by the system, it provides extra data to help correct the Poisson distribution curves. However, as it is not possible to be sure that the amount of gluten consumption estimated by the system was completely correct, these adjustments to the Poisson distribution with the antibody tests will only be performed after a certain minimum sampling of antibody tests, so that the sampling is statistically significant. For example, with the antibody test the curve can be readjusted every 10 tests performed. This is just an example; more or fewer tests may be used to correct the curve by defining this parameter in the system software. By storing data in the database over time, the system will accumulate more data relating to health status and antibody tests performed, and the present invention will tend to become more accurate in diagnosing and alerting the individual with celiac disease. Having described an example of a preferred embodiment of the present invention, it should be understood that the scope of the present invention encompasses other possible variations of the described inventive concept, being limited only by the content of the claims, including possible equivalents therein. | 16,309 |
11861921 | DETAILED DESCRIPTION To raise the success rate of ICSI, it is important to select and inject sperm favorable for fertilization into the egg. However, determining whether the sperm obtained by selection work is favorable or not largely depends on the experience of the embryologist acting as the worker, and disparities in fertilization success rates are likely to occur among embryologists. Hereinafter, embodiments of the present invention will be described in light of the above. First Embodiment FIG.1is a diagram illustrating an example of a configuration of a microscope system1according to the present embodiment.FIG.2is a diagram illustrating an example of a configuration of an inverted microscope100.FIG.3is a diagram illustrating an example of a configuration of an operation unit of an input device50.FIG.4is a diagram illustrating an example of a functional configuration of a processing device20.FIG.5is a diagram illustrating an example of a hardware configuration of the processing device20. The microscope system1illustrated inFIG.1is an inverted microscope system provided with a transillumination subsystem120used for micro-insemination, and is used by an embryologist who performs micro-insemination, for example. The microscope system1is provided with at least an eyepiece lens101, objectives102, a tube lens103, an imaging unit140, a processing device20, and a projection device153. Furthermore, in the microscope system1, a modulation element for visualizing an unstained sample used in micro-insemination is provided in each of an illumination optical path and an observation optical path. The microscope system1uses the projection device153to project a projected image onto an image plane where an optical image of the sample is formed by one of the objectives102and the tube lens103. With this arrangement, a user of the microscope system1sees an image in which the projected image is superimposed onto the optical image. In particular, by including an assisting image that assists with micro-insemination in the projected image, the microscope system1is capable of providing various information that assists with micro-insemination superimposed onto the optical image to the user who observes a sample by peering into the eyepiece lens101to perform the micro-insemination work. Hereinafter, a specific example of the configuration of the microscope system1will be described in detail with reference toFIGS.1to4. As illustrated inFIG.1, the microscope system1is provided with an inverted microscope100, a microscope controller10, a processing device20, a display device30, a plurality of input devices (input device40, input device50, input device60, input device70), and an identification device80. Furthermore, the microscope system1is connected to a database server2where various data is stored. As illustrated inFIG.1, the inverted microscope100is provided with a microscope body110, in addition to a plurality of objectives102, a stage111, a transillumination subsystem120, and an eyepiece tube170, which are attached to the microscope body110. The user is able to use the inverted microscope100to observe a sample according to the four microscopy methods of bright field (BF) observation, polarized (PO) observation, differential interference contrast (DIC) observation, and modulation contrast (MC) observation. Note that modulation contrast observation is also referred to as relief contrast (RC) observation. The plurality of objectives102are mounted onto a revolving nosepiece112. 
As illustrated inFIG.2, the plurality of objectives102include an objective102aused for BF observation, an objective102bused for PO observation and DIC observation, and an objective102cused for MC observation. Additionally, the objective102cincludes a modulator104. The modulator104has three zones with different degrees of transmittance (for example, a zone with approximately 100% transmittance, a zone with approximately 5% transmittance, and a zone with approximately 0% transmittance). InFIG.2, three objectives corresponding to different microscopy methods are illustrated as an example, but the plurality of objectives102may also include a plurality of objectives with different magnifications for each microscopy method. Hereinafter, a case where a 4× objective used for BF observation, 10×, 20×, and 40× objectives used for MC observation, a 20× objective used for PO observation, and a 60× objective used for DIC observation are included will be described as an example. The revolving nosepiece112is a switching device that switches the objective disposed on the optical path from among the plurality of objectives102. The revolving nosepiece112switches the objective disposed on the optical path according to the microscopy method and the observation magnification. The objective disposed on the optical path by the revolving nosepiece112guides transmitted light that has transmitted through a sample to the eyepiece lens101. A sample inserted into a container is placed on the stage111. The container is a Petri dish and the sample includes reproductive cells, for example. The stage111moves in the optical axis direction of the objective102disposed on the optical path, and also in a direction orthogonal to the optical axis of the objective102. Note that the stage111may be a manual stage or a motorized stage. The transillumination subsystem120illuminates the sample placed on the stage111from above the stage111. As illustrated inFIGS.1and2, the transillumination subsystem120includes a light source121and a universal condenser122. The light source121may be a light-emitting diode (LED) light source or a halogen lamp light source, for example. As illustrated inFIG.2, the universal condenser122includes a polarizer123(first polarizing plate), a plurality of optical elements housed in a turret124, and a condenser lens128. The polarizer123is used in MC observation, PO observation, and DIC observation. A plurality of optical elements used by being switched depending on the microscopy method are housed in the turret124. A DIC prism125is used in DIC observation. An aperture plate126is used in BF observation and PO observation. An optical element127is a combination of a slit plate127a, which is a light-shielding plate having a slit formed therein, and a polarizing plate127b(second polarizing plate) disposed to cover a portion of the slit. The optical element127is used in MC observation. The eyepiece lens101is included in the eyepiece tube170. The tube lens103is disposed between the eyepiece lens101and the objective102. The tube lens103forms an optical image of the sample on the basis of transmitted light in an image plane IP between the eyepiece lens101and the tube lens103. Additionally, a projected image described later is also formed in the image plane IP on the basis of light from the projection device153. With this arrangement, the projected image is superimposed onto the optical image in the image plane IP. 
The user of the microscope system1uses the eyepiece lens101to observe a virtual image of the image in which the projected image is superimposed onto the optical image formed in the image plane IP. As illustrated inFIG.1, the microscope body110includes a laser-assisted hatching unit130, an imaging unit140, and a projection unit150. Also, as illustrated inFIG.2, the microscope body110includes an intermediate magnification change unit160. Furthermore, the microscope body110includes a DIC prism105and an analyzer106, which are detachable from the optical path. As illustrated inFIG.2, the laser-assisted hatching unit130is a laser unit disposed between the objective102and the tube lens103. The laser-assisted hatching unit130shines laser light onto the sample by introducing laser light from between the objective102and the tube lens103. More specifically, the laser-assisted hatching unit130shines laser light onto the zona pellucida surrounding an embryo that grows from a fertilized egg, for example. The laser-assisted hatching unit130includes a splitter131, a scanner133, a lens134, and a laser135. The splitter131is a dichroic mirror, for example. The scanner133is a galvano scanner, for example, and adjusts the irradiation position of the laser light in a direction orthogonal to the optical axis of the objective102. The lens134converts the laser light into a beam of collimated light. With this arrangement, the laser light is condensed onto the sample by the objective102. The imaging unit140is an imaging device that acquires digital image data of the sample on the basis of the transmitted light. The imaging unit140is disposed between the tube lens103and the eyepiece lens101. As illustrated inFIG.2, the imaging unit140includes a splitter141and an imaging element143. The splitter141is a half mirror, for example. The tube lens103forms an optical image of the sample on a light-receiving face of the imaging element143. The imaging element143is for example a charge-coupled device (CCD) image sensor or a complementary metal-oxide-semiconductor (CMOS) image sensor that detects lights from the sample, and converts the detected light into an electrical signal by photoelectric conversion. The imaging unit140generates digital image data of the sample on the basis of the electrical signal obtained by the imaging element143. Note that the microscope system1described later is used to observe samples such as sperm, and the fine features of sperm, such as the tail portion for example, are approximately φ0.5 μm. To discern such features in an image, the pixel pitch is demanded to be φ0.5 μm or less when projected onto the object plane. In other words, the pitch of the pixel-projected image in the object plane calculated by dividing the pixel pitch by the total magnification (that is, the magnification of the objective×the magnification of the intermediate magnification change unit×the magnification of a camera adapter not illustrated) is demanded to be φ0.5 μm or less. For example, with the combination of a 20× objective, a 2× intermediate magnification change lens, and a 0.25× camera adapter, the total magnification is 10×. In this case, by using a digital microscope camera having pixel pitch of 3.45 μm, the pitch of the pixel-projected image in the object plane is 0.345 μm, and even the tail portion of sperm is discernible. Note that when selecting the actual digital camera, further consideration should be given such that the region formed by the effective pixels has a size that fills the entire field of view. 
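The pixel-pitch requirement discussed above reduces to a single division; the snippet below reproduces the worked example from the text (3.45 μm pixel pitch, 20× objective, 2× intermediate magnification change lens, 0.25× camera adapter).

```python
# Worked example of the pixel-pitch requirement discussed above (values taken
# from the text): the pitch of the pixel image projected onto the object plane
# is the camera pixel pitch divided by the total magnification.
def object_plane_pixel_pitch(pixel_pitch_um, objective_mag, intermediate_mag, adapter_mag):
    total_magnification = objective_mag * intermediate_mag * adapter_mag
    return pixel_pitch_um / total_magnification

pitch = object_plane_pixel_pitch(pixel_pitch_um=3.45,
                                 objective_mag=20, intermediate_mag=2, adapter_mag=0.25)
print(f"object-plane pixel pitch: {pitch:.3f} um")   # 0.345 um, below the 0.5 um target
```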
The projection unit150is disposed between the tube lens103and the eyepiece lens101. As illustrated inFIG.2, the projection unit150includes a splitter151, a lens152, and a projection device153. The splitter151is a half mirror, for example. The projection device153projects a projected image on the basis of projected image data generated by the processing device20. The lens152projects the projected image by condensing light from the projection device153onto the image plane of the tube lens103, or in other words the same position as the image plane IP where the optical image is formed. For example, the size of a single sperm from head to tail is roughly 60 μm, and the size of the head is approximately 3 μm across the short side. If such a sperm is projected onto the image plane IP in front of the eyepiece lens with the combination of a 20× objective used for MC observation and a 1× intermediate magnification change lens, the image of the sperm has a size of 1.2 mm×0.06 mm. If projected image data containing such a sperm is created, the result is a rectangle with a minimum size of approximately 1.5 mm×0.1 mm. To project this minimum 0.1 mm gap to be perceivable in the field of view of the eyepiece lens, in the case where the projection magnification of the lens152is 1×, it is sufficient to use a projection device153including a light-emitting element with a pitch of 0.05 mm or less (in the monochromatic case). This arrangement makes it possible to display a projected image in which the above 0.1 mm gap is perceivable. Furthermore, the projection device153projects a projected image onto a field of view that not only satisfies the field number φ22 of the eyepiece lens, but also an even larger field number of φ23 or greater. Specifically, in the case where the lens152has a 1× projection magnification, a projection device153having an effective light-emitting area of φ23 or greater is used. With this arrangement, data about sperm in the periphery of the field of view entering the field of view from outside the eyepiece lens field of view is also included in the projected image data. Consequently, it is possible to recognize favorable sperm thoroughly from among all sperm inside the field of view, including the periphery of the eyepiece lens field of view. Note that in this case, the effective pixel area of the imaging element143obviously also needs to have a size of φ23 or greater in the eyepiece lens part. The intermediate magnification change unit160is disposed between the objective102and the tube lens103. As illustrated inFIG.2, the intermediate magnification change unit160includes a plurality of lenses (lens161, lens162, lens163), and by switching the lens disposed on the optical path from among these lenses, the magnification of the optical image formed in the image plane is changed. By using the intermediate magnification change unit160, the magnification of the optical image can be changed without switching the objective102positioned close to the sample. The DIC prism105and the analyzer106are disposed between the objective102and the tube lens103. The DIC prism105is used in DIC observation. The analyzer106is used in PO observation and DIC observation. 
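The sizing of the projection device153discussed above follows from simple scaling; the snippet below reproduces those numbers (a 60 μm × 3 μm sperm imaged at a 20× objective with a 1× intermediate magnification, and the 0.05 mm emitter pitch that the text states is sufficient for a 0.1 mm feature at 1× projection magnification).

```python
# Worked example of the sizes used above for choosing the projection device.
def image_plane_size_mm(object_size_um, objective_mag, intermediate_mag):
    return object_size_um * objective_mag * intermediate_mag / 1000.0

length_mm = image_plane_size_mm(60, 20, 1)   # sperm length on the image plane IP
head_mm = image_plane_size_mm(3, 20, 1)      # head short side on the image plane IP
min_feature_mm = 0.1                         # smallest gap in the projected image data
emitter_pitch_mm = 0.05                      # pitch the text states is sufficient at 1x projection

print(f"sperm image on IP: {length_mm:.2f} mm x {head_mm:.2f} mm")
print(f"emitter pitch of {emitter_pitch_mm} mm or less resolves the {min_feature_mm} mm gap")
```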
In the inverted microscope100, when performing MC observation, the polarizer123and the optical element127are disposed on the illumination optical path as a first modulation element that modulates the illuminating light irradiating the sample, and the modulator104is disposed on the observation optical path as a second modulation element that modulates the transmitted light. Also, when performing PO observation, the polarizer123is disposed on the illumination optical path as a first modulation element, and the analyzer106is disposed on the observation optical path as a second modulation element. Also, when performing DIC observation, the polarizer123and the DIC prism125are disposed on the illumination optical path as a first modulation element, and the analyzer106and the DIC prism105are disposed on the observation optical path as a second modulation element. With this arrangement, an unstained sample can be visualized. The microscope controller10is a device that controls the inverted microscope100. The microscope controller10is connected to the processing device20, the input device50, and the inverted microscope100, and controls the inverted microscope100according to commands from the processing device20or the input device50. The display device30is a liquid crystal display, an organic EL (OLED) display, or a cathode ray tube (CRT) display, for example. The input device40includes a handle41and a handle42. The handle41and the handle42are operated to control the movements of micromanipulators not illustrated that move a pipette43and a pipette44. The pipette43and the pipette44are used to manipulate the sample in micro-insemination work. The pipette43is a holding pipette, for example, and the pipette44is an injection pipette, for example. The input device50is a hand switch device for changing the settings of the inverted microscope100. As illustrated inFIG.3, the input device50includes six buttons (button51to button56), for example, and by simply pressing these buttons, the user is able to quickly switch the settings of the inverted microscope100. If the user presses the button51, the settings of the inverted microscope100are switched to settings for BF observation at an observation magnification of 4× (hereinafter designated BF 4× observation). If the user presses the button52, the settings of the inverted microscope100are switched to settings for MC observation at an observation magnification of 10× (hereinafter designated MC 10× observation). If the user presses the button53, the settings of the inverted microscope100are switched to settings for MC observation at an observation magnification of 20× (hereinafter designated MC 20× observation). If the user presses the button54, the settings of the inverted microscope100are switched to settings for MC observation at an observation magnification of 40× (hereinafter designated MC 40× observation). If the user presses the button55, the settings of the inverted microscope100are switched to settings for PO observation at an observation magnification of 20× (hereinafter designated PO 20× observation). If the user presses the button56, the settings of the inverted microscope100are switched to settings for DIC observation at an observation magnification of 60× (hereinafter designated DIC 60× observation). The input device60is a keyboard. The input device70is a mouse. The input device60and the input device70are each connected to the processing device20. The identification device80is a device that acquires identification information attached to a sample. 
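The hand-switch assignments of the input device50described above amount to a mapping from buttons to observation presets; the sketch below uses hypothetical names (it is not the actual controller firmware) to illustrate that mapping.

```python
# Illustrative sketch (hypothetical names, not the actual controller firmware)
# of the hand-switch mapping described above: each button selects a microscopy
# method and observation magnification, which the microscope controller then
# applies by switching the objective, intermediate lens and modulation elements.
OBSERVATION_PRESETS = {
    "button51": {"method": "BF",  "magnification": 4},
    "button52": {"method": "MC",  "magnification": 10},
    "button53": {"method": "MC",  "magnification": 20},
    "button54": {"method": "MC",  "magnification": 40},
    "button55": {"method": "PO",  "magnification": 20},
    "button56": {"method": "DIC", "magnification": 60},
}

def on_button_press(button_id):
    preset = OBSERVATION_PRESETS[button_id]
    # A real controller would drive the revolving nosepiece, the intermediate
    # magnification change unit and the condenser turret here.
    print(f"switching to {preset['method']} {preset['magnification']}x observation")

on_button_press("button53")   # MC 20x observation
```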
Note that attaching identification information to a sample includes the case where the identification information is affixed to a container housing the sample, for example. The identification information is information that identifies the sample, and more specifically is information that specifies the patient who provided the sample. The identification device80is a barcode reader, an RFID® reader, or a QR Code® reader, for example. The processing device20is a device that controls the microscope system1overall. As illustrated inFIG.1, the processing device20is connected to the inverted microscope100, the microscope controller10, the display device30, the input device60, the input device70, and the identification device80. Additionally, the processing device20is also connected to the database server2. The processing device20generates projected image data corresponding to a projected image on the basis of at least digital image data acquired by the imaging unit140. The projected image includes an assisting image that assists with micro-insemination. Thereafter, the processing device20controls the projection device153by outputting the projected image data to the projection device153. As illustrated inFIG.4, the processing device20is provided with a camera control unit21, an analysis unit22, a projected image generation unit23, and a projection control unit24mainly as components related to the control of the projection device153. The camera control unit21acquires digital image data of the sample by controlling the imaging unit140. The digital image data acquired by the camera control unit21is outputted to the analysis unit22. The analysis unit22analyzes at least the digital image data acquired by the camera control unit21, and outputs an analysis result to the projected image generation unit23. The projected image generation unit23generates projected image data corresponding to the projected image including the assisting image that assists with micro-insemination on the basis of the analysis result generated by the analysis unit22, and outputs the generated projected image data to the projection control unit24. More specifically, for example, in the case where the user uses the microscope system1to perform ICSI, the analysis unit22may for example generate an analysis result that specifies candidate cells, that is, reproductive cells suitable for fertilization from among the reproductive cells included in the sample, on the basis of at least the digital image data. In this case, the projected image generation unit23may also generate projected image data corresponding to the projected image including an image (first assisting image) that specifies candidate cells as the assisting image. The projection control unit24controls the projection of the projected image onto the image plane by controlling the projection device153. More specifically, the projection control unit24outputs the projected image data to the projection device153, thereby causing the projection device153to project the projected image onto the image plane on the basis of the projected image data acquired from the projection control unit24. The microscope system1configured as above is capable of superimposing the projected image including the assisting image that assists with micro-insemination onto the optical image. For this reason, the user is able to obtain information necessary for micro-insemination while observing the sample. 
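The cooperation of the camera control unit21, analysis unit22, projected image generation unit23and projection control unit24can be summarized as a simple acquire–analyze–render–project loop; the Python sketch below uses hypothetical class and method names with stand-in components and is only a conceptual outline of that flow, not the disclosed software.

```python
# Conceptual sketch (hypothetical names, not the disclosed software) of the
# processing flow described above: acquire digital image data, specify
# candidate cells, build the assisting image, and project it onto the image plane.
class ProjectionAssistPipeline:
    def __init__(self, camera, analyzer, renderer, projector):
        self.camera, self.analyzer = camera, analyzer
        self.renderer, self.projector = renderer, projector

    def run_once(self):
        image = self.camera.acquire()                        # camera control unit 21
        candidates = self.analyzer.find_candidates(image)    # analysis unit 22
        projected = self.renderer.render(candidates)         # projected image generation unit 23
        self.projector.project(projected)                    # projection control unit 24
        return candidates

# Tiny stand-ins so the sketch runs end to end.
class FakeCamera:
    def acquire(self): return "digital image data"
class FakeAnalyzer:
    def find_candidates(self, image): return [{"x": 120, "y": 80}]
class FakeRenderer:
    def render(self, candidates): return [("ring", c["x"], c["y"]) for c in candidates]
class FakeProjector:
    def project(self, projected): print("projecting:", projected)

ProjectionAssistPipeline(FakeCamera(), FakeAnalyzer(), FakeRenderer(), FakeProjector()).run_once()
```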
Consequently, according to the microscope system1, it is possible to assist with micro-insemination performed by the user. This configuration makes it possible to reduce inconsistencies in fertilization success rates among embryologists performing micro-insemination, and an improvement in fertilization success rates may be expected. Furthermore, in the microscope system1, the projected image is projected onto the image plane between the eyepiece lens101and the tube lens103and superimposed onto the optical image. For this reason, the user is able to obtain various information that assists with micro-insemination while peering into the eyepiece lens101, and movement of the line of sight, such as line of sight going back and forth between a monitor and the eyepiece lens101, can be avoided compared to a case where the assisting image is displayed on a monitor or the like. Consequently, according to the microscope system1, the user is able to obtain information necessary for micro-insemination from the projected image by simply observing the sample using the optical image, without taking his or her eyes away from the eyepiece lens101. With this arrangement, the microscope system1is capable of assisting with the work of micro-insemination with the assisting image and reducing the burden on the user for micro-insemination, without changing the user's workflow. Also, the work time of the user is shortened, and as a result, the amount of time that the sample is exposed to open air under the microscope is also shortened, thereby reducing the damage received by the sample. Note that the processing device20included in the microscope system1may be a general-purpose device or a special-purpose device. The processing device20is not particularly limited in configuration, but may have a physical configuration like the one illustrated inFIG.5, for example. Specifically, the processing device20may be provided with a processor20a, a memory20b, an auxiliary storage device20c, an input/output interface20d, a medium driving device20e, and a communication control device20f, and these components may be interconnected by a bus20g. The processor20ais a processing circuit of any type, such as a central processing unit (CPU), for example. The processor20amay execute programs stored in the memory20b, the auxiliary storage device20c, and a storage medium20hto perform programmed processes, and thereby achieve the components (camera control unit21, analysis unit22, projected image generation unit23, projection control unit24) related to the control of the projection device153described above. In addition, the processor20amay also be configured using a dedicated processor such as an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA), and may also be configured using a graphics processing unit (GPU). The memory20bis a working memory for the processor20a. The memory20bis a semiconductor memory of any type, such as random access memory (RAM), for example. The auxiliary storage device20cis a non-volatile memory such as erasable programmable ROM (EPROM), a hard disk drive (HDD), or a solid-state drive (SSD). The input/output interface20dexchanges information with external devices (inverted microscope100, microscope controller10, display device30, input device60, input device70, identification device80). 
The medium driving device20eis capable of outputting data stored in the memory20band the auxiliary storage device20cto the storage medium20h, and is also capable of reading out information such as programs and data from the storage medium20h. The storage medium20his a portable recording medium of any type. For example, the storage medium20hmay be an SD card, Universal Serial Bus (USB) flash memory, a Compact Disc (CD), or a Digital Versatile Disc (DVD). The communication control device20finputs and outputs information with respect to a network. For example, a device such as a network interface card (NIC), a Wi-Fi® module, a Bluetooth® module, or a BLE module may be adopted as the communication control device20f. The bus20ginterconnects components such as the processor20a, the memory20b, and the auxiliary storage device20csuch that data can be exchanged among the components. FIG.6is a flowchart illustrating an example of an ICSI procedure.FIG.7is a diagram illustrating an example of a configuration of a drop formed as a sample200inside a Petri dish210.FIG.8is a flowchart illustrating an example of a sperm selection procedure.FIG.9is a flowchart of an image projection process performed by the microscope system1.FIG.10is a diagram for explaining an image processing method performed by an analysis unit22.FIG.11is a diagram illustrating an example of an image seen from an eyepiece lens101. Hereinafter, an ICSI procedure that the user performs using the microscope system1will be described specifically with reference toFIGS.6to11. First, the user prepares a sample (step S1). At this point, the user creates a sample200including a plurality of drops inside a Petri dish210as illustrated inFIG.7for example, and places the sample200onto the stage111. A drop201is a cleaning drop used to clean the pipettes. Drops202are sperm suspension drops in which a sperm suspension is dropped into a PVP solution, for example. Drops203are egg manipulation drops in which eggs are placed in an m-HTF solution, for example. Note that the m-HTF solution is a Hepps-containing HTF solution to which 10% serum has been added. These drops are covered with mineral oil. Next, the user sets up the microscope system1(step S2). At this point, the user presses the button51of the input device50to switch the settings of the microscope system1to BF 4× observation, for example. Thereafter, the user operates the input device40to adjust the positions of the pipette43and the pipette44, and bring the pipette43and the pipette44into focus. Furthermore, the user moves the stage111to clean the pipette43and the pipette44with the drop201(cleaning drop). When setup is completed, the user checks the growth state of the eggs (oocytes) inside the drops203(egg manipulation drops) (step S3). At this point, the user presses the button53of the input device50to switch the settings of the microscope system1to MC 20× observation, for example. The user observes the state of the eggs at MC 20× observation, and selects an egg. Additionally, the user may also press the button55of the input device50to switch the settings of the microscope system1to PO 20× observation, for example. By observing the spindles of the eggs at PO 20× observation, the user may assess the maturity of the eggs to further select an egg. When the selection of an egg is finished, the user selects a sperm according to the procedure illustrated inFIG.8(step S4). 
First, the user presses the button53of the input device50to switch the settings of the microscope system1to MC 20× observation, for example. Next, the user moves the stage111to move the observation position to the drops202(sperm suspension drops), and bring the sperm into focus at MC 20× observation (step S11). Next, the user selects sperm at MC 20× observation, and picks out favorable sperm suitable for fertilization (step S12). Whether a sperm is favorable or not is generally determined on the basis of the appearance and motility of the sperm, but definitive criteria do not exist. For this reason, the selection of sperm often depends on the experience and intuition of the embryologist acting as the user of the microscope system1, and the judgment differs depending on the embryologist. This is a factor that leads to differences in fertilization success rates among embryologists. Accordingly, the microscope system1estimates that a sperm selected by an experienced embryologist with a high fertilization success rate are favorable sperm suitable for fertilization, and notifies the user of the microscope system1about the estimated sperm as candidate cells (candidate sperm). Specifically, in step S12, the microscope system1notifies the user of candidate cells by performing the image projection process illustrated inFIG.9. First, the microscope system1projects an optical image O1of the sample onto the image plane (step S21). At the same time, in the microscope system1, the imaging unit140acquires digital image data of the sample (step S22). The digital image data acquired by the imaging unit140is outputted to the processing device20, and the analysis unit22of the processing device20generates an analysis result that specifies candidate cells (candidate sperm) on the basis of the digital image data (step S23). The analysis algorithm that specifies the candidate cells is not particularly limited, but it is desirable to reproduce selection by an experienced embryologist with a high fertilization success rate. More specifically, it is desirable for the analysis unit22to analyze sperm on the basis of at least the appearance and motility of the sperm as a reproductive cell, and thereby reproduce selection by an experienced embryologist with a high fertilization success rate. Additionally, the digital image data used for the analysis may be still image data or moving image data. However, because it is difficult to analyze the motility of sperm on the basis of still image data, as illustrated inFIG.10, the analysis unit22may first process and combine the still image data of a still image M1with an image indicating motility (an image of arrows), and thereby generate still image data of a still image M2. The image indicating motility is an image indicating a trail of movement by the sperm from the point in time going back a predetermined length of time to the current point, and may be generated on the basis of plural image data acquired within a corresponding period. Additionally, the appearance and motility of the sperm may be analyzed on the basis of the still image data of the still image M2obtained by combination with the image indicating motility, and an analysis result that specifies a candidate sperm may be generated. Note that a rule-based algorithm that reproduces selection by an experienced embryologist may be adopted by the analysis unit22. 
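The motility information that the analysis unit22combines with the still image M1can be pictured as a trail of tracked head positions over the last few frames; the sketch below uses illustrative tracking data (it is not the disclosed analysis algorithm) to build such trail segments and a simple motility score.

```python
# Minimal sketch (illustrative, not the disclosed analysis algorithm) of the
# motility information described above: from head positions tracked over the
# last few frames, build the trail segments that would be drawn as the image
# indicating motility, and a simple motility score usable when picking
# candidate sperm.
import math

def trail_segments(positions):
    """positions: list of (x, y) head positions, oldest first."""
    return list(zip(positions[:-1], positions[1:]))     # arrow segments to overlay on M1

def motility_score(positions):
    path = sum(math.dist(a, b) for a, b in trail_segments(positions))
    straight = math.dist(positions[0], positions[-1])
    return path, (straight / path if path else 0.0)     # speed proxy, linearity

tracks = {
    "sperm A": [(10, 10), (14, 11), (19, 13), (25, 14)],   # fast, fairly straight
    "sperm B": [(40, 40), (41, 39), (40, 41), (41, 40)],   # barely moving
}
for name, pts in tracks.items():
    path, linearity = motility_score(pts)
    print(f"{name}: path {path:.1f} px over trail, linearity {linearity:.2f}")
```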
Furthermore, an algorithm (model) for estimating favorable sperm may be trained to select sperm like an experienced embryologist through machine learning, and the trained model may be adopted by the analysis unit22. Note that the machine learning may be traditional machine learning in which features necessary for estimation are given in advance by humans, or deep learning in which features are extracted by the machine itself. When the analysis result is generated, the projected image generation unit23of the processing device20generates projected image data corresponding to a projected image P1including an assisting image A1that specifies each candidate cell based on the analysis result (step S24), and outputs the generated projected image data to the projection device153. Thereafter, the projection device153projects the projected image P1onto the image plane on the basis of the projected image data (step S25). With this configuration, an image V1in which the projected image P1including the assisting image(s) A1is superimposed onto an optical image O1is formed in the image plane, as illustrated inFIG.11for example. Each assisting image A1illustrated inFIG.11is an image that surrounds an image of a candidate cell. The projected image P1includes the assisting image A1at a position that does not overlap with the image of each candidate cell when projected onto the image plane. With this arrangement, the microscope system1can notify the user of candidate cells without interfering with the observation of the candidate cells. By causing the image V1in which the projected image P1is superimposed onto the optical image O1to be formed in the image plane, in step S12, the user can select sperm while paying attention to the candidate cells (candidate sperm) specified by the assisting images A1, and pick out favorable sperm. Consequently, the sperm selection work becomes easy, and the burden imposed by the selection work is reduced substantially. When favorable sperm are picked out, the user damages the tail of each favorable sperm at RC 20× observation to immobilize the favorable sperm (step S13). At this point, the user immobilizes the favorable sperm by abrading the tail of the favorable sperm against the floor of the Petri dish210with a pipette. Thereafter, the user observes the appearance of the immobilized favorable sperm in further detail, and further selects favorable sperm (step S14). At this point, the user presses the button54of the input device50to switch the settings of the microscope system1to MC 40× observation, for example. Subsequently, the user picks out favorable sperm at MC 40× observation. Note that in step S14, like step S12, the microscope system1may estimate favorable sperm that an experienced embryologist with a high fertilization success rate would select, and notify the user of the microscope system1about the estimated favorable sperm as candidate cells (candidate sperm). However, because the sperm is immobilized, step S14differs from step S12in that the analysis unit22analyzes the sperm on the basis of at least the appearance of the sperm. When the selection of favorable sperm at MC 40× observation is completed, the user further observes the heads of the favorable sperm in detail, and further selects favorable sperm according to the size of the blank existing in the head (step S15). At this point, the user presses the button56of the input device50to switch the settings of the microscope system1to DIC 60× observation, for example. 
Thereafter, the user picks out the favorable sperm having a small blank at DIC 60× observation. Note that step S15may also be performed under MC 40× observation. In this case, the user picks out favorable sperm by recognizing a bright spot in the head as a blank. Subsequently, the user draws up the chosen favorable sperm into the pipette44acting as the injection pipette, moves the observation position to one of the drops203(egg manipulation drops) (step S16), and ends the series of steps in the sperm selection illustrated inFIG.8. When sperm selection is completed, the user checks the position of the spindle to prepare for injection of favorable sperm (step S5). At this point, the user observes the egg chosen in step S3existing inside one of the drops203, and checks the position of the spindle of the egg. Specifically, the user presses the button55of the input device50to switch the settings of the microscope system1to PO 20× observation, for example. Thereafter, the user reorients the spindle by manipulating the pipette43acting as the holding pipette, such that the spindle of the egg visualized at PO 20× observation is positioned in the 12 o'clock or the 6 o'clock direction. This is to avoid damaging the spindle with the pipette that is thrust into the egg from the 3 o'clock or 9 o'clock direction in step S6described later. Finally, the user injects the sperm into the egg (step S6), and ends ICSI. At this point, the user presses the button53of the input device50to switch the settings of the microscope system1to MC 20× observation, for example. Thereafter, the user holds the egg in place with the pipette43acting as the holding pipette in the direction adjusted in step S5at MC 20× observation, and thrusts the pipette44acting as the injection pipette. Subsequently, the favorable sperm is injected into the egg from the pipette44. When the series of ICSI steps illustrated inFIG.6ends, the user returns the egg containing the injected sperm to an incubator for cultivation. Additionally, the user may also operate the processing device20using the input device60and the input device70to save information obtained by ICSI in the database server2. For example, patient information about the sperm and the egg (such as clinical data about the mother and examination results regarding the semen containing the sperm), and data about the culture fluid of the sperm and the egg (such as the type, concentration, and pH, for example) may be associated with information such as image data of the egg containing the injected sperm, image data of the favorable sperm picked out, and the ICSI work time, and saved in the database server2. This information may also be used in the analysis by the analysis unit22used in steps S12and S14ofFIG.8. In other words, the processing device20may generate projected image data corresponding to a projected image including an assisting image on the basis of digital image data as well as other data saved in the database server2. In this way, by synthesizing a variety of information not solely limited to image data to estimate favorable sperm, the achievement of even higher fertilization success rates may be expected. As above, in ICSI with the microscope system1, a projected image including an assisting image that specifies candidate sperm is projected onto the image plane. The size of sperm is approximately 60 μm, and an objective with a magnification of at least 20× is used to distinguish favorable sperm. 
Because the field number of an inverted microscope is generally approximately 22, the actual field of view is approximately φ1 mm. It is extremely difficult to perform the work of selecting freely-moving sperm inside a region with an actual field of view of φ1 mm. Generally, because the sperm estimated to be favorable sperm have high motility and the ICSI work needs to be performed in a short time, the sperm selection work requires the user to observe the appearance of relatively fast-moving sperm quickly and judge whether the sperm is favorable or not. In a work environment where such tough constraints are imposed, superimposing an assisting image that specifies a candidate sperm estimated as a favorable sperm onto an optical image greatly contributes to reducing the burden of the sperm selection work. Moreover, by utilizing the knowledge of experienced embryologists in the analysis for specifying candidate sperm, and incorporating such knowledge as an analysis algorithm, improved fertilization success rates can be achieved while at the same time also reducing inconsistencies in fertilization success rates among embryologists. Consequently, according to the microscope system1, it is possible to assist with sperm selection by the user effectively. FIGS.12to15are diagrams illustrating other examples of images seen from the eyepiece lens101. In step S12, the microscope system1may superimpose any of the projected images P2to P5illustrated inFIGS.12to15instead of the projected image P1illustrated inFIG.11onto the optical image O1. An image V2illustrated inFIG.12is obtained by superimposing a projected image P2onto the optical image O1.FIG.11illustrates an example in which the projected image P1includes the assisting image A1having a shape that surrounds each image of a candidate sperm, but the projected image may also include other images. The projected image P2includes an assisting image A2indicating the trail of movement of each candidate sperm in addition to the assisting image A1that specifies each candidate sperm. The assisting image A2expresses the motility of each candidate sperm with the trail of movement. By projecting the projected image P2illustrated inFIG.12onto the image plane, sperm selection by the user is made even easier. Note that, like the assisting image A1, to avoid interfering with the observation of the candidate sperm, it is desirable for the assisting image A2to be included at a position that does not overlap with the image of each candidate sperm in the projected image P2. An image V3illustrated inFIG.13is obtained by superimposing a projected image P3onto the optical image O1.FIG.11illustrates an example of specifying candidate sperm with a single type of image (assisting image A1), but candidate sperm may also be specified with multiple types of images. The projected image P3includes two types of images (assisting image A1and assisting image A3) that specify candidate sperm. The assisting image A3is an image that specifies candidate sperm having a lower degree of recommendation compared to the assisting image A1, and the color of the assisting image A3(light blue, for example) is different from the color of the assisting image A1(dark blue, for example). In other words, the assisting image A1and the assisting image A3are respectively colored according to the degree of recommendation of the candidate sperm specified by the assisting image. 
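As a rough illustration of how such projected image data might be rendered, the sketch below outlines each candidate with a colour chosen by its degree of recommendation and pads the outline outward so that it does not overlap the image of the candidate cell itself, in the spirit of the assisting images A1 and A3. The colour values, the margin, and the data layout of the analysis result are assumptions made for the example and are not taken from the patent.

```python
# Hypothetical sketch: render projected image data with one assisting image per
# candidate cell, coloured by degree of recommendation and padded outward so the
# outline does not cover the cell image itself. Black pixels are treated as
# "nothing projected". Grade names and colours are illustrative only.
from typing import Dict, List, Tuple
import numpy as np
import cv2

GRADE_COLORS_BGR: Dict[str, Tuple[int, int, int]] = {
    "high": (255, 0, 0),     # e.g. a dark blue outline for strongly recommended candidates
    "low": (255, 200, 120),  # e.g. a lighter blue outline for weaker candidates
}


def render_projected_image(
    height: int,
    width: int,
    candidates: List[Tuple[Tuple[int, int, int, int], str]],  # ((x, y, w, h), grade)
    margin: int = 8,
) -> np.ndarray:
    """Return an overlay image for the projection device (assumed pixel-mapped)."""
    overlay = np.zeros((height, width, 3), dtype=np.uint8)
    for (x, y, w, h), grade in candidates:
        top_left = (max(x - margin, 0), max(y - margin, 0))
        bottom_right = (min(x + w + margin, width - 1), min(y + h + margin, height - 1))
        cv2.rectangle(overlay, top_left, bottom_right,
                      color=GRADE_COLORS_BGR.get(grade, (255, 255, 255)), thickness=2)
    return overlay
```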
By projecting the projected image P3illustrated inFIG.13onto the image plane, the user is able to grasp which candidate sperm should be prioritized for further scrutiny, making the sperm selection work even easier to perform. Furthermore, the degree of recommendation of sperm may be absolute or relative. In actuality, some patients may only have suboptimal sperm overall, and in such cases, a relatively healthy sperm is selected from among the limited choices. In this case, even if the degree of recommendation is absolute, if the system is set to project multiple images indicating multiple types of degrees of recommendation, then at least an image indicating a relatively low degree of recommendation will be projected. Expressed in terms of the above example, the assisting image A3with a light blue color will be projected even if the assisting image A1with a dark blue color is not projected. Consequently, the possibility where no assisting images are projected at all can be greatly reduced. Note that it is sufficient to project multiple types of images indicating different degrees of recommendation, and three or more types of images indicating different degrees of recommendation may also be projected. Furthermore, the images are not limited to indicating high degrees of recommendation, and images indicating particularly low degrees of recommendation may also be projected. An image V4illustrated inFIG.14is obtained by superimposing a projected image P4onto the optical image O1.FIG.13illustrates an example of an assisting image colored according to the degree of recommendation of candidate sperm, but it is sufficient for an assisting image to have a different appearance depending on the degree of recommendation of the candidate sperm specified by the assisting image. The projected image P4includes four types of images (assisting image A1, assisting image A4, assisting image A5, and assisting image A6) that specify candidate sperm. These assisting images have different line styles or shapes from each other, and express degrees of recommendation of candidate sperm according to the differences in the line styles or shapes. Like the case of projecting the projected image P3illustrated inFIG.13, by projecting the projected image P4illustrated inFIG.14onto the image plane, the user is able to grasp which candidate sperm should be prioritized for further scrutiny, making the sperm selection work even easier to perform. An image V5illustrated inFIG.15is obtained by superimposing a projected image P5onto the optical image O1.FIG.11illustrates an example in which the projected image P1includes the assisting image A1having a shape that surrounds each image of a candidate sperm, but it is sufficient for the projected image to include an image that specifies a candidate sperm. The projected image P5illustrated inFIG.15includes an assisting image A7having a shape that points out an image of a candidate sperm. Like the case of projecting the projected image P1illustrated inFIG.11, by projecting the projected image P5onto the image plane, the user can easily grasp the candidate sperm, and the burden of the sperm selection work can be greatly reduced. FIG.16is a diagram illustrating a configuration of a neural network.FIG.17is a flowchart illustrating an example of a training procedure.FIG.18is a diagram for explaining a method of applying labels to teaching images. As described above, the analysis unit22may adopt a model trained by machine learning or a neural network trained by deep learning, for example. 
In other words, the analysis unit22may use a trained neural network to at least analyze digital image data. Hereinafter, a procedure for training the neural network NN illustrated inFIG.16to recognize favorable sperm will be described with reference toFIGS.16to18. First, the microscope system1records the work of selecting sperm performed under MC 20× observation as a moving image or a still image (step S31). At this point, during the sperm selection work, the imaging unit140acquires image data, and the processing device20saves the image data. Next, the microscope system1extracts images of sperm portions from the recorded image, and arranges the extracted images for display (step S32). At this point, the processing device20reads out the moving image data or still image data saved in step S31, extracts images of sperm portions from the moving image or still image as teaching images, and arranges the teaching images for display on the display device30. The teaching images arranged for display are evaluated by an experienced embryologist with a high fertilization success rate. As illustrated inFIG.18, after each of the teaching images has been evaluated by an embryologist, the microscope system1labels the teaching images on the basis of the evaluations by an experienced embryologist (step S33). At this point, the evaluation results (labels) provided by the experienced embryologist are saved in association with the teaching images. Hereinafter, data combining the teaching images and the labels will be referred to as teaching data. Note that in the example ofFIG.18, teaching images (T1, T10, T14, . . . ) that are clicked while a button B1is selected in a window W1are saved in association with a Grade A label. Also, teaching images (T2, T3, T6, T8, T9, T11, T15, . . . ) that are clicked while a button B2is selected are saved in association with a Grade B label. Also, teaching images (T4, T5, T13, T16, . . . ) that are clicked while a button B3is selected are saved in association with a Grade C label. Also, teaching images (T7, T12, . . . ) that are clicked while a button B4is selected are saved in association with a Grade D label. Note that Grades A, B, C, and D indicate successively lower degrees of recommendation in the above order. When the teaching data is created by step S33, the microscope system1uses a large amount of created teaching data to train a neural network (step S34). Thereafter, the microscope system1performs processes similar to steps S31to S33for selection work under MC 40× observation to train the neural network (step S35). With this arrangement, the microscope system1obtains a trained neural network. In other words, the trained neural network of the microscope system1is a neural network that has been trained using image data corresponding to images of sperm labeled as suitable or unsuitable for fertilization as the teaching data. Finally, the microscope system1verifies the trained neural network (step S36). At this point, the microscope system1verifies whether or not the neural network recognizes favorable sperm appropriately with respect to different sperm than the training stage, for example. If the verification result confirms that favorable sperm is recognized appropriately, the trained neural network obtained in step S35is adopted by the analysis unit22. 
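As one hedged illustration of step S34, a small image classifier could be trained on the Grade A to Grade D teaching data roughly as follows. The network architecture, input size, and hyper-parameters below are arbitrary assumptions chosen to keep the sketch short; they are not the neural network NN of FIG. 16, and `teaching_set` is assumed to yield image tensors paired with integer grade labels.

```python
# Hypothetical sketch: train a small classifier on labelled teaching images
# (Grades A-D mapped to integer labels 0-3). Architecture and hyper-parameters
# are illustrative assumptions; teaching images are assumed to be 3x64x64
# float tensors.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset


class GradeClassifier(nn.Module):
    def __init__(self, num_grades: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, num_grades),  # one logit per grade (A, B, C, D)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))


def train(teaching_set: Dataset, epochs: int = 10) -> GradeClassifier:
    """Fit the classifier to the teaching data with a plain cross-entropy loop."""
    model = GradeClassifier()
    loader = DataLoader(teaching_set, batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, grades in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), grades)
            loss.backward()
            optimizer.step()
    return model
```

At inference time the same model would score crops of detected sperm, with the predicted grade determining which assisting image, if any, is projected for that candidate.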
As above, by generating teaching data and training a neural network according to the procedure illustrated inFIG.17, an analysis algorithm for sperm selection utilizing the knowledge of an experienced embryologist can be constructed easily. Consequently, for example, neural networks may be trained in units of hospitals, or further trained in units of hospitals, and a different model for each hospital may be adopted in the analysis unit22. This arrangement makes it possible to easily accommodate favorable sperm selection conforming to the guidelines of each hospital. Note that althoughFIG.17illustrates an example of using the microscope system1to generate the teaching data and train the neural network, the generation of the teaching data and the training of the neural network may also be performed by a different system from the microscope system1, and a trained neural network that has been constructed on another system may be applied to the microscope system1. FIG.19is a diagram for explaining a method of creating teaching data.FIG.18illustrates an example in which the microscope system1labels data by having an embryologist evaluate teaching images displayed on the display device30, but an embryologist may also label images seen using the eyepiece lens101. For example, when an experienced embryologist is observing sperm using the eyepiece lens101under MC 20× observation, the processing device20generates pointer image data corresponding to a pointer image PP that points out a position corresponding to a mouse movement operation (first input operation) performed by the embryologist, and the projection device153projects the pointer image PP onto the image plane on the basis of the pointer image data, as illustrated inFIG.19. An image V6illustrated inFIG.19is obtained by superimposing a projected image P6onto the optical image O1. The projected image P6includes the pointer image PP that points out the position corresponding to a mouse movement operation. Thereafter, when a mouse click operation (second input operation) by the embryologist is detected, the processing device20specifies the sperm selected by the embryologist on the basis of the position of the pointer image PP when the mouse click operation is detected. Subsequently, an image T1of the specified sperm is recorded as a teaching image. Note that at this time, the image T1may also be labeled according to the content of the second input operation. For example, the image may be labeled as Grade A if the mouse click operation is a left click, as Grade B if the mouse click operation is a left double-click, or as Grade C if the mouse click operation is a right click. With this arrangement, a teaching image can be acquired and labeled at the same time to generate teaching data. The image quality of images displayed on the display device30is degraded compared to the image quality of images observed using the eyepiece lens101, and therefore it is difficult to distinguish subtle individual differences between sperm from images displayed on the display device30. In contrast, as illustrated inFIG.19, by generating teaching data while the embryologist observes sperm using the eyepiece lens101, sperm can be selected and teaching data can be created while recognizing subtle individual differences between sperm under the same environment as the ICSI work. Consequently, the knowledge of an experienced embryologist with a high fertilization success rate can be converted into teaching data more correctly. 
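A minimal sketch of turning such a click into one labelled teaching sample is given below. The event names, the fixed crop size, and the grade mapping (left click, left double-click, right click) are written as assumptions that mirror the example above rather than as a defined interface of the processing device 20.

```python
# Hypothetical sketch: convert a click on the pointer image into one labelled
# teaching sample, following the mapping described above (left click = Grade A,
# left double-click = Grade B, right click = Grade C). Event names and the
# fixed crop size are illustrative assumptions.
from typing import Tuple
import numpy as np

CLICK_TO_GRADE = {"left": "A", "double_left": "B", "right": "C"}


def make_teaching_sample(
    frame: np.ndarray, pointer_xy: Tuple[int, int], click: str, half_size: int = 32
) -> Tuple[np.ndarray, str]:
    """Crop the sperm image under the pointer and pair it with its grade label."""
    x, y = pointer_xy
    y0, y1 = max(y - half_size, 0), min(y + half_size, frame.shape[0])
    x0, x1 = max(x - half_size, 0), min(x + half_size, frame.shape[1])
    return frame[y0:y1, x0:x1].copy(), CLICK_TO_GRADE[click]
```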
FIGS.20and21are diagrams illustrating still other examples of images seen from the eyepiece lens101. The above illustrates an example in which the projected image includes an assisting image that specifies candidate sperm, but in addition to the assisting image that specifies candidate sperm, the projected image may also include another assisting image that assists with micro-insemination. An image V7illustrated inFIG.20is obtained by superimposing a projected image P7onto the optical image O1. The projected image P7includes an assisting image A9indicating information about the patient (one example of a seventh assisting image) in addition to the assisting image A1that specifies each candidate sperm. In the microscope system1, the identification device80acquires identification information attached to the sample. The processing device20acquires information about the patient providing the sample, on the basis of the identification information acquired by the identification device80. Specifically, for example, the processing device20acquires information about the patient providing the sample by extracting information about the patient corresponding to the identification information from the database server2. Note that the information about the patient includes information such as the name of the patient and an ID, for example. Furthermore, the processing device20generates projected image data corresponding to the projected image P7including the assisting image A1and the assisting image A9on the basis of at least the digital image data acquired by the imaging unit140and the information about the patient. Finally, the projection device153projects the projected image P7onto the image plane on the basis of the projected image data, thereby causing the image V7to be formed in the image plane. As illustrated inFIG.20, by projecting the assisting image A9indicating information about the patient onto the image plane, the user can perform ICSI while continually confirming the patient acting as the sperm donor. An image V8illustrated inFIG.21is obtained by superimposing a projected image P8onto the optical image O1. The projected image P8includes an assisting image A10indicating the elapsed time since the processing device20detected a predetermined operation (one example of an eighth assisting image) in addition to the assisting image A1that specifies each candidate sperm. The predetermined operation is an operation of placing a sample on the stage111, for example. In the microscope system1, the processing device20acquires the elapsed time since a sample was placed on the stage111. Furthermore, the processing device20generates projected image data corresponding to the projected image P8including the assisting image A1and the assisting image A10on the basis of at least the digital image data acquired by the imaging unit140and the elapsed time. Finally, the projection device153projects the projected image P8onto the image plane on the basis of the projected image data, thereby causing the image V8to be formed in the image plane. As illustrated inFIG.21, by projecting the assisting image A10indicating the elapsed time onto the image plane, the user can perform ICSI while confirming the elapsed time. Second Embodiment FIG.22is a flowchart illustrating another example of a sperm selection procedure.FIG.23is a diagram illustrating yet another example of an image seen from the eyepiece lens101. 
The configuration of the microscope system according to the present embodiment is similar to the configuration of the microscope system1, and therefore components of the microscope system according to the present embodiment will be referenced by the same signs as the components of the microscope system1. The present embodiment differs from the first embodiment in that the sperm selection work in ICSI is performed according to the procedure illustrated inFIG.22instead of the procedure illustrated inFIG.8. Specifically, first, the user presses the button52of the input device50to switch the settings of the microscope system to MC 10× observation, for example. Next, the user moves the stage111to move the observation position to the drops202(sperm suspension drops), and bring the drops202into focus at MC 10× observation (step S41). Next, the user observes the drops202at MC 10× observation, and moves the stage111to move the observation position to a region where favorable sperm are expected to exist. At this point, the microscope system estimates a region where favorable sperm are expected to exist, and assists with the work by the user by notifying the user about the estimated region as a candidate region. An image V9illustrated inFIG.23is an optical image O2at MC 10× observation. As illustrated by the image V9, at MC 10× observation, the detailed appearance of the sperm inside one of the drops202cannot be confirmed, but the existence of sperm can be confirmed. Accordingly, in step S42, first, the analysis unit22divides the sample into a plurality of regions on the basis of the digital image data, treats the region in which the amount of movement by sperm is greater than the amount of movement by sperm inside other regions as a candidate region, and generates an analysis result (second analysis result) that specifies the candidate region. In addition, on the basis of the analysis result generated by the analysis unit22, the projected image generation unit23generates projected image data corresponding to a projected image including an assisting image (second assisting image) that specifies the candidate region. Finally, the projection device153notifies the user of the candidate region by projecting the projected image onto the image plane on the basis of the projected image data. An image V10illustrated inFIG.23is obtained by superimposing a projected image P10onto the optical image O2. The projected image P10includes an assisting image A11that specifies each candidate region. Additionally, the projected image P10also includes an assisting image A12that specifies a region where the amount of movement by sperm is small. By causing the image V10in which the projected image P10is superimposed onto the optical image O2to be formed in the image plane, in step S42, the user can specify a region where favorable sperm are expected to exist by referencing the assisting image A11, and move the observation position to the specified region. Consequently, it is possible to avoid wasting time due to moving the observation position to regions where favorable sperm do not exist. Thereafter, the user can select sperm by performing work according to the procedure from step S43to step S47. Note that the procedure from step S43to step S47is similar to the procedure from step S12to step S16illustrated inFIG.8. 
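One way to approximate the candidate-region analysis of step S42 is sketched below: the low-magnification field of view is divided into a grid and each cell is scored by the amount of change between two consecutive frames, so that cells with more sperm movement rank higher. Simple frame differencing and the 4-by-4 grid are illustrative assumptions, not the analysis unit 22's algorithm.

```python
# Hypothetical sketch: divide the low-magnification field of view into a grid
# and rank the cells by the amount of motion between consecutive frames, as a
# stand-in for the candidate-region analysis of step S42. The grid size and the
# use of frame differencing are illustrative assumptions.
from typing import List, Tuple
import numpy as np


def rank_regions_by_motion(
    prev_frame: np.ndarray, curr_frame: np.ndarray, grid: Tuple[int, int] = (4, 4)
) -> List[Tuple[float, Tuple[int, int, int, int]]]:
    """Return (motion_score, (x, y, w, h)) per grid cell, highest score first."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    h, w = diff.shape[:2]
    cell_h, cell_w = h // grid[0], w // grid[1]
    scored = []
    for row in range(grid[0]):
        for col in range(grid[1]):
            y, x = row * cell_h, col * cell_w
            score = float(diff[y:y + cell_h, x:x + cell_w].mean())
            scored.append((score, (x, y, cell_w, cell_h)))
    return sorted(scored, key=lambda item: item[0], reverse=True)
```

The highest-scoring cells would then be notified as candidate regions, while the lowest-scoring cells correspond to regions such as the one marked by the assisting image A12.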
As above, in the microscope system according to the present embodiment in which sperm selection is performed according to the procedure illustrated inFIG.22, an assisting image that specifies candidate sperm estimated to be favorable sperm is likewise superimposed onto an optical image, thereby making it possible to reduce the burden of the sperm selection work and assist with micro-insemination, similarly to the microscope system1. Furthermore, according to the microscope system according to the present embodiment, it is possible to avoid moving the observation position to regions where favorable sperm do not exist. Consequently, it is possible to avoid a situation of repeatedly moving the stage111to search for favorable sperm. Note that although the present embodiment illustrates an example of capturing an assisting image that specifies one or more candidate regions at MC 10× observation and projecting an assisting image that specifies candidate sperm at MC 20× observation, these magnifications are merely an example. It is sufficient if the assisting image that specifies one or more candidate regions is captured at a magnification lower than a predetermined magnification factor, and the assisting image that specifies the candidate sperm at a magnification equal to or higher than the predetermined magnification. For example, when an objective having a magnification equal to or higher than a predetermined magnification in combination with the tube lens103is disposed on the optical path by the revolving nosepiece112, the analysis unit22may generate an analysis result that specifies candidate cells, and on the basis of the analysis result, the projected image generation unit23may generate projected image data corresponding to a projected image including an assisting image that specifies the candidate cells. Furthermore, when an objective having a magnification lower than a predetermined magnification in combination with the tube lens103is disposed on the optical path by the revolving nosepiece112, the analysis unit22may generate an analysis result that specifies a candidate region, and on the basis of the analysis result, the projected image generation unit23may generate the projected image data corresponding to a projected image including an assisting image that specifies the candidate region. Third Embodiment FIG.24is a diagram illustrating yet another example of an image seen from the eyepiece lens101. The configuration of the microscope system according to the present embodiment is similar to the configuration of the microscope system1, and therefore components of the microscope system according to the present embodiment will be referenced by the same signs as the components of the microscope system1. In the microscope system1, an example of performing ICSI using the microscope system is illustrated, but the microscope system according to the present embodiment differs from the microscope system1according to the first embodiment in that testicular sperm extraction (TESE) is used. An image V11illustrated inFIG.24is obtained by superimposing a projected image P11onto an optical image O3. The optical image O3is an image of seminiferous tubules inside the testicles, extracted by making an incision in the scrotum. The optical image O3includes images of various tissues, including red blood cells and white blood cells. The projected image P11includes an assisting image (fourth assisting image) that specifies reproductive cells, namely sperm. 
In the microscope system according to the present embodiment, the analysis unit22generates an analysis result that specifies sperm included in the sample on the basis of at least digital image data. Also, on the basis of the analysis result generated by the analysis unit22, the projected image generation unit23generates projected image data including an assisting image that specifies each sperm as an assisting image. Furthermore, the projection device153projects the projected image onto the image plane on the basis of the projected image data. With this arrangement, as illustrated inFIG.24, the projected image P11including the assisting image A13is superimposed onto the optical image O3. Consequently, according to the microscope system according to the present embodiment, sperm mixed in among a variety of tissues can be specified easily in TESE. Consequently, it is possible to greatly reduce the burden of the sperm searching work and assist with micro-insemination, similarly to the microscope system1. Fourth Embodiment FIG.25is a flowchart illustrating an example of a procedure for preimplantation diagnosis.FIG.26is a diagram illustrating yet another example of an image seen from the eyepiece lens101. The configuration of the microscope system according to the present embodiment is similar to the configuration of the microscope system1, and therefore components of the microscope system according to the present embodiment will be referenced by the same signs as the components of the microscope system1. In the microscope system1, an example of performing ICSI using the microscope system is illustrated, but the microscope system according to the present embodiment differs from the microscope system1according to the first embodiment by being used for laser-assisted hatching for assisting with the implantation of an embryo (blastocyst) developed from a fertilized egg and also for the extraction of trophectoderm cells for preimplantation diagnosis. Note that in this example, the sample includes an embryo developed from a fertilized egg and the zona pellucida surrounding the embryo. Specifically, first, the user presses the button53or the button54of the input device50to switch the settings of the microscope system to MC 20× observation or MC 40× observation, for example. Additionally, the user moves the stage111to bring the zona pellucida surrounding the embryo into focus (step S51). Next, the user observes the zona pellucida, and decides a position for laser irradiation by the laser-assisted hatching unit130(step S52). In the case where the zona pellucida has a qualitative abnormality, such as being thick or hard, the embryo will be unable to pierce the zona pellucida and become implanted in the endometrium. To avoid such situations, laser-assisted hatching removes the zona pellucida to assist with implantation. In step S52, it is necessary to decide the position to be irradiated with laser light appropriately to remove the zona pellucida without injuring the embryo. Accordingly, in step S52, the microscope system calculates an appropriate irradiation position by image analysis and notifies the user. Specifically, the analysis unit22generates an analysis result that specifies a candidate spot suitable for irradiation with laser light in the zona pellucida, on the basis of at least digital image data acquired by the imaging unit140. 
In addition, on the basis of the analysis result generated by the analysis unit22, the projected image generation unit23generates projected image data corresponding to a projected image including an assisting image (fifth assisting image) that specifies the candidate spot as an assisting image that generates projected image data. Furthermore, the projection device153projects the projected image onto the image plane on the basis of the projected image data generated by the projected image generation unit23, and superimposes the projected image onto an optical image of the sample. An image V12illustrated inFIG.26is obtained by superimposing a projected image P12onto an optical image O4. The optical image O4includes an image of an embryo (inner cell mass O41, blastocoel O42, and trophectoderm O43) and an image of a zona pellucida O44surrounding the embryo. The projected image P12includes an assisting image A14that specifies a candidate spot suitable for irradiation with laser light. By causing the image V12in which the projected image P12is superimposed onto the optical image O4to be formed in the image plane, in step S52, the user can refer to the position of the assisting image A14to decide and set the position for laser irradiation in the laser-assisted hatching unit130. Consequently, an appropriate position for laser irradiation can be set easily. When the position for laser irradiation is decided, the user uses the laser-assisted hatching unit130to irradiate the position decided step S52in the zona pellucida with laser light and create an aperture in the zona pellucida (step S53). An image V13illustrated inFIG.26is an optical image O5of the sample after irradiation with laser light, and illustrates a state after an aperture AP has been formed in the zona pellucida O44by irradiation with laser light. Thereafter, the user observes the embryo and confirms the position of the trophectoderm (step S54). At this point, the microscope system specifies the position of the trophectoderm O43by image analysis and notifies the user. Specifically, the analysis unit22generates an analysis result that specifies the trophectoderm O43inside the embryo on the basis of at least digital image data acquired by the imaging unit140. In addition, on the basis of the analysis result generated by the analysis unit22, the projected image generation unit23generates projected image data corresponding to a projected image including an assisting image (sixth assisting image) that specifies the trophectoderm as an assisting image. Furthermore, the projection device153projects the projected image onto the image plane on the basis of the projected image data generated by the projected image generation unit23, and superimposes the projected image onto an optical image of the sample. An image V14illustrated inFIG.26is obtained by superimposing a projected image P14onto an optical image O5. The projected image P14includes an assisting image A15that specifies the trophectoderm O43. By causing the image V14in which the projected image P14is superimposed onto the optical image O5to be formed in the image plane, in step S54, the user can easily confirm the position of the trophectoderm with the assisting image A15. Thereafter, the user inserts a pipette into the aperture AP and extracts the trophectoderm O43(step S55). At this point, negative pressure is applied to the inserted pipette to suction the trophectoderm O43at the position confirmed in step S54. 
Because the trophectoderm is highly viscous, the trophectoderm protrudes out from the embryo after pulling the pipette out from the aperture AP. For this reason, the user uses the laser-assisted hatching unit130again to sever the trophectoderm protruding out by irradiating the space between the pipette and the embryo with laser light (step S56). Thereafter, the user inspects the extracted trophectoderm inside the pipette (step S57). At this point, several cells of the extracted trophectoderm are used to make a preimplantation diagnosis. As above, in the microscope system according to the present embodiment in which laser-assisted hatching and trophectoderm extraction are performed according to the procedure illustrated inFIG.25, it is likewise possible to assist with the work by the embryologist for micro-insemination. Consequently, it is possible to assist with micro-insemination similarly to the microscope system according to the embodiments described above. Note that like the other embodiments, the analysis unit22according to the present embodiment may also adopt a rule-based algorithm or a trained model constructed by machine learning. The embodiments described above illustrate specific examples for facilitating the understanding of the invention, and embodiments of the present invention are not limited thereto. Various modifications and alterations of a microscope system are possible without departing from the scope of the claims. For example,FIG.12illustrates an example of projecting the assisting image A1that specifies each candidate sperm together with the assisting image A2that indicates the trail of movement by each candidate sperm, but it is also possible to superimpose only an assisting image (third assisting image) that indicates the trail of movement by each candidate sperm onto the optical image. Also, the analysis unit22may specify a trail of movement by a reproductive cell included in the sample on the basis of digital image data, and on the basis of the analysis result, the projected image generation unit23may generate projected image data corresponding to a projected image including an assisting image that indicates the trail of movement by the reproductive cell as an assisting image. In other words, in addition to the assisting image that indicates the trail of movement by each candidate sperm, an assisting image that indicates a trail of movement by sperm other than the candidate sperm may also be projected. Additionally, the embodiments described above illustrate an example of a microscope system that observes a sample according to the four microscopy methods of bright field (BF) observation, polarized (PO) observation, differential interference contrast (DIC) observation, and modulation contrast (MC) observation, but the sample may also be observed according to another microscopy method such as phase-contrast (PC) observation in addition to the above. In the case where the microscope system performs phase-contrast observation, a phase-contrast objective is included. FIG.27is a diagram illustrating an example of a configuration of an inverted microscope300. The microscope system1may include the inverted microscope300instead of the inverted microscope100. The inverted microscope300differs from the inverted microscope100in that an imaging unit144is included instead of the imaging unit140, and the tube lens103is positioned between the imaging unit144and the eyepiece lens101. 
Note that the imaging unit144includes a lens145for condensing light incident without passing through the tube lens103onto the imaging element143. Even in the case of including the inverted microscope300, the microscope system1is capable of obtaining effects similar to the case of including the inverted microscope100. FIG.28is a diagram illustrating an example of a configuration of an inverted microscope400. The microscope system1may include the inverted microscope400instead of the inverted microscope100. The inverted microscope400differs from the inverted microscope100in that the imaging unit144is included instead of the imaging unit140, a projection unit154is included instead of the projection unit150, and the tube lens103is positioned between the projection unit154and the eyepiece lens101. Note that the imaging unit144includes a lens145for condensing light incident without passing through the tube lens103onto the imaging element143. The projection unit154includes a lens155having a different focal length than the lens152, so as to condense light onto the image plane IP through the tube lens103. Even in the case of including the inverted microscope400, the microscope system1is capable of obtaining effects similar to the case of including the inverted microscope100. | 73,724 |
11861922 | Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present invention. The apparatus and method components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein. DETAILED DESCRIPTION Examples disclosed herein are directed to a computing device for extracting target data from a source document. The computing device includes: a memory storing target data extraction rules; a processor connected with the memory, the processor configured to: obtain text recognition data extracted from an image of the source document, the text recognition data indicating locations of text structures in the source document; define text lines based on the text recognition data; identify a reference string from the text recognition data; select a subset of the text lines based on a location of the reference string and the target data extraction rules; and output the subset of the text lines as the target data. Additional examples disclosed herein are directed to a method for extracting target data from a source document. The method comprises: storing target data extraction rules; obtaining text recognition data extracted from an image of the source document, the text recognition data indicating locations of text structures in the source document; defining text lines based on the text recognition data; identifying a reference string from the text recognition data; selecting a subset of the text lines based on a location of the reference string and the target data extraction rules; and outputting the subset of the text lines as the target data. FIG.1depicts a data extraction system100in accordance with the teachings of this disclosure. The system100includes a server101in communication with a computing device104(also referred to herein as simply the device104) via a communication link107, illustrated in the present example as including wireless links. For example, the link107may be provided by a wireless local area network (WLAN) deployed by one or more access points (not shown). In other examples, the server101is located remotely from the device104and the link107may therefore include one or more wide-area networks such as the Internet, mobile networks, and the like. The system100is deployed to extract target data from a source document, such as a label110, for example on a package112. The system100may be configured to extract, from the label110, address data114indicating a destination of the package112. In other examples, the system100may extract other target data from the label110, such as a recipient name, a cargo type, or other shipping data. More generally, the system100is deployed to extract target data from a source document, wherein the target data has a predictable spatial text pattern relative to a well-defined and recognizable reference string. Such a data extraction operation will be described in further detail below. 
The system100thus allows target data to be extracted without the burden of storing templates indicating where the target data ought to be for each variation of source document which may contain the target data (e.g., based on different company shipping labels, different document types, etc.). The device104further includes an image sensor106, such as a color image sensor, to obtain image data representing the label110. The image data may be used in the data extraction operation to extract the target data. Referring toFIG.2, the mobile computing device104, including certain internal components, is shown in greater detail. The device104includes a processor200interconnected with a non-transitory computer-readable storage medium, such as a memory204. The memory204includes a combination of volatile memory (e.g. Random Access Memory or RAM) and non-volatile memory (e.g. read only memory or ROM, Electrically Erasable Programmable Read Only Memory or EEPROM, flash memory). The processor200and the memory204may each comprise one or more integrated circuits. The memory204stores computer-readable instructions for execution by the processor200. In particular, the memory204stores a control application208which, when executed by the processor200, configures the processor200to perform various functions discussed below in greater detail and related to the data extraction operation of the device104. The application208may also be implemented as a suite of distinct applications. The processor200, when so configured by the execution of the application208, may also be referred to as a controller200. Those skilled in the art will appreciate that the functionality implemented by the processor200may also be implemented by one or more specially designed hardware and firmware components, such as a field-programmable gate array (FPGAs), application-specific integrated circuits (ASICs) and the like in other embodiments. In an embodiment, the processor200may be, respectively, a special purpose processor which may be implemented via dedicated logic circuitry of an ASIC, an FPGA, or the like in order to enhance the processing speed of the data extraction operations discussed herein. The memory204also stores a repository212containing, for example, data extraction rules. The data extraction rules may include, for example, regular expressions defining possible reference strings, rules regarding spatial relationships between a detected reference string and the target data, rules for defining text lines and other text structures, or the like. Other rules for use in the data extraction operation performed by the device104may also be stored in the repository212. The device104also includes a communications interface216interconnected with the processor200. The communications interface216includes suitable hardware (e.g. transmitters, receivers, network interface controllers and the like) allowing the device104to communicate with other computing devices—particularly the server101—via the link107. The specific components of the communications interface216are selected based on the type of network or other links that the device104is to communicate over. The device104can be configured, for example, to communicate with the server101via the link107using the communications interface to send extracted target data to the server101. As shown inFIG.2, the processor200is interconnected with the image sensor106. The processor200is enabled, via such connections, to issue commands for performing a data extraction operation. 
Specifically, the processor200may control the image sensor106to capture image data representing the label110. The processor200may also be connected to one or more input and/or output devices220. The input devices220can include one or more buttons, keypads, touch-sensitive display screens or the like for receiving input from an operator, for example to initiate the data extraction operation. The output devices220can further include one or more display screens, sound generators, vibrators, or the like for providing output or feedback to an operator, for example to output the extracted target data. The functionality of the device104, as implemented via execution of the applications208by the processor200will now be described in greater detail, with reference toFIG.3.FIG.3illustrates a method300of extracting target data from a label, which will be described in conjunction with its performance in the system100, and in particular by the device104, with reference to the components illustrated inFIGS.1and2. In other examples, the method300may be performed by other suitable computing devices, such as the server101. The method300begins at block305in response to an initiation signal, such as an input at the input/output device220. For example, an operator may activate a trigger button to initiate the method300. At block305, the device104is configured to obtain an image representing the label110. For example, the processor200may control the image sensor106to capture image data representing the label110. In other examples, the image may be obtained from an external or other source, for example via the communications interface216. In some examples, the device104may also perform preprocessing operations on the image. For example, the device104may identify one or more superfluous features, such as barcodes, logos, excess space around the label110, or other image features and remove the superfluous features. For example, the device104may crop the excess space around the label110out of the image, or may cover the barcode with a block of a predefined color (e.g., white). In some examples, prior to covering the barcode, the device104may decode the barcode for further processing, as will be described further below. At block310, the device104obtains text recognition data. For example, the device104may upload the image obtained at block305to a cloud-based or other external service for applying optical character recognition (OCR) process or other text recognition processes on the image. The external service may then return the text recognition data. In other examples, the device104may apply OCR or other text recognition processes on the image locally to obtain the text recognition data. Generally, the text recognition data indicates locations of text structures on the label110. For example, the text recognition data may indicate the coordinates of vertices of bounding boxes around each page, block, paragraph, word, and symbol. At block315, the device104defines text lines on the label110based on the text recognition data obtained at block310. In particular, in structured text documents, such as on labels, a text line consisting of words in a semantic context may provide logical text structure. However, text lines are often not output as a text structure from a text recognition process. Accordingly, the device104may define text lines based on the text recognition data. For example, referring toFIG.4, an example method400of defining text lines is depicted. 
At block405, the device104selects block and word text structures for further processing. In particular, structured text documents, such as labels, often have only one page, and hence the page element does not need to be processed. Paragraphs and symbols (i.e., single characters) do not provide as much semantic meaning, and are also not processed. In particular, text recognition processes are often designed for unstructured texts in natural language. Accordingly, paragraphs may be challenging to define for structured documents. Thus, the page, paragraph and symbol text structures are discarded or otherwise designated for not processing. At block410, the device104selects a leading word and defines a new text line, with the leading word as the most recent word in the text line. In particular, a leading word may be defined as the top-left word in a given block, which has not yet been assigned to a text line. That is, a leading word may be defined according to the relative proximity of words to the top edge of the corresponding block, as well as the left edge of the corresponding block. At block415, the device104determines whether there is a word to the right of the most recent word in the text line. For example, on the first iteration, the device104may determine whether there are any words to the right of the leading word selected at block410. If the determination is affirmative, the device104proceeds to block420. At block420, the device104selects the word immediately to the right of the most recent word in the text line and proceeds to block425. At block425, the device104determines whether the selected word satisfies a same-line condition. In particular, the same-line condition may be based on one or more of: a distance between words, a character height comparison, a word orientation, and a word alignment. For example, the same-line condition may evaluate the selected word and the most recent word to determine whether the words are within a threshold distance, whether the character heights of words are within a threshold percentage, whether the words are oriented in the same direction, and whether the words are approximately horizontally aligned. If, at block425, the selected word satisfies the same-line condition, the device104proceeds to block430. At block430, the selected word is added to the text line as the most recent word. The device104then returns to block415to determine whether any additional words are to be added to the text line. If, at block425, the selected word does not satisfy the same-line condition, the device104determines that the text line is complete and proceeds to block435. Similarly, if, at block415, the device104determines that there are no further words to the right of the most recent word, the device104determines that the text line is complete and proceeds to block435. At block435, the device104defines a bounding box for the text line. Specifically, the bounding box surrounds all words in the text line, including the leading word and any additional words satisfying the same-line condition. More particularly, the bounding box may be defined as the smallest bounding box surrounding all the words in the text line. The text line is thus defined by its bounding box and its member words. At block440, after defining the text line and its bounding box, the device104determines whether there are any further leading words. If there are, the device104returns to block410to select a new leading word and define a new text line. 
If the determination at block440is negative, the device104determines that all text lines have been defined and the method400ends. For example, referring toFIGS.5A and5B, a schematic of the text line detection of the label110is depicted. The label110has blocks500and510. In the block500, the word “TO” may be selected as a leading word. On iterating through the method400, the device104may determine that the word “JOHN”, to the right of the leading word “TO” does not satisfy a same-line condition, due to the spacing between “TO” and “JOHN” exceeding a threshold distance. Accordingly the word “TO” may be defined as a text line502. After having assigned the word “TO” to a the text line502, the word “JOHN” may subsequently be defined and selected as a leading word. Iterating through the method400, text lines504,506, and508are also defined. In the block510, the word “MAILIT” may be selected as a leading word. In other examples, the word “19” may be selected as a leading word. The definition of leading words may differ, for example, based on the weighting of the top edge proximity or the left edge proximity. For example, when “MAILIT” is the leading word, the device104may determine that neither of the words “19” and “STANDARD” satisfy the same-line condition, due to the difference in character size, spacing between the words exceeding the threshold distance, and the lack of horizontal alignment of either the top or bottom edges of the words. Accordingly, text lines512,514, and516may be defined in the block510. Returning now toFIG.3, after having defined the text lines, at block320, the device104is configured to identify a reference string in the text lines. Generally, the reference string may be a word matching a predefined regular expression and having a specific spatial relationship with the target data. In some examples, any potential reference strings (i.e., words matching the regular expression) may be verified against a predetermined list of valid reference strings. For example, a ZIP code may be used as a reference string to extract an US postal address. Further, any detected five-digit words matching the regular expression (i.e., potential ZIP codes) may be verified against a predetermined list of all valid ZIP codes. In some examples, words adjacent the potential reference strings may also be checked for other validity conditions, to improve accuracy of the identification of the reference strings. For example, the word before or after a detected ZIP code may be checked to determine whether the word matches the name or abbreviation of a US state. In some examples, prior to searching the text lines for a reference string, the device104may restrict the text lines to search based on the spatial relationship of text lines with other identifying features. For example, if, at block305, a barcode is detected in the image, the device104may decode the barcode to obtain barcode data. The barcode data may be used to retrieve data indicative of an approximate spatial relationship between the barcode and the reference string. Accordingly, the device104may utilize the spatial relationship and the detected location of the barcode to identify an approximate location of a reference string. The device104may select text lines within a threshold distance of the approximate location and search the selected text lines for a reference string. 
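A minimal sketch of the reference-string identification in block 320 is shown below, using ZIP codes as in the example above. The regular expression, the truncated set of state abbreviations, and the requirement that a neighbouring word name a state are illustrative assumptions; the actual target data extraction rules stored in the repository 212 may differ.

```python
# Hypothetical sketch of block 320: find candidate reference strings (ZIP codes)
# in the defined text lines, verify them against a list of valid ZIP codes, and
# require an adjacent word naming a US state. Data structures are illustrative:
# each text line is a list of word strings in reading order.
import re
from typing import List, Optional, Set, Tuple

ZIP_PATTERN = re.compile(r"^\d{5}$")  # five-digit words are potential ZIP codes
STATE_ABBREVIATIONS: Set[str] = {"AL", "AK", "AZ", "CA", "IL", "NY", "TX"}  # truncated example set


def find_reference_string(
    text_lines: List[List[str]],
    valid_zip_codes: Set[str],
) -> Optional[Tuple[int, int]]:
    """Return (line_index, word_index) of the first verified ZIP code, if any."""
    for line_idx, words in enumerate(text_lines):
        for word_idx, word in enumerate(words):
            if not ZIP_PATTERN.match(word):
                continue
            if word not in valid_zip_codes:
                continue  # matches the pattern but is not an actual ZIP code
            # Additional validity check on the neighbouring words (a US state).
            neighbours = words[max(word_idx - 1, 0):word_idx] + words[word_idx + 1:word_idx + 2]
            if any(n.strip(",.").upper() in STATE_ABBREVIATIONS for n in neighbours):
                return line_idx, word_idx
    return None
```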
At block325, the device104selects a subset of text lines based on the location of the reference string, as obtained from the text recognition data, and target data extraction rules. In particular, the target data extraction rules may define a spatial relationship between the text lines associated with the target data and the reference string. For example, the text lines containing the target data may be aligned (e.g., left aligned) with the text line containing the reference string, and may be within a threshold distance above or below the line containing the reference string. In other examples, different alignments, threshold distances or other spatial relationships may also be defined by the target data extraction rules. Further, the target data extraction rules may define characteristics of valid text lines associated with the target data. For example, the text lines containing the target data may have homogeneous font features (e.g., have similar symbol sizes), be contained in the same block, or the like. In other examples, the target data extraction rules may define regular expressions that text lines containing the target data satisfies. In some examples, after selecting text lines satisfying the target data extraction rules, the device104may verify the location of the text lines against other identifying features. For example, if at block305, a barcode is detected in the image, the device104may decode the barcode to obtain barcode data. The barcode data may be used to retrieve data indicative of a defined approximate spatial relationship between the barcode and target data text lines. The device104may verify the relative spatial relationship between the detected barcode and the selected subset of text lines against the defined spatial relationship. If the verification fails, the device104may end the process, or may provide the selected subset of text lines with an indication of the failed verification. At block330, the selected subset of text lines is output as the target data. In some examples, at block320, the device104may identify more than one valid reference string. In such examples, the device104may proceed through the method300with each of the detected reference strings. For example, a shipping label may include multiple addresses, such as a recipient address and a sender address. Accordingly, at block330, the device104may output multiple selected subsets of text lines as the target data based on the corresponding reference strings. Referring toFIG.6, an example method600of selecting a subset of text lines (e.g., during the performance of block325) representing a US postal address is depicted. The US postal address extraction may utilize ZIP codes as reference strings. At block605, the device104first selects, as part of the address block (i.e., the subset of text lines representing the US postal address), the text line containing the ZIP code (also referred to herein as the ZIP code line). At block610, the device104checks the text line immediately below the ZIP code line to determine whether it is to be added to the address block. If the line below the ZIP code line specifies the country (e.g., matches one of US, USA, U.S.A., or United States of America) and is left-aligned with the ZIP code line, then it is a valid address block line, and is also added to the address block. If it does not specify the country, or is not left-aligned, then it is omitted. At block615, the device104selects text lines within a threshold distance of the ZIP code line. 
A vertical threshold distance may be defined based on the height of the bounding box of the ZIP code line, to account for the font size of the text on the label. For example, the threshold distance may be defined to be three times the height of the ZIP code line. Further, the device may select text lines within a threshold distance to the left (or right) of the ZIP code line to account for spacing based on city names or other structured text constraints of the label. For example, the threshold distance may be defined to be five times the width of the ZIP code word. Further, US postal address blocks are consistently parsed as a single block based on the structured text constraints of standard labels. Accordingly, in some examples, the device may select, rather than individual text lines, a block having at least one text line within the specified threshold distances. At block620, the device104verifies the font features and the alignment of the text lines in the block selected at block615. For example, lines in the address block above the ZIP code line have homogeneous font features (i.e., characters of consistent heights). Further, lines in the address block above the ZIP code line are left-aligned. Text lines failing the font homogeneity and alignment conditions are discarded. In particular, the device104may determine that two text lines are left aligned (or otherwise aligned) based on the bounding boxes of the two text lines. For example, as illustrated inFIG.7, the bounding boxes of two text lines700and704are depicted. The device104may construct a quadrilateral708between the top left corner of the first text line700, the top left corner of the second text line704, the bottom left corner of the second text line704, and the bottom left corner of the first text line700. If the computed area of the quadrilateral708is below a threshold value, the two text lines700and704are determined to be aligned. Further, in some examples, the threshold value may be dynamically computed according to the heights of the text lines700and704, and the average width of symbols or characters contained in the text lines700and704. Returning toFIG.6, at block625, the device104discards non-address lines. In some cases, the address blocks may contain non-address information, such as an entity name or a telephone number. Address lines may be differentiated by matching the lines to regular expressions including words that are digits (e.g., representing a street number or PO box) and one or more alphanumeric words (e.g., representing a street name). Text lines which fail to match the regular expressions are discarded. At block630, the remaining lines are defined to be the address block. Referring toFIG.8, an example label800is depicted. During extraction of the US postal address, the device104may first identify a ZIP code line802at block605of the method600. None of the lines below the ZIP code line802are left aligned with the ZIP code line and contain the country name, and hence no country line is added to the address block at block610. A block804includes lines within threshold distances of the ZIP code line802, and hence is added to the address block at block615. Each of text lines806,808,810,812,814, and816approximately satisfy the homogeneity of font features, however, the text line806is not left aligned with the remaining text lines, and hence the text line806is discarded at block620. At block625, the addressee, entity name, and telephone number lines808-812are discarded as being non-address lines. 
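The alignment test described above with reference to FIG. 7 may be sketched as follows; the quadrilateral area is computed with the shoelace formula, and the particular way the threshold is scaled by the line heights and the average symbol width is an assumption, since the description leaves the exact formula open.

```python
# Illustrative left-alignment test between two text-line bounding boxes (FIG. 7).
def quad_area(p1, p2, p3, p4):
    """Shoelace formula for the quadrilateral p1-p2-p3-p4."""
    pts = [p1, p2, p3, p4]
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def left_aligned(box_a, box_b, avg_symbol_width, factor=0.5):
    """box = (left, top, right, bottom); True if the left edges are nearly collinear.
    The threshold grows with the line heights and the average symbol width."""
    la, ta, _, ba = box_a
    lb, tb, _, bb = box_b
    area = quad_area((la, ta), (lb, tb), (lb, bb), (la, ba))
    threshold = factor * ((ba - ta) + (bb - tb)) * avg_symbol_width  # assumed scaling
    return area < threshold
```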
Thus, lines814,816, and802remain as the text lines defining the US postal address. In the foregoing specification, specific embodiments have been described. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of present teachings. The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential features or elements of any or all the claims. The invention is defined solely by the appended claims including any amendments made during the pendency of this application and all equivalents of those claims as issued. Moreover in this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms “comprises,” “comprising,” “has”, “having,” “includes”, “including,” “contains”, “containing” or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element proceeded by “comprises . . . a”, “has . . . a”, “includes . . . a”, “contains . . . a” does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises, has, includes, contains the element. The terms “a” and “an” are defined as one or more unless explicitly stated otherwise herein. The terms “substantially”, “essentially”, “approximately”, “about” or any other version thereof, are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the term is defined to be within 10%, in another embodiment within 5%, in another embodiment within 1% and in another embodiment within 0.5%. The term “coupled” as used herein is defined as connected, although not necessarily directly and not necessarily mechanically. A device or structure that is “configured” in a certain way is configured in at least that way, but may also be configured in ways that are not listed. It will be appreciated that some embodiments may be comprised of one or more specialized processors (or “processing devices”) such as microprocessors, digital signal processors, customized processors and field programmable gate arrays (FPGAs) and unique stored program instructions (including both software and firmware) that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the method and/or apparatus described herein. Alternatively, some or all functions could be implemented by a state machine that has no stored program instructions, or in one or more application specific integrated circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic. 
Of course, a combination of the two approaches could be used. Moreover, an embodiment can be implemented as a computer-readable storage medium having computer readable code stored thereon for programming a computer (e.g., comprising a processor) to perform a method as described and claimed herein. Examples of such computer-readable storage mediums include, but are not limited to, a hard disk, a CD-ROM, an optical storage device, a magnetic storage device, a ROM (Read Only Memory), a PROM (Programmable Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory) and a Flash memory. Further, it is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein will be readily capable of generating such software instructions and programs and ICs with minimal experimentation. The Abstract of the Disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separately claimed subject matter. | 29,924 |
11861923 | DETAILED DESCRIPTION Embodiments disclosed herein relate to methods, apparatuses, and computer-readable storage media for processing images to detect sensitive text information therein. Herein, sensitive text information refers to text information of one or more predefined types such as IP address, birthday, phone number, email addresses, home addresses, social security numbers, credit card numbers, bank account information, health information, and/or the like. In various embodiments, the methods, apparatuses, and computer-readable storage media disclosed herein are configured for detecting sensitive text information in input images of various image types (such as high-resolution digital images, low-resolution digital images, scanned documents, screenshots, and/or the like). The methods, apparatuses, and computer-readable storage media disclosed herein may also be used for detecting sensitive text information in video clips. Moreover, the methods, apparatuses, and computer-readable storage media disclosed herein may alternatively be used for extracting text information from images and/or video clips. According to one aspect of this disclosure, the methods, apparatuses, and computer-readable storage media disclosed herein use a fuzzy matching method to detect sensitive text from one or more recognized text strings obtained from an input signal such as an image, a video clip, an audio clip, or the like, via suitable text recognition technologies such as optical character recognition (OCR) (for images and video clips), voice recognition (for audio clips), and/or the like. The fuzzy matching method uses a regular expression to define the pattern of a sensitive-text type and detect, from the recognized text strings, one or more substrings sufficiently similar to the regular expression. In other words, the fuzzy matching method detects each of the one or more substrings based on the similarity between the substring and the regular expression. If the similarity between a substring and the regular expression is greater than a predefined similarity threshold, the substring is considered a piece of sensitive text. Alternatively, if the dissimilarity (also called fuzziness or error tolerance) between a substring and the regular expression is smaller than a predefined fuzziness threshold, the substring is considered a piece of sensitive text. In some embodiments, the fuzzy match method uses the edit distance between the regular expression and a substring in the recognized text strings for measuring the fuzziness therebetween. In some embodiments, the edit distance is the Levenshtein distance between the regular expression and a substring in the recognized text strings. In some embodiments, the fuzziness threshold may be adjustable by a user. Higher fuzziness thresholds may be used in applications that require higher recall rates, or lower fuzziness thresholds may be used in applications that need to maintain higher precisions. In some embodiments, the detected pieces of sensitive text of some sensitive-text types may be validated to verify the correctness of the detected sensitive text and reduce incorrect sensitive-text detections. In some embodiments, the methods, apparatuses, and computer-readable storage media disclosed herein also process the input images before OCR to improve image quality. 
In some embodiments, after processing the input images and detecting sensitive text therein, the methods, apparatuses, and computer-readable storage media disclosed herein may modify the input images by masking or removing the detected sensitive text. The methods, apparatuses, and computer-readable storage media disclosed herein provide robust image-based sensitive-text detection with improved detection rates (also called “recall rates”) while maintaining sufficient precisions (that is, the correctness of sensitive-text detection). A. System Structure Turning now toFIG.1, a computer network system for sanitizing images to remove sensitive text information therein, is shown and is generally identified using reference numeral100. In these embodiments, the image-sanitization system100is configured for receiving one or more images, recognizing text information in the received images, detecting sensitive text information such as IP address, birthday, phone number, email addresses, home addresses, social security numbers, credit card numbers, bank account information, health information, and/or the like in the recognized text information, and modifying the received images to remove the detected sensitive information. As shown inFIG.1, the image-sanitization system100comprises one or more server computers102, a plurality of client computing devices104, and one or more client computer systems106functionally interconnected by a network108, such as the Internet, a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), and/or the like, via suitable wired and wireless networking connections. The server computers102may be computing devices designed specifically for use as a server, and/or general-purpose computing devices acting server computers while also being used by various users. Each server computer102may execute one or more server programs. The client computing devices104may be portable and/or non-portable computing devices such as laptop computers, tablets, smartphones, Personal Digital Assistants (PDAs), desktop computers, and/or the like. Each client computing device104may execute one or more client application programs which sometimes may be called “apps”. Generally, the computing devices102and104comprise similar hardware structures such as hardware structure120shown inFIG.2. As shown, the hardware structure120comprises a processing structure122, a controlling structure124, one or more non-transitory computer-readable memory or storage devices126, a network interface128, an input interface130, and an output interface132, functionally interconnected by a system bus138. The hardware structure120may also comprise other components134coupled to the system bus138. The processing structure122may be one or more single-core or multiple-core computing processors, generally referred to as central processing units (CPUs), such as INTEL® microprocessors (INTEL is a registered trademark of Intel Corp., Santa Clara, CA, USA), AMD® microprocessors (AMD is a registered trademark of Advanced Micro Devices Inc., Sunnyvale, CA, USA), ARM® microprocessors (ARM is a registered trademark of Arm Ltd., Cambridge, UK) manufactured by a variety of manufactures such as Qualcomm of San Diego, California, USA, under the ARM® architecture, or the like. When the processing structure122comprises a plurality of processors, the processors thereof may collaborate via a specialized circuit such as a specialized bus or via the system bus138. 
The processing structure122may also comprise one or more real-time processors, programmable logic controllers (PLCs), microcontroller units (MCUs), μ-controllers (UCs), specialized/customized processors, hardware accelerators, and/or controlling circuits (also denoted “controllers”) using, for example, field-programmable gate array (FPGA) or application-specific integrated circuit (ASIC) technologies, and/or the like. In some embodiments, the processing structure includes a CPU (otherwise referred to as a host processor) and a specialized hardware accelerator which includes circuitry configured to perform computations of neural networks such as tensor multiplication, matrix multiplication, and the like. The host processor may offload some computations to the hardware accelerator to perform computation operations of neural network. Examples of a hardware accelerator include a graphics processing unit (GPU), Neural Processing Unit (NPU), and Tensor Process Unit (TPU). In some embodiments, the host processors and the hardware accelerators (such as the GPUs, NPUs, and/or TPUs) may be generally considered processors. The controlling structure124comprises one or more controlling circuits, such as graphic controllers, input/output chipsets and the like, for coordinating operations of various hardware components and modules of the computing device102/104. The memory126comprises one or more storage devices or media accessible by the processing structure122and the controlling structure124for reading and/or storing instructions for the processing structure122to execute, and for reading and/or storing data, including input data and data generated by the processing structure122and the controlling structure124. The memory126may be volatile and/or non-volatile, non-removable or removable memory such as RAM, ROM, EEPROM, solid-state memory, hard disks, CD, DVD, flash memory, or the like. The network interface128comprises one or more network modules for connecting to other computing devices or networks through the network108by using suitable wired or wireless communication technologies such as Ethernet, WI-FI® (WI-FI is a registered trademark of Wi-Fi Alliance, Austin, TX, USA), BLUETOOTH® (BLUETOOTH is a registered trademark of Bluetooth Sig Inc., Kirkland, WA, USA), Bluetooth Low Energy (BLE), Z-Wave, Long Range (LoRa), ZIGBEE® (ZIGBEE is a registered trademark of ZigBee Alliance Corp., San Ramon, CA, USA), wireless broadband communication technologies such as Global System for Mobile Communications (GSM), Code Division Multiple Access (CDMA), Universal Mobile Telecommunications System (UMTS), Worldwide Interoperability for Microwave Access (WiMAX), CDMA2000, Long Term Evolution (LTE), 3GPP, 5G New Radio (5G NR) and/or other 5G networks, and/or the like. In some embodiments, parallel ports, serial ports, USB connections, optical connections, or the like may also be used for connecting other computing devices or networks although they are usually considered as input/output interfaces for connecting input/output devices. The input interface130comprises one or more input modules for one or more users to input data via, for example, touch-sensitive screen, touch-sensitive whiteboard, touch-pad, keyboards, computer mouse, trackball, microphone, scanners, cameras, and/or the like. 
The input interface130may be a physically integrated part of the computing device102/104(for example, the touch-pad of a laptop computer or the touch-sensitive screen of a tablet), or may be a device physically separate from, but functionally coupled to, other components of the computing device102/104(for example, a computer mouse). The input interface130, in some implementation, may be integrated with a display output to form a touch-sensitive screen or touch-sensitive whiteboard. The output interface132comprises one or more output modules for output data to a user. Examples of the output modules comprise displays (such as monitors, LCD displays, LED displays, projectors, and the like), speakers, printers, virtual reality (VR) headsets, augmented reality (AR) goggles, and/or the like. The output interface132may be a physically integrated part of the computing device102/104(for example, the display of a laptop computer or tablet), or may be a device physically separate from but functionally coupled to other components of the computing device102/104(for example, the monitor of a desktop computer). The computing device102/104may also comprise other components134such as one or more positioning modules, temperature sensors, barometers, inertial measurement unit (IMU), and/or the like. The system bus138interconnects various components122to134enabling them to transmit and receive data and control signals to and from each other. FIG.3shows a simplified software architecture160of the computing device102or104. The software architecture160comprises one or more application programs164, an operating system166, a logical input/output (I/O) interface168, and a logical memory172. The one or more application programs164, operating system166, and logical I/O interface168are generally implemented as computer-executable instructions or code in the form of software programs or firmware programs stored in the logical memory172which may be executed by the processing structure122. The one or more application programs164executed by or run by the processing structure122for performing various tasks. The operating system166manages various hardware components of the computing device102or104via the logical I/O interface168, manages the logical memory172, and manages and supports the application programs164. The operating system166is also in communication with other computing devices (not shown) via the network108to allow application programs164to communicate with those running on other computing devices. As those skilled in the art will appreciate, the operating system166may be any suitable operating system such as MICROSOFT® WINDOWS® (MICROSOFT and WINDOWS are registered trademarks of the Microsoft Corp., Redmond, WA, USA), APPLE® OS X, APPLE® iOS (APPLE is a registered trademark of Apple Inc., Cupertino, CA, USA), Linux, ANDROID® (ANDROID is a registered trademark of Google LLC, Mountain View, CA, USA), or the like. The computing devices102and104of the image-sanitization system100may all have the same operating system, or may have different operating systems. The logical I/O interface168comprises one or more device drivers170for communicating with respective input and output interfaces130and132for receiving data therefrom and sending data thereto. Received data may be sent to the one or more application programs164for being processed by one or more application programs164. 
Data generated by the application programs164may be sent to the logical I/O interface168for outputting to various output devices (via the output interface132). The logical memory172is a logical mapping of the physical memory126to facilitate access by the application programs164. In this embodiment, the logical memory172comprises a storage memory area that may be mapped to a non-volatile physical memory such as hard disks, solid-state disks, flash drives, and the like, generally for long-term data storage therein. The logical memory172also comprises a working memory area that is generally mapped to high-speed, and in some implementations volatile, physical memory such as RAM, generally for application programs164to temporarily store data during program execution. For example, an application program164may load data from the storage memory area into the working memory area, and may store data generated during its execution into the working memory area. The application program164may also store some data into the storage memory area as required or in response to a user's command. In a server computer102, the one or more application programs164generally provide server functions for managing network communication with client computing devices104and facilitating collaboration between the server computer102and the client computing devices104. Herein, the term "server" may refer to a server computer102from a hardware point of view or a logical server from a software point of view, depending on the context. As described above, the processing structure122is usually of no use without meaningful firmware and/or software. Similarly, while a computer system such as the image-sanitization system100may have the potential to perform various tasks, it cannot perform any tasks and is of no use without meaningful firmware and/or software. As will be described in more detail later, the image-sanitization system100described herein, as a combination of hardware and software, generally produces tangible results tied to the physical world, wherein the tangible results such as those described herein may lead to improvements to the computer and system themselves. B. Image-Based Sensitive-Text Detection As described above, the one or more server computers102may receive various images from the client computing devices104or from server computers of other computer network systems. The received images may contain sensitive text information. In these embodiments, sensitive text information refers to text in the image that is represented in the same form thereof (that is, text represented in the image form), and the content thereof or the type thereof is sensitive (for example, related to privacy, national security, and/or the like) and may need to be sanitized. Examples of sensitive text information may be IP address, birthday, phone number, email addresses, home addresses, social security numbers, credit card numbers, bank account information, health information, and/or the like. FIG.4is a flowchart showing an image-based sensitive-text detection and sanitization process200executed by a processing structure122of a server computer102for receiving an input image202having text information, detecting sensitive text therein, and outputting a modified image with detected sensitive text information removed or otherwise sanitized.
In these embodiments, sensitive text may be classified into a plurality of predefined types (denoted "sensitive-text type" hereinafter) such as IP address, birthday, phone number, email addresses, home addresses, social security numbers, credit card numbers, bank account information, health information, and/or the like. At step204, the processing structure122processes the input image202(such as rescaling, binarization, noise-removal, rotation, and/or the like) to improve image quality by correcting distortion thereof and/or removing noise therein. At step206, the processing structure122uses a suitable optical character recognition (OCR) method to recognize text information (such as typed, handwritten or printed text) in the input image202and output a list of bounding boxes. Each bounding box encloses one or more recognized text strings and indicates the position of the recognized text strings in the input image202such as the left-top position, width, and height of the bounding box in the input image202. At step208, the processing structure122concatenates all recognized text strings into a long text string. At step210, the processing structure122detects substrings of sensitive text from the long text string and forms a list of detected substrings of sensitive text together with their sensitive-text types and their bounding boxes. At step212, the processing structure122uses the list of detected substrings of sensitive text obtained at step210to redact the input image202by removing the detected sensitive-text information (such as the image portions related to the detected substrings of sensitive text) therefrom using any suitable technologies (for example, by masking the detected sensitive-text information, wiping off the detected sensitive-text information, and/or the like), and then output a modified or sanitized image222. FIG.5shows the detail of step210, wherein the processing structure122uses a fuzzy matching method (also called approximate string matching or fuzzy string searching) to detect sensitive text (step242) and runs a validation function (if possible) to verify the correctness of the detected sensitive text and reduce incorrect sensitive-text detections (step244). A brief review of approximate string matching may be found in https://en.wikipedia.org/wiki/Approximate_string_matching. In these embodiments, the fuzzy matching method uses a regular expression (also denoted a "regex" hereinafter; see https://en.wikipedia.org/wiki/Regular_expression for a brief introduction) to define the pattern of a sensitive-text type and to detect, from the long text string, one or more substrings having sufficient similarities to the regular expression. When there are a plurality of sensitive-text types, a plurality of regular expressions may be defined, each corresponding to a sensitive-text type. The processing structure122then uses the fuzzy matching method to detect, from the long text string, one or more substrings having sufficient similarities to each of the plurality of regular expressions. As those skilled in the art understand, a regular expression is a sequence of characters defining a pattern of a text string. For example, the regular expression of "[a-z]" represents any lowercase letter between "a" and "z", the regular expression of "[0-9]" represents a digit between "0" and "9", and "?" represents that the preceding element may repeat zero or one time.
Thus, one may use a regular expression to search a text string and find substrings matching the pattern defined by the regular expression. For example, one may use "[bc]?oat" to find "oat", "boat", and "coat". However, the conventional regex-based text search only results in substrings of exact match. As OCR of a text image or the text portion of an image often has wrong character recognition (due to, for example, low image resolution), the conventional regex-based search may not find a piece of sensitive text if the OCR thereof is wrong. For example, using the regular expression "[bc]?oat" would not successfully find the word "coat" in a text image if the OCR wrongfully recognized the word "coat" in the text image as "ooat". Instead of finding substrings with exact match to the regular expression, the fuzzy matching method disclosed herein detects, from the long text string, substrings that are sufficiently similar to the pattern defined by the regular expression. For example, in some embodiments, the fuzzy matching method uses a regular expression P=p1p2. . . pm(which represents the pattern of a sensitive-text type) to find a substring Tj′,j=tj′. . . tjof the sensitive text in the long text string T=t1t2. . . tn, which, of all substrings of T, has an acceptable edit distance to the regular expression P. As those skilled in the art understand, edit distance is a measurement for quantifying the similarity or dissimilarity between strings, by counting the minimum number of operations required to transform one string into the other. In these embodiments, the Levenshtein distance is used as the edit distance (see https://en.wikipedia.org/wiki/Levenshtein_distance for a brief introduction). A Levenshtein distance between two strings s1and s2is the minimum number of single-character edit operations needed to transform s1into s2, where the possible edit operations include the deletion of a letter, the substitution of a letter by another letter, and the insertion of a letter. For example, the Levenshtein distance between the string "ooat" and the string "boat" is one (1) because one may edit "boat" by substituting the character "b" in "boat" with "o". Clearly, the minimum number of edit operations needed to transform s1into s2equals the minimum number of edit operations needed to transform s2into s1. More specifically, the Levenshtein distance lev(s1, s2) between string s1having a length (that is, number of characters) of |s1| and string s2having a length of |s2| is:

$$
\mathrm{lev}(s_1,s_2)=\begin{cases}
|s_1|, & \text{if } |s_2|=0;\\
|s_2|, & \text{if } |s_1|=0;\\
\mathrm{lev}(\mathrm{tail}(s_1),\mathrm{tail}(s_2)), & \text{if } s_1[0]=s_2[0];\\
1+\min\{\mathrm{lev}(\mathrm{tail}(s_1),s_2),\ \mathrm{lev}(s_1,\mathrm{tail}(s_2)),\ \mathrm{lev}(\mathrm{tail}(s_1),\mathrm{tail}(s_2))\}, & \text{otherwise};
\end{cases}\tag{1}
$$

where s[0] represents the first character of string s, the function tail(s) outputs a substring of the string s from the second character of the string s to the last character thereof, and the tail function applied to a single character gives rise to an empty string (that is, its length is zero). The Levenshtein distance between a text string and a regular expression is the minimum number of single-character edit operations needed to transform the text string s into a string matching the regular expression P.
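Equation (1) is the classical recursive definition; in practice the same quantity is usually computed with dynamic programming. The sketch below is one such implementation, provided only to make equation (1) concrete.

```python
# Dynamic-programming form of the Levenshtein distance defined in equation (1).
def levenshtein(s1: str, s2: str) -> int:
    m, n = len(s1), len(s2)
    # dist[i][j] = edit distance between s1[:i] and s2[:j]
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i                      # delete all of s1[:i]
    for j in range(n + 1):
        dist[0][j] = j                      # insert all of s2[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution / match
    return dist[m][n]

assert levenshtein("ooat", "boat") == 1
```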
For example, the Levenshtein distance between the string "ooat" and the regular expression of "[bc]?oat" is one (1) because one may edit "oat" (which is a text string matching the regular expression P) by inserting a character "o" at the beginning of "oat", or by substituting the character "b" or "c" in "boat" or "coat" (both "boat" and "coat" are text strings matching the regular expression P) with "o". Thus, the Levenshtein distance represents the difference or dissimilarity between a regular expression and a text string, and is denoted the "fuzziness" or "error tolerance". A smaller fuzziness (that is, a smaller Levenshtein distance) implies lower dissimilarity or higher similarity between the text string and the regular expression. Therefore, the fuzzy matching method in these embodiments searches in the long text string T for substrings Tj′,jof sensitive text with a fuzziness or Levenshtein distance to the regular expression P smaller than a predefined fuzziness threshold (that is, smaller than a predefined Levenshtein-distance threshold). In some embodiments, the fuzzy matching method uses Thompson's nondeterministic finite automaton (NFA) for detecting sensitive-text substrings Tj′,jin the long text string T. The detail of Thompson's NFA may be found in the academic paper entitled "Approximate regular expression matching with multi-strings" to Belazzougui, Djamal, and Mathieu Raffinot and published in International Symposium on String Processing and Information Retrieval, pp. 55-66, Springer, Berlin, Heidelberg, 2001; and the academic paper entitled "A subquadratic algorithm for approximate regular expression matching" to Wu, Sun, Udi Manber, and Eugene Myers, and published in Journal of algorithms 19, no. 3 (1995): 346-360, the content of each of which is incorporated herein by reference in its entirety. More specifically, a Thompson's NFA is constructed for each regular expression that represents a sensitive-text type. As shown inFIG.6, the Thompson's NFA comprises a plurality of nodes with transitions between them. The nodes are generally of two types, including a first type of ε-nodes (in which all ingoing transitions are ε-transitions) and a second type of other nodes (denoted L-nodes). Herein, an ε-transition is a transition in the NFA whose input may be the empty string ε. As shown inFIG.6, the construction of Thompson's NFA is done recursively on the expression using the following rules:
Referring toFIG.6(a), a regular expression consisting of a single character a generates an automaton with two states I and F, linked with one transition labeled with the character a.
The state I is the initial state of the automaton and the state F is the accepting state of the automaton.
Referring toFIG.6(b), a regular expression R=R1·R2(that is, R is the concatenation of R1and R2, where "·" represents concatenation), wherein R1and R2are represented as automaton of node L1and automaton of node L2, respectively, generates an automaton L1·L2which contains all original states and transitions of automatons of R1and R2except that the final state of automaton of R1is merged with initial state of the automaton of R2.
Referring toFIG.6(c), a regular expression R=R1∪R2(that is, R is the aggregation of R1and R2, where "∪" represents union), wherein R1and R2are represented as automaton of node L1and automaton of node L2, respectively, generates an automaton L1|L2which contains the states and the transitions which appear in automaton of the regular expressions R1, R2with two new states I, F and four new transitions labeled with ε.
Referring toFIG.6(d), a regular expression R=R1*, wherein R1is represented as automaton of node L, generates an automaton L* which contains all the original states of R1with two new states I, F and four new transitions labeled with ε.
An example of Thompson's NFA built on an exemplary regular expression "GA(TAA|GG)*" is shown inFIG.6(e). By using the Thompson's NFA constructed for the regular expression of each sensitive-text type, the fuzzy matching method scans the long text string T and counts the smallest Levenshtein distance between each possible substring Ti,jof the long text string T and the final state in the Thompson's NFA. If a substring Ti,jhas a Levenshtein distance smaller than a predefined fuzziness threshold, the substring Ti,jis considered a substring of sensitive text. After detecting a substring of sensitive text, the processing structure122includes the detected substring of sensitive text and the bounding box and sensitive-text type thereof into a sensitive-text list. As described above, some sensitive-text types (such as credit card number, resident identity card number, and the like) may be validated at step244shown inFIG.5. For example, an 18-digit resident identity card number based on ISO 7064:1983, MOD 11-2 (see https://en.wikipedia.org/wiki/Resident_Identity_Card) may be expressed as, from left to right, a18, a17, . . . , a1, where ai(i=1, . . . , 18) is a digit, and a1is a checksum for validating the first 17 digits, which is obtained by:
(i) Calculating a weight coefficient for each digit ai(i=2, . . . , 18) as $W_i = 2^{i-1} \bmod 11$, where A mod B is the modulo operation returning the remainder of A divided by B.
(ii) Calculating $S = \sum_{i=2}^{18} a_i W_i$.
(iii) Calculating $a_1 = (12 - (S \bmod 11)) \bmod 11$.
Therefore, if a resident identity card number is detected at step242shown inFIG.5, the processing structure122may calculate a checksum for the detected resident identity card number and compare the calculated checksum with the rightmost digit of the detected resident identity card number for validation. As described above, the input image202may be modified to remove or otherwise sanitize the detected and optionally validated sensitive text therein and obtain the modified image222. The image-sanitization system100disclosed herein provides several advantages compared to conventional image-sanitization systems.
For example, Presidio (which is a data protection and anonymization API developed by Microsoft Corp., Redmond, WA, USA; see https://github.com/microsoft/presidio) provides identification and anonymization functions for personal identifiable information (PII) entities in text and images. The Presidio Image Redactor is a Python-based module for detecting and redacting PII text entities in images. Specifically, the Presidio Image Redactor defines a regex-based recognizer for each of sensitive data types. For example, the regular expression for IPv4 address is “\b(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\b”, and this regular expression is used in Presidio's IPv4 recognizer for matching IPv4 addresses in images. In operation, the Presidio Image Redactor uses OCR to recognize text in an image and then uses the regex-based recognizers one by one to detect sensitive text in the text obtained from the OCR. Compared to Presidio which uses conventional regular expression for detecting sensitive text from images, the image-sanitization system100disclosed herein utilizes the above-described fuzzy matching method for sensitive-text detection, thereby greatly improving the detection rate (also denoted “recall rate”) while maintaining sufficient precision (that is, the detection correctness) in image-based sensitive-text detection. The image-sanitization system100disclosed herein also utilizes image pre-processing operations to enhance image quality thereby improving the OCR accuracy. FIG.7shows comparison results of the Presidio Image Redactor and the image-sanitization system100using images of different types (screenshots, scanned documents, and scanned receipts) obtained from a private dataset, the FUNSD open dataset (https://guillaumejaume.github.io/FUNSD/), the SROIE open dataset (https://rrc.cvc.uab.es/?ch=13), respectively) and having sensitive text of different sensitive-text types (IP address, phone number, and date). In the comparisons, the image-sanitization system100is tested with fuzziness thresholds of one (1), two (2), and three (3). As shown, the recall rates of the image-sanitization system100are all increased compared to those of the Presidio Image Redactor. For example, the recall rates of the image-sanitization system100with fuzziness of one and two in detecting phone numbers are increased by 18% and 25%, respectively, compared to that of the Presidio Image Redactor, while maintaining comparable precisions. Also shown in the columns of time inFIG.7, the image-sanitization system100does not degrade the system efficiency compared to Presidio that uses standard regular-expression matching. Moreover,FIG.7shows that the image-sanitization system100supports a large range of sensitive-text types and various types of images. Those skilled in the art will appreciate that various alternative embodiments are readily available. For example, the fuzziness threshold used in the image-sanitization system100may be adjustable by a user. As can be seen fromFIG.7, higher fuzziness thresholds may be used in applications that require higher recall rates, or lower fuzziness thresholds may be used in applications that need to maintain higher precisions. 
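As a practical illustration of thresholded fuzzy matching (not the Thompson's-NFA implementation described above), the third-party Python regex module exposes a similar error-tolerance syntax, where {e<=N} permits up to N single-character edits between a substring and the pattern. The simplified IPv4-style pattern and the sample text below are assumptions for the example, and the exact match reported may vary with the pattern used.

```python
# Thresholded fuzzy matching with the third-party "regex" module (pip install regex).
import regex

# Up to one edit is tolerated between a substring and the (simplified) IPv4 pattern.
IPV4_FUZZY = r"(?:\b(?:\d{1,3}\.){3}\d{1,3}\b){e<=1}"

text = "server at 192.168.O.1 responded"      # OCR read the zero as the letter "O"
for m in regex.finditer(IPV4_FUZZY, text, flags=regex.BESTMATCH):
    subs, ins, dels = m.fuzzy_counts          # edit operations actually used
    print(m.group(), subs + ins + dels)       # e.g. "192.168.O.1" with fuzziness 1
```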
In some embodiments, other suitable edit distance such as Hamming distance, Longest common subsequence (LCS) distance, Damerau-Levenshtein distance, Jaro-Winkler distance, and/or the like may be used for measuring the fuzziness between a regular expression and a substring in the recognized text strings. In some embodiments, the image-based sensitive-text detection and sanitization process200shown inFIG.4may be executed by a plurality of processing structures122of a plurality of server computers102. Each step may be performed by a respective program or program module running on a server computer102. In some embodiments, the image-based sensitive-text detection and sanitization process200may be performed by a client computing device104. In these embodiments, the client computing device104may or may not be connected to the network108. In some embodiments, the image-based sensitive-text detection and sanitization process200may not comprise the image pre-processing step204. In some embodiments, at step208of the image-based sensitive-text detection and sanitization process200, the processing structure122may convert the OCR detection results into a plurality of long text strings. Then, in step210, the processing structure122uses the fuzzy matching method to separately process each long text string. In some embodiments, the image-based sensitive-text detection and sanitization process200may not comprise the text mid-processing step208. Then, in step210, the processing structure122uses the fuzzy matching method to separately process each recognized text string. In some embodiments, the predefined sensitive-text types may be user-customizable wherein a user (such as an administrator) may add, remove, and/or modify one or more sensitive-text types in the system100. While the image-based sensitive-text detection and sanitization process200in above embodiments is used for detecting sensitive text information in images, in other embodiments, the image-based sensitive-text detection and sanitization process200may be used for detecting sensitive text information in other types of input data. For example, in some embodiments, the image-based sensitive-text detection and sanitization process200may be used for detecting sensitive text information in one or more frames of a video clip. In some other embodiments, the image-based sensitive-text detection and sanitization process200may be used for detecting sensitive text information in audio data. In these embodiments, instead of performing image pre-processing at step204, suitable audio signal pre-processings may be performed at step204. Moreover, instead of using OCR at step206, voice recognition may be used for converting the audio signal to a text string. The sensitive-text recognition step210may be similar to that described above. At step212, suitable audio processing technologies may be used to modify the audio signal and remove the detected sensitive text information. In some embodiments, a first processing structure may recognize text strings from a signal such as an image, a video clip, an audio clip, or the like, and transmit the recognized text strings to a second processing structure for sensitive-text detection. After detecting the sensitive text information using the fuzzy matching method, the second processing structure transferring the list of detected substrings of sensitive text back to the first processing structure for redaction. 
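A minimal sketch of the redaction at step 212 is shown below, assuming the list produced at step 210 carries a bounding box for each detected substring and that OpenCV is available for drawing; any raster-drawing routine could be substituted.

```python
# Mask the bounding box of each detected sensitive substring with a filled rectangle.
import cv2

def redact(image, detections):
    """detections: iterable of dicts such as
    {"text": "192.168.0.1", "type": "ip", "box": (left, top, width, height)}."""
    out = image.copy()
    for det in detections:
        x, y, w, h = map(int, det["box"])
        cv2.rectangle(out, (x, y), (x + w, y + h),
                      color=(0, 0, 0), thickness=-1)   # filled black mask
    return out
```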
In some embodiments, the image-based sensitive-text detection and sanitization process200may not comprise the redaction step212. Rather, the processing structure122may use the list of detected substrings to locate meaningful pieces of text inside image data and help to extract them for further processing. Although embodiments have been described above with reference to the accompanying drawings, those of skill in the art will appreciate that variations and modifications may be made without departing from the scope thereof as defined by the appended claims. | 36,942
11861924 | DETAILED DESCRIPTION Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a sufficient understanding of the subject matter presented herein. But it will be apparent to one of ordinary skill in the art that the subject matter may be practiced without these specific details. Moreover, the particular embodiments described herein are provided by way of example and should not be used to limit the scope of the invention to these particular embodiments. In other instances, well-known data structures, timing protocols, software operations, procedures, and components have not been described in detail so as not to unnecessarily obscure aspects of the embodiments of the invention. This application relates to the field of pixelated image analysis and editing including cropping and skew rectification of document images. Current image recognition and text alignment techniques struggle to pre-process pixelated images for OCR to produce crisp, clear results, especially for mobile phone type use cases. Systems and methods here may be used for pre-processing images, including using a computer for receiving a pixelated image of a paper document of an original size, downscaling the received pixelated image, employing a neural network algorithm to the downscaled image to identify four corners of the paper document in the received pixelated image, re-enlarging the downscaled image to the original size, identifying each of four corners of the paper document in the pixelated image, determining a quadrilateral composed of lines that intersect at four angles at the four corners of the paper document in the pixelated image, defining a projective plane of the pixelated image, and determining an inverse transformation of the pixelated image to transform the projective plane quadrilateral into a right angled rectangle. Overview In today's world, paper documents such as receipts and tax forms still exist. However, as digitization takes over, it is often useful to turn information found on paper documents into digitized or pixelated text to be stored and manipulated using computerized image capture and storage. The methods and systems described here include improved ways to pre-process such images to better use OCR to extract data from images taken of paper documents using various de-skewing and cropping techniques. Steps such as these are considered pre-processing of image data which may enhance the effectiveness of later optical character recognition (OCR) of such image data. Pre-processing such as de-skewing and cropping described herein may reduce background interference and align the documents in an image, thereby enhancing any text characters in the images for improved OCR results. Thus, by removing cluttered backgrounds in images and aligning identified text, such text can be more accurately processed using OCR. This may be accomplished by identifying four corners of a document to then de-skew and rotate text within the document as well as eliminate background clutter. FIG.1shows example steps of document understanding that may be accomplished to extract data regarding text from images of paper documents that include such text. First,102, the original image is captured. Such an original image may be captured using a mobile device. Such images may include noisy content, papers not aligned to the camera, or skewed documents and text. 
Next,104pre-processing the image may be used to improve the accuracy of the eventual text extraction. Pre-processing may include removing background noise, correction of the orientation of the source paper, removal of the background lighting or shadows depicted in the image. Next,106OCR may extract all text found in the image. Finally,108, the extraction may find relevant information, and/or to categorize and extract as values. Network Examples Including Image Receipt As discussed, paper documents such as tax forms or paper receipts are found and used in commerce. It would be advantageous to obtain pixelated or digitized copies of such paper records to identify the text on them for processing, storage, or other use. More and more, the easiest way for people to obtain such pixelated or digitized copies is to use a smartphone or mobile device to take a picture of the paper. Problems may arise, however, in extracting information about the text from such images due to the difficulty of recognizing text found on the paper documents because of various conditions of the original paper source, or the image taking process. For example, an image taken from a smartphone of a paper receipt may not be completely aligned with the camera lens making it difficult for the OCR system to correctly extract the text from the paper. Other photographic noise such as background clutter may be picked up by an OCR system as gibberish text. Using the methods here, an input such as a first image capture of a paper receipt or form may be pre-processed to enhance the ability to OCR such documents. Image Capture and Pre-Processing FIG.2shows an example network200which may be used to practice the methods described herein. Network200may include elements such as at least one mobile device, or client220, pre-processing center230, and/or at least one data storage250. In the examples described herein, the mobile client device220may be used to capture an image using its associated camera. Image data may then be sent to a pre-processing center230by way of a network210where the image may be processed and/or stored250. In some example embodiments, software may reside on the client220itself and the pre-processing and/or storage may be conducted on the client220. Each of these elements fromFIG.2may include one or more physical computing devices (e.g., which may be configured as shown inFIG.5) distributed computing devices, local computing devices, or any combination of computing devices. In some embodiments, one or more data storages250may be provided by the same computing device and/or the same device that provides pre-processing center230. In some embodiments, client220and data storage250may be provided by a single computing device. In some embodiments, client220may be any device configured to provide access to services. For example, client220may be a smartphone, personal computer, tablet, laptop computer, smart watch, smart watch or other wearable, or any other computing device. In some embodiments, data storage250may be any device configured to host a service, such as a server or other device or group of devices. In some embodiments, client220may be a service running on a device, and may consume other services as a client of those services (e.g., as a client of other service instances, virtual machines, and/or servers). The elements may communicate with one another through at least one network210. Network210may be the Internet and/or other public or private wired or wireless networks or combinations thereof. 
For example, in some embodiments, at least data pre-processing center230, and/or at least one data storage250may communicate with one another over secure channels (e.g., one or more TLS/SSL channels). In some embodiments, communication between at least some of the elements of system200may be facilitated by one or more application programming interfaces (APIs). APIs of system200may be proprietary and/or may be examples available to those of ordinary skill in the art such as Amazon® Web Services (AWS) APIs or the like. Specific examples of the pre-processing performed by the elements of system200in combination with one another are given below. However, the roles of client220, pre-processing center230, and data storage250may be summarized as follows. Client220may acquire an image by use of its associated camera feature(s). Client220may then locally store such image data and/or send the image data via the network210to the pre-processing center230where the pre-processing may take place as described inFIG.3. In some examples, image data may be stored in local and/or distributed data storage250. Client220, pre-processing center230and data storage250are each depicted as single devices for ease of illustration, but those of ordinary skill in the art will appreciate that client220, pre-processing center230and/or data storage250may be embodied in different forms for different implementations. For example, any of client220, pre-processing center230and/or data storage250may include a plurality of devices, may be embodied in a single device or device cluster, and/or subsets thereof may be embodied in a single device or device cluster. A single user may have multiple clients220, and/or there may be multiple users each having their own client(s)220. Client(s)220may each be associated with a single process225, a single user225, or multiple users and/or processes225. Furthermore, as noted above, network210may be a single network or a combination of networks, which may or may not all use similar communication protocols and/or techniques. Example Images FIG.3Ashows an example of the steps described herein, imparted on one image of a paper receipt. In the example, the receipt is received320with a skewed perspective from the angle the original image was captured. It also includes background noise332from the table the receipt was sitting on when the image was taken. As received, the text of the receipt330may be difficult to process by an OCR system and may be confused by the background clutter. Using the methods described herein, the four corners340,342,344,346of the receipt in the image322are first identified. In some example embodiments, the four corners are identified using machine learning/neural networks which have been trained to find such corners. After the corners are identified, the image may be de-skewed to correct any mis-alignment of the text334in the document. Additionally or alternatively, after the corners are identified, everything outside the rectangle324may be cropped to remove the background noise332. Such a resultant pixelated or digitized image336may be more accurately processed using OCR than the first image320with the text aligned and the background removed. FIG.3Bshows another example of an original image321of a receipt on a noisy background333. It then shows the receipt323corner identification341,343,345,347, as well as the cropped image325accomplished using the steps described herein. 
Method Process Step Examples FIG.4explains the example steps used to complete the processes to obtain the results as shown inFIG.3. InFIG.4, first, an image is received by the computing system and that image is downscaled402. In some examples, the image is received from a mobile client or smartphone camera image capture. In some examples, the downscale may be a reduction in pixels by grouping individual pixels to be processed in blocks. This downscale reduces the number of pixels to be processed and thereby increases computing efficiency, reduces the time to process images, and frees up compute resources for other tasks. Next, the image is passed through a neural network model to obtain four heat map slices404. This process utilizes neural networks which are trained to identify the four corners of a piece of paper captured in an image. The four heat maps identify the approximate locations of the four corners, as shown inFIG.3Aat340,342,344and346andFIG.3B341,343,345and347. Next, the heat map slices are rescaled406. In some examples, this rescaling includes bilinear interpolation to obtain the original size of the image. Next, the pixel with the highest predicted probability of a keypoint occurrence is identified, for each of the four corners408. Then a set of the four corner points is returned and the system determines if lines that connect the corners create angles that fall within a pre-determined tolerance,410. That is, if lines are drawn between the four points, do the line intersections create angles that fall within a tolerance around a ninety degree right angle. If the lines do fall within the tolerance,412, those corner determinations are used to define a quadrilateral and projective plane from which an inverse transformation of the image may be made in order to de-skew the image. If the lines fall outside the tolerance, the original image is returned. Background, located outside the quadrilateral formed by connecting the corners may be cropped out to remove any background noise or images. Stacked Hourglass Neural Network In some example methods described herein, a neural network may be used to create the corner heat maps, and thereby identify the four corners of the document in the image. Such a neural network may be a stacked hourglass neural network arrangement. Such a convolutional neural network (CNN) has been used for human pose prediction in image analysis.11See Stacked Hourglass Networks for Human Pose Estimation, Alejandro Newell, et. al., University of Michigan, Ann Arbor, arXiv:1603.06937v2, 26 Jul. 2016. In the application described herein, the CNN may be trained to identify the four corners of a document in an image analysis. In such an arrangement, the network may capture and consolidate information across all scales of any given image. This may include first, pooling down the image to a low resolution, then, up-sampling the image to combine features across multiple resolutions. In some examples, multiple hourglass modules may be used back-to-back to allow for repeated bottoms-up, top-down inference across scales. This may utilize a single pipeline with skip layers to preserve spatial information at each resolution and convolution and max pooling layers to process image features down to a low resolution. For each max pooling step, the CNN may branch off and apply more convolutions at the original pre-pooled resolution, so at the lowest resolution, the network begins the top-down sequence of up-sampling and combination of features across scales. 
Nearest neighbor up-sampling of the lower resolution, followed by an elementwise addition of the two sets of features, may be used to bring together information across two adjacent resolutions; thus, for every layer on the way down for downscaling, there is a corresponding layer on the way up for up-sampling. At the output resolution, two consecutive rounds of 1×1 convolutions may be applied to produce the final network predictions. The result may be heat maps of the approximate locations of the four corners of the paper document as described. Used with intermediate supervision, repeated bidirectional inference may be used to increase the network's performance. As described, such a neural network may be trained by introducing iterative examples to identify the four document corners in an image as an upper left, an upper right, a lower left, and a lower right corner. Such training may include manual identification of the corners of a paper document captured in an image, and feeding that into the CNN model. For example, training may include using multiple images, for example many thousands of images that are manually annotated to include the locations of the four corner points. Several data augmentation techniques may then be used to achieve greater generalization performance, including random affine transformations as well as, in some examples, random background texture sampling. After the CNN model is trained, a new image may be fed into the model to find the corners. At runtime, the document may be resized to 256×256, and the corners identified in the trained stacked-hourglass network. As discussed, the result of such analysis would be four different heat maps that project a probability of the location of each of the four corners. The resultant point heat maps may be resized to the original image size and maximum values may be found to identify the corner point locations. Then a quadrilateral may be formed by connecting the corners, after which a measurement of the mean vertical and horizontal lengths of the quadrilateral, as defined by the points, may be made in order to make a projective transformation that transforms the quadrilateral into a proper rectangle whose vertical and horizontal dimensions were previously calculated. That is, the de-skewed image results in a rectangular representation of the paper document, with right-angled corners and the resultant text de-skewed.
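A sketch of this runtime post-processing, assuming OpenCV and NumPy, might look as follows; the corner heat maps are taken as given (for example from a trained stacked-hourglass model), and the corner ordering, tolerance value, and function names are assumptions for illustration.

```python
# Illustrative post-processing of four corner heat maps, assuming OpenCV and
# NumPy; the heat maps come from a trained model (e.g., a stacked hourglass),
# which is assumed rather than implemented here.
import cv2
import numpy as np

ANGLE_TOLERANCE_DEG = 15.0  # hypothetical tolerance around a 90-degree angle

def corners_from_heatmaps(heatmaps, out_h, out_w):
    """Rescale each heat map to the original image size and take its argmax."""
    corners = []
    for hm in heatmaps:  # heatmaps: list of 2-D arrays ordered UL, UR, LR, LL
        hm_full = cv2.resize(hm, (out_w, out_h), interpolation=cv2.INTER_LINEAR)
        y, x = np.unravel_index(np.argmax(hm_full), hm_full.shape)
        corners.append((float(x), float(y)))
    return np.array(corners, dtype=np.float32)

def angles_ok(corners, tol=ANGLE_TOLERANCE_DEG):
    """Check that each interior angle of the quadrilateral is close to 90 degrees."""
    for i in range(4):
        prev_pt, pt, next_pt = corners[i - 1], corners[i], corners[(i + 1) % 4]
        v1, v2 = prev_pt - pt, next_pt - pt
        cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        if abs(angle - 90.0) > tol:
            return False
    return True

def deskew(image, heatmaps):
    """De-skew and crop the document, or return the original image if the check fails."""
    h, w = image.shape[:2]
    corners = corners_from_heatmaps(heatmaps, h, w)  # order: UL, UR, LR, LL
    if not angles_ok(corners):
        return image
    width = int(np.mean([np.linalg.norm(corners[1] - corners[0]),
                         np.linalg.norm(corners[2] - corners[3])]))
    height = int(np.mean([np.linalg.norm(corners[3] - corners[0]),
                          np.linalg.norm(corners[2] - corners[1])]))
    target = np.array([[0, 0], [width - 1, 0],
                       [width - 1, height - 1], [0, height - 1]], dtype=np.float32)
    transform = cv2.getPerspectiveTransform(corners, target)
    return cv2.warpPerspective(image, transform, (width, height))
```

As in the flow of FIG.4, the original image is returned when the angle tolerance check fails, and the perspective warp both de-skews the text and crops away the background outside the quadrilateral.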
Each of these components may be coupled by bus512, and in some embodiments, these components may be distributed across multiple physical locations and coupled by a network. Display device506may be any known display technology, including but not limited to display devices using Liquid Crystal Display (LCD) or Light Emitting Diode (LED) technology. Processor(s)502may use any known processor technology, including but not limited to graphics processors and multi-core processors. Input device504may be any known input device technology, including but not limited to a keyboard (including a virtual keyboard), mouse, track ball, and touch-sensitive pad, or display. Bus512may be any known internal or external bus technology, including but not limited to ISA, EISA, PCI, PCI Express, NuBus, USB, Serial ATA or FireWire. Computer-readable medium510may be any medium that participates in providing instructions to processor(s)502for execution, including without limitation, non-volatile storage media (e.g., optical disks, magnetic disks, flash drives, etc.), or volatile media (e.g., SDRAM, ROM, etc.). Computer-readable medium510may include various instructions514for implementing an operating system (e.g., Mac OS®, Windows®, Linux). The operating system may be multi-user, multiprocessing, multitasking, multithreading, real-time, and the like. The operating system may perform basic tasks, including but not limited to: recognizing input from input device504; sending output to display device506; keeping track of files and directories on computer-readable medium510; controlling peripheral devices (e.g., disk drives, printers, etc.) which can be controlled directly or through an I/O controller; and managing traffic on bus512. Network communications instructions516may establish and maintain network connections (e.g., software for implementing communication protocols, such as TCP/IP, HTTP, Ethernet, telephony, etc.). Pre-processing service instructions518may include instructions that perform the various pre-processing functions as described herein. Pre-processing service instructions518may vary depending on whether computing device500is functioning as client220, pre-processing center230, data storage250, or a combination thereof. Application(s)520may be an application that uses or implements the processes described herein and/or other processes. The processes may also be implemented in operating system514. The described features may be implemented in one or more computer programs that may be executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program may be written in any form of programming language (e.g., Objective-C, Java), including compiled or interpreted languages, and it may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. Suitable processors for the execution of a program of instructions may include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors or cores, of any kind of computer. 
Generally, a processor may receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer may include a processor for executing instructions and one or more memories for storing instructions and data. Generally, a computer may also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data may include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, SPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory may be supplemented by, or incorporated in, ASICs (application-specific integrated circuits). To provide for interaction with a user, the features may be implemented on a computer having a display device such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer. The features may be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination thereof. The components of the system may be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, e.g., a telephone network, a LAN, a WAN, and the computers and networks forming the Internet. The computer system may include clients and servers. A client and server may generally be remote from each other and may typically interact through a network. The relationship of client and server may arise by virtue of computer programs running on the respective computers and having a client-server relationship to each other, or by processes running on the same device and/or device cluster, with the processes having a client-server relationship to each other. One or more features or steps of the disclosed embodiments may be implemented using an API. An API may define one or more parameters that are passed between a calling application and other software code (e.g., an operating system, library routine, function) that provides a service, that provides data, or that performs an operation or a computation. The API may be implemented as one or more calls in program code that send or receive one or more parameters through a parameter list or other structure based on a call convention defined in an API specification document. A parameter may be a constant, a key, a data structure, an object, an object class, a variable, a data type, a pointer, an array, a list, or another call. API calls and parameters may be implemented in any programming language. The programming language may define the vocabulary and calling convention that a programmer will employ to access functions supporting the API. 
In some implementations, an API call may report to an application the capabilities of a device running the application, such as input capability, output capability, processing capability, power capability, communications capability, etc. Conclusion As disclosed herein, features consistent with the present inventions may be implemented by computer-hardware, software and/or firmware. For example, the systems and methods disclosed herein may be embodied in various forms including, for example, a data processor, such as a computer that also includes a database, digital electronic circuitry, firmware, software, computer networks, servers, or in combinations of them. Further, while some of the disclosed implementations describe specific hardware components, systems and methods consistent with the innovations herein may be implemented with any combination of hardware, software and/or firmware. Moreover, the above-noted features and other aspects and principles of the innovations herein may be implemented in various environments. Such environments and related applications may be specially constructed for performing the various routines, processes and/or operations according to the invention or they may include a general-purpose computer or computing platform selectively activated or reconfigured by code to provide the necessary functionality. The processes disclosed herein are not inherently related to any particular computer, network, architecture, environment, or other apparatus, and may be implemented by a suitable combination of hardware, software, and/or firmware. For example, various general-purpose machines may be used with programs written in accordance with teachings of the invention, or it may be more convenient to construct a specialized apparatus or system to perform the required methods and techniques. Although certain presently preferred implementations of the invention have been specifically described herein, it will be apparent to those skilled in the art to which the invention pertains that variations and modifications of the various implementations shown and described herein may be made without departing from the spirit and scope of the invention. Accordingly, it is intended that the invention be limited only to the extent required by the applicable rules of law. The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. | 27,982 |
11861925 | DETAILED DESCRIPTION Implementations for document field detection are described. A “field” refers to an area within a document (e.g., a rectangular area), such that the area is designated as a placeholder in which variable data can be populated, thus creating a new instance of the document type (e.g., an invoice, an order, etc.). Fields are typically found in form documents. A document may have a variety of fields, such as text fields (containing numerals, numbers, letters, words, sentences), graphics fields (containing a logo or any other image), tables (having rows, columns, cells), and so on. As used herein, “electronic document” (also referred to simply as “document” herein) may refer to any document whose image may be accessible to a computing system. The image may be a scanned image, a photographed image, or any other representation of a document that is capable of being converted into a data form accessible to a computer. For example, “electronic document” may refer to a file comprising one or more digital content items that may be visually rendered to provide a visual representation of the electronic document (e.g., on a display or a printed material). In accordance with various implementations of the present disclosure, a document may conform to any suitable electronic file format, such as PDF, DOC, ODT, JPEG, etc. “Document” may represent a financial document, a legal document, or any other document, e.g., a document that is produced by populating fields with alphanumeric symbols (e.g., letters, words, numerals) or images. “Document” may represent a document that is printed, typed, or handwritten (for example, by filling out a standard form). “Document” may represent a form document that has a variety of fields, such as text fields (containing numerals, numbers, letters, words, sentences), graphics fields (containing a logo or any other image), tables (having rows, columns, cells), and so on. Examples of different types of documents that have fields include contracts, invoices, financial documents, business cards, personal identification documents, loan applications, order/discharge documents, accounting documents, reporting documents, patent search reports, various company forms, etc. Fields can be found in various types of documents, such as invoices, financial documents, business cards, personal identification documents, loan applications, etc. Optical character recognition (OCR) of such a document may involve a preliminary step of identifying all the fields contained in the document, which may be performed by neural networks that are trained on a large number (e.g., thousands) of sample documents that include fields. However, such an approach does not provide field detection with adequate accuracy across different types of documents using universal sample documents because the documents may differ significantly. Thus, a large number of sample documents of each type may be needed for training the neural networks. Such methods of field detection require long-term, extensive training with many manual operations, lack flexibility, and pose a potential for disclosure of confidential data. Moreover, these methods also require accurate markup of each document. However, when manual operations are involved to mark up the documents, the human users often omit or incorrectly mark the fields, thus rendering the documents not suitable for being utilized as training samples.
Additionally, to mitigate these incorrect markups, the user may have to redo the mark up or mark up additional documents of the same type to start the training again. In some cases, the user also is not able to perform the mark up correctly because the user may not know where a particular field is located on the document, or the field is not identified with a descriptive word. For example, a user may intend to mark up the “total” field that is expected to be populated with a number. However, if the field is not identified with the word “total,” or the user cannot locate the word “total,” or for other reasons, the user can instead mark up another field containing characters that are visually similar to the expected content of the “total” field, such as, another field containing numbers. Aspects of this disclosure address the above noted and other deficiencies by providing mechanisms for field detection in a document without the need to manually markup an extensive number of documents for training the neural network. The mechanisms can provide for rapid training of a trainable model on a small data set, such as a data set including no more than ten documents of a specific type with marked up fields. Upon training a model for a specific class of documents, the model is used to detect the fields in other user documents of the same class of documents. In one embodiment, aspects of the disclosure provide for training the neural network using a small number of marked-up documents to be used as training documents. These documents may have metadata that identifies one or more document fields based on user markup that indicates location of the respective document fields. The field detection is based on identifying spatial distributions of fields with respect to visual reference elements within the training documents. After images of the documents are received, text from the document images are obtained and various characters, including words, are obtained from the text in the document images. Reference elements on a document image can be used to define the location of the marked up fields. Any structural element that belongs to the document layout can be used as reference element. A reference element can include predefined visual elements, such as, a predefined word (e.g., keywords, custom dictionary words), a predefined graphical element (e.g., a visual divider, a logo) etc. on the document images. Reference elements on the document images can be identified by matching words from a custom dictionary, and/or words that appear on a given document (or in the corpus of the documents) with a frequency that exceeds a predefined threshold frequency. For example, an invoice may include Company Name, Total, Due Date, etc. for reference elements based on the frequency at which these keywords may appear on these types of documents. Locations of various document fields can be defined relative to the reference element. For each field in the training data set, a heat map can be generated with respect to each reference element. “Heat map” refers to a set of numeric elements, such that the value of each element is defined by a certain function computed at the image coordinates reflecting the position of the element. 
In some implementations, the heat map may be represented by a rectangular matrix, each element of which corresponds to a certain pixel in the vicinity of a reference element, such that the value associated with each pixel reflects the number of training documents in which the given field contains this pixel. The numeric values of heat map elements can be color coded for visualization (hence the term); however, this step would be superfluous for neural network training, in which the numeric values, rather than colors, are used. Accordingly, the training phase may involve generating the heat maps of a relatively small set of training documents that are accompanied by metadata (“mark-up”) indicating the positions and names of the document fields. The generated heat maps may then be used for identifying the field positions in other documents. In some implementations, a system operating in accordance with aspects of the present disclosure may identify, within the input document image, a candidate region for each field of interest, based on the heat maps built for this field with respect to one or more reference elements. Each identified candidate region would include the input document image pixels corresponding to heat map elements satisfying a threshold condition (e.g., having their respective values exceeding a threshold, selecting a pre-defined share of pixels having the largest values, etc.). The selected candidate regions may then be treated as the positions of the corresponding fields, i.e., by applying OCR techniques to the image fragments lying within the candidate regions. In some implementations, the extracted content of each document field can be evaluated using BPE (Byte Pair Encoding) tokens, by evaluating the differences (e.g., Euclidian distances) between the BPE token representing the extracted content of a given field of the input document and the BPE tokens computed for the same field in the training documents. A BPE token refers to a numeric vector representing an input text. In some implementations, the vector can be represented by an embedding of an interim representation of the input text, such that the interim representation may utilize an artificial alphabet, each symbol of which can encode a substring of one or more characters of the input text, as described in more detail herein below. The embeddings are generated in such a manner that semantically close inputs would produce numerically close embeddings. Accordingly, if the computed distance between the BPE token representing the content extracted from a candidate field and the BPE token(s) representing the same field in the training data set is less than a threshold, the likelihood that the field is detected correctly is relatively high, and the candidate field may be accepted for information extraction. The techniques described herein allow for automatic field detection in documents using artificial intelligence. The systems and methods described herein represent significant improvements in producing more accurate and efficient field detection in documents. The methods utilize trainable models which can be trained on a small number (e.g., fewer than ten) of sample documents and detect and classify fields with high quality. The methods make it possible to speed up and improve the quality of data validation. In addition, the methods can also provide guidance to human users if the user might have marked up a field inaccurately or missed marking up a field.
The methods allow for identification of erroneous document markup performed by human users, and correction and restoration of missing markup of fields in an effective way. Additionally, the methods allow to select subset of marked-up documents that contain complete and consistent markup that can in turn allow for training additional, more accurate models. Various aspects of the above referenced methods and systems are described in details herein below by way of examples, rather than by way of limitation. FIG.1depicts a high-level component diagram of an illustrative system architecture100, in accordance with one or more aspects of the present disclosure. System architecture100includes a computing device120, a repository160, and a server machine150connected to a network130. Network130may be a public network (e.g., the Internet), a private network (e.g., a local area network (LAN) or wide area network (WAN)), or a combination thereof. The computing device120may perform field detection on a document image. In one embodiment, computing device120may be a desktop computer, a laptop computer, a smartphone, a tablet computer, a server, a scanner, or any suitable computing device capable of performing the techniques described herein. Computing device120may receive one or more images. In an example, image110may be received by the computing device120. Image110may include an image of a document, a document page, or a part of a document page. The document page or the part of the document page depicted in image110may include one or more fields with variable text. In an example, various document fields within the document may need to be detected. Image110may be provided as an input to computing device120. In one embodiment, computing device120may include a field detection engine122. The field detection engine122may include instructions stored on one or more tangible, machine-readable storage media of the computing device120and executable by one or more processing devices of the computing device120. In one embodiment, field detection engine122may generate as output a number of detected fields, content extracted from the detected fields, and/or an output document with a number of detected fields and content corresponding to the detected fields. In one embodiment, field detection engine122may use a trained machine learning model140that is trained to detect fields within image110. The machine learning model140may be trained using training set of images. In some instances, the machine learning model140may be part of the field detection engine122or may be accessed on another machine (e.g., server machine150) by the field detection engine122. Based on the output (e.g., heat maps corresponding to pixels of the image) of the trained machine learning model140, field detection engine122may identify a candidate region in the input image110that is detected as a particular field. The field detection engine122may also extract words belonging to the detected field. Server machine150may be a rackmount server, a router computer, a personal computer, a portable digital assistant, a mobile phone, a laptop computer, a tablet computer, a camera, a video camera, a netbook, a desktop computer, a media center, or any combination of the above. The server machine150may include a field training engine151. 
The machine learning model140may refer to model artifacts that are created by the field training engine151using the training data that includes training inputs and corresponding target outputs (correct answers for respective training inputs). During training, patterns in the training data that map the training input to the target output (the answer to be predicted) can be found, and are subsequently used by the machine learning model140for future predictions. As described in more detail below, the machine learning model140may be composed of, e.g., a single level of linear or non-linear operations (e.g., a support vector machine [SVM]) or may be a deep network, i.e., a machine learning model that is composed of multiple levels of non-linear operations). Examples of deep networks are neural networks including convolutional neural networks, recurrent neural networks with one or more hidden layers, and fully connected neural networks. The machine learning model140may be trained to determine the probability of pixels of images belonging to a specified document field, as further described below. Once the machine learning model140is trained, the machine learning model140can be provided to field detection engine122for analysis of image110. For example, the field detection engine122may request heat maps for a number of keywords in the image110. In some examples, model140may consist of a convolutional neural network. The field detection engine122may obtain one or more outputs from the trained machine learning model140. The output may be a set of hypotheses for a document field location based on heat maps. The repository160may be a persistent storage that is capable of storing image110, heat maps, reference elements and points, document field hypotheses, detected fields and output images, as well as data structures to tag, organize, and index the image110. Repository160may be hosted by one or more storage devices, such as main memory, magnetic or optical storage based disks, tapes or hard drives, NAS, SAN, and so forth. Although depicted as separate from the computing device120, in an implementation, the repository160may be part of the computing device120. In some implementations, repository160may be a network-attached file server, while in other embodiments, repository160may be some other type of persistent storage such as an object-oriented database, a relational database, and so forth, that may be hosted by a server machine or one or more different machines coupled to the via the network130. FIG.2depicts a high level flow diagram of an example method200for field detection in a document, in accordance with one or more aspects of the present disclosure. From a high level, the operation of the system can be logically divided in 2 stages. The first stage is the learning stage220, and the second stage is input document field detection stage230. The learning stage220can receive as input various documents210containing various fields. In the example ofFIG.2, documents210include a bank invoice template212, a savings deposit214, a receipt216, an invoice218, etc. Each of the documents210contain multiple fields. For example, bank invoice template212includes fields in the form of a table212awith two columns and multiple rows, invoice218includes a graphics field containing a logo218a, a text field containing numbers218b, etc. Upon receipt of documents210, in learning stage220each type of documents is processed in order for the system to learn from the markup of fields in these documents. 
One or more models may be derived in the learning stage220for detecting fields in documents. In the input document field detection stage230, the system processes an input document to detect the structure of the input document, detect the field(s) within the input document based on models derived in the learning stage220, and extract the fields with their contents. FIG.3depicts a block diagram of various components of an example system300for field detection in a document, in accordance with one or more aspects of the present disclosure. In some implementations, a user, such as a human user or a computer system user, can identify a small number of documents containing one or more fields. The user can identify each type of document on which field detection is performed using the system300. In an implementation, the user can mark up fields on the identified documents. To mark up a field, a user can draw lines, circles, boxes, rectangles or other shapes, highlight, or otherwise create markings on or surrounding a portion of a document to designate the area as the identified field. The user can mark up multiple fields that need to be trained for detection on each document, such as fields for “Total” and “Address.” In addition, the user markup can also include identification of the marked up fields (e.g., “Date,” “Total,” etc.). A “small number,” as used herein, can represent 3-5 documents, for example. In an example, no more than ten documents of a specific type are marked up by a user. The user can mark up all fields in the document, or mark selective fields on the document. Each field is identified and marked independently of other fields on the document. In some implementations, the user can then upload the identified documents to the system300as electronic documents. Electronic documents can be scanned images, photographed images, or any other representation of a document that is capable of being converted into a data form accessible to a computer. The uploaded documents are referred to herein as document images310. In an implementation, the user can upload documents that already include marked-up fields. In another implementation, the user can electronically mark up the document images310using a user interface of the system300. For example, using a user interface, the user can indicate (e.g., by clicking on, dragging on, or using other gestures, etc.) the portion of a document comprising a required word, numbers, etc., and further the system can mark up the surrounding boundaries of the field automatically. In some implementations, received documents can be automatically grouped into various preliminary clusters such that each cluster has similar documents, which in turn can help the user to mark up the fields correctly. System300associates each document image of document images310with metadata identifying a particular document field based on the markup in the document image. In some examples, the metadata identifies a document field containing variable text. In some implementations, system300categorizes each document image310into a particular document class using document classification unit320. For example, the document images may be classified into a particular class based on similarity of document attributes. In one example, document images may be classified based on the vendor name associated with the document. For each class, a small selection of document images (e.g., 2-6 document images) is collected in system300.
In some implementations, the word selection sub unit330of system300is a submodule that uses a heuristic algorithm to analyze document text. The text can be analyzed for selection of words on the document layout based on character types, such as letters, numerals, separators, etc. Heuristics can involve problem-solving by experimental and/or trial-and-error methods. A typical heuristic algorithm is derived by using some function that is included in a system for searching a solution, often using decision trees. The algorithm can include steps for adjusting weights of branches of the decision tree based on the likelihood of a branch to lead to the ultimate goal node. Here, heuristics can be used to separate lines of text into groups of the same type of characters. In an example, a cascade classification of text fragments in a document can be represented using a graph. The text fragments are nodes of the graph. The nodes of the graph are joined by edges (logical links between the text fragments). The graph can be analyzed and modified to further break down text fragments that were initially identified in each node. For example, a text fragment containing both letters and numbers can be split into two new nodes to separate the letters from the numbers. In an implementation, unit330obtains text from the document image310and splits the document text into continuous subsequences of characters. The character subsequences may belong to the same character type. For example, the character types can include letters, numbers, and separators. The sub unit330can separate the text into individual words. The sub unit330can obtain all possible words in the document image310. In some implementations, system300uses fields component classification unit340to classify each word of the document image310based on the likelihood of the word being included within a particular field. In some implementations, reference elements on a document image are used to define the location of a document field. Any structural element that belongs to the document layout can be used as a reference element. A reference element can include a predefined word (e.g., a keyword), a predefined graphical element (e.g., a visual divider, a logo), etc. In some implementations, a set of reference elements can be obtained using the training sample of document images. In some examples, a specific dictionary of “frequency words” can be formed based on the training documents. Frequency words are a list of words grouped by frequency of occurrence within a corpus of documents (e.g., the training sample documents). In an example, the frequency words can be grouped as a ranked list. In some examples, reference elements on the document images can be identified by matching the frequency words that appear on one or more document images with a frequency that exceeds a predefined threshold frequency. In some examples, reference elements can be identified using custom dictionaries of words, various word-character separators, stamps, and other pre-defined text and/or visual elements in the document image. A reference element can act as a focus point with respect to which the location of a document field is defined. The center of a rectangular area that encompasses the frequency word on a document image, for example, can be identified as the location of the reference element in that document image. In other examples, any other location relative to the reference element can be designated as the location of the reference element.
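As one illustration of how reference elements might be located, the following sketch matches OCR word boxes against a small frequency-word list and takes the center of each matched box as the reference element location; the word-box format and the sample keyword list are assumptions for the example only.

```python
# Illustrative sketch of locating reference elements from OCR output, assuming
# each OCR word is given as (text, x, y, width, height); the input format and
# the sample frequency-word list are assumptions for the example only.
from typing import Dict, List, Tuple

OcrWord = Tuple[str, int, int, int, int]  # (text, x, y, width, height)

FREQUENCY_WORDS = {"total", "date", "invoice", "balance"}  # hypothetical dictionary

def reference_element_locations(ocr_words: List[OcrWord],
                                keywords=FREQUENCY_WORDS) -> Dict[str, Tuple[float, float]]:
    """Map each matched keyword to the center of its bounding rectangle."""
    locations = {}
    for text, x, y, w, h in ocr_words:
        normalized = text.strip().lower().rstrip(":")
        if normalized in keywords:
            locations[normalized] = (x + w / 2.0, y + h / 2.0)
    return locations

if __name__ == "__main__":
    words = [("Total:", 400, 700, 60, 20), ("$1,000", 480, 700, 70, 20)]
    print(reference_element_locations(words))  # {'total': (430.0, 710.0)}
```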
In the example ofFIG.2, a location (e.g., center) of rectangular area218csurrounding the word “total” in the document218can be used as the location of the reference element “total.” In some implementations, a document field's location (also referred to herein as “field region”) can be identified relative to the reference element. In the example ofFIG.2, a document field location (e.g., location of numbers218b) corresponding to the reference element “total” can be identified relative to the location of the reference element “total” in the document. For each document image in the training set of document images (e.g., document images310), the location of the document field can be obtained based on the markup of the training sample document image. In an implementation, system300associates metadata with a particular document field based on the user markup on the document. The metadata can define the location of the document field relative to the reference element. For example, the metadata for a particular marked up document image can indicate that the location of a document field is 50 pixels to the right from the location of the reference element on the document image. The document field location can be expressed in terms of a range of locations, number of pixels, etc. The document field location can include a region on the document image. A region can be an area within the document image. A region can have a specific geometric shape, but not limited to, for example, a rectangle, quadrilateral, ellipse, circle, other polygon, etc. The document field location can refer to a region on the document image contained within the document field. In some implementations, a heat map is used to determine the likelihood of a word in the document image to be included in a particular field. For each given field in the training data set, a heat map can be generated with respect to each reference element. “Heat map” refers to a set of numeric elements, such that the value of each element is defined by a certain function computed at the image coordinates reflecting the position of the element. In some implementations, the heat map may be represented by a rectangular matrix, such as a table, a grid, etc. Each element of the heat map corresponds to a certain pixel in the vicinity of a reference element, such that the value associated with each pixel reflects the number of training documents in which the given field contains this pixel. Different data structures can be used to represent a heat map. For example, a heat map can be represented using histograms, charts, tables with cells, graphs, plots, etc. A heat map is a data visualization technique that shows magnitude of a phenomenon using color in two dimensions. The numeric values of heat map elements can be color coded for visualization (hence, the term), however, this step would be superfluous for neural network training, in which the numeric values, rather than colors, are used. In some implementations, a heat map can be generated for each reference element in the set of training document images310. The heat map is generated using the location of a document field relative to the reference element based on the metadata for the training document images. For example, a location for a document field can be represented by particular pixels on the image included within a box surrounding the document field, as identified by the markup on the document image. The heat map is represented by a data structure that includes a plurality of heat map elements. 
For example, a heat map can be created by dividing an image into a rectangular grid with specified cell size in pixels. In this example, the grid represents the heat map data structure and the cells represents the heat map elements. The image used for the heat map can correspond to each of the training document images, and each of the plurality of heat map elements can correspond to each of a number of document image pixels of the corresponding training document image. In an example, for each pair of reference element and document field location in the training set of document images, the cell is filled with a value that equals to the fraction of the area occupied by the region for the document field contained within the cell. In an implementation, for a chosen reference element for which a heat map is being built, a relative location of a field corresponding the reference element is determined in each of the training document images. For example, in a hypothetic first training document image, a numeric value “$1000” can be found 50 pixels to the right of the location of the reference element “total.” For the heat map data structure for the reference element “total,” it is determined whether each document image pixel in the first image corresponding to each heat map element (e.g., a cell) is included into a document field location as identified by the markup on the document image. If any document image pixel is fully contained within the document field location (e.g., the region covered by the document field), the heat map element corresponding to that document image pixel is assigned a value of “1.” For example, the value of a cell is set to “1” when the cell corresponds to an image pixel in the document that is contained into the marked up portion of the document image covering the region for “$1000.” The value of a cell is set to “0” when it corresponds to an image pixel in the document that is not occupied by the field region “$1000.” In one implementation, the value set in the cell indicates the number of document images in which the field contains a pixel corresponding to the heat map element. Thus, the heat map element stores a counter of the number of document images in which the document field contains a document image pixel associated with the heat map element. System300continues to update the heat map for the chosen reference element using the next document image in the training set of document images. Values of the heat map elements are updated to add the new values reflecting the next document image. For example, if the value of a particular cell in the heat map was already set to “1,” and the cell corresponds to an image pixel in the next document that is contained within the field region “$1000,” then the value of the cell is incremented by a value of “1,” to equal to a value of “2.” System300continues to aggregate the values of the heat map element for each of the document images in the training set of document images to identify the image pixels contained within a particular document field. In some implementations, the final histogram of the relative location for the selected reference element is considered to be the arithmetic mean of values in respective cells of the heat maps. In some implementations, system300can update the heat map for the chosen reference element to include heat map element values that relate to another document field. 
That is, for the chosen reference element for which the heat map is being built, the location of a different field is identified from all of the training document images. The location is identified relative to the chosen reference element for the heat map. For example, a location of the “invoice date” field relative to the reference element “total” can be identified in the heat map, by setting the value of the heat map elements to “1” where the heat map elements correspond to the image pixels that are contained in the “invoice date” field in a first training image. Similarly, values of the heat map elements are aggregated for each additional document image in the training set for the location of the “invoice date” field relative to the reference element “total.” Thus, a heat map for a chosen reference element can identify potential locations of each field of the document with respect to the chosen reference element. Accordingly, the training phase may involve generating the heat maps of a relatively small set of training documents that are accompanied by metadata indicating the locations of the document fields. The generated heat maps may later be used for identifying the field locations in other documents. FIG.4Ashows an example heat map401for a chosen reference element. The reference element410is a predefined keyword “Date” that is found in the training set of document images. Heat map401identifies locations of various document fields relative to element410. Reference element410is shown with dotted lines because the reference element is not part of the grid data structure420that represents the heat map401. Rather, the reference element410represents the location on the grid that corresponds to the location of the keyword “date” on the training set of document images. Heat map elements, such as cells431and432, correspond to image pixels in the training document images that are contained within various document fields. In an example, cell432is shown to be darker in color than cell433, which indicates that the counter for cell432has a higher value than the counter for cell433, which in turn indicates that a higher number of document images have image pixels corresponding to cell432contained within the respective field than cell433. Similarly,FIG.4Bshows an example heat map402for a chosen reference element411with the keyword “Total” and locations of various fields identified with respect to reference element411indicated by the shaded cells. In this example, cell442is shown to be darker than cell443.FIGS.4A and4Bshow a grid data structure being used for the heat maps depicted therein. In an example, the grid size is a hyperparameter, such as 64×64 pixels. The hyperparameter can also be of a different value, such as 32×32 px, 16×16 px, etc. The hyperparameter can be selected from values that depend on, for example, the document itself (number of marked-up fields, text size, etc.), document layout, etc. In some implementations, system300uses heat map attributes to classify each possible word found in the document images310for the likelihood of the word being contained in a particular field region. Classification is made into positive and negative examples. Positive examples are words that are included in the particular field region as defined by the field coordinates (e.g., x axis, y axis) in the document. Negative examples are all words that are not included in the particular field region.
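A minimal sketch of this heat map accumulation, assuming NumPy and that each marked-up training document supplies a field rectangle expressed relative to the reference element, might look as follows; the grid and cell sizes, names, and data layout are illustrative choices rather than prescribed values.

```python
# Minimal sketch of accumulating a heat map for one (reference element, field)
# pair, assuming each training document contributes a field rectangle expressed
# relative to the reference element location; the 64x64 grid and 8-pixel cells
# are illustrative hyperparameter choices.
import numpy as np

GRID_H, GRID_W = 64, 64          # heat map grid (hyperparameter)
CELL_PX = 8                      # pixels per cell (hyperparameter)

def add_field_to_heatmap(heatmap: np.ndarray, field_box_rel: tuple) -> None:
    """Increment every cell covered by the field rectangle.

    `field_box_rel` is (left, top, right, bottom) in pixels, measured relative
    to the reference element, which is placed at the center of the grid.
    """
    cx, cy = (GRID_W // 2) * CELL_PX, (GRID_H // 2) * CELL_PX
    left, top, right, bottom = field_box_rel
    left, right = left + cx, right + cx   # shift so the reference element is centered
    top, bottom = top + cy, bottom + cy
    col0, col1 = max(0, int(left // CELL_PX)), min(GRID_W - 1, int(right // CELL_PX))
    row0, row1 = max(0, int(top // CELL_PX)), min(GRID_H - 1, int(bottom // CELL_PX))
    heatmap[row0:row1 + 1, col0:col1 + 1] += 1

def build_heatmap(field_boxes_rel) -> np.ndarray:
    """Aggregate one heat map over all marked-up training documents."""
    heatmap = np.zeros((GRID_H, GRID_W), dtype=np.float32)
    for box in field_boxes_rel:
        add_field_to_heatmap(heatmap, box)
    return heatmap / max(len(field_boxes_rel), 1)  # mean over training documents

# Example: marked-up "total amount" boxes to the right of the keyword "total"
# in three training documents.
example = [(40, -10, 200, 10), (45, -12, 210, 8), (38, -8, 190, 12)]
total_heatmap = build_heatmap(example)
```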
The locations of the particular field regions identified in the heat maps relative to the reference elements are used as localizing features of the hypotheses generated by the fields component classification unit340. At the output of the unit340, one or more sets of field component hypotheses are generated. The hypotheses can indicate a probable location of a document field within a document relative to a reference element. The probable location is determined based on the positive examples identified using the heat maps. In some implementations, system300evaluates the internal format of the extracted content of the identified fields in the training set of document images310using BPE (Byte Pair Encoding) tokens. A BPE token refers to a numeric vector representing an input text. In some implementations, the vector can be represented by an embedding of an interim representation of the input text, such that the interim representation may utilize an artificial alphabet, each symbol of which can encode a substring of one or more characters of the input text, as described in more detail herein below. The embeddings are generated in such a manner that semantically close inputs would produce numerically close embeddings. FIG.5depicts an illustrative example of internal field format evaluation500, in accordance with one or more aspects of the present disclosure. In some implementations, to evaluate the internal format of the detected field, system300uses BPE tokens. In conventional systems, BPE tokens are usually used in natural language processing tasks. Aspects of the present disclosure use BPE tokens for the evaluation of internal field format to more accurately and confidently detect document fields. Usage of BPE tokens for the evaluation results in a significant improvement in the field detection mechanism by increasing the quality and speed of field detection in documents. In some implementations, as part of the internal field format evaluation500, BPE tokenization510is used to obtain features describing the internal format of the content (e.g., variable text, words) of the detected fields on the document images. System300can use a mechanism for tokenizing strings using a multilingual dictionary of BPE tokens520. Dictionary520can include pre-trained embeddings530and a pre-trained dictionary of specific words listed by frequency540. Both embeddings530and dictionary540are pre-trained on the body of the text fields of an existing markup database. In an example, the arithmetic mean of the embeddings of the tokens included in a text string is taken as the feature vector550of that string. In some implementations, the BPE tokens are used for the content of the detected fields in the training dataset (e.g., training document images). As noted above, an artificial alphabet of symbols can be derived for use as BPE tokens (e.g., encodings). The alphabet includes individual characters and tokens of two characters, three characters, etc. In an example, the alphabet can include a thousand or more symbols representing different combinations of characters. Each word, or the characters in the word, in the training document images can be represented using symbols from the derived alphabet to derive tokenized content. BPE embeddings, which are vector representations of the BPE tokens, are then derived.
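The following toy sketch illustrates BPE-style tokenization over a small artificial alphabet and the mean-of-embeddings feature vector described above; the tiny vocabulary and four-dimensional vectors are invented for illustration, whereas a real system would rely on pre-trained multilingual BPE tokens and embeddings.

```python
# Toy sketch of BPE-style tokenization and the mean-of-embeddings feature
# vector; the vocabulary and 4-dimensional embeddings below are invented for
# illustration only.
import numpy as np

# token -> embedding; greedy longest-match tokenization is used below.
TOKEN_EMBEDDINGS = {
    "00":  np.array([0.9, 0.1, 0.0, 0.2]),
    "0":   np.array([0.8, 0.2, 0.0, 0.1]),
    "1":   np.array([0.7, 0.3, 0.1, 0.0]),
    "2":   np.array([0.6, 0.4, 0.1, 0.0]),
    "$":   np.array([0.1, 0.9, 0.0, 0.3]),
    "<unk>": np.zeros(4),
}

def bpe_tokenize(text: str, vocab=TOKEN_EMBEDDINGS):
    """Greedy longest-match tokenization over the artificial alphabet."""
    max_len = max(len(t) for t in vocab if t != "<unk>")
    tokens, i = [], 0
    while i < len(text):
        for size in range(min(max_len, len(text) - i), 0, -1):
            piece = text[i:i + size]
            if piece in vocab:
                tokens.append(piece)
                i += size
                break
        else:
            tokens.append("<unk>")
            i += 1
    return tokens

def feature_vector(text: str, vocab=TOKEN_EMBEDDINGS) -> np.ndarray:
    """Arithmetic mean of the token embeddings (feature vector of the string)."""
    vectors = [vocab[t] for t in bpe_tokenize(text, vocab)]
    return np.mean(vectors, axis=0)

# "1000" and "2000" yield numerically close feature vectors.
print(np.linalg.norm(feature_vector("1000") - feature_vector("2000")))
```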
Words in the set of training documents that are semantically closer to each other (e.g., “1000” and “2000”) would produce numerically close embeddings (e.g., Euclidian distance between the two vectors are less than a predefined value). For example, in one document the “total” field may contain “1000” and in another document the “total” field may contain “2000.” When applying the BPE tokens to these values, the BPE embeddings would be close to each other. As a result, it can be confirmed that the values are correctly identified as the content of the “total” field. When the system processes an input document for detecting fields, an aggregate (e.g., mean, average, etc.) value of the BPE tokens for the values “1000” and “2000” of the training document is taken in consideration for comparison with values in the detected fields of the input document. If a detected field contains a value (e.g., “2500”) whose BPE embedding is close to the aggregate value of the BPE in the training documents, the field detection can be confirmed with higher confidence. A threshold range can be defined in order to determine whether a value in the input document is close to the aggregate value. Additionally, if a detected field contains multiple words, BPE tokens can be used in a similar way to compare with the reference embedding for the detected field. Referring back toFIG.3, in some implementations, system300applies component links classification unit350to the resulting hypotheses from unit340. For each pair of components (e.g., words), unit350calculates an estimate of the pair's joint membership in the hypothesized field of the document. For example, a hypothesized field may include multiple words, such as, an address field that includes a street number, street name, city, state, country, etc. Additionally, the possible locations of the fields identified in the heat maps can include multiple words. Accordingly, each hypotheses includes a sequence of one or more words from the multiple words matching the locations of the field. In some implementations, system300applies hypotheses filtration and evaluation unit360to the resulting data from unit350. Unit360uses additional field properties for filtering and evaluation of hypotheses obtained from unit350. For example, additional properties can include multi-page indicator (e.g., to indicate a field is on more than one page of the document), one-sided indicator (e.g., to indicate that content is only on one side of the document), two-sided indicator (e.g., to indicate that content is on both sides of the document) maximum and minimum geometric field size, and other attributes, either alone or in combination. In an implementation, the parameters can be set by a user of the system300. For example, the user can set a parameter for the content of a field to be multi-line or single line. In an example, the user can set a parameter to indicate that a “date” or “total” field in an invoice document can only be a single-line. In another implementation, the system300can set a parameter associated with a type of document. For example, the system300can set parameters such as geometric field parameters, threshold values, etc. These parameters can be defined based on a heuristic method. A typical heuristic algorithm is derived by using some function that is included in a system for searching a solution, often using decision trees. 
The algorithm can include steps for adjusting weights of branches of the decision tree based on the likelihood of a branch to lead to the ultimate goal node. Various combinations of parameters or methods can also be used in system300. Moreover, system300can use linear classifiers based on logistic regression (e.g., trained classifiers based on gradient boosting) as the component classifier and the hypothesis classifier for the document field. In some implementations, system300uses hypotheses quality evaluation and analysis unit370to evaluate the overall quality of the obtained hypotheses. In some examples, the hypotheses are obtained from unit360. In other examples, the hypotheses may be obtained from another unit of system300. Various threshold may be defined in the system to assess the quality of the hypotheses. For example, a set of words can be identified as being in a “confident” group when the probability of the set of words of belonging to a field at the classification output is higher than a specified threshold. For example, a specified threshold can be 0.5, 0.4, or another value that has been determined by an expert method such as test sample document evaluation, or a value that is a single pre-set hyperparameter (e.g., a parameter that is derived from a prior distribution). Hypothesis quality evaluation can involve identifying a “confident” word and words that are “reliably” linked to the confident word. A starting component from the hypotheses is selected for evaluation and a search area around the hypotheses is selected for limiting a search for other words. For example, one or more “confident” words are selected to start building one or more hypotheses. The search area is considered to be a rectangle describing the word with borders that fall behind the distance of a defined maximum field size from the borders of the starting component. Then a “confident chain” of hypotheses is assembled by including all words “reliably” associated with the starting component that lie within the search area. Then a final one or more hypotheses are assembled and evaluated, which include all words from the search area that are “reliably” associated with all components of the “confident chain.” Whether the words are “reliably” associated can be determined using a heuristically configurable threshold, for example, identifying what value above the configured threshold is accepted as reliably linked, what value below the specified threshold is accepted as an unreliable link, etc. For example, the specified threshold can be obtained by training a large sample of data received from the client by a suitable machine learning method and by testing its value by cross validation or other suitable method. Additionally, further analysis of the documents can be performed by comparing the fields of the hypothesis having the highest quality to other hypotheses. In some implementations, system300uses the field detection and retrieval unit380to detect and classify fields on other input document(s)380. System300may detect fields according to the selected hypotheses of potential fields with a quality value that meets a defined quality threshold and/or obtained analysis results on internal format of the content within the potential fields. For example, when system300receives an input document380for field detection and retrieval, system300can detect fields on the input document380, classify the fields, and extract content within the detected fields. 
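The "confident chain" assembly described above can be sketched roughly as follows; the word boxes, pairwise link scores, and thresholds are illustrative assumptions rather than outputs of the actual classifiers.

```python
# A simplified sketch of assembling a "confident chain": start from a confident
# word, then add words inside the search area that are reliably linked to it.
from typing import Dict, List, Tuple

Box = Tuple[int, int, int, int]          # (left, top, right, bottom)

def within_search_area(box: Box, seed: Box, max_field_size: int) -> bool:
    """Search area = rectangle around the seed word grown by the max field size."""
    l, t, r, b = seed
    area = (l - max_field_size, t - max_field_size, r + max_field_size, b + max_field_size)
    return area[0] <= box[0] and area[1] <= box[1] and box[2] <= area[2] and box[3] <= area[3]

def build_hypothesis(words: Dict[str, Box],
                     field_prob: Dict[str, float],
                     link_score: Dict[Tuple[str, str], float],
                     confident_thr: float = 0.5,
                     reliable_thr: float = 0.6,
                     max_field_size: int = 120) -> List[str]:
    # 1. Start from a "confident" word with the highest field probability.
    seed = max(words, key=lambda w: field_prob.get(w, 0.0))
    if field_prob.get(seed, 0.0) < confident_thr:
        return []
    chain = [seed]
    # 2. Add every word in the search area that is reliably linked to all chain members.
    for w, box in words.items():
        if w == seed or not within_search_area(box, words[seed], max_field_size):
            continue
        if all(link_score.get((c, w), link_score.get((w, c), 0.0)) >= reliable_thr for c in chain):
            chain.append(w)
    return chain

words = {"123": (400, 50, 440, 70), "Main": (445, 50, 500, 70), "Total:": (30, 300, 90, 320)}
probs = {"123": 0.8, "Main": 0.55, "Total:": 0.1}
links = {("123", "Main"): 0.9}
print(build_hypothesis(words, probs, links))   # ['123', 'Main']
```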
In some examples, system300can automatically upload the document image with the detected fields and retrieved content to a data store and/or a software portal.FIG.6depicts an illustrative example of an input document600with detected fields610, in accordance with one or more aspects of the present disclosure. System300detects the fields610based on the hypotheses derived using the training set of document images. Referring back toFIG.3, in some implementations, system300may receive an input document image and may identify, within the input document image, a candidate region for each field of interest based on the heat maps built for this field with respect to one or more reference elements. Each identified candidate region would include the input document image pixels corresponding to heat map elements satisfying a threshold condition (e.g., having their respective values exceeding a threshold, selecting a pre-defined share of pixels having the largest values, etc.). The selected candidate regions may then be treated as the positions of the corresponding fields, e.g., by applying OCR techniques to the image fragments lying within the candidate regions. In some examples, two or more heat maps can be used for detection of the fields, where each heat map is built for a different reference element. For example, to detect a location of a specific field (e.g., a field corresponding to the "Invoice #" reference element, referred to as the "Invoice #" field hereinafter) on a new input document image380, keywords from the document image are first identified using a dictionary of keywords. For example, identified keywords can include "Date," "Total," "Balance Due," "Due Date," etc. System300selects the heat maps for each of the keywords and identifies the probable position for the specific field within the document image. For example, system300selects a heat map for reference element "Date," and identifies the probable position of the specific field (e.g., the "Invoice #" field) relative to the reference element "Date." The probable position of the field is obtained based on the hypotheses generated based on the values of heat map elements being above threshold values. Similarly, system300selects a heat map for reference element "Total," and identifies the probable position of the specific field (e.g., the "Invoice #" field) relative to the reference element "Total," and so on, for heat maps for the different keywords found in the input document. System300then compares the heat maps, and identifies intersections of the probable spots identified using the different heat maps. System300selects one or more spots on the input document which correspond to the maximum number of intersecting heat maps and defines the region including the spots as the candidate region for the specific field (e.g., the "Invoice #" field) on the input document image. A threshold number can be specified for the number of intersecting spots. If the number of intersecting spots on the heat maps meets or exceeds the threshold number, the spots are selected to be included in the candidate region. Accordingly, the candidate region is detected to be the specific field on the input document. In some implementations, content extracted from each detected document field can be evaluated using BPE tokens, by evaluating the differences (e.g., Euclidian distances) between the BPE token representing the extracted content of a given field of the input document and the BPE tokens computed for the same field in the training documents.
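The intersection-based selection of a candidate region described above can be illustrated with a short sketch; the numpy arrays stand in for per-reference-element heat maps, and the thresholds are assumptions for this example only.

```python
# An illustrative sketch of combining several heat maps, each built relative to a
# different keyword found on the input page, and keeping the pixels covered by at
# least a threshold number of maps as the candidate region.
import numpy as np

def candidate_region(heat_maps, per_map_threshold=0.5, min_intersections=2):
    """Return a boolean mask of pixels supported by enough intersecting heat maps."""
    votes = np.zeros(heat_maps[0].shape, dtype=int)
    for hm in heat_maps:
        votes += (hm >= per_map_threshold).astype(int)   # "probable spot" per map
    return votes >= min_intersections

# Toy 4x4 "heat maps" built for the reference elements "Date" and "Total".
hm_date  = np.array([[0, 0, .8, .9], [0, 0, .7, .8], [0, 0, 0, 0], [0, 0, 0, 0]])
hm_total = np.array([[0, 0, .6, .7], [0, 0, 0, .9], [0, 0, 0, 0], [0, 0, 0, 0]])
mask = candidate_region([hm_date, hm_total])
print(np.argwhere(mask))   # pixels where both maps agree, e.g. the "Invoice #" area
```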
If the computed distance between the BPE token representing the content extracted from a candidate field and the aggregate (e.g., mean, average) of the BPE token(s) representing the same field in the training data set is less than a threshold, the likelihood that the field is detected correctly is relatively high, and the candidate field may be accepted for information extraction. In some implementations, outputs of various modules can be connected for a priori document analysis (e.g., documents of such types as invoices, tables, receipts, key-value, etc.). Custom rules can also be used that describe knowledge about the nature of the input document (for example, country code, page number, and the like). In some implementations, after receiving a large set of documents (e.g., several thousand documents), system300can repeat the training process while taking into account errors identified in the field detection process. This can further improve the quality of the field detection. In some implementations, system300can determine the accuracy of the user markup on the training documents and correct any inaccuracy that is detected. Documents with custom field markup are received as training input. The markup in the batch can be complete (correct), partial, inconsistent (the same fields are marked in different places), or erroneous. This markup represents exactly the markup that the user performed. For each marked field, possible stable combinations of the relative position of other fields are detected based on the markup of other fields, the search for these fields by the system, and various keywords (frequency words that are included in the field region). The relative position of fields can be determined by the absolute location (e.g., as it relates to the document the field is on, such as a line number or pixel identification on the document) or relative location (e.g., as compared to a particular element on the document, such as a "total" field being to the right of the "date" field by 100 pixels), or the zone (e.g., range) of acceptable location (distribution) of certain fields or keywords (e.g., an indication that a "client number" field must always be to the left of the "client name" field and no further than 100 pixels away, otherwise it is considered to be not a value for the field). The fields for which there are stable combinations of other fields and keywords, and for which these combinations are repeated or correlated from document to document, are considered stable and probably correctly marked. Fields for which no stable regularities are found are considered either incorrectly marked or singly marked. Fields of the same type (for example, "total") with different stable structures or combinations on different sets of documents are either considered inconsistent (if the documents are of the same cluster or vendor) or reveal heterogeneity of the documents on which they are located. Thus, the system can verify the correctness of the received markup and predict markup with a high confidence level when the system is first started with a small number of documents necessary for starting the training of the system, assuming that the system contains a universal pre-trained markup machine learning model containing standard rules regarding the intended types of user documents.
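One of the relative-position rules mentioned above (a "client number" field that must lie to the left of the "client name" field and no further than 100 pixels away) can be checked with a sketch like the following; the field boxes and the rule encoding are illustrative assumptions rather than the system's mined combinations.

```python
# A hedged sketch of checking user markup against one stable relative-position rule.
from typing import Dict, Optional, Tuple

Box = Tuple[int, int, int, int]   # (left, top, right, bottom)

def satisfies_left_of(a: Optional[Box], b: Optional[Box], max_gap: int = 100) -> bool:
    """True if box `a` ends to the left of box `b` and the horizontal gap is small."""
    if a is None or b is None:
        return False
    return a[2] <= b[0] and (b[0] - a[2]) <= max_gap

def markup_looks_consistent(doc_fields: Dict[str, Box]) -> bool:
    # One illustrative stable combination; a real system would mine many of these
    # from repeated co-occurrences across the training documents.
    return satisfies_left_of(doc_fields.get("client number"), doc_fields.get("client name"))

good_doc = {"client number": (40, 200, 120, 220), "client name": (150, 200, 320, 220)}
bad_doc  = {"client number": (400, 200, 480, 220), "client name": (150, 200, 320, 220)}
print(markup_looks_consistent(good_doc))   # True  -> probably correctly marked
print(markup_looks_consistent(bad_doc))    # False -> flag for review
```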
Further, when collecting user markup statistics, the model is trained on user documents in the opposite direction: knowing stable combinations of fields and keywords, the system can identify areas of possible location of unmarked fields or incorrectly marked fields and give the user hints. For example, the system can provide hints on how to mark up a particular document correctly, or upload a selection of documents where the markup is clearly incorrect and needs to be corrected. FIG.7depicts a flow diagram of one illustrative example of a method for segmentation of a document into blocks of various types, in accordance with one or more aspects of the present disclosure. Method700and/or each of its individual functions, routines, subroutines, or operations may be performed by one or more processors of the computer system (e.g., example computer system800ofFIG.8) executing the method. In certain implementations, method700may be performed by a single processing thread. Alternatively, method700may be performed by two or more processing threads, each thread executing one or more individual functions, routines, subroutines, or operations of the method. In an illustrative example, the processing threads implementing method700may be synchronized (e.g., using semaphores, critical sections, and/or other thread synchronization mechanisms). Alternatively, the processing threads implementing method700may be executed asynchronously with respect to each other. Therefore, whileFIG.7and the associated description list the operations of method700in a certain order, various implementations of the method may perform at least some of the described operations in parallel and/or in arbitrary selected orders. In one implementation, the method700may be performed by one or more of the various components ofFIG.1, such as field detection engine122, field training engine151, etc. At block710, the computer system implementing the method may receive a training data set. The training data set may comprise a plurality of document images. Each document image of the plurality of document images may be associated with respective metadata identifying a document field. The document field may contain characters, such as variable text. For each document image of the plurality of document images, the metadata may identify a marked up document field location corresponding to the document field. At block720, the computer system may generate a first heat map. The heat map may be represented by a data structure (e.g., a grid, a plot, etc.). The data structure may include a plurality of heat map elements (e.g., cells) corresponding to a plurality of document image pixels. In some examples, each heat map element stores a counter. The counter may indicate a number of document images in which the document field contains a document image pixel associated with the heat map element. At block730, the computer system may receive an input document image. At block740, the computer system may identify a candidate region within the input document image. The candidate region may comprise the document field. In some examples, the candidate region comprises a plurality of input document image pixels. In some examples, the input document image pixels correspond to heat map elements satisfying a threshold condition. In some examples, the candidate region is identified using a plurality of heat maps. The plurality of heat maps may comprise the first heat map and one or more additional heat maps.
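A minimal sketch of the heat map generation of block720, in which each element counts the number of training images whose marked-up field covers the corresponding pixel, might look as follows; the image size and field boxes are assumptions made for the example.

```python
# Accumulate a heat map whose cells count, per pixel, how many training images
# had the marked-up field covering that pixel.
import numpy as np

def build_heat_map(image_shape, field_boxes):
    """field_boxes: one (left, top, right, bottom) box per training image,
    already expressed relative to a chosen reference element."""
    heat = np.zeros(image_shape, dtype=int)
    for left, top, right, bottom in field_boxes:
        heat[top:bottom, left:right] += 1       # this image "votes" for these pixels
    return heat

# Three training images with slightly different positions of a "total" field.
boxes = [(10, 5, 30, 9), (12, 5, 32, 9), (11, 6, 31, 10)]
heat_map = build_heat_map((20, 40), boxes)
print(heat_map.max())                 # 3: pixels covered by the field in every image
print((heat_map >= 2).sum())          # size of the region supported by most images
```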
Each of the plurality of heat maps identifies a potential document field location corresponding to the document field. Each of the plurality of heat maps identifies the potential document field location relative to a respective reference element in each of the plurality of heat maps. In some examples, the respective reference element comprises a predefined word (e.g., a keyword such as "Date"), or a predefined graphical element (e.g., a visual divider, a logo, etc.). Additionally, in the training stage, the computer system may extract the content of each training document image where the content is included in the potential document field location (e.g., the content "1000," "2000," and "1500," respectively, in each document where the content is contained within a potential location for a "total" field). The computer system may then analyze the content of each document image using Byte Pair Encoding (BPE) tokens. In an implementation, to analyze the content, the computer system may represent the content of each document image using BPE tokens to derive tokenized content for each document image. The computer system may generate vector representations (e.g., BPE embeddings) of the tokenized content for each document image. The computer system may calculate a distance (e.g., a Euclidian distance) between a pair of embeddings (e.g., embeddings representing "1000" and "2000") from two document images of the plurality of document images. If it is determined that the distance is less than a predefined value, the computer system indicates that the potential document field location is likely to be correct. In the input document field detection stage, when a candidate field is detected (e.g., "total") on an input document using the trained model, the content (e.g., "2899") of the detected field can be extracted, and the BPE embeddings of the extracted content can be generated. An aggregate value of the BPE embeddings of the content (e.g., the content "1000," "2000," and "1500" in the training documents) of the field in the set of training documents can be calculated. If the computed distance between the BPE token representing the content extracted from the detected field on the input document and the aggregate BPE token(s) representing the same field in the training data set is less than a threshold, then the likelihood that the field is detected correctly is relatively high, and the candidate field may be accepted as the detected field and selected for information extraction. FIG.8depicts an example computer system800which can perform any one or more of the methods described herein, in accordance with one or more aspects of the present disclosure. In one example, computer system800may correspond to a computing device capable of performing method700ofFIG.7. The computer system800may be connected (e.g., networked) to other computer systems in a LAN, an intranet, an extranet, or the Internet. The computer system800may operate in the capacity of a server in a client-server network environment. The computer system800may be a personal computer (PC), a tablet computer, a set-top box (STB), a personal digital assistant (PDA), a mobile phone, a camera, a video camera, or any device capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that device.
Further, while only a single computer system is illustrated, the term “computer” shall also be taken to include any collection of computers that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methods discussed herein. The exemplary computer system800includes a processing device802, a memory804(e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM)), and a data storage device818, which communicate with each other via a bus830. Processing device802represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device802may be a complex instruction set computing (CISC) microprocessor, reduced instruction set computing (RISC) microprocessor, very long instruction word (VLIW) microprocessor, or a processor implementing other instruction sets or processors implementing a combination of instruction sets. The processing device802may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), network processor, or the like. The processing device802is configured to execute instructions for performing the operations and steps discussed herein. The computer system800may further include a network interface device822. The computer system800also may include a video display unit810(e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device812(e.g., a keyboard), a cursor control device814(e.g., a mouse), and a signal generation device816(e.g., a network). In one illustrative example, the video display unit810, the alphanumeric input device812, and the cursor control device814may be combined into a single component or device (e.g., an LCD touch screen). The data storage device818may include a computer-readable medium824on which the instructions826embodying any one or more of the methodologies or functions described herein is stored. The instructions826may also reside, completely or at least partially, within the memory804and/or within the processing device802during execution thereof by the computer system800, the memory804and the processing device802also constituting computer-readable media. The instructions826may further be transmitted or received over a network via the network interface device822. While the computer-readable storage medium824is shown in the illustrative examples to be a single medium, the term “computer-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable storage medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term “computer-readable storage medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. 
Although the operations of the methods herein are shown and described in a particular order, the order of the operations of each method may be altered so that certain operations may be performed in an inverse order or so that certain operations may be performed, at least in part, concurrently with other operations. In certain implementations, instructions or sub-operations of distinct operations may be performed in an intermittent and/or alternating manner. It is to be understood that the above description is intended to be illustrative, and not restrictive. Many other implementations will be apparent to those of skill in the art upon reading and understanding the above description. The scope of the disclosure should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the above description, numerous details are set forth. It will be apparent, however, to one skilled in the art, that the aspects of the present disclosure may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present disclosure. Some portions of the detailed descriptions above are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise, as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “receiving,” “determining,” “selecting,” “storing,” “setting,” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices. The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer.
Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear as set forth in the description. In addition, aspects of the present disclosure are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present disclosure as described herein. Aspects of the present disclosure may be provided as a computer program product, or software, that may include a machine-readable medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to the present disclosure. A machine-readable medium includes any procedure for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium (e.g., read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc.). The words “example” or “exemplary” are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Moreover, use of the term “an embodiment” or “one embodiment” or “an implementation” or “one implementation” throughout is not intended to mean the same embodiment or implementation unless described as such. Furthermore, the terms “first,” “second,” “third,” “fourth,” etc. as used herein are meant as labels to distinguish among different elements and may not necessarily have an ordinal meaning according to their numerical designation. | 68,202 |
11861926 | Corresponding reference characters indicate corresponding parts throughout the several views. Although the drawings represent embodiments of the present disclosure, the drawings are not necessarily to scale, and certain features may be exaggerated in order to better illustrate and explain the present disclosure. The exemplification set out herein illustrates an embodiment of the disclosure, in one form, and such exemplifications are not to be construed as limiting the scope of the disclosure in any manner. For the purposes of promoting an understanding of the principles of the present disclosure, reference below is made to the embodiments illustrated in the drawings, which are described below. The exemplary embodiments disclosed herein are not intended to be exhaustive or to limit the disclosure to the precise form disclosed in the following detailed description. Rather, these exemplary embodiments were chosen and described so that others skilled in the art may utilize their teachings. One of ordinary skill in the art will realize that the embodiments provided can be implemented in hardware, software, firmware, and/or a combination thereof. Programming code according to the embodiments can be implemented in any viable programming language or a combination of a high-level programming language and a lower level programming language. DETAILED DESCRIPTION The present embodiments may relate to, inter alia, computer systems and computer-implemented methods that facilitate a receipt capturing process. In some aspects, a purchase of an item (such as a product or service) at a merchant location or online may trigger or cause a mobile device to ask the purchaser if they would like electronically capture receipt information, along with other information, such as notes or warranty information, and store the electronic receipt and other information at a remotely accessible and searchable database, memory unit, and/or server for future use or reference. In some aspects, a receipt capture tool residing on a customer mobile device may be initiated when a customer completes an in-store or online purchase. The receipt capture tool may prompt the customer to capture an image of a receipt detailing a purchase and an item (e.g., a product or service) purchased. For instance, the photo of a physical receipt may be taken by the mobile device, or an electronic receipt or email detailing the purchasing transmitted from a physical merchant or online merchant server may be stored. Receipt information may be extracted, such as via OCR (optical character recognition), and saved with other information pertinent to the item purchased, including warranty information. If the customer needs to return or repair the item (in the case of a product) purchased at a future date, the receipt and warranty information may be subsequently accessed via their mobile device. The receipt and warranty information may also be stored in a searchable database to facilitate easy retrieval by the customer. While the present embodiments may be used to capture and store hard copy or electronic receipts associated with items or products purchased (e.g., appliances, televisions, computers, mobile devices, electronic devices, etc.), the present embodiments may also be used to capture and store hard copy or electronic receipts associated with services purchased. For instance, a homeowner may pay a contractor or service person to make a repair at their residence. 
In this way, the captured receipt may also function as a home maintenance log, and may also be used to track warranties on work performed, as well as to track the date the work was performed and by whom. In one aspect, the services may relate to automobiles, such as oil changes, maintenance, repairs, new parts, etc. The service-related receipts and other documents captured by the present embodiments may include auto maintenance receipts and bills, as well as auto warranty documents, and auto insurance receipts and any claim documents. In another aspect, the services may relate to homes, such as maintenance, repairs, new roofs and roof installation, new siding and siding installation, yard work, new window installation, kitchen or bathroom upgrade work, new appliances and associated installation, etc. The service-related receipts and other documents captured by the present embodiments may include home maintenance receipts and bills, as well as home warranty documents, and homeowners insurance receipts and any claim documents. Other types of service-related images and documents may also be captured and stored with the present receipt capture embodiments. Exemplary Computer System FIG.1is a block diagram of a computer system100for capturing receipt information of a purchased item at or near the time of the purchase to be stored in a secure database for further access. The receipt information may include, but is not limited to, a purchased item (e.g., the item name and/or description), a purchase date, a purchase amount, seller/retailer information, a merchant chain, a type of merchant (such as online or brick-and-mortar), a retention period, warranty information, and a category in which the receipt is stored in the secure database. To do so, the system100may include a server110(e.g., a financial institution's computer system) and a computing device130associated with a customer that is communicatively coupled to the server110via a network150(e.g., a local area network (LAN), a wide area network (WAN), a personal area network (PAN), the Internet, etc.). The system100may further include one or more servers160,170that are associated with other financial institutions' computer systems such that the computing device130may communicate with different financial institutions. In general, the computing device130may include any existing or future devices capable of detecting, collecting, storing, transmitting, and/or displaying data to the customer. For example, the computing device may be, but is not limited to, a computer, a notebook, a laptop, a mobile device, a smartphone, a tablet, a wearable, smart glasses, or any other suitable computing device that is capable of communicating with the server110. In operation, the computing device130is operated by the customer to capture receipts. For example, the customer may use an application (e.g., a financial institution mobile app, a website application associated with a financial institution) running on the customer's mobile device (e.g., the computing device130) to capture a receipt via a receipt capturing tool. The customer may be prompted by a server associated with a financial institution (e.g.,110) at or near the time of the purchase of an item to capture the receipt of the purchased item via direct or indirect wireless communication or data transmission from the application running on the customer's mobile device.
Alternatively, the customer may choose to manually record a receipt of a purchased item via the application running on the customer's mobile device that is in communication with the server associated with a financial institution. Additionally or alternatively, the customer may choose to manually record an electronic receipt of an item that is purchased online or via the internet from an online merchant, such as via Amazon.com or BestBuy.com. It should be appreciated that the receipt capturing tool may be used to capture receipts not only for purchased items, but also for purchased services (e.g., new installation, replacement, or improvements of a structure and/or an automobile). For example, if a homeowner is paying a contractor or a service person to make a repair at home, the receipt capturing tool may capture the receipt of the purchased service at or near the time of the payment of the service. Alternatively, the homeowner may choose to manually record the receipt (e.g., a hardcopy or electronic receipt) of the purchased service. As such, the application may also function as a home maintenance log and may be used to track the date of the service, the name of the company that performed the service, and/or any warranty information. The computing device130includes a processor132(e.g., a central processing unit (CPU) and/or a graphics processing unit (GPU)), a memory134(e.g., random-access memory (RAM), read-only memory (ROM), and/or flash memory), an input/output (I/O) controller136(e.g., a network transceiver), a memory unit138, a display140, a user interface142(e.g., a display screen, a touchscreen, and/or a keyboard), a speaker/microphone144, and a camera146, all of which may be interconnected via one or more address/data buses. It should be appreciated that although only one processor132is shown, the computing device130may include multiple processors132. Although the I/O controller136is shown as a single block, it should be appreciated that the I/O controller136may include a number of different types of I/O components. The computing device130may further include a database148. As used herein, the term “database” may refer to a single database or other structured data storage, or to a collection of two or more different databases or structured data storage components. In the illustrative embodiment, the database148is part of the computing device130. In some embodiments, the computing device130may access the database148via a network such as network150. The database148may store data that is to be transmitted to the server110. For example, the data may include a photograph of a receipt or a scanned receipt, such as a hardcopy receipt from an in-store purchase. Additionally or alternatively, the data may include or be associated with a digital image of a receipt or an electronic receipt from an in-store purchase or from an online purchase. The computing device130may further include a number of software applications stored in the memory unit138, which may be called a program memory. The various software applications on the computing device130may include specific programs, routines, or scripts for performing processing functions associated with the methods described herein. Additionally or alternatively, the various software applications on the computing device130may include general-purpose software applications for data processing, database management, data analysis, network communication, web server operation, or other functions described herein or typically performed by a server.
The various software applications may be executed on the same computer processor or on different computer processors. Additionally, or alternatively, the software applications may interact with various hardware modules that may be installed within or connected to the computing device130. Such modules may implement part of or all of the various exemplary method functions discussed herein or other related embodiments. The microphone and/or speaker144may be used to communicate with a user of the receipt capturing tool to facilitate capturing receipts. To do so, the microphone may be any electronic device that is capable of capturing sound and converting into an electrical audio output signal. In the illustrative embodiment, the microphone is configured to capture a voice of a user (e.g., a customer) using the receipt capturing tool. For example, as described further below, a customer may capture receipts (e.g., to enter or edit receipt information) using voice dictation via the microphone of the computing device130. The speaker may be any electronic device that is capable of generating sound in response to an electrical audio input signal. In some embodiments, the receipt capturing tool may play audio instructions via the speaker to guide the customer through the receipt capturing process. The camera146may be any electronic device that is capable of capturing an image. In the illustrative embodiment, the camera146may be used to capture an image of a hardcopy receipt from an in-store purchase. It should be appreciated that, in some embodiments, the camera146may be used to capture an image of an electronic receipt that is displayed on a display screen of another computing device. Although only one computing device130is shown inFIG.1, the server110is capable of communicating with multiple computing devices similar to the computing device130, wherein each computing device is associated with a customer and is configured to transmit data (e.g., any data input, receipt information, a photograph of a receipt, a scanned receipt, electronic receipt, online receipt, etc.) to the server110. Referring now to the server110, the server110includes a processor112(e.g., a microprocessor, a microcontroller), a memory114, and an input/output (I/O) controller116(e.g., a network transceiver). The server110may be a single server or a plurality of servers with distributed processing. The server110may receive data from and/or transmit data to the computing device130and may store data in a secure database120. In operation, the server110associated with a financial institution may capture a receipt for a customer, who has one or more accounts (e.g., debit account(s) and/or credit card(s)) with the financial institution, and retain the receipt at a secure database of the financial institution for record keeping. To do so, the server110may be configured to determine if the server110requires permission from the customer to activate a receipt capturing tool of at least one customer's account associated with the financial institution. In other words, the server110may determine whether the customer has an account that has a receipt capturing tool inactivated so that the server110may receive permission from the customer to activate the receipt capturing tool feature. As described above, the receipt capturing tool allows the customer to capture the receipt at or near the time of the purchase. 
Additionally or alternatively, the receipt capturing tool may allow the customer to capture or copy an electronic receipt, such as from an online purchase. If the server110determines that the customer has an account that has a receipt capturing tool inactivated, the server110may prompt the customer whether to activate a receipt capturing tool for that account. It should be appreciated that the server110may prompt the customer an option to activate a receipt capturing tool for one or more accounts. The prompt may be provided to the customer via a text, an email, and/or a phone call. If the customer has a preferred method of communication (e.g., a text, an email, and/or a phone call) set up with the financial institution, the server110may communicate with the customer via the preferred method of communication. Subsequently, the server110may receive a response from the customer whether to activate the receipt capturing tool. In some cases, the customer may select one or more accounts that the customer wants to activate the receipt capturing tool. If the server110determines that the customer indicated that the customer wishes to activate the receipt capturing tool for one or more accounts, the server110may prompt the customer to obtain a preferred communication method for receiving captured receipt alerts. The preferred communication method may include a text, an email, and/or a phone call. The server110may receive the preferred communication method from the customer and may activate the receipt capturing tool for the one or more accounts indicated by the customer. Subsequently, the server110may monitor for a purchase that is made with one or more of customer's accounts that have the receipt capturing tool activated. If, however, the server110determines that the customer does not have any account that has the receipt capturing tool inactivated or does not wish to activate the account that has the receipt capturing tool inactivated, the server110may further determine whether the customer has at least one account associated with the financial institution that has the receipt capturing tool activated. For example, the customer may have an account(s) that the customer already elected to opt in for the receipt capturing tool when the customer opened the account(s). If the server110determines that such an account exists, the server110may monitor for a purchase that is made with that account(s) (i.e., a customer's account(s) that has the receipt capturing tool activated). If the server110determines that the customer purchased an item using an account (e.g., a debit account or credit card) associated with the financial institution, and the server110may further determine whether the account used to purchase the item has the receipt capturing tool activated. If the server110determines that the account used to purchase the item has the receipt capturing tool activated, the server110may transmit a captured receipt alert to the customer via the preferred communication method. The captured receipt alert may include a purchased item, a purchase date, a purchase amount, and seller/retailer information. Additionally, the captured receipt alerts may further include an inquiry to the customer whether to capture and store the receipt of the purchased item. An exemplary screen shot of the customer's mobile device (e.g., the computing device130) of such a captured receipt alert is shown inFIG.5A. 
The server110may determine receipt information to pre-fill data fields to be stored in a secure database associated with the financial institution. An exemplary screen shot of the customer's mobile device (e.g., the computing device130) of a receipt detail screen with pre-filled data fields is shown inFIG.5C. The data fields may include, but not limited to, a purchased item (including the item name and description), a purchase date, a purchase amount, seller/retailer information, a retention period, warranty information, and a category of which the receipt is stored in the secure database. The retention period indicates how long the receipt will be stored in the secure database. In the illustrative embodiment, the retention period is set up as one year as a default but may be edited by the customer. Additionally, all purchases may be categorized as “Purchases” category as a default. However, the customer may create or add one or more categories to identify the item (e.g., electronics, clothing, shoes, cosmetics, etc.). The warranty information may be collected from the receipt and/or may be retrieved from a third party server (e.g., a manufacture's web server, a retailer's web server). It should be appreciated that, in some embodiments, the server110may have artificial intelligence capabilities that perform machine learning in analyzing a receipt and determining receipt information to pre-fill data fields. It should be appreciated that the data fields may be edited by the customer. In some embodiments, the data fields may be editable by the customer via a voice memo. For example, the customer may edit the purchased item information by selecting the corresponding data field and using the voice memo feature of the computing device130to specify that the purchased item is “Sony Flat Screen TV for Family Room.” It should be appreciated that the computing device130and/or the server110may perform a voice recognition to translate the voice memo into text in the respective data field. Subsequently, the server110may store the receipt information as indicated in the data fields in the secure database associated with the financial institution. An exemplary screen shot of the customer's mobile device (e.g., the computing device130) that confirms that the receipt has been stored is shown inFIG.5D. As discussed below, the secure database may be a part of the server110. Alternatively, in some embodiments, the server110may access the secure database via a network such as the network150. This allows the customer to access the secure database to search, retrieve, view, print, text, or email any of the customer's receipts and/or electronic receipts stored in the secure database, which is described further inFIGS.6A-6D. If, however, the server110determines that the customer does not have any account that has the receipt capturing tool activated or the account used to purchase the item does not have the receipt capturing tool activated, the server110may still capture and store a receipt via a manual receipt capturing process when requested by the customer. To do so, the server110may receive a request from the customer that the customer wishes to capture a receipt manually. For example, the customer may use an application (e.g., a financial institution mobile app, a web site application associated with the financial institution) associated with the financial institution to manually record a receipt. Exemplary screen shots of the financial institution mobile app are shown inFIGS.5A-5D and6A-6D. 
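The pre-filled data fields and their defaults (a one-year retention period and a "Purchases" category) can be illustrated with a simple record structure; the class name, types, and values below are assumptions for this sketch rather than part of the described system.

```python
# A hedged sketch of a pre-filled receipt record whose field names mirror the
# data fields on the receipt detail screen, with the defaults mentioned above.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ReceiptRecord:
    purchased_item: str
    purchase_date: date
    purchase_amount: float
    seller: str
    category: str = "Purchases"        # default category
    retention_days: int = 365          # one-year retention period by default
    warranty_info: str = ""            # from the receipt or a third-party server

    def retention_expires(self) -> date:
        return self.purchase_date + timedelta(days=self.retention_days)

record = ReceiptRecord("Sony Flat Screen TV for Family Room",
                       date(2023, 5, 1), 899.99, "Best Buy")
record.category = "Electronics"        # the customer edits a pre-filled field
print(record.retention_expires())      # 2024-04-30
```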
The customer may press a “Capture a Receipt” button612in the application running on the customer's mobile device (e.g., the computing device130) as shown inFIG.6Bto record a receipt. Once the server110determines to manually capture the receipt, the server110may prompt the customer to take a photograph of a receipt using the customer's mobile device (e.g., the computing device130) or upload a scanned receipt to the application on the customer's mobile device (e.g., the computing device130). For example, when the “Capture a Receipt” button612is pressed, the application may ask the customer to take a photograph of the receipt, as shown inFIG.5B. Alternatively, in some embodiments, the server110may receive a photographed receipt or a scanned receipt from the customer via a website associated with the financial institution. Additionally or alternatively, the server110may ask the customer whether to manually capture the receipt information without providing a photographed or scanned receipt. In some embodiments, when the “Capture a Receipt” button612is pressed, the application may ask the customer to take a photograph of a hard copy receipt, as shown inFIG.5B, such as when purchasing an item in a brick-and-mortar store. Additionally or alternatively, when the “Capture a Receipt” button612is pressed, the application may ask the customer to take a photograph of an electronic receipt shown on a mobile device display or other computing device display, such as when purchasing an item online and/or from an online merchant. Additionally or alternatively, when the “Capture a Receipt” button612is pressed, the application may retrieve or receive a digital image of a hard copy receipt or an electronic version of the hard copy receipt from a server associated with a brick-and-mortar store when purchasing an item in the brick-and-mortar store. Additionally or alternatively, when the “Capture a Receipt” button612is pressed, the application may retrieve or receive a digital image of an electronic receipt or an electronic version of the receipt from a server associated with an online merchant when purchasing an item online. Subsequently, the server110may determine the receipt information to fill the data fields. As described above, the data fields may include, but are not limited to, a purchased item (including the item name and description), a purchase date, a purchase amount, seller/retailer information, a retention period, warranty information, and a category in which the receipt is stored in the secure database. To fill the data fields, the server110may perform optical character recognition (OCR) or text recognition on the photograph of the receipt or the scanned receipt. Additionally or alternatively, the server110may capture the data fields using a voice memo received from the customer. It should be appreciated that the computing device130and/or the server110may perform voice recognition to translate the voice memo into text in the respective data field. Subsequently, the server110may store the receipt information as indicated in the data fields in the secure database associated with the financial institution. As discussed above, such a secure database may be a part of the server110. Alternatively, in some embodiments, the server110may access the secure database via a network such as the network150.
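As a rough illustration of OCR-based pre-filling, the sketch below extracts a purchase amount and date from a photographed receipt. It assumes the open-source pytesseract and Pillow packages, which are not named in this disclosure, and uses simple regular expressions as stand-ins for the server's actual extraction logic.

```python
import re
from PIL import Image        # assumption: Pillow is available
import pytesseract           # assumption: pytesseract and a Tesseract install

def extract_receipt_fields(image_path: str) -> dict:
    """OCR the receipt photo and pre-fill a few data fields with regex heuristics."""
    text = pytesseract.image_to_string(Image.open(image_path))
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    total = re.search(r"(?:total|amount due)[:\s]*\$?([\d,]+\.\d{2})", text, re.I)
    when = re.search(r"\b(\d{1,2}/\d{1,2}/\d{2,4})\b", text)
    return {
        "purchase_amount": total.group(1) if total else None,
        "purchase_date": when.group(1) if when else None,
        "seller": lines[0] if lines else None,   # the store name is often the first line
        "raw_text": text,                        # kept so the customer can correct fields
    }

# fields = extract_receipt_fields("receipt_photo.jpg")   # hypothetical image path
```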
This allows the customer to access the secure database to search, retrieve, view, print, text, or email any of the customer's receipts stored in the secure database, which is described further inFIGS.6A-6D. Additionally, in some embodiments, the customer may be able to purchase a warranty for the purchased item through a 3rdparty, such as an insurance company. For example, upon capturing of a receipt of a purchased item, the server110may query the customer whether the customer wishes to purchase a warranty or additional warranty for the purchased item. It should be appreciated that, in some embodiments, the financial institution associated with the server110may be affiliated with a particular insurance company. In such embodiments, the server110may provide different warranty options for the purchased item from the particular insurance company. In other embodiments, the server110may provide options for various insurance companies for the customer to select from. The warranty may be based upon the purchased price, the condition of the item, where it was purchased from, and/or the current warranty information of the purchased item. The processor112as disclosed herein may be any electronic device that is capable of processing data, for example a central processing unit (CPU), a graphics processing unit (GPU), a system on a chip (SoC), or any other suitable type of processor. It should be appreciated that the various operations of example methods described herein (i.e., performed by the server110) may be performed by one or more processors112. The memory114may be a random-access memory (RAM), read-only memory (ROM), a flash memory, or any other suitable type of memory that enables storage of data such as instruction codes that the processor112needs to access in order to implement any method as disclosed herein. A database120, which may be a single database or a collection of two or more databases, is coupled to the server110. In the illustrative embodiment, the database120is part of the server110. In some embodiments, the server110may access the database120via a network such as the network150. The server110may also include various software applications stored in the memory118and executable by the processor112. These software applications may include specific programs, routines, or scripts for performing functions associated with the methods described herein. Additionally, the software applications may include general-purpose software applications for data processing, database management, data analysis, network communication, web server operation, or other functions described herein or typically performed by a server. The network150is any suitable type of computer network that functionally couples at least one computing device130with the server110. The network150may include a proprietary network, a secure public internet, a virtual private network and/or one or more other types of networks, such as dedicated access lines, plain ordinary telephone lines, satellite links, cellular data networks, or combinations thereof. In embodiments where the network150comprises the Internet, data communications may take place over the network150via an Internet communication protocol. It should be appreciated that the receipt capturing tool may be used to record items, especially high-value items, that are protected by an insurance policy (e.g., a personal article insurance, a renters insurance, or a homeowners insurance). 
For example, if a home of an insured is broken into, the insured may need to provide a list of specific items that are stolen or damaged and how much those items were worth to file a claim. Additionally, the insured may need to prove that the ownership of the stolen or damaged items. As such, an insured may use the receipt capturing tool to record a proof of ownership of items and store in the secure database. The items may be purchased, gifted, and/or inherited. For the purchased item, the insured may choose to record the description of the item, the purchased date, the purchased price, the purchased location, the purchased merchant/store, the condition of the item, the category of the item, the warranty information, the insurance information (e.g., insurer, policy number), and/or the ownership information. It should be appreciated that, if the purchased item was automatically captured via the receipt capturing tool at or near the time of purchase, the insured may add data fields to capture additional information, such as the insurance information and/or the ownership information, at the time of capturing the receipt of the purchased item. Alternatively, the insured may modify the existing entry to add additional information by accessing the secure database. For the gifted or inherited item, the insured may choose to record the description of the item, the gifted or inherited date, the value of the gifted or inherited item, the gifted or inherited location, the condition of the gifted or inherited item at the time when the item was gifted or inherited to the insured, the category of the item, the warranty information, the insurance information (e.g., insurer, policy number), and/or the ownership information. Since the gifted or inherited item cannot be automatically captured by the receipt capturing tool, the insured may manually input these data fields associated with the gifted or inherited items to be stored in the secure database. Once the items are recorded and stored in the secure database, the insured may access the secure database to search for a specific item and/or generate a summary report based upon data fields (e.g., date, amount, category, and/or insurance information). As such, if the insured is required to provide a proof of ownership of certain items, the insured may access the secure database to generate a summary report of the certain items with one or more data fields. Exemplary Computer-Implemented Method Referring now toFIGS.2-5, a computer-implemented method200for capturing a receipt for a customer, who has one or more accounts (e.g., debit account(s) and/or credit card(s)) with a financial institution, and retaining the receipt at a secure database of the financial institution for record keeping is shown. In the illustrative embodiment, the method200is performed by a server of a financial institution (e.g.,110). In block202, the server110associated with a financial institution determines if the customer has at least one account associated with the financial institution that has a receipt capturing tool inactivated. If the server110determines that such an account does not exist in block204, the method200skips ahead to block218to determine if the customer has at least one account that has the receipt capturing tool previously activated, which is described further below. If, however, the server110determines that the customer has an account that has a receipt capturing tool inactivated in block204, the method200advances to block206. 
Exemplary Computer-Implemented Method
Referring now toFIGS.2-5, a computer-implemented method200for capturing a receipt for a customer, who has one or more accounts (e.g., debit account(s) and/or credit card(s)) with a financial institution, and retaining the receipt at a secure database of the financial institution for record keeping is shown. In the illustrative embodiment, the method200is performed by a server of a financial institution (e.g.,110). In block202, the server110associated with a financial institution determines if the customer has at least one account associated with the financial institution that has a receipt capturing tool inactivated. If the server110determines that such an account does not exist in block204, the method200skips ahead to block218to determine if the customer has at least one account that has the receipt capturing tool previously activated, which is described further below. If, however, the server110determines that the customer has an account that has a receipt capturing tool inactivated in block204, the method200advances to block206. In block206, the server110prompts the customer whether to activate a receipt capturing tool for that account. In some embodiments, the server110may present the customer with an option to activate a receipt capturing tool for one or more accounts. The prompt may be provided to the customer via a text, an email, and/or a phone call. If the customer has a preferred method of communication (e.g., a text, an email, and/or a phone call) set up with the financial institution, the server110communicates with the customer via the preferred method of communication. Subsequently, in block208, the server110receives a response from the customer whether to activate the receipt capturing tool. In some embodiments, the customer may select one or more accounts for which the customer wants to activate the receipt capturing tool. If the server110determines in block210that the customer indicated that the customer would like to activate the receipt capturing tool for one or more accounts, the method200advances to block212. In block212, the server110prompts the customer to provide a preferred communication method for receiving captured receipt alerts. The preferred communication method may include a text, an email, and/or a phone call. The server110receives the preferred communication method from the customer in block214and activates the receipt capturing tool for the one or more accounts indicated by the customer in block216. Subsequently, the method200proceeds to block222ofFIG.3. It should be appreciated that, when the receipt capturing tool is activated, a captured receipt alert is transmitted to the customer when the server110determines that a customer's account (e.g., debit account or credit card) is used to purchase an item. The captured receipt alert may include, for example, a purchased item, a purchase date, a purchase amount, and seller/retailer information. Additionally, the captured receipt alert further includes an inquiry to the customer whether to capture and store the receipt. An exemplary screen shot of the customer's mobile device (e.g., the computing device130) of such a captured receipt alert is shown inFIG.5A. The receipt capturing tool allows the customer to capture the receipt at or near the time of the purchase, whether the purchase is at a physical store or via an online merchant. Referring back to block210, if the server110determines that the customer indicated that the customer does not wish to activate the receipt capturing tool, the method200skips ahead to block218. In block218, the server110determines if there is at least one account of the customer that has the receipt capturing tool previously activated. For example, the customer may have an account that the customer already elected to opt in for the receipt capturing tool when the customer opened the account. If the server110determines that such an account exists, in block220, the method200advances to block222shown inFIG.3. If, however, the server110determines that the customer does not have an account that has the receipt capturing tool activated and that the customer does not wish to activate the receipt capturing tool for one or more of the customer's accounts, the method200skips ahead to block238shown inFIG.4for an option for a manual receipt capturing process, which is discussed further below.
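A compact sketch of the activation branch just described (blocks 202 through 216) is given below. It mirrors the decision points but is only an illustration; the Account fields and the prompt_customer/ask_contact_method callbacks are assumed names rather than elements of the disclosure.

```python
from dataclasses import dataclass
from typing import Iterable, List, Optional

@dataclass
class Account:
    account_id: str
    capture_tool_active: bool = False
    preferred_contact: Optional[str] = None  # "text", "email", or "phone"

def activate_receipt_capture(accounts: Iterable[Account],
                             prompt_customer,        # hypothetical UI callback
                             ask_contact_method) -> List[Account]:
    """Blocks 202-216 (sketch): find accounts with the tool inactivated, prompt
    the customer, and activate the tool for the accounts the customer selects."""
    inactive = [a for a in accounts if not a.capture_tool_active]
    if not inactive:
        return []                            # block 204 "no": fall through to block 218

    selected = prompt_customer(inactive)     # blocks 206-210: customer picks accounts (or none)
    if not selected:
        return []

    contact = ask_contact_method()           # blocks 212-214: text, email, or phone call
    for account in selected:                 # block 216: activate for the chosen accounts
        account.capture_tool_active = True
        account.preferred_contact = contact
    return selected
```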
In block222ofFIG.3, the server110determines whether the customer purchased an item using an account (e.g., a debit account or credit card) associated with the financial institution. If the server110determines that a purchase has not been made in block224, the method200loops back to block222to continue waiting for the customer to make a purchase using an account associated with the financial institution. If, however, the server110determines that the purchase has been made using the account associated with the financial institution in block224, the method200advances to block226. In block226, the server110determines whether the account used to purchase the item has the receipt capturing tool activated. If the server110determines that the account used to purchase the item does not have the receipt capturing tool activated in block228, the method200skips ahead to block238shown inFIG.4for an option for a manual receipt capturing process. If, however, the server110determines that the account used to purchase the item has the receipt capturing tool activated, the method200advances to block230. In block230, the server110transmits a captured receipt alert to the customer via the preferred communication method with the purchase information, including, but not limited to, a purchased item, a purchase date, a purchase amount, and seller information. The notification further includes an inquiry to the customer whether to capture the receipt of the purchased item. An exemplary screen shot of the customer's mobile device (e.g., the computing device130) of such a captured receipt alert is shown inFIG.5A. In response to the transmission of the captured receipt alert, if the server110determines that the customer does not wish to automatically capture the receipt of the purchased item using the receipt capturing tool, the method200skips ahead to block238shown inFIG.4for an option for a manual receipt capturing process, which is discussed further below. If, however, the server110determines that the customer wishes to capture the receipt of the purchased item, the method200advances to block234. In block234, the server110analyzes the receipt to pre-fill data fields to be stored. An exemplary screen shot of the customer's mobile device (e.g., the computing device130) of a receipt detail screen with pre-filled data fields is shown inFIG.5C. The data fields may include, but are not limited to, a purchased item (including the item name and description), a purchase date, a purchase amount, seller/retailer information, a retention period, warranty information, and a category of which the receipt is stored in the secure database. The retention period indicates how long the receipt will be stored in the secure database. In the illustrative embodiment, the retention period is set to one year by default but may be edited by the customer. Additionally, all purchases may be categorized under a "Purchases" category by default. However, the customer may create or add one or more categories to identify the item (e.g., electronics, clothing, shoes, cosmetics, etc.). The warranty information may be collected from the receipt and/or may be retrieved from a third party server (e.g., a manufacturer's web server, a retailer's web server). It should be appreciated that, in the illustrative embodiment, the data fields may be edited by the customer. In some embodiments, the data fields may be editable by the customer via a voice memo feature.
For example, the customer may edit the purchased item information by selecting the corresponding data field and using the voice memo feature of the computing device130to specify that the purchased item is “Sony Flat Screen TV for Family Room.” To do so, the computing device130and/or the server110may perform a voice recognition to translate the voice memo into text in the respective data field. It should also be appreciated that some of the receipt information may be determined prior to block230to be included in the capture receipt alert. Subsequently, in block236, the server110stores the receipt information as indicated (or edited) in the data fields in the secure database associated with the financial institution. An exemplary screen shot of the customer's mobile device (e.g., the computing device130) that confirms that the receipt has been stored in a secure database (e.g., Secure Vault) is shown inFIG.5D. As discussed above, such a secure database may be a part of the server110. Alternatively, in some embodiments, the server110may access the secure database via a network such as the network150. This allows the customer to access the secure database to search, retrieve, view, print, text, or email any of the customer's receipts stored in the secure database, which is described further inFIGS.6A-6D. Referring now to block238ofFIG.4, the server110may determine whether to manually capture a receipt. The server110may receive a request from the customer that the customer wishes to capture a receipt manually. For example, the customer may use an application (e.g., a financial institution mobile app, a website application associated with the financial institution) associated with the financial institution to manually record a receipt. Exemplary screen shots of the financial institution mobile app are shown inFIGS.5A-5D and6A-6D. The customer may press a “Capture a Receipt” button612in the application running on the customer's mobile device (e.g., the computing device130) as shown inFIG.6Bto capture a receipt. Once the server110determines to manually capture the receipt, the method200proceeds to block240to prompt the customer to provide the receipt. The server110may prompt the customer to take a photograph of a receipt or upload a scanned receipt, as indicated in blocks242and242. For example, when the “Capture a Receipt” button612is pressed, the application may ask the customer to take a photograph of the receipt, as shown inFIG.5B. In some embodiments, the server110may receive a photographed receipt or a scanned receipt from the customer via a website associated with the financial institution. As described further below, the customer may provide a photographed receipt, a scanned receipt of a hard copy receipt, an electronic receipt, an electronic receipt email, a screenshot or photograph of the electronic receipt, or a screenshot or photograph of the electronic receipt email to manually capture the receipt of a purchased item. Additionally or alternatively, the server110may prompt the customer whether to manually capture the receipt information without providing a photographed or scanned receipt, as indicated in block246. As noted herein, in some embodiments, the item may be purchased at a physical store, the customer may receive a hard copy receipt, and take a photograph of the hard copy receipt via their mobile device. 
Additionally or alternatively, a server or other computing device at the physical store may transmit or send a photo of the receipt or an electronic receipt to the customer's mobile device. Other embodiments may relate to online purchases. As such, in those embodiments, the item may be purchased online from an online merchant, and the customer may receive an electronic receipt or an email detailing the purchase from the online merchant's server or computing device. The customer may then save the electronic receipt or email or a screenshot of the electronic receipt or email to their mobile device and/or transmit the electronic receipt or email or a screenshot of the electronic receipt or email to their mobile device to a remote server for storage and analysis. Additionally or alternatively, the customer may complete an online purchase via a first computing device, and receive an email or electronic receipt via the first computing device, and take a photograph or screenshot of the email or electronic receipt via their mobile device (or other second computing device) for storage and analysis. Once the receipt of the purchased item is received, the server110fills the data fields as indicated in block248. As described above, the data fields may include, but not limited to, a purchased item (including the item name and description), a purchase date, a purchase amount, seller/retailer information, a retention period, warranty information, a category of which the receipt is stored in the secure database, and a payment type (e.g., credit card, debit account, cash, check, etc.). The retention period indicates how long the receipt will be stored in the secure database. In the illustrative embodiment, the retention period is set up as one year as a default but may be editable by the customer. Additionally, all purchases may be categorized as “Purchases” category as a default. However, the customer may create or add one or more one or more categories to identify the item (e.g., electronics, clothing, shoes, cosmetics, etc.). To fill the data fields, the server110may analyze a photographed receipt, electronic receipt, receipt email, or a scanned receipt received from the customer. For example, the server110may perform an optical character recognition (OCR) or text recognition of the photograph of the hard copy receipt, the screenshot of the electronic receipt or receipt email, or the scanned receipt, as indicated in block250. Additionally or alternatively, the server110may capture and/or edit the data fields using a voice memo received from the customer, as indicated in block252. It should be appreciated that the computing device130and/or the server110may perform a voice recognition to translate the voice memo into text in the respective data field. Subsequently, in block254, the server110stores the receipt information as indicated in the data fields in the secure database associated with the financial institution. As discussed above, such a secure database may be a part of the server110. Alternatively, in some embodiments, the server110may access the secure database via a network such as the network150. This allows the customer to access the secure database to search, retrieve, view, print, text, or email any of the customer's receipts stored in the secure database, which is described further inFIGS.6A-6D. It should be appreciated that, in some embodiments, blocks202-220may be optional steps. 
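The pre-fill step of blocks 248-252 can be sketched as follows. The OCR output is represented here by a plain ocr_text string, and the regular expressions and default values are illustrative assumptions only; a production parser would need to handle far more receipt layouts, and the voice memo path of block 252 would simply overwrite one of these fields with transcribed text.

```python
import re
from datetime import date

def prefill_fields(ocr_text: str) -> dict:
    """Blocks 248-250 (sketch): derive data fields from OCR'd receipt text.
    Fields that cannot be matched are left for the customer to edit."""
    fields = {
        "purchased_item": None,
        "purchase_date": None,
        "purchase_amount": None,
        "seller": None,
        "retention_period_days": 365,   # one-year default, editable by the customer
        "category": "Purchases",        # default category, editable by the customer
    }
    # Illustrative patterns only; actual receipts vary widely by merchant.
    amount = re.search(r"TOTAL\s*\$?([0-9]+\.[0-9]{2})", ocr_text, re.I)
    if amount:
        fields["purchase_amount"] = float(amount.group(1))
    found_date = re.search(r"([0-9]{2}/[0-9]{2}/[0-9]{4})", ocr_text)
    if found_date:
        # Assumes a US-style MM/DD/YYYY date printed on the receipt.
        month, day, year = map(int, found_date.group(1).split("/"))
        fields["purchase_date"] = date(year, month, day)
    # The first printed line is often the merchant name.
    fields["seller"] = ocr_text.strip().splitlines()[0] if ocr_text.strip() else None
    return fields
```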
It should also be understood that although the method200is described as being performed by the server110, in some examples, such a method may be performed by an application running on the computing device130that is in communication with the server110.
Exemplary Process
Referring now toFIGS.5A-5D, exemplary screen shots of a financial institution mobile app running on the customer's mobile device (e.g., the computing device130) for the receipt capturing process are shown. When a customer purchases an item with a customer's account (e.g., debit card or credit card) that has the receipt capturing tool activated, the customer receives a captured receipt alert, as illustrated inFIG.5A. The captured receipt alert may include, for example, a purchased item, a purchase date, a purchase amount, and seller/retailer information. Additionally, the captured receipt alert further includes an inquiry to the customer whether to capture and store the receipt. The receipt capturing tool allows the customer to capture the receipt at or near the time of the purchase. As noted above, the receipt to be captured may be a hard copy or physical receipt, such as from an in-store purchase. The receipt to be captured may alternatively be an electronic receipt or email received from a physical store server or computing device (for an in-store purchase), or an electronic receipt or email received from an online merchant (for an online purchase). The physical receipt, electronic receipt, and/or email detailing a purchase may be captured and/or input into the receipt capturing tool, and the receipt capturing tool may automatically extract pertinent data as discussed herein, such as item, price, date, location, merchant, warranty information, and other data detailed herein. Returning to the Figures, a receipt detail screen with data fields may be pre-filled upon automatically receiving the purchase information via the receipt capturing tool. As shown inFIG.5C, the data fields may include a purchased item (including the item name and description), a purchase date, a purchase amount, and seller/retailer information. Although it is not shown inFIG.5C, the data fields may further include a retention period, warranty information, and a category of which the receipt is stored in the secure database. The retention period indicates how long the receipt will be stored in the secure database. In the illustrative embodiment, the retention period is set to one year by default but may be edited by the customer. Additionally, all purchases may be categorized under a "Purchases" category by default. However, the customer may create or add one or more categories to identify the item (e.g., electronics, clothing, shoes, cosmetics, etc.). The warranty information may be collected from the receipt and/or may be retrieved from a third party server (e.g., a manufacturer's web server, a retailer's web server). Those data fields may be editable by the customer. For example, the data fields may be edited by the customer via a user interface (e.g., a touch screen, a keyboard, and/or a microphone). As illustrated inFIG.5C, the customer may edit the purchased item information by selecting the corresponding data field to specify that the purchased item is "Sony Flat Screen TV for Family Room." Subsequently, the receipt information as indicated (or edited) in the data fields may be stored in the secure database associated with the financial institution (also referred to as a secure vault in this application).
The confirmation may be presented to the customer as shown inFIG.5D. As discussed above, such a secure database may be a part of the server (e.g.,110) associated with the financial institution. Alternatively, in some embodiments, the server (e.g.,110) may access the secure database via a network such as the network150. This allows the customer to access the secure database to search, retrieve, view, print, text, or email any of the customer's receipts stored in the secure database, which is described further inFIGS.6A-6D. In some embodiments, the customer's receipt may be searchable via the data fields and/or categories. Alternatively, the customer may choose to manually record receipt information of a purchased item via the financial institution mobile app running on of the customer's mobile device. For example, the customer may choose to manually record the receipt information in response to receiving, via the receipt capturing tool, a prompt from a server associated with a financial institution whether to capture a receipt of an item purchased with the customer's account (e.g., debit account or credit card). Alternatively, if the customer purchases an item using cash, the receipt capturing tool is not able to automatically detect such a purchase. In such cases, the customer may also choose to manually record receipt information of a purchased item. As illustrated inFIG.5B, the financial institution mobile app may prompt the customer to take a photograph of the hard copy receipt or alternatively save an electronic copy of the receipt (or an email detailing the purchase) that is transmitted to a computing device of the customer from the merchant or online merchant. A receipt detail screen with data fields may be pre-filled upon receiving the receipt from the customer. For example, an optical character recognition (OCR) or text recognition of the photograph of the receipt, the scanned receipt, an electronic receipt, or an email detailing the purchase may be performed to pre-fill the receipt information. As shown inFIG.5C, the data fields may include a purchased item (including the item name and description), a purchase date, a purchase amount, and seller/retailer information. Although it is not shown inFIG.5C, the data fields may further include a retention period, warranty information, and a category of which the receipt is stored in the secure database. As described above, those data fields may be editable by the customer. For example, the data fields may be edited by the customer via a user interface (e.g., a touch screen and/or a keyboard) and/or a voice recognition. As illustrated inFIG.5C, the customer may edit the purchased item information by selecting the corresponding data field to specify that the purchased item is “Sony Flat Screen TV for Family Room.” Subsequently, the receipt information as indicated (or edited) in the data fields may be stored in the secure database associated with the financial institution (also referred to as a secure vault in this application). The confirmation may be presented to the customer as shown inFIG.5D. As discussed above, such a secure database may be a part of the server (e.g.,110) associated with the financial institution. Alternatively, in some embodiments, the server (e.g.,110) may access the secure database via a network such as the network150. 
This allows the customer to access the secure database to search, retrieve, view, print, text, or email any of the customer's receipts stored in the secure database, which is described further inFIGS.6A-6D.
Exemplary Process
Referring now toFIGS.6A-6D, exemplary screen shots of a financial institution mobile app running on the customer's mobile device (e.g., the computing device130) for the receipt retrieval process are shown. To access the secure database, the customer may tap a "Receipts" button602shown inFIG.6A. Under the Receipt Vault menu, the customer has an option to manually capture and/or store a receipt by selecting a "Capture a Receipt" button612. As discussed above, the receipt may be a hard copy receipt, an electronic receipt, or an email detailing a purchase. In some cases, the customer may not be able to produce a receipt. For example, the customer may have lost the receipt of a purchased item or an item was gifted or inherited to the customer. In such cases, the customer may utilize the "Capture a Receipt" button612to manually fill the data fields to capture the information of such items. The data fields for the purchased item may include the description of the item, the purchased date, the purchased price, the purchased location, the purchased merchant/store, the condition of the item, the category of the item, the warranty information, the insurance information (e.g., insurer, policy number), and/or the ownership information. The data fields for the gifted or inherited item may include the description of the item, the gifted or inherited date, the value of the gifted or inherited item, the gifted or inherited location, the condition of the gifted or inherited item at the time when the item was gifted or inherited to the insured, the category of the item, the warranty information, the insurance information (e.g., insurer, policy number), and/or the ownership information. Additionally, under the Receipt Vault menu, the customer also has an option to access the secure database (i.e., Receipt Vault in this example) by selecting the "Retrieve a Receipt" button614. It should be appreciated that, although it is not shown inFIGS.6A-6D, receipts stored in the secure database (e.g., Receipt Vault in this example) may be presented to the customer in a chronological order when the "Retrieve a Receipt" button614is selected. When the "Retrieve a Receipt" button614is selected, the customer has an option to search for a particular item by a description, a seller, a purchase amount, and/or a purchase date, as illustrated inFIG.6C. For example, if the customer searches with the term "TV" in the description field, the matching receipt is presented to the customer, as illustrated inFIG.6D. The customer then has an option to copy the receipt, print the receipt, or share the receipt via a text, an email, or any other suitable means.
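Retrieval from the Receipt Vault, as in FIGS. 6C and 6D, amounts to filtering the stored entries on one or more data fields and listing the matches chronologically. The sketch below assumes a hypothetical StoredReceipt record and retrieve_receipts helper; it is illustrative only and is not the claimed implementation.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class StoredReceipt:
    description: str
    seller: str
    amount: float
    purchase_date: date

def retrieve_receipts(vault: List[StoredReceipt],
                      description: Optional[str] = None,
                      seller: Optional[str] = None,
                      amount: Optional[float] = None,
                      purchase_date: Optional[date] = None) -> List[StoredReceipt]:
    """Search the vault on any combination of data fields and present the
    matches in chronological order, as in the 'Retrieve a Receipt' screens."""
    matches = vault
    if description:
        matches = [r for r in matches if description.lower() in r.description.lower()]
    if seller:
        matches = [r for r in matches if seller.lower() in r.seller.lower()]
    if amount is not None:
        matches = [r for r in matches if abs(r.amount - amount) < 0.005]
    if purchase_date is not None:
        matches = [r for r in matches if r.purchase_date == purchase_date]
    return sorted(matches, key=lambda r: r.purchase_date)

# Searching with the term "TV" in the description field, as in FIG. 6D:
# retrieve_receipts(vault, description="TV")
```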
Exemplary Embodiments
In one aspect, a computer-implemented method for conducting a receipt capturing process may be provided. The method may be implemented via one or more local or remote processors, servers, sensors, and/or transceivers. The method may include, via the one or more local or remote processors, servers, sensors, and/or transceivers: (1) determining whether a customer purchased an item (e.g., a product or service) using an account that has a receipt capturing tool activated; (2) transmitting, in response to determining that the customer purchased the item using the account that has the receipt capturing tool activated, a captured receipt alert to the customer inquiring whether to capture a receipt of the purchased item; (3) analyzing, in response to receiving an indication to capture the receipt of the purchased item, the receipt to determine receipt information; and/or (4) storing the receipt information in a secure database. The receipt information may include information detailing a purchased item, a purchase date, a purchase amount, seller information, a retention period, warranty information, and a category of which the receipt is stored in the secure database. The receipt information may be stored in a database or memory unit that may be remotely searchable, such as via a customer mobile device. The method may include additional, less, or alternate functionality, including that discussed elsewhere herein. For instance, the computer-implemented method may include (i) determining whether to activate a receipt capturing tool of a customer's account; (ii) receiving, in response to determining to activate the receipt capturing tool, a preferred communication method for receiving captured receipt alerts; and/or (iii) activating the receipt capturing tool of the account. The captured receipt alert may include at least one of a purchased item, a purchase date, a purchase amount, seller information, and an inquiry whether to capture and store the receipt. The receipt information may include at least one of a purchased item, a purchase date, a purchase amount, seller information, a retention period, warranty information, and a category of which the receipt is stored in the secure database. The computer-implemented method may include, via the one or more local or remote processors, servers, sensors, and/or transceivers: linking the receipt information in the stored database to a warranty covering the item or to an electronic version of the warranty covering the item to facilitate remote retrieval of the receipt information and the electronic version of the warranty at a later date, such as remote retrieval via a customer mobile device. The receipt information may be remotely searchable via a customer mobile device, such as searchable by category or data field, as discussed above. For instance, the receipt information may include at least one of a purchased item, a purchase date, a purchase amount, seller information, a retention period, warranty information, and a category of which the receipt is stored in the secure database, and the receipt information may be remotely searchable, such as via a customer mobile or other computing device, by type of purchased item, purchase date, purchase date range, purchase amount, purchase amount range, seller information, store, store chain, merchant chain, merchant, customer notes, and/or payment type (cash, credit, debit account, check, etc.). In some embodiments, the receipt may be captured by the receipt capture tool by the customer taking a photo of a hard copy or physical receipt via their mobile device.
Additionally or alternatively, the receipt may be captured by the receipt capture tool by saving an electronic receipt sent or transmitted from a merchant or online merchant server or computing device to the customer mobile device or other computing device. Additionally or alternatively, the receipt may be captured by the receipt capture tool by saving an email detailing the purchasing sent or transmitted from a merchant or online merchant server or computing device to the customer mobile device or other computing device. In another aspect, a computer system for capturing a receipt associated with a purchase of an item may be provided. The computer system may include: (a) a network; (b) a computing device; and (c) a server communicatively coupled to the computing device via the network. The server may be configured to: (1) determine whether a customer purchased an item (e.g., a product or service) using an account that has a receipt capturing tool activated; (2) transmit, in response to a determination that the customer purchased the item using the account that has the receipt capturing tool activated, a captured receipt alert to the customer inquiring whether to capture a receipt of the purchased item; (3) analyze, in response to a receipt of an indication to capture the receipt of the purchased item, the receipt to determine receipt information; and/or (4) store the receipt information in a secure database. The receipt information may include information detailing a purchased item, a purchase date, a purchase amount, seller information, a retention period, warranty information, and a category of which the receipt is stored in the secure database. The receipt information may be stored in a database or memory unit in communication with the server and that may be remotely searchable, such as via a customer mobile device. The system may include additional, less, or alternate functionality, including that discussed elsewhere herein. For instance, the computer system and/or server may be further configured to: determine whether to activate a receipt capturing tool of a customer's account; receive, in response to a determination to activate the receipt capturing tool, a preferred communication method for receiving captured receipt alerts; and/or activate the receipt capturing tool of the account. The captured receipt alert may include at least one of a purchased item, a purchase date, a purchase amount, seller information, and an inquiry whether to capture and store the receipt. The receipt information may include at least one of a purchased item, a purchase date, a purchase amount, seller information, a retention period, warranty information, and a category of which the receipt is stored in the secure database. In another aspect, a computer-implemented method for conducting a receipt capturing process may be implemented via one or more local or remote processors, servers, sensors, and/or transceivers. 
The method may include, via the one or more local or remote processors, servers, sensors, and/or transceivers: (i) determining whether a customer purchased a service using a customer's account that has a receipt capturing tool activated; (ii) transmitting, in response to determining that the customer purchased the service using the account that has the receipt capturing tool activated, a captured receipt alert to the customer inquiring whether to capture a receipt of the purchased service; (iii) automatically capturing, in response to receiving an indication to capture the receipt of the purchased service, the receipt; (iv) analyzing the receipt to determine receipt information; and/or (v) storing the receipt information in a secure database. The method may include additional, less, or alternate actions, including those discussed elsewhere herein. For instance, the receipt information may include at least one of a purchased service, a purchase date, a purchase amount, seller information, a retention period, warranty information, and a category of which the receipt is stored in the secure database. The method may also include linking the receipt information in the stored database to a warranty covering the purchased service or to an electronic version of the warranty covering the purchased service to facilitate remote retrieval of the receipt information and the electronic version of the warranty at a later data, such as remote retrieval via a customer mobile device. The method may also include providing an option to purchase a warranty for the purchased service based upon the receipt information. In another aspect, a computer system for capturing a receipt associated with a purchase of a service may be provided. The system may include a network; a computing device; and a server communicatively coupled to the computing device via the network, the server may be configured to: (i) determine whether a customer purchased a service using a customer's account that has a receipt capturing tool activated; (ii) transmit, in response to a determination that the customer purchased the service using the account that has the receipt capturing tool activated, a captured receipt alert to the customer inquiring whether to capture a receipt of the purchased service; (iii) automatically capture, in response to a receipt of an indication to capture the receipt of the purchased service, the receipt; (iv) analyze the receipt to determine receipt information; and/or (v) store the receipt information in a secure database. The computer system may include additional, less, or alternate functionality, including that discussed elsewhere herein. Exemplary Features The present embodiments may reduce worry and increase confidence among customers by providing a simple way to capture, retain, and access receipts (and potentially other important documents) at or near the point of purchase. Here is how it may work: A customer purchases an item. If purchased using a debit account or credit card, the customer may immediately receive a prompt on their smart phone (text or email, based on personal preference) asking “Would you like to capture and store the receipt for your Seller Name purchase? 
If the customer answers "Yes", the mobile app launches and the customer is prompted to snap a photo and validate a few important data elements (most of which will be pre-filled):
Item Purchased: (Customer inputs item name/description here)
Purchase Date: Pre-filled from transaction record (editable)
Purchase Amount: Pre-filled from transaction record (editable)
Seller: Pre-filled from transaction record (editable)
Retention period: 1 yr (default); + and − buttons will allow the customer to select up to 7 years
Vault: Purchases (default); other categories can be added
After a receipt (or other document) is captured, the customer will have access to retrieve, view, print, text or email the receipt at any time. As long as the document is still in the Secure Vault (retention period not expired), the document would be retrievable via a chronological listing with search capability (search on date, description, seller, amount, etc.). This will be convenient for the customer whose 60″ television stops working. They can retrieve the receipt instantly, see that the warranty is still in effect, and provide the necessary proof of purchase to the seller. Customers would also be able to capture receipts later by tapping an icon adjacent to the purchase on their transaction ledger. For purchases made by check or cash, the mobile app would allow a customer to capture and store a receipt as a function built into the app's menu. Other capabilities of the present embodiments include OCR/Text recognition; voice memo; photo of items; and real-time and after-the-fact capture. For OCR/Text Recognition functionality, the system may interpret text on the receipt to prepopulate data fields. For example, a 'rubber band' capability would allow the user to select a portion of text on the receipt. If possible, the image would be converted to text to populate the field. For Voice Memo functionality, an option may be added to allow voice capture of a data field. For example, a voice memo icon would be provided adjacent to the input field. Selecting the icon would prompt the user to speak a brief description of the item and activate the microphone. The recording may be stored and voice recognition may attempt to convert the recorded words to text. For Photo of Item functionality, in lieu of a text description, a photo of the item could be stored. For example, a photo icon would be provided adjacent to the input field. Selecting the icon would prompt the user to take a photo of the item with the logo and/or model number/serial number visible. The photo may be stored and OCR may attempt to convert the model and serial number to text. For Real-time and After-the-fact Capture functionality, the receipt may be captured at the point of sale or anytime thereafter. Other uses of the present embodiments may include capture warranty information functionality, budgeting/record keeping functionality, and catalog valuable items for insurance/home inventory functionality. The capture warranty information functionality may include or be associated with capturing an image of a warranty card or document; inputting the warranty term, such as in days, months, or years; displaying an estimated warranty expiration date (calculated from days, months or years from the provided purchase date); and/or inputting limitations (original purchaser only, transferable, etc.).
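Among the warranty-capture features above, the estimated warranty expiration date is simply the provided purchase date advanced by the entered term. A minimal sketch of that calculation follows, assuming a hypothetical warranty_expiration helper and deliberately simplified month arithmetic.

```python
from datetime import date, timedelta

def warranty_expiration(purchase_date: date,
                        days: int = 0, months: int = 0, years: int = 0) -> date:
    """Estimate the warranty expiration date from a term entered in days,
    months, and/or years, measured from the provided purchase date."""
    # Month/year arithmetic kept deliberately simple for illustration:
    # roll the month forward and clamp the day to 28 to avoid invalid dates.
    total_months = purchase_date.month - 1 + months + 12 * years
    year = purchase_date.year + total_months // 12
    month = total_months % 12 + 1
    day = min(purchase_date.day, 28)
    return date(year, month, day) + timedelta(days=days)

# Example: a television purchased 2024-03-15 with a 2-year warranty
# warranty_expiration(date(2024, 3, 15), years=2) -> date(2026, 3, 15)
```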
The Budgeting/Record Keeping functionality may include capturing all physical and/or electronic receipts; offering receipt categorization (default and user-created); and/or Summary Reporting based upon data fields (Date, Amount, Category, etc.). The Catalog Valuable Items for Insurance/Home Inventory functionality may include and/or be related to capturing photos and descriptions of high-value items covered under insurance policy; inputting owner (default to logged-in user); inputting purchase date and price; inputting condition of item; offering item categorization (default and user-created); and/or Summary Reporting based upon data fields (Date, Amount, Category, etc.). Also, Catalog When an Item is Replaced or Repaired or Improvements were made and there is a warranty period functionality may be provided with the present embodiments. As repair or replacement examples, HVAC and other equipment may be installed having warranties; a new roof may be installed with a 15-, 20- or 25-year warranty; and/or automobile tires with a mileage warranty may be purchased. Additional Considerations While the preferred embodiments of the invention have been described, it should be understood that the invention is not so limited and modifications may be made without departing from the invention. The scope of the invention is defined by the appended claims, and all devices that come within the meaning of the claims, either literally or by equivalence, are intended to be embraced therein. It is therefore intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that it is the following claims, including all equivalents, that are intended to define the spirit and scope of this invention. Although the following text sets forth a detailed description of numerous different embodiments, it should be understood that the legal scope of the description is defined by the words of the claims set forth at the end of this patent and equivalents. The detailed description is to be construed as exemplary only and does not describe every possible embodiment since describing every possible embodiment would be impractical. Numerous alternative embodiments may be implemented, using either current technology or technology developed after the filing date of this patent, which would still fall within the scope of the claims. It should also be understood that, unless a term is expressly defined in this patent using the sentence “As used herein, the term” is hereby defined to mean . . . ” or a similar sentence, there is no intent to limit the meaning of that term, either expressly or by implication, beyond its plain or ordinary meaning, and such term should not be interpreted to be limited in scope based upon any statement made in any section of this patent (other than the language of the claims). To the extent that any term recited in the claims at the end of this disclosure is referred to in this disclosure in a manner consistent with a single meaning, that is done for sake of clarity only so as to not confuse the reader, and it is not intended that such claim term be limited, by implication or otherwise, to that single meaning. Finally, unless a claim element is defined by reciting the word “means” and a function without the recital of any structure, it is not intended that the scope of any claim element be interpreted based upon the application of 35 U.S.C. § 112(f). 
Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and functionality presented as separate components in example configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein. Additionally, certain embodiments are described herein as including logic or a number of routines, subroutines, applications, or instructions. These may constitute either software (code embodied on a non-transitory, tangible machine-readable medium) or hardware. In hardware, the routines, etc., are tangible units capable of performing certain operations and may be configured or arranged in a certain manner. In example embodiments, one or more computer systems (e.g., a standalone, client or server computer system) or one or more hardware modules of a computer system (e.g., a processor or a group of processors) may be configured by software (e.g., an application or application portion) as a hardware module that operates to perform certain operations as described herein. In various embodiments, a hardware module may be implemented mechanically or electronically. For example, a hardware module may comprise dedicated circuitry or logic that is permanently configured (e.g., as a special-purpose processor, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC) to perform certain operations. A hardware module may also comprise programmable logic or circuitry (e.g., as encompassed within a general-purpose processor or other programmable processor) that is temporarily configured by software to perform certain operations. It will be appreciated that the decision to implement a hardware module mechanically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations. Accordingly, the term “hardware” should be understood to encompass a tangible entity, be that an entity that is physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Considering embodiments in which hardware is temporarily configured (e.g., programmed), the hardware need not be configured or instantiated at any one instance in time. Software may accordingly configure a processor, for example, to constitute a particular hardware module at one instance of time and to constitute a different hardware module at a different instance of time. Hardware elements can provide information to, and receive information from, other hardware elements. Accordingly, the described hardware may be regarded as being communicatively coupled. The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. 
Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions. The modules referred to herein may, in some example embodiments, comprise processor-implemented modules. Similarly, the methods or routines described herein may be at least partially processor-implemented. The performance of certain of the operations may be distributed among the one or more processors, not only residing within a single machine, but deployed across a number of machines. In some example embodiments, the processor or processors may be located in a single location (e.g., within a home environment, an office environment or as a server farm), while in other embodiments the processors may be distributed across a number of locations. Unless specifically stated otherwise, discussions herein using words such as “processing,” “computing,” “calculating,” “determining,” “presenting,” “displaying,” or the like may refer to actions or processes of a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or a combination thereof), registers, or other machine components that receive, store, transmit, or display information. As used herein any reference to “one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment. Some embodiments may be described using the expression “coupled” and “connected” along with their derivatives. For example, some embodiments may be described using the term “coupled” to indicate that two or more elements are in direct physical or electrical contact. The term “coupled,” however, may also mean that two or more elements are not in direct contact with each other, but yet still cooperate or interact with each other. The embodiments are not limited in this context. As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. In addition, use of the “a” or “an” are employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the description. In this description, and the claims that follow, the singular also includes the plural unless it is obvious that it is meant otherwise. This detailed description is to be construed as exemplary only and does not describe every possible embodiment, as describing every possible embodiment would be impractical, if not impossible. One could implement numerous alternate embodiments, using either current technology or technology developed after the filing date of this application. 
Upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs for a system and a method for capturing and retaining receipts through the disclosed principles herein. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the disclosed embodiments are not limited to the precise construction and components disclosed herein. Various modifications, changes and variations, which will be apparent to those skilled in the art, may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the spirit and scope defined in the appended claims. The particular features, structures, or characteristics of any specific embodiment may be combined in any suitable manner and in any suitable combination with one or more other embodiments, including the use of selected features without corresponding use of other features. In addition, many modifications may be made to adapt a particular application, situation or material to the essential scope and spirit of the present invention. It is to be understood that other variations and modifications of the embodiments of the present invention described and illustrated herein are possible in light of the teachings herein and are to be considered part of the spirit and scope of the present invention.